2024-12-20 13:38 |
Development of Sampling Modules for the Upgrade II of the LHCb ECAL
Reference: Poster-2024-1230
Created: 2021. - 1 p
Creator(s): Martinazzoli, Loris
The LHCb experiment is a single-arm forward particle detector located at the Large Hadron Collider at CERN. After Upgrade II, it will run at a luminosity of up to $1.5 \cdot 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ to collect 300 fb$^{-1}$ of data. A major revision of the LHCb Electromagnetic Calorimeter is required due to the increased particle densities and radiation doses. One option for the central part is a sampling spaghetti calorimeter (SPACAL) comprising radiation-hard crystal scintillators and a tungsten absorber, whereas a SPACAL with plastic scintillators and a lead absorber is a candidate for the outer region. A prototype was assembled with fibres of cerium-doped YAG and GAGG. This contribution presents the development of the SPACAL prototypes, including scintillator and photodetector studies, the test-beam results, and Monte Carlo simulations identifying the material requirements in a high-rate environment.
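For context, the energy resolution of sampling calorimeters such as these SPACAL prototypes is conventionally parameterised as below; the stochastic, noise and constant terms $a$, $b$ and $c$ are generic symbols, not results quoted by this contribution.
\begin{equation*}
\frac{\sigma_E}{E} = \frac{a}{\sqrt{E}} \oplus \frac{b}{E} \oplus c ,
\end{equation*}
where $E$ is in GeV and $\oplus$ denotes addition in quadrature.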
© CERN Geneva
2024-12-18 15:12 |
SiPM development for LHCb SciFi Upgrade II
Reference: Poster-2024-1229
Created: 2024. - 1 p
Creator(s): Ronchetti, Federico
The Scintillating Fibre (SciFi) tracker of LHCb will be operated during LHC Run 3 and Run 4 in the current LHCb experiment design. Due to the high radiation, detector parts such as the scintillating fibres and the Silicon Photo-Multipliers (SiPMs) are ageing. The reduced light yield and the increased noise will decrease the hit-detection efficiency. Therefore, the SciFi detector will undergo a major upgrade in the framework of the LHCb Upgrade II, in LS4, to cope with the expected higher delivered luminosity and the consequent increase in radiation damage. We present here the work on the SiPMs in view of the new detector under development. Microlens-enhanced SiPMs will improve the photo-detection efficiency. Cryogenic cooling with LN2 will significantly reduce the noise and therefore ensure a high hit-detection efficiency at increased radiation levels. Finally, the monitoring of the radiation damage of the current detector is presented, which is important to assess the lifetime of the current detector and to gain information for the final design of the future one.
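As background to the cryogenic-cooling argument, a frequently quoted rule of thumb is that the SiPM dark count rate roughly halves for every 8-10 K of cooling near room temperature; the expression below is illustrative only, and the simple scaling does not extend all the way down to LN2 temperature.
\begin{equation*}
\mathrm{DCR}(T) \approx \mathrm{DCR}(T_0) \cdot 2^{-(T_0 - T)/\Delta T}, \qquad \Delta T \approx 8\text{--}10\ \mathrm{K}.
\end{equation*}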
© CERN Geneva
2024-12-18 14:35 |
Flavour Tagging at the LHCb experiment
Reference: Poster-2024-1228
Created: 2020. - 1 p
Creator(s): Fuhring, Quentin
Decay-time-dependent measurements of $B$ mesons require knowledge of the initial $B$ flavour. At LHCb, an ensemble of various Flavour Tagging algorithms is used to determine the initial $B$ flavour. The higher luminosity in LHC Run 3 will be challenging for the LHCb Flavour Tagging and is expected to affect the Flavour Tagging performance. As an alternative to the existing ensemble of Flavour Tagging algorithms, an inclusive Flavour Tagging algorithm is under development. This inclusive tagger uses a recurrent neural network to estimate the initial $B$ flavour by evaluating all tracks of an event.
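A minimal sketch of the idea in Python, not the LHCb implementation: a recurrent network reads the features of all tracks in an event and outputs a probability for the initial $B$ flavour. The class name, feature set and dimensions are illustrative assumptions.

# Sketch of an inclusive flavour tagger: a GRU summarises all tracks of an event
# and a linear head converts the summary into a flavour probability.
import torch
import torch.nn as nn

class InclusiveTagger(nn.Module):
    def __init__(self, n_track_features=8, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_track_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, tracks):
        # tracks: (events, tracks per event, features), e.g. pT, eta, phi, charge, ...
        _, h_last = self.rnn(tracks)                 # final hidden state summarises the event
        return torch.sigmoid(self.head(h_last[-1]))  # probability that the initial flavour is B (vs anti-B)

tagger = InclusiveTagger()
prob_b = tagger(torch.randn(2, 30, 8))               # two toy events with 30 tracks each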
© CERN Geneva
2024-12-18 12:08 |
Real-time alignment and calibration in Run 3 at LHCb
Reference: Poster-2024-1227
Created: 2024. - 1 p
Creator(s): Breer, Nils
The real-time alignment and calibration at LHCb is a fully automatic procedure executed within each fill of the LHC. This allows identical alignment and calibration constants to be used in the online and offline reconstruction, ensuring consistency between triggered and offline selected events. The alignment estimates the position of the detector elements and is essential to achieve the best data quality. The procedure is implemented for the full LHCb tracking system, and the event reconstruction is run using multithreaded processes. The operational and technical aspects of this procedure during data taking are discussed, with a focus on the performance in the 2024 data-taking period, where the first global tracker alignment was obtained.
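To illustrate the principle only (this is not the LHCb alignment software): the position of a detector element is estimated by minimising a $\chi^2$ of track-hit residuals with respect to an alignment offset. For a single translation with Gaussian errors, the minimum is simply the mean residual; all numbers below are toy values.

# Toy track-based alignment: recover a translation offset from track-hit residuals.
import numpy as np

rng = np.random.default_rng(0)
true_offset = 0.12                                                 # mm, hypothetical misalignment
predicted = rng.uniform(-10.0, 10.0, 1000)                         # track extrapolations to the element
measured = predicted + true_offset + rng.normal(0.0, 0.05, 1000)   # hits with 50 um resolution

# chi2(delta) = sum((measured - predicted - delta)^2 / sigma^2) is minimised,
# for a single translation, by the mean of the track-hit residuals.
estimated_offset = np.mean(measured - predicted)
print(f"estimated offset: {estimated_offset:.3f} mm")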
© CERN Geneva
2024-12-18 09:12 |
A Neural-Network-defined Gaussian Mixture Model for particle identification applied to the LHCb fixed-target programme
Reference: Poster-2024-1226
Created: 2021. - 1 p
Creator(s): Mariani, Saverio
An innovative approach to particle identification (PID) analyses employing machine learning techniques, and its application to a physics case from the fixed-target programme at the LHCb experiment at CERN, are presented. In general, a PID classifier is built by combining the response of specialised subdetectors, exploiting different techniques to guarantee redundancy and a wide kinematic coverage. At analysis level, the efficiency of PID selections thus changes as a function of several experimental observables, notably the particle momentum, the collision geometry and the experimental conditions. To precisely model the distribution of the PID classifier while overcoming the unavoidable imperfections of the simulation, large samples of calibration channels reconstructed and selected in data are needed. In the presented approach, conceived for all applications where the collection of sufficiently large calibration samples is not possible, the PID classifier is modelled on another high-statistics training sample using a Gaussian Mixture Model whose parameters are determined by multilayer perceptrons. These are fed with the relevant experimental features, and the non-trivial dependencies of the PID classifier are learned and then predicted for the lower-statistics sample. Thanks to its speed and easy configuration, the presented approach, demonstrated on a proof-of-principle physics case to perform as well as or better than the detailed simulation, is expected to be applicable to a large variety of use cases dealing with experimental observables that depend on a sizeable number of experimental features.
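A minimal sketch of the core idea in Python, under stated assumptions (the class name, feature set, number of components and network size are illustrative, not those of the analysis): a multilayer perceptron predicts the weights, means and widths of a Gaussian mixture conditioned on the experimental features, and is trained by minimising the negative log-likelihood of the PID-classifier values in the high-statistics sample.

# Sketch of a Gaussian Mixture Model whose parameters are produced by an MLP
# conditioned on experimental features (e.g. momentum, pseudorapidity, occupancy).
import torch
import torch.nn as nn

class MLPGaussianMixture(nn.Module):
    def __init__(self, n_features=3, n_components=4, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.weight_logits = nn.Linear(hidden, n_components)   # mixture weights (before softmax)
        self.means = nn.Linear(hidden, n_components)            # component means
        self.log_widths = nn.Linear(hidden, n_components)       # component widths (log sigma)

    def nll(self, features, pid_value):
        # features: (n, n_features); pid_value: (n,) PID-classifier value per candidate
        h = self.body(features)
        log_w = torch.log_softmax(self.weight_logits(h), dim=-1)
        mu, sigma = self.means(h), self.log_widths(h).exp()
        comp = torch.distributions.Normal(mu, sigma)
        log_p = torch.logsumexp(log_w + comp.log_prob(pid_value.unsqueeze(-1)), dim=-1)
        return -log_p.mean()                                     # minimised on the training sample

model = MLPGaussianMixture()
loss = model.nll(torch.randn(100, 3), torch.randn(100))          # toy batch of 100 candidates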
© CERN Geneva
2024-12-17 17:02 |
The LHCb Upstream Tracker: Operations and Performance in Run 3
Reference: Poster-2024-1225
Created: 2024. - 1 p
Creator(s): Cesare, Sara
The Upstream Tracker is a critical component of the tracking system in the LHCb Upgrade I detector. It has been commissioned and is now operational, collecting data during Run 3. The tracker enhances the reconstruction of charged particle tracks and long-lived particles, such as $\Lambda$ and $K^0_S$, thereby improving the performance of the software trigger. This presentation will cover experiences with detector operations and data acquisition, along with the performance results based on Run 3 data.
© CERN Geneva
2024-12-17 15:44 |
Measurement of the $W$ boson mass at LHCb
Reference: Poster-2024-1224
Created: 2021. - 1 p
Creator(s): Hunter, Ross John
Constraints on new physics in the electroweak sector are limited by the precision of direct measurements of the $W$ boson mass ($m_W$). A new measurement is hereby reported, using proton-proton collision data recorded by the LHCb experiment in 2016 at $\sqrt{s}=13\ \text {TeV}$, corresponding to roughly 1.7 $\textrm{fb}^{-1}$ of integrated luminosity. From a simultaneous fit of the muon $p_{\textrm{T}}$ distribution from $W \to \mu\nu$ decays and the $\phi^{\ast}$ distribution from $Z \to \mu\mu$ decays, $m_W$ is measured to be \begin{equation*} m_W = 80354 \pm 23_{\textrm{stat}} \pm 10_{\textrm{exp}} \pm 17_{\textrm{theory}} \pm 9_{\textrm{PDF}} \, \text {MeV} ,\end{equation*} where the uncertainties are due to statistical, experimental systematic, theoretical and parton distribution function sources respectively. This is an average of results based on three recent global parton distribution function sets, and is compatible with previous measurements as well as the prediction from the global electroweak fit. This measurement is a pathfinder for a full Run-2 (2016-2018) measurement from LHCb, which is expected to be competitive with current world-leading measurements, and to make a substantial contribution to an LHC-wide average due to the complementary acceptance of LHCb with respect to ATLAS and CMS.
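For reference, adding the four quoted uncertainty components in quadrature, i.e. assuming they are uncorrelated, gives the total uncertainty on $m_W$:
\begin{equation*}
\sigma_{m_W} = \sqrt{23^2 + 10^2 + 17^2 + 9^2}\ \text{MeV} \approx 32\ \text{MeV}.
\end{equation*}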
© CERN Geneva
2024-12-17 15:17 |
Time-dependent $CP$ violation in $B^0_{(s)} \to h^+h^-$ decays
Reference: Poster-2024-1223
Created: 2018. - 1 p
Creator(s): Fazzini, Davide
Charmless charged two-body b-hadron decays to final states containing kaons and pions represent an important test-bed for the Standard Model. The time-dependent $CP$ asymmetries of $B^0\to \pi^+\pi^-$ and $B_s^0\to K^+K^-$ decays are measured using a data sample of $pp$ collisions collected in Run~1, corresponding to an integrated luminosity of 3 fb$^{-1}$. The same data sample is used to measure the time-integrated $CP$ asymmetries of $B^0\to K^+\pi^-$ and $B_s^0\to K^- \pi^+$.
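In one common convention (signs differ between publications), the decay-time-dependent $CP$ asymmetry fitted in such analyses takes the form
\begin{equation*}
A_{CP}(t) = \frac{-C_f \cos(\Delta m\, t) + S_f \sin(\Delta m\, t)}{\cosh\!\left(\tfrac{\Delta\Gamma\, t}{2}\right) + A^{\Delta\Gamma}_f \sinh\!\left(\tfrac{\Delta\Gamma\, t}{2}\right)},
\end{equation*}
where $\Delta m$ and $\Delta\Gamma$ are the mass and width differences of the $B^0_{(s)}$ mass eigenstates and $C_f$, $S_f$ and $A^{\Delta\Gamma}_f$ are the $CP$-violation parameters for the final state $f$.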
© CERN Geneva
2024-12-17 15:01 |
Quantum Machine Learning at LHCb
Reference: Poster-2024-1222
Created: 2021. - 1 p
Creator(s): Nicotra, Davide
Quantum Machine Learning (QML) provides new opportunities for data analysis in high-energy physics. This study investigates the use of QML techniques for b-jet charge tagging at LHCb, focusing on a Variational Quantum Classifier algorithm. By utilising quantum entanglement and correlations among jet particles, the quantum approach aims to improve tagging performance compared to traditional methods. A dataset of b-jets at a centre-of-mass energy of 13 TeV was used. Quantum models were compared against a Deep Neural Network (DNN) and a conventional muon-based technique (muon tag). The results show that quantum models perform competitively on reduced datasets, demonstrating their potential for computationally constrained tasks. This work represents an important step in combining quantum computing with machine learning for particle physics, opening pathways for future studies on forward-central b-jet asymmetry and related applications.
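A minimal sketch of a Variational Quantum Classifier of the kind described, written in Python with the PennyLane library; the circuit layout, feature encoding, qubit count and input values are illustrative assumptions, not the model used in the study.

# Toy variational quantum classifier: angle-encoded features, trainable entangling
# layers, and a single-qubit expectation value as the classifier output.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, features):
    qml.AngleEmbedding(features, wires=range(n_qubits))           # encode jet-particle features as rotation angles
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))  # trainable, entangling variational layers
    return qml.expval(qml.PauliZ(0))                              # readout observable

weights = np.random.uniform(size=(2, n_qubits, 3))                # two variational layers (toy initialisation)
features = np.array([0.1, 0.5, -0.3, 0.8])                        # illustrative inputs, not real jet features
print(circuit(weights, features))                                 # value in [-1, 1]; its sign gives the b / anti-b tag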
© CERN Geneva
2024-12-17 14:46 |
Luminosity measurement at LHCb
Reference: Poster-2024-1221
Created: 2021. - 1 p
Creator(s): Van Dijk, Maarten
The LHCb detector, designed to measure the decays of heavy hadrons, is a single-arm forward spectrometer. Its efficiency can be degraded by collisions with high occupancy: therefore, a technique known as "luminosity levelling" has been used since the start of LHC Run 1, allowing the instantaneous luminosity to be controlled and stabilised with a precision of 5%. During LHC Runs 1 and 2, this technique employed data from the hardware-based trigger level to determine the instantaneous luminosity. The corresponding counters are calibrated in dedicated data-taking runs a few times per year. The combination of van der Meer scans and of beam profiles obtained from beam-gas interactions, unique to LHCb, allowed LHCb to obtain in Run 1 the most precise luminosity measurement ever achieved at a bunched hadron collider. During LHC Run 3, the upgraded LHCb detector will see a fivefold increase in luminosity. Dedicated luminosity detectors have been designed and are being commissioned for use in Run 3 and Run 4. This talk will review the methods used in Run 1 and introduce the new approach being developed for the coming LHC runs.
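For context, in a van der Meer scan the instantaneous luminosity of a colliding bunch pair is related to the measured effective overlap widths by the standard expression (assuming factorisable transverse beam profiles):
\begin{equation*}
\mathcal{L} = \frac{N_1 N_2 f_{\mathrm{rev}}}{2\pi\, \Sigma_x \Sigma_y},
\end{equation*}
where $N_1$ and $N_2$ are the bunch populations, $f_{\mathrm{rev}}$ is the LHC revolution frequency, and $\Sigma_x$, $\Sigma_y$ are the effective beam-overlap widths extracted from the scan.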
© CERN Geneva