Data-driven modeling of beam loss in the LHC
In the Large Hadron Collider, the beam losses are continuously measured for machine protection. By design, most of the particle losses occur in the collimation system, where particles with high oscillation amplitudes or large momentum errors are scraped from the beams. The level of particle losses is typically optimized manually by changing multiple control parameters, including, for example, the currents in the focusing and defocusing magnets along the collider. It is generally challenging to model and predict losses from the control parameters because of various (non-linear) effects in the system, such as electron clouds and resonance effects, and multiple sources of uncertainty. At the same time, understanding the influence of control parameters on the losses is extremely important for improving the operation, performance, and future design of accelerators. Existing results have shown that regression models of losses as a function of control parameters, trained on fills carried out throughout one year, generalize poorly to the data of another year. To circumvent this, we propose an autoregressive modeling approach that takes into account not only the observed control parameters but also previous loss values. We use an equivalent Kalman Filter (KF) formulation to efficiently estimate models with different lags.
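The autoregressive formulation can be sketched as a linear model whose weights are the state of a Kalman filter: the observation at each step regresses the current loss on the previous loss values and the current control parameters, and the filter recursively updates the weight estimate. This is a minimal illustration of the idea, not the paper's actual implementation; the random-walk state model, the noise levels `q` and `r`, and the single-output setup are all assumptions made for the sketch.

```python
import numpy as np

def kalman_ar_fit(losses, controls, lag=2, q=1e-5, r=1.0):
    """Recursively estimate autoregressive weights with exogenous
    control inputs via a Kalman filter.

    State: regression weights w (assumed to follow a random walk
    with process noise q). Observation at time t:
        y_t = x_t @ w + noise,  x_t = [y_{t-1}, ..., y_{t-lag}, u_t, 1]
    where u_t are the control parameters at time t.
    Returns the final weight estimate and one-step-ahead predictions.
    """
    n_ctrl = controls.shape[1]
    dim = lag + n_ctrl + 1
    w = np.zeros(dim)            # current weight estimate
    P = np.eye(dim) * 1e3        # weight covariance (diffuse prior)
    preds = []
    for t in range(lag, len(losses)):
        # regressor: most recent lag losses, controls, intercept
        x = np.concatenate([losses[t - lag:t][::-1], controls[t], [1.0]])
        P = P + q * np.eye(dim)  # predict: random-walk drift of weights
        y_hat = x @ w            # one-step-ahead loss prediction
        preds.append(y_hat)
        S = x @ P @ x + r        # innovation variance
        K = P @ x / S            # Kalman gain
        w = w + K * (losses[t] - y_hat)   # update weights
        P = P - np.outer(K, x @ P)        # update covariance
    return w, np.array(preds)
```

With a small process noise `q`, this reduces to recursive least squares, so different lag orders can be compared cheaply by rerunning the same filter with a different `lag`.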