New LZ results offer insight to dark matter scientists

Centre members Robert James of The University of Melbourne and Theresa Fruth of The University of Sydney are part of an experiment that has set the tightest limits yet on the properties of dark matter.

The LUX-ZEPLIN (LZ) experiment is being undertaken by a collaboration of 250 scientists, who announced their results this week.

Below, Robert explains the results and his role in the research. More details are available in Theresa’s article in The Conversation.

Can you explain the LZ research and what the results mean?

This is the largest exposure (the product of detector mass and runtime) ever taken with a liquid xenon time projection chamber (TPC), and it allows us to probe weakly interacting massive particle (WIMP) dark matter with greater sensitivity than ever before. In this case, we have a result approximately five times more sensitive than our 2022 result, which was the world's best at the time.
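To make the exposure figure concrete, here is a minimal sketch of the mass-times-runtime calculation. The numbers are purely illustrative placeholders, not LZ's published values:

```python
# Exposure = detector (fiducial) mass x livetime.
# Illustrative placeholder numbers, not LZ's published values.
fiducial_mass_tonnes = 5.5  # hypothetical liquid-xenon fiducial mass
livetime_years = 0.8        # hypothetical science-run livetime

exposure_tonne_years = fiducial_mass_tonnes * livetime_years
print(f"Exposure: {exposure_tonne_years:.2f} tonne-years")
```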

We see no evidence for WIMPs with an interaction strength between our last limit and our new limit. While that may not sound very exciting, it rules out a lot more so-called parameter space for WIMP dark matter, better informing what sorts of detectors we might want to build in the future. It also brings us much closer to our goal of fully ruling out WIMP parameter space down to the neutrino floor, where WIMPs and neutrinos become almost indistinguishable in these instruments.

It's a strong indication that we can successfully deploy these detectors to achieve higher and higher exposures, and that we should continue to do so. There's a lot more than just WIMP science we can get out of them: future searches will probe astrophysical neutrinos, neutrinoless double beta decay channels, alternative dark matter scenarios, and more.

What was your involvement in the research?

I led the statistical inference for the result. This is the final stage of the analysis, where we quantify how consistent our data are with our background model, with and without a WIMP signal present, in order to determine the parameter space we can exclude with our data.
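As a rough illustration of this kind of inference, and not the collaboration's actual analysis (which uses a full multi-dimensional likelihood), here is a toy single-bin Poisson counting example in Python. It scans a hypothetical signal expectation and excludes values whose likelihood ratio against the best fit exceeds the standard asymptotic threshold for a one-sided 90% confidence-level limit:

```python
import numpy as np
from scipy.stats import poisson, norm

# Toy single-bin counting experiment (illustrative numbers only).
n_obs = 4    # hypothetical observed events
b_exp = 5.0  # hypothetical expected background events

def log_likelihood(s):
    """Log-likelihood of a signal expectation s on top of the background."""
    return poisson.logpmf(n_obs, b_exp + s)

# Best-fit (non-negative) signal, found with a simple grid scan.
s_grid = np.linspace(0.0, 20.0, 2001)
logL = log_likelihood(s_grid)
s_hat = s_grid[np.argmax(logL)]

# Likelihood-ratio test statistic; asymptotic one-sided 90% CL threshold.
q = 2.0 * (logL.max() - logL)
threshold = norm.ppf(0.90) ** 2  # ~1.64
excluded = s_grid[(s_grid > s_hat) & (q > threshold)]
print(f"Best-fit signal: {s_hat:.2f} events")
print(f"Signal expectations above {excluded.min():.2f} events excluded at 90% CL")
```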

Preparing for this step of the analysis involved deploying a new software tool that I developed during my PhD. I used this software to implement our 'likelihood': the mathematical representation of our model, containing inputs from simulation and from background characterisation work.
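To give a flavour of what implementing a likelihood can look like in code, the sketch below is a simplified stand-in, not the actual tool or the LZ model. It builds a toy extended unbinned likelihood from one signal and one background component, whose shapes would in practice come from simulation and background characterisation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import expon, norm

# Toy component densities in one observable (e.g. reconstructed energy).
# In a real analysis these shapes come from simulation and calibration.
background_pdf = expon(scale=10.0).pdf      # hypothetical falling background
signal_pdf = norm(loc=8.0, scale=1.5).pdf   # hypothetical signal peak

rng = np.random.default_rng(0)
data = expon(scale=10.0).rvs(size=50, random_state=rng)  # background-like toy data

def neg_log_likelihood(params):
    """Extended unbinned negative log-likelihood in (n_sig, n_bkg)."""
    n_sig, n_bkg = params
    density = n_sig * signal_pdf(data) + n_bkg * background_pdf(data)
    return (n_sig + n_bkg) - np.sum(np.log(density))

fit = minimize(neg_log_likelihood, x0=[1.0, 40.0],
               bounds=[(0.0, None), (1e-3, None)])
print(f"Best fit: n_sig = {fit.x[0]:.2f}, n_bkg = {fit.x[1]:.2f}")
```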

In the case of this analysis, we had a number of novel features in this likelihood. One of these is a radon tagging approach, which allows us to flag periods in our data where we expect decays from our main background (radon) to lie, and to incorporate that information without excluding those periods. This reduces the impact of the radon background.
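A cartoon of the idea, under assumptions of my own rather than the collaboration's implementation: split the livetime into radon-tagged and untagged periods, give each its own Poisson expectation, and fit them jointly so that the tagged data still contribute rather than being discarded:

```python
from scipy.stats import poisson

# Hypothetical split of the data into radon-tagged and untagged periods.
live_tagged, live_untagged = 0.2, 0.8  # fractions of total livetime
n_tagged, n_untagged = 30, 12          # toy observed counts per period

def log_likelihood(radon_mu, other_mu, tag_eff=0.9):
    """Joint Poisson log-likelihood over tagged and untagged periods.

    tag_eff is an assumed fraction of radon decays landing in tagged
    periods; other backgrounds are taken as uniform in time.
    """
    mu_t = tag_eff * radon_mu + live_tagged * other_mu
    mu_u = (1.0 - tag_eff) * radon_mu + live_untagged * other_mu
    return poisson.logpmf(n_tagged, mu_t) + poisson.logpmf(n_untagged, mu_u)

print(log_likelihood(radon_mu=28.0, other_mu=10.0))
```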

Another is improved modelling of one of our internal backgrounds, so-called xenon-124 double electron capture decays. This improved modelling allowed us to incorporate previously unaccounted-for subtleties of this background component into our model, something that was only possible with an exposure this large.

We also included information in our likelihood from our veto detectors, better constraining our neutron background, and from our previous science run, allowing us to perform a combined analysis with our previous result.
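Schematically, and only schematically, a combined analysis multiplies the per-run likelihoods while sharing the physics parameter. The toy sketch below uses two counting terms with a common signal strength mu, plus a Gaussian term standing in for an auxiliary (veto-like) constraint on a shared neutron-background nuisance parameter; all numbers are invented for illustration:

```python
from scipy.stats import norm, poisson

# Toy inputs for two science runs (invented numbers).
runs = [
    {"n_obs": 4, "b_other": 4.0, "b_neutron": 1.0, "sig_eff": 1.0},
    {"n_obs": 9, "b_other": 9.0, "b_neutron": 2.0, "sig_eff": 3.0},
]

def log_likelihood(mu, theta):
    """Joint log-likelihood: runs share the signal strength mu, while theta
    scales the neutron background, constrained by veto-like auxiliary data."""
    logL = norm.logpdf(theta, loc=1.0, scale=0.3)  # auxiliary constraint
    for r in runs:
        expected = r["b_other"] + theta * r["b_neutron"] + mu * r["sig_eff"]
        logL += poisson.logpmf(r["n_obs"], expected)
    return logL

print(log_likelihood(mu=0.0, theta=1.0))
```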