by Eoin O’Colgain
The field of cosmology appears to be in flux. The standard model, ΛCDM, which is based on the cosmological constant Λ and cold dark matter (CDM), is subject to increasingly strenuous stress tests as data quality improves. Concretely, discrepancies have been found between data sets on the assumption that the ΛCDM model is correct. Early and late Universe observables appear to lead to different results. This points either to observational errors in the data, called systematics, or to physics missing from the ΛCDM model. What makes cosmology so fascinating is that any data set could be giving one a bum steer. In principle, one may simply be drinking tea and interpreting the tea leaves if one relies too much on any given observation. Ultimately, deciphering the problem, or even establishing that there is a problem, is tricky.
The ΛCDM model is special. It represents a minimal collection of assumptions that fits the Cosmic Microwave Background (CMB), the relic radiation from the early Universe. The assumptions are packaged into cosmological parameters, which cosmologists fit not only to CMB data, but also to a host of other independent data sets. Obviously, these cosmological parameters are fitting constants in a model since we fit them to data. The working assumption in cosmology is that these fitting constants are bona fide constants. In other words, if one determines a given cosmological parameter using data at different epochs, one expects to get the same result.
Nevertheless, this no longer appears to be the case. Data sets at different epochs, when the ΛCDM model is fitted to them, are returning results that appear to differ. At this juncture, there are two actions a cosmologist may take. The first response is to assume that the ΛCDM model has broken down and look for a replacement model. The second is to look at the assumptions in the observations and try to find an observational systematic that could explain the disagreement. Both responses to a potential crisis are evident in the literature.
The problem with the first response is that cosmology offers very few good clues or leads even when missing physics is on the table. Put differently, data quality can be poor, so there is an infinite number of models with new physics that one could construct to explain the data. Indeed, simply adding additional parameters to the ΛCDM model in an ad hoc manner is guaranteed to inflate errors enough that all observations become consistent. The problem with the second is that searching for systematics is an endless pursuit. All observations in astrophysics and cosmology rest upon assumptions, any one of which could be wrong. Thus, the only way one is confident of any inference in astrophysics or cosmology is if one sees the same feature in multiple independent data sets. To put this in context, in 2011 the Nobel Prize in Physics was awarded for the discovery of late-time accelerated expansion, aka dark energy. Dark energy is supported by multiple observations, so it is a robust result. In the same vein, the anomalies we see between data sets assuming the ΛCDM model are now supported by independent data sets, which makes the discrepancies compelling and worth studying.
This leaves us with only one avenue to make progress on the problem. We need a definition of model breakdown. This sounds simple, but traditionally cosmology has embraced a Bayesian framework, whereby the favoured model is the model that fits the data the best. Since models with additional fitting parameters can trivially fit data better, a penalty is assigned to a model for each additional parameter. In this framework, the ΛCDM model has a distinct advantage in the sense that it typically has fewer parameters. So, any replacement for the ΛCDM model needs to be as physically well motivated as the current model yet provide a better fit to all reputable data sets. Note that in this Bayesian framework there is no concept of model breakdown. Model A and model B can both be physically flawed, but the model with the fewest parameters that fits the data the best is preferred.
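This parameter penalty can be made concrete with an information criterion, for example the Bayesian Information Criterion (BIC), which penalises each extra parameter by a factor that grows with the size of the data set. A minimal sketch, with purely illustrative chi-squared numbers rather than real fits:

```python
import numpy as np

def bic(chi2_min, n_params, n_data):
    # Bayesian Information Criterion: the best-fit chi-squared plus a
    # penalty of ln(N) for each free parameter in the model.
    return chi2_min + n_params * np.log(n_data)

# Hypothetical numbers: an extended model improves the fit slightly,
# but pays a penalty for its two extra parameters.
n_data = 1000
bic_lcdm = bic(chi2_min=1050.0, n_params=6, n_data=n_data)
bic_ext  = bic(chi2_min=1046.0, n_params=8, n_data=n_data)
print(bic_lcdm, bic_ext)  # the model with the lower BIC is preferred
```

With these illustrative numbers, the four-unit improvement in fit does not cover the penalty of the two extra parameters, so the six-parameter model wins, which is precisely the structural advantage ΛCDM enjoys.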
Here we can transplant an idea from elsewhere in science and bring it into the cosmology literature. The basic gist is that any dynamical model worth its salt, that is, any model one confronts with time-sensitive data, must always return the same values of the fitting parameters. Bluntly put, any model that returns different values of the fitting parameters at different times is toast. What this means is that the model no longer makes valid predictions, so it is no longer of wide scientific interest. Cosmological data comes from astronomy, and astronomers do not measure time, but they record redshift, shifts in the wavelength of light due to the expansion of the Universe. Redshift is a proxy for time in cosmology. Thus, the research programme is straightforward: one can test the ΛCDM model by fitting it to data in different redshift ranges and then compare the results. It should be stressed that the anomalies we now see in cosmology are discrepancies between early and late Universe observables, so if systematics can be ruled out, we are clearly looking at a signature of model breakdown. As explained, eliminating all systematics in any observable is a Herculean task.
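The binned test can be sketched in a few lines: generate mock supernova-like distance moduli from a fiducial flat ΛCDM model, then fit the Hubble constant separately in low and high redshift bins. Everything here (the fiducial H0 of 70, the fixed matter density of 0.3, the 0.1 mag scatter, the 0.3 bin boundary) is an illustrative assumption, not the actual analysis:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

C = 299792.458  # speed of light in km/s

def lum_dist(z, H0, Om=0.3):
    # Flat LCDM luminosity distance in Mpc
    integrand = lambda zp: 1.0 / np.sqrt(Om * (1 + zp)**3 + 1 - Om)
    dc, _ = quad(integrand, 0, z)
    return (1 + z) * C / H0 * dc

def fit_H0(zs, mus, sig):
    # Minimise chi-squared over H0 with the matter density held fixed
    def chi2(H0):
        model = np.array([5 * np.log10(lum_dist(z, H0)) + 25 for z in zs])
        return np.sum(((mus - model) / sig)**2)
    return minimize_scalar(chi2, bounds=(50, 100), method="bounded").x

# Mock SN-like distance moduli generated from a fiducial H0 = 70
rng = np.random.default_rng(1)
zs = np.sort(rng.uniform(0.01, 1.0, 200))
mus = np.array([5 * np.log10(lum_dist(z, 70.0)) + 25 for z in zs])
mus += rng.normal(0, 0.1, zs.size)

# Split at z = 0.3 and fit each bin independently
low, high = zs < 0.3, zs >= 0.3
print(fit_H0(zs[low], mus[low], 0.1), fit_H0(zs[high], mus[high], 0.1))
```

For mock data drawn from the model itself, both bins recover the input H0 within the scatter; a genuine trend in real data across bins is what would signal model breakdown.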
Another problem with the status of the discrepancies is that one is comparing different observables at different redshifts. These different observables can in principle have different systematics or observational errors, which we have overlooked. In contrast, if one can compare the same observable at different redshifts, and yet find a difference in cosmological parameters, this is a much sharper test. It says either that one of the assumptions common to all the data is wrong or that the ΛCDM model is incorrect. Moreover, if one sees the same feature across independent observables with different underlying assumptions, then one has a conflict between a set of observables and the ΛCDM model. Returning to the science case for dark energy, where results resting on multiple observations demand credence, this argument can tip the debate in favour of ΛCDM model breakdown. The argument is simple and effective.
Surprisingly, these simple tests have not been done. Inspired by an observation of a descending Hubble constant (H0) in a sample of H0LiCOW/TDCOSMO strong lenses, my collaborators and I have started to perform these tests. Note that a descending H0, while apparently contradictory, simply says that the model is flawed (if the trend is not due to systematics). An early port of call was to study Type Ia supernovae, arguably the best distance indicator at our disposal. It had previously been observed by Maria Dainotti and collaborators that the same descending H0 trend can be found in the Pantheon supernovae (SN) sample. While it sounds simple to split a Type Ia SN sample into low and high redshift subsamples and compare the results within the ΛCDM model, one quickly runs into a regime of the model where distributions are non-Gaussian, and this impacts traditional error estimation. This biases Markov Chain Monte Carlo, the de facto technique employed by cosmologists to estimate errors. For this reason, we have been resorting to simulations of mock data and other techniques that overcome this bias. We have identified the same decreasing H0 trend in three independent observables, thereby providing us with a hallmark signature of model breakdown in one of the parameters where we see an early versus late Universe disagreement. Convincing our referees of these results has been an uphill battle. We attribute this to a difference in cultures between physics and astronomy, whereby the prevalent scientific culture in observational cosmology is closer to astronomy. We are in the process of reproducing the results using techniques that are closer to the tastes of observational cosmologists.
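The logic of the mock-data approach can be illustrated with a toy model: simulate many realisations from a known truth, refit each one, and inspect the distribution of best fits. The model below is a stand-in, chosen only because, like distances and H0, the data depend on the parameter nonlinearly; the numbers are arbitrary:

```python
import numpy as np

# Toy illustration of mock-data bias estimation: when data depend on a
# parameter nonlinearly, the distribution of best fits can be skewed,
# and the mean of the mock best fits exposes a bias that symmetric
# Gaussian error bars would hide.
rng = np.random.default_rng(0)
truth = 70.0           # hypothetical "H0-like" true value
n_points, sigma = 5, 0.002

fits = []
for _ in range(20000):
    # each mock data point is 1/truth plus Gaussian noise
    data = 1.0 / truth + rng.normal(0, sigma, n_points)
    # for this model the least-squares best fit is 1 / (sample mean)
    fits.append(1.0 / data.mean())
fits = np.array(fits)
print(fits.mean() - truth)  # positive: the estimator is biased high
```

Averaging over thousands of mocks quantifies how far the recovered parameter sits from the truth, which is the kind of correction that a single real data set cannot reveal on its own.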
More recently, on the assumption that the matter density parameter is constant in the ΛCDM model, we showed that another parameter, the so-called S8 parameter, where we see early versus late Universe disagreements, evolves with redshift. If true, this conforms to our expectation that the model is breaking down. This result is fascinating, as it rests on simple assumptions and straightforward data analysis. We initially showed the result in a smaller data set of 20 data points, before confirming it in a larger data set of 60-odd data points. Of course, in such a large sample of historical data, one can expect that some of the data points are not correct, but the question now is what it would take to stop the S8 parameter from evolving with redshift. Hopefully we will not have to wait long, as the Dark Energy Spectroscopic Instrument (DESI) is going to give us a set of the same constraints from a single survey, where if there is a systematic, it is a systematic common to all data points. Returning to our dark energy argument, one would still need to confirm this evolution in an independent observable, but a combination of weak and CMB lensing may eventually deliver the result, modulo the fact that one has different observables, so differing systematics.
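For concreteness, S8 combines the amplitude of matter clustering σ8 with the matter density Ωm, and a weighted linear fit of S8 against redshift is one simple way to probe for evolution. The data points below are hypothetical placeholders, not the 20- or 60-point compilations discussed above:

```python
import numpy as np

def s8(sigma8, Om):
    # Standard definition: S8 = sigma8 * sqrt(Om / 0.3)
    return sigma8 * np.sqrt(Om / 0.3)

# Hypothetical (z, S8, error) constraints -- illustrative placeholders only
z   = np.array([0.1, 0.5, 1.0, 2.0])
S8  = np.array([0.78, 0.76, 0.80, 0.83])
err = np.array([0.03, 0.03, 0.04, 0.05])

# Weighted linear fit S8(z) = a + b*z via the normal equations; the
# hypothesis of no evolution corresponds to a slope b consistent with zero.
w = 1.0 / err**2
A = np.vstack([np.ones_like(z), z]).T
cov = np.linalg.inv(A.T @ (A * w[:, None]))
a, b = cov @ (A.T @ (w * S8))
b_err = np.sqrt(cov[1, 1])
print(f"slope = {b:.3f} +/- {b_err:.3f}")
```

Evolution would only be claimed if the fitted slope exceeds its uncertainty by a comfortable margin; with a single survey such as DESI supplying all the points, a non-zero slope could no longer be blamed on mismatched systematics between surveys.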
In summary, we see early versus late Universe disagreements in cosmological parameters. Unfortunately, we compare different observables at different epochs, so we have an apples versus oranges comparison. Our approach is to stick to one fruit and to probe for disagreements in ΛCDM cosmological parameters in different redshift ranges, which are a proxy for different times. If one sees the same disagreements in different observables, one is done, and one can call time on the ΛCDM model. If we do not see this evolution, then this deepens the mystery. It should be stressed that the task of identifying a replacement model is not addressed by this process; however, if one finds evolution in a redshift range, it implies that physics is missing in that epoch of the Universe. This narrows down the new physics that could be causing the evolution.