Learn about Cosmology


There are no "coincidences" in the universe

by Vivian Poulin

These are interesting times for cosmologists. On the one hand, general relativity and the cosmological principle of homogeneity and isotropy on sufficiently large scales have led to an incredibly powerful cosmological model (Lambda Cold Dark Matter, or LCDM) whose predictions have been confirmed to extremely high precision with the latest generation of surveys of the cosmic microwave background (the Planck satellite, the South Pole Telescope, and the Atacama Cosmology Telescope) and of large-scale structure (the Sloan Digital Sky Survey). On the other hand, there are immense questions to be answered, in particular: what is the nature of the mysterious dark matter and dark energy that pervade the universe, and that have left cosmologists wondering whether there could be a massive hole in their apparently so perfect model?

I think most cosmologists would agree that even though the LCDM model is so good at explaining and predicting a wide variety of observations, something deep is missing, at least in the fundamental understanding of these components (currently the LCDM model is only a parametric description of nature), and that one expects* that at some point, with larger and larger datasets of ever-increasing precision, something will break down. Particularly exciting today is that we may be on the verge of (yet another) breakthrough in cosmology: our ever-so-perfect cosmological model is failing to describe recent measurements of two fundamental quantities describing the universe. First and foremost, the expansion rate of the universe, known as "the Hubble constant", has been measured to be significantly larger than the prediction, and second, the amplitude of fluctuations (quantifying the "clumpiness") in the local universe is much smaller than the prediction. These "cosmic tensions" may hold the first hints of the true nature of Dark Matter and Dark Energy.

There are some important caveats to consider before claiming that LCDM has failed. First, one may ask whether the "statistical significance" of the discrepancies justifies calling these measurements a tension with LCDM. In other words, could it be a mere chance fluctuation, as can happen whenever one measures a given quantity and the estimate comes out slightly off from its true value? Interestingly, this is becoming increasingly unlikely. In fact, it is now basically impossible to attribute the "Hubble Tension" to a statistical fluke, as the chance of a random fluctuation is now below one in a million. The "sigma8 tension" (as cosmologists call the "clumpiness of the universe" tension), on the other hand, is still at a statistical level of roughly a one-in-a-hundred chance of a random fluctuation, and therefore, in all fairness, could still be due to an unlucky measurement. In addition, there could still be intrinsic measurement errors that bias the current estimates and make it look like there is an issue with LCDM, when in reality one should simply correct the instrument or the analysis. This possibility is being very actively investigated by cosmologists, and so far there is no consensus on whether a systematic error could explain away these measurements. In fact, it seems that one would need to invoke multiple systematic errors acting in conjunction to fool cosmologists, giving hope that these discrepancies are truly due to a failure of our model.
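
As a rough guide to these numbers (a minimal sketch, not taken from any of the published analyses; it simply assumes the discrepancy is quantified by a Gaussian significance level), the translation between "sigmas" and the chance of a random fluctuation takes a couple of lines of Python:

    from scipy.stats import norm

    # One-sided tail probability: the chance of a Gaussian fluctuation at least this large.
    for sigma in (2.3, 3.0, 5.0):
        p = norm.sf(sigma)  # survival function, P(X > sigma)
        print(f"{sigma:.1f} sigma -> about 1 chance in {1 / p:,.0f}")

    # The inverse question: how many sigmas correspond to a one-in-a-million chance?
    print(f"{norm.isf(1e-6):.1f} sigma")

The roughly 5-sigma Hubble Tension therefore corresponds to a chance well below one in a million, while a discrepancy at the 2-3 sigma level sits in the one-in-a-hundred ballpark quoted above.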

With that, one may ask whether there are models that could explain these data, and what it could mean for our understanding of dark matter and dark energy. The answer is: it's complicated. Just as there is no consensus on whether a measurement or modelling error could be at play, there is also no consensus yet on which models (if any) can explain these measurements. In fact, one must admit that it is very hard to explain the measured value of the Hubble constant precisely because we have so many other high-precision datasets that seem to fit our LCDM model so perfectly. When one tries to adjust one handle to match the Hubble constant measurement, another dataset typically becomes discrepant with the resulting model, leaving poor cosmologists in a state of dissatisfaction (and I try!). I will leave the discussion of models attempting to resolve the Hubble Tension for another time, or another author, and focus for now on the topic of the recent seminar I gave in the context of the CosmoVerse series, which deals with a model dedicated to resolving the "sigma8 tension".

On paper, contrary to the Hubble Tension, resolving the sigma8 tension is fairly "easy". By this I mean that, while we have a large number of high-accuracy datasets, there is a window left open where the freedom to play with the model is large, and can be used to explain (at least theoretically) the fact that our local universe appears less clumpy than the LCDM model predicts. This is because our best measurements, those which agree well with the LCDM model, span mostly large scales (here, scales or distances greater than ~30 million light-years), while measurements of the "clumpiness" of the local universe are mostly sensitive to a range of scales smaller than that, down to about 1 million light-years or so. In practice, however, there are limitations to this statement of simplicity. On the one hand, making predictions at very small scales is complicated (gravity becomes "non-linear", coupling multiple scales, and the effect of forces beyond gravity due to "baryons", such as supernovae, active galactic nuclei, etc., may become important), and on the other hand, it is not fully true that the window at small scales is wide open. There are other important measurements at those scales that one should check because they may constrain your favourite model (I think in particular of the "Lyman-alpha forest" or the Milky Way "satellite galaxies"), although the rule of thumb is that these are less constraining, or at least less robust, than the usual large-scale ones. Proposed models typically invoke new properties of dark matter and/or dark energy, or modifications of gravity. One of my favourite explanations is that Dark Matter may be unstable over cosmological time-scales and decay into a lighter particle, which acquires some velocity in the process, allowing it to escape gravitational potentials and reduce the local "clumpiness". Another possibility is that, in the past, Dark Matter interacted with some light particles, similar to what happened to standard model baryons (the mixture of electrons and protons), which interacted with photons. Such an interaction damps structure at small scales but leaves the very large scales unaffected, leaving us today with fewer structures in the local universe. All these models are being actively studied by cosmologists.

Filamentous structures of galaxies and galaxy clusters in the universe. If dark matter in the universe is hot (left panel) there is less structure than if it is warm (middle panel) or cold (right panel).
Credit: ITC @ University of Zurich

However, I want to focus on a third interesting possibility: what if Dark Matter and Dark Energy could "talk" to each other? This sort of scenario was long thought of as a way to explain why the measured density of dark energy is so similar to that of Dark Matter specifically in the current epoch. Indeed, given that we know very little about dark energy, and that its impact can be dramatic for the universe (and for the existence of life!), it is quite natural to ask whether there is a dynamical mechanism explaining its current value rather than mere chance, or a "cosmic coincidence" (as it is called nowadays). As it turns out, there are strong constraints on models in which the energy densities of Dark Matter and Dark Energy vary significantly over time in a way different from the typical behaviour expected in LCDM, because, as said before, our observations (except for these cosmic tensions) are in very good agreement with LCDM. It is in fact debated whether a transfer of energy from Dark Matter to Dark Energy is permitted by the data as a way to explain the Hubble Tension. Currently, the data indicate that it is disfavoured, but this remains an interesting possibility. However, the constraints are much weaker when it comes to allowing for a momentum exchange (as opposed to an energy exchange) between the two components. The reason is that Dark Matter is cold, and a momentum exchange does not significantly affect its energy density. However, it does affect how the small perturbations at the origin of the structures we see today grow, and such interactions can be used to reduce the amplitude of fluctuations in the local universe, therefore helping to explain the "sigma8 tension". This is an interesting possibility, and in fact it has been shown by several authors that this may work! The momentum exchange can pass the stringent constraints from the cosmic microwave background and seems to fit the observations made in the local universe.
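
For the more technically minded reader, here is a schematic way to see why a pure momentum exchange is so much less constrained (an illustrative sketch only; conventions and coupling functions differ between specific models in the literature). The background densities of dark matter and dark energy evolve exactly as in LCDM, and only the equation for the dark matter velocity perturbation picks up a drag term proportional to the relative velocity of the two fluids, schematically

    \dot{\theta}_c = -\mathcal{H}\,\theta_c + k^2 \psi + \Gamma\,(\theta_{\rm de} - \theta_c)

where \theta is the velocity divergence of each component, \psi the gravitational potential, \mathcal{H} the expansion rate and \Gamma the momentum-transfer rate. The drag slows the infall of dark matter into gravitational potential wells and therefore suppresses the growth of structure, without touching the expansion history that the cosmic microwave background pins down so well.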

Moreover, as argued during my talk, there is an interesting "coincidence" that I believe to be of particular interest, and which is reminiscent of the first "cosmic coincidence" mentioned previously: our datasets are such that they confine the interaction to be significant specifically at the time at which Dark Energy starts dominating the energy density of the universe (i.e., specifically when the energy densities of dark matter and dark energy become of the same order). Had it been much stronger, the amplitude of fluctuations would be so low that there would be no structures in the local universe. Had it been much weaker, we would not see its impact on the data. This second "coincidence" suggests that the sigma8 tension may be connected to the much older "coincidence problem", providing us with new hints towards model-building for dark matter and dark energy. This hint is just a first step that needs much more data and accurate theoretical predictions to be tested, and may very well be invalidated with better data. However, if it is confirmed, it could have tremendous implications for our understanding of the Dark Universe and would pave the way toward building a theory that unifies dark matter and dark energy. We are still only at the dawn of building and testing such a theory. But one thing is certain: whether or not this specific model is correct, if the cosmic tensions that we see today withstand the test of continuously improving data (and these are coming within the next decade with the plethora of surveys that have just started: DESI, JWST, Euclid, and the Vera Rubin Observatory, among many others), our beautiful LCDM model is in big trouble. And yet, from the likely failure of a model, we will be in a position to make tremendous progress in our understanding of the mysteries of our big and dark universe. Interesting times indeed.

*To be fair, it is possible that dark energy is just a cosmological constant, and that dark matter, even though its fundamental nature is unknown, behaves at the cosmological level *exactly* like cold dark matter, down to the smallest scales, with only gravitational interactions. This "nightmare" scenario, although possible, is not fun. Let's hope the universe is fun.


Reading the tea leaves in ΛCDM tensions

by Eoin O’Colgain

The field of cosmology appears to be in flux. The standard model ΛCDM, which is based on the cosmological constant Λ and cold dark matter (CDM), is subject to increasingly strenuous stress tests as data quality improves. Concretely, discrepancies have been found between data sets on the assumption that the ΛCDM model is correct. Early and late Universe observables appear to lead to different results. This either points to observational errors, called systematics, in data or physics missing from the ΛCDM model. What makes cosmology so fascinating is that any data set could be giving one a bum steer. In principle, one may simply be drinking tea and interpreting the tea leaves if one relies too much on any given observation. Ultimately, deciphering the problem, or even establishing there is a problem, is tricky.

The ΛCDM model is special. It represents a minimal collection of assumptions that fits the Cosmic Microwave Background (CMB), the relic radiation from the early Universe. The assumptions are packaged into cosmological parameters, which cosmologists fit not only to CMB data, but also a host of other independent data sets. Obviously, these cosmological parameters are fitting constants in a model since we fit them to data. The working assumption in cosmology is that these fitting constants are bona fide constants. In other words, if one determines a given cosmological parameter using data at different epochs, one expects to get the same result.

Nevertheless, this no longer appears to be the case. Data sets at different epochs, when fitted to the ΛCDM model, are returning results that appear to differ. At this juncture, there are two actions a cosmologist may take. The first response is to assume that the ΛCDM model has broken down and look for a replacement model. The second is to look at the assumptions in the observations and try to find an observational systematic that could explain the disagreement. Both responses to a potential crisis are evident in the literature.

The problem with the first response is that cosmology offers very few good clues or leads even when missing physics is on the table. Put differently, data quality can be poor, so there are an infinite number of models that one could construct with new physics that could explain the data. Indeed, simply adding additional parameters to the ΛCDM model in an ad hoc manner is guaranteed to inflate errors enough so that all observations become consistent. The problem with the second is that searching for systematics is an endless pursuit. All observations in astrophysics and cosmology rest upon assumptions, any one of which could be wrong. Thus, the only way one is confident of any inference in astrophysics or cosmology is if one sees the same feature in multiple independent data sets. To put this in context, in 2011 the Nobel Prize in Physics was awarded for the discovery of late-time accelerated expansion, aka dark energy. Dark energy is supported by multiple observations, so it is a robust result. In the same vein, the anomalies we see between data sets assuming the ΛCDM model are now supported by independent data sets, which makes the discrepancies compelling and worth studying.

This leaves us with only one avenue to make progress on the problem. We need a definition of model breakdown. This sounds simple, but traditionally cosmology has embraced a Bayesian framework, whereby the favoured model is the model that fits the data the best. Since models with additional fitting parameters can fit data better in a trivial way, a penalty is assigned to models for each additional parameter added. In this framework, the ΛCDM model has a distinct advantage in the sense that it typically has fewer parameters. So, any replacement to the ΛCDM model needs to be as physically well motivated as the current model yet provide a better fit to all reputable data sets. Note, in this Bayesian framework, there is no concept of model breakdown. Model A and model B can both be physically flawed, but the model with the fewest parameters that fits the data the best is preferred.
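
To make the penalty concrete (an illustration only; the full Bayesian treatment computes the evidence, which penalises extra parameters through the prior volume), two commonly used information criteria are

    \mathrm{AIC} = -2\ln\mathcal{L}_{\rm max} + 2k, \qquad \mathrm{BIC} = -2\ln\mathcal{L}_{\rm max} + k\ln N

where \mathcal{L}_{\rm max} is the best-fit likelihood, k the number of free parameters and N the number of data points: a better fit lowers the first term, but every extra parameter raises the second.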

Here we can transplant an idea from elsewhere in science and bring it into the cosmology literature. The basic gist is that any dynamical model, a model one confronts with time-sensitive data, worth its salt must always return the same values of the fitting parameters. Bluntly put, any model that returns different values of the fitting parameters at different times is toast. What this means is that the model no longer makes valid predictions, so it is no longer of wide scientific interest. Cosmological data comes from astronomy, and astronomers do not measure time, but they record redshift, the shift in the wavelength of light due to the expansion of the Universe. Redshift is a proxy for time in cosmology. Thus, the research programme is straightforward: one can test the ΛCDM model by fitting it to data in different redshift ranges and then compare the results. It should be stressed that the anomalies we now see in cosmology are discrepancies between early and late Universe observables, so we are clearly looking at a signature of model breakdown if systematics can be set aside. As explained, eliminating all systematics in any observable is a Herculean task.
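
The programme can be sketched in a few lines (a toy illustration with hypothetical arrays z, mu and mu_err holding supernova redshifts, distance moduli and their errors; real analyses handle covariance matrices and the non-Gaussian issues discussed below far more carefully):

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import curve_fit

    C = 299792.458  # speed of light in km/s

    def distance_modulus(z, H0, Om=0.3):
        """Flat-LCDM distance modulus at redshift(s) z, for H0 in km/s/Mpc."""
        z = np.atleast_1d(z)
        E_inv = lambda zp: 1.0 / np.sqrt(Om * (1.0 + zp)**3 + (1.0 - Om))
        dc = np.array([quad(E_inv, 0.0, zi)[0] for zi in z])  # dimensionless comoving distance
        dl = (1.0 + z) * (C / H0) * dc                        # luminosity distance in Mpc
        return 5.0 * np.log10(dl) + 25.0

    def fit_H0(z, mu, mu_err):
        """Fit H0 (with the matter density held fixed) to one subsample."""
        popt, pcov = curve_fit(lambda zz, H0: distance_modulus(zz, H0),
                               z, mu, p0=[70.0], sigma=mu_err, absolute_sigma=True)
        return popt[0], np.sqrt(pcov[0, 0])

    # Compare the same fitting parameter across redshift bins:
    # for lo, hi in [(0.0, 0.1), (0.1, 0.3), (0.3, 1.0)]:
    #     sel = (z > lo) & (z <= hi)
    #     print(lo, hi, fit_H0(z[sel], mu[sel], mu_err[sel]))

If the ΛCDM model were the whole story, the H0 recovered in each bin would agree within the errors; a systematic drift with redshift is the kind of signature discussed below.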

Another problem with the status of the discrepancies is that one is comparing different observables at different redshifts. These different observables can in principle have different systematics or observational errors, which we have overlooked. In contrast, if one can compare the same observable at different redshifts, and yet find a difference in cosmological parameters, this is a much sharper contrast. It either says one of the assumptions common to all the data is wrong or the ΛCDM model is incorrect. Moreover, if one sees the same feature across independent observables with different underlying assumptions, then one has a conflict between a set of observables and the ΛCDM model. Returning to the science case for dark energy, where results resting on multiple observations demand credence, this argument can tip the debate in favour of ΛCDM model breakdown. The argument is simple and effective.

Surprisingly, these simple tests have not been done. Inspired by an observation of a descending Hubble constant (H0) in a sample of H0LiCOW/TDCOSMO strong lenses, my collaborators and I have started to perform these tests. Note, a descending H0, while apparently contradictory, simply says that the model is flawed (if the trend is not due to systematics). An early port of call was to study Type Ia supernovae, arguably the best distance indicator at our disposal. It had previously been observed by Maria Dainotti and collaborators that the same descending H0 trend can be found in the Pantheon supernovae (SN) sample. While it sounds simple to split a Type Ia SN sample into low and high redshift subsamples and compare the results within the ΛCDM model, one quickly runs into a regime of the model where distributions are non-Gaussian, and this impacts traditional error estimation. This biases Markov Chain Monte Carlo, which is the de facto technique employed by cosmologists to estimate errors. For this reason, we have been resorting to simulations of mock data and other techniques that overcome this bias. We have identified the same decreasing H0 trend in three independent observables, thereby providing us with a hallmark signature of model breakdown in one of the parameters where we see an early versus late Universe disagreement. Convincing our referees of these results has been an uphill battle. We attribute this to a difference in cultures between physics and astronomy, whereby the prevalent scientific culture in observational cosmology is closer to astronomy. We are in the process of reproducing the results using techniques that are closer to the tastes of observational cosmologists.

More recently, on the assumption that the matter density parameter is constant in the ΛCDM model, we showed that another parameter, the so-called S8 parameter, where we also see early versus late Universe disagreements, evolves with redshift. If true, this conforms to our expectation that the model is breaking down. This result is fascinating, as it rests on simple assumptions and straightforward data analysis. We initially showed the result in a smaller data set of 20 data points, before confirming it in a larger data set of 60-odd data points. Of course, in such a large sample of historical data, one can expect that some of the data points are not correct, but the question now is: what would it take to stop the S8 parameter from evolving with redshift? Hopefully we will not have to wait long, as the Dark Energy Spectroscopic Instrument (DESI) is going to give us a set of the same constraints from a single survey, where if there is a systematic, it is a systematic common to all data points. Returning to our dark energy argument, one would still need to confirm this evolution in an independent observable, but a combination of weak and CMB lensing may eventually deliver the result, modulo the fact that one has different observables, so differing systematics.
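
For reference, the parameter in question is conventionally defined in terms of the amplitude of matter fluctuations \sigma_8 and the matter density parameter \Omega_m as

    S_8 \equiv \sigma_8 \left(\Omega_m / 0.3\right)^{1/2}

which is why holding the matter density fixed lets one track whether the inferred S8 stays constant or drifts with redshift.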

In summary, we see early versus late Universe disagreements in cosmological parameters. Unfortunately, we compare different observables at different epochs, so we have an apples versus oranges comparison. Our approach is to stick to one fruit and to probe for disagreements in ΛCDM cosmological parameters in different redshift ranges, which are a proxy for different times. If one sees the same disagreements in different observables, one is done, and one can call time on the ΛCDM model. If we do not see this evolution, then this deepens the mystery. It should be stressed that the task of identifying a replacement model is not addressed by this process; however, if one finds evolution in a redshift range, then it implies that physics is missing in that epoch of the Universe. This narrows down the new physics that could be causing the evolution.


Quantum effects in the Universe

by Antonio Ferreiro

Einstein's theory of General Relativity (GR) has not only changed our view of gravity but also had a deep impact on the construction of modern physical theories themselves. Indeed, GR, initially conceived as a framework for describing the gravitational field in a relativistic manner, has produced novel conceptual entities. Good examples are black holes and gravitational waves, which in the last decades have been the main engine of fundamental theoretical physics investigations, thanks to the high-tech machinery for their detection and observational analysis. Historically, though, the first real consequence of GR came from the study of the spacetime associated with our whole Universe.

The main entity of GR is the spacetime metric g(x), which is a solution of the well-known Einstein field equations. The metric gives us the correct motion of a particle propagating through a (non-flat) space and time. In this framework, gravity is an intrinsic feature of the geometry, encoded in the metric g(x).
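
For completeness, the field equations referred to here relate the curvature built from g(x) to the matter and energy content and, including the cosmological constant \Lambda, read

    G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}

where G_{\mu\nu} encodes the curvature of spacetime and T_{\mu\nu} the energy and momentum of matter and radiation.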

The first attempts to find solutions of Einstein's equations were made by studying how the homogeneous matter content of the Universe sourced the spacetime. These were worked out by Einstein and de Sitter, and then extended by Friedmann and Lemaître to include the possibility of a dynamical evolution of space, i.e., a time-dependent metric. Lemaître was also responsible for giving a concrete interpretation of the physical solution entailing an expanding universe and how it relates to the observed redshift of the galaxies. Following this result, many other important features of the dynamics and evolution of the Universe became clear during the following decades, culminating in today's Lambda-CDM model, which includes components like Dark Matter and Dark Energy, about which we still know very little and on which research is ongoing.
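
In the spatially flat case, these expanding solutions are described by a single function of time, the scale factor a(t), through the line element

    ds^2 = -c^2\,dt^2 + a^2(t)\left(dx^2 + dy^2 + dz^2\right)

and the observed redshift of galaxies follows from the growth of a(t) between the emission and the detection of their light.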

Parallel to these investigations, the construction of the Standard Model of particle physics was successfully achieved. The theoretical framework, Quantum Field Theory, correctly described all the available experimental data on elementary particles and their interactions. One of the fundamental pillars of this framework is the assumption of a Minkowski spacetime. This particular solution of Einstein's equations contains symmetries which allow one to construct a consistent notion of particle states and, consequently, of scattering amplitudes, which can then be tested in the laboratory. An important question is what happens to a particle/field, e.g. the electron, in the presence of a curved, non-Minkowskian spacetime.

At the beginning of the sixties, Leonard Parker decided to work for his PhD thesis on the intersection of the two above-mentioned frameworks: quantum field theory and general relativity. When dealing with non-flat solutions of Einstein's equations, such as the expanding spacetime of our Universe, the construction of a unique notion of particle states is no longer possible. Parker consequently found that a spacetime which is dynamically evolving in time spontaneously produces particles from the vacuum. In particular, if we imagine a Universe that starts in an early-time Minkowski spacetime, evolves through an expanding phase, and ends again in a flat Minkowskian region, one can calculate the net particle number produced by this expansion. This is exactly what Parker did, and he found that particles were indeed produced.
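
Schematically (a textbook-style sketch rather than the detail of Parker's calculation), the field is expanded in mode functions in the initial and final flat regions, and the two sets are related by a Bogoliubov transformation,

    u_k^{\rm out} = \alpha_k\, u_k^{\rm in} + \beta_k\, u_k^{{\rm in}\,*}, \qquad n_k = |\beta_k|^2

so whenever the expansion makes \beta_k non-zero, the state that started as the vacuum contains n_k particles per mode for a late-time observer.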

At that time, this result was thought to be just a curiosity, since a time-dependent gravitational field such as that of our own Universe evolves too slowly to produce a sufficient number of particles to be detected in the first place. However, the later introduction of the Inflationary phase during the first instants of our Universe provided the perfect playground for this effect. Indeed, the rapid expansion of spacetime during Inflation generates quantum fluctuations analogous to the particle production phenomenon that Parker predicted in his thesis. The interesting feature of this case is that these (quantum) fluctuations leave an imprint in current observations: they are believed to give rise to the small inhomogeneities observed in the cosmic microwave background, which are the gravitational seeds that generated the different structures observed in the Universe.

The importance of quantum effects in the early Universe has been recognised over the last decades, and a lot of effort has gone into clarifying their role in the formation of standard model matter (also known as reheating), the production of gravitational waves, or the creation of Dark Matter candidates, among others. One of the main open problems of current research in Cosmology is how to quantify these phenomena and compute potential observables that can be measured by upcoming and future detectors and telescopes.

References:

  1. Cosmology and Controversy: The Historical Development of Two Theories of the Universe, Helge Kragh, Princeton University Press, Princeton, N.J., 1996
  2. Creation of quantized particles, gravitons and scalar perturbations by the expanding universe, Leonard Parker, J. Phys. Conf. Ser. 600, 2015
  3. Fifty years of cosmological particle creation, Leonard Parker and Jose Navarro-Salas, arXiv:1702.07132

Black Holes, Windows to New Observational and Theoretical Frontiers

by Shahin Sheikh-Jabbari

Physics is the science of understanding Nature through mathematical formulations. Here, understanding amounts to making a model within a given theoretical framework, analyzing the model to make predictions, and performing observations or experiments on natural phenomena to check the predictions. Once the predictions of a model pass this test, physicists claim to understand the phenomenon. This notion of science, and in particular of physics, was mainly initiated by Galileo Galilei and continues to date. Newton was the first to formulate the mechanics of bodies and objects. In Newtonian dynamics, we have a notion of force which quantifies how particles and systems interact with and see each other. In this setting, therefore, one in principle knows everything about the dynamics of a system once one can formulate the forces acting on the system and its constituents. Since the time of Newton, physicists have tried to formulate forces, or in modern terminology, interactions.

Gravity and Einstein General Relativity

Gravity is the force of Nature that was historically formulated first. Nevertheless, it remains very challenging to this day. Newtonian gravity* is the force exerted on any mass, like us, the Earth, the Sun or any other object, and it falls off as the inverse square of the distance between two bodies. Gravity is a universal force which seems to be present everywhere and among all physical objects. Despite the successes of the inverse-square law of gravity in explaining the dynamics of objects on Earth and of celestial bodies, like the planets of the solar system, it was challenged (more precisely, improved and corrected) over two centuries later by Einstein's theory of General Relativity (GR). Einstein's relativity states that time and space are closely related concepts and that it is hence more fruitful to treat them on the same footing, using the notion of space-time rather than time and space individually. According to Einstein's GR, a force as universal as gravity should be linked with something equally fundamental and universal: space-time itself. In Einstein's GR, gravity is tied to a property of space-time called the metric, which is a measure of distances in space-time, and the strength of gravity is measured by another property of space-time called curvature: the bigger the curvature, the stronger the gravitational pull one would feel. Here, curvature has essentially the same meaning as one intuitively understands for the curvature of surfaces we deal with in daily life; however, instead of two-dimensional surfaces, here we have four-dimensional space-times.

Einstein's GR was introduced in 1915 and, like all other physical theories or models, it is described by an equation. Einstein's equation takes the matter and energy distribution as an input, and its solutions determine the space-time produced by the gravitational pull of the matter and energy inside it. The Einstein equations are nonlinear coupled partial differential equations, and no general method to solve them exists to date. Nonetheless, from the early days after these equations were introduced, physicists and mathematicians alike have been working hard to find solutions. Among the first solutions were the one introduced in 1915 by the German physicist Karl Schwarzschild, and the one given by the Russian mathematician and physicist Alexander Alexandrovich Friedmann in 1922. The corrections introduced by Einstein's GR to Newtonian gravity are generically tiny on the scales we deal with in our daily life, while they are more pronounced on larger scales, in extreme conditions of very high matter density or temperature, or at very short, sub-atomic distances which we have not been able to probe so far. So GR may be put to the test on several different fronts and distance scales: cosmic scales (distances as large as a galaxy cluster or bigger), where a Friedmann-type solution is expected to be at work; astrophysical scales; Solar-system physics; and eventually very short distances.

The first success of Einstein's GR came through an analysis by Einstein himself. Considering the corrections GR provides to Newtonian gravity, it was shown that GR, and in particular the Schwarzschild solution, is capable of explaining the mismatch, accumulated over a couple of centuries up to the early 1910s, between observations of the orbit of Mercury and the predictions of Newtonian gravity. However, the first experiment designed to test GR came in 1919 and was based on Schwarzschild's solution and the so-called light bending, known more technically today as gravitational lensing: the fact that within GR light bends while passing around a massive object, like the Sun. Later on, other solutions of GR (beyond Schwarzschild's) passed observational tests, and to date GR has been very successful in confronting observational data.

Black holes and horizons

Schwarzschild's solution has some peculiar features and theoretical issues which made Einstein, in particular, very unsatisfied and unhappy: the solution has a singularity at a specific locus of space-time, where gravity is very strong and the curvature blows up. In addition, as pointed out, the presence of gravity generically changes the causal structure of space-time and how light rays travel in it. In particular, this causal structure may be distorted to the extent that there exists a region of space-time which only allows for one-sided communication: one may be able to receive signals in that region but not send signals out. The border beyond which one cannot make two-sided communication is called the horizon. Fortunately, in the Schwarzschild case, and presumably in all physically relevant cases, the singular region is dressed and not naked; it is hidden behind the horizon. In other words, the singularity is inaccessible to anybody outside. This may make the presence of a singularity more tolerable.

John Wheeler coined the name black hole in 1967 for space-times that have horizons, with observers living outside the horizon. Schwarzschild's solution is the simplest black hole predicted by the GR theory. Black holes, as predicted by the theory, are indeed very odd objects: if one approaches the horizon, one may simply pass through without feeling anything special; however, once one has passed it, there is no escape and no return. The distortion of the causal structure of the space-time is such that it would take an infinite time, no matter how hard you try, to go back across the horizon after crossing it once. Moreover, according to the theory, once one has passed the horizon, the destined fate is to fall into the singularity. Close to the singularity, gravitational effects and tidal forces are so strong that one would be torn apart, not a pleasant fate of course. So, the key question is whether black holes are just theoretical fantasies or whether they can actually form from the ordinary matter we find in the world. Another key question is whether such a solution of the GR theory can actually exist in the real world. These two questions have both theoretical and observational aspects, which we discuss next.

This computer-simulated image shows gas from a tidally shredded star falling into a black hole. Astronomers observed the flare in ultraviolet light using NASA’s Galaxy Evolution Explorer.

Where do black holes come from?

Recall that the source of gravity is mass, and as we increase the mass, the gravitational pull increases. Using naïve intuition coming from Newton's inverse-square law, if we keep piling up mass, its own gravitational pull can compress it into a region smaller than the corresponding horizon size. The question is whether in reality such a squeezing can take place. One may make a thought experiment. Let us consider a lump of gas at low temperature and leave it for a long time. This is in fact very close to the situation from which almost all stars we see today have started. Under its gravitational pull, the molecules of the gas start condensing and squeezing in. If there is no other force acting on these molecules, there is in principle nothing to forbid or stop the squeezing process, and the system undergoes gravitational collapse and a black hole forms.
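
The horizon size in question is the Schwarzschild radius, which for a body of mass M is

    r_s = \frac{2GM}{c^2} \approx 3\,\mathrm{km} \times \frac{M}{M_\odot}

so the Sun would have to be squeezed into a ball of roughly 3 km radius, and the Earth into one of about 9 mm, before a horizon forms.
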
In reality, however, there are other forces. These forces set in when the matter is squeezed to the extent that individual atoms begin to get pushed and squeezed into one another. So the internal structure of atoms, and in fact even of atomic nuclei, also becomes important. This makes the nuclear forces relevant well before a black hole can form. This is the simplest model of stellar evolution: when a lump of gas squeezes in under its own gravity, it reaches densities and temperatures where nuclear fusion can start, and it becomes a star like our own Sun. This leads to repulsive forces which are generically strong enough to withstand gravitational collapse. The evolution path a star follows very much depends on its mass, and one should hence modify or improve the above simple-minded picture. There is a critical mass above which these other forces, which are of a genuinely quantum nature, cannot withstand the collapse. This critical mass is about 8-9 times the mass of our own Sun; see chapter 1 of [1] and references therein. That is, starting with a heavy star, at the last stages of its evolution it undergoes a dramatic supernova explosion, and what remains after the explosion can be a black hole.
The above core-collapse scenario leads to the formation of stellar-mass black holes with masses 1-10 times the mass of our Sun. However, theoretically, black holes can exist over a very wide range of masses, from microscopic black holes (as light as 10^(-8) kg) all the way to supermassive black holes (as heavy as a typical galaxy like the Milky Way). These black holes are not formed through the core-collapse scenario. Some of them can be primordial black holes, remnants from the early Universe. Some heavier ones can be the result of black hole mergers, which happen in the late universe and within galaxies. As yet, the origin of supermassive black holes remains an open question.

Black holes in the sky?

Existence and physical properties of black holes can, depending on their mass, be checked either in terrestrial experiments or in astrophysical observations. On the other hand, we can see things only when light (or some other signal or messenger) can come out of them and reach us. But we just argued that no emission comes out of black holes, so how can we test this prediction?

The answer can come from the fact that black holes affect their surroundings. One proposal is that a black hole can eat up and absorb surrounding material: before falling into the black hole and passing the point of no return, matter leaves behind a trace; the matter accreting onto the hole speeds up, heats up and starts radiating X-rays or more energetic γ-rays with specific characteristics, and this radiation may be observed. Another proposal to detect black holes is through the light bending and gravitational lensing a black hole may cause when light or a generic electromagnetic wave from another source passes in its vicinity. In the last decade another window on black holes has opened: observing them through the gravitational waves emitted in merger processes that involve black holes.

The emerging picture is that at the center of a galaxy we have a (super)massive black hole and the whole galaxy is rotating around this black hole, whose mass can be millions to billions of times that of our Sun. Moreover, we now know that black holes with masses of a few to about 200 times the mass of the Sun are very frequent and all over the place [2]. Multi-messenger astrophysics will continue to provide a lot more information about black holes in the sky, and their origins, formation and evolution, in the years to come.

Black holes, Quantum theory and information problem

The physics of black holes still holds a lot of mysteries, and many physicists are working hard to open up this black box and uncover them. One of these mysteries, on the theoretical side, arose from the seminal works of Stephen Hawking in the early 1970s, indicating that, once we combine Einstein's GR with the laws of quantum theory, black holes are not completely black and actually radiate (the Hawking radiation). Hawking radiation can lead to the total evaporation of a black hole, leaving one with an almost blackbody radiation. The process of formation and evaporation of black holes leads to the information problem: all different kinds of matter undergoing gravitational collapse yield the same aftermath once the black hole has evaporated. This has brought many more questions and mysteries about black holes, which are actively under study by theoretical high energy physicists. The black hole information problem has led to the idea that quantum information theory and its concepts provide a very natural framework in which to (re)formulate space-time and gravity [1,3]. We hope these studies bear fruit and give us a clearer and more coherent picture of black holes, their formation and evolution, sometime soon.
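
For a black hole of mass M, the temperature of this radiation is given by Hawking's formula,

    T_H = \frac{\hbar c^3}{8\pi G M k_B} \approx 6\times 10^{-8}\,\mathrm{K} \times \frac{M_\odot}{M}

absurdly cold for astrophysical black holes, which is one reason why Hawking radiation has not been observed and the evaporation remains, for now, a theoretical prediction.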

by M.M. Sheikh-Jabbari, School of Physics, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran


* It seems that Hooke, Wren and Halley, who were contemporaries of Newton, had independently discovered this law, but it is largely known as Newtonian gravity.

References:

[1] D. Grumiller and M.M. Sheikh-Jabbari, Black Hole Physics: from collapse to evaporation, Springer graduate textbooks, 2022, ISBN 978-3-031-10342-1, doi:10.1007/978-3-031-10343-8

[2] LIGO black hole gallery

[3] J. Maldacena, Black holes and quantum information, Nature Rev. Phys. 2 (2020) no. 3, 123-125


How our galaxy and its neighbours help us understand dark energy

by David Benisty

Have you ever wondered about the forces that shape our universe and drive its expansion? I am here to share with you some fascinating insights into a concept called dark energy—a theoretical form of energy that has captivated scientists worldwide. Join me on this journey as we delve into the mysteries of the Cosmological Constant and its role in our ever-expanding universe. In the late 1990s, astronomers made an astonishing discovery. Observations of distant supernovae revealed something unexpected: the universe was not just expanding—it was accelerating in its expansion! This finding challenged our previous assumptions that the expansion would gradually slow down due to the gravitational pull of matter.

To explain this accelerated expansion, scientists proposed the existence of dark energy. Dark energy is thought to be a form of energy that fills space uniformly and drives the cosmic acceleration. The simplest explanation for dark energy is the cosmological constant Λ—a constant energy density that pervades the universe. Initially introduced by the brilliant Albert Einstein in his general theory of relativity, the cosmological constant was later abandoned as unnecessary. However, the accelerating expansion observations revived interest in it as a representation of dark energy. Dark energy remains one of the most enigmatic and perplexing concepts in the field of astrophysics. Its nature and origin continue to elude us, leaving scientists eager to uncover its secrets. In my recent exploration, I sought to measure the Cosmological Constant on a new scale, unveiling a potential breakthrough in our understanding.

A study conducted by Partridge et al. in 2013 shed light on the impact of the cosmological constant Λ within our own galactic neighbourhood, the Local Group (LG). This group comprises familiar neighbours such as our Milky Way and the Andromeda galaxy (M31), along with a retinue of over 78 dwarf galaxies. By envisioning the LG as a simplified two-body dynamical system, researchers employed a method called the Timing Argument to estimate its mass.

Credit: Mark Garlick/Science Photo Library (https://www.skyatnightmagazine.com/space-science/local-group-guide-galaxy-neighbourhood)

The Timing Argument is an elegant approach that follows the separation and relative velocity of the Milky Way and M31 galaxies from the time of the Big Bang, when they started moving apart, to their subsequent motion. Astonishingly, this analysis revealed that the cosmological constant can influence the inferred mass of the system by up to 13%.

Driven by a thirst for knowledge, I delved deeper into these findings, seeking to understand the broader conditions under which the cosmological constant Λ dominates in binary motion. Through rigorous analysis and the development of an analytical solution to the two-body problem, I discovered a fascinating relationship: the ratio between the Keplerian period and 63.2 billion years holds the key to determining the importance of the Cosmological Constant. Given that the Andromeda-Milky Way orbit has a period of approximately 20 billion years, it became evident that Dark Energy must be considered.
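
As a back-of-the-envelope check of that number (an illustrative sketch, not the derivation in the paper: it assumes the critical period is the timescale 2π/√(Λc²) set by the cosmological constant, with Planck-like parameter values plugged in):

    import numpy as np

    H0 = 67.4 * 1.0e3 / 3.0857e22       # Hubble constant in 1/s (illustrative Planck-like value)
    Omega_L = 0.685                      # dark-energy density parameter (assumed)
    Lambda_c2 = 3.0 * Omega_L * H0**2    # Lambda * c^2 in 1/s^2

    T_Lambda = 2.0 * np.pi / np.sqrt(Lambda_c2)   # the critical period, in seconds
    T_Lambda_Gyr = T_Lambda / (3.156e7 * 1.0e9)   # convert to billions of years, ~63 Gyr
    print(f"T_Lambda ~ {T_Lambda_Gyr:.1f} Gyr")

    # The Milky Way-Andromeda Keplerian period is ~20 Gyr, so the ratio is of order one third,
    # too large for the cosmological constant to be neglected in the orbit.
    print(f"ratio ~ {20.0 / T_Lambda_Gyr:.2f}")
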
Building upon these exciting revelations, I embarked on a quest to determine which value of the cosmological constant Λ aligns with the known measured mass of the Local Group, specifically that of the Milky Way and Andromeda. In my forthcoming publication (2306.14963), I present strong bounds on the Cosmological Constant based on the motion of Andromeda: a value approximately five times greater than the value measured by Planck. These findings open up thrilling new avenues for investigating the Cosmological Constant on previously unexplored scales.

These ground-breaking discoveries serve as catalysts for further exploration and research. I am eager to delve into alternative theories and expand my studies to other LG-like galaxy groups, in order to further constrain dark energy on new scales. These endeavours promise to yield fresh and valuable insights, bringing us closer to unravelling the intricate workings of our vast and awe-inspiring universe. Together, let us embark on this captivating journey of discovery. The cosmos beckons, and there is so much more to uncover. By pushing the boundaries of our knowledge and embracing the mysteries of dark energy, we take another step towards gaining a deeper understanding of our Universe and our place within it.


Lurking in the dark

The hunt for new physics in the dark sector

By Krishna Naidoo

The standard model of particle physics, which describes the nature and interactions of sub-atomic particles, is one of the most celebrated theories in science. Over the years, its predictions have been verified to incredible precision and few discrepancies have persisted. But despite its many successes, we know the model is incomplete; among other things, it fails to explain the presence of the dark sector – the mysterious dark matter and dark energy, which are the main components of the standard cosmological model, and massive neutrinos. Direct measurements of dark matter and dark energy remain out of reach: their 'dark' nature means they hardly interact, making them extremely difficult to detect through particle physics experiments. Their presence has only been detected in astronomy, through their effect on galaxies and on the distribution of matter on the very largest scales. Experiments in cosmology have shown us that we need dark matter, specifically cold dark matter, to explain observations of galaxy rotation curves and the cosmic microwave background, and to provide the seeds for the growth of structures (and eventually galaxies like our own Milky Way). Dark energy, on the other hand, in the form of a cosmological constant, provides an explanation for the accelerated expansion of the Universe at late times.

What can we expect to learn about the dark sector from the next generation of cosmological experiments?

Dark Matter is an essential component of the standard model of cosmology; without it we cannot explain how galaxies form and why the universe looks as it does today. Despite its necessity, we have no real idea what it actually is. There is plenty of room for new and exotic physics here, which for me makes it the most intriguing component of the dark sector. Is dark matter a single particle, or many? Is dark matter interactive, or is it boring (as assumed in the standard model)? The absence of any direct measurement suggests that dark matter interactions with other particles and with forces other than gravity must be very weak. Assuming this is the case, we are left with dark matter models that influence observations, distinctively, only through their mass. In the standard model, dark matter is assumed to be cold, meaning massive and slowly moving. This allows dark matter to form clumps on small scales, since the dark matter particles can efficiently coalesce. But this need not be the case: warm dark matter, formed of slightly lighter particles moving at greater speeds, will not be able to form very small clumps. By measuring the number of small clumps in the universe, we can begin to understand how heavy dark matter particles really are. There are of course more exotic models; take for example fuzzy dark matter, made of very light particles, so light that they begin to exhibit quantum mechanical effects on galactic scales – here they would produce distinctive quantum interference patterns. Unfortunately, at small scales it becomes rather difficult to disentangle the effects of dark matter from baryonic feedback, a term collectively describing the effects of supernovae, black holes, gas, etc. This is a major obstacle, and one which only sophisticated simulations with carefully calibrated galaxy formation models can help to distinguish.

All of this assumes dark matter is rather boring, and while the evidence suggests dark matter interacts very weakly with known particles and forces, we have little evidence to say whether dark matter interacts with other components of the dark sector – whether that be with itself (if dark matter is not a single particle) or with dark energy. The latter is an extremely interesting proposition, as it could resolve several discrepancies seen in cosmology today – such as the Hubble tension and the S8 tension. These describe discrepancies between early- and late-universe measurements of the expansion rate and of the 'clumpiness' of the universe. Furthermore, it would be able to explain a seemingly unrelated anomaly. For some time, we have found that cosmic voids produce larger-than-expected imprints on the very earliest light in the universe, the cosmic microwave background. So far this excess has been a puzzle, but interactions within the dark sector can explain the signal. Measuring this signal more precisely will enable us to make a unique test for interactions in the dark sector. This would have a profound impact on our understanding of fundamental physics and the dark sector, particularly as the latter would imply dark energy is not a cosmological constant (more on this later).

Dark Energy. Since the early 20th century, and the work of Edwin Hubble, we have known the universe is expanding. This empirical discovery is what led cosmologists away from the more prevalent static universe theories of the past to models where the universe evolves over time. If we run the clock backwards, we get to a stage where the universe is infinitely hot and dense – the big bang. By the early 90s, theories of the big bang were well established, and cosmologists began to try to measure how much the universe's expansion rate was slowing down. A universe full of matter, as was believed at the time, would resist expansion due to the force of gravity, but to their great surprise the universe's expansion rate was found to be accelerating. Although there had been signs of dark energy before, this measurement was the definitive proof. Dark energy is used generally to describe a component of the universe responsible for driving the accelerated expansion. In the standard model we assume dark energy is a cosmological constant, meaning it is represented by a single constant value over all of space-time. While a cosmological constant is rather easy to accommodate in general relativity (Einstein's theory of gravity), its small value is very difficult to reconcile with fundamental physics. This has led many to wonder whether dark energy is something else. The next generation of cosmological experiments will test for deviations from the cosmological constant, by exploring whether dark energy evolves over time or whether dark energy is really a symptom of an incorrect theory of gravity. For the moment, the cosmological constant appears to be standing strong, but we may look back on the discrepancies and anomalies present in cosmology as early signs that dark energy is really something else.

A comparison of haloes (clumps of matter) for different dark matter models. On the left is a halo in a model with cold dark matter and on the right is a model with warm dark matter. Cold dark matter produces more small clumps than warm dark matter. Figure from Lovell et al. (2012), MNRAS, 420, 3, 2318–2324, https://doi.org/10.1111/j.1365-2966.2011.20200.x

Neutrinos are a rather unique component of the dark sector. They are chargeless cousins to the sub-atomic electron particle, and until the late 90s (and the discovery of neutrino oscillations) were thought to be massless. Their elusive nature has made determining their exact mass difficult, with particle physics only able to provide lower limits. Cosmology has so far only been able to provide upper bounds, but these upper bounds are slowly approaching particle physics lower bounds, meaning a measurement should be made in the near future. Massive neutrinos influence the distribution of matter by preventing structures at small scales from forming, an effect caused by their high thermal velocities.
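
Two standard rules of thumb (quoted here for orientation; they assume the standard cosmological thermal history and linear perturbation theory) make this quantitative: the neutrino contribution to the cosmic energy budget is

    \Omega_\nu h^2 \simeq \frac{\sum m_\nu}{93.1\,\mathrm{eV}}

and the free streaming of such a component suppresses the small-scale linear matter power spectrum by roughly \Delta P / P \approx -8\, f_\nu, with f_\nu = \Omega_\nu / \Omega_m. It is this percent-level suppression that upcoming surveys hope to detect.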

The one concern with this detection, when/if it is made, is that it will be dependent on a cosmological model. But what if this model is wrong? Current tensions in cosmology suggest there may be issues with the assumptions of the standard model of cosmology, so what do we do if these persist? Are neutrino mass determinations destined to come with a caveat? It appears, at least for the immediate future, that this may be the case. This of course assumes we are confined to standard techniques. Another approach is to look for unique neutrino effects on other cosmological observables – such as their effects on the cosmic web, and in particular on filaments, which could prove more accurate probes of their mass and provide additional consistency checks for the assumed cosmological model.

So, what is lurking in the dark? Is dark matter interactive? Is dark energy changing over time or is it a symptom of an incorrect theory of gravity? Will we measure neutrino mass reliably? Over the next few years large galaxy surveys (such as DESI, Euclid and LSST) will shed further light on the dark sector. Will they reveal new physics – or will the simple assumptions of the resilient standard model of cosmology triumph again? Only time will tell.
