Measure problem (cosmology)
The measure problem in cosmology concerns how to compute the fractions of universes of different types within a multiverse. It typically arises in the context of eternal inflation. The problem is that different approaches to calculating these fractions yield different results, and it is not clear which approach (if any) is correct.
Measures can be evaluated by whether they predict observed physical constants, as well as whether they avoid counterintuitive implications, such as the youngness paradox or Boltzmann brains. While dozens of measures have been proposed, few physicists consider the problem to be solved.
The problem
Infinite multiverse theories are becoming increasingly popular, but because they involve infinitely many instances of different types of universes, it is unclear how to compute the fraction of each type. As Alan Guth and Sean M. Carroll have both informally illustrated, different procedures for computing the limit of such a fraction yield wildly different answers.
One way to illustrate how different regularization methods produce different answers is to calculate the limiting fraction of the positive integers that are even. Suppose the integers are ordered the usual way,

1, 2, 3, 4, 5, 6, 7, 8, …

At a cutoff of "the first five elements of the list", the fraction is 2/5; at a cutoff of "the first six elements" the fraction is 1/2; the limit of the fraction, as the cutoff grows, converges to 1/2. However, if the integers are ordered such that any two consecutive odd numbers are separated by two even numbers,

1, 2, 4, 3, 6, 8, 5, 10, 12, …

the limit of the fraction of integers that are even converges to 2/3 rather than 1/2.
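The order-dependence of this limit can be checked numerically. The following is a minimal sketch; the function and generator names are illustrative, not from any source:

```python
from itertools import count, islice

def even_fraction(seq, cutoff):
    """Fraction of the first `cutoff` elements of `seq` that are even."""
    prefix = islice(seq, cutoff)
    return sum(1 for n in prefix if n % 2 == 0) / cutoff

def natural_order():
    # 1, 2, 3, 4, 5, ... -- the usual ordering of the positive integers
    yield from count(1)

def two_evens_per_odd():
    # 1, 2, 4, 3, 6, 8, 5, 10, 12, ... -- each odd number is followed
    # by the next two unused even numbers
    odds = count(1, 2)
    evens = count(2, 2)
    while True:
        yield next(odds)
        yield next(evens)
        yield next(evens)

print(even_fraction(natural_order(), 1_000_000))    # → 0.5
print(even_fraction(two_evens_per_odd(), 999_999))  # ≈ 2/3
```

Both generators enumerate exactly the same set (every positive integer once), yet the cutoff-regularized fraction of evens converges to different limits, which is precisely the ambiguity the measure problem formalizes.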
A popular way to decide what ordering to use in regularization is to pick the simplest or most natural-seeming method of ordering. Everyone agrees that the first sequence, ordered by increasing size of the integers, seems more natural. Similarly, many physicists agree that the "proper-time cutoff measure" seems the simplest and most natural method of regularization. Unfortunately, the proper-time cutoff measure seems to produce incorrect results.
The measure problem is important in cosmology because in order to compare cosmological theories in an infinite multiverse, we need to know which types of universes they predict to be more common than others.
Proposed measures
Proper-time cutoff
The proper-time cutoff measure considers the probability of finding a given scalar field at a given proper time t. During inflation, the region around a point grows as e^{3HΔt} in a small proper-time interval Δt, where H is the inflationary Hubble rate. This measure has the advantage of being stationary, in the sense that probabilities remain the same over time in the limit of large t. However, it suffers from the youngness paradox, which makes it exponentially more probable that we would find ourselves in regions of high temperature, in conflict with what we observe; this is because regions that exited inflation later than ours spent more time than ours undergoing runaway inflationary exponential growth. For example, observers in a universe 13.7 billion years old are outnumbered by observers in a 13.0 billion year old universe by a factor of order e^{3HΔt} with Δt = 0.7 billion years, a fantastically large number. This lopsidedness continues until the most numerous observers resembling us are "Boltzmann babies" formed by improbable fluctuations in the hot, very early universe. For this reason, physicists reject the simple proper-time cutoff as a failed hypothesis.
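The origin of the youngness bias can be made explicit with a short derivation (a sketch; V denotes inflating volume and H the inflationary Hubble rate):

```latex
% Inflating volume grows exponentially in proper time t:
V(t) \propto e^{3Ht},
% so the volume (and hence the number of regions) exiting
% inflation at the later time t + \Delta t exceeds the volume
% exiting at time t by the ratio
\frac{V(t + \Delta t)}{V(t)} = e^{3H\Delta t}.
% Under a proper-time cutoff, younger regions therefore
% dominate the count by this exponential factor.
```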
Scale-factor cutoff
Time can be parameterized in ways other than proper time. One choice is to parameterize by the scale factor of space a, or more commonly by η ≡ ln a. Then a given region of space expands as e^{3Δη} over an interval Δη, independent of the Hubble rate H. This approach can be generalized to a family of measures in which a small region grows as e^{3H^{1−β}Δt_β} for some parameter β and time-slicing approach, where the time variable t_β satisfies dt_β = H^β dt; proper time corresponds to β = 0 and scale-factor time to β = 1. Any choice of β remains stationary for large times.
The scale-factor cutoff measure takes β = 1, which avoids the youngness paradox by not giving greater weight to regions that retain high energy density for long periods.
This measure is very sensitive to the choice of β, because any β < 1 yields the youngness paradox, while any β > 1 yields an "oldness paradox" in which most life is predicted to exist in cold, empty space as Boltzmann brains rather than as the evolved creatures with orderly experiences that we seem to be.
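A short derivation shows why the boundary case removes the dependence on the expansion rate (a sketch, using the parameterization dt_β = H^β dt assumed in this presentation):

```latex
% Volume growth in proper time: V \propto e^{3H\Delta t}.
% With dt_\beta = H^\beta\,dt, a proper-time interval \Delta t
% corresponds to \Delta t_\beta = H^\beta\,\Delta t, so
V \propto e^{3H\Delta t} = e^{3H^{1-\beta}\,\Delta t_\beta}.
% For \beta = 1 the exponent reduces to 3\,\Delta t_\beta = 3\,\Delta\eta,
% independent of H, so fast- and slow-expanding regions are
% weighted equally per unit scale-factor time; for \beta < 1
% high-H (still-inflating) regions dominate, and for \beta > 1
% low-H regions dominate.
```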
De Simone et al. consider the scale-factor cutoff measure to be a very promising solution to the measure problem. This measure has also been shown to produce good agreement with observational values of the cosmological constant.
Stationary
The stationary measure proceeds from the observation that different processes achieve stationarity at different times. Thus, rather than comparing processes at a given time since the beginning, the stationary measure compares them in terms of the time elapsed since each process individually became stationary. For instance, different regions of the universe can be compared based on the time since star formation began in each. Andrei Linde and coauthors have suggested that the stationary measure avoids both the youngness paradox and Boltzmann brains. However, the stationary measure predicts extreme values of the primordial density contrast and the gravitational constant, inconsistent with observations.
Causal diamond
Reheating marks the end of inflation. The causal diamond is the finite four-volume formed by intersecting the future light cone of an observer crossing the reheating hypersurface with the past light cone of the point where the observer has exited a given vacuum. Put another way, the causal diamond is the largest region of spacetime that a single observer can causally probe. The causal diamond measure multiplies the following quantities:
- the prior probability that a world line enters a given vacuum
- the probability that observers emerge in that vacuum, approximated as the difference in entropy between exiting and entering the diamond.
An attraction of this approach is that it avoids comparing infinities, which is the original source of the measure problem.
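Schematically, the measure assigns each vacuum a finite weight (a sketch; the symbols p_i and ΔS_i are notation assumed here, not taken from the source):

```latex
% Weight of vacuum i under the causal diamond measure:
%   p_i        = prior probability that the world line enters vacuum i,
%   \Delta S_i = entropy produced inside the diamond (a proxy for the
%                number of observers that emerge).
w_i \propto p_i \,\Delta S_i
% Probabilities follow by normalizing over the vacua, with no
% divergent volume factors to regulate:
P_i = \frac{p_i \,\Delta S_i}{\sum_j p_j \,\Delta S_j}
```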
Watcher
The watcher measure imagines the world line of an eternal "watcher" that passes through an infinite number of Big Crunch singularities.
Guth-Vanchurin paradox
In all "cutoff" schemes for an expanding infinite multiverse, a finite percentage of observers reach the cutoff during their lifetimes. Under most schemes, if a current observer is still alive five billion years from now, then the later stages of his life must somehow be "discounted" by a factor of around two compared to his current stages of life. For such an observer, Bayes' theorem may appear to break down over this timescale due to anthropic selection effects; this hypothetical breakdown is sometimes called the "Guth-Vanchurin paradox". One proposed resolution to the paradox is to posit a physical "end of time" that has a fifty percent chance of occurring in the next few billion years. Another, overlapping, proposal is to posit that an observer no longer physically exists when it passes outside a given causal patch, similar to models where a particle is destroyed or ceases to exist when it falls through a black hole's event horizon. Guth and Vanchurin have pushed back on such "end of time" proposals, arguing that although later stages of an observer's life contribute less to multiversal averages than earlier stages, this need not be interpreted as a physical "end of time". The literature proposes at least five possible resolutions:
- Accept a physical "end of time"
- Reject that probabilities in a finite universe are given by relative frequencies of events or histories
- Reject calculating probabilities via a geometric cutoff
- Reject standard probability theories, and instead posit that "relative probability" is, axiomatically, the limit of a certain geometric cutoff process
- Reject eternal inflation
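The origin of the discount factor can be illustrated with a toy model (a sketch under assumed, illustrative numbers: observers are born at an exponentially growing rate that doubles every five billion years, and a geometric cutoff is imposed at a fixed time):

```python
import math

def moment_weight(age_gyr, cutoff_gyr, growth_rate):
    """Relative number of observer-moments of a given age present at
    the cutoff. A moment of age `age_gyr` belongs to an observer born
    at time (cutoff_gyr - age_gyr), and births grow like
    exp(growth_rate * t), so later life stages are rarer in the count."""
    return math.exp(growth_rate * (cutoff_gyr - age_gyr))

# Illustrative assumptions (not from the source): the number of newly
# born observers doubles every 5 billion years, cutoff at t = 100 Gyr.
rate = math.log(2) / 5.0
cutoff = 100.0

now = moment_weight(0.0, cutoff, rate)    # weight of a "current" life stage
later = moment_weight(5.0, cutoff, rate)  # same observer, 5 Gyr later

print(later / now)  # → 0.5: the later stage is discounted by a factor of 2
```

In this toy model the discount depends only on the age difference, not on the cutoff time, which is why the paradoxical discounting of an observer's own future persists no matter how late the cutoff is taken.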