
I'm trying to model neutrinos in the Friedmann Equation. I've covered the case of the Benchmark Model where we have matter, radiation, curvature, and the cosmological constant, Lambda. I know my coding of the Friedmann equation works because I get the correct plots at different parameters, as you'll see attached below.

Including neutrinos, the Friedmann Equation becomes

$$ H(z)^2 = H_0^2\left[ (\Omega_c + \Omega_b)(1+z)^3 + \Omega_\gamma (1+z)^4 + \Omega_{\rm DE}(1+z)^{3(1+w)} + \Omega_k (1+z)^2 + \frac{\rho_{\nu,\rm tot}(z)}{\rho_{\rm crit,0}} \right]. $$

To obtain the energy density as a function of the scale factor (or redshift), we can start from the following expression for the energy density of a single neutrino species:

$$ \rho_\nu (T_\nu) = \frac{g}{(2\pi)^3} \int \frac{\sqrt{p^2 + m^2}}{e^{p/T_\nu} + 1}\, d^3 p. $$

The critical energy density is $4870~\mathrm{MeV\,m^{-3}}$. The energy density of a single species can be written as a function of the scale factor by writing the temperature as a function of the scale factor: the neutrino temperature scales as $T_\nu(a) = T_{\nu,0}/a$, where $T_{\nu,0}$ is the present-day value.

In equation (17), we can write $d^3p$ as $4\pi p^2\,dp$, and $g = 2$ for a neutrino species. Another thing to notice is that (17) is written in natural units where $c = h = k = 1$. I've tried to fix the units, and no matter what I do, the density parameter of the neutrino species is always very small (order of $10^{-9}$), where it should be between 0.0013 and 0.007 according to Ryden, *Introduction to Cosmology*, equation (7.54).

**I was really hoping someone could help me with the unit conversion from the natural units to the proper units.** Everything else I've figured out; I just can't seem to fix the units for equation (17).

Without neutrinos, I get the following plot consisting of various universe models, and they are correct, so the coding is not the problem. The problem is the conversion of (17) to proper SI units.

Once I get the neutrinos figured out, I want to see how they affect the universe models. Any help is greatly appreciated!

The energy density of a Fermi gas is
$$ \rho_{\nu} = \int \rho(p)\, dp = \int E(p)\,F(p)\,g(p)\, dp, $$
$$ \rho_{\nu} = \int \left(\sqrt{p^2 c^2 + m^2 c^4}\right)\left(\exp(E/k_BT) + 1\right)^{-1} \left(g_s\, 4\pi p^2/h^3\right) dp, $$
in units of energy per unit volume.

Before neutrino decoupling at $k_B T \sim 1$ MeV, the neutrinos are ultrarelativistic with $pc \gg m_{\nu}c^2$. After decoupling, the shape of the occupation index function $F(p)$ does not change, so $F(p) = \left[\exp(pc/k_BT_{\nu}) + 1\right]^{-1}$ in the subsequent evolution.

Thus
$$ \rho_{\nu} = \frac{g_s c}{h^3} \int \frac{\sqrt{p^2 + m_{\nu}^2c^2}}{\exp(pc/k_BT_{\nu}) + 1}\, 4\pi p^2\, dp. $$

I don't understand where your $(2\pi)^3$ comes from, other than to suggest that the unit system is actually $\hbar = 1$.
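Putting numbers into this integral makes the unit question concrete. A minimal Python sketch (function and constant names are mine, not from either post; it assumes Ryden's $\rho_{\rm crit}c^2 \approx 4870$ MeV m$^{-3}$ and $T_{\gamma,0} = 2.725$ K). Substituting $x = pc/k_BT_\nu$ puts all the dimensions into the prefactor $(k_BT_\nu)^4/(\hbar c)^3$, so the integral itself is dimensionless:

```python
import math

# Assumed physical constants (CODATA values)
kB = 8.617333e-5         # Boltzmann constant [eV/K]
hbarc = 1.97326963e-7    # hbar * c [eV * m]
Tnu0 = 2.725 * (4.0 / 11.0) ** (1.0 / 3.0)  # neutrino temperature today [K]
rho_crit = 4870e6        # critical energy density [eV/m^3], Ryden's value

def rho_nu(m_eV, T=Tnu0, xmax=50.0, n=20000):
    """Energy density of one neutrino species (g = 2) in eV/m^3.

    Evaluates rho = g/(2 pi^2) * (kT)^4/(hbar c)^3 *
                    int_0^inf x^2 sqrt(x^2 + (m/kT)^2) / (e^x + 1) dx
    with x = p c / (k_B T), using a simple Riemann sum.
    """
    kT = kB * T               # eV
    a = m_eV / kT             # dimensionless mass parameter
    dx = xmax / n
    s = 0.0
    for i in range(1, n + 1):  # skip x = 0, where the integrand vanishes
        x = i * dx
        s += x * x * math.sqrt(x * x + a * a) / (math.exp(x) + 1.0)
    g = 2.0
    return g / (2.0 * math.pi ** 2) * kT ** 4 / hbarc ** 3 * (s * dx)

for m in (0.0, 0.1, 0.3):
    print(f"m = {m:4.2f} eV : Omega_nu = {rho_nu(m) / rho_crit:.2e}")
```

With this bookkeeping, a massless species gives $\Omega_\nu \approx 10^{-5}$, while $m \approx 0.1$ eV gives $\Omega_\nu \approx 2\times 10^{-3}$, inside Ryden's quoted range; the answer's $g_s c/h^3$ prefactor is equivalent to the $g/(2\pi^2\hbar^3 c^3)$ used here, since $h = 2\pi\hbar$.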

## Matter term in Friedmann’s equation

It is clear that for $a = \epsilon \ll 1$:

$Q_0^2 \ll 1$, and $F(a) \approx a^{-3}$.

Also, for $1 - a = \epsilon \ll 1$:

$Q_0^2 \approx 1$, and $F(a) \approx a^{-4}$.

The following is the derivation of the modified F(a).


The Friedmann equations start with the simplifying assumption that the universe is spatially homogeneous and isotropic, i.e. the cosmological principle; empirically, this is justified on scales larger than about 100 Mpc. The cosmological principle implies that the metric of the universe must be of the form

$$ ds^2 = -c^2\,dt^2 + a(t)^2\,d\Sigma^2, $$

where $d\Sigma^2$ is a three-dimensional metric that must be one of **(a)** flat space, **(b)** a sphere of constant positive curvature or **(c)** a hyperbolic space with constant negative curvature. The curvature parameter $k$ discussed below takes the value 0, 1, −1 in these three cases respectively. It is this fact that allows us to sensibly speak of a "scale factor" $a(t)$.

Einstein's equations now relate the evolution of this scale factor to the pressure and energy of the matter in the universe. From the FLRW metric we compute the Christoffel symbols and then the Ricci tensor. With the stress-energy tensor for a perfect fluid, we substitute them into Einstein's field equations, and the resulting equations are described below.

## Astroparticles and Primordial Cosmology_S3

1. Sources and transport of particles in the Universe

• Sources and their vicinity: production and acceleration mechanisms.

• Examples of sources: Supernova Remnants, Binary systems, Active Galactic Nuclei, Gamma-ray bursters.

• Transport: General aspects, case of cosmic-ray interaction with the CMB: "GZK cut-off", case of propagation of gamma-rays.

2. Cosmic rays at Earth

• Primary cosmic rays: Composition and Flux. Experimental aspects: satellites, balloons.

• Secondary cosmic rays: atmospheric showers, secondary particles at sea level and underground. Experimental aspects: detection (examples: KASCADE, AUGER).

3. Gamma-ray astronomy

• Methods: satellites (example: FERMI); ground-based detectors: Imaging Atmospheric Cherenkov Telescopes (example: H.E.S.S.), arrays of detectors (example: HAWC).

• Multi-wavelength studies: combining observations from radio to gamma rays.

4. Other messengers

• Search for astrophysical neutrinos: neutrino telescopes: ICECUBE, ANTARES,

• Gravitational waves: LIGO, VIRGO,

• Multi-messenger aspects.

5. Dark Matter (DM)

• Phenomenological context: why DM? What is DM?

• Detection techniques and current limits: direct and indirect detection.

#### Primordial Cosmology (20 h)

1. Thermodynamics of primordial universe:

• Friedmann models (recap).

• The early universe: equilibrium thermodynamics, entropy, phase transitions and thermal history.

• Big-bang nucleosynthesis: Numerical modelling and comparison to recent observations,

• Thermodynamics in expanding universe: Boltzmann equation, freeze-out and origin of species (CDM, HDM, WIMPS), out-of-equilibrium decay, recombination. Neutrino cosmology. Baryogenesis.

Applications (learning-by-doing): lithium abundance, abundance of WIMPZILLAs and UHECRs, Lee-Weinberg bound.

2. Quantum fluctuations during inflation:

• Klein-Gordon equation in expanding universe, linear perturbations and quantization of massless and massive inflaton, gauge invariance.

• Metric fluctuations, gauge invariance, quantum-to-classical transition, curvature and matter perturbations, gravitational waves, scalar and tensor power spectra, consistency relations.

• Primordial non-gaussianities (fNL, gNL). Reheating, pre-heating.

Applications (learning-by-doing): numerical solution of the KG equation for some inflationary model (power-law, $\lambda\phi^4$, hybrid, natural), study of the dynamical system.

3. Cosmic Microwave Background:

• Recombination and decoupling.

• Monopole, dipole and residual fluctuations. Spherical statistics.

• Temperature fluctuations: kinetic description, Sachs-Wolfe plateau, acoustic peaks, secondary anisotropies. Sources of noise and map-making: dust absorption, synchrotron radiation and Bremsstrahlung. Polarization: E- and B-modes, gravitational waves.

Applications (learning-by-doing): use of Boltzmann codes (CAMB, CLASS) to simulate CMB spectra and maps.

4. From post-recombination Universe to large-scale structure:

• From CMB to dark ages, ionization sources of H and He. Lyman systems and LyA-forest, IGM fluctuations, Gunn-Peterson effects. 21-cm cosmology.

• Density and velocity fields: Jeans modelling, Zel'dovich approximation.

• Statistics of fluctuations on large scales: counts, correlation functions, power spectrum.

• Spherical collapse, mass function, bias halo model.

Applications (learning-by-doing): numerical solution of Jeans equation in neutrino cosmology, estimation of massive clusters' counts in cosmologies with pNG (fNL).

5. Statistical analysis of cosmological models:

• Combination of probes to extract cosmological parameters. Degeneracies.

• Frequentist and Bayesian approaches: grid method, gradient method, MCMC.

• Forecasts: Fisher analysis and Monte Carlo simulation. Modelling of systematics.

Applications (learning-by-doing): fitting the Hubble diagram from supernovae (Union 2) and the CMB TT power spectrum (WMAP or Planck), Fisher matrix of cluster counts for fNL.

## Neutrino Modelling in the Friedmann Equation

**3.2 Dynamics of the Expansion**

EXPANSION AND GEOMETRY The equation of motion for the scale factor can be obtained in a quasi-Newtonian fashion. Consider a sphere about some arbitrary point, and let the radius be $R(t)r$, where $r$ is arbitrary. The motion of a point at the edge of the sphere will, in Newtonian gravity, be influenced only by the interior mass. We can therefore write down immediately a differential equation (**Friedmann's equation**) that expresses conservation of energy: $(\dot R r)^2/2 - GM/(Rr) = \text{constant}$. In fact, to get this far, we do require general relativity: the gravitation from mass shells at large distances is not Newtonian, and so we cannot employ the usual argument about their effect being zero. In fact, the result that the gravitational field inside a uniform shell is zero does hold in general relativity, and is known as **Birkhoff's theorem** (see chapter 2). General relativity becomes even more vital in giving us the constant of integration in Friedmann's equation [problem 3.1]:

$$ \dot R^2 - \frac{8\pi G}{3}\rho R^2 = -kc^2. $$

Note that this equation covers all contributions to $\rho$, i.e. those from matter, radiation and vacuum; it is independent of the equation of state. A common shorthand for relativistic cosmological models, which are described by the Robertson-Walker metric and which obey the Friedmann equation, is to speak of **FRW models**.

The Friedmann equation shows that a universe that is **spatially closed** (with *k* = +1) has negative total ``energy'': the expansion will eventually be halted by gravity, and the universe will recollapse. Conversely, an unbound model is **spatially open** (*k* = -1) and will expand forever. This is marvelously simple: the dynamics of the entire universe are the same as those of a cannonball fired vertically against the Earth's gravity. Just as the Earth's gravity defines an escape velocity for projectiles, so a universe that expands sufficiently fast will continue to expand forever. Conversely, for a given rate of expansion there is a **critical density** that will bring the expansion asymptotically to a halt:
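The critical density referred to here is the standard result obtained by setting $k = 0$ in Friedmann's equation:

$$ \rho_{\rm crit} = \frac{3H^2}{8\pi G} \simeq 1.88\times10^{-26}\,h^2\ {\rm kg\,m^{-3}}, $$

which, for $H_0 \simeq 68\ {\rm km\,s^{-1}\,Mpc^{-1}}$, corresponds to the energy density $\rho_{\rm crit}c^2 \approx 4870\ {\rm MeV\,m^{-3}}$ quoted in the question above.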

This connection between the rate of expansion of the universe and its global geometry is an astonishing and deep result. The proof of the equation quoted above is ``only'' a question of inserting the Robertson-Walker metric into the field equations [problem 3.1], but the question inevitably arises of whether there is a quasi-Newtonian way of seeing that the result must be true; the answer is ``almost''. First note that any open model will evolve towards undecelerated expansion provided its equation of state is such that $\rho R^2$ is a declining function of $R$: the potential energy becomes negligible by comparison with the total and $\dot R$ tends to a constant. In this mass-free limit, there can be no spatial curvature and the open RW metric must be just a coordinate transformation of Minkowski spacetime. We will exhibit this transformation later in this chapter and show that it implies $R = ct$ for this model, proving the $k = -1$ case.

An alternative line of attack is to rewrite the Friedmann equation in terms of the Hubble parameter:

$$ H^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{R^2}. $$

Now consider holding the local observables $H$ and $\rho$ fixed but increasing $R$ without limit. Clearly, in the RW metric this corresponds to going to the $k = 0$ form: the scale of spatial curvature goes to infinity and the comoving separation for any given proper separation goes to zero, so that the comoving geometry becomes indistinguishable from the Euclidean form. This case also has potential and kinetic energy much greater than total energy, so that the rhs of the Friedmann equation is effectively zero. This establishes the $k = 0$ case, leaving the closed universe as the only stubborn holdout against Newtonian arguments.

It is sometimes convenient to work with the time derivative of the Friedmann equation, for the same reason that acceleration arguments in dynamics are sometimes more transparent than energy ones. Differentiating with respect to time requires a knowledge of $\dot\rho$, but this can be eliminated by means of conservation of energy: $d[\rho c^2 R^3] = -p\,d[R^3]$. We then obtain

$$ \ddot R = -\frac{4\pi G R}{3}\left(\rho + \frac{3p}{c^2}\right). $$

Both this equation and the Friedmann equation in fact arise as independent equations from different components of Einstein's equations for the RW metric [problem 3.1].

DENSITY PARAMETERS ETC. The ``flat'' universe with $k = 0$ arises for a particular **critical density**. We are therefore led to define a **density parameter** as the ratio of density to critical density:

$$ \Omega \equiv \frac{\rho}{\rho_{\rm crit}} = \frac{8\pi G\rho}{3H^2}. $$

Since $\rho$ and $H$ change with time, this defines an epoch-dependent density parameter. The current value of the parameter should strictly be denoted by $\Omega_0$. Because this is such a common symbol, we shall keep the formulae uncluttered by normally dropping the subscript; the density parameter at other epochs will be denoted by $\Omega(z)$. The critical density therefore just depends on the rate at which the universe is expanding. If we now also define a dimensionless (current) Hubble parameter as

$$ h \equiv \frac{H_0}{100\ {\rm km\,s^{-1}\,Mpc^{-1}}}, $$

then the current density of the universe may be expressed as
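The relation intended here is presumably the standard numerical form:

$$ \rho_0 = \Omega\,\rho_{\rm crit} \simeq 1.88\times10^{-26}\,\Omega h^2\ {\rm kg\,m^{-3}}. $$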

A powerful approximate model for the energy content of the universe is to divide it into pressureless matter ($\rho \propto R^{-3}$), radiation ($\rho \propto R^{-4}$) and vacuum energy ($\rho$ constant). The first two relations just say that the number density of particles is diluted by the expansion, with photons also having their energy reduced by the redshift; the third relation applies for Einstein's **cosmological constant**. In terms of observables, this means that the density is written as

$$ \frac{8\pi G}{3}\rho(a) = H_0^2\left(\Omega_v + \Omega_m a^{-3} + \Omega_r a^{-4}\right) $$

(introducing the normalized scale factor *a* = *R / R*_{0}). For some purposes, this separation is unnecessary, since the Friedmann equation treats all contributions to the density parameter equally:

Thus, a flat $k = 0$ universe requires $\sum_i \Omega_i = 1$ at all times, whatever the form of the contributions to the density, even if the equation of state cannot be decomposed in this simple way.

In terms of the **deceleration parameter**,

$$ q \equiv -\frac{\ddot R R}{\dot R^2}, $$

the form of the Friedmann equation says that

$$ q = \frac{\Omega_m}{2} + \Omega_r - \Omega_v, $$

which implies $q = 3\Omega_m/2 + 2\Omega_r - 1$ for a flat universe. One of the classical problems of cosmology is to test this relation experimentally.
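As a consistency check, the flat-universe value of $q$ follows from the acceleration equation $\ddot R = -\frac{4\pi G}{3}R\left(\rho + 3p/c^2\right)$ together with $p_m = 0$, $p_r = \rho_r c^2/3$ and $p_v = -\rho_v c^2$:

$$ q \equiv -\frac{\ddot R R}{\dot R^2} = \frac{4\pi G}{3H^2}\sum_i\left(\rho_i + \frac{3p_i}{c^2}\right) = \frac{\Omega_m}{2} + \Omega_r - \Omega_v, $$

and eliminating $\Omega_v = 1 - \Omega_m - \Omega_r$ for a flat universe gives $q = 3\Omega_m/2 + 2\Omega_r - 1$.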

Lastly, it is often necessary to know the present value of the scale factor, which may be read directly from the Friedmann equation:

$$ R_0 = \frac{c}{H_0}\left[\frac{k}{\Omega - 1}\right]^{1/2}. $$

The Hubble constant thus sets the **curvature length**, which becomes infinitely large as $\Omega$ approaches unity from either direction. Only in the limit of zero density does this length become equal to the other common measure of the size of the universe: the **Hubble length**, $c/H_0$.

SOLUTIONS TO THE FRIEDMANN EQUATION The Friedmann equation is so named because Friedmann was the first to appreciate, in 1922, that Einstein's equations admitted cosmological solutions containing matter only (although it was Lemaître who in 1927 both obtained the solution and appreciated that it led to a linear distance-redshift relation). The term **Friedmann model** is therefore often used to indicate a matter-only cosmology, even though his equation includes contributions from all equations of state.

The Friedmann equation may be solved most simply in ``parametric'' form, by recasting it in terms of the conformal time $d\eta = c\,dt/R$ (denoting derivatives with respect to $\eta$ by primes):

Because $H_0^2 R_0^2 = kc^2/(\Omega - 1)$, the Friedmann equation becomes

which is straightforward to integrate provided $\Omega_v = 0$. Solving the Friedmann equation for $R(t)$ in this way is important for determining global quantities such as the present age of the universe, and explicit solutions for particular cases are considered below. However, from the point of view of observations, and in particular the distance-redshift relation, it is not necessary to proceed by the direct route of determining $R(t)$.

To the observer, the evolution of the scale factor is most directly characterised by the change with redshift of the Hubble parameter and the density parameter; the evolution of $H(z)$ and $\Omega(z)$ is given immediately by the Friedmann equation in the form $H^2 = 8\pi G\rho/3 - kc^2/R^2$. Inserting the above dependence of $\rho$ on $a$ gives

$$ H^2(a) = H_0^2\left[\Omega_v + \Omega_m a^{-3} + \Omega_r a^{-4} - (\Omega - 1)a^{-2}\right]. $$

This is a crucial equation, which can be used to obtain the relation between redshift and comoving distance. The radial equation of motion for a photon is $R\,dr = c\,dt = c\,dR/\dot R = c\,dR/(RH)$. With $R = R_0/(1+z)$, this gives

$$ R_0\,dr = \frac{c}{H(z)}\,dz. $$

This relation is arguably the single most important equation in cosmology, since it shows how to relate comoving distance to the observables of redshift, Hubble constant and density parameters. The comoving distance determines the apparent brightness of distant objects, and the comoving volume element determines the numbers of objects that are observed. These aspect of observational cosmology are discussed in more detail below in section 3.4.

Lastly, using the expression for $H(z)$ with $\Omega(a) - 1 = kc^2/(H^2R^2)$ gives the redshift dependence of the total density parameter:

$$ \Omega(a) - 1 = \frac{\Omega - 1}{1 - \Omega + \Omega_v a^2 + \Omega_m a^{-1} + \Omega_r a^{-2}}. $$

This last equation is very important. It tells us that, at high redshift, all model universes apart from those with only vacuum energy will tend to look like the $\Omega = 1$ model. This is not surprising given the form of the Friedmann equation: provided $\rho R^2 \to \infty$ as $R \to 0$, the $-kc^2$ curvature term will become negligible at early times. If $\Omega \neq 1$, then in the distant past $\Omega(z)$ must have differed from unity by a tiny amount: the density and rate of expansion needed to be finely balanced for the universe to expand to the present. This tuning of the initial conditions is called the **flatness problem** and is one of the motivations for the applications of quantum theory to the early universe that are discussed in later chapters.
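This drive towards flatness at high redshift is easy to verify numerically. A small sketch (function name and defaults are mine) evaluating $\Omega(a) - 1 = (\Omega_0 - 1)/(1 - \Omega_0 + \Omega_v a^2 + \Omega_m a^{-1} + \Omega_r a^{-2})$:

```python
def omega_total(z, om_m, om_r=0.0, om_v=0.0):
    """Total density parameter at redshift z, from
    Omega(a) - 1 = (Omega_0 - 1) / (1 - Omega_0 + om_v*a^2 + om_m/a + om_r/a^2)."""
    a = 1.0 / (1.0 + z)
    om0 = om_m + om_r + om_v
    denom = 1.0 - om0 + om_v * a * a + om_m / a + om_r / (a * a)
    return 1.0 + (om0 - 1.0) / denom

# An open matter-only model is driven towards Omega = 1 in the past:
for z in (0, 10, 1000):
    print(z, omega_total(z, om_m=0.3))
```

For a matter-only model with $\Omega_0 = 0.3$ today, $\Omega(z=1000)$ already differs from unity by less than a percent, which is the flatness problem in miniature.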

MATTER-DOMINATED UNIVERSE From the observed temperature of the microwave background (2.73 K) and the assumption of three species of neutrino at a slightly lower temperature (see later chapters), we deduce that the total relativistic density parameter is $\Omega_r h^2 \simeq 4.2\times10^{-5}$, so at present it should be a good approximation to ignore radiation. However, the different redshift dependences of matter and radiation densities mean that this assumption fails at early times: $\Omega_m/\Omega_r \propto (1+z)^{-1}$. One of the critical epochs in cosmology is therefore the point at which these contributions were equal: the redshift of **matter-radiation equality**

$$ 1 + z_{\rm eq} = \frac{\Omega_m}{\Omega_r} \simeq 2.4\times10^4\,\Omega_m h^2. $$

At redshifts higher than this, the universal dynamics were dominated by the relativistic-particle content. By a coincidence discussed below, this epoch is close to another important event in cosmological history: **recombination**. Once the temperature falls below about $10^4$ K, ionized material can form neutral hydrogen. Observational astronomy is only possible from this point on, since Thomson scattering from electrons in ionized material prevents photon propagation. In practice, this limits the maximum redshift of observational interest to about 1000 (as discussed in detail in chapter 9); unless $\Omega$ is very low or vacuum energy is important, a matter-dominated model is therefore a good approximation to reality.

By conserving matter, we can introduce a characteristic mass $M_*$, and from this a characteristic radius $R_*$:

where we have used the expression for $R_0$ in the first step. When only matter is present, the conformal-time version of the Friedmann equation is simple to integrate for $R(\eta)$, and integration of $dt = R\,d\eta/c$ gives $t(\eta)$:

This **cycloid solution** is a special case of the general solution for the evolution of a spherical mass distribution: $R = A\,[1 - C_k(\eta)]$, $t = B\,[\eta - S_k(\eta)]$, where $A^3 = GMB^2$ and the mass $M$ need not be the mass of the universe. In the general case, the variable $\eta$ is known as the **development angle**; it is only equal to the conformal time in the special case of the solution to the Friedmann equation. We will later use this solution to study the evolution of density inhomogeneities. The evolution of $R(t)$ in this solution is plotted in figure 3.4. A particular point to note is that the behaviour at early times is always the same: potential and kinetic energies greatly exceed total energy and we always have the $k = 0$ form $R \propto t^{2/3}$.

The parametric solution cannot be rearranged to give $R(t)$, but it is clearly possible to solve for $t(R)$. This is most simply expressed in terms of the density parameter and the age of the universe at a given stage of its development:

When we insert the redshift dependences of $H(z)$ and $\Omega(z)$, this gives us the time-redshift relation. An alternative route to this result would have been to use the general differential expression for comoving distance $dr/dz$; since $c\,dt = [R_0/(1+z)]\,dr$, this gives the age of the universe as an integral over $z$.

An accurate and very useful approximation to the above exact result is

which interpolates between the exact ages of $H^{-1}$ for an empty universe and $(2/3)H^{-1}$ for a critical-density $\Omega = 1$ model.

MATTER PLUS RADIATION BACKGROUND The parametric solution can be extended in an elegant way for a universe containing a mixture of matter and radiation. Suppose we write the mass inside $R$ as

reflecting the $R^{-3}$ and $R^{-4}$ dependences of matter and radiation densities respectively. Now define dimensionless masses of the form $y \equiv GM/(c^2 R_0)$, which reduce to $y_{m,r} = k\,\Omega_{m,r}/[2(\Omega - 1)]$. The parametric solutions then become

MODELS WITH VACUUM ENERGY The solution of the Friedmann equation becomes more complicated if we allow a significant contribution from vacuum energy, i.e. a non-zero cosmological constant. Detailed discussions of the problem are given by Felten & Isaacman (1986) and Carroll, Press & Turner (1992); the most important features are outlined below.

The Friedmann equation itself is independent of the equation of state, and just says $H^2R^2 = kc^2/(\Omega - 1)$, whatever the form of the contributions to $\Omega$. In terms of the cosmological constant itself, we have

STATIC UNIVERSE The reason that the cosmological constant was first introduced by Einstein was not simply because there was no general reason to expect empty space to be of zero density, but because it allows a non-expanding cosmology to be constructed. This is perhaps not so obvious from some forms of the Friedmann equation, since now $H = 0$ and $\Omega = \infty$; if we cast the equation in its original form without defining these parameters, then zero expansion implies

Since $\Lambda$ can have either sign, this appears not to constrain $k$. However, we also want to have zero acceleration for this model, and so need the time derivative of the Friedmann equation: $\ddot R = -4\pi GR(\rho + 3p/c^2)/3$. A further condition for a static model is therefore that $\rho + 3p/c^2 = 0$.

Since $p = -\rho c^2$ for vacuum energy, and this is the only source of pressure if we ignore radiation, this tells us that $\rho = 3\rho_{\rm vac}$ and hence that the mass density is twice the vacuum density. The total density is hence positive and $k = +1$: we have a closed model.

Notice that what this says is that a positive vacuum energy acts in a repulsive way, balancing the attraction of normal matter. This is related to the idea of $\rho + 3p/c^2$ as the effective source density for gravity. This insight alone should make one appreciate that the static model cannot be stable: if we perturb the scale factor by a small positive amount, the vacuum repulsion is unchanged whereas the ``normal'' gravitational attraction is reduced, so that the model will tend to expand further (or contract, if the initial perturbation was negative). Thinking along these lines, a tidy history of science would have required Einstein to predict the expanding universe in advance of its observation. However, it is perhaps not so surprising that this prediction was never clearly made, despite the fact that expanding models were studied by Lemaître and by Friedmann in the years prior to Hubble's work. In those days, the idea of a quasi-Newtonian approach to cosmology was not developed; the common difficulty of obtaining a clear physical interpretation of solutions to Einstein's equations obscured the meaning of the expanding universe even for its creators.

DE SITTER SPACE Before going on to the general case, it is worth looking at the endpoint of an outwards perturbation of Einstein's static model, first studied by de Sitter and named after him. This universe is completely dominated by vacuum energy, and is clearly the limit of the unstable expansion, since the density of matter redshifts to zero while the vacuum energy remains constant. Consider again the Friedmann equation in its general form $\dot R^2 - 8\pi G\rho R^2/3 = -kc^2$: since the density is constant and $R$ will increase without limit, the two terms on the lhs must eventually become almost exactly equal and the curvature term on the rhs will be negligible. Thus, even if $k \neq 0$, the universe will have a density that differs only infinitesimally from the critical, so that we can solve the equation by setting $k = 0$, in which case

$$ R \propto \exp(Ht), \qquad H = \left(\frac{8\pi G\rho_{\rm vac}}{3}\right)^{1/2}. $$

An interesting interpretation of this behaviour was promoted in the early days of cosmology by Eddington: the cosmological constant is what *caused* the expansion. In models without $\Lambda$, the expansion is merely an initial condition: anyone who asks why the universe expands at a given epoch is given the unsatisfactory reply that it does so because it was expanding at some earlier time, and this chain of reasoning comes up against a barrier at $t = 0$. It would be more satisfying to have some mechanism that set the expansion into motion, and this is what is provided by vacuum repulsion. This tendency of models with positive $\Lambda$ to end up undergoing an exponential phase of expansion (and moreover one with $\Omega = 1$) is exactly what is used in inflationary cosmology to generate the initial conditions for the big bang.

THE STEADY-STATE MODEL The behaviour of de Sitter space is in some ways reminiscent of the **steady-state universe**, which was popular in the 1960s. This theory drew its motivation from the philosophical problems of big-bang models, which begin in a singularity at $t = 0$, and for which earlier times have no meaning. Instead, Hoyle, Bondi and Gold suggested the **perfect cosmological principle**, in which the universe is homogeneous not only in space, but also in time: apart from local fluctuations, the universe appears the same to all observers at all times. This tells us that the Hubble constant really is constant, and so the model necessarily has exponential expansion, $R \propto \exp(Ht)$, exactly as for de Sitter space. Furthermore, it is necessary that $k = 0$, as may be seen by considering the transverse part of the Robertson-Walker metric: $d\sigma^2 = [R(t)\,S_k(r)\,d\psi]^2$. This has the convention that $r$ is a dimensionless comoving coordinate; if we divide by $R_0$ and change to physical radius $r'$, the metric becomes $d\sigma^2 = [a(t)\,R_0\,S_k(r'/R_0)\,d\psi]^2$. The current scale factor $R_0$ now plays the role of a curvature length, determining the distance over which the model is spatially Euclidean. However, any such curvature radius must be constant in the steady-state model, so the only possibility is that it is infinite and that $k = 0$. We thus see that de Sitter space is a steady-state universe: it contains a constant vacuum energy density, and has an infinite age, lacking any big-bang singularity. In this sense, some aspects of the steady-state model have been resurrected in inflationary cosmology. However, de Sitter space is a rather uninteresting model because it contains no matter. Introducing matter into a steady-state universe violates energy conservation, since matter does not have the $p = -\rho c^2$ equation of state that allows the density to remain constant. This is the most radical aspect of steady-state models: they require **continuous creation** of matter. The energy to accomplish this has to come from somewhere, and Einstein's equations are modified by adding some ``creation'' or ``$C$-field'' term to the energy-momentum tensor:

The effect of this extra term must be to cancel the matter density and pressure, leaving just the overall effective form of the vacuum tensor, which is required to produce de Sitter space and the exponential expansion. This *ad hoc* field and the lack of any physical motivation for it beyond the cosmological problem it was designed to solve were always the most unsatisfactory features of the steady-state model, and may account for the strong reactions generated by the theory. Certainly, the debate between steady-state supporters and protagonists of the big bang produced some memorable displays of vitriol in the 1960s. At the start of the decade, the point at issue was whether the proper density of active galaxies was constant as predicted by the steady-state model. Since the radio-source count data were in a somewhat primitive state at that time, the debate remained inconclusive until the detection of the microwave background in 1965. For many, this spelled the end of the steady-state universe, but doubts lingered on about whether the radiation might originate in interstellar dust. These were perhaps only finally laid to rest in 1990, with the demonstration that the radiation was almost exactly Planckian in form (see chapter 9).

BOUNCING AND LOITERING MODELS Returning to the general case of models with a mixture of energy in the vacuum and normal components, we have to distinguish three cases. For models that start from a big bang (in which case radiation dominates completely at the earliest times), the universe will either recollapse or expand forever. The latter outcome becomes more likely for low densities of matter and radiation, but high vacuum density. It is however also possible to have models in which there is no big bang: the universe was collapsing in the distant past, but was slowed by the repulsion of a positive $\Lambda$ term and underwent a ``bounce'' to reach its present state of expansion. Working out the conditions for these different events is a matter of integrating the Friedmann equation. With $\Lambda$ included, this can in general only be done numerically. However, we can find the conditions for the different behaviours described above analytically, at least if we simplify things by ignoring radiation. The equation in the form of the time-dependent Hubble parameter looks like

$$ H^2(a) = H_0^2\left[\Omega_v + \Omega_m a^{-3} - (\Omega - 1)a^{-2}\right] $$

and we are interested in the conditions under which the lhs vanishes, defining a turning point in the expansion. Setting the rhs to zero yields a cubic equation, and it is possible to give the conditions under which this has a solution (see Felten & Isaacman 1986), which are as follows.

If $\Lambda$ is positive and $\Omega_m \leq 1$, recollapse is only avoided if $\Omega_v$ exceeds a critical value (equation 3.55). If $\Lambda$ is large enough, the stationary point of the expansion is at a value of $a$ given by equation (3.56), where the function $f$ appearing there is similar in spirit to $C_k$ (a cosh or cos, depending on $\Omega_m$; equation 3.57). A reasonable lower limit for $\Omega_m$ of 0.1 then rules out a bounce once objects are seen at $z > 2$.

The main results of this section are summed up in figure 3.5. Since the radiation density is very small today, the main task of relativistic cosmology is to work out where on the $\Omega_{\rm matter}$-$\Omega_{\rm vacuum}$ plane the real universe lies. The existence of high-redshift objects rules out the bounce models, so that the idea of a hot big bang cannot be evaded. As subsequent chapters will show, the data favour a position somewhere near the point (1,0), which is the worst possible situation: it means that the issues of recollapse and closure are very difficult to resolve.

FLAT UNIVERSE The most important model in cosmological research is that with $k = 0$, i.e. $\Omega_{\rm total} = 1$; when dominated by matter, this is often termed the **Einstein-de Sitter** model. Paradoxically, this importance arises because it is an unstable state: as we have seen earlier, the universe will evolve away from $\Omega = 1$, given a slight perturbation. For the universe to have expanded by so many **e-foldings** (factors of $e$ expansion) and yet still have $\Omega \approx 1$ implies that it was very close to being spatially flat at early times. Many workers have conjectured that it would be contrived if this flatness was other than perfect, a prejudice raised to the status of a prediction in most models of inflation.

Although it is a mathematically distinct case, in practice the properties of a flat model can usually be obtained by taking the limit Ω -> 1 for either open or closed universes with *k* = ± 1. Nevertheless, it is usually easier to start again from the *k* = 0 Friedmann equation, $\dot{R}^2 = 8\pi G \rho R^2 / (3c^2)$. Since both sides are quadratic in *R*, this makes it clear that the value of *R*_{0} is arbitrary, unlike models with Ω ≠ 1: the comoving geometry is Euclidean, and there is no natural curvature scale.

It now makes more sense to work throughout in terms of the normalized scale factor *a (t)*, so that the Friedmann equation for a matter-radiation mix is

$$ \dot{a}^2 = H_0^2 \left( \Omega_m a^{-1} + \Omega_r a^{-2} \right), $$

which may be integrated to give the time as a function of scale factor:

$$ H_0\, t(a) = \frac{2}{3\,\Omega_m^2} \left[ \sqrt{\Omega_r + \Omega_m a}\, \left( \Omega_m a - 2\Omega_r \right) + 2\,\Omega_r^{3/2} \right]; $$

this goes to (2/3) *a*^{3/2} for a matter-only model, and to *a*^{2}/2 for radiation only.
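
These limits are easy to check numerically. A minimal sketch that integrates the flat matter + radiation relation $H_0 t = \int_0^a x\,dx/\sqrt{\Omega_m x + \Omega_r}$, which follows directly from $\dot a^2 = H_0^2(\Omega_m/a + \Omega_r/a^2)$:

```python
import numpy as np
from scipy.integrate import quad

def h0_t(a, om, orad):
    """Dimensionless age H0*t at scale factor a for a flat matter+radiation
    mix:  H0 t = integral_0^a x dx / sqrt(om*x + orad)."""
    val, _ = quad(lambda x: x / np.sqrt(om * x + orad), 0.0, a)
    return val

a = 0.5
# Matter only: should match (2/3) a^{3/2}.
print(h0_t(a, 1.0, 0.0), (2 / 3) * a**1.5)
# Radiation only: should match a^2 / 2.
print(h0_t(a, 0.0, 1.0), a**2 / 2)
```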

One further way of presenting the model's dependence on time is via the density. Following the above, it is easy to show that

$$ \rho = \frac{1}{6\pi G t^2} \ \ \text{(matter domination)}, \qquad \rho = \frac{3}{32\pi G t^2} \ \ \text{(radiation domination)}. $$

The whole universe thus always obeys the rule-of-thumb for the collapse from rest of a gravitating body: the collapse time is of order 1 / sqrt(*G*ρ).
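
For the Einstein-de Sitter case the rule of thumb is exact: combining ρ = 3H²/(8πG) with H = 2/(3t) gives t = 1/√(6πGρ). A quick numerical check (H₀ = 70 km s⁻¹ Mpc⁻¹ is an illustrative value, not one taken from the text):

```python
import numpy as np

G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2
H0 = 70e3 / 3.0857e22               # 70 km/s/Mpc expressed in s^-1

rho = 3 * H0**2 / (8 * np.pi * G)   # critical (= total, for k = 0) density
t_collapse = 1 / np.sqrt(6 * np.pi * G * rho)   # 1/sqrt(6 pi G rho)
t_eds = 2 / (3 * H0)                # Einstein-de Sitter age, 2/(3 H0)

# The two agree exactly; roughly 9.3 Gyr for this H0.
print(t_collapse / 3.156e16, t_eds / 3.156e16)
```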

Because Ω_{r} is so small, the deviations from a matter-only model are unimportant for *z* ≲ 1000, and so the distance-redshift relation for the *k* = 0 matter plus radiation model is effectively just that of the Ω_{m} = 1 Einstein-de Sitter model. An alternative *k* = 0 model of greater observational interest has a significant cosmological constant, so that Ω_{m} + Ω_{v} = 1 (radiation being neglected for simplicity). This may seem contrived, but once *k* = 0 has been established, it cannot change: individual contributions to Ω must adjust to keep in balance. The advantage of this model is that it is the only way of retaining the theoretical attractiveness of *k* = 0 while changing the age of the universe from the relation *H*_{0} *t*_{0} = 2/3, which characterises the Einstein-de Sitter model. Since much observational evidence indicates that *H*_{0} *t*_{0} ≈ 1 (see chapter 5), this model has received a good deal of interest in recent years. To keep things simple we shall neglect radiation, so that the Friedmann equation is

$$ \dot{a}^2 = H_0^2 \left[ \Omega_m a^{-1} + (1 - \Omega_m)\, a^2 \right], $$

and the *t (a)* relation is

$$ H_0\, t(a) = \int_0^a \frac{x\, dx}{\sqrt{\Omega_m x + (1 - \Omega_m)\, x^4}}. $$

The *x*^{4} on the bottom looks like trouble, but it can be rendered tractable by the substitution $y = \sqrt{x^3\, |\Omega_m - 1| / \Omega_m}$, which turns the integral into

$$ H_0\, t(a) = \frac{2}{3\sqrt{|1 - \Omega_m|}}\; S_k^{-1}\!\left( \sqrt{\frac{|1 - \Omega_m|}{\Omega_m}}\; a^{3/2} \right). $$

Here, *k* in *S_{k}* is used to mean sin if Ω_{m} > 1, otherwise sinh; these are still *k* = 0 models. This *t (a)* relation is compared to models without vacuum energy in figure 3.6. Since there is nothing special about the current era, we can clearly also rewrite this expression as

$$ H(a)\, t(a) = \frac{2}{3\sqrt{|1 - \Omega(a)|}}\; S_k^{-1}\!\left( \sqrt{\frac{|1 - \Omega(a)|}{\Omega(a)}} \right) \simeq \frac{2}{3}\, \Omega(a)^{-0.3}, $$

where we include a simple approximation that is accurate to a few % over the region of interest (Ω_{m} ≳ 0.1). In the general case of significant Λ but *k* ≠ 0, this expression still gives a very good approximation to the exact result, provided Ω_{m} is replaced by 0.7Ω_{m} - 0.3Ω_{v} + 0.3 (Carroll, Press & Turner 1992).
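
The quality of this fit is easy to verify: for the flat case Ω_{v} = 1 - Ω_{m}, the replacement 0.7Ω_{m} - 0.3Ω_{v} + 0.3 reduces to Ω_{m} itself, so the approximation reads H₀t₀ ≈ (2/3)Ω_{m}^{-0.3}. A sketch comparing it with direct numerical integration of the *k* = 0 matter + vacuum Friedmann equation:

```python
import numpy as np
from scipy.integrate import quad

def h0_t0_exact(om):
    """H0*t0 for a flat matter + vacuum model, integrating
    a'^2 = H0^2 (om/a + (1 - om) a^2) from a = 0 to a = 1."""
    val, _ = quad(lambda a: 1.0 / np.sqrt(om / a + (1.0 - om) * a**2), 0.0, 1.0)
    return val

def h0_t0_fit(om):
    """Carroll, Press & Turner (1992) fit, specialised to k = 0."""
    return (2 / 3) * om**-0.3

for om in (0.1, 0.3, 1.0):
    print(om, h0_t0_exact(om), h0_t0_fit(om))
```

Both give exactly 2/3 at Ω_{m} = 1, and they agree to a few percent down to Ω_{m} ≈ 0.1, as the text claims.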

HORIZONS For photons, the radial equation of motion is just *c dt* = *R dr*. How far can a photon get in a given time? The answer is clearly

$$ r = \int_{t_0}^{t} \frac{c\, dt'}{R(t')}, $$

i.e. just the interval of conformal time. What happens as *t*_{0} -> 0 in this expression? We can replace *dt* by $dR/\dot{R}$, which the Friedmann equation says is $\propto dR/\sqrt{\rho R^2}$ at early times. Thus, this integral converges if $\rho R^2 \to \infty$ as *t*_{0} -> 0, otherwise it diverges. Provided the equation of state is such that ρ changes faster than *R*^{-2}, light signals can only propagate a finite distance between the big bang and the present; there is then said to be a **particle horizon**. Such a horizon therefore exists in conventional big bang models, which are dominated by radiation at early times.

A particle horizon is not at all the same thing as an event horizon: for the latter, we ask whether *r* diverges as *t* -> ∞. If it does, then seeing a given event is just a question of waiting long enough. Clearly, an event horizon requires *R (t)* to increase more quickly than *t*, so that distant parts of the universe recede ``faster than light''. This does not occur unless the universe is dominated by vacuum energy at late times, as discussed above. Despite this distinction, cosmologists usually say **the horizon** when they mean the particle horizon.
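
Both kinds of horizon can be illustrated with the comoving distance integral ∫ da/(a²H): it stays finite toward a -> 0 for matter- or radiation-dominated expansion (a particle horizon) and toward large a for vacuum-dominated expansion (an event horizon). A sketch using pure power-law H(a), in units of the Hubble length c/H₀:

```python
from scipy.integrate import quad

def comoving_distance(a1, a2, h_of_a):
    """Comoving distance (units of c/H0) covered by a photon between scale
    factors a1 and a2: integral of da / (a^2 * H(a)/H0)."""
    val, _ = quad(lambda a: 1.0 / (a**2 * h_of_a(a)), a1, a2)
    return val

# Matter domination, H/H0 = a^{-3/2}: the integral stays finite as a1 -> 0,
# so there is a particle horizon (it tends to 2 Hubble lengths).
print(comoving_distance(1e-8, 1.0, lambda a: a**-1.5))

# Vacuum domination, H/H0 = const: the future integral stays finite as a2
# grows, so there is an event horizon (it tends to 1 Hubble length).
print(comoving_distance(1.0, 1e6, lambda a: 1.0))
```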

There are some unique aspects to the idea of a horizon in a closed universe, where you can in principle return to your starting point by continuing for long enough in the same direction. However, the related possibility of viewing the back of your head (albeit with some time delay) turns out to be more difficult once dynamics are taken into account. For a matter-only model, it is easy to show that the horizon only just reaches the stage of allowing a photon to circumnavigate the universe at the point of recollapse - the ``big crunch''. A photon that starts at *r* = 0 at *t* = 0 will return to its initial position when *r* = 2π, at which point the conformal time η = 2π also (from above) and the model has recollapsed. Since we live in an expanding universe, it is not even possible to see the same object in two different directions, at radii *r* and 2π - *r*. This requires a horizon size larger than π, but conformal time η = π is attained only at maximum expansion, so antipodal pairs of high-redshift objects are visible only in the collapse phase. These constraints do not apply if the universe has a significant cosmological constant: loitering models should allow one to see antipodal pairs at approximately the same redshift. This effect has been sought, but without success. *****

## Neutrino Modelling in Friedmann Equation - Astronomy

The discovery 40 years ago of the cosmic microwave background radiation (CMB) ended, for most people, the old debate about Steady-State vs. the Hot Big Bang. Ten years ago, support for the Hot Big Bang was fortified by the COBE satellite, which demonstrated that the CMB has a Planck spectrum to extremely high precision; it is, quite literally, the most perfect black body observed in nature [16]. This makes any model in which the CMB is produced by some secondary process, such as thermal re-radiation of starlight by hot dust, seem extremely difficult, if not impossible, to contrive.

Not only does the background radiation have a thermal spectrum, it is now evident that this radiation was hotter in the past than now, as expected for adiabatic expansion of the Universe. This is verified by observations of neutral carbon fine structure lines as well as molecular hydrogen rotational transitions in absorption line systems in the spectra of distant quasars. Here, the implied population of different levels, determined primarily by the background radiation field, is an effective thermometer for that radiation field. One example is provided by a quasar with an absorption line system at *z* = 3.025 which demonstrates that the temperature of the CMB at this redshift was 12.1 +1.7/-3.2 K, consistent with expectations (*T* ∝ 1 + *z*) [17].

However, the most outstanding success story for the Hot Big Bang is generally considered to be that of Big Bang Nucleosynthesis (BBN) which, for a given number of relativistic particle species, predicts the primordial abundances of the light isotopes with, effectively, one free parameter: the baryon-to-photon ratio, η [18]. I want to review this success story, and point out that there remains one evident inconsistency which may be entirely observational, but which alternatively may point to new physics.

We saw above in the Friedmann equation (eq. 3.7) that radiation, if present, will always dominate the expansion of the Universe at early enough epochs (roughly at *z* ≳ 2 × 10^{4} Ω_{m}). This makes the expansion and thermal history of the Universe particularly simple during this period. The Friedmann equation becomes

$$ H^2 = \frac{8\pi G}{3 c^2}\, \frac{a T^4}{2}\, N(T); $$

here *a* is the radiation constant and *N*(*T*) is the number of degrees of freedom in relativistic particles. The scale factor is seen to grow as *t*^{1/2}, which means that the age of the Universe is given by *t* = 1/(2*H*). This implies, from eq. 4.1, an age-temperature relation of the form *t* ∝ *T*^{-2}. Putting in numbers, the precise relation is

$$ t = \frac{2.4}{\sqrt{N(T)}}\; T_{\rm MeV}^{-2}, $$

where the age is given in seconds and *T*_{MeV} is the temperature measured in MeV. It is only necessary to count the number of relativistic particle species:

$$ N(T) = \sum g_B + \frac{7}{8} \sum g_F, $$

where the sums are over the number of bosonic degrees of freedom (*g*_{B}) and fermionic degrees of freedom (*g*_{F}). The factor 7/8 is due to the difference in Bose-Einstein and Fermi-Dirac statistics. Adding in all the known species - photons, electrons-positrons (when *T*_{MeV} > 0.5), three types of neutrinos and anti-neutrinos - we find *N* = 10.75, so that

$$ t \approx 0.74\; T_{\rm MeV}^{-2}\ {\rm s} $$

for the age-temperature relation in the early Universe.
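
The counting is easy to reproduce. A sketch, assuming the standard radiation-era relation t ≈ 2.4 N(T)^{-1/2} T_MeV^{-2} seconds (the 2.4 s prefactor is the usual one for this relation, an assumption here rather than a quote from the text):

```python
import numpy as np

def n_eff(g_bosons, g_fermions):
    """N(T) = sum g_B + (7/8) sum g_F over relativistic species."""
    return g_bosons + (7 / 8) * g_fermions

# Photons (2 dof) + e+/e- (4 fermionic dof) + 3 neutrinos and
# 3 anti-neutrinos (6 dof), appropriate for 0.5 < T_MeV < ~100:
N = n_eff(2, 4 + 6)
print(N)   # 10.75

def age_seconds(T_mev, N):
    """Radiation-era age-temperature relation, t ~ 2.4 N^{-1/2} T_MeV^{-2} s."""
    return 2.4 / np.sqrt(N) * T_mev**-2

print(age_seconds(1.0, N))    # ~0.7 s: neutron freeze-out near T = 1 MeV
print(age_seconds(0.086, N))  # ~100 s: around the deuterium bottleneck
```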

When the Universe is less than one second old (*T* > 1 MeV) the weak interactions

$$ n + \nu_e \leftrightarrow p + e^-, \qquad n + e^+ \leftrightarrow p + \bar{\nu}_e $$

are rapid enough to establish equilibrium between these various species. But when *T* falls below 1 MeV, the reaction rates become slower than the expansion rate of the Universe, and neutrons "freeze out" - they fall out of thermal equilibrium, as do the neutrinos. This means the equilibrium ratio of neutrons to protons at *T* ≈ 1 MeV is frozen into the expanding soup: *n* / *p* ≈ 0.20 - 0.25. You all know that neutrons outside of an atomic nucleus are unstable particles and decay with a mean lifetime of about 15 minutes. But before that happens there is a possible escape route:

$$ n + p \rightarrow d + \gamma, $$

that is to say, a neutron can combine with a proton to make a deuterium nucleus and a photon. However, so long as enough photons have energies above the binding energy of deuterium (2.22 MeV), the inverse reaction happens as well: as soon as a deuterium nucleus is formed it is photo-dissociated. Because photons vastly outnumber baryons, this remains true until the temperature of the Universe has fallen to about 86 keV or, looking back at eq. 4.4, until the Universe has become older than about 2.5 minutes. Then all of the remaining neutrons are rapidly processed into deuterium.
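
Why 86 keV rather than the 2.22 MeV binding energy itself? With roughly a billion photons per baryon, photo-dissociation only stops once even the exponential tail of the photon distribution, a fraction of roughly e^{-B_d/T} of all photons, no longer outnumbers the baryons. A rough order-of-magnitude sketch (the η value anticipates the one quoted later; the bare e^{-B/T} tail is a deliberate simplification):

```python
import numpy as np

B_d = 2.22        # binding energy of deuterium, MeV
eta = 6.1e-10     # baryon-to-photon ratio

# Photo-dissociation becomes ineffective roughly when
# exp(-B_d / T) ~ eta, i.e. T ~ B_d / ln(1/eta):
T_bottleneck = B_d / np.log(1.0 / eta)
print(T_bottleneck * 1000, "keV")   # ~100 keV: the order of the quoted 86 keV
```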

But the deuterium doesn't stay around for long either. Given the temperature and particle densities prevailing at this epoch, there is a series of two-body reactions by which two deuterons combine to make ^{4}He and trace amounts of lithium and ^{3}He. These reactions occur at a rate which depends upon the overall abundance of baryons, expressed as the ratio of baryons to photons:

$$ \eta = \frac{n_b}{n_\gamma}. $$

So essentially all neutrons which survive until *T* = 86 keV become locked up in ^{4}He. Therefore, the primordial abundance of helium depends primarily upon the expansion rate of the Universe: the faster the expansion (due, say, to more neutrino types or to a larger constant of gravity) the more helium. The abundance of remaining deuterium, however, depends upon the abundance of baryons, η: the higher η, the less deuterium. This is why it is sometimes said [18] that the abundance of primordial helium is a good chronometer (it measures the expansion rate), while the abundance of deuterium is a good baryometer (it measures Ω_{b}). This is evident in Figs. 1 and 2 where we see first the predicted abundances of various light isotopes as a function of η, and secondly, the predicted abundance of He vs. that of deuterium for two, three and four neutrino types.

The determination of primordial abundances is not a straightforward matter because the abundance of these elements evolves due to processes occurring within stars ("astration"). In general, the abundance of helium increases (hydrogen is processed to helium providing the primary energy source for stars), while deuterium is destroyed by the same process. This means that astronomers, when trying to estimate primordial abundances of deuterium or helium, must try to find pristine, unprocessed material, in so far as possible. One way to find unprocessed material is to look back at early times, or large redshift, before the baryonic material has been recycled through generations of stars. This can be done with quasar absorption line systems, where several groups of observers have been attempting to identify very shallow absorption lines of deuterium at the same redshift as the much stronger hydrogen Lyman alpha absorption line systems [19, 20, 21, 22]. It is a difficult observation requiring the largest telescopes; the lines identified with deuterium might be mis-identified weak hydrogen or metal lines (incidentally, for an astronomer, any element heavier than helium is a metal). Taking the results of various groups at face value, the weighted mean value [18] is D/H ≈ (2.6 ± 0.3) × 10^{-5}. Looking back at Fig. 1, we see that this would correspond to η = (6.1 ± 0.6) × 10^{-10} or Ω_{b} *h*^{2} = 0.022 ± 0.003.
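
The step from η to Ω_b h² uses the measured photon number density of the CMB; for T_CMB ≈ 2.73 K the standard conversion is η ≈ 2.74 × 10⁻⁸ Ω_b h². A sketch reproducing the quoted numbers (the conversion constant is an assumption here, not stated in the text):

```python
# Weighted-mean deuterium result quoted in the text:
eta = 6.1e-10

# Standard conversion for T_CMB ~ 2.73 K: eta ~ 2.74e-8 * (Omega_b h^2).
omega_b_h2 = eta / 2.74e-8
print(omega_b_h2)   # ~0.022, matching the value in the text
```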

A word of caution is necessary here: the values for the deuterium abundance determined by the different groups scatter by more than a factor of two, which is considerably larger than the quoted statistical errors (≈ 25%). This indicates that significant systematic effects are present. But it is noteworthy that the angular power spectrum of the CMB anisotropies also yields an estimate of the baryon abundance; this is encoded in the ratio of the amplitudes of the second to first peak. The value is Ω_{b} *h*^{2} = 0.024 ± 0.001. In other words, the two determinations agree to within their errors. This is quite remarkable considering that the first determination involves nuclear processes occurring within the first three minutes of the Big Bang, and the second involves oscillations of a photon-baryon plasma on an enormous scale when the Universe is about 500,000 years old. If this is a coincidence, it is truly an astounding one.

So much for the baryometer, but what about the chronometer - helium? Again astronomers are obliged to look for unprocessed material in order to estimate the primordial abundance. The technique of looking at quasar absorption line systems doesn't work for helium because the absorption lines from the ground state are far in the ultraviolet - about 600 Å for neutral helium and, more likely, 300 Å from singly ionized helium. This is well beyond the Lyman limit of hydrogen, where the radiation from the background quasar is effectively absorbed [23]. Here the technique is to look for He emission lines from HII regions (ionized gas around hot stars) in nearby galaxies and compare to the hydrogen emission lines. But how does one know that the gas is unprocessed? The clue is in the fact that stars not only process hydrogen into helium, but they also, in the late stages of their evolution, synthesize heavier elements (metals) in their interiors. Therefore the abundance of heavier elements, like silicon, is an indicator of how much nuclear processing the ionized gas has undergone. It is observed that the He abundance is correlated with the metal abundance, so the goal is to find HII regions with as low a metal abundance as possible, and then extrapolate this empirical correlation to zero metal abundance [24, 25]. The answer turns out to be He/H ≈ 0.24, which is shown by the point with error bars in Fig. 2.

This value is embarrassingly low, given the observed deuterium abundance. It is obviously more consistent with an expansion rate provided by only two neutrino types rather than three, but we know that there are certainly three types. Possible reasons for this apparent anomaly are:

1) Bad astronomy: There are unresolved systematic errors in the determination of the relative He abundance in HII regions, indicated by the fact that the results of different groups differ by more than the quoted statistical errors [18]. The derivation of the helium to hydrogen ratio from the observed He^{+}/H^{+} ratio requires some understanding of the structure of the HII regions. If there are relatively cool ionizing stars (*T* < 35000 K) spatially separated from the hotter stars, there may be relatively less He^{+} associated with a given abundance of H^{+}. Lines of other elements need to be observed to estimate the excitation temperature; it is a complex problem.

2) New neutrino physics: There may be an asymmetry between neutrinos and anti-neutrinos (something like the baryon-antibaryon asymmetry which provides us with the observed Universe). This would manifest itself as a chemical potential in the Boltzmann equation giving different equilibrium ratios of the various neutrino species [26].

3) New gravitational physics: any change in the gravitational interaction which is effective at early epochs (braneworld effects?) could have a pronounced effect on nucleosynthesis. For example, a lower effective constant of gravity would yield a lower expansion rate and a lower He abundance. The standard minimal braneworld correction term, proportional to the square of the density [27], goes in the wrong direction.

It is unclear if the low helium abundance is a serious problem for the standard Big Bang. But it is clear that the agreement of the implied baryon abundance with the CMB determination is an impressive success, and strongly supports the assertion that the Hot Big Bang is the correct model for the pre-recombination Universe. *****

## Answers and Replies

It's not really a matter of modifying the Friedmann Equations, it's a matter of starting over again with the Einstein Field Equation with different assumptions.

If you don't make any assumption at all about symmetry, I'm not sure how you would obtain any solution; the distribution of stress-energy could be anything.

If you are assuming that the universe is still axisymmetric, and that it is still homogeneous (so the axisymmetry is the same everywhere), that would constrain the stress-energy tensor and a solution might still be possible. There are known axisymmetric solutions to the EFE, but I don't know if any of them describe an expanding universe.

**Summary:** What will the Friedmann Equations be if we assume an anisotropic universe?

The Friedmann equations are based on the cosmological principle, which states that the universe at sufficiently large scales is homogeneous and isotropic.

But what if, as a hypothesis, the universe were anisotropic and the clustering of masses were aligned to an arbitrary axis (axial pole) - how would the Friedmann equations be modified?

I guess we would have to redefine the Friedmann metric tensor. But how?

The Friedmann metric tensor is:

$$ g = -dt \otimes dt + a^2(t) \left[ \frac{dr^2}{1 - kr^2} + r^2\, d\theta^2 + r^2 \sin^2\theta\, d\varphi^2 \right] $$

And the first Friedmann equation is:

$$ \left( \frac{\dot{a}}{a} \right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3}, $$

with the Hubble parameter defined as $H = \dot{a}/a$.

So how exactly would those equations need to be modified to account for anisotropy and an axial pole?

As PeterDonis said, you have to start over. Unfortunately, nobody knows how to do that well. There are exact spherically-symmetric solutions to the Einstein Field Equations that might be usable, but there's no way to do it in general except through approximations.

This is the basic concept behind perturbation theory as it's used in cosmology, where you have a background spacetime that is homogeneous and isotropic, but layered on top of that are some inhomogeneous fluctuations. It's a big, complex topic. But it underlies a lot of important cosmology regarding the formation of structure in the universe.

What does this mean mathematically? For example, what kind of symmetry transformations would leave the universe looking the same?

In the homogeneous and isotropic FRW spacetimes, the set of those symmetry transformations is: all spatial translations, and all spatial rotations about any axis. Obviously those can't all leave your hypothetical universe looking the same. But which ones *would* leave it looking the same?

I am not sure! When we say "looking the same" it must mean our own perception as to what the universe looks like and how we have modeled it within the framework of the Friedmann equations (Friedmann metric). However, what if our perception is faulty to a certain extent, and we would need a new model for expansion: anisotropic and inhomogeneous at very large scales, isotropic and homogeneous at local scales? See this article about the Planck results; the first 4 paragraphs are of interest.

But I think a spherical symmetry can still be used for it.

More or less, but "what the universe looks like" can be characterized by things like the density of matter, so it can be given a concrete meaning.

No, that's not necessary; you don't have to have a Friedmann metric to define things like the density of matter. You just have to think about, for the kind of model you are describing, how would the density of matter vary in space? What kinds of transformations could you do that would leave the distribution of density of matter in space unchanged?

This is something different from what you've been saying up to now. If the universe has a preferred axis, that would be true on all scales.

Ok. But then I don't know. That is why I'm asking in this thread.

No, I'm just adding or theorizing based on the Planck article, which says that for large enough scales the isotropic properties start to break down (in my own words). So I was thinking that the universe must look homogeneous and isotropic on local scales, but at very large scales we see anisotropy and possible inhomogeneity; in any case the universe is expanding and accelerating.

I don't have a clue as to what the metric should be in these cases.

The Planck article says nothing about an "axial pole" so I don't know where you got that from. If you're interested in spacetimes with an "axial pole" that would be a separate discussion from a discussion of the Planck results.

If you are interested in what kinds of spacetime models cosmologists are looking at because of what they see in the Planck data, the article mentions the Bianchi models I think they're talking about these:

http://www.scholarpedia.org/article/Bianchi_universes

But, as the article notes, nobody understands at this point how to construct a model that looks like one of these Bianchi models on very large scales but looks like an FRW model on smaller scales.

Can't I mix the two? The one with the Planck reports and the one of my own curiosity? Anyway, I prefer to talk about the axial pole and the anisotropic aspects of a hypothetical but expanding universe!

What kind of symmetry do you think would go for these criteria?

Not in the same thread, that just makes the discussion unfocused.

Then let's keep this thread focused on that and you can start a separate thread if you want to discuss the Planck results and implications based on the link you gave earlier.

The obvious symmetry would be axial symmetry, meaning that rotation about some fixed axis in space leaves everything looking the same. Unfortunately pretty much anything you will find about axially symmetric spacetimes in the literature will be about Kerr spacetime, i.e., the spacetime of a rotating black hole, which is axially symmetric but also stationary, not expanding.

Back to the original issue, the problem, fundamentally, is that you're asking us to define something that you haven't clearly specified. And the details really do matter here.

For example, here are three types of axis-aligned universes that one could write down on paper:

1) Matter has a tendency to be aligned along parallel rods. The universe is still homogeneous, but the way in which the matter is distributed picks out a particular direction. This kind of universe expands just like ours does (following the FRW equations), because it is homogeneous on large scales. The preferred direction would indicate that there were some interesting physics at work in the very early universe that aligned those initial perturbations along a particular direction.

2) There's a central line through the universe that is its maximum-density region. Density of the universe drops gradually as you move away from this line. This is a cylindrically-symmetric universe which does *not* behave like FRW. The Einstein equations would have to be reworked with this symmetry set up.

3) The universe is rotating around an axis. This is another type of cylindrical symmetry that would not behave like FRW. It might start homogeneous, but the rotation may force this universe out of homogeneity on large scales.

Just to add a little here:

Suppose one assumes just axial symmetry about a single axis, as naturally generalized from spherical symmetry about a point. This is not enough to make looking for solutions much easier than the fully general Einstein Field equations. To reach a plausibly tractable family of solutions involving matter (as well as vacuum, if desired), one needs to further assume non-rotation and stationary character. In this case, you can characterize the solutions in terms of 3 general functions of radial and axial coordinates, with explicit formulas for the Ricci tensor in terms of these (and thus the stress energy tensor). You can then further impose the dominant energy condition for classical plausibility. [For a typical textbook discussion of these issues, see Synge, "Relativity, the General Theory", section VIII.1; remarkably, MTW and Carroll have no discussion of general families of axisymmetric solutions. Wald does have such a discussion, with more rigor and a different emphasis than Synge, in particular allowing rotation plus matter - but with the result that solutions are *hard* to find, despite still assuming stationarity + axisymmetry.]

However, all of this is useless for cosmology because of the stationary assumption. In short, any way of doing anisotropic cosmology is way beyond the level of a standard graduate GR textbook (and I have no idea of what research may have been done in this area).

Well, I need the metric for the following assumptions:

1) Universe is expanding as it currently does.

2) Universe started from a Big Bang

3) Universe is anisotropic at large scales with a central axis

4) Universe is non-rotating

5) Universe looks homogeneous from local vantage points.

6) Non-locality (QM) is valid.

7) present density parameters (dark energy, matter, neutrino) are valid

PHY 524 - Graduate Cosmology

Welcome to the home page for the course “Cosmology” (PHY 524) in Spring 2020.

Overview of Course: This course covers the standard model of cosmology, including both the homogeneous Universe and perturbation theory, and the main observational tests of this model.

Class Times: Tuesdays and Thursdays 12:30pm - 1:50pm via Zoom meeting

Office Hours: Mondays 3:00 - 4:00pm via Zoom meeting

Course grade: Homework Problem Sets (25%)

Homeworks and Exams are to be submitted electronically via Blackboard

Course Text Book: Modern Cosmology by Scott Dodelson, Academic Press, 2003

Other Useful Text Books: Cosmology by Steven Weinberg, Oxford University Press, 2008

Introduction to the Theory of the Early Universe

by D. Gorbunov and V. Rubakov, World Scientific, 2011

Introduction to Cosmology by Barbara Ryden, Addison Wesley, 2003

(the last book is an undergrad text and may provide useful review)

Prerequisites: Knowledge of standard undergraduate physics (classical mechanics, electrodynamics, quantum mechanics, and thermodynamics) is assumed. Some knowledge of general relativity is also assumed (e.g. read Dodelson pages 23 - 33 prior to the start of class). No prior knowledge of quantum field theory, astronomy, or cosmology is assumed.

• Friedmann-Robertson-Walker metric and Friedmann equations

• Expansion history and distance measures

• Standard candles and standard rulers

• Relativistic degrees of freedom and the neutrino background

• Perturbed metric and Boltzmann equation for photons in real space

• Boltzmann equation for photons in Fourier-multipole space

• Perturbation equations for neutrinos, dark matter, and baryons

• Einstein equations and gauge transformations

• Initial conditions and adiabatic vs. isocurvature modes

• Generation of perturbations in inflation

• Inhomogeneities and the matter power spectrum

• Understand the physics and evolution of the smooth, homogeneous Universe starting from the Big Bang

• Understand the physics of perturbations about a smooth Universe

• Understand the observational consequences

Student Accessibility Support Center Statement:

If you have a physical, psychological, medical, or learning disability that may impact your course work, please contact the Student Accessibility Support Center, 128 ECC Building, (631) 632-6748, or at [email protected]. They will determine with you what accommodations are necessary and appropriate. All information and documentation is confidential.

Academic Integrity Statement:

Each student must pursue his or her academic goals honestly and be personally accountable for all submitted work. Representing another person's work as your own is always wrong. Faculty is required to report any suspected instances of academic dishonesty to the Academic Judiciary. Faculty in the Health Sciences Center (School of Health Technology & Management, Nursing, Social Welfare, Dental Medicine) and School of Medicine are required to follow their school-specific procedures. For more comprehensive information on academic integrity, including categories of academic dishonesty please refer to the academic judiciary website at http://www.stonybrook.edu/commcms/academic_integrity/index.html

Critical Incident Management:

Stony Brook University expects students to respect the rights, privileges, and property of other people. Faculty are required to report to the Office of University Community Standards any disruptive behavior that interrupts their ability to teach, compromises the safety of the learning environment, or inhibits students' ability to learn. Faculty in the HSC Schools and the School of Medicine are required to follow their school-specific procedures. Further information about most academic matters can be found in the Undergraduate Bulletin, the Undergraduate Class Schedule, and the Faculty-Employee Handbook.

## 2 - Overview of the Standard Cosmological Model

Cosmology is the quantitative study of the properties and evolution of the universe as a whole. Since the discovery of the redshift-distance relationship by Hubble in 1929, observations have supported the idea of an expanding universe, which can be beautifully described in terms of the Friedmann and Lemaître solution of the Einstein equations. The basis of this solution is the empirical observation that on sufficiently large scales, and at earlier times, the universe is remarkably homogeneous and isotropic. This experimental fact has been promoted to the role of a guiding assumption, the Cosmological Principle. Assuming that our observation point is not privileged, in the spirit of the Copernican revolution, one is naturally led to the conclusion that all observations made at different places in the universe should look pretty much the same independent of direction. Homogeneity and isotropy single out a unique form for the spacetime metric, the basic ingredient of Einstein theory. Cosmological models can then be quantitatively worked out after specification of the matter content, which acts as the source for curvature. Results can then be compared with astrophysical data, which in the last decades have reached a remarkable precision.

## Aleksandr Aleksandrovich Friedmann

**Alexander Friedmann**'s date of birth is often given as 29 June. However, this is an error which came about in converting the "Old Style" Russian date to the "New Style" date, which requires an addition of 12 days. Rather strangely, Friedmann wrongly converted his own date of birth to 17 June (it should have been 4 + 12 = 16). Then, not realising that the date he gave had already been converted, it was converted again (17 + 12 = 29).

Friedmann's father was a ballet dancer and his mother was a pianist. However, the parents divorced when Alexander was nine years old. Records show that the church sided with the father, and Alexander stayed with his father, who soon remarried. Alexander entered the Second St Petersburg Gymnasium in August 1897, and his record shows a quite ordinary school performance at first. Soon, however, Friedmann became one of the top two pupils in his class. The other outstanding pupil was Yakov Tamarkin, also an extraordinary mathematician, and the two boys were close friends, almost always together during their years at school and university.

In 1905 Friedmann and Tamarkin wrote a paper on Bernoulli numbers and submitted the paper to Hilbert for publication in *Mathematische Annalen*. The paper was accepted and appeared in print in 1906. The year 1905 was not only one of great scientific importance for Friedmann, it was also one where he was extremely active politically. Friedmann and Tamarkin were student leaders of strikes at the school in protest at the government's repressive measures against schools.

Friedmann graduated from school in 1906 and entered the University of St Petersburg in August of that year. There he was strongly influenced by Steklov, who had taken up an appointment at St Petersburg in the year Friedmann entered and shared Friedmann's political views. Friedmann was also influenced by Ehrenfest, who moved to St Petersburg in 1906. By 1907 Ehrenfest had set up a modern physics seminar which was attended by a number of young physicists and by the two young mathematicians Friedmann and Tamarkin. This group discussed quantum theory, relativity and statistical mechanics.

While Friedmann was an undergraduate at St Petersburg his father died. After he completed his studies in 1910, his scientific advisor, Steklov, wrote a reference for Friedmann to continue his studies. The death of Friedmann's father clearly had financial implications, as the reference indicated, see [3]:-

Friedmann began to study for his Master's Degree and, in 1911, became involved with a circle formed to study mathematical analysis and mechanics. In addition to Friedmann, other members of the circle included Tamarkin, Smirnov, Petelin, Shokhat and, a little later, Besicovitch joined the circle. Friedmann lectured on Clebsch's work on elasticity and other topics including Goursat's books. While studying for his Master's Degree Friedmann lectured at the Mining Institute, cooperating there with Nikolai Krylov, and he also taught at the Railway Engineering Institute. Through this work Friedmann became interested in aeronautics, and in 1911 he published an article surveying the area, describing, in particular, the contributions of Zhukovsky and Chaplygin.

By 1913 Friedmann had completed the necessary examinations for the Master's Degree, having been examined by Markov, Steklov and others. In February 1913 he was appointed to a position in the Aerological Observatory in Pavlovsk, a suburb of St Petersburg, where he was to study meteorology. In 1914 Friedmann went to Leipzig to study with Vilhelm Bjerknes, the leading theoretical meteorologist of the time. Friedmann left Leipzig in the summer of 1914 and took part in several flights in airships to make observations.

When Austria gave Serbia an ultimatum after the June 1914 assassination of Archduke Francis Ferdinand, Russia supported Serbia, so Germany came to the support of Austria. World War I broke out on 1 August 1914, and Friedmann soon sought permission from the Head of the Observatory to join the volunteer aviation detachment. He began flying aircraft and was soon involved in bombing raids. He continued to study mathematics, writing and exchanging mathematical ideas with Steklov by letter. In a letter to Steklov written on 5 February 1915, Friedmann writes, see [3]:-

Friedmann was awarded the George Cross for bravery for his flights over Przemysl. In the summer of 1915 the Russian army retreated on its south-west front. Friedmann was sent to Kiev, and there he gave lectures on aeronautics for pilots. In March 1916 he was appointed Head of the Central Aeronautical Station in Kiev. In Kiev, Friedmann joined the Mathematical Society, which had among its members Ch T Bialobzeski, P V Voronets, B N Delone, D A Grave, A P Kotelnikov, V P Linnik (I V Linnik's father) and O Yu Schmidt. In April 1917 the Central Aeronautical Station moved to Moscow, and Friedmann moved there.

The Revolution of October 1917 became inevitable when Alexander Kerensky, the prime minister, sent troops to close down two Bolshevik newspapers. Lenin, who had been in hiding, made a public appearance telling the Bolsheviks to overthrow the Government. On the morning of October 26, after hardly any bloodshed, Lenin proclaimed that the Soviets were in power. After this, the work of the Central Aeronautical Station was stopped, and Friedmann began to look for another post, but he was unsure of the direction he should take, particularly since his health had suffered as a result of the war. He wrote to Steklov saying:-

On 13 April 1918 Friedmann was elected an extraordinary professor in the Department of Mathematics and Physics at the University of Perm. Among the young colleagues he had there were A S Besicovitch, I M Vinogradov, N M Gunter and R O Kuzmin. At Perm Friedmann set up an Institute of Mechanics and became a member of the editorial board of the Journal of the newly founded Physico-Mathematical Society of Perm University.

The Russian nation was plunged into civil war. The Red Army had been formed in February 1918 with Trotsky as its leader. The Reds opposed the White Army, formed of anticommunists led by former imperial officers. In fact, Friedmann had commented on the Red Army in Perm on 27 April 1918, when he wrote:-

In the spring of 1920, with the Civil War still raging, Friedmann returned to St Petersburg (now named Petrograd) to take up a post at the Main Geophysical Observatory. Friedmann was never one to take life easy, and he took up an impressive number of appointments in 1920 in Petrograd. He began teaching mathematics and mechanics at Petrograd University, became a professor in the Physics and Mathematics Faculty of the Petrograd Polytechnic Institute, worked in the Department of Applied Aeronautics at the Petrograd Institute of Railway Engineering, worked at the Naval Academy and undertook research at the Atomic Commission at the Optical Institute.

In 1922, nine years after completing the examinations for his Master's Degree, Friedmann submitted his Master's dissertation. The dissertation was entitled *The Hydromechanics of a Compressible Fluid* and was in two parts, the first on the kinematics of vortices and the second on the dynamics of a compressible fluid. It was this work which stimulated the later work on hydrodynamics by Kochina.

Friedmann had taken up a new interest soon after returning to Petrograd. Einstein's general theory of relativity, although published in 1915, was not known in Russia due to World War I and the Civil War. By late 1920, Friedmann wrote in a letter to Ehrenfest:-

In July 1925 Friedmann made a record-breaking ascent in a balloon to 7400 metres to make meteorological and medical observations. He returned to Leningrad (Petrograd had been renamed Leningrad in 1924). Near the end of August 1925 Friedmann began to feel unwell. He was diagnosed as having typhoid and taken to hospital, where he died two weeks later.