Astronomy

Milky Way stellar number density : is the stated equation in this paper incorrect?


The paper is: http://www.astro.washington.edu/users/ivezic/Publications/tomographyI.pdf

The equation is equation #23 in the paper. It's a model for the density of stars in the Milky Way's disk. It has an exponential dependence on both $R$ and $Z$. $R$ is the distance from the center of the galaxy, and $Z$ is the distance above/below the plane of the disk.

The relation is roughly this:

$$\varrho = \text{constant} \times e^{-\frac{R}{L}-\frac{Z+Z_0}{H}}$$

The problem I'm seeing is the $Z$-dependence of the formula. $R$ and $Z$ here are standard cylindrical coordinates. Thus, $Z$ can be positive or negative. The problem is that for sufficiently negative $Z$ the exponent becomes positive, so the formula blows up going further below the plane. This is unphysical, since the number density should generally decrease with increasing distance from the disk plane.

Am I missing something? Or should the equation really have an absolute value, $|Z + Z_0|$?

[Added Later:] I finished making a 3D demo of the stellar number density of the Milky Way. Note that your browser needs to support WebGL.


It may make more sense to convert the exponential into the product of the individual exponential terms: $$\exp\!\left(-\frac{R}{L}\right)\exp\!\left(-\frac{Z}{H}\right)\exp\!\left(-\frac{Z_{\odot}}{H}\right)$$ This makes it clearer that the $Z_{\odot}$ term is a constant, which is just part of the overall normalization (similar to the $\exp(R_{\odot}/L)$ term earlier in the original equation).

And, yes, $Z$ is assumed to be always positive, so $|Z|$ would be slightly more correct.
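A minimal numerical sketch of this fix, assuming placeholder values for the scale length, scale height, and solar position (these are not the paper's fitted parameters), might look like the following:

import numpy as np

# Placeholder parameters (NOT the paper's fitted values; order-of-magnitude only):
L = 2600.0      # radial scale length, pc
H = 300.0       # vertical scale height, pc
R_sun = 8000.0  # solar galactocentric radius, pc
Z_sun = 25.0    # solar offset from the midplane, pc
rho_0 = 1.0     # overall normalization constant, arbitrary units

def disk_density(R, Z):
    """Exponential disk with |Z + Z_sun|, so the density decays on both sides of the plane."""
    return rho_0 * np.exp(R_sun / L) * np.exp(-R / L - np.abs(Z + Z_sun) / H)

# Without the absolute value, the density would grow exponentially below the plane;
# with it, both of these calls give small, decaying values:
print(disk_density(8000.0, +500.0))
print(disk_density(8000.0, -500.0))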


A Model of Habitability Within the Milky Way Galaxy

Department of Physics & Astronomy, Trent University, Peterborough, Ontario, Canada.

Department of Computing & Information Systems, Trent University, Peterborough, Ontario, Canada.

Department of Information & Computer Sciences and NASA Astrobiology Institute, University of Hawaii-Manoa, Honolulu, Hawaii, USA.



Abstract

We report the results of a systematic search for ultra-faint Milky Way satellite galaxies using data from the Dark Energy Survey (DES) and Pan-STARRS1 (PS1). Together, DES and PS1 provide multi-band photometry in optical/near-infrared wavelengths over ~80% of the sky. Our search for satellite galaxies targets ~25,000 deg² of the high-Galactic-latitude sky, reaching a 10σ point-source depth of 22.5 mag in the g and r bands. While satellite galaxy searches have been performed independently on DES and PS1 before, this is the first time that a self-consistent search has been performed across both data sets. We do not detect any new high-significance satellite galaxy candidates, recovering the majority of satellites previously detected in surveys of comparable depth. We characterize the sensitivity of our search using a large set of simulated satellites injected into the survey data. We use these simulations to derive both analytic and machine-learning models that accurately predict the detectability of Milky Way satellites as a function of their distance, size, luminosity, and location on the sky. To demonstrate the utility of this observational selection function, we calculate the luminosity function of Milky Way satellite galaxies, assuming that the known population of satellite galaxies is representative of the underlying distribution. We provide access to our observational selection function to facilitate comparisons with cosmological models of galaxy formation and evolution.



Hubble Data Help Show How Milky Way Galaxy Got Its Spiral Shape

The shape of the Milky Way galaxy, our solar system's home, may look a bit like a snail, but spiral galaxies haven't always had this structure, scientists say.

In a recent report, a team of researchers said they now know when and how the majestic swirls of spiral galaxies emerged in the universe. Galaxies are categorized into three main types, based on their shapes: spiral, elliptical and irregular. Almost 70 percent of those closest to the Milky Way are spirals. But in the early universe, spiral galaxies didn't exist.

A husband and wife team of astronomers, Debra Meloy Elmegreen at Vassar College in Poughkeepsie, N.Y., and Bruce Elmegreen at IBM's T.J. Watson Research Center in Yorktown Heights, N.Y., analyzed an image from the Hubble Space Telescope known as the Ultra Deep Field. It was taken over a four-month period in late 2003 and early 2004. The picture shows about 10,000 galaxies of different ages, some nearly as old as the universe itself. [Galaxies of the Universe Explained by Type (Infographic)]

To analyze this image, the researchers first sorted the galaxies into several basic types, "such as disk-like, clumpy, elliptical, tadpole-shape and double," said Debra Elmegreen. "We did this for all of the galaxies larger than 10 pixels in diameter, which we thought were large enough to classify, [which came to] about 1000 galaxies."

The scientists then used these classifications to study the most peculiar type of galaxy, a "very clumpy" type that does not really occur anymore in the current universe. However, the researchers established that most young galaxies were born very clumpy, because of gravitational instabilities in a highly turbulent, gas-rich disk.

Then, the Elmegreens studied the Hubble Ultra Deep Field for a second time, now examining the tadpole-shaped galaxies. Finally, they analyzed the spirals. "The motivation for this was the 50th anniversary of the publication of a very important paper on spiral density waves in galaxies, the paper by C.C. Lin and Frank H. Shu, 'On the Spiral Structure of Disk Galaxies,' which appeared in 1964 in the Astrophysical Journal," said Bruce Elmegreen.


A chaotic place

Out of 269 spiral galaxies in the Hubble Ultra Deep Field, the researchers analyzed 41. They discarded galaxies when it was impossible to determine a clear spiral structure or when there wasn't enough data to establish the galaxy's age. The researchers then sorted these 41 spiral galaxies into five different types, according to whether they were clumpy or smooth, well-defined or not, and the number and clarity of spiral arms they had. Next, the Elmegreens catalogued the properties of each galaxy type, such as its age, the size of clumps inside and its brightness at various frequencies.

The researchers found that the universe was a very chaotic place in its infancy. The first galaxies were disks with massive, bright, star-forming clumps and little regular structure. To develop the nice spiral forms seen today, galaxies first had to settle down, or "cool," from the previous chaotic phase. This evolution took several billion years.

Gradually, the galaxies that were to become spirals lost most of their big clumps, and a central, bright bulge would appear; the smaller clumps throughout the galaxy would begin to form indistinct, "woolly" spiral arms.

These arms would only become very distinct arms once the universe was about 3.6 billion years old. At that age, as the galaxies had a chance to settle down, the turbulence decreased, and new stars would form in a much quieter disk. "We can see the transition from the early chaotic state to the modern, relaxed state," said Bruce Elmegreen.

These first spiral galaxies were either two-armed structures or had thick, irregular spirals with some remaining clumps. More finely structured, multi-armed galaxies like the Milky Way galaxy and its neighbor Andromeda appeared much later, when the universe was 8 billion years old.

Next, the researchers plan to analyze other surveys, to get a broader picture of galaxies as a whole, including their overall masses, general morphologies and distribution in space. "We are interested in the internal structures of these galaxies, including star formation structures and spiral arm structures," said Bruce Elmegreen. [See amazing photos taken by the Hubble Space Telescope]

Cocoons and butterflies

"Studies of these internal structures require the highest possible angular resolution and depth of exposure, and so far it is difficult to compete with the Advanced Camera for Survey images of the Hubble Ultra Deep Field," he added. "In the future, we would like to try to extend our analysis to other fields so that we can include more galaxies, to the extent that it is possible."

Kartik Sheth, an astronomer at the National Radio Astronomy Observatory in Charlottesville, Va., who was not involved in the study, called the research "another useful piece of information in understanding the detailed assembly of [galactic] disks."

However, he added, the results had certain limitations. "We assume that the same criteria we apply for measuring and understanding stellar structures locally is also applicable in the distant past," Sheth said.

"This is OK, but imagine that if we are looking for butterflies, and galaxies were cocoons at an earlier time," he said. "We would get an incorrect result. So care has to be taken in understanding selection effects."

Still, Sheth said, the research is "pretty interesting and important." And once Hubble's successor, the James Webb Space Telescope, launches, "we will really be able to nail all this, because we will have the same resolution at the longer wavelengths as we now have with Hubble in the optical."


Astronomy Research Paper

Introduction
Current research in astronomy embraces a range of problems related to the peculiarities of different celestial bodies and ways in which better knowledge of astronomical phenomena can help humans in various spheres. This paper will explore two important areas of research. The first is an attempt to explain the truncation of stellar discs with the application of the magnetic hypothesis. The second is a more general problem related to the ongoing research aiming to define the age of the universe.

Truncation of stellar discs
The paper prepared by a team of researchers from the University of Granada in Spain attempts to test the viability of the magnetic hypothesis as an explanation for the truncation of stellar discs. In their view, the formation of the stellar disc is governed by three forces: magnetic and gravitational forces acting inwards, and the centrifugal force acting outwards. The sudden suppression of the magnetic force at the time when the new disc is formed leaves a gravitational force that is insufficient to retain the disc in place. As a result, the disc is bound to shift from its galactocentric orbit into intergalactic space. The scientists predicted that their observations would demonstrate this discrepancy between the actual and the galactocentric radii, which they called the “truncation radius”.

The researchers do not limit themselves to the magnetic hypothesis and explore other plausible explanations as well. Kennicutt (1989), for instance, advanced a theory that explains the pause in disc formation beyond a certain radius R through the presence of a ‘critical gas density’ (Battaner et al. 2002:2). Alternatively, Larson and Gunn explain the same process through the slow accretion of intergalactic matter, a process that is still ongoing, albeit at a very slow pace. Finally, the truncation of stellar discs can be explained through tidal influences.

The research presents calculations that model the balance, during disc formation, between the inward gravitational and magnetic forces on the one hand and the centrifugal force on the other. After computing the central mass potential and the exponential gas potential, the authors arrive at a decoupling time that proves to be very short, pointing to the possibility of an instantaneous transition from a magnetically driven disc to a proto-star.

As a result of the investigation, the researchers discard the theories of ‘critical gas density’ and tidal interactions. They do not rule out Larson’s theory of slow disc formation, and they maintain that the magnetic theory is a viable explanation of the truncation of stellar discs. The magnetic hypothesis of rotation curves, the research suggests, provides a successful framework for predicting the truncation radius of the disc.

The Age of the Universe
It has long been known that the Universe is constantly expanding, and this expansion may be occurring at a quicker pace now. This idea was first advanced by Edwin Hubble, who conducted his studies in Pasadena, California, and found that the speed at which a galaxy moves away from us is proportional to its distance from the Milky Way. This finding conformed to the mathematical description of the development of the universe advanced by Albert Einstein.

Hubble’s discovery led to the appearance of the Hubble constant, which “determines the size of the observable universe and provides constraints on competing models of the evolution of the universe” (Freeman 2003). Determining the accurate value of this constant still remains a challenge, although recent years have seen great progress in measurement precision. Astronomers at the Carnegie Observatories in Pasadena, California, used the Hubble Space Telescope to measure the distances and velocities of different galaxies.

The main goal of the Hubble Key Project was to compare results obtained from two independent methods in order to determine the Hubble constant with a lower degree of uncertainty. The new telescope was used for measurement of Cepheid distances to galaxies. The next step was “to determine the Hubble constant by applying the Cepheid calibration to several methods for measuring distances further out in the Hubble expansion” (Freeman 2003).

It should be noted that in Einstein’s equations for the development of the universe there was a cosmological constant that the great physicist later regarded as “his greatest blunder” (Freeman 2003). This term was originally introduced to prevent any expansion of the universe. The studies of the Californian astronomers suggest that Einstein may have been right after all, as the constant can stand for the so-called ‘dark energy’. Freeman believes that if the Hubble constant is taken to equal 70, “with matter contributing one-third and dark energy providing approximately two-thirds of the overall mass plus energy density”, the age of the universe can be pinpointed at around 13 billion years (Freeman 2003). This number agrees with the ages obtained from the examination of globular clusters. Although dark energy was at first met with scepticism by the scholarly community, further investigation may confirm the existence of this type of matter.
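As a rough back-of-the-envelope check (not a calculation taken from Freeman's paper), the inverse of a Hubble constant of 70 km s⁻¹ Mpc⁻¹ already sets the timescale, since one megaparsec is about $3.1\times10^{19}$ km:

$$\frac{1}{H_0} \approx \frac{3.1\times10^{19}\ \mathrm{km}}{70\ \mathrm{km\,s^{-1}}} \approx 4.4\times10^{17}\ \mathrm{s} \approx 14\ \mathrm{Gyr},$$

and folding in the quoted mix of roughly one-third matter and two-thirds dark energy lowers this slightly, to about 13 billion years.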

Conclusion
Ongoing astronomical research is highly diverse in the range of topics it explores. The determination of the age of the universe is a long-standing problem with which scholars have been struggling for decades. However, the Hubble Key Project allows one to look at the problem from a new angle, applying a combination of methods in order to make the resulting number more robust. The study may finally provide a more or less definite answer to the question “How old is the universe?”

The research under way at the University of Granada is focused on the more specific problem of the truncation of stellar discs. The scientists test a variety of theories to explore their usefulness in explaining this truncation. The magnetic hypothesis is believed to be the most promising, a conclusion supported by mathematical modelling.



From surveys to modeling: characterizing the survey selection functions

Spectroscopic surveys of the Milky Way are always affected by various selection effects, commonly referred to as ‘selection biases’. In their most benign form, these are due to a set of objective and repeatable decisions of what to observe (necessitated by the survey design). Selection biases typically arise in three forms: (a) the survey selection procedure, (b) the relation between the survey stars and the underlying stellar population, and (c) the extrapolation from the observed (spatial) volume to the ‘global’ Milky Way volume. Different analyses need not be affected by all three of these biases.

In many existing Galactic survey analyses, emphasis has been given—in the initial survey design, the targeting choices, and the subsequent sample culling—to getting as simple a selection function as possible, e.g. Kuijken and Gilmore (1989a), Nordström et al. (2004), Fuhrmann (2011), Moni Bidin et al. (2012). This then lessens, or even obviates, the need to deal with the selection function explicitly in the subsequent astrophysical analysis. While this is a consistent approach, it appears not to be viable, or at least to be far from optimal, for the analyses of the vast data sets that are emerging from the current ‘general purpose’ surveys. In the context of Galactic stellar surveys, a more general and rigorous way of guarding against these biases and correcting for them has been laid out perhaps most explicitly and extensively in Bovy et al. (2012d), and in the interest of a coherent exposition we focus on this case.

Therefore, we use the example of the spectroscopic SEGUE G-dwarf sample used in Bovy et al. (2012d)—a magnitude-limited, color-selected sample that is part of a targeted survey of ≃150 lines of sight at high Galactic latitude (see Yanny et al. 2009). We consider the idealized case that the sample was created using a single g−r color cut from a sample of pre-existing photometry and that objects were identically and uniformly sampled and successfully observed over a magnitude range r_min < r < r_max. We assume that all selection is performed in dereddened colors and extinction-corrected magnitudes. At face value, such a sample suffers from the three biases mentioned in the previous paragraph: (a) the survey selection function (SSF) is such that only stars in a limited magnitude range are observed spectroscopically, and this range corresponds to a different distance range for stars of different metallicities; (b) a g−r color cut selects more abundant, lower-mass stars at lower metallicities, such that different ranges of the underlying stellar population are sampled for different metallicities; and (c) the different distance range for stars of different metallicities, combined with different spatial distributions, means that different fractions of the total volume occupied by a stellar population are observed.

We assume that a spectroscopic survey is based on a pre-existing photometric catalog, presumed complete to potential spectroscopic targets. The survey selection procedure can then ideally be summarized by (a) the cuts on the photometric catalog to produce the potential spectroscopic targets, (b) the sampling method, and (c) potential quality cuts for defining a successfully observed spectrum (for example, a signal-to-noise ratio cut on the spectrum). We will assume that the sampling method is such that targets are selected independently from each other. If targets are not selected independently from each other (such as, for example, in systematic sampling techniques where each ‘N’th item in an ordered list is observed), then correcting for the SSF may be more complicated. For ease of use of the SSF, spectroscopic targets should be sampled independently from each other.

The top panel of Fig. 8 shows the relation between the underlying, complete photometric sample and the spectroscopic sample for the SEGUE G-dwarf sample. While the sampling in color is close to unbiased, the sampling in r-band magnitude is strongly biased against faint targets because of the signal-to-noise ratio cut (>15 in this case).

Spectroscopic survey selection functions: this figure illustrates two essential elements of the selection functions in spectroscopic surveys that must be accounted for rigorously in analyses of spectroscopic surveys of stars in the Galaxy. The specific case is taken from Bovy et al. (2012d) and discussed in Sect. 6.3. The left panel shows in grayscale the number density of stars with SDSS photometry (presumed to be complete) within the G-dwarf target selection box in (r, g−r) space; the contours show the distribution of stars that resulted in successful spectroscopic catalog entries (the spectroscopic completeness), after ‘bright’ and ‘faint’ plates were taken (see Bovy et al. 2012d). Obviously the distributions differ distinctly, as the marginalized histograms also show. The right panel shows the fraction of ‘available’ G-dwarf targets that were assigned fibers; clearly that fraction varies dramatically with Galactic coordinates

The selection function can then be expressed as a function S(r) of the relevant quantities r from the photometric catalog; this function expresses the relative fraction of entries in the spectroscopic catalog with respect to the complete photometric catalog, as a function of r. We will not concern ourselves here with how this function is derived for the spectroscopic survey in question; an example is given in Appendix A of Bovy et al. (2012d) and Fig. 8. We assume that the SSF is unbiased in velocity space, that is, that all velocities have equal probability of being observed, such that the SSF only affects analyses concerned with the spatial densities of objects. If we then want to infer the spatial density in x≡(R,z,ϕ) of a set of spectroscopic objects, we need to constrain the joint distribution λ(θ) of r, x, and whichever other parameters f are necessary to relate x to r (for example, when photometric distances are used, the metallicity can be used in addition to purely photometric properties to calculate x); we denote all arguments of λ as θ≡(r,x,f). The joint distribution can be written as

where ρ(r,f|x) is the distribution of r and f as a function of x, and |J| is a Jacobian, transforming from the heliocentric frame to the Galactocentric one. As discussed in Bovy et al. (2012d), the correct likelihood to fit is

where the product is over all spectroscopic data points. This likelihood simply states that the observed rate is normalized over the volume in θ space that could have been observed within the survey selection constraints as expressed by the SSF.

For example, in the SEGUE example discussed above, we may further assume for simplicity that the distance is obtained simply as d(r, g−r) (that is, ignoring the metallicity, such that there are no f) with no uncertainty, and that the distribution of colors g−r is uniform over the observed color range; then λ(θ) can be written as

where l and b are Galactic longitude and latitude, respectively; δ(r − r[x, g−r, l, b]) is a Dirac delta function that expresses the photometric distance. The likelihood then reduces to

where we have assumed that we only want to fit parameters of ν (such that the SSF can be dropped from the numerator). For a survey of a limited number of lines of sight such as SEGUE, the integral over l and b can be re-written as a sum over the lines of sight.
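To make this concrete, here is a minimal numerical sketch (not the authors' code) of such a likelihood for a toy double-exponential disk: the density nu, the selection function, the magnitude window, and the line-of-sight geometry below are all illustrative assumptions, and distance is used directly in place of the (r, g−r) photometric-distance machinery.

import numpy as np

# Toy ingredients, purely illustrative (not the SEGUE pipeline): an exponential
# disk density, a magnitude-limited selection function in r, and a few lines of
# sight standing in for the survey's ~150 pointings.

R0, Z0 = 8.0, 0.025            # assumed solar position (kpc)
r_min, r_max = 14.5, 20.2      # assumed apparent-magnitude window
M_r = 5.0                      # assumed absolute magnitude of a fiducial G dwarf

def nu(R, Z, hR, hZ):
    """Toy spatial density: double-exponential disk with scale length hR and height hZ (kpc)."""
    return np.exp(-R / hR - np.abs(Z) / hZ)

def selection(r_mag):
    """Toy SSF: zero outside the magnitude window, declining toward the faint end."""
    s = (r_max - r_mag) / (r_max - r_min)
    return np.where((r_mag > r_min) & (r_mag < r_max), np.clip(s, 0.0, 1.0), 0.0)

def R_Z_from_los(d, l_deg, b_deg):
    """Galactocentric R and Z for heliocentric distance d (kpc) along Galactic (l, b) in degrees."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    x = R0 - d * np.cos(b) * np.cos(l)
    y = -d * np.cos(b) * np.sin(l)
    return np.hypot(x, y), Z0 + d * np.sin(b)

def log_likelihood(hR, hZ, stars, lines_of_sight):
    """stars: array of (d, l, b) per observed star; the normalization integral over
    l and b is replaced by a sum over the survey's discrete lines of sight."""
    R, Z = R_Z_from_los(stars[:, 0], stars[:, 1], stars[:, 2])
    log_num = np.sum(np.log(nu(R, Z, hR, hZ)))
    d_grid = np.linspace(0.05, 5.0, 500)                  # distance grid (kpc)
    dd = d_grid[1] - d_grid[0]
    r_mag = M_r + 5.0 * np.log10(d_grid * 1.0e3) - 5.0    # distance modulus
    norm = 0.0
    for l_deg, b_deg in lines_of_sight:
        Rg, Zg = R_Z_from_los(d_grid, l_deg, b_deg)
        integrand = nu(Rg, Zg, hR, hZ) * selection(r_mag) * d_grid**2  # d^2 from the volume element
        norm += np.sum(integrand) * dd
    return log_num - stars.shape[0] * np.log(norm)

The per-pointing integral plays the role of the denominator above: because only the density parameters hR and hZ are being fit, the SSF enters the normalization but not the numerator.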

Assuming that the SSF does not depend on the velocity, fitting a joint distribution function model for the positions and velocity, for example when fitting dynamical models to data, uses a similar expression, with the density ν simply getting replaced by the distribution function, and the integral in the denominator in Eq. (5) includes an additional integration over velocities v. For example, in the context of the simplified SEGUE example, we fit a DF f(x,v|p) with parameters p using the likelihood

$$\mathcal{L}(\boldsymbol{p}) = \prod_i \frac{f(\boldsymbol{x}_i,\boldsymbol{v}_i\,|\,\boldsymbol{p})}{\int \mathrm{d}r\,\mathrm{d}(g-r)\,\mathrm{d}l\,\mathrm{d}b\,\mathrm{d}\boldsymbol{v}\; f\!\left(\boldsymbol{x}(r,g-r,l,b),\boldsymbol{v}\,|\,\boldsymbol{p}\right)\,|J|\,S(r,g-r,l,b)}.$$

Correcting for selection bias (b) requires stellar-population synthesis models, to connect the number of stars observed in a given color range to the full underlying stellar population. An example where this correction is performed by calculating the total stellar mass in a stellar population given the number of stars in a given color range is given in Appendix A of Bovy et al. (2012b). The total stellar mass is calculated as

$$M_{\mathrm{total}} = N\,\langle M\rangle\, f_M,$$

where N is the number of stars, 〈M〉 is the average mass in the observed color range, and f_M is the ratio of the total mass of a stellar population to the mass in the observed range. All of these can be easily calculated from stellar-population synthesis models (see Appendix A of Bovy et al. 2012b).

The correction for bias (c) above involves extrapolating the densities of stars in the observed volume to a ‘global’ volume. This is useful when comparing the total number of stars in the Milky Way in different components. For example, Bovy et al. (2012b) calculated for different MAPs the total stellar surface density at the solar radius as the ‘global’ quantity—also correcting biases (a) and (b) above—extrapolating from the volume observed by SEGUE (see below). This extrapolation requires the spatial density of each component to calculate the fraction of stars in the observed volume with respect to the ‘global’ volume. Correcting for this bias is then as simple as multiplying the number of stars in the observed volume by this factor.


The Faintest Dwarf Galaxies

Joshua D. Simon
Vol. 57, 2019

Abstract

The lowest-luminosity Milky Way satellite galaxies represent the extreme lower limit of the galaxy luminosity function. These ultra-faint dwarfs are the oldest, most dark matter–dominated, most metal-poor, and least chemically evolved stellar systems …

Supplemental Materials

Figure 1: Census of Milky Way satellite galaxies as a function of time. The objects shown here include all spectroscopically confirmed dwarf galaxies as well as those suspected to be dwarfs based on l.

Figure 2: Distribution of Milky Way satellites in absolute magnitude () and half-light radius. Confirmed dwarf galaxies are displayed as dark blue filled circles, and objects suspected to be dwarf gal.

Figure 3: Line-of-sight velocity dispersions of ultra-faint Milky Way satellites as a function of absolute magnitude. Measurements and uncertainties are shown as blue points with error bars, and 90% c.

Figure 4: (a) Dynamical masses of ultra-faint Milky Way satellites as a function of luminosity. (b) Mass-to-light ratios within the half-light radius for ultra-faint Milky Way satellites as a function.

Figure 5: Mean stellar metallicities of Milky Way satellites as a function of absolute magnitude. Confirmed dwarf galaxies are displayed as dark blue filled circles, and objects suspected to be dwarf .

Figure 6: Metallicity distribution function of stars in ultra-faint dwarfs. References for the metallicities shown here are listed in Supplemental Table 1. We note that these data are quite heterogene.

Figure 7: Chemical abundance patterns of stars in UFDs. Shown here are (a) [C/Fe], (b) [Mg/Fe], and (c) [Ba/Fe] ratios as functions of metallicity, respectively. UFD stars are plotted as colored diamo.

Figure 8: Detectability of faint stellar systems as functions of distance, absolute magnitude, and survey depth. The red curve shows the brightness of the 20th brightest star in an object as a functi.

Figure 9: (a) Color–magnitude diagram of Segue 1 (photometry from Muñoz et al. 2018). The shaded blue and pink magnitude regions indicate the approximate depth that can be reached with existing medium.


Astrophysical Classics: The Observed Relation between Star Formation and Gas in Galaxies

Stars form when cold gas in interstellar space collapses under its own weight. It is therefore not a stretch to think that the rate of star formation and amount of gas might be correlated. In today’s review of an “astrophysical classic,” we go back to 1998, and the seminal paper in which this correlation – now known as the “Kennicutt-Schmidt law” – is shown to hold across a wide sample of star-forming galaxies. This important relation not only hints at the underlying physics involved in the formation of stars, but is also a widely used prescription for modeling the star formation process in galaxy and cosmological simulations today.

A long time ago in a galaxy not so far, far away

The idea that the star formation rate (SFR) and gas density should be related started with a simple hypothesis stated in a key 1959 paper by Maarten Schmidt: “It is assumed that the rate of star formation…varies with a power n of the density of interstellar gas.” Schmidt supported his assertion with observational data of the solar neighborhood, concluding that the power law had an index of about two – a nonlinear relation. The ensuing decades brought better data and larger observational samples. After Schmidt, the attention gradually became focused on the global properties of galaxies rather than the nearby part of the Milky Way, as assembling a large sample of star-forming regions in our Galaxy is inhibited by source confusion and extinction thanks to our obscured vantage point within the disk. The x-axis of the relation – the gas density – originally was derived only from neutral hydrogen (H I) observations, as that was all that was possible in 1959; molecular line observations of CO (which traces molecular gas, a.k.a. H2) were first pioneered in 1970 by Wilson, Jefferts, & Penzias. Since stars directly form in molecular gas, this was a crucial step along the way. The Kennicutt-Schmidt law is typically formulated in units of surface densities – the star formation rate per unit area (ΣSFR) vs. the total gas surface density (ΣHI+H2). For galaxies, this removes the effects of a galaxy’s size or mass, rendering a comparison more meaningful.

Computing SFRs in galaxies

Estimating star formation rates is a tricky business. In nearby regions in the Milky Way, we can resolve individual stars, and thus simply look at young stellar clusters, count the stars, and then estimate their ages and masses. Not so in other galaxies – there one must rely on integrated, indirect measures. All the methods used to measure SFRs in this context leverage several important facts: (1) stars form in clusters; (2) massive stars dominate the luminosity of a cluster, particularly at short wavelengths; and (3) massive stars have very short lives. The procedure goes as follows. First, find a good tracer to observe. Some popular choices include ultraviolet emission (from the photospheres of massive stars), recombination radiation such as Hα (a tracer of ionizing photons from massive stars), or infrared (dust-reprocessed ultraviolet) emission. Next, convert from luminosity in that tracer into a total number of massive stars. This step requires what’s known as “population synthesis” modeling, in which clusters (or even galaxies) of stars are simulated by combining models of stellar evolution, stellar atmospheres, and how stars are distributed in clusters as a function of their masses (the Initial Mass Function, or IMF). Observed luminosities in a tracer (say, Hα) are compared to model luminosities to infer how many massive stars are in the actual population. The IMF is then extrapolated to lower masses to estimate the total mass in stars. This mass is then divided by the timescale over which the model was evolved to determine a star formation rate for the cluster or galaxy.
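As a concrete, heavily simplified illustration of this chain, the widely used Hα calibration from Kennicutt (1998) compresses the population-synthesis bookkeeping into a single conversion factor; the luminosity in the sketch below is a made-up example value:

# Halpha-based SFR via the Kennicutt (1998) calibration (Salpeter IMF):
# SFR [Msun/yr] ~ 7.9e-42 * L(Halpha) [erg/s]
L_Halpha = 1.3e41                  # erg/s, hypothetical extinction-corrected luminosity
sfr = 7.9e-42 * L_Halpha           # Msun/yr
print(f"SFR ~ {sfr:.2f} Msun/yr")  # about 1 Msun/yr for this example value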

Figure 1: The spiral disk galaxy M51 (left) shows knots of star formation mostly concentrated in spiral arms. In contrast, the starburst Arp 220 (right) exhibits vigorous dust-enshrouded star formation throughout its center. Image credits: NASA/Hubble.

In this 1998 paper, Kennicutt aimed to comprehensively examine the correlation between the SFR and gas density across a large dynamic range of star-forming galaxies. His sample included 61 “normal” spiral galaxies as well as 36 additional galaxies in which a very active episode of star formation – a starburst – was occurring in their centers (see Figure 1). For the normal spirals (which are disk galaxies roughly similar to the Milky Way), he compiled Hα (to trace the SFR) and H I + CO (to trace the atomic + molecular gas) measurements of galaxies from the literature. For each galaxy, the total integrated measurements were converted to surface densities by dividing by the galaxy’s area (after correcting for the inclination). For the starbursts, he instead used infrared data to calculate SFRs, as these engines of star formation are teeming with visible/ultraviolet-absorbing dust. Since the dust absorbs almost all the energy emitted by the starburst and then radiates it at its own blackbody temperature (or more accurately, temperatures, as there are likely multiple dust populations within a galaxy), the total infrared luminosity becomes a very reasonable tracer of the SFR in these systems. As the gas in starbursts is predominately molecular, Kennicutt used only CO (and not H I) measurements to estimate their gas densities. He converted integrated measurements to surface densities by dividing by the size of the starburst region, which is typically about a square kiloparsec.
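To make the bookkeeping concrete, here is a minimal sketch with invented numbers (not Kennicutt's sample) of how integrated measurements are turned into surface densities and a power-law index is fit in log-log space:

import numpy as np

# Hypothetical integrated measurements for four disk galaxies (illustrative only):
sfr_total  = np.array([1.0, 3.0, 0.5, 8.0])            # Msun/yr (e.g., from Halpha)
gas_total  = np.array([3.0e9, 8.0e9, 2.0e9, 1.5e10])   # Msun (HI + H2)
radius_kpc = np.array([10.0, 12.0, 8.0, 15.0])         # deprojected star-forming disk radius

area_kpc2 = np.pi * radius_kpc**2                      # face-on disk area
sigma_sfr = sfr_total / area_kpc2                      # Msun / yr / kpc^2
sigma_gas = gas_total / (area_kpc2 * 1.0e6)            # Msun / pc^2  (1 kpc^2 = 1e6 pc^2)

# Fit log10(Sigma_SFR) = N * log10(Sigma_gas) + const; N is the Schmidt-law index.
N, const = np.polyfit(np.log10(sigma_gas), np.log10(sigma_sfr), 1)
print(f"best-fit power-law index N ~ {N:.2f}")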

A strong correlation with lots of scatter

Figure 2: The original Kennicutt-Schmidt diagram, which plots the surface density of star formation against the gas surface density. The relation is shown for normal spiral galaxies (filled circles) and starburst nuclei (squares). The open circles are the central regions of selected disk galaxies. Kennicutt found a power law correlation with slope 1.4 that holds across many orders of magnitude.

Plotting the results for all 61 spirals alongside the 36 starbursts, Kennicutt showed that a superlinear (slope N≈1.4) power law is an excellent empirical description of the relation between the star formation rate and gas surface densities across more than six orders of magnitude in SFR in galaxies. Figure 2 shows the original “Kennicutt-Schmidt” diagram that is now almost ubiquitously seen in talks about star formation in galaxies, either observational or theoretical. Let’s deconstruct this plot just a bit more.

First, while the correlation is quite remarkable, there is still significant scatter present. Amongst the disk galaxies (the black circles), up to a factor of 30 difference in SFR is seen at a fixed gas density. Part of this may be due to how Kennicutt corrected for extinction in his Hα data: lacking any more robust means to determine how much of the Hα emission in the galaxy was being absorbed by dust, he simply assumed an extinction of 1.1 magnitude (a factor of 2.8 change in flux) for all galaxies. There is also some variation in how well CO traces H2 (the infamous “X-factor”) as a function of the physical conditions of the gas. While the actual amount of extinction and the value of the X-factor most certainly vary between (and within) galaxies, it is still unlikely that differences in these two factors alone could explain the full factor of 30 difference in SFRs across the sample. This suggests that much of the scatter actually represents real variations in the star formation efficiency (how long it takes the gas to turn into stars globally).
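For reference, the parenthetical flux factor follows directly from the magnitude definition: $$10^{\,0.4\times1.1} \approx 2.8.$$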

Second, the correlation for the spirals alone is much less robust than the combined correlation. It is the addition of the starbursts that provides the lever arm to achieve the high dynamic range that makes the correlation robust. (To see this, place your hand over the right half of the plot and try to draw a line through the remaining points).

Third, since the relation is superlinear, the efficiency of star formation – the SFR divided by gas density – seems to increase with increasing gas density. What does this mean? Kennicutt offers a theoretical argument for why this might be the case. If self-gravity in a gaseous spiral disk controls the formation of stars, the SFR volume density should scale as the gas volume density ρgas divided by the timescale for the growth of gravitational perturbations in the disk. Since the latter goes as the free-fall time t_ff ∝ (Gρgas)^(-1/2), this suggests that the SFR should scale as the gas density to the 1.5 power – very similar to the value of 1.4 observed. To convert between volume and surface densities, a constant disk scale height must also be assumed. This is however not the physical basis for the Kennicutt-Schmidt law, but simply a plausibility argument.
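Written out, the scaling argument is simply

$$\rho_{\mathrm{SFR}} \propto \frac{\rho_{\mathrm{gas}}}{t_{\mathrm{ff}}} \propto \rho_{\mathrm{gas}}\,(G\rho_{\mathrm{gas}})^{1/2} \propto \rho_{\mathrm{gas}}^{1.5},$$

with the assumed constant scale height converting these volume densities into the surface densities plotted in Figure 2.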

Kennicutt’s seminal 1998 paper, combined with rapidly improving observational facilities in the early 2000s, led to a burgeoning of activity in this field. Increased resolution has now allowed resolved studies within galaxies, and the Σ_SFR–Σ_gas relation appears to hold at kpc scales within disk galaxies, as well as amongst entire disk galaxies. Interestingly, the majority of work at these kpc scales finds a linear relation – N ≈ 1 – instead of the superlinear relation Kennicutt originally derived. Furthermore, the role of the molecular gas has now been isolated: the correlation between the SFR and H2 is much tighter than that between the SFR and HI+H2. Additionally, modern studies typically use multiple star formation tracers, e.g. Hα (to trace recombination radiation that escapes the galaxy) plus mid-infrared (to trace the portion absorbed by dust). For interested readers, Kennicutt & Evans (2012) provide a detailed and comprehensive (though dense) review of this subject, which I highly recommend.

There has been incredible progress over the last two decades in understanding the fundamental process of star formation. However, it is important to remember that the Kennicutt-Schmidt law is an empirical one. It is widely cited, widely studied, and widely used as a prescription in simulations. But the physical basis for a power law relation between the star formation rate and gas density has not yet been clearly determined. And that is why, as they say, this is still a very relevant topic of active research today.

“Astrophysical classics” is a series of articles that delves into seminal papers from the astronomical past and places them in the context of modern research.



Observations of galactic and extragalactic novae

The recent GAIA DR2 measurements of distances to Galactic novae have made it possible to re-analyse some properties of nova populations in the Milky Way and in external galaxies on new and more solid empirical bases. In some cases, we have been able to confirm results previously obtained, such as the division of nova populations into two classes of objects, that is, disk and bulge novae, and their link with the Tololo spectroscopic classification into Fe II and He/N novae. The recent and robust estimates of nova rates in the Magellanic Cloud galaxies provided by the OGLE team have confirmed the dependence of the normalized nova rate (i.e., the nova rate per unit luminosity of the host galaxy) on the colors and/or luminosity class of the parent galaxies. The nova rates in the Milky Way and in external galaxies have been collected from the literature and critically discussed. They are the necessary ingredient to assess the contribution of novae to the nucleosynthesis of their respective host galaxies, particularly to explain the origin of the overabundance of lithium observed in young stellar populations. A direct comparison between distances obtained via GAIA DR2 and the maximum magnitude vs. rate of decline (MMRD) relationship shows that the MMRD can provide distances with an uncertainty better than 30%. Multiwavelength observations of novae across the whole electromagnetic spectrum, from radio to gamma rays, have revealed that novae undergo a complex evolution characterized by several emission phases and a non-spherical geometry for the nova ejecta.


