Astronomy

How large can a ball of water be without fusion starting?

Peculiar question: some explanation might be necessary. My young son is into 'space' and astronomy. One of his posters says that Saturn could float, if a sufficiently large ocean could be found. Obviously that wouldn't work: Saturn's atmosphere would peel off and join or become the atmosphere of the larger body, and then Saturn's dense core would sink.

But could such an ocean even exist without fusion starting?


You really need a full-blown stellar evolution model to answer this precisely, and I'm not sure anyone has ever done this with an oxygen-dominated star.

To zeroth order the answer will be similar to that for a metal-rich star - i.e. about 0.075 times the mass of the Sun. Any less than this and the brown dwarf (for that is what we call a star that never gets hot enough at its centre to initiate significant fusion) can be supported by electron degeneracy pressure.

A star/brown dwarf with the composition you suggest would be a bit different. The composition would be thoroughly and homogeneously mixed by convection. Note that other than a thin layer near the surface, the water would be completely dissociated and the hydrogen and oxygen atoms completely ionised. Hence the density of protons in the core would be lower for the same mass density than in a "normal star". However, the temperature dependence is so steep I think this would be a minor factor and nuclear fusion would be significant at a similar temperature.

Of much greater importance is that there would be fewer electrons and fewer particles at the same density. This decreases both the electron degeneracy pressure and normal gas pressure at a given mass density. The star is therefore able to contract to much smaller radii before degeneracy pressure becomes important and can thus reach higher temperatures for the same mass as a result.

For that reason I think that the minimum mass for hydrogen fusion of a "water star" would be smaller than for a star made mainly of hydrogen.

But how much smaller? Back-of-the-envelope time!

Use the virial theorem to get a relationship between perfect gas pressure and the temperature, mass and radius of a star. Let the gravitational potential energy be $\Omega$; then the virial theorem says

$$\Omega = -3 \int P\, dV$$

If we only have a perfect gas then $P = \rho k T/\mu m_u$, where $T$ is the temperature, $\rho$ the mass density, $m_u$ an atomic mass unit and $\mu$ the average number of mass units per particle in the gas.

Assuming a constant density star (back of the envelope) then $dV = dM/\rho$, where $dM$ is a mass shell and $\Omega = -3GM^2/5R$, where $R$ is the "stellar" radius. Thus $$\frac{GM^2}{5R} = \frac{kT}{\mu m_u} \int dM$$ $$T = \frac{G M \mu m_u}{5 k R}$$ and so the central temperature $T \propto \mu M R^{-1}$.

Now what we do is say that the star contracts until, at this temperature, the phase space occupied by its electrons is $\sim h^3$ and electron degeneracy becomes important.

A standard treatment of this is to say that the physical volume occupied by an electron is $1/n_e$, where $n_e$ is the electron number density, and that the momentum volume occupied is $\sim (6 m_e k T)^{3/2}$. The electron number density is related to the mass density by $n_e = \rho/\mu_e m_u$, where $\mu_e$ is the number of mass units per electron. For ionised hydrogen $\mu_e=1$, but for oxygen $\mu_e=2$ (all the gas would be ionised near the temperatures for nuclear fusion). The average density $\rho = 3M/4\pi R^3$.

Putting these things together we get $$h^3 = \frac{(6 m_e k T)^{3/2}}{n_e} = \frac{4\pi \mu_e}{3}\left(\frac{6\mu}{5}\right)^{3/2} (G m_e R)^{3/2} m_u^{5/2} M^{1/2}$$
Thus the radius to which the star contracts in order for degeneracy pressure to be important is $$R \propto \mu_e^{-2/3} \mu^{-1} M^{-1/3}$$

If we now substitute this into the expression for central temperature, we find $$T \propto \mu M\, \mu_e^{2/3} \mu M^{1/3} \propto \mu^2 \mu_e^{2/3} M^{4/3}$$

Finally, if we argue that the temperature for fusion is the same in a "normal" star and our "water star", then the mass at which fusion will occur is given by the proportionality $$M \propto \mu^{-3/2} \mu_e^{-1/2}.$$

For a normal star with a hydrogen/helium mass ratio of 75:25, then $\mu \simeq 16/27$ and $\mu_e \simeq 8/7$. For a "water star", $\mu = 18/11$ and $\mu_e = 9/5$. Thus if the former set of parameters leads to a minimum mass for fusion of $0.075\, M_{\odot}$, then by increasing $\mu$ and $\mu_e$ this becomes smaller by the appropriate factor $\left(\frac{18\times 27}{11\times 16}\right)^{-3/2}\left(\frac{9\times 7}{5\times 8}\right)^{-1/2} = 0.173$.

Thus a water star would undergo H fusion at $0.013\, M_{\odot}$ or about 13 times the mass of Jupiter!
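As a quick numerical check, here is a minimal Python sketch that plugs the quoted values of $\mu$ and $\mu_e$ into the $M \propto \mu^{-3/2}\mu_e^{-1/2}$ scaling; the solar-to-Jupiter mass conversion factor is an assumption added for illustration, everything else is taken from the answer above.

```python
# Back-of-the-envelope check of the scaling M_fusion ∝ mu^(-3/2) * mu_e^(-1/2),
# using the mean molecular weights quoted in the answer (not re-derived here).
M_SUN_IN_M_JUP = 1047.6                  # assumed conversion, solar masses -> Jupiter masses

mu_normal, mu_e_normal = 16/27, 8/7      # 75/25 hydrogen/helium star
mu_water,  mu_e_water  = 18/11, 9/5      # "water star" values quoted above

factor = (mu_water / mu_normal) ** -1.5 * (mu_e_water / mu_e_normal) ** -0.5
M_min = 0.075 * factor                   # scale the 0.075 M_sun hydrogen-burning limit

print(f"scaling factor ≈ {factor:.4f}")                    # ≈ 0.17, the factor quoted above
print(f"minimum mass ≈ {M_min:.4f} M_sun "
      f"≈ {M_min * M_SUN_IN_M_JUP:.1f} Jupiter masses")    # ≈ 0.013 M_sun, roughly 13-14 M_Jup
```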

NB This only deals with hydrogen fusion. The small amount of deuterium would fuse at lower temperatures. A similar analysis would give a minimum mass for this to occur of about 3 Jupiter masses.


Heat of Fusion Example Problem: Melting Ice


    Heat of fusion is the amount of heat energy required to change the state of matter of a substance from a solid to a liquid. It's also known as enthalpy of fusion. Its units are usually Joules per gram (J/g) or calories per gram (cal/g). This example problem demonstrates how to calculate the amount of energy required to melt a sample of water ice.

    Key Takeaways: Heat of Fusion for Melting Ice

    • Heat of fusion is the amount of energy in the form of heat needed to change the state of matter from a solid to a liquid (melting).
    • The formula to calculate heat of fusion is: q = m·ΔHf (see the worked sketch after this list).
    • Note that the temperature does not actually change when matter changes state, so it's not in the equation or needed for the calculation.
    • Except for melting helium, heat of fusion is always a positive value.
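As a worked example of q = m·ΔHf, here is a minimal Python sketch; the value ΔHf ≈ 334 J/g for water ice is a standard reference figure and is an assumption added here, not taken from this page.

```python
# Energy required to melt a sample at its melting point: q = m * ΔHf.
# For water ice, ΔHf ≈ 334 J/g (about 80 cal/g); this constant is an assumed standard value.
def heat_to_melt(mass_g: float, delta_h_f_j_per_g: float = 334.0) -> float:
    """Return the heat q in joules needed to melt mass_g grams of a solid."""
    return mass_g * delta_h_f_j_per_g

print(heat_to_melt(25.0))   # 25 g of ice -> 8350 J, i.e. about 8.4 kJ
```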

    Deuterium

    Chemical symbol

    Deuterium is frequently represented by the chemical symbol D. Since it is an isotope of hydrogen with mass number 2, it is also represented by ²H. IUPAC allows both D and ²H, although ²H is preferred. [7] A distinct chemical symbol is used for convenience because of the isotope's common use in various scientific processes. Also, its large mass difference with protium (¹H) (deuterium has a mass of 2.014 102 u, compared to the mean hydrogen atomic weight of 1.007 947 u, and protium's mass of 1.007 825 u) confers non-negligible chemical dissimilarities with protium-containing compounds, whereas the isotope weight ratios within other chemical elements are largely insignificant in this regard.

    Spectroscopy

    In quantum mechanics the energy levels of electrons in atoms depend on the reduced mass of the system of electron and nucleus. For the hydrogen atom, the role of reduced mass is most simply seen in the Bohr model of the atom, where the reduced mass appears in a simple calculation of the Rydberg constant and Rydberg equation, but the reduced mass also appears in the Schrödinger equation, and the Dirac equation for calculating atomic energy levels.

    The reduced mass of the system in these equations is close to the mass of a single electron, but differs from it by a small amount about equal to the ratio of mass of the electron to the atomic nucleus. For hydrogen, this amount is about 1837/1836, or 1.000545, and for deuterium it is even smaller: 3671/3670, or 1.0002725. The energies of spectroscopic lines for deuterium and light hydrogen (hydrogen-1) therefore differ by the ratios of these two numbers, which is 1.000272. The wavelengths of all deuterium spectroscopic lines are shorter than the corresponding lines of light hydrogen, by a factor of 1.000272. In astronomical observation, this corresponds to a blue Doppler shift of 0.000272 times the speed of light, or 81.6 km/s. [8]
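The reduced-mass shift described above can be reproduced in a few lines of Python; the particle masses below are standard values and are assumptions added for illustration, not figures from this article.

```python
# Ratio of the electron reduced masses of deuterium and hydrogen-1, and the
# equivalent Doppler velocity for the resulting fractional wavelength shift.
m_e = 5.485799e-4   # electron mass, atomic mass units
m_p = 1.007276      # proton mass, atomic mass units
m_d = 2.013553      # deuteron mass, atomic mass units

mu_H = m_e * m_p / (m_e + m_p)   # reduced mass, hydrogen-1
mu_D = m_e * m_d / (m_e + m_d)   # reduced mass, deuterium

ratio = mu_D / mu_H
print(f"reduced-mass ratio ≈ {ratio:.6f}")                           # ≈ 1.00027
print(f"velocity equivalent ≈ {(ratio - 1) * 299792.458:.1f} km/s")  # close to the 81.6 km/s quoted above
```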

    The differences are much more pronounced in vibrational spectroscopy such as infrared spectroscopy and Raman spectroscopy, [9] and in rotational spectra such as microwave spectroscopy, because the reduced mass of the deuterium is markedly higher than that of protium. In nuclear magnetic resonance spectroscopy, deuterium has a very different NMR frequency (e.g. 61 MHz when protium is at 400 MHz) and is much less sensitive. Deuterated solvents are usually used in protium NMR to prevent the solvent from overlapping with the signal, although deuterium NMR in its own right is also possible.

    Big Bang nucleosynthesis

    Deuterium is thought to have played an important role in setting the number and ratios of the elements that were formed in the Big Bang. Combining thermodynamics and the changes brought about by cosmic expansion, one can calculate the fraction of protons and neutrons based on the temperature at the point that the universe cooled enough to allow formation of nuclei. This calculation indicates seven protons for every neutron at the beginning of nucleogenesis, a ratio that would remain stable even after nucleogenesis was over. This fraction was in favor of protons initially, primarily because the lower mass of the proton favored their production. As the Universe expanded, it cooled. Free neutrons and protons are less stable than helium nuclei, and the protons and neutrons had a strong energetic reason to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium.

    Through much of the few minutes after the Big Bang during which nucleosynthesis could have occurred, the temperature was high enough that the mean energy per particle was greater than the binding energy of weakly bound deuterium; therefore any deuterium that was formed was immediately destroyed. This situation is known as the deuterium bottleneck. The bottleneck delayed formation of any helium-4 until the Universe became cool enough to form deuterium (at about a temperature equivalent to 100 keV). At this point, there was a sudden burst of element formation (first deuterium, which immediately fused to helium). However, very shortly thereafter, at twenty minutes after the Big Bang, the Universe became too cool for any further nuclear fusion and nucleosynthesis to occur. At this point, the elemental abundances were nearly fixed, with the only change being the decay of some of the radioactive products of Big Bang nucleosynthesis (such as tritium). [10] The deuterium bottleneck in the formation of helium, together with the lack of stable ways for helium to combine with hydrogen or with itself (there are no stable nuclei with mass numbers of five or eight), meant that an insignificant amount of carbon, or any elements heavier than carbon, formed in the Big Bang. These elements thus required formation in stars. At the same time, the failure of much nucleogenesis during the Big Bang ensured that there would be plenty of hydrogen in the later universe available to form long-lived stars, such as our Sun.
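For orientation, the ~100 keV scale quoted above corresponds to roughly a billion kelvin; a one-line conversion using the standard Boltzmann constant (an assumed value, not from this article) is sketched below.

```python
# Convert the ~100 keV deuterium-bottleneck energy scale to a temperature via T = E / k_B.
K_B_EV_PER_K = 8.617333e-5   # Boltzmann constant in eV/K (standard value, assumed here)
print(f"T ≈ {100e3 / K_B_EV_PER_K:.2e} K")   # about 1.2 billion kelvin
```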

    Abundance

    Deuterium occurs in trace amounts naturally as deuterium gas, written ²H₂ or D₂, but most of the naturally occurring deuterium atoms in the Universe are bonded with a typical ¹H atom, a gas called hydrogen deuteride (HD or ¹H²H). [11]

    The existence of deuterium on Earth, elsewhere in the Solar System (as confirmed by planetary probes), and in the spectra of stars, is also an important datum in cosmology. Gamma radiation from ordinary nuclear fusion dissociates deuterium into protons and neutrons, and there are no known natural processes other than the Big Bang nucleosynthesis, which might have produced deuterium at anything close to its observed natural abundance. Deuterium is produced by the rare cluster decay, and occasional absorption of naturally occurring neutrons by light hydrogen, but these are trivial sources. There is thought to be little deuterium in the interior of the Sun and other stars, as at these temperatures the nuclear fusion reactions that consume deuterium happen much faster than the proton-proton reaction that creates deuterium. However, deuterium persists in the outer solar atmosphere at roughly the same concentration as in Jupiter, and this has probably been unchanged since the origin of the Solar System. The natural abundance of deuterium seems to be a very similar fraction of hydrogen, wherever hydrogen is found, unless there are obvious processes at work that concentrate it.

    The existence of deuterium at a low but constant primordial fraction in all hydrogen is another one of the arguments in favor of the Big Bang theory over the Steady State theory of the Universe. The observed ratios of hydrogen to helium to deuterium in the universe are difficult to explain except with a Big Bang model. It is estimated that the abundances of deuterium have not evolved significantly since their production about 13.8 billion years ago. [12] Measurements of Milky Way galactic deuterium from ultraviolet spectral analysis show a ratio of as much as 23 atoms of deuterium per million hydrogen atoms in undisturbed gas clouds, which is only 15% below the WMAP estimated primordial ratio of about 27 atoms per million from the Big Bang. This has been interpreted to mean that less deuterium has been destroyed in star formation in our galaxy than expected, or perhaps deuterium has been replenished by a large in-fall of primordial hydrogen from outside the galaxy. [13] In space a few hundred light years from the Sun, deuterium abundance is only 15 atoms per million, but this value is presumably influenced by differential adsorption of deuterium onto carbon dust grains in interstellar space. [14]

    The abundance of deuterium in the atmosphere of Jupiter has been directly measured by the Galileo space probe as 26 atoms per million hydrogen atoms. ISO-SWS observations find 22 atoms per million hydrogen atoms in Jupiter, [15] and this abundance is thought to represent close to the primordial solar system ratio. [5] This is about 17% of the terrestrial deuterium-to-hydrogen ratio of 156 deuterium atoms per million hydrogen atoms.

    Cometary bodies such as Comet Hale-Bopp and Halley's Comet have been measured to contain relatively more deuterium (about 200 atoms D per million hydrogens), ratios which are enriched with respect to the presumed protosolar nebula ratio, probably due to heating, and which are similar to the ratios found in Earth seawater. The recent measurement of deuterium amounts of 161 atoms D per million hydrogen in Comet 103P/Hartley (a former Kuiper belt object), a ratio almost exactly that in Earth's oceans, emphasizes the theory that Earth's surface water may be largely comet-derived. [4] [5] Most recently the deuterium–protium (D–H) ratio of 67P/Churyumov–Gerasimenko as measured by Rosetta is about three times that of Earth water, a figure that is high. [6] This has caused renewed interest in suggestions that Earth's water may be partly of asteroidal origin.

    Deuterium has also been observed to be concentrated over the mean solar abundance in other terrestrial planets, in particular Mars and Venus. [16]

    Production

    Deuterium is produced for industrial, scientific and military purposes, by starting with ordinary water—a small fraction of which is naturally-occurring heavy water—and then separating out the heavy water by the Girdler sulfide process, distillation, or other methods.

    In theory, deuterium for heavy water could be created in a nuclear reactor, but separation from ordinary water is the cheapest bulk production process.

    The world's leading supplier of deuterium was Atomic Energy of Canada Limited until 1997, when the last heavy water plant was shut down. Canada uses heavy water as a neutron moderator for the operation of the CANDU reactor design.

    Another major producer of heavy water is India. All but one of India's atomic energy plants are pressurised heavy water plants, which use natural (i.e., not enriched) uranium. India has eight heavy water plants, of which seven are in operation. Six plants, of which five are in operation, are based on D–H exchange in ammonia gas. The other two plants extract deuterium from natural water in a process that uses hydrogen sulphide gas at high pressure.

    While India is self-sufficient in heavy water for its own use, India now also exports reactor-grade heavy water.

    Physical properties

    The physical properties of deuterium compounds can exhibit significant kinetic isotope effects and other physical and chemical property differences from the protium analogs. D2O, for example, is more viscous than H2O. [17] Chemically, there are differences in bond energy and length for compounds of heavy hydrogen isotopes compared to protium, which are larger than the isotopic differences in any other element. Bonds involving deuterium and tritium are somewhat stronger than the corresponding bonds in protium, and these differences are enough to cause significant changes in biological reactions. Pharmaceutical firms are interested in the fact that deuterium is harder to remove from carbon than protium. [18]

    Deuterium can replace protium in water molecules to form heavy water (D2O), which is about 10.6% denser than normal water (so that ice made from it sinks in ordinary water). Heavy water is slightly toxic in eukaryotic animals, with 25% substitution of the body water causing cell division problems and sterility, and 50% substitution causing death by cytotoxic syndrome (bone marrow failure and gastrointestinal lining failure). Prokaryotic organisms, however, can survive and grow in pure heavy water, though they develop slowly. [19] Despite this toxicity, consumption of heavy water under normal circumstances does not pose a health threat to humans. It is estimated that a 70 kg (154 lb) person might drink 4.8 litres (1.3 US gal) of heavy water without serious consequences. [20] Small doses of heavy water (a few grams in humans, containing an amount of deuterium comparable to that normally present in the body) are routinely used as harmless metabolic tracers in humans and animals.

    Quantum properties

    The deuteron has spin +1 ("triplet state") and is thus a boson. The NMR frequency of deuterium is significantly different from common light hydrogen. Infrared spectroscopy also easily differentiates many deuterated compounds, due to the large difference in IR absorption frequency seen in the vibration of a chemical bond containing deuterium, versus light hydrogen. The two stable isotopes of hydrogen can also be distinguished by using mass spectrometry.

    The triplet deuteron is barely bound at EB = 2.23 MeV, and none of the higher energy states are bound. The singlet deuteron is a virtual state, with a negative binding energy of ~60 keV. There is no such stable particle, but this virtual particle transiently exists during neutron-proton inelastic scattering, accounting for the unusually large neutron scattering cross-section of the proton. [21]

    Nuclear properties (the deuteron)

    Deuteron mass and radius

    The nucleus of deuterium is called a deuteron. It has a mass of 2.013 553 212 745 (40) u (just over 1.875 GeV ). [22] [23]

    The charge radius of the deuteron is 2.127 99 (74) fm . [24]

    Like the proton radius, measurements using muonic deuterium produce a smaller result: 2.125 62 (78) fm . [25]

    Spin and energy

    Deuterium is one of only five stable nuclides with an odd number of protons and an odd number of neutrons (²H, ⁶Li, ¹⁰B, ¹⁴N, ¹⁸⁰ᵐTa; also, the long-lived radioactive nuclides ⁴⁰K, ⁵⁰V, ¹³⁸La, ¹⁷⁶Lu occur naturally). Most odd-odd nuclei are unstable with respect to beta decay, because the decay products are even-even, and are therefore more strongly bound, due to nuclear pairing effects. Deuterium, however, benefits from having its proton and neutron coupled to a spin-1 state, which gives a stronger nuclear attraction; the corresponding spin-1 state does not exist in the two-neutron or two-proton system, due to the Pauli exclusion principle, which would require one or the other identical particle with the same spin to have some other different quantum number, such as orbital angular momentum. But orbital angular momentum of either particle gives a lower binding energy for the system, primarily due to the increasing distance of the particles in the steep gradient of the nuclear force. In both cases, this causes the diproton and dineutron nucleus to be unstable.

    The proton and neutron making up deuterium can be dissociated through neutral current interactions with neutrinos. The cross section for this interaction is comparatively large, and deuterium was successfully used as a neutrino target in the Sudbury Neutrino Observatory experiment.

    Diatomic deuterium (D2) has ortho and para nuclear spin isomers like diatomic hydrogen, but with differences in the number and population of spin states and rotational levels, which occur because the deuteron is a boson with nuclear spin equal to one. [26]

    Isospin singlet state of the deuteron

    Due to the similarity in mass and nuclear properties between the proton and neutron, they are sometimes considered as two symmetric types of the same object, a nucleon. While only the proton has an electric charge, this is often negligible due to the weakness of the electromagnetic interaction relative to the strong nuclear interaction. The symmetry relating the proton and neutron is known as isospin and denoted I (or sometimes T).

    Isospin is an SU(2) symmetry, like ordinary spin, so is completely analogous to it. The proton and neutron, each of which have isospin ½, form an isospin doublet (analogous to a spin doublet), with a "down" state (↓) being a neutron and an "up" state (↑) being a proton. A pair of nucleons can either be in an antisymmetric state of isospin called singlet, or in a symmetric state called triplet. In terms of the "down" state and "up" state, the singlet is
    $$\frac{1}{\sqrt{2}}\big(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\big).$$

    This is a nucleus with one proton and one neutron, i.e. a deuterium nucleus. The triplet is
    $$|{\uparrow\uparrow}\rangle, \qquad \frac{1}{\sqrt{2}}\big(|{\uparrow\downarrow}\rangle + |{\downarrow\uparrow}\rangle\big), \qquad |{\downarrow\downarrow}\rangle,$$

    and thus consists of three types of nuclei, which are supposed to be symmetric: a deuterium nucleus (actually a highly excited state of it), a nucleus with two protons, and a nucleus with two neutrons. These states are not stable.

    Approximated wavefunction of the deuteron

    The deuteron wavefunction must be antisymmetric if the isospin representation is used (since a proton and a neutron are not identical particles, the wavefunction need not be antisymmetric in general). Apart from their isospin, the two nucleons also have spin and spatial distributions of their wavefunction. The latter is symmetric if the deuteron is symmetric under parity (i.e. has an "even" or "positive" parity), and antisymmetric if the deuteron is antisymmetric under parity (i.e. has an "odd" or "negative" parity). The parity is fully determined by the total orbital angular momentum of the two nucleons: if it is even then the parity is even (positive), and if it is odd then the parity is odd (negative).

    The deuteron, being an isospin singlet, is antisymmetric under nucleons exchange due to isospin, and therefore must be symmetric under the double exchange of their spin and location. Therefore, it can be in either of the following two different states:

    • Symmetric spin and symmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (+1) from spin exchange and (+1) from parity (location exchange), for a total of (−1) as needed for antisymmetry.
    • Antisymmetric spin and antisymmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (−1) from spin exchange and (−1) from parity (location exchange), again for a total of (−1) as needed for antisymmetry.

    In the first case the deuteron is a spin triplet, so that its total spin s is 1. It also has an even parity and therefore even orbital angular momentum l. The lower its orbital angular momentum, the lower its energy. Therefore, the lowest possible energy state has s = 1, l = 0.

    In the second case the deuteron is a spin singlet, so that its total spin s is 0. It also has an odd parity and therefore odd orbital angular momentum l. Therefore, the lowest possible energy state has s = 0 , l = 1 .

    Since s = 1 gives a stronger nuclear attraction, the deuterium ground state is in the s =1 , l = 0 state.

    The same considerations lead to the possible states of an isospin triplet having s = 0 , l = even or s = 1 , l = odd . Thus the state of lowest energy has s = 1 , l = 1 , higher than that of the isospin singlet.
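The bookkeeping in the last few paragraphs can be checked mechanically: the sketch below enumerates the exchange symmetry of each (isospin, spin, orbital) combination and keeps only the overall-antisymmetric ones. The sign conventions used are the standard ones, not anything specific to this article.

```python
from itertools import product

# The overall two-nucleon wavefunction must be antisymmetric under exchange.
# Exchange signs: isospin or spin singlet -> -1, triplet -> +1; spatial part -> +1 for even l, -1 for odd l.
for isospin, spin, orbital in product(("singlet", "triplet"), ("singlet", "triplet"), ("even l", "odd l")):
    sign = ((-1 if isospin == "singlet" else +1)
            * (-1 if spin == "singlet" else +1)
            * (+1 if orbital == "even l" else -1))
    if sign == -1:
        print(f"allowed: isospin {isospin}, spin {spin}, {orbital}")
# Reproduces the text: the isospin singlet (deuteron) needs spin triplet with even l
# or spin singlet with odd l; the isospin triplet needs spin singlet with even l or spin triplet with odd l.
```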

    The analysis just given is in fact only approximate, both because isospin is not an exact symmetry, and more importantly because the strong nuclear interaction between the two nucleons is related to angular momentum in spin–orbit interaction that mixes different s and l states. That is, s and l are not constant in time (they do not commute with the Hamiltonian), and over time a state such as s = 1 , l = 0 may become a state of s = 1 , l = 2 . Parity is still constant in time so these do not mix with odd l states (such as s = 0 , l = 1 ). Therefore, the quantum state of the deuterium is a superposition (a linear combination) of the s = 1 , l = 0 state and the s = 1 , l = 2 state, even though the first component is much bigger. Since the total angular momentum j is also a good quantum number (it is a constant in time), both components must have the same j, and therefore j = 1 . This is the total spin of the deuterium nucleus.

    To summarize, the deuterium nucleus is antisymmetric in terms of isospin, and has spin 1 and even (+1) parity. The relative angular momentum of its nucleons l is not well defined, and the deuteron is a superposition of mostly l = 0 with some l = 2 .

    Magnetic and electric multipoles

    In order to find theoretically the deuterium magnetic dipole moment μ, one uses the formula for a nuclear magnetic moment
    $$\mu = \frac{1}{j+1}\Big\langle (l,s),j,m_j{=}j \,\Big|\, \vec{\mu}\cdot\vec{\jmath} \,\Big|\, (l,s),j,m_j{=}j \Big\rangle$$
    with
    $$\vec{\mu} = g^{(l)}\vec{l} + g^{(s)}\vec{s},$$

    where $g^{(l)}$ and $g^{(s)}$ are g-factors of the nucleons.

    Since the proton and neutron have different values for $g^{(l)}$ and $g^{(s)}$, one must separate their contributions. Each gets half of the deuterium orbital angular momentum $\vec{l}$ and spin $\vec{s}$. One arrives at
    $$\vec{\mu} = \tfrac{1}{2}\big(g^{(l)}_p + g^{(l)}_n\big)\vec{l} + \tfrac{1}{2}\big(g^{(s)}_p + g^{(s)}_n\big)\vec{s},$$

    where subscripts p and n stand for the proton and neutron, and $g^{(l)}_n = 0$.

    By using standard angular-momentum identities and the value $g^{(l)}_p = 1$, we arrive at the following result, in units of the nuclear magneton $\mu_N$:
    $$\mu = \frac{1}{j+1}\left[\frac{g^{(s)}_p + g^{(s)}_n}{2}\,\big\langle \vec{s}\cdot\vec{\jmath}\,\big\rangle + \frac{1}{2}\big\langle \vec{l}\cdot\vec{\jmath}\,\big\rangle\right]$$

    For the s = 1, l = 0 state (j = 1), we obtain
    $$\mu = \tfrac{1}{2}\big(g^{(s)}_p + g^{(s)}_n\big) = 0.879$$

    For the s = 1, l = 2 state (j = 1), we obtain
    $$\mu = -\tfrac{1}{4}\big(g^{(s)}_p + g^{(s)}_n\big) + \tfrac{3}{4} = 0.310$$

    The measured value of the deuterium magnetic dipole moment is 0.857 μN, which is 97.5% of the 0.879 μN value obtained by simply adding the moments of the proton and neutron. This suggests that the state of the deuterium is indeed to a good approximation the s = 1, l = 0 state, which occurs with both nucleons spinning in the same direction, but their magnetic moments subtracting because of the neutron's negative moment.

    But the slightly lower experimental number than that which results from simple addition of proton and (negative) neutron moments shows that deuterium is actually a linear combination of mostly s = 1 , l = 0 state with a slight admixture of s = 1 , l = 2 state.
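The 97.5% figure can be reproduced directly from the free proton and neutron magnetic moments; the numerical moments below are standard reference values, added here for illustration.

```python
# Pure s=1, l=0 estimate of the deuteron moment: simply mu_p + mu_n (in nuclear magnetons).
MU_P = 2.79285    # proton magnetic moment, nuclear magnetons
MU_N = -1.91304   # neutron magnetic moment, nuclear magnetons
MU_D_MEASURED = 0.857

estimate = MU_P + MU_N
print(f"s=1, l=0 estimate: {estimate:.3f} mu_N")               # ≈ 0.880
print(f"measured / estimate: {MU_D_MEASURED / estimate:.3f}")  # ≈ 0.97, roughly the 97.5% quoted above
```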

    The measured electric quadrupole of the deuterium is 0.2859 e·fm². While the order of magnitude is reasonable, since the deuterium radius is of the order of 1 femtometer (see above) and its electric charge is e, the above model does not suffice for its computation. More specifically, the electric quadrupole does not get a contribution from the l = 0 state (which is the dominant one) and does get a contribution from a term mixing the l = 0 and the l = 2 states, because the electric quadrupole operator does not commute with angular momentum.

    The latter contribution is dominant in the absence of a pure l = 0 contribution, but cannot be calculated without knowing the exact spatial form of the nucleons wavefunction inside the deuterium.

    Higher magnetic and electric multipole moments cannot be calculated by the above model, for similar reasons.

    Deuterium has a number of commercial and scientific uses. These include:

    Nuclear reactors

    Deuterium is used in heavy water moderated fission reactors, usually as liquid D2O, to slow neutrons without the high neutron absorption of ordinary hydrogen. [27] This is a common commercial use for larger amounts of deuterium.

    In research reactors, liquid D2 is used in cold sources to moderate neutrons to very low energies and wavelengths appropriate for scattering experiments.

    Experimentally, deuterium is the most common nuclide used in nuclear fusion reactor designs, especially in combination with tritium, because of the large reaction rate (or nuclear cross section) and high energy yield of the D–T reaction. There is an even higher-yield D–³He fusion reaction, though the breakeven point of D–³He is higher than that of most other fusion reactions; together with the scarcity of ³He, this makes it implausible as a practical power source until at least D–T and D–D fusion reactions have been performed on a commercial scale. Commercial nuclear fusion is not yet an accomplished technology.

    NMR spectroscopy

    Deuterium is most commonly used in hydrogen nuclear magnetic resonance spectroscopy (proton NMR) in the following way. NMR ordinarily requires compounds of interest to be analyzed as dissolved in solution. Because of deuterium's nuclear spin properties which differ from the light hydrogen usually present in organic molecules, NMR spectra of hydrogen/protium are highly differentiable from that of deuterium, and in practice deuterium is not "seen" by an NMR instrument tuned for light-hydrogen. Deuterated solvents (including heavy water, but also compounds like deuterated chloroform, CDCl3) are therefore routinely used in NMR spectroscopy, in order to allow only the light-hydrogen spectra of the compound of interest to be measured, without solvent-signal interference.

    Nuclear magnetic resonance spectroscopy can also be used to obtain information about the deuteron's environment in isotopically labelled samples (Deuterium NMR). For example, the flexibility in the tail, which is a long hydrocarbon chain, in deuterium-labelled lipid molecules can be quantified using solid state deuterium NMR. [28]

    Deuterium NMR spectra are especially informative in the solid state because of its relatively small quadrupole moment in comparison with those of bigger quadrupolar nuclei such as chlorine-35, for example.

    Tracing

    In chemistry, biochemistry and environmental sciences, deuterium is used as a non-radioactive, stable isotopic tracer, for example, in the doubly labeled water test. In chemical reactions and metabolic pathways, deuterium behaves somewhat similarly to ordinary hydrogen (with a few chemical differences, as noted). It can be distinguished from ordinary hydrogen most easily by its mass, using mass spectrometry or infrared spectrometry. Deuterium can be detected by femtosecond infrared spectroscopy, since the mass difference drastically affects the frequency of molecular vibrations; deuterium-carbon bond vibrations are found in spectral regions free of other signals.

    Measurements of small variations in the natural abundances of deuterium, along with those of the stable heavy oxygen isotopes ¹⁷O and ¹⁸O, are of importance in hydrology, to trace the geographic origin of Earth's waters. The heavy isotopes of hydrogen and oxygen in rainwater (so-called meteoric water) are enriched as a function of the environmental temperature of the region in which the precipitation falls (and thus enrichment is related to mean latitude). The relative enrichment of the heavy isotopes in rainwater (as referenced to mean ocean water), when plotted against temperature, falls predictably along a line called the global meteoric water line (GMWL). This plot allows samples of precipitation-originated water to be identified along with general information about the climate in which it originated. Evaporative and other processes in bodies of water, and also ground water processes, also differentially alter the ratios of heavy hydrogen and oxygen isotopes in fresh and salt waters, in characteristic and often regionally distinctive ways. [29] The ratio of concentration of ²H to ¹H is usually indicated with a delta as δ²H, and the geographic patterns of these values are plotted in maps termed isoscapes. Stable isotopes are incorporated into plants and animals, and an analysis of the ratios in a migrant bird or insect can help suggest a rough guide to their origins. [30] [31]
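For concreteness, the δ²H notation mentioned above is conventionally defined relative to the VSMOW standard (D/H ≈ 155.76 per million, the figure given later in this article); a minimal sketch of that definition, with an invented sample ratio, is below.

```python
# Delta notation for the D/H ratio, in per mil (parts per thousand) relative to VSMOW.
VSMOW_D_H = 155.76e-6   # standard mean ocean water D/H ratio

def delta_2h(sample_d_h: float, standard: float = VSMOW_D_H) -> float:
    """Return δ²H in per mil for a given sample D/H ratio."""
    return (sample_d_h / standard - 1.0) * 1000.0

print(f"{delta_2h(140e-6):.1f} per mil")   # a D-depleted sample, about -101 per mil
```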

    Contrast properties

    Neutron scattering techniques particularly profit from availability of deuterated samples: The H and D cross sections are very distinct and different in sign, which allows contrast variation in such experiments. Further, a nuisance problem of ordinary hydrogen is its large incoherent neutron cross section, which is nil for D. The substitution of deuterium atoms for hydrogen atoms thus reduces scattering noise.

    Hydrogen is an important and major component in all materials of organic chemistry and life science, but it barely interacts with X-rays. As hydrogen (and deuterium) interact strongly with neutrons, neutron scattering techniques, together with a modern deuteration facility, [32] fill a niche in many studies of macromolecules in biology and many other areas.

    Nuclear weapons

    This is discussed below. It is notable that although most stars, including the Sun, generate energy over most of their lives by fusing hydrogen into heavier elements, such fusion of light hydrogen (protium) has never been successful in the conditions attainable on Earth. Thus, all artificial fusion, including the hydrogen fusion that occurs in so-called hydrogen bombs, requires heavy hydrogen (either tritium or deuterium, or both) in order for the process to work.

    Drugs

    A deuterated drug is a small molecule medicinal product in which one or more of the hydrogen atoms contained in the drug molecule have been replaced by deuterium. Because of the kinetic isotope effect, deuterium-containing drugs may have significantly lower rates of metabolism, and hence a longer half-life. [33] [34] [35] In 2017, deutetrabenazine became the first deuterated drug to receive FDA approval. [36]

    Reinforced essential nutrients

    Deuterium can be used to reinforce specific oxidation-vulnerable C-H bonds within essential or conditionally essential nutrients, [37] such as certain amino acids, or polyunsaturated fatty acids (PUFA), making them more resistant to oxidative damage. Deuterated polyunsaturated fatty acids, such as linoleic acid, slow down the chain reaction of lipid peroxidation that damage living cells. [38] [39] Deuterated ethyl ester of linoleic acid (RT001), developed by Retrotope, is in a compassionate use trial in infantile neuroaxonal dystrophy and has successfully completed a Phase I/II trial in Friedreich's ataxia. [40] [36]

    Thermostabilization

    Live vaccines, such as the oral poliovirus vaccine, can be stabilized by deuterium, either alone or in combination with other stabilizers such as MgCl2. [41]

    Slowing Circadian Oscillations

    Deuterium has been shown to lengthen the period of oscillation of the circadian clock when dosed in rats, hamsters, and Gonyaulax dinoflagellates. [42] [43] [44] [45] In rats, chronic intake of 25% D2O disrupts circadian rhythmicity by lengthening the circadian period of suprachiasmatic nucleus-dependent rhythms in the brain's hypothalamus. [46] Experiments in hamsters also support the theory that deuterium acts directly on the suprachiasmatic nucleus to lengthen the free-running circadian period. [47]

    Suspicion of lighter element isotopes

    The existence of nonradioactive isotopes of lighter elements had been suspected in studies of neon as early as 1913, and proven by mass spectrometry of light elements in 1920. The prevailing theory at the time was that isotopes of an element differ by the existence of additional protons in the nucleus accompanied by an equal number of nuclear electrons. In this theory, the deuterium nucleus with mass two and charge one would contain two protons and one nuclear electron. However, it was expected that the element hydrogen with a measured average atomic mass very close to 1 u , the known mass of the proton, always has a nucleus composed of a single proton (a known particle), and could not contain a second proton. Thus, hydrogen was thought to have no heavy isotopes.

    Deuterium detected

    It was first detected spectroscopically in late 1931 by Harold Urey, a chemist at Columbia University. Urey's collaborator, Ferdinand Brickwedde, distilled five liters of cryogenically produced liquid hydrogen to 1 mL of liquid, using the low-temperature physics laboratory that had recently been established at the National Bureau of Standards in Washington, D.C. (now the National Institute of Standards and Technology). The technique had previously been used to isolate heavy isotopes of neon. The cryogenic boiloff technique concentrated the fraction of the mass-2 isotope of hydrogen to a degree that made its spectroscopic identification unambiguous. [48] [49]

    Naming of the isotope and Nobel Prize

    Urey created the names protium, deuterium, and tritium in an article published in 1934. The name is based in part on advice from G. N. Lewis, who had proposed the name "deutium". The name is derived from the Greek deuteros ('second'), and the nucleus was to be called "deuteron" or "deuton". Isotopes and new elements were traditionally given the name that their discoverer decided. Some British scientists, such as Ernest Rutherford, wanted the isotope to be called "diplogen", from the Greek diploos ('double'), and the nucleus to be called "diplon". [3] [50]

    The amount inferred for normal abundance of this heavy isotope of hydrogen was so small (only about 1 atom in 6400 hydrogen atoms in ocean water (156 deuteriums per million hydrogens)) that it had not noticeably affected previous measurements of (average) hydrogen atomic mass. This explained why it hadn't been experimentally suspected before. Urey was able to concentrate water to show partial enrichment of deuterium. Lewis had prepared the first samples of pure heavy water in 1933. The discovery of deuterium, coming before the discovery of the neutron in 1932, was an experimental shock to theory, but when the neutron was reported, making deuterium's existence more explainable, deuterium won Urey the Nobel Prize in Chemistry in 1934. Lewis was embittered by being passed over for this recognition given to his former student. [3]

    "Heavy water" experiments in World War II Edit

    Shortly before the war, Hans von Halban and Lew Kowarski moved their research on neutron moderation from France to Britain, smuggling the entire global supply of heavy water (which had been made in Norway) across in twenty-six steel drums. [51] [52]

    During World War II, Nazi Germany was known to be conducting experiments using heavy water as moderator for a nuclear reactor design. Such experiments were a source of concern because they might allow Germany to produce plutonium for an atomic bomb. Ultimately it led to the Allied operation called the "Norwegian heavy water sabotage", the purpose of which was to destroy the Vemork deuterium production/enrichment facility in Norway. At the time this was considered important to the potential progress of the war.

    After World War II ended, the Allies discovered that Germany was not putting as much serious effort into the program as had been previously thought. They had been unable to sustain a chain reaction. The Germans had completed only a small, partly built experimental reactor (which had been hidden away). By the end of the war, the Germans did not even have a fifth of the amount of heavy water needed to run the reactor, partially due to the Norwegian heavy water sabotage operation. However, even if the Germans had succeeded in getting a reactor operational (as the U.S. did with a graphite reactor in late 1942), they would still have been at least several years away from the development of an atomic bomb. The engineering process, even with maximal effort and funding, required about two and a half years (from first critical reactor to bomb) in both the U.S. and U.S.S.R., for example.

    In thermonuclear weapons

    The 62-ton Ivy Mike device built by the United States and exploded on 1 November 1952, was the first fully successful "hydrogen bomb" (thermonuclear bomb). In this context, it was the first bomb in which most of the energy released came from nuclear reaction stages that followed the primary nuclear fission stage of the atomic bomb. The Ivy Mike bomb was a factory-like building, rather than a deliverable weapon. At its center, a very large cylindrical, insulated vacuum flask or cryostat, held cryogenic liquid deuterium in a volume of about 1000 liters (160 kilograms in mass, if this volume had been completely filled). Then, a conventional atomic bomb (the "primary") at one end of the bomb was used to create the conditions of extreme temperature and pressure that were needed to set off the thermonuclear reaction.

    Within a few years, so-called "dry" hydrogen bombs were developed that did not need cryogenic hydrogen. Released information suggests that all thermonuclear weapons built since then contain chemical compounds of deuterium and lithium in their secondary stages. The material that contains the deuterium is mostly lithium deuteride, with the lithium consisting of the isotope lithium-6. When the lithium-6 is bombarded with fast neutrons from the atomic bomb, tritium (hydrogen-3) is produced, and then the deuterium and the tritium quickly engage in thermonuclear fusion, releasing abundant energy, helium-4, and even more free neutrons.

    Modern research

    In August 2018, scientists announced the transformation of gaseous deuterium into a liquid metallic form. This may help researchers better understand giant gas planets, such as Jupiter, Saturn and related exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields. [53] [54]

    • Density: 0.180 kg/m³ at STP (0 °C, 101.325 kPa).
    • Atomic weight: 2.014 101 7926 u.
    • Mean abundance in ocean water (from VSMOW): 155.76 ± 0.1 ppm (a ratio of 1 part per approximately 6420 parts), that is, about 0.015% of the atoms in a sample (by number, not weight)

    Data at approximately 18 K for D2 (triple point):

      • Density:
        • Liquid: 162.4 kg/m³
        • Gas: 0.452 kg/m³
      • Specific heat capacity at constant pressure (cp):
        • Solid: 2950 J/(kg·K)
        • Gas: 5200 J/(kg·K)

      An antideuteron is the antimatter counterpart of the nucleus of deuterium, consisting of an antiproton and an antineutron. The antideuteron was first produced in 1965 at the Proton Synchrotron at CERN [55] and the Alternating Gradient Synchrotron at Brookhaven National Laboratory. [56] A complete atom, with a positron orbiting the nucleus, would be called antideuterium, but as of 2019 antideuterium has not yet been created. The proposed symbol for antideuterium is D̄, that is, D with an overbar. [57]


      In-space propulsion

      In-space propulsion begins where the upper stage of the launch vehicle leaves off, performing the functions of primary propulsion, reaction control, station keeping, precision pointing, and orbital maneuvering. The main engines used in space provide the primary propulsive force for orbit transfer, planetary trajectories and extraplanetary landing and ascent. The reaction control and orbital maneuvering systems provide the propulsive force for orbit maintenance, position control, station keeping, and spacecraft attitude control. [4] [2] [3]

      When in space, the purpose of a propulsion system is to change the velocity, or Δv, of a spacecraft. Because this is more difficult for more massive spacecraft, designers generally discuss spacecraft performance in terms of the change in momentum per unit of propellant consumed, also called specific impulse. [5] The higher the specific impulse, the better the efficiency. Ion propulsion engines have high specific impulse (~3000 s) and low thrust, [6] whereas chemical rockets like monopropellant or bipropellant rocket engines have a low specific impulse (a few hundred seconds) but high thrust.
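To make the specific-impulse comparison concrete, here is a small sketch using the ideal (Tsiolkovsky) rocket equation; only the ~3000 s ion figure comes from the text above, while the 450 s chemical figure and the mass numbers are illustrative assumptions.

```python
import math

# Ideal rocket equation: delta_v = Isp * g0 * ln(m0 / m1), with g0 the standard gravity.
G0 = 9.80665  # m/s^2

def delta_v(isp_s: float, m_initial: float, m_final: float) -> float:
    """Ideal delta-v (m/s) for a given specific impulse and mass ratio."""
    return isp_s * G0 * math.log(m_initial / m_final)

# Same propellant fraction (20% of initial mass), very different delta-v:
for name, isp in (("chemical, ~450 s", 450.0), ("ion, ~3000 s", 3000.0)):
    print(f"{name}: {delta_v(isp, 1000.0, 800.0):.0f} m/s")   # roughly 1 km/s vs 6.6 km/s
```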

      When launching a spacecraft from Earth, a propulsion method must overcome a higher gravitational pull to provide a positive net acceleration. [8] In orbit, any additional impulse, even very tiny, will result in a change in the orbit path.

      1) Prograde/Retrograde (i.e. acceleration in the tangential/opposite-to-tangential direction) - increases/decreases altitude of orbit

      2) Perpendicular to orbital plane - changes orbital inclination

      The rate of change of velocity is called acceleration, and the rate of change of momentum is called force. To reach a given velocity, one can apply a small acceleration over a long period of time, or one can apply a large acceleration over a short time. Similarly, one can achieve a given impulse with a large force over a short time or a small force over a long time. This means that for manoeuvring in space, a propulsion method that produces tiny accelerations but runs for a long time can produce the same impulse as a propulsion method that produces large accelerations for a short time. When launching from a planet, tiny accelerations cannot overcome the planet's gravitational pull and so cannot be used.

      Earth's surface is situated fairly deep in a gravity well. The escape velocity required to get out of it is 11.2 kilometers/second. As human beings evolved in a gravitational field of 1g (9.8 m/s²), an ideal propulsion system for human spaceflight would be one that provides a continuous acceleration of 1g (though human bodies can tolerate much larger accelerations over short periods). The occupants of a rocket or spaceship having such a propulsion system would be free from all the ill effects of free fall, such as nausea, muscular weakness, reduced sense of taste, or leaching of calcium from their bones.

      The law of conservation of momentum means that in order for a propulsion method to change the momentum of a space craft it must change the momentum of something else as well. A few designs take advantage of things like magnetic fields or light pressure in order to change the spacecraft's momentum, but in free space the rocket must bring along some mass to accelerate away in order to push itself forward. Such mass is called reaction mass.

      In order for a rocket to work, it needs two things: reaction mass and energy. The impulse provided by launching a particle of reaction mass having mass m at velocity v is mv. But this particle has kinetic energy mv²/2, which must come from somewhere. In a conventional solid, liquid, or hybrid rocket, the fuel is burned, providing the energy, and the reaction products are allowed to flow out the back, providing the reaction mass. In an ion thruster, electricity is used to accelerate ions out the back. Here some other source must provide the electrical energy (perhaps a solar panel or a nuclear reactor), whereas the ions provide the reaction mass. [8]

      A rocket with a high exhaust velocity can achieve the same impulse with less reaction mass. However, the energy required for that impulse is proportional to the exhaust velocity, so that more mass-efficient engines require much more energy, and are typically less energy efficient. This is a problem if the engine is to provide a large amount of thrust. To generate a large amount of impulse per second, it must use a large amount of energy per second. So high-mass-efficient engines require enormous amounts of energy per second to produce high thrusts. As a result, most high-mass-efficient engine designs also provide lower thrust due to the unavailability of high amounts of energy.
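The thrust-versus-power trade described above follows from F = ṁ·v_e and P = ½·ṁ·v_e², so the jet power needed per newton of thrust grows linearly with exhaust velocity; the sketch below uses illustrative exhaust velocities that are assumptions, not figures from this article.

```python
# Jet power required for a given thrust: P = 0.5 * F * v_e (since F = mdot * v_e and P = 0.5 * mdot * v_e**2).
def jet_power_w(thrust_n: float, exhaust_velocity_m_s: float) -> float:
    """Minimum kinetic power of the exhaust stream for the given thrust and exhaust velocity."""
    return 0.5 * thrust_n * exhaust_velocity_m_s

print(jet_power_w(1.0, 4500.0))    # ~2.3 kW per newton for a chemical-like exhaust (~4.5 km/s)
print(jet_power_w(1.0, 30000.0))   # ~15 kW per newton for an ion-like exhaust (~30 km/s)
```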

      In-space propulsion represents technologies that can significantly improve a number of critical aspects of the mission. Space exploration is about getting somewhere safely (mission enabling), getting there quickly (reduced transit times), getting a lot of mass there (increased payload mass), and getting there cheaply (lower cost). The simple act of "getting" there requires the employment of an in-space propulsion system, and the other metrics are modifiers to this fundamental action. [4] [3]

      Development of technologies will result in technical solutions that improve thrust levels, Isp, power, specific mass (or specific power), volume, system mass, system complexity, operational complexity, commonality with other spacecraft systems, manufacturability, durability, and cost. These types of improvements will yield decreased transit times, increased payload mass, safer spacecraft, and decreased costs. In some instances, development of technologies within this technology area (TA) will result in mission-enabling breakthroughs that will revolutionize space exploration. There is no single propulsion technology that will benefit all missions or mission types. The requirements for in-space propulsion vary widely according to their intended application. The described technologies should support everything from small satellites and robotic deep space exploration to space stations and human missions to Mars. [4] [3]

      Defining technologies

      Furthermore, the term "mission pull" defines a technology or a performance characteristic necessary to meet a planned NASA mission requirement. Any other relationship between a technology and a mission (an alternate propulsion system, for example) is categorized as "technology push." Also, a space demonstration refers to the spaceflight of a scaled version of a particular technology or of a critical technology subsystem. On the other hand, a space validation would serve as a qualification flight for future mission implementation. A successful validation flight would not require any additional space testing of a particular technology before it can be adopted for a science or exploration mission. [4]

      Spacecraft operate in many areas of space. These include orbital maneuvering, interplanetary travel and interstellar travel.

      Orbital

      Artificial satellites are first launched into the desired altitude by conventional liquid/solid propelled rockets after which the satellite may use onboard propulsion systems for orbital stationkeeping. Once in the desired orbit, they often need some form of attitude control so that they are correctly pointed with respect to the Earth, the Sun, and possibly some astronomical object of interest. [9] They are also subject to drag from the thin atmosphere, so that to stay in orbit for a long period of time some form of propulsion is occasionally necessary to make small corrections (orbital station-keeping). [10] Many satellites need to be moved from one orbit to another from time to time, and this also requires propulsion. [11] A satellite's useful life is usually over once it has exhausted its ability to adjust its orbit.

      Interplanetary

      For interplanetary travel, a spacecraft can use its engines to leave Earth's orbit. It is not strictly necessary, as the initial boost given by the rocket, gravity slingshots, and a monopropellant/bipropellant attitude control propulsion system are enough for the exploration of the solar system (see New Horizons). Once it has done so, it must somehow make its way to its destination. Current interplanetary spacecraft do this with a series of short-term trajectory adjustments. [12] In between these adjustments, the spacecraft simply moves along its trajectory without accelerating. The most fuel-efficient means to move from one circular orbit to another is with a Hohmann transfer orbit: the spacecraft begins in a roughly circular orbit around the Sun. A short period of thrust in the direction of motion accelerates or decelerates the spacecraft into an elliptical orbit around the Sun which is tangential to its previous orbit and also to the orbit of its destination. The spacecraft falls freely along this elliptical orbit until it reaches its destination, where another short period of thrust accelerates or decelerates it to match the orbit of its destination. [13] Special methods such as aerobraking or aerocapture are sometimes used for this final orbital adjustment. [14]
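A Hohmann transfer like the one described above can be quantified with the standard two-burn formula; the LEO-to-GEO example below (around Earth rather than the Sun) uses illustrative numbers that are assumptions, not values from this article.

```python
import math

# Hohmann transfer between circular coplanar orbits of radii r1 and r2 (r1 < r2)
# around a body with gravitational parameter mu: two tangential burns.
def hohmann_delta_v(mu: float, r1: float, r2: float) -> tuple:
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2.0 * r2 / (r1 + r2)) - 1.0)   # enter transfer ellipse
    dv2 = math.sqrt(mu / r2) * (1.0 - math.sqrt(2.0 * r1 / (r1 + r2)))   # circularize at destination
    return dv1, dv2

MU_EARTH = 3.986004418e14   # m^3/s^2
burn1, burn2 = hohmann_delta_v(MU_EARTH, 6.678e6, 4.2164e7)   # LEO (~300 km altitude) to GEO
print(f"burn 1: {burn1:.0f} m/s, burn 2: {burn2:.0f} m/s, total: {burn1 + burn2:.0f} m/s")  # ≈ 3.9 km/s total
```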

      Some spacecraft propulsion methods such as solar sails provide very low but inexhaustible thrust; [15] an interplanetary vehicle using one of these methods would follow a rather different trajectory, either constantly thrusting against its direction of motion in order to decrease its distance from the Sun or constantly thrusting along its direction of motion to increase its distance from the Sun. The concept has been successfully tested by the Japanese IKAROS solar sail spacecraft.

      Interstellar

      No spacecraft capable of short duration (compared to human lifetime) interstellar travel has yet been built, but many hypothetical designs have been discussed. Because interstellar distances are very great, a tremendous velocity is needed to get a spacecraft to its destination in a reasonable amount of time. Acquiring such a velocity on launch and getting rid of it on arrival remains a formidable challenge for spacecraft designers. [16]

      The technology areas are divided into four basic groups: (1) Chemical propulsion, (2) Nonchemical propulsion, (3) Advanced propulsion technologies, and (4) Supporting technologies. The grouping is based on the physics of the propulsion system and how it derives thrust, as well as its technical maturity. Additionally, there may be credible meritorious in-space propulsion concepts not foreseen or reviewed at the time of publication, and which may be shown to be beneficial to future mission applications. [17]

      Chemical propulsion

      A large fraction of the rocket engines in use today are chemical rockets; that is, they obtain the energy needed to generate thrust by chemical reactions to create a hot gas that is expanded to produce thrust. A significant limitation of chemical propulsion is that it has a relatively low specific impulse (Isp), which is the ratio of the thrust produced to the mass of propellant needed at a certain rate of flow. [4]

      A significant improvement (above 30%) in specific impulse can be obtained by using cryogenic propellants, such as liquid oxygen and liquid hydrogen, for example. Historically, these propellants have not been applied beyond upper stages. Furthermore, numerous concepts for advanced propulsion technologies, such as electric propulsion, are commonly used for station keeping on commercial communications satellites and for prime propulsion on some scientific space missions because they have significantly higher values of specific impulse. However, they generally have very small values of thrust and therefore must be operated for long durations to provide the total impulse required by a mission. [4] [18] [19] [20]

      Several of these technologies offer performance that is significantly better than that achievable with chemical propulsion.

      The Glenn Research Center aims to develop primary propulsion technologies which could benefit near and mid-term science missions by reducing cost, mass, and/or travel times. Propulsion architectures of particular interest to the GRC are electric propulsion systems, such as Ion and Hall thrusters. One system combines solar sails, a form of propellantless propulsion which relies on naturally-occurring starlight for propulsion energy, and Hall thrusters. Other propulsion technologies being developed include advanced chemical propulsion and aerocapture. [3] [21] [22]

Reaction engines

      Reaction engines produce thrust by expelling reaction mass, in accordance with Newton's third law of motion. This law of motion is most commonly paraphrased as: "For every action force there is an equal, but opposite, reaction force."

Rocket engines

Most rocket engines are internal combustion heat engines (although non-combusting forms exist). Rocket engines generally produce a high-temperature reaction mass, as a hot gas. This is achieved by combusting a solid, liquid or gaseous fuel with an oxidiser within a combustion chamber. The extremely hot gas is then allowed to escape through a high-expansion-ratio nozzle. This bell-shaped nozzle is what gives a rocket engine its characteristic shape. The effect of the nozzle is to dramatically accelerate the mass, converting most of the thermal energy into kinetic energy. Exhaust speeds as high as 10 times the speed of sound in air at sea level are common.

      Rocket engines provide essentially the highest specific powers and high specific thrusts of any engine used for spacecraft propulsion.

      Ion propulsion rockets can heat a plasma or charged gas inside a magnetic bottle and release it via a magnetic nozzle, so that no solid matter need come in contact with the plasma. Of course, the machinery to do this is complex, but research into nuclear fusion has developed methods, some of which have been proposed to be used in propulsion systems, and some have been tested in a lab.

      See rocket engine for a listing of various kinds of rocket engines using different heating methods, including chemical, electrical, solar, and nuclear.

Nonchemical propulsion

Electromagnetic propulsion

      Rather than relying on high temperature and fluid dynamics to accelerate the reaction mass to high speeds, there are a variety of methods that use electrostatic or electromagnetic forces to accelerate the reaction mass directly. Usually the reaction mass is a stream of ions. Such an engine typically uses electric power, first to ionize atoms, and then to create a voltage gradient to accelerate the ions to high exhaust velocities.

      The idea of electric propulsion dates back to 1906, when Robert Goddard considered the possibility in his personal notebook. [23] Konstantin Tsiolkovsky published the idea in 1911.

For these drives, at the highest exhaust speeds, both energetic efficiency and thrust are inversely proportional to exhaust velocity. Their very high exhaust velocity means they require huge amounts of energy and thus, with practical power sources, provide low thrust, but use hardly any propellant.
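
A sketch of the power/thrust trade-off (an idealized relation, not from the cited sources): if all useful power goes into a uniform exhaust jet, then P_jet = F * v_e / 2, so at fixed input power the available thrust falls as F = 2 * eta * P / v_e. The efficiency value below is an assumption.

def electric_thrust(power_w, exhaust_velocity_m_s, efficiency=0.7):
    """Thrust (N) of an idealized electric thruster at a given input power.
    Assumes the useful power all goes into a uniform exhaust jet: P_jet = F*v_e/2."""
    return 2.0 * efficiency * power_w / exhaust_velocity_m_s

# 10 kW of electrical power at several exhaust velocities (illustrative values)
for v_e in (5_000, 20_000, 50_000):  # m/s
    f = electric_thrust(10_000, v_e)
    print(f"v_e = {v_e:>6} m/s -> thrust ~ {f * 1000:.0f} mN")

Doubling the exhaust velocity halves the thrust that a given power supply can sustain, which is why high-Isp electric thrusters must run for long periods to build up the mission's total impulse.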

For some missions, particularly reasonably close to the Sun, solar energy may be sufficient, and has very often been used, but for others further out or at higher power, nuclear energy is necessary; engines drawing their power from a nuclear source are called nuclear electric rockets.

      With any current source of electrical power, chemical, nuclear or solar, the maximum amount of power that can be generated limits the amount of thrust that can be produced to a small value. Power generation adds significant mass to the spacecraft, and ultimately the weight of the power source limits the performance of the vehicle.

      Current nuclear power generators are approximately half the weight of solar panels per watt of energy supplied, at terrestrial distances from the Sun. Chemical power generators are not used due to the far lower total available energy. Beamed power to the spacecraft shows some potential.

      Some electromagnetic methods:

• Ion thrusters (accelerate ions first and later neutralize the ion beam with an electron stream emitted from a cathode called a neutralizer)

In electrothermal and electromagnetic thrusters, both ions and electrons are accelerated simultaneously, so no neutralizer is required.

Without internal reaction mass

The law of conservation of momentum is usually taken to imply that any engine which uses no reaction mass cannot accelerate the center of mass of a spaceship (changing orientation, on the other hand, is possible). But space is not empty, especially space inside the Solar System: there are gravitational fields, magnetic fields, electromagnetic waves, solar wind and solar radiation. Electromagnetic waves in particular are known to carry momentum despite being massless; specifically, the momentum flux density P of an EM wave is 1/c^2 times the Poynting vector S, i.e. P = S/c^2, where c is the speed of light. Field propulsion methods which do not rely on reaction mass must therefore try to take advantage of this fact by coupling to a momentum-bearing field such as an EM wave that exists in the vicinity of the craft. However, because many of these phenomena are diffuse in nature, the corresponding propulsion structures need to be proportionately large.
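
To put a number on P = S/c^2 (a back-of-the-envelope sketch using standard radiation-pressure formulas; the sail size is an assumption), the pressure on an absorbing surface is S/c, doubled for a perfect reflector, and the force is that pressure times the sail area:

C = 2.99792458e8          # speed of light, m/s
SOLAR_CONSTANT = 1361.0   # W/m^2, solar irradiance near Earth (1 AU)

def sail_force(area_m2, irradiance=SOLAR_CONSTANT, reflectivity=1.0):
    """Radiation-pressure force (N) on a flat sail facing the Sun.
    A perfect reflector doubles the momentum transfer, hence the (1 + r) factor."""
    return (1.0 + reflectivity) * irradiance * area_m2 / C

# Illustrative 800 m x 800 m sail at 1 AU
area = 800.0 * 800.0
print(f"force ~ {sail_force(area):.1f} N on {area:.0f} m^2 of sail")

Even a sail more than half a kilometre on a side collects only a few newtons, which is why such structures must be very large and very light.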

      There are several different space drives that need little or no reaction mass to function. A tether propulsion system employs a long cable with a high tensile strength to change a spacecraft's orbit, such as by interaction with a planet's magnetic field or through momentum exchange with another object. [24] Solar sails rely on radiation pressure from electromagnetic energy, but they require a large collection surface to function effectively. The magnetic sail deflects charged particles from the solar wind with a magnetic field, thereby imparting momentum to the spacecraft. A variant is the mini-magnetospheric plasma propulsion system, which uses a small cloud of plasma held in a magnetic field to deflect the Sun's charged particles. An E-sail would use very thin and lightweight wires holding an electric charge to deflect these particles, and may have more controllable directionality.

      As a proof of concept, NanoSail-D became the first nanosatellite to orbit Earth. [25] As of August 2017, NASA confirmed the Sunjammer solar sail project was concluded in 2014 with lessons learned for future space sail projects. [26] Cubesail will be the first mission to demonstrate solar sailing in low Earth orbit, and the first mission to demonstrate full three-axis attitude control of a solar sail. [27]

      Japan also launched its own solar sail powered spacecraft IKAROS in May 2010. IKAROS successfully demonstrated propulsion and guidance and is still flying today.

      A satellite or other space vehicle is subject to the law of conservation of angular momentum, which constrains a body from a net change in angular velocity. Thus, for a vehicle to change its relative orientation without expending reaction mass, another part of the vehicle may rotate in the opposite direction. Non-conservative external forces, primarily gravitational and atmospheric, can contribute up to several degrees per day to angular momentum, [28] so secondary systems are designed to "bleed off" undesired rotational energies built up over time. Accordingly, many spacecraft utilize reaction wheels or control moment gyroscopes to control orientation in space. [29]

      A gravitational slingshot can carry a space probe onward to other destinations without the expense of reaction mass. By harnessing the gravitational energy of other celestial objects, the spacecraft can pick up kinetic energy. [30] However, even more energy can be obtained from the gravity assist if rockets are used.

      Beam-powered propulsion is another method of propulsion without reaction mass. Beamed propulsion includes sails pushed by laser, microwave, or particle beams.

Advanced propulsion technology

Advanced, and in some cases theoretical, propulsion technologies may use chemical or nonchemical physics to produce thrust, but are generally considered to be of lower technical maturity, with challenges that have not been overcome. [31] For both human and robotic exploration, traversing the solar system is a struggle against time and distance. The most distant planets are 4.5–6 billion kilometers from the Sun, and to reach them in any reasonable time requires much more capable propulsion systems than conventional chemical rockets. Rapid inner solar system missions with flexible launch dates are difficult, requiring propulsion systems that are beyond today's state of the art. The logistics, and therefore the total system mass required to support sustained human exploration beyond Earth to destinations such as the Moon, Mars or Near Earth Objects, are daunting unless more efficient in-space propulsion technologies are developed and fielded. [32] [33]

      A variety of hypothetical propulsion techniques have been considered that require a deeper understanding of the properties of space, particularly inertial frames and the vacuum state. To date, such methods are highly speculative and include:

      A NASA assessment of its Breakthrough Propulsion Physics Program divides such proposals into those that are non-viable for propulsion purposes, those that are of uncertain potential, and those that are not impossible according to current theories. [34]

Table of methods

      Below is a summary of some of the more popular, proven technologies, followed by increasingly speculative methods.

Four numbers are shown. The first is the effective exhaust velocity: the equivalent speed at which the propellant leaves the vehicle. This is not necessarily the most important characteristic of the propulsion method; thrust, power consumption and other factors can be. However:

• if the delta-v is much more than the exhaust velocity, then exorbitant amounts of fuel are necessary (see the section on calculations, above)
• if the exhaust velocity is much more than the delta-v, then proportionally more energy is needed; if the power is limited, as with solar energy, this means that the journey takes proportionally longer

The second and third are the typical amounts of thrust and the typical burn times of the method. Outside a gravitational potential, small amounts of thrust applied over a long period will give the same effect as large amounts of thrust over a short period. (This result does not apply when the object is significantly influenced by gravity.)

      The fourth is the maximum delta-v this technique can give (without staging). For rocket-like propulsion systems this is a function of mass fraction and exhaust velocity. Mass fraction for rocket-like systems is usually limited by propulsion system weight and tankage weight. For a system to achieve this limit, typically the payload may need to be a negligible percentage of the vehicle, and so the practical limit on some systems can be much lower.
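
The delta-v limit for rocket-like systems follows from the Tsiolkovsky rocket equation, delta-v = v_e * ln(m0/m1). A short sketch (the exhaust velocity and mass ratios below are illustrative assumptions, not entries from the table) shows how quickly the required mass ratio grows once delta-v exceeds a few times the exhaust velocity:

import math

def delta_v(exhaust_velocity, initial_mass, final_mass):
    """Tsiolkovsky rocket equation: ideal delta-v for a single stage."""
    return exhaust_velocity * math.log(initial_mass / final_mass)

def required_mass_ratio(dv, exhaust_velocity):
    """Mass ratio m0/m1 needed to reach a given delta-v."""
    return math.exp(dv / exhaust_velocity)

v_e = 4400.0  # m/s, roughly a hydrogen/oxygen stage (illustrative figure)
print(f"a stage that is 90% propellant gives ~{delta_v(v_e, 10.0, 1.0):.0f} m/s")
for dv in (4_000, 9_000, 20_000):
    print(f"delta-v {dv:>6} m/s needs mass ratio ~{required_mass_ratio(dv, v_e):.1f}")

At 20 km/s of delta-v the mass ratio approaches 100, leaving essentially no room for payload, which is the practical limit described above.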

• Solar sail technology readiness: 9 (light-pressure attitude control flight proven); 6 (196 m² model, 1.12 mN, 400 m/s delta-v demonstrated in interplanetary space) [41]

Testing

Spacecraft propulsion systems are often first statically tested on Earth's surface, within the atmosphere, but many systems require a vacuum chamber to test fully. Rockets are usually tested at a rocket engine test facility well away from habitation and other buildings for safety reasons. Ion drives are far less dangerous and require much less stringent safety measures; usually only a large-ish vacuum chamber is needed.

      Famous static test locations can be found at Rocket Ground Test Facilities

      Some systems cannot be adequately tested on the ground and test launches may be employed at a Rocket Launch Site.


      What Makes Fusion Hard

A simple obstacle stands between us and fusion. It’s called the Coulomb barrier. Protons hate to get near each other, on account of their mutual positive charge and concomitant electrostatic repulsion. And they must get very close—about 10⁻¹⁵ m—before the strong nuclear force overpowers Coulomb’s vote. Even on a perfect collision course, two protons would have to have a closing velocity of 20 million meters per second (7% the speed of light) to get within 10⁻¹⁵ m of each other, corresponding to a temperature around 5 billion degrees! Even if the velocity is sufficient, the slightest misalignment will cause the repulsive duo to veer off course, not even flirting with contact. Quantum tunneling can take a bit of the edge off, requiring maybe a factor of two less energy/closeness, but all the same, it’s frickin’ hard to get protons together.
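
Those numbers can be sanity-checked with a classical estimate (a sketch that ignores tunneling; the 10⁻¹⁵ m separation comes from the text, the temperature convention is an assumption):

import math

E_CHARGE = 1.602176634e-19   # C
K_COULOMB = 8.9875517923e9   # N m^2 / C^2
M_PROTON = 1.67262192e-27    # kg
K_BOLTZ = 1.380649e-23       # J/K

r = 1e-15  # m, separation at which the strong force takes over (as in the text)

# Coulomb potential energy of two protons at that separation
barrier_J = K_COULOMB * E_CHARGE**2 / r

# Closing (relative) speed needed if all of that energy comes from motion:
# (1/2) * mu * v_rel^2 = barrier, with reduced mass mu = m_p / 2
mu = M_PROTON / 2.0
v_rel = math.sqrt(2.0 * barrier_J / mu)

# Equivalent temperature if two head-on protons each carry their mean
# thermal kinetic energy, (3/2) k T apiece (one common convention)
T = barrier_J / (3.0 * K_BOLTZ)

print(f"barrier height ~ {barrier_J / E_CHARGE / 1e6:.2f} MeV")
print(f"closing speed ~ {v_rel:.2e} m/s ({v_rel / 2.998e8 * 100:.0f}% of c)")
print(f"equivalent temperature ~ {T:.1e} K")

The output lands in the same ballpark as the figures quoted above: a barrier of about 1.4 MeV, a closing speed of a few percent of the speed of light, and an equivalent temperature of several billion kelvin.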

Yet our Sun manages to do it, at a mere 16 million degrees in its core. How does it manage to make a profit? Volume. The protons in the Sun are racing around at a variety of velocities according to the temperature. While the typical velocity is far too small to defeat the Coulomb barrier, some speed demons on the tail of the velocity distribution curve do have the requisite energy. And there are enough of them in the vast volume of the Sun’s core to occasionally hit head on and latch together. One of the protons must promptly beta-plus decay into a neutron and presto-mundo, we have a deuteron! Deuterons can then collide to make helium (other paths to helium are also followed). A quick and crude calculation suggests that we need about 10³⁸ “sticky” collisions per second to keep the Sun going, while within the core we get about 10⁶⁴ bumps/interactions per second, implying only one in 10²⁶ collisions needs to be a successful fusion event.
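
The "quick and crude calculation" can be reproduced in a few lines (a sketch using standard textbook values, not the author's exact figures): divide the Sun's luminosity by the energy released per helium-4 produced, then note that the pp-I branch needs two p + p "sticky" fusions per helium.

L_SUN = 3.828e26          # W, solar luminosity
MEV = 1.602176634e-13     # J per MeV

E_PER_HELIUM = 26.7 * MEV            # energy released per helium-4 from the pp chain
helium_per_s = L_SUN / E_PER_HELIUM  # helium-4 nuclei produced each second

# Each helium-4 in the pp-I branch requires two p + p ("sticky") fusions,
# so the sticky-collision rate is roughly twice the helium production rate.
sticky_per_s = 2.0 * helium_per_s

print(f"helium-4 produced: ~{helium_per_s:.1e} per second")
print(f"p+p fusion events: ~{sticky_per_s:.1e} per second (order 10^38, as in the text)")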

Deuterons have an easier time bumping into each other than do lone protons, mainly because their physical size is larger. In fact, a deuteron’s relatively weak binding makes them even puffier than the more tightly bound tritium nucleus (go tritons!). At a given temperature, deuterons will move more slowly than protons, and tritons more slowly than deuterons. All flavors contain a single proton—and so exert the same repulsive force on each other—but the increased inertia from extra neutrons exactly counters the slower speed, so that each has the same likelihood of trucking through the Coulomb barrier. Then we’re left with size. Deuterons are bigger than tritons, so D-D bumps will be more common than D-T bumps.

      But there’s a catch. As soon as D and T touch, they stick together. Conversely, when D touches D, a photon (light) must be emitted in order for them to stick, which doesn’t usually happen. It is therefore said that D-T has a greater cross section for fusion than D-D. Estimates for the critical temperature required to achieve fusion come in at 400 million Kelvin for D-D fusion, and 45 million K for the D-T variety. But these temperature thresholds depend on the density of the plasma involved, so should not be taken as hard-and-fast. Still, we need our fusion reactors to be hotter than the center of the Sun because we do not have the luxury of volume and density that the solar core enjoys. Does this fact give you pause?


      Water Rockets

The first thing you need for the rocket is some form of bung in the bottom of the bottle through which you can pump some air. There are various ways of doing this; one of the best is to use a rubber bung or cork, similar to those used in science lessons and wine making. You will need one that will jam into the neck of the bottle well.

      If you can't get a rubber bung you can make one out of a wine bottle cork, ideally one that tapers so it is wider at one end than the other. However, wine bottle corks are often too small, and for an ideal bottle rocket you will need something with more grip. A good way to get this grip is to cover the cork with a couple of fingers cut from an old rubber glove - turn these inside out so that they make a good seal on the inside of the bottle. Glue it all together to seal the gaps.

Image captions: a rubber bung with a ball inflator; a rubber bung with a bike valve; a cork covered in rubber-glove fingers with a ball inflator; a cork with a bike valve through the middle.

      You then need to be able to connect the bung to the pump. If you have a football inflating adapter you can push it into the bung, but you may have to drill from the other side to make a hole for it to meet. You can also use a bike tyre valve to make a more robust version.

      Keep safe - get an adult to do the drilling and if you are the adult be very careful drilling rubber - it is not predictable, and will catch in unexpected ways. You could try holding the cork in the top of a bottle.

Once you have made your bung, you then need a launcher. This can be as simple as 4 or 5 pieces of wood or dowel knocked into the ground around the rocket to hold it stable while it launches.

      Put some (or no) water in your bottle, jam the cork in and pump it up!

      Be careful not to stand over the rocket as it is a rocket, so is about to fly upwards very quickly!!

      Try using different amounts of water to see how that affects the flight.

      You could try adding fins to make it fly straighter.

      Result

      You should find that the bung is pushed out and then the rocket flies up into the air in a very satisfying way.

      We tried launching a (very cheap) digital camera with ours and the results are below:

      Many thanks to the Fracture Group in the Cavendish Laboratory for the use of a high speed camera.

      Explanation

Sir Isaac Newton worked out something very fundamental about the universe: he said that 'every action has an equal and opposite reaction'. What this means is that for every force there is always an equal and opposite force. So if you push something one way, it will push you back.

Image captions: if you push something it will push back (every action has an equal and opposite reaction); if you throw a weight in one direction it will push you over in the other.

      This is how your bottle rocket works. The rocket is pushing water downwards, which means that the water pushes the rocket upwards so hard that it overcomes gravity and will fly!

      In more detail

As you pump air into the bottle the pressure inside builds up; this air pushes out in all directions, including downwards on the bung.

      At a certain pressure the friction between the bung and the bottle is not strong enough to hold it in and the bung gets pushed out. This allows the water to be pushed out by the air that you have pumped in. It also means that the main factor controlling what pressure the rocket launches at, and therefore how high it goes, is the amount of friction on the bung. This is why you should cover a slippery cork with a high-grip rubber glove.

As the water is pushed out, the air expands and the air pressure reduces slightly.

Eventually, normally after 2-3 m of flight, the rocket runs out of water and it coasts for the rest of its journey.

Image captions: pumping air into the bottle increases its pressure; eventually the bung is pushed out, allowing the water to escape; the air pushes the water down, which means the water pushes it back, and this force is transferred to the rocket; eventually the rocket runs out of water and just coasts.
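
If you want to put rough numbers on the launch physics described above (a simplified, idealized estimate with assumed figures, not part of the original activity), Bernoulli's equation gives the speed of the escaping water, v = sqrt(2*dP/rho), and the thrust while water remains is the momentum flow rate, rho*A*v^2:

import math

RHO_WATER = 1000.0  # kg/m^3

def exhaust_speed(gauge_pressure_pa):
    """Approximate water exit speed from Bernoulli: v = sqrt(2*dP/rho)."""
    return math.sqrt(2.0 * gauge_pressure_pa / RHO_WATER)

def water_thrust(gauge_pressure_pa, nozzle_diameter_m):
    """Instantaneous thrust while water is being expelled:
    F = rho * A * v^2 = 2 * dP * A (same idealized assumptions)."""
    area = math.pi * (nozzle_diameter_m / 2.0) ** 2
    return 2.0 * gauge_pressure_pa * area

# Assumed example: 4 bar (gauge) in the bottle, 22 mm bottle neck
p = 4.0e5   # Pa
d = 0.022   # m
print(f"water leaves at ~{exhaust_speed(p):.0f} m/s")
print(f"initial thrust ~{water_thrust(p, d):.0f} N")

A few hundred newtons acting on a bottle that weighs well under a kilogram is why the rocket leaps off the launcher so quickly.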

      Why does it still work with no water?

Even with no weight of water inside the bottle, the bottle rocket will still fly upwards. This is because the air in the bottle has a mass, so when it is pushed downwards there is still an equal and opposite reaction pushing back up. However, because air is very light the bottle will empty very quickly. This means the force doesn't last very long, but as the rocket is very light it will accelerate very quickly.

Image captions: the air being pushed downwards also produces an upward force on the rocket; but the air escapes very quickly, so the force doesn't last long; eventually the rocket starts to tumble, slowing it down, and fins tend to stop this, making it travel further.

      Why doesn't the almost full rocket work well?

      In this case the rocket will have a large upward force on it from pushing the water downwards, but the bottle will be very heavy. This excess weight will mean that it doesn't accelerate very fast. Also, because there is little air in the rocket, the air would need to expand to 3-4 times its original volume in order to push out all of the water. This means that the pressure will drop drastically and the air will no longer be able to push the water out. The rocket will often crash back to the ground still half full of water.

Image captions: there is a large force to start with, but the rocket is heavy so it doesn't accelerate quickly; as the air expands its pressure decreases rapidly; eventually the pressure reaches the same as outside, so the water is no longer pushed out and the rocket slows down and falls.

      How much water should I put in?

      This depends on lots of things, such as what pressure the bung will come out, the air resistance of your rocket and how much payload it is carrying, but between a quarter and a third full works well.


Quantum Tunneling

      Quantum tunneling was developed from the study of radioactivity, [4] which was discovered in 1896 by Henri Becquerel. [10] Radioactivity was examined further by Marie Curie and Pierre Curie, for which they earned the Nobel Prize in Physics in 1903. [10] Ernest Rutherford and Egon Schweidler studied its nature, which was later verified empirically by Friedrich Kohlrausch. The idea of half-life and the possibility of predicting decay was created from their work. [4]

      In 1901, Robert Francis Earhart discovered an unexpected conduction regime while investigating the conduction of gases between closely spaced electrodes using the Michelson interferometer. J. J. Thomson commented that the finding warranted further investigation. In 1911 and then 1914, then-graduate student Franz Rother directly measured steady field emission currents. He employed Earhart's method for controlling and measuring the electrode separation, but with a sensitive platform galvanometer. In 1926, Rother measured the field emission currents in a "hard" vacuum between closely spaced electrodes. [11]

Quantum tunneling was first noticed in 1927 by Friedrich Hund while he was calculating the ground state of the double-well potential. [10] Leonid Mandelstam and Mikhail Leontovich discovered it independently in the same year. They were analyzing the implications of the then new Schrödinger wave equation. [12]

      Its first application was a mathematical explanation for alpha decay, which was developed in 1928 by George Gamow (who was aware of Mandelstam and Leontovich's findings [13] ) and independently by Ronald Gurney and Edward Condon. [14] [15] [16] [17] The latter researchers simultaneously solved the Schrödinger equation for a model nuclear potential and derived a relationship between the half-life of the particle and the energy of emission that depended directly on the mathematical probability of tunneling.

After attending a Gamow seminar, Max Born recognised the generality of tunneling. He realised that it was not restricted to nuclear physics, but was a general result of quantum mechanics that applied to many different systems. [4] Shortly thereafter, both groups considered the case of particles tunneling into the nucleus. The study of semiconductors and the development of transistors and diodes led to the acceptance of electron tunneling in solids by 1957. Leo Esaki demonstrated tunneling in semiconductors, Ivar Giaever demonstrated it in superconductors, and Brian Josephson predicted the tunneling of superconducting Cooper pairs; the three shared the Nobel Prize in Physics in 1973. [4] In 2016, the quantum tunneling of water was discovered. [18]

      Quantum tunneling falls under the domain of quantum mechanics: the study of what happens at the quantum scale. Tunneling cannot be directly perceived. Much of its understanding is shaped by the microscopic world, which classical mechanics cannot explain. To understand the phenomenon, particles attempting to travel across a potential barrier can be compared to a ball trying to roll over a hill.

      Quantum mechanics and classical mechanics differ in their treatment of this scenario. Classical mechanics predicts that particles that do not have enough energy to classically surmount a barrier cannot reach the other side. Thus, a ball without sufficient energy to surmount the hill would roll back down. A ball that lacks the energy to penetrate a wall bounces back. Alternatively, the ball might become part of the wall (absorption).

In quantum mechanics, these particles can, with a small probability, tunnel to the other side, thus crossing the barrier. The ball, in a sense, borrows energy from its surroundings to cross the wall. It then repays the energy by making the reflected electrons more energetic than they otherwise would have been. [19]

The reason for this difference comes from treating matter as having properties of waves and particles. One interpretation of this duality involves the Heisenberg uncertainty principle, which defines a limit on how precisely the position and the momentum of a particle can be simultaneously known. [10] This implies that no solution can have a probability of exactly zero (or one): if, for example, the position were known with certainty (probability 1), the uncertainty in momentum, and hence in speed, would have to be infinite, which is impossible. Hence, the probability of a given particle's existence on the opposite side of an intervening barrier is non-zero, and such particles will appear on the 'other' side (a semantically difficult word in this instance) in proportion to this probability.

The tunneling problem

      The wave function of a particle summarizes everything that can be known about a physical system. [20] Therefore, problems in quantum mechanics analyze the system's wave function. Using mathematical formulations, such as the Schrödinger equation, the wave function can be deduced. The square of the absolute value of this wavefunction is directly related to the probability distribution of the particle's position, which describes the probability that the particle is at any given place. The wider the barrier and the higher the barrier energy, the lower the probability of tunneling.

      A simple model of a tunneling barrier, such as the rectangular barrier, can be analysed and solved algebraically. In canonical field theory, the tunneling is described by a wave function which has a non-zero amplitude inside the tunnel but the current is zero there because the relative phase of the amplitude of the conjugate wave function (the time derivative) is orthogonal to it.
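
For the rectangular barrier, the textbook transmission coefficient for a particle with energy E below a barrier of height V0 and width L is T = [1 + V0^2 sinh^2(kappa*L) / (4E(V0 - E))]^-1, with kappa = sqrt(2m(V0 - E))/hbar. The sketch below (illustrative electron energies and widths, not from the cited text) shows how quickly tunneling dies off with barrier width:

import math

HBAR = 1.054571817e-34    # J s
M_E = 9.1093837015e-31    # kg, electron mass
EV = 1.602176634e-19      # J per eV

def transmission(E_eV, V0_eV, width_m, mass=M_E):
    """Transmission probability through a 1-D rectangular barrier (E < V0)."""
    E, V0 = E_eV * EV, V0_eV * EV
    kappa = math.sqrt(2.0 * mass * (V0 - E)) / HBAR
    s = math.sinh(kappa * width_m)
    return 1.0 / (1.0 + (V0**2 * s**2) / (4.0 * E * (V0 - E)))

# A 5 eV electron meeting a 10 eV barrier of increasing width (illustrative)
for width_nm in (0.2, 0.5, 1.0):
    t = transmission(5.0, 10.0, width_nm * 1e-9)
    print(f"width {width_nm} nm -> T ~ {t:.2e}")

Going from 0.2 nm to 1 nm drops the transmission by roughly eight orders of magnitude, which is the essentially exponential sensitivity exploited by tunneling devices discussed later.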


The second illustration shows the uncertainty principle at work. A wave impinges on the barrier; the barrier forces it to become taller and narrower. The wave becomes much more de-localized: it is now on both sides of the barrier, wider on each side and lower in maximum amplitude but equal in total amplitude. In both illustrations, the localization of the wave in space causes the localization of the action of the barrier in time, thus scattering the energy/momentum of the wave.

Problems in real life often do not have an analytic solution, so "semiclassical" or "quasiclassical" methods have been developed to offer approximate solutions, such as the WKB approximation. Probabilities may be derived with arbitrary precision, as constrained by computational resources, via Feynman's path integral method. Such precision is seldom required in engineering practice.

      The concept of quantum tunneling can be extended to situations where there exists a quantum transport between regions that are classically not connected even if there is no associated potential barrier. This phenomenon is known as dynamical tunneling. [21] [22]

Tunneling in phase space

      The concept of dynamical tunneling is particularly suited to address the problem of quantum tunneling in high dimensions (d>1). In the case of an integrable system, where bounded classical trajectories are confined onto tori in phase space, tunneling can be understood as the quantum transport between semi-classical states built on two distinct but symmetric tori. [23]

Chaos-assisted tunneling

In real life, most systems are not integrable and display various degrees of chaos. Classical dynamics is then said to be mixed, and the system phase space is typically composed of islands of regular orbits surrounded by a large sea of chaotic orbits. The existence of the chaotic sea, where transport is classically allowed, between the two symmetric tori then assists the quantum tunneling between them. This phenomenon is referred to as chaos-assisted tunneling [24] and is characterized by sharp resonances of the tunneling rate when varying any system parameter.

Resonance-assisted tunneling

Several phenomena have the same behavior as quantum tunneling, and can be accurately described by tunneling. Examples include the tunneling of a classical wave-particle association, [26] evanescent wave coupling (the application of Maxwell's wave equation to light) and the application of the non-dispersive wave equation from acoustics to "waves on strings". Evanescent wave coupling was, until recently, only called "tunneling" in quantum mechanics; now the term is used in other contexts as well.

These effects are modeled similarly to the rectangular potential barrier. In these cases, there is one transmission medium through which the wave propagates that is the same or nearly the same throughout, and a second medium through which the wave travels differently. This can be described as a thin region of medium B between two regions of medium A. The analysis of a rectangular barrier by means of the Schrödinger equation can be adapted to these other effects provided that the wave equation has travelling wave solutions in medium A but real exponential solutions in medium B.

      In optics, medium A is a vacuum while medium B is glass. In acoustics, medium A may be a liquid or gas and medium B a solid. For both cases, medium A is a region of space where the particle's total energy is greater than its potential energy and medium B is the potential barrier. These have an incoming wave and resultant waves in both directions. There can be more mediums and barriers, and the barriers need not be discrete. Approximations are useful in this case.

Tunneling is the cause of some important macroscopic physical phenomena, and it has important implications for the functioning of nanotechnology. [9]

Electronics

      Tunneling is a source of current leakage in very-large-scale integration (VLSI) electronics and results in the substantial power drain and heating effects that plague such devices. It is considered the lower limit on how microelectronic device elements can be made. [27] Tunneling is a fundamental technique used to program the floating gates of flash memory.

Cold emission

      Cold emission of electrons is relevant to semiconductors and superconductor physics. It is similar to thermionic emission, where electrons randomly jump from the surface of a metal to follow a voltage bias because they statistically end up with more energy than the barrier, through random collisions with other particles. When the electric field is very large, the barrier becomes thin enough for electrons to tunnel out of the atomic state, leading to a current that varies approximately exponentially with the electric field. [28] These materials are important for flash memory, vacuum tubes, as well as some electron microscopes.

Tunnel junction

A simple barrier can be created by separating two conductors with a very thin insulator. These are tunnel junctions, the study of which requires understanding quantum tunneling. [29] Josephson junctions take advantage of quantum tunneling and the superconductivity of certain materials to create the Josephson effect. This has applications in precision measurements of voltages and magnetic fields, [28] as well as the multijunction solar cell.

Quantum-dot cellular automata

      QCA is a molecular binary logic synthesis technology that operates by the inter-island electron tunneling system. This is a very low power and fast device that can operate at a maximum frequency of 15 PHz. [30]

Tunnel diode

      Diodes are electrical semiconductor devices that allow electric current flow in one direction more than the other. The device depends on a depletion layer between N-type and P-type semiconductors to serve its purpose. When these are heavily doped the depletion layer can be thin enough for tunneling. When a small forward bias is applied, the current due to tunneling is significant. This has a maximum at the point where the voltage bias is such that the energy level of the p and n conduction bands are the same. As the voltage bias is increased, the two conduction bands no longer line up and the diode acts typically. [31]

      Because the tunneling current drops off rapidly, tunnel diodes can be created that have a range of voltages for which current decreases as voltage increases. This peculiar property is used in some applications, such as high speed devices where the characteristic tunneling probability changes as rapidly as the bias voltage. [31]

The resonant tunneling diode makes use of quantum tunneling in a very different manner to achieve a similar result. This diode has a resonant voltage at which a large current flows, achieved by placing two thin layers with a high-energy conduction band near each other. This creates a quantum potential well that has a discrete lowest energy level. When this energy level is higher than that of the electrons, no tunneling occurs and the diode is in reverse bias. Once the two voltage energies align, the electrons flow like an open wire. As the voltage further increases, tunneling becomes improbable and the diode acts like a normal diode again before a second energy level becomes noticeable. [32]

Tunnel field-effect transistors

      A European research project demonstrated field effect transistors in which the gate (channel) is controlled via quantum tunneling rather than by thermal injection, reducing gate voltage from ≈1 volt to 0.2 volts and reducing power consumption by up to 100×. If these transistors can be scaled up into VLSI chips, they would improve the performance per power of integrated circuits. [33] [34]

Nuclear fusion

      Quantum tunneling is an essential phenomenon for nuclear fusion. The temperature in stars' cores is generally insufficient to allow atomic nuclei to overcome the Coulomb barrier and achieve thermonuclear fusion. Quantum tunneling increases the probability of penetrating this barrier. Though this probability is still low, the extremely large number of nuclei in the core of a star is sufficient to sustain a steady fusion reaction. [35]
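
A hedged sketch of the standard Gamow estimate (not from the cited reference): for two colliding protons, the barrier-penetration probability scales roughly as exp(-sqrt(E_G/E)), where the Gamow energy E_G for the proton-proton pair is commonly quoted as about 490 keV. At solar-core temperatures this gives a minuscule, but non-zero, tunneling probability per collision.

import math

K_B_KEV_PER_K = 8.617333262e-8   # Boltzmann constant in keV per kelvin
E_GAMOW_KEV = 493.0              # approximate Gamow energy for p-p collisions

def tunneling_probability(E_keV):
    """Rough Gamow barrier-penetration factor exp(-sqrt(E_G/E)) for two protons."""
    return math.exp(-math.sqrt(E_GAMOW_KEV / E_keV))

T_core = 1.5e7                        # K, approximate solar-core temperature
E_thermal = K_B_KEV_PER_K * T_core    # ~1.3 keV typical thermal energy
print(f"typical thermal energy ~ {E_thermal:.2f} keV")
print(f"tunneling probability at that energy ~ {tunneling_probability(E_thermal):.1e}")

A probability of order 10^-9 per suitable collision would be hopeless in a laboratory-sized plasma, but across the enormous number of nuclei in a stellar core it sustains steady fusion.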

Radioactive decay

      Radioactive decay is the process of emission of particles and energy from the unstable nucleus of an atom to form a stable product. This is done via the tunneling of a particle out of the nucleus (an electron tunneling into the nucleus is electron capture). This was the first application of quantum tunneling. Radioactive decay is a relevant issue for astrobiology as this consequence of quantum tunneling creates a constant energy source over a large time interval for environments outside the circumstellar habitable zone where insolation would not be possible (subsurface oceans) or effective. [35]

Astrochemistry in interstellar clouds

By including quantum tunneling, the astrochemical syntheses of various molecules in interstellar clouds can be explained, such as the synthesis of molecular hydrogen, water (ice) and the prebiotically important formaldehyde. [35]

Quantum biology

      Quantum tunneling is among the central non-trivial quantum effects in quantum biology. Here it is important both as electron tunneling and proton tunneling. [36] Electron tunneling is a key factor in many biochemical redox reactions (photosynthesis, cellular respiration) as well as enzymatic catalysis. Proton tunneling is a key factor in spontaneous DNA mutation. [35]

Spontaneous mutation occurs when normal DNA replication takes place after a particularly significant proton has tunnelled. [37] A hydrogen bond joins DNA base pairs. Along a hydrogen bond, a double-well potential is separated by a potential energy barrier. It is believed that the double-well potential is asymmetric, with one well deeper than the other, such that the proton normally rests in the deeper well. For a mutation to occur, the proton must have tunnelled into the shallower well. The proton's movement from its regular position is called a tautomeric transition. If DNA replication takes place in this state, the base-pairing rule for DNA may be jeopardised, causing a mutation. [38] Per-Olov Löwdin was the first to develop this theory of spontaneous mutation within the double helix. Other instances of quantum tunneling-induced mutations in biology are believed to be a cause of ageing and cancer. [39]

Quantum conductivity

      While the Drude model of electrical conductivity makes excellent predictions about the nature of electrons conducting in metals, it can be furthered by using quantum tunneling to explain the nature of the electron's collisions. [28] When a free electron wave packet encounters a long array of uniformly spaced barriers, the reflected part of the wave packet interferes uniformly with the transmitted one between all barriers so that 100% transmission becomes possible. The theory predicts that if positively charged nuclei form a perfectly rectangular array, electrons will tunnel through the metal as free electrons, leading to extremely high conductance, and that impurities in the metal will disrupt it significantly. [28]

Scanning tunneling microscope

The scanning tunneling microscope (STM), invented by Gerd Binnig and Heinrich Rohrer, may allow imaging of individual atoms on the surface of a material. [28] It operates by taking advantage of the strong dependence of the tunneling current on distance. When the tip of the STM's needle is brought close to a conducting surface that has a voltage bias, measuring the current of electrons that are tunneling between the needle and the surface reveals the distance between the needle and the surface. By using piezoelectric rods that change in size when voltage is applied, the height of the tip can be adjusted to keep the tunneling current constant. The time-varying voltages that are applied to these rods can be recorded and used to image the surface of the conductor. [28] STMs are accurate to 0.001 nm, or about 1% of an atomic diameter. [32]
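
The distance sensitivity that makes the STM work can be illustrated with the usual exponential estimate (the ~4.5 eV work function is an assumed, typical metal value, not from the cited text): the tunneling current scales as I ~ exp(-2*kappa*d) with kappa = sqrt(2*m_e*phi)/hbar, so a 0.1 nm change in tip height changes the current by roughly an order of magnitude.

import math

HBAR = 1.054571817e-34    # J s
M_E = 9.1093837015e-31    # kg, electron mass
EV = 1.602176634e-19      # J per eV

def current_drop_factor(delta_d_m, work_function_eV=4.5):
    """Factor by which the tunneling current falls when the tip-sample
    gap widens by delta_d, using I ~ exp(-2*kappa*d)."""
    kappa = math.sqrt(2.0 * M_E * work_function_eV * EV) / HBAR
    return math.exp(2.0 * kappa * delta_d_m)

# Pull the tip back by 0.1 nm (assumed ~4.5 eV barrier)
print(f"current drops by a factor of ~{current_drop_factor(1e-10):.0f}")

This extreme sensitivity is what lets the feedback loop on the piezoelectric rods track the surface height to a fraction of an atomic diameter.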

Kinetic isotope effect

      In chemical kinetics, the substitution of a light isotope of an element with a heavier one typically results in a slower reaction rate. This is generally attributed to differences in the zero-point vibrational energies for chemical bonds containing the lighter and heavier isotopes and is generally modeled using transition state theory. However, in certain cases, large isotope effects are observed that cannot be accounted for by a semi-classical treatment, and quantum tunneling is required. R. P. Bell developed a modified treatment of Arrhenius kinetics that is commonly used to model this phenomenon. [40]

      Some physicists have claimed that it is possible for spin-zero particles to travel faster than the speed of light when tunneling. [4] This apparently violates the principle of causality, since a frame of reference then exists in which the particle arrives before it has left. In 1998, Francis E. Low reviewed briefly the phenomenon of zero-time tunneling. [41] More recently, experimental tunneling time data of phonons, photons, and electrons was published by Günter Nimtz. [42]

      Other physicists, such as Herbert Winful, [43] disputed these claims. Winful argued that the wavepacket of a tunneling particle propagates locally, so a particle can't tunnel through the barrier non-locally. Winful also argued that the experiments that are purported to show non-local propagation have been misinterpreted. In particular, the group velocity of a wavepacket does not measure its speed, but is related to the amount of time the wavepacket is stored in the barrier. But the problem remains that the wave function still rises inside the barrier at all points at the same time. In other words, in any region that is inaccessible to measurement, non-local propagation is still mathematically certain.

      An experiment done in 2020, overseen by Aephraim Steinberg, showed that particles should be able to tunnel at apparent speeds faster than light. [44] [45]


      Bottling the World’s Coldest Plasma to Unlock the Secrets of Fusion Power

      Rice University physicists have discovered a way to trap the world’s coldest plasma in a magnetic bottle, a technological achievement that could advance research into clean energy, space weather and astrophysics.

      “To understand how the solar wind interacts with the Earth, or to generate clean energy from nuclear fusion, one has to understand how plasma — a soup of electrons and ions — behaves in a magnetic field,” said Rice Dean of Natural Sciences Tom Killian, the corresponding author of a published study about the work in Physical Review Letters.

      Using laser-cooled strontium, Killian and graduate students Grant Gorman and MacKenzie Warrens made a plasma about 1 degree above absolute zero, or approximately -272 degrees Celsius, and trapped it briefly with forces from surrounding magnets. It is the first time an ultracold plasma has been magnetically confined, and Killian, who’s studied ultracold plasmas for more than two decades, said it opens the door for studying plasmas in many settings.

      “This provides a clean and controllable testbed for studying neutral plasmas in far more complex locations, like the sun’s atmosphere or white dwarf stars,” said Killian, a professor of physics and astronomy. “It’s really helpful to have the plasma so cold and to have these very clean laboratory systems. Starting off with a simple, small, well-controlled, well-understood system allows you to strip away some of the clutter and really isolate the phenomenon you want to see.”

      Rice University graduate student MacKenzie Warrens adjusts a laser-cooling experiment in Rice’s Ultracold Atoms and Plasmas Lab. Credit: Photo by Jeff Fitlow/Rice University

      That’s important for study co-author Stephen Bradshaw, a Rice astrophysicist who specializes in studying plasma phenomena on the sun.

“Throughout the sun’s atmosphere, the (strong) magnetic field has the effect of altering everything relative to what you would expect without a magnetic field, but in very subtle and complicated ways that can really trip you up if you don’t have a really good understanding of it,” said Bradshaw, an associate professor of physics and astronomy.

Solar physicists rarely get a clear observation of specific features in the sun’s atmosphere because part of the atmosphere lies between the camera and those features, and unrelated phenomena in the intervening atmosphere obscure what they’d like to observe.

      “Unfortunately, because of this line-of-sight problem, observational measurements of plasma properties are associated with quite a lot of uncertainty,” Bradshaw said. “But as we improve our understanding of the phenomena, and crucially, use the laboratory results to test and calibrate our numerical models, then hopefully we can reduce the uncertainty in these measurements.”

      Images produced by laser-induced fluorescence show how a rapidly expanding cloud of ultracold plasma (yellow and gold) behaves when confined by a quadrupole magnet. Ultracold plasmas are created in the center of the chamber (left) and expand rapidly, typically dissipating in a few thousandths of a second. Using strong magnetic fields (pink), Rice University physicists trapped and held ultracold plasmas for several hundredths of a second. By studying how plasmas interact with strong magnetic fields in such experiments, researchers hope to answer research questions related to clean fusion energy, solar physics, space weather and more. Credit: Image courtesy of T. Killian/Rice University

      Plasma is one of four fundamental states of matter, but unlike solids, liquids, and gases, plasmas aren’t generally part of everyday life because they tend to occur in very hot places like the sun, a lightning bolt or candle flame. Like those hot plasmas, Killian’s plasmas are soups of electrons and ions, but they’re made cold by laser-cooling, a technique developed a quarter century ago to trap and slow matter with light.

      Killian said the quadrupole magnetic setup that was used to trap the plasma is a standard part of the ultracold setup that his lab and others use to make ultracold plasmas. But finding out how to trap plasma with the magnets was a thorny problem because the magnetic field plays havoc with the optical system that physicists use to look at ultracold plasmas.

      “Our diagnostic is laser-induced fluorescence, where we shine a laser beam onto the ions in our plasma, and if the frequency of the beam is just right, the ions will scatter photons very effectively,” he said. “You can take a picture of them and see where the ions are, and you can even measure their velocity by looking at the Doppler shift, just like using a radar gun to see how fast a car is moving. But the magnetic fields actually shift around the resonant frequencies, and we have to disentangle the shifts in the spectrum that are coming from the magnetic field from the Doppler shifts we’re interested in observing.”

      That complicates experiments significantly, and to make matters even more complicated, the magnetic fields change dramatically throughout the plasma.

      Rice University physicists (from left) Grant Gorman, Tom Killian and MacKenzie Warrens discovered how to trap the world’s coldest plasma in a magnetic bottle, a technological achievement that could advance research into clean energy, space weather and solar physics. Credit: Photo by Jeff Fitlow/Rice University

      “So we have to deal with not just a magnetic field, but a magnetic field that’s varying in space, in a reasonably complicated way, in order to understand the data and figure out what’s happening in the plasma,” Killian said. “We spent a year just trying to figure out what we were seeing once we got the data.”

The plasma behavior in the experiments is also made more complex by the magnetic field, which is precisely why the trapping technique could be so useful.

      “There is a lot of complexity as our plasma expands across these field lines and starts to feel the forces and get trapped,” Killian said. “This is a really common phenomenon, but it’s very complicated and something we really need to understand.”

      One example from nature is the solar wind, streams of high-energy plasma from the sun that cause the aurora borealis, or northern lights. When plasma from the solar wind strikes Earth, it interacts with our planet’s magnetic field, and the details of those interactions are still unclear. Another example is fusion energy research, where physicists and engineers hope to recreate the conditions inside the sun to create a vast supply of clean energy.

      Rice University plasma physicist Stephen Bradshaw studies solar flares, heating in the sun’s atmosphere, solar wind and other solar physics phenomena. Credit: Jeff Fitlow/Rice University

      Killian said the quadrupole magnetic setup that he, Gorman and Warrens used to bottle their ultracold plasmas is similar to designs that fusion energy researchers developed in the 1960s. The plasma for fusion needs to be about 150 million degrees Celsius, and magnetically containing it is a challenge, Bradshaw said, in part because of unanswered questions about how the plasma and magnetic fields interact and influence one another.

      “One of the major problems is keeping the magnetic field stable enough for long enough to actually contain the reaction,” Bradshaw said. “As soon as there’s a small sort of perturbation in the magnetic field, it grows and ‘pfft,’ the nuclear reaction is ruined.

      “For it to work well, you have to keep things really, really stable,” he said. “And there again, looking at things in a really nice, pristine laboratory plasma could help us better understand how particles interact with the field.”

      Reference: “Magnetic Confinement of an Ultracold Neutral Plasma” by G. M. Gorman, M. K. Warrens, S. J. Bradshaw, and T. C. Killian, 25 February 2021, Physical Review Letters.
      DOI: 10.1103/PhysRevLett.126.085002

      The research was supported by the Air Force Office of Scientific Research and the National Science Foundation Graduate Research Fellowship Program.


      Testing the device

      To test the efficacy of the tool, researchers with the University of Texas Health Science Center at San Antonio sent the HiccAway to more than 600 people who reported having hiccups at least once a month. The results showed that the tool stopped hiccups 92 percent of the time for the 249 participants whose responses were validated in the study. More than 90 percent of participants said it was more effective than home remedies.

      "Many home remedies consist of physical maneuvers designed to stimulate contraction of the diaphragm and/or closure of the epiglottis," stated a study published in JAMA Network Open. "These maneuvers lack clear, standardized instructions and can be cumbersome to perform, and there are few, if any, scientific studies of their effectiveness."

      Still, it's worth noting that the results were based on self-reported data, and the study didn't feature a control group. Future research could compare the efficacy of HiccAway with a device that looks similar but doesn't function.

      It's also worth pointing out that you don't need a $14 device to stimulate the vagus and phrenic nerves. You may just need a glass of water and a straw. A 2006 article published in the British Medical Journal noted that "plugging both ears tightly, pushing both right and left tragus, and drinking the entire glass of water through the straw without pause, without releasing the pressure over the ears" is a "nearly infallible" method to stop hiccups.

      What if nothing stops your hiccups? Consider consulting a doctor: Persistent hiccups can signal underlying medical conditions, including pancreatitis, pregnancy, and liver cancer, among others.

