Astronomy

What happened to the 2017 proposal on redefining planethood? Is this information available?


In 2017, Alan Stern et al. submitted a geophysical planet definition to the IAU for review, which states:

“A planet is a sub-stellar mass body that has never undergone nuclear fusion and that has sufficient self-gravitation to assume a spheroidal shape adequately described by a triaxial ellipsoid regardless of its orbital parameters.”
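Read as a classification rule, the proposed definition depends only on a body's intrinsic properties, never on where it orbits. A minimal sketch of that logic (the function and field names are hypothetical, not from the proposal; the mass cutoff approximates the deuterium-fusion limit):

    # Toy encoding of the geophysical definition (illustrative only).
    def is_planet_geophysical(mass_kg, ever_fused, gravitationally_rounded):
        DEUTERIUM_FUSION_LIMIT_KG = 2.5e28  # ~13 Jupiter masses; above this, deuterium fusion ignites
        return (mass_kg < DEUTERIUM_FUSION_LIMIT_KG
                and not ever_fused
                and gravitationally_rounded)

    print(is_planet_geophysical(1.3e22, False, True))  # a Pluto-like body -> True

Note that orbital parameters never appear; the IAU's 2006 definition, by contrast, also requires that a body orbit the Sun and have cleared its orbital neighborhood.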

Obviously, the IAU didn't accept it (as yet) but does anyone know what exactly the IAU stated about the draft and whether it is still being debated or entirely discarded etc.?

I'm just asking if it is possible to know the IAU's reaction. While users may have opinions on how reasonable the proposal is or isn't, please don't post opinions as answers.


In the linked article by A. Bouchard and another by J. Daley, the words "in the journal Lunar and Planetary Science" link not to a journal article but to a poster in a K-12 education session at the 2017 Lunar and Planetary Science Conference.

First author Kirby Runyon told Universe Today in 2017 that he would not submit this geophysical definition to the IAU process:

We in the planetary science field don't need the IAU definition… If [the geophysical definition] is the definition that people use and what teachers teach, it will become the de facto definition, regardless of how the IAU votes in Prague.

It was not presented at the 2018 IAU General Assembly in Vienna. I don't know where Bouchard got "has been submitted to the IAU for review."


Help Nickname New Horizons' Second Target

NASA's New Horizons mission to Pluto and the Kuiper Belt is looking for your ideas on what to informally name its next flyby destination, a billion miles (1.6 billion kilometers) past Pluto.

On New Year's Day 2019, the New Horizons spacecraft will fly past a small, frozen world in the Kuiper Belt, at the outer edge of our solar system. The target Kuiper Belt object (KBO) currently goes by the official designation "(486958) 2014 MU69." NASA and the New Horizons team are asking the public for help in giving "MU69" a nickname to use for this exploration target.

"New Horizons made history two years ago with the first close-up look at Pluto, and is now on course for the farthest planetary encounter in the history of spaceflight," said Thomas Zurbuchen, associate administrator for NASA's Science Mission Directorate in Washington. "We're pleased to bring the public along on this exciting mission of discovery."

After the flyby, NASA and the New Horizons project plan to choose a formal name to submit to the International Astronomical Union, based in part on whether MU69 is found to be a single body, a binary pair, or perhaps a system of multiple objects. The chosen nickname will be used in the interim.
"New Horizons has always been about pure exploration, shedding light on new worlds like we've never seen before," said Alan Stern, New Horizons principal investigator from Southwest Research Institute in Boulder, Colorado.

"Our close encounter with MU69 adds another chapter to this mission's remarkable story. We're excited for the public to help us pick a nickname for our target that captures the excitement of the flyby and awe and inspiration of exploring this new and record-distant body in space."

The naming campaign is hosted by the SETI Institute of Mountain View, California, and led by Mark Showalter, an institute fellow and member of the New Horizons science team. The website includes names currently under consideration; site visitors can vote for their favorites or nominate names they think should be added to the ballot. "The campaign is open to everyone," Showalter said. "We are hoping that somebody out there proposes the perfect, inspiring name for MU69."

The campaign will close at 3 p.m. EST/noon PST on Dec. 1. NASA and the New Horizons team will review the top vote-getters and announce their selection in early January.

Telescopic observations of MU69, which is more than 4 billion miles (6.5 billion kilometers) from Earth, hint at the Kuiper Belt object being either a binary orbiting pair or a contact (stuck together) pair of nearly like-sized bodies – meaning the team might actually need two or more temporary tags for its target.

"Many Kuiper Belt Objects have had informal names at first, before a formal name was proposed. After the flyby, once we know a lot more about this intriguing world, we and NASA will work with the International Astronomical Union to assign a formal name to MU69," Showalter said. "Until then, we're excited to bring people into the mission and share in what will be an amazing flyby on New Year's Eve and New Year's Day, 2019!"

To submit your suggested names and to vote for your favorites, go to: http://frontierworlds.seti.org

Media Contacts:
Rebecca McDonald, SETI Institute
(650) 960 4526, [email protected]
Michael Buckley, Johns Hopkins Applied Physics Laboratory
(240) 228-7536, [email protected]
Laurie Cantillo, NASA Headquarters
(202) 358-1077, [email protected]

How positively peeved Pluto people plan to pluck back Pluto’s planethood

Can't say the whole demotion thing ever got to me, since I always saw Pluto as the King of the Dwarf Planets. And that got solidified after seeing some of the New Horizons pictures. Absolutely incredible.

Besides - I'd rather be called "King of the Dwarves" than, well, a dwarf among kings.

I don't see why classification needs to change when it's simpler and more logical to assign objects to multiple groups.

Phobos and Deimos can be referred to as moons and also as asteroids that were captured by Mars. If they were to be ejected from orbit around Mars then they would no longer be moons but they would retain their asteroid classification.

That strikes me as more consistent and logical than having them start as asteroids, stop being asteroids when they entered Mars orbit, then become asteroids again when they left Mars orbit. Whether or not they're considered moons would obviously depend on whether they're in orbit around a planet because that title is defined by location, whereas there is no need to do the same with "asteroid".

There's nothing arbitrary about it. A "moon" means something in orbit about a planet so it's location dependent. How is that hard to understand or illogical?

It's like the term "atmosphere". It doesn't just refer to a mass of gas/plasma in any setting such as a nebula. It has to be around a planet or star so it's location dependent. At no time have I asserted that no definitions should rely on location, just that the broad group of planets has no logical reason to be based on location.

What does tradition have to do with scientific classification? Appealing to tradition in your argument has as much validity as saying that Pluto (and only Pluto among bodies of its type) should be called a planet because it used to be included as one of the 9 planets.

Why are rogue planets not planets? Why must a planet orbit the Sun and no other star in the universe in order to be defined as such? Why would you need to consider location when there's no obvious need to include that information within the definition?

Why does it make sense to come up with names that imply a hierarchical relationship where none exists? If you want a group that includes just those 8 planets, then come up with a new name rather than trying to define the implicitly broad title "planet" in such narrow terms. You could even call them something like the "major planets", which could be defined in whichever way you wanted but would also make it clear that they were a subset of planets in general.

Defining things based on our own existing and known special case is stupid though. Under the current definition, every piece of fantasy and science fiction that references other planets is misusing the word planet unless the body in question orbits the Sun. Culture be damned, all the writers are wrong.

That's nonsense. Science can and does anticipate similar cases arising elsewhere that we cannot observe. The definition of life is not restricted to our solar system, even though our only example of it exists here on Earth.

There's a difference between prescribing a definition that isn't general enough and prescribing one that insists that "The exact ones that we know about are the only ones that exist unless we change our minds".

How could it be that a building by a lake called a 'house' suddenly becomes a 'cottage' when you stop living in it during the winter? How could it be that if you move it to a mountain it becomes a 'chalet'? Rent it out by the night with maid service and it is a 'hotel'? Rent by the month without and it is a 'rooming house'? Sell it to a religious order and it is a 'monastery'? It's one building - how dare we have more than one word for it! Because location matters. Whether you could or would live there matters. How you use it matters.

Even in the post-truth era, it is OK to have words that mean things and help draw distinctions between things that are distinguishable. A planet that is warmed by a star might be habitable; a rogue planet cannot be. A planet stands out from anything around it; a dwarf planet has neighbors that are much the same.

A house is not a planet. A house is defined by a function assigned to it by humans whereas planets have no function so the comparison is meaningless.

If you take a house and convert it from being somewhere to live into something with a completely different purpose, like a bar, then it's obviously not a house. A house that you only live in for part of the year doesn't stop being a house, although you might find other terms being used to describe it. Either way, it has nothing to do with scientific classification, because you're talking about imprecise definitions used in common parlance with no attempt or intention to conform to a rigorous scientific definition.

Similarly, the fact that the average person might use the word "organic" to describe food they (mistakenly, because it's more complex than they think) believe to have been grown without pesticides says nothing useful about how chemists define the word.

That's the point, though: the scientific classification of a body in space isn't solely a function of that body's internal properties. The classification is also supposed to reflect the origin and history of this body, and how it relates to other bodies. For example: it's often speculated that the moons of Mars were originally asteroids in the Asteroid Belt, before being captured by Mars. So when they were located in the Belt, they were asteroids. Now that they're orbiting Mars, they're moons. Do you have a problem with their classification changing, based on where they're located?

Likewise, a body orbiting the Sun on its own, without anything else major in its orbit, is a planet. If it's just one member of a large population of objects in the same general orbit, without any special distinction within this group, it's a dwarf planet.


It's the same problem that you listed when you used the example of Earth losing its planet status if it were ejected from the Solar System. Changing the location of a moon means that a moon stops being a moon, and you are okay with that. Changing the location of a planet would mean that the planet loses its status. If Mercury somehow got moved from its current place to an orbit around Jupiter, it would be the fifth biggest moon in that system, but it would no longer be a planet. If Ganymede got ejected from Jupiter and managed to establish an orbit between Jupiter and Saturn, it would start being a planet. So either you accept that things can change according to their location, or we need to start calling anything bigger than Pluto a planet, even if they are moons around a bigger planet. Our own Moon is bigger than Pluto; is the Moon also a planet?
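A minimal sketch of the two schemes being argued over in this thread (hypothetical function names, deliberately oversimplified): under an IAU-style rule the label itself moves with the body, while under a geophysical-style rule the intrinsic label stays put and only the location tag changes.

    # Toy versions of the two competing classification schemes (illustrative only).
    def iau_style(orbits_star_directly, cleared_neighborhood):
        # Location-dependent: the label changes when the body is moved.
        if not orbits_star_directly:
            return "moon (or rogue body)"
        return "planet" if cleared_neighborhood else "dwarf planet"

    def geophysical_style(gravitationally_rounded, orbits_planet):
        # Intrinsic label, with an optional location tag layered on top.
        labels = ["planet"] if gravitationally_rounded else ["small body"]
        if orbits_planet:
            labels.append("moon")  # "moon" stays location-dependent in both schemes
        return labels

    # Mercury today vs. Mercury moved into orbit around Jupiter:
    print(iau_style(True, True))           # planet
    print(iau_style(False, False))         # moon (or rogue body)
    print(geophysical_style(True, False))  # ['planet']
    print(geophysical_style(True, True))   # ['planet', 'moon']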



That's no moon ( Score: 3, Funny)

oh wait, it is just a moon.

Re:That's no moon ( Score: 4, Interesting)

Moons are next on the hit list.
At last count Jupiter has 67 so-called moons: The four Galilean moons, plus 63 rocks.
We really need to clamp down on what counts as a moon, or every bit of space-trash will demand to be listed.

Re:That's no moon ( Score: 5, Interesting)

Any "space-trash" that demands to be listed as something else needs to be immediately identified as a "sentient being", and on behalf of all of us Earthlings the UN needs to publicly apologize to him/her/it. That is simple playground rules: you don't want to insult anybody that much bigger than you are.

As to everything else, I think the planetary geologists have it right. If it is big enough to be rounded of its own volition, it is a planet. And planets that go around another planet more quickly than they go around their star are also moons.

Corollary: that makes Earth the larger part of a binary planetary system. Which puts proper emphasis on the way the Moon creates tides that keep the hydrosphere stirred up, which has had a major impact on how life has evolved here. Exoplanetary explorers should look for other binary planets in the Goldilocks zone, as these are much more likely to have life that is similar to Earth life.

(Is a "bazinga!" called for here? Was this just another Sheldon impersonation, or did I accidentally say something insightful?)

Re: ( Score: 2)

that makes Earth the larger part of a binary planetary system.

There is a rule to avoid that. If the common centre of mass is inside one body, Earth in this case, it is considered a planet & moon, not a binary.
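That rule is easy to check numerically. A minimal sketch (two-body barycenter in the circular-orbit approximation, rounded textbook values), which also bears on the Sol/Jupiter question raised further down:

    # Distance of the two-body barycenter from the primary's center,
    # compared against the primary's radius.
    def barycenter_offset_km(separation_km, m_primary_kg, m_secondary_kg):
        return separation_km * m_secondary_kg / (m_primary_kg + m_secondary_kg)

    # Earth-Moon: ~4,670 km, inside Earth (mean radius ~6,371 km) -> planet & moon
    print(barycenter_offset_km(384_400, 5.972e24, 7.346e22))

    # Sun-Jupiter: ~743,000 km, just outside the Sun (radius ~696,000 km)
    print(barycenter_offset_km(778_500_000, 1.989e30, 1.898e27))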

Re: ( Score: 2)

That's one of those foolish rules put forth by the idiocy contingent of the IAU.

The barycenter of the Earth - Moon binary is outside of the Earth's hard inner core, in the region of the liquid outer core. This is the center or neutral point of the tidal forces acting on the Earth. No one has yet looked at the effects of these tides on the outer core's liquidity, or its electromagnetic properties, mostly because astronomers look upward and geologists look downward and there is a very serious failure for eit

Re: ( Score: 2)

Very insightful stuff. Out of curiosity, what are your thoughts on formally recognizing the Sol/Jupiter binary system, its barycenter being outside of the Sun entirely? All of this terminology just goes so many levels deep. Fun stuff, haha.

Re: ( Score: 2)

My thinking has been too Earth-bound to consider the Sol - Jupiter relationship. But I see others are thinking about it; there are several lay articles and apparently some more serious articles on the web. But I haven't done any critical reading on the subject.

There does seem to be a correlation between Jupiter's orbital period and the sunspot cycle as both are roughly 11 years. But if there is an underlying mechanism (not conveniently dismissible as "coincidence"), it seems more likely that the mechanisms

Re: ( Score: 2)

But I haven't done any critical reading on the subject.

There does seem to be a correlation between Jupiter's orbital period and the sunspot cycle as both are roughly 11 years. But if there is an underlying mechanism

I'd advise you to do your critical reading. Look at the numbers and you'll find that the range of solar sunspot cycles is from 8.8 years to 14 years; the corresponding orbital period for Jupiter is 11.875 years - 4331 days - with a variation of a lot less than a day. Remember that Ole Roemer was

Re: ( Score: 2)

As I know I mentioned before, I doubt that there is a gravitationally mediated interaction between Jupiter and the solar cycle, and if there is an electromagnetic interaction, then that would involve Saturn as well as Jupiter, and probably Earth. Both Saturn and Jupiter have a strong impact on the solar wind. During the years when they are in close heliocentric conjunction, Jupiter's magnetotail and the bow wave of Saturn's magnetosphere are trying to occupy the same space. There has got to be some interesti

Re: ( Score: 2)

Without getting into essential complexities (e.g., the orbits are not circular, but elliptical), there's one very simple check that you don't seem to have considered. The 1.3° difference in orbital inclination between Earth and Jupiter means that the projection of the Earth's magnetotail out to Jupiter's orbit will be up to 23 million km (nearly 1/4 AU) above or below the line of

Re: ( Score: 2)

You don't bother to look up anything you've never been taught, I guess.

The magnetospheres of the planets that have them are several times the radius of the physical planet. But even greater than that, the field effects of standing waves and turbulence in the solar wind extend well beyond the magnetospheres that shape them. Remember (or look it up since it seems like you've never been taught about it) that the solar wind is composed of mono-atomic ions and free electrons moving at very high speeds. What lie

Re: ( Score: 2)

Anyway, for the silent audience, I did a couple of other calculations last night. (Something you seem remarkably resistant to. Whether that's the "mystic" part of your chosen persona, or the "goat", I neither know nor care.) Given the respective diameters of the Sun and the Earth, and their spacing, it is a simple matter to calculate that the optical tail, the umbral shadow, of the Earth is around a million km long. How lon

Re:Mike Brown was the Clown Responsible ( Score: 5, Informative)

I find that title hilarious.

Pluto's back? ( Score: 1)

As far as I am concerned, it never went away.

Re: ( Score: 2)

Take it up with NASA and get a medal. Of course, they might tell you that you don't know what you're on about, but then you might just accept that as evidence of just how right you are.

Maybe ( Score: 5, Funny)

Maybe stop changing arbitrary definitions. Pluto was always a planet. Fuck you, NASA and shitty celebrity "scientists" like Neil Tyson.

Re: ( Score: 1, Insightful)

Maybe stop changing arbitrary definitions.

Definitions like this will be arbitrary, and it just comes down to what makes it easier to write journal articles (where the IAU has any authority). If Pluto were included among the planets, there are quite a few orbital dynamics and evolution papers that would need to use the phrase "the planets excluding Pluto". There are plenty of papers on geology and atmospheric dynamics that wouldn't care about the orbit and would benefit from a definition like the one proposed here. There are others that would need to take a def

Re:Maybe ( Score: 5, Interesting)

Given that people like Stern have regularly given interviews decrying the decision, going so far as to call it "bullshit" (can you say that at NASA?), it's clearly not the storm in a teacup that you want to present it as.

What the proponents did was take a term widely used by planetary geologists and have it mean something completely different - akin to dentists suddenly declaring to doctors that the heart is no longer an organ and to stop referring to it as one. And contrary to your presentation of why they did it ("to make it easier to write journal articles"), without fail every last supporter I've seen interviewed about their vote has given some variant of the following reason for why they voted the way they did: "I don't want my daughter having to memorize the names of hundreds of planets." Which is so blatantly unscientific it's embarrassing that such a thing would influence their decision at all on a scientific matter.

The IAU vote was narrow, at a conference attended by only a fraction of the IAU's membership, on the last day, when a lot of the people opposed to the definition that passed had already left because it had looked up to that point like there would either be no vote at all, or one on a hydrostatic equilibrium definition - options they were all fine with. Only 10% of the people who attended were still around.

I have a lot of issues with the last vote, and that's just the start. Here's my full list:

1. Nomenclature: An "adjective-noun" should always be a subset of "noun". A "dwarf planet" should be no less seen as a type of planet than a "dwarf star" is seen as a type of star by the IAU.

2. Erroneous foundation: Current research agrees that most planets did not clear their own neighborhoods, and even that their neighborhoods may not always have been where they are now. Jupiter, and Saturn to a lesser extent, have cleared most neighborhoods. Mars has 1/300th the Stern-Levison parameter of Neptune, and Neptune has multiple bodies a couple percent of Mars's mass (possibly even larger; we've only detected an estimated 1% of large KBOs) in its "neighborhood". Mars's neighborhood would in no way be clear if Jupiter did not exist - even Earth's might not be. Should we demote the terrestrial planets as well?

Note that the Stern-Levison parameter does not go against this, as it's built around the ability of a planet to scatter a mass distribution similar to our current asteroid belt, not large protoplanets. (A back-of-the-envelope check of the 1/300 figure appears after this list.)

3. Comparative inconsistency: Earth is far more like Ceres and Pluto than it is like Jupiter, yet these very dissimilar groups - gas giants and terrestrial planets - are lumped together as "planets" while dwarfs are excluded.

4. Poor choice of dividing line: While defining objects inherently requires drawing lines between groups, the chosen line has been poorly selected. Achieving a rough hydrostatic equilibrium is a very meaningful dividing line - it means differentiation, mineralization processes, alteration of primordial materials, and so forth. It's also often associated with internal heat and, as we're increasingly realizing, with subsurface fluids. In short, a body in the category of "not having achieved hydrostatic equilibrium" is one you would study to learn about the origins of our solar system, while a body in the category of "having achieved hydrostatic equilibrium" is one you would study, for example, to learn more about tectonics, geochemistry, and (potentially) biology. By contrast, a dividing line of "clearing its neighborhood" - which doesn't even meet standard #2 - says little about the body itself.

5. Mutability: Under the IAU definition, what an object is declared as can be altered, without any of the properties of the object changing, simply by its "neighborhood" changing in any of countless ways.

6. Situational inconsistency: (Related) An exact copy of Earth (what the vast majority of people would consider the prototype for what a planet s
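As a rough check of the 1/300 figure in point 2: the Stern-Levison parameter scales as Λ ∝ M²/P, and Kepler's third law gives P ∝ a^(3/2), so relative values need only masses and semi-major axes. A back-of-the-envelope sketch (approximate values, not the published calculation):

    # Relative Stern-Levison parameter, Lambda ~ M^2 / P with P ~ a^1.5.
    # Masses in Earth masses, semi-major axes in AU (approximate values).
    def stern_levison(mass_earths, a_au):
        return mass_earths**2 / a_au**1.5

    mars    = stern_levison(0.107, 1.52)
    neptune = stern_levison(17.1, 30.1)
    print(neptune / mars)  # ~290, i.e. Mars has roughly 1/300th Neptune's value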


What’s That Science Thing: Puffin Power Edition

This tool allows users to do something that many people can do without a second thought.


    This mouth-operated joystick called the Puffin is made for people who can’t use their arms or hands. It allows them to check email, send texts and post the selfie they just took to Instagram. By manipulating the joystick with their mouths, users can maneuver a selector on a mobile device like a smartphone; a “simple pressure system” capable of detecting both length and intensity registers sharp inhalations and exhalations as input, accomplishing the same thing as a person using his or her fingers to tap out a text.
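    In software terms, an input scheme like this amounts to classifying each pressure burst by direction, strength, and duration. A hypothetical sketch (thresholds, units, and event names invented for illustration; not the Puffin's actual firmware):

        # Hypothetical sip-and-puff decoder: map a stream of pressure samples
        # (positive = puff, negative = sip, normalized to [-1, 1]) to an event.
        def decode(samples, dt_s=0.01, soft=0.2, hard=0.6, long_s=0.5):
            peak = max(samples, key=abs)
            duration = len(samples) * dt_s
            if abs(peak) < soft:
                return "ignore"  # ordinary breathing / sensor noise
            direction = "puff" if peak > 0 else "sip"
            strength = "hard" if abs(peak) >= hard else "soft"
            length = "long" if duration >= long_s else "short"
            return f"{strength}-{length}-{direction}"

        print(decode([0.1, 0.7, 0.8, 0.3]))  # "hard-short-puff", e.g. a tap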

    The sip-and-puff technology underlying the Puffin — which requires users to suck air out of or blow air into a mouthpiece — has helped people with mobility impairments for more than 50 years. But it’s not enough to be able to flick a light switch on or off in the era of smartphones.

    “There’s nothing really for mobile devices, and that’s what this is really about,” said Adriana Mallozzi, an accessibility advocate who has cerebral palsy and consulted with four M.I.T. students who built the device. In Ms. Mallozzi’s experience, other sip-and-puff accommodations — nearly all of them intended for stationary use — are far too limiting.

    “They’re very large and clunky and cost too much to travel with,” she said of similar devices, some of which can cost more than a thousand dollars. By contrast, the Puffin prototype was assembled from materials that cost less than $200 and could be configured to the precise specifications of Ms. Mallozzi’s powerchair.


    A Brief History of Artificial Intelligence

    This post is a brief outline of what happened in artificial intelligence in the last 60 years. A great place to start or brush up on your history.

    I. The origins


    Despite all the current hype, AI is not a new field of study; it has its roots in the fifties. If we exclude the purely philosophical reasoning path that runs from the Ancient Greeks to Hobbes, Leibniz, and Pascal, AI as we know it officially began in 1956 at Dartmouth College, where the most eminent experts gathered to brainstorm on intelligence simulation.

    This happened only a few years after Asimov set out his own three laws of robotics, but more relevantly after the famous paper published by Turing (1950), in which he proposed for the first time the idea of a thinking machine, and the more popular Turing test to assess whether such a machine shows, in fact, any intelligence.

    As soon as the research group at Dartmouth publicly released the contents and ideas arising from that summer meeting, a flow of government funding was reserved for the study of creating a nonbiological intelligence.

    II. The phantom menace


    At that time, AI seemed to be easily reachable, but it turned out that was not the case. At the end of the sixties, researchers realized that AI was indeed a tough field to manage, and the initial spark that brought the funding started dissipating.

    This phenomenon, which has characterized AI throughout its history, is commonly known as the “AI effect”, and it has two parts:

    1. The constant promise of a real AI coming in the following decade
    2. The discounting of AI's behavior after it masters a certain problem, continuously redefining what intelligence means

    In the United States, DARPA's main reason for funding AI research was the idea of creating a perfect machine translator, but two consecutive events wrecked that plan, beginning what would later be called the first AI winter.

    In fact, the Automatic Language Processing Advisory Committee (ALPAC) report in the US in 1966, followed by the “Lighthill report” (1973), assessed the feasibility of AI given the current developments and concluded negatively about the possibility of creating a machine that could learn or be considered intelligent.

    These two reports, together with the limited data available to feed the algorithms and the scarce computational power of the engines of that period, caused the field to collapse, and AI fell into disgrace for the rest of the decade.

    III. Attack of the (expert) clones

    In the eighties, though, a new wave of funding in the UK and Japan was motivated by the introduction of “expert systems”, which were basically examples of narrow AI as defined in previous articles.

    These programs were, in fact, able to simulate the skills of human experts in specific domains, but this was enough to stimulate a new funding trend. The most active player during those years was the Japanese government, and its rush to create the fifth-generation computer indirectly forced the US and UK to reinstate funding for research on AI.

    This golden age did not last long, though, and when the funding goals were not met, a new crisis began. In 1987, personal computers became more powerful than Lisp machines, the product of years of research in AI. This marked the start of the second AI winter, with DARPA taking a clear position against AI and further funding.

    IV. The return of the Jed(AI)


    Luckily enough, in 1993 this period ended with the MIT Cog project to build a humanoid robot, and with the Dynamic Analysis and Replanning Tool (DART), which repaid the US government for its entire AI funding since 1950. When Deep Blue defeated Kasparov at chess in 1997, it was clear that AI was back on top.

    In the last two decades, much has been done in academic research, but AI has only recently been recognized as a paradigm shift. There are of course a series of causes that might help us understand why we are investing so much in AI nowadays, but there is one specific event we think is responsible for the trend of the last five years.

    If we look at the following figure, we notice that regardless of all the developments achieved, AI was not widely recognized until the end of 2012. The figure was created using CB Insights Trends, which plots the trends for specific words or themes (in this case, artificial intelligence and machine learning).


    Artificial intelligence trend for the period 2012–2016.

    In more detail, I drew a line on the specific date I take to be the real trigger of this new optimistic wave of AI: Dec. 4, 2012. That Tuesday, a group of researchers presented at the Neural Information Processing Systems (NIPS) conference detailed information about the convolutional neural networks that had won them first place in the ImageNet classification competition a few weeks earlier (Krizhevsky et al., 2012). Their work improved classification accuracy from 72% to 85% and established neural networks as fundamental to artificial intelligence.

    In less than two years, advancements in the field brought classification accuracy in the ImageNet contest to 96%, slightly higher than human accuracy (about 95%).

    The picture also shows three major growth trends in AI development (the broken dotted line), outlined by three major events:

    1. The three-year-old DeepMind being acquired by Google in Jan. 2014
    2. The open letter of the Future of Life Institute, signed by more than 8,000 people, and the study on reinforcement learning released by DeepMind (Mnih et al., 2015) in Feb. 2015
    3. The paper on neural networks published in Nature in Jan. 2016 by DeepMind scientists (Silver et al., 2016), followed by the impressive victory of AlphaGo over Lee Sedol in March 2016 (and by a list of other impressive achievements; see Ed Newton-Rex's article)

    V. A new hope


    AI is intrinsically highly dependent on funding because it is a long-term research field that requires an immense amount of effort and resources to be fully explored.

    There are therefore rising concerns that we might currently be living through another peak phase (Dhar, 2016), and that the thrill is destined to stop soon.

    However, like many others, I believe that this new era is different, for three main reasons:

    1. (Big) data, because we finally have the bulk of data needed to feed the algorithms
    2. Technological progress, because storage capacity, computational power, algorithmic understanding, better and greater bandwidth, and lower technology costs have allowed us to actually make models digest the information they need
    3. The democratization and efficient allocation of resources introduced by business models like Uber's and Airbnb's, reflected in cloud services (e.g., Amazon Web Services) and parallel computing on GPUs.
    • Dhar, V. (2016). “The Future of Artificial Intelligence”. Big Data, 4(1): 5–9.
    • Krizhevsky, A., Sutskever, I., Hinton, G.E. (2012). “Imagenet classification with deep convolutional neural networks”. Advances in neural information processing systems: 1097–1105.
    • Lighthill, J. (1973). “Artificial Intelligence: A General Survey”. In Artificial Intelligence: a paper symposium, Science Research Council.
    • Mnih, V., et al. (2015). “Human-level control through deep reinforcement learning”. Nature, 518: 529–533.
    • Silver, D., et al. (2016). “Mastering the game of Go with deep neural networks and tree search”. Nature, 529: 484–489.
    • Turing, A. M. (1950). “Computing Machinery and Intelligence”. Mind, 59: 433–460.

    Disclosure: this article was originally part of the longer article ‘Artificial Intelligence Explained’ which I am breaking down now based on some good readers’ feedback about article readability. I hope this helps.

    Bio: Francesco Corea is a Decision Scientist and Data Strategist based in London, UK.


    Widespread Discrimination Continues to Shape LGBT People’s Lives in Both Subtle and Significant Ways

    A national poll sheds light on how discrimination affects LGBTQ individuals across various areas of their lives.

    New research from the Center for American Progress shows that LGBT people across the country continue to experience pervasive discrimination that negatively impacts all aspects of their lives. In response, LGBT people make subtle but profound changes to their everyday lives to minimize the risk of experiencing discrimination, often hiding their authentic selves.

    1 in 4 LGBT people report experiencing discrimination in 2016

    Over the past decade, the nation has made unprecedented progress toward LGBT equality. But to date, neither the federal government nor most states have explicit statutory nondiscrimination laws protecting people on the basis of sexual orientation and gender identity. LGBT people still face widespread discrimination: Between 11 percent and 28 percent of LGB workers report losing a promotion simply because of their sexual orientation, and 27 percent of transgender workers report being fired, not hired, or denied a promotion in the past year. Discrimination also routinely affects LGBT people beyond the workplace, sometimes costing them their homes, access to education, and even the ability to engage in public life.

    Data from a nationally representative survey of LGBT people conducted by CAP shows that 25.2 percent of LGBT respondents have experienced discrimination because of their sexual orientation or gender identity in the past year. The January 2017 survey shows that, despite progress, in 2016 discrimination remained a widespread threat to LGBT people’s well-being, health, and economic security.

    Among people who experienced sexual orientation- or gender-identity-based discrimination in the past year:

    • 68.5 percent reported that discrimination at least somewhat negatively affected their psychological well-being.
    • 43.7 percent reported that discrimination negatively impacted their physical well-being.
    • 47.7 percent reported that discrimination negatively impacted their spiritual well-being.
    • 38.5 percent reported that discrimination negatively impacted their school environment.
    • 52.8 percent reported that discrimination negatively impacted their work environment.
    • 56.6 percent reported that discrimination negatively impacted their neighborhood and community environment.

    Unseen harms

    LGBT people who don’t experience overt discrimination, such as being fired from a job, may still find that the threat of it shapes their lives in subtle but profound ways. David M.,* a gay man, works at a Fortune 500 company with a formal, written nondiscrimination policy. “I couldn’t be fired for being gay,” he said. But David went on to explain, “When partners at the firm invite straight men to squash or drinks, they don’t invite the women or gay men. I’m being passed over for opportunities that could lead to being promoted.”

    “I’m trying to minimize the bias against me by changing my presentation in the corporate world,” he added. “I lower my voice in meetings to make it sound less feminine and avoid wearing anything but a black suit. … When you’re perceived as feminine—whether you’re a woman or a gay man—you get excluded from relationships that improve your career.”

    David is not alone. Survey findings and related interviews show that LGBT people hide personal relationships, delay health care, change the way they dress, and take other steps to alter their lives because they could be discriminated against.

    Maria S.,* a queer woman who lives in North Carolina, described a long commute from her home in Durham to a different town where she works. She makes the drive every day so that she can live in a city that’s friendly to LGBT people. She loves her job, but she’s not out to her boss. “I wonder whether I would be let go if the higher-ups knew about my sexuality,” she says.

    CAP’s research shows that stories such as Maria’s and David’s are common. The below table shows the percentage of LGBT people who report changing their lives in a variety of ways in order to avoid discrimination.

    As Table 1 shows, LGBT people who’ve experienced discrimination in the past year are significantly more likely to alter their lives for fear of discrimination, even deciding where to live and work because of it, suggesting that there are lasting consequences for victims of discrimination. Yet findings also support the contention that LGBT people do not need to have experienced discrimination in order to act in ways that help them avoid it, which is in line with empirical evidence on a component of minority stress theory: expectations of rejection.

    Not only can threatened discrimination bar LGBT people from living authentically—it can also deny them material opportunities. Rafael J.,* a gay student in California, told CAP that he “decided to apply to law schools only in LGBT-safe cities or states,” denying him the opportunity to pursue his graduate education at schools he might otherwise have applied to. “I did not think I would be safe being an openly gay man,” he said. “Especially a gay man of color, in some places.”

    Unique vulnerabilities in the workplace

    Within the LGBT community, people who were vulnerable to discrimination across multiple identities reported uniquely high rates of avoidance behaviors.

    In particular, LGBT people of color were more likely to hide their sexual orientation and gender identity from employers, with 12 percent removing items from their resumes—in comparison to 8 percent of white LGBT respondents—in the past year. Similarly, 18.7 percent of 18- to 24-year-old LGBT respondents reported removing items from their resumes—in comparison to 7.9 percent of 35- to 44-year-olds. Meanwhile, 15.5 percent of disabled LGBT respondents reported removing items from their resumes—in comparison to 7.3 percent of nondisabled LGBT people. This finding may reflect higher rates of unemployment among people of color, disabled people, and young adults; it may also reflect that LGBT people who could also face discrimination on the basis of their race, youth, and disability feel uniquely vulnerable to being denied a job due to discrimination, or a combination of factors.

    Unique vulnerabilities in the public square

    Discrimination, harassment, and violence against LGBT people—especially transgender people—have always been common in places of public accommodation, such as hotels, restaurants, or government offices. The 2015 United States Transgender Survey found that, among transgender people who visited a place of public accommodation where staff knew or believed they were transgender, nearly one in three experienced discrimination or harassment—including being denied equal services or even being physically attacked.

    In March 2016, then Gov. Pat McCrory signed North Carolina H.B. 2 into law, which mandated anti-transgender discrimination in single-sex facilities—and began an unprecedented attack on transgender people’s access to public accommodations and ability to participate in public life. That year, more than 30 bills specifically targeting transgender people’s access to public accommodations were introduced in state legislatures across the country. This survey asked transgender respondents whether they had avoided places of public accommodation from January 2016 through January 2017, during a nationwide attack on transgender people’s rights. Among transgender survey respondents:

    • 25.7 percent reported avoiding public places such as stores and restaurants, versus 9.9 percent of cisgender LGB respondents
    • 10.9 percent reported avoiding public transportation, versus 4.1 percent of cisgender LGB respondents
    • 11.9 percent avoided getting services they or their family needed, versus 4.4 percent of cisgender LGB respondents
    • 26.7 percent made specific decisions about where to shop, versus 6.6 percent of cisgender LGB respondents

    These findings suggest that ongoing discrimination in public accommodations pushes transgender people out of public life, making it harder for them to access key services, use public transportation, or simply go to stores or restaurants without fear of discrimination.

    Disabled LGBT people were also significantly more likely to avoid public places than their nondisabled LGBT counterparts. Among disabled LGBT survey respondents, in the past year:

    • 20.4 percent reported avoiding public places such as stores and restaurants, versus 9.1 percent of nondisabled LGBT respondents
    • 8.8 percent reported avoiding public transportation, versus 3.6 percent of nondisabled LGBT respondents
    • 14.7 percent avoided getting services they or their family needed, versus 2.9 percent of nondisabled LGBT respondents
    • 25.7 percent made specific decisions about where to shop, versus 15.4 percent of nondisabled LGBT respondents

    This is likely because, in addition to the risk of anti-LGBT harassment and discrimination, LGBT people with disabilities contend with inaccessible public spaces. For example, many transit agencies fail to comply with the Americans with Disabilities Act, or ADA, requirements that would make public transportation accessible to people with visual and cognitive disabilities.

    Unique vulnerabilities in health care

    In 2010, more than half of LGBT people reported being discriminated against by a health care provider, and more than 25 percent of transgender respondents reported being refused medical care outright. Since then, LGBT people have gained protections from health care discrimination—most notably, regulations stemming from the Affordable Care Act, or ACA, have prohibited federally funded hospitals, providers, and insurers from discriminating against LGBT patients. Despite progress, LGBT people, and transgender people in particular, remain vulnerable to health care discrimination: In 2015, one-third of transgender people who saw a health care provider reported “at least one negative experience related to being transgender.” These negative experiences included being refused treatment or even being physically assaulted. Transgender people of color and people with disabilities reported particularly high rates of discrimination from health care providers.

    Unsurprisingly, people in these vulnerable groups are especially likely to avoid doctors’ offices, postponing both preventative and needed medical care:

    • 23.5 percent of transgender respondents avoided doctors’ offices in the past year, versus 4.4 percent of cisgender LGB respondents
    • 13.7 percent of disabled LGBT respondents avoided doctors’ offices in the past year, versus 4.2 percent of nondisabled LGBT respondents
    • 10.3 percent of LGBT people of color avoided doctors’ offices in the past year, versus 4.2 percent of white LGBT respondents

    These findings are consistent with research that has also identified patterns of health care discrimination against people of color and disabled people. For example, one survey of health care practices in five major cities found that more than one in five practices were inaccessible to patients who used wheelchairs.

    A call to action

    To ensure that federal civil rights laws explicitly protect LGBT people, Congress should pass the Equality Act, a comprehensive bill banning discrimination based on sexual orientation and gender identity in employment, public accommodations, housing, credit, and federal funding, among other provisions. Likewise, state and local governments should pass comprehensive nondiscrimination protections for all. Comprehensive nondiscrimination protections have more support from voters than ever before: A majority in every state in the country supports nondiscrimination laws.

    While comprehensive nondiscrimination protections won’t prevent all instances of discrimination, they are a critical way to hold employers and landlords accountable. Additionally, they send the message that LGBT people are both accepted and respected by all levels of government. LGBT people deserve the opportunity to live full, equal, and authentic lives—and that won’t be possible while discrimination remains a looming threat to LGBT people and their families.

    Sejal Singh is the Campaigns and Communications Manager for the LGBT Research and Communications Project at American Progress. Laura E. Durso is the Vice President of the LGBT Research and Communications Project at American Progress.

    *Authors’ note: All names have been changed out of respect for interviewees’ privacy.

    Methodology

    To conduct this study, CAP commissioned and designed a survey, fielded by Knowledge Networks, which surveyed 1,864 individuals about their experiences with health insurance and health care. Among the respondents, 857 identified as lesbian, gay, bisexual, and/or transgender, while 1,007 identified as heterosexual and cisgender/nontransgender. Respondents came from all income ranges and are diverse across factors such as race, ethnicity, education, geography, disability status, and age. The survey was fielded online in English in January 2017 to coincide with the fourth open enrollment period through the health insurance marketplaces and the beginning of the first full year of federal rules that specifically protect LGBT people from discrimination in health insurance coverage and health care. The data are nationally representative and weighted according to U.S. population characteristics. All reported findings are statistically significant unless otherwise indicated. All comparisons presented are statistically significant at the p < .05 level.

    Separate from the quantitative survey, the authors solicited stories exploring the impact of discrimination on LGBT people’s lives. Using social media platforms, the study authors requested volunteers to anonymously recount personal experiences of changing their behavior or making other adjustments to their daily lives to prevent experiencing discrimination. Interviews were conducted by one of the study authors and names were changed to protect the identity of the interviewee.

    Additional information about study methods and materials is available from the authors.


    Did the Universe Begin? VIII: The No Boundary Proposal

    The last bit of evidence from physics which I'll discuss is the "no-boundary" proposal of Jim Hartle and Stephen Hawking (and some related ideas). The Hartle-Hawking proposal was described in Hawking's well known pop book, A Brief History of Time. This is an excellent pop description of Science, which also doubles as a somewhat dubious resource for the history of religious cosmology, as for example in this offhand comment:

    [The Ptolemaic Model of Astronomy] was adopted by the Christian church as the picture of the universe that was in accordance with Scripture, for it had the great advantage that it left lots of room outside the sphere of fixed stars for heaven and hell.

    Carroll, after making some metaphysical comments about how outdated Aristotelian metaphysics is, and how the only things you really need in a physical model are mathematical consistency and fitting the data—this is Carroll's main point, well worthy of discussion, but not the subject of this post—goes on to comment on the Hartle-Hawking state in this way:

    Can I build a model where the universe had a beginning but did not have a cause? The answer is yes. It’s been done. Thirty years ago, very famously, Stephen Hawking and Jim Hartle presented the no-boundary quantum cosmology model. The point about this model is not that it’s the right model, I don’t think that we’re anywhere near the right model yet. The point is that it’s completely self-contained. It is an entire history of the universe that does not rely on anything outside. It just is like that.

    Temporarily setting aside Carroll's comment that he doesn't actually think this specific model is true—we'll see some possible reasons for this later—the first thing to clear up about this is that the Hartle-Hawking model doesn't actually have a beginning! At least, it probably doesn't have a beginning, not in the traditional sense of the word. To the extent that we can reliably extract predictions from it at all, one typically obtains an eternal universe, something like a de Sitter spacetime. This is an eternal spacetime which contracts down to a minimum size and then expands: as we've already discussed in the context of the Aguirre-Gratton model.

    This is because the Hartle-Hawking idea involves performing a "trick", which is often done in mathematical physics, although in this case the physical meaning is not entirely clear. The trick is called Wick rotation, and involves going to imaginary values of the time parameter t. The supposed "beginning of time" actually occurs at values of the time parameter that are imaginary! If you only think about values of t which are real, most calculations seem to indicate that with high probability you get a universe which is eternal in both directions.
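    Concretely, the rotation can be written in one line for a closed cosmology. A schematic sketch (standard textbook form, suppressing the details of the full gravitational path integral):

        % Wick rotation t -> -i*tau turns the Lorentzian line element Euclidean:
        \begin{align}
          ds^2 &= -dt^2 + a(t)^2 \, d\Omega_3^2, \qquad t \to -i\tau, \\
          ds^2 &= d\tau^2 + a(\tau)^2 \, d\Omega_3^2.
        \end{align}
        % For de Sitter space, a(t) = cosh(Ht)/H rotates into a(tau) = cos(H*tau)/H:
        % a round 4-sphere, finite ("no boundary") in imaginary time,
        % yet eternal in both directions in real time.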

    Now why is the Hartle-Hawking model so revolutionary? In order to make predictions in physics you need to specify two different things: (1) the "initial conditions" for how a particular system (or the universe) starts out at some moment of time, and (2) the "dynamics", i.e. the rule for how the universe changes as time passes.

    Most of the time, we try to find beautiful theories concerning (2), but for (1) we often just have to look at the real world. In cosmology, the effective initial conditions we see are fairly simple but have various features which haven't yet been explained. What's interesting about the Hartle-Hawking proposal is that it is a rather elegant proposal for (1), the actual initial state of a closed universe.

    One reason that the Hartle-Hawking proposal is so elegant is that the rule for the initial condition is, in a certain sense, almost the exact same rule as the rule for the dynamics, except that it uses imaginary values of the time instead of real values. Thus, in some sense the proposal, if true, unifies the description of (1) and (2). However, the proposal is far from inevitable, since there is no particularly good reason (*) to think that this special state is the only allowed state of a closed universe in a theory of quantum gravity. There are lots of others, and if God wanted to create the universe in one of those other states, so far as I can see nothing in that choice would be inconsistent with the dynamical Laws of Nature in (2).

    (Hawking has a paragraph in his book asserting that the proposal leaves no room for a Creator, but I'll put my comments on that into a later post!)

    In the context of a gravitational theory, imaginary time means that instead of thinking about metrics whose signature is (-,+,+,+), as normal for special or general relativity, we think about "Euclidean" (or "Riemannian") signature metrics whose signature is (+,+,+,+). So we have a 4 dimensional curved space (no longer spacetime).

    The assumption is that time has an imaginary "beginning", in the sense that it is finite when extended into the imaginary time direction. However, because there is no notion of "past" or "future" when the signature of spacetime is Euclidean, it's arbitrary which point you call the "beginning". What's more, unlike the case of the Big Bang singularity in real time, there's nothing which blows up to infinity or becomes unsmooth at any of the points.

    All possible such metrics are considered, but they are weighted with a probability factor which is calculated using the imaginary time dynamics. However, there are some rather hand-waving arguments that the most probable Euclidean spacetime looks like a uniform spherical geometry. The spherical geometry is approximately classical, but there are also quantum fluctuations around it. When you convert it back to real time, a sphere looks like de Sitter space: hence the Hartle-Hawking state predicts that the universe should have an initial condition that looks roughly like de Sitter space, plus some quantum fluctuations.

    I say handwaving because, first of all, nobody really knows how to do quantum gravity. The Hartle-Hawking approach involves writing down what's called a functional integral over the space of all possible metrics for the imaginary-time geometry. There is an infinite-dimensional space of these metrics, and in this case nobody knows how to make sense of it. Even if we did know how to make sense of it, nobody has actually proven that there isn't some other classical geometry that is even more probable than the sphere. Worst of all, it appears that for some of the directions in this infinite-dimensional space, the classical geometries are a minimum of the probability density rather than a maximum! This gives rise to instabilities, which if interpreted naively give you a "probability" distribution which is unnormalizable, meaning that there's no way to get the probabilities to add up to 1.

    So Hartle and Hawking do what's called formal calculations, which is when you take a bunch of equations that don't really make sense, manipulate them algebraically as if they did make sense, cross your fingers and hope for the best. In theoretical physics, sometimes this works surprisingly well, and sometimes you fall flat on your face.

    Unfortunately, it appears that the predictions of the Hartle-Hawking state, interpreted in this way, are also wrong when you use the laws of physics in the real universe! The trouble is that there are two periods of time when the universe looks approximately like a de Sitter space: (a) a tiny one in the very early universe, during inflation, and (b) a very big one at very late times, when the acceleration of the universe takes over. Unfortunately, the Hartle-Hawking state seems to predict that the odds that the universe began in the big de Sitter space are enormously greater than the odds that it began in the little one. That's a shame, because if it began in the little one, you would plausibly get a history of the universe which looks roughly like our own. Whereas the big one is rather boring: since it has maximum generalized entropy, nothing interesting happens (except for thermal fluctuations). St. Don Page has a nice article explaining this problem, and suggesting some possible solutions which even he believes are implausible.

    Alex Vilenkin has suggested a different "tunnelling" proposal, in which the universe quantum fluctuates out of "nothing" in real time rather than imaginary time. This proposal doesn't actually explain how to get rid of the initial singularity, and requires at least as much handwaving as the Hartle-Hawking proposal, but it has the advantage that it favors a small de Sitter space over a big one. From the perspective of agreeing with observation, this proposal seems better. And it has an actual beginning in real time, something which (despite all the press to the contrary) isn't true for Hartle-Hawking.

    (*) There is however at least one bad reason to think this, based on a naive interpretation of the putative "Holographic Principle" of quantum gravity, in which the information in the universe is stored on the boundary. A closed universe has no boundary, and therefore one might think it has no information, meaning that it has only one allowed state! (The argument here is similar to the one saying the energy is zero.) At one time I took this idea seriously, but I now believe that such a strong version of the Holographic Principle has to be wrong. There are lots of other contexts where this "naive" version of the Holographic Principle gets the wrong answer for the information content of regions, and actual calculations of the information content of de Sitter-like spacetimes give a nonzero answer. So I'm pretty sure this isn't actually true.


    Top 10 Exopolitics Stories for 2020

    2020 was a big year for exopolitics and UFO disclosure, with multiple mainstream news sites reporting major developments. Legacy media now regularly discusses UFOs/UAPs and extraterrestrial life, along with the latest developments with the US Space Force. I discussed my list of the Top 10 exopolitics news stories with Corey Goode on Zoom (see video below) to get his take on what they mean for “full disclosure”. I consider Corey, along with the late William Tompkins, to be one of the most informed, legitimate and accurate insiders about secret space programs, extraterrestrial life, etc., with significant evidence to back up his claims, as I have discussed previously.

    I will go into detail about my Top 10 list with slides and news videos on January 3 in the upcoming Ascension, Exopolitics & Disclosure Conference with Laura Eisenhower, John DeSouza and Neil Gaur. This promises to be an exciting webinar discussing what happened in 2020 and what we can expect in 2021.

    What follows is the Zoom video with Corey and my list, with links to relevant exopolitics.org articles published earlier in 2020.

    Top Ten Exopolitics Stories for 2020

    1. Professor Haim Eshed revelations on US ET agreements and Galactic Federation – https://exopolitics.org/controversy-over-israeli-scientist-claims-of-us-alien-agreements-galactic-federation/
    2. Signing of Artemis Accords – https://exopolitics.org/artemis-accords-are-a-first-step-to-a-space-nato-future-star-fleet/
    3. Eric Davis briefings to Pentagon and Congress on alien reverse engineering – https://exopolitics.org/what-was-revealed-in-classified-ufo-briefings-to-congress-pentagon/
    4. Mike Turber revelations on Navy Tic Tac sightings being part of USAF SSP – https://exopolitics.org/tic-tac-ufos-revealed-in-2005-briefing/
    5. Salvatore Pais patent application on nuclear fusion gets published in prestigious journal – https://exopolitics.org/paper-on-nuclear-fusion-reactor-for-hybrid-spacecraft-published-in-prestigious-journal/
    6. Space Center to be established at Ramstein Air Base, Germany – https://exopolitics.org/nato-creates-space-center-in-germany-in-move-towards-future-star-fleet/
    7. Space Force completes first year with official logo, recruits, bases, doctrinal documents and Guardian name – https://exopolitics.org/space-force-sets-priorities-to-prevent-future-space-war/
    8. Trump received secret briefing that Roswell UFO involved time-traveling humans – https://exopolitics.org/roswell-ufo-crash-to-be-officially-disclosed-as-time-traveling-future-humans/
    9. Congress asks Intel Community for comprehensive UFO report 180 days after passage of 2021 NDAA – https://exopolitics.org/us-congress-asks-for-ufo-report-from-intel-community-in-180-days/
    10. China sends up a Moon lander and retrieves lunar rocks to demonstrate its growing space power capabilities


    The Sooty Empiric

    Recently an article entitled `Redefining Statistical Significance' (RSS) has been made available. In this piece a diverse bunch of authors (including four philosophers of science - represent) put forward an argument with the thesis: ``[f]or fields where the threshold for defining statistical significance for new discoveries is P<0.05, we propose a change to P<0.005.'' In this very brief note I just want to state my support for the broad principle behind this proposal and make explicit an aspect of their reasoning that is hinted at in RSS but which I think is especially worth holding clear in our minds.

    RSS argues that, basically, rejecting the null at P<0.05 represents (by Bayesian standards) very weak evidence against the null and in favour of the hypothesis under test, and further that its communal acceptance as the standard significance level for discovery predictably and actually leads to unacceptably many false-positive discoveries. Taking P<0.005 as the norm would go some way towards solving both these problems, and the authors emphasise most especially that it would bring false-positive rates down to within what they deem to be more acceptable levels. RSS doesn't claim originality for these points, and it is a short and very readable paper; I recommend checking it out.
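    To see the flavour of the false-positive argument, here is a minimal sketch in Python; the prior odds (1:10 that a tested hypothesis is true) and the power (0.8) are illustrative assumptions of mine, not figures taken from RSS:

        # Expected share of "discoveries" that are false positives, given:
        #   alpha - significance threshold for declaring a discovery
        #   power - probability of detecting a real effect when one exists
        #   odds  - prior odds that the hypothesis under test is true
        def false_discovery_share(alpha: float, power: float, odds: float) -> float:
            p_true = odds / (1 + odds)   # prior probability the effect is real
            p_null = 1 - p_true          # prior probability the null is true
            false_pos = alpha * p_null   # rate of spurious "discoveries"
            true_pos = power * p_true    # rate of genuine discoveries
            return false_pos / (false_pos + true_pos)

        for alpha in (0.05, 0.005):
            print(alpha, round(false_discovery_share(alpha, power=0.8, odds=0.1), 3))

    On these assumptions roughly 38% of P<0.05 "discoveries" are false, against roughly 6% at P<0.005, which is the shape of the improvement RSS is after.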

    The authors then have a section replying to objections. They note that they do not think that changing the significance level communally required for discovery claims is a cure-all, and they deploy a number of brief but very interesting arguments against the counter-claim that the losses in terms of false negatives would outweigh the gains in avoiding false positives. This is all interesting stuff, but the point at which I wish to state my broad agreement comes when they consider the objection that ``The appropriate threshold for statistical significance should be different for different research communities.'' Here their response is to say that they agree in principle that different communities facing different sorts of puzzles ought to use different norms for discovery claims, but they note that many communities have settled on the idea that, given the sort of claims they are considering and the tests they can do, P<0.05 is an appropriate standard for discovery claims. They are addressing those communities in particular with their proposal, so they are addressing communities which have already come to agree that they should share a standard for discovery claims.

    My one small contribution here, then, is in following up on this point. They briefly note in their reply to this objection that `it is helpful for consumers of research to have a consistent benchmark.' I think this point deserves elaboration and emphasis, and it is why I feel that, although I do not feel sufficiently expert to comment on the specific threshold they propose, the broad contours of their argument are right. Why, after all, do we actually have to agree on a communal standard for what counts as an appropriate significance level for `claims of discovery of new effects' at all? Couldn't we leave that to the discretion of individual researchers? Or maybe foster for some time a diversity of standards across journals and let a kind of Millian intellectual marketplace do its work? To put it philosophically, why have something rather than nothing here?

    I take it that a lot of what the communal standard is doing is providing a benchmark by which those unable to make an expert or highly informed personal assessment of the claims and evidence can know that the hypothesis in question is confirmed to the standards of those who are able to make such assessments. These consumers of the research are those for whom the consistent benchmark helps. Especially for the kind of social-scientific fields which have in fact adopted this benchmark, a pressing methodological consideration has to be that non-scientists, or folk not able to assess statistical claims (and more pointedly, people in policy or culturally influential positions), will consume the research, and will take action based on what they believe to be reliable, or at least on the grounds of what convinces them. The trade-off between Type 1 and Type 2 errors, then, must be made with it in mind that there is an audience of non-experts for the claims made in this field, an audience who will shape actions and lives and self-perceptions (in part) upon the results these fields put out. As a scientific community we must therefore decide what of our own work we think can be vouchsafed to these observers, or validated to the standard this cultural responsibility entails.

    In theory, of course, we could still leave this up to individuals or allow for a diversity of standards among journals. But I think awareness of the scientific community's public role tends to speak against that. Such diversity, I'd wager, would result in one of two outcomes. Either it would produce a cacophonic public discourse on science, in which the media and commentators constantly reported results, then their failure to replicate, then their replication once more (as well as contrary results, their failures to replicate, and so on), because the diversity of standards led non-experts to pick who to believe randomly among folk with different standards, or according to who they judged to have the flashiest smile, or whichever university PR department reached out to them last, or by factionally choosing their favourite sources. Or it would result in silence, as scientific results gradually came to be seen as too unreliable, too divided among themselves, to be worth paying much attention to at all. If you think that scientifically acquired information can make a positive difference to public discourse, either of these seems like a bad outcome. (The somewhat self-promoting Du Bois scholar nerd in me can't resist pointing out that Du Bois brought similar considerations to bear in responding to widespread failures of social-scientific research in his day.) In fact, I think this epistemic environment makes a conservative attitude sensible, and speaks in favour of adopting a very low tolerance for false positives. This is because it is much harder to correct misinformation once it is out there than it is to defer announcing until we are more confident, and the very act of correction may induce the same loss of trust worried about before. This means that, in addition to elaborating upon RSS's reply to an objection, and without feeling competent to judge whether P<0.005 in particular is the right standard, I also think the overall direction of change advocated by RSS is the right one, relative to where we are now.