Will humanity perish (soon)?

Yes, it is an atypically narrow question, and I have zero expertise, but I'll have a go. With no background in the relevant cosmology or philosophy, I have been reading the philosopher John Leslie's The End of the World. It had always seemed to me that accounting for possibilities such as the end of the human species in economic analysis was a bit esoteric. After reading Leslie's account of the Carter 'doomsday argument' I am not so sure. The purely a priori philosophical argument on the 'doomsday' issue in Leslie doesn't convince me – I am probably not a skilled enough Bayesian to fully appreciate it – although I am unable to discredit it with obvious counterarguments; these are addressed in the Leslie book anyway. The empirical part of the book, which looks at the various ways the human world might end, is more straightforward and more convincing. Like Leslie, I do worry about North Korean military idiocy and the possibility of runaway climate change that kills us all.

As background, in my work on climate change I use the standard Ramsey model of discount rate determination. On this view you discount the future only because society might then be richer, so the marginal utility of extra consumption would be lower. In addition, a parameter δ is identified as the probability that humanity might cease to exist at some future date – if society is not around in the future it is pointless to attach a weight to the utility it would derive. More precisely, δ is the rate of decline of the survival probability: if humanity's destruction is the first event in a Poisson process with parameter δ, the probability of destruction over a short interval Δt is δΔt, and the probability of surviving to time t is exp(−δt). How might δ be estimated? Lord Stern (2007) uses the Ramsey formulation to estimate, from a normative viewpoint, discount rates for evaluating global climate change policies; he simply guesses at values of δ that seem plausible. Even though he arrives at very low discount rates, his estimates of δ always seemed to me a bit large. So, unlike others who criticised Stern for setting discount rates too low, I always felt they were a bit on the high side.
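To make the role of δ concrete, here is a minimal numerical sketch of the Ramsey rule with an extinction hazard. This is my own illustration, not Stern's actual code; it uses the illustrative parameter values from the Stern Review (δ = 0.1% per year, elasticity η = 1, consumption growth g = 1.3%), and the function names are my own.

```python
import math

def ramsey_rate(delta, eta, g):
    """Ramsey rule: discount rate = extinction hazard + elasticity x growth."""
    return delta + eta * g

def survival_prob(delta, t):
    """P(humanity survives to year t) when extinction is the first
    event of a Poisson process with rate delta: exp(-delta * t)."""
    return math.exp(-delta * t)

# Stern's illustrative values: delta = 0.1%/yr, eta = 1, g = 1.3%/yr
delta, eta, g = 0.001, 1.0, 0.013
print(f"discount rate: {ramsey_rate(delta, eta, g):.2%}")      # 1.40%
print(f"P(survive 100 yrs): {survival_prob(delta, 100):.3f}")  # 0.905
```

On these numbers a δ of 0.1% per year implies roughly a 1-in-10 chance of humanity's extinction within a century, which is the sense in which Stern's δ strikes me as on the large side.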

One strand of literature does not provide a specific answer to the question but does suggest that we should be relatively pessimistic about the future. The Cambridge theoretical physicist Brandon Carter in the early 1980s developed a 'doomsday argument' (DA) that has been exposited and expanded by the philosopher John Leslie (1996) and independently arrived at by others. The argument is interesting and warrants careful thinking.

The background is that the human race, Homo sapiens, has steadily increased in numbers since it originated some 100,000-300,000 years ago. About 7 billion people are alive today, out of the roughly 100-115 billion who have ever lived. Thus those alive now make up about 6.5% of all humans ever born.

Suppose the universe contains many thousands of intelligent races, all of about our size. Thinking in terms of likelihoods, we could not expect to find ourselves in one of the earliest generations of one of these populations – say in the first 0.0001 per cent of people ever to exist, a cohort living a million years before its species became extinct. It makes more sense to suppose we are typically placed – living, say, at a time when around 6.5% of all humans ever born are alive. This suggests we should be reluctant to accept any theory which positions us exceptionally early among all the humans who will ever be born. Hence, if we have in the past made estimates of the riskiness of humanity's future, this consideration should lead us to revise those risks upward. Gott (1993) provides a similar view. Prima facie, humanity would seem unlikely to survive for a very long time, for if it did we would now be in the improbable position of being extraordinarily early in human population history.
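Gott's version of the reasoning can be sketched numerically. The function below is my own illustrative implementation of his 'Copernican' confidence interval, not Gott's code; the 110 billion figure is the rough total of humans ever born mentioned above.

```python
def gott_interval(n_born_so_far, confidence=0.95):
    """If our birth rank is uniformly placed among all humans who will
    ever be born, then with the given confidence the total number of
    humans ever born lies in this interval (Gott's argument)."""
    tail = (1.0 - confidence) / 2.0            # 0.025 in each tail for 95%
    total_low = n_born_so_far / (1.0 - tail)   # we are among the very last
    total_high = n_born_so_far / tail          # we are among the very first
    return total_low, total_high

n_past = 110e9                                 # ~110 billion born so far
low, high = gott_interval(n_past)
# With 95% confidence, future births lie between ~n_past/39 and ~39*n_past
print(f"future births: {low - n_past:.2e} to {high - n_past:.2e}")
```

So even on the optimistic tail of the interval, with 95% confidence no more than about 39 times as many humans remain to be born as have been born already – the sense in which the argument magnifies prior estimates of risk.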

How pessimistic should we in fact be? The answer depends on the factors that might permit survival. The risks of nuclear war and pollution are not negligible. The Carter-Gott-Leslie position is not that the world will end soon but, rather, that it is bound to end sooner or later and that the chance this will occur in the near future is greater than most people assume. It is therefore difficult to give humanity a high chance of colonizing the universe, and not at all unrealistic to do what Stern and others have done – attach a non-negligible probability to the risk that we will not survive more than a few hundred more years.

I liked Leslie's remarks about why, in a universe where thousands of human-like civilisations exist, contact between earthlings and others has not occurred. His argument is that perhaps societies self-destruct when they reach a certain level of technological advancement. This self-destruction occurs before the advanced technologies involved in making contact have evolved. (Update: John Mashey in the comments thread refers to a much broader set of explanations for why contact with other forms of intelligent life has not occurred – the relevant literature concerns the Fermi Paradox.)

References

J. R. Gott III, 'Implications of the Copernican principle for our future prospects', Nature, 363, 27 May 1993.

J. Leslie, The End of the World: The Science and Ethics of Human Extinction, Routledge, London & New York, 1996.

N. Stern, The Economics of Climate Change: The Stern Review, Cambridge University Press, Cambridge, 2007.

8 comments to Will humanity perish (soon)?

  • Jim Rose

    supervolcanoes seem to be the main contenders in my eyes:
    Yellowstone,
    Long Valley, and
    Valles Caldera in the United States;
    Lake Toba, North Sumatra, Indonesia;
    Taupo Volcano, North Island, New Zealand; and
    Aira Caldera, Kagoshima Prefecture, Kyūshū, Japan.

    The Oruanui eruption of New Zealand’s Taupo Volcano was the world’s largest known eruption in the past 70,000 years.

    The fragmental material from the eruption covered much of the central North Island with ignimbrite up to 200 metres deep. Most of New Zealand was affected by ashfall.

  • davidp

    I worry a bit (though not so much as to change what I am doing) about three things:

    1. Militant nationalists in China and India (highest probability)
    2. Some destructive software operating through the Internet (somewhat lower probability)
    3. (with considerably lower probability) aliens from outer space making physical contact and humanity being devastated by disease to which we don’t have an immunity (even if they are friendly).

    The volcanoes though sound like another set of potential problems.

  • Jim Rose

    supervolcanoes erupt from time to time. Yellowstone is still active, and an eruption would cover half the USA with ash.

  • Jim Rose

    Richard Posner wrote Catastrophe: Risk and Response.

    Posner groups catastrophes into four types:

    natural catastrophes (e.g., a massive asteroid collision);
    catastrophic accidents caused by unfettered scientific exploration (e.g., a “strangelet” scenario in which hyper-dense quarks produced in a physicist’s lab compress the Earth in seconds);
    unintended byproducts of technological progress (e.g., abrupt climate change); and
    intentional catastrophes (nuclear terrorism, bioterrorism or even nuclear winter).

    see http://www.washingtonpost.com/wp-dyn/articles/A54801-2005Jan6.html for a review

  • Jim Rose

    most accident probabilities are about whether we will be struck by a bus, etc., in our lifetime.

    Posner’s list of natural catastrophes – a massive asteroid collision, supervolcanoes and abrupt climate change such as a new ice age – will all happen from time to time, albeit with very long gaps in between.

    any advance on a strategy of hoping we are all zipping around the universe Star Trek style when these do happen?

  • John Mashey

    Without opining on the other issues:
    1) At some point in the future, anywhere from a decade to millions of years from now, another big asteroid will impact Earth. At that time, either we will have adequate technology in place to a) detect it far enough away and b) get something out there to deflect it at a reasonable energy cost, or we will not.

    2) However, I don’t think the Fermi Paradox (where are they?) is actually very good evidence of the collapse of other civilizations. See Fermi Paradox Solved and the comments following, including several from me.

    ‘2) The only possible civilizations we could *ever* see would likely be closer to Kardashev Type I’s with plenty of energy and longevity, willing to run multi-million year efforts to beam narrowband signals at nearby stars [~10,000 systems within 100 light-years] in hopes of finding one close enough in space and time to get a reply. I’d love to see the budget process for that! One needs to be willing to burn energy to run high-powered transmitters for long periods, as well as keeping a bunch of receivers going.’

    and
    ‘As for detection range, that was in the SETI FAQ hotlink I mentioned. Basically, broadband: a small fraction of a light-year, i.e., not much beyond Pluto. Like I said, civilizations at our level cannot “see” each other’s “leakage”.

    From my categorization, we couldn’t ever see anything below 7f), i.e., a space-capable civilization willing to spend serious energy transmitting focused narrowbeam signals at nearby stars, and doing it at the right range at the right time. The distinction between 7f and the lower ones is that they not only had the capability but the very-long-term will to do it.’

  • [...] Saving other species is laudable, but can we save ourselves? In a thought-provoking section, the museum presents the concept of Homo extinctus - humans wiped out forever. “There’s nothing inevitable about our survival,” says Chris Stringer, the museum’s head of human origins. “The biggest threat to us is us.” (Agreed. I am a pessimist as I have set out before). [...]

