Posner: why do we suck so much at cat risk prevention/preparation/quantification?

06-08-2010, 07:42 AM
Praise the lord, not a "black swan" in sight:

The BP oil spill in the Gulf of Mexico is the latest of several recent disastrous events for which the country, or the world, was unprepared. Setting aside terrorist attacks, where the element of surprise is part of the plan, that still leaves the Indian Ocean tsunami of 2004, Hurricane Katrina in 2005, the global economic crisis that began in 2008 (and was aggravated by Greece's recent financial collapse) and the earthquake in Haiti in January.

In all these cases, observers recognized the existence of catastrophic risk but deemed it to be small. Many other risks like this are lying in wait, whether a lethal flu epidemic, widespread extinctions, nuclear accidents, abrupt global warming that causes a sudden and catastrophic rise in sea levels, or a collision with an asteroid.

Why are we so ill prepared for these disasters? It helps to consider an almost-forgotten case in which risks were identified, planned for and averted: the Y2K threat (or "millennium bug") of 1999. As the turn of the century approached, many feared that computers throughout the world would fail when the two-digit dates in their operating systems suddenly flipped from 99 to 00. The risk of disaster probably was quite small, but the fact that it had a specific and known date made it irrational to postpone any remedies -- it was act now or not at all.

Our tendency to procrastinate is aggravated by three additional circumstances: when fixing things after the fact seems like a feasible alternative to preventing disaster in the first place; when the people responsible have a short time horizon; and when the risk is uncertain in the sense that no objective probability can be attached to it.

All these forces came together to permit the economic crisis, despite abundant warnings from reputable sources, including economists and financial journalists. Risky financial practices were highly profitable, and giving them up would have been costly to financial firms and their executives and shareholders. The Federal Reserve and most academic economists believed incorrectly that in the event of a crash, remedial measures -- such as cutting interest rates -- would be enough to jump-start the economy. Meanwhile, depending on how they were compensated, many financial executives had a limited horizon; they were not worried about a collapse years down the road because they expected to be securely wealthy by then. Similarly, elected officials have short time horizons; with the risk of a financial collapse believed to be low, and therefore a meltdown unlikely in the immediate future, they had little incentive to push for costly preventive measures.

Two final problems illuminate our vulnerability to such risks. First, it is very hard for anyone to be rewarded for preventing a low-probability disaster. Had the Federal Reserve raised interest rates in the early 2000s rather than lowering them, it might have averted the financial collapse in 2008 and the ensuing global economic crisis. But we wouldn't have known that. All that people would have seen was a recession brought on by high interest rates. Officials bear the political costs of preventive measures but do not receive the rewards.

The second problem is that there are so many risks of disaster that they can't all be addressed without bankrupting the world many times over. In fact, they can't even be anticipated. In my 2004 book "Catastrophe: Risk and Response," I discussed a number of disaster possibilities. Yet I did not consider volcanic eruptions, earthquakes or financial bubbles, simply because none of those seemed likely to precipitate catastrophes.

In principle, all disaster possibilities should be ranked by their "expected cost" -- roughly speaking, by multiplying the dollar consequences of the disaster if it occurs by the probability that it will occur. If Disaster A would cause a loss of $1 trillion, and the annual probability of it occurring is 1 percent, then its expected annual cost is $10 billion. (That means we wouldn't want to spend more than that each year to prevent it.) And suppose Disaster B would exact $100 billion in damage, and its annual probability of occurring is 5 percent. That is a higher probability, but the expected cost -- $5 billion -- is only half as great, so we should spend less trying to prevent it.
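The expected-cost comparison in that paragraph is just probability times loss. As a quick sanity check of the column's figures (the numbers and disaster labels are from the quote, the helper function is mine):

```python
# Expected annual cost of a disaster: annual probability of occurrence
# times the dollar loss if it occurs.
def expected_annual_cost(loss_dollars, annual_probability):
    return loss_dollars * annual_probability

# Disaster A: $1 trillion loss, 1 percent annual probability
disaster_a = expected_annual_cost(1e12, 0.01)
# Disaster B: $100 billion loss, 5 percent annual probability
disaster_b = expected_annual_cost(100e9, 0.05)

print(f"Disaster A: ${disaster_a / 1e9:.0f}B per year")  # Disaster A: $10B per year
print(f"Disaster B: ${disaster_b / 1e9:.0f}B per year")  # Disaster B: $5B per year
```

Despite B's higher probability, A's expected annual cost is twice as large, which is the column's point: rank by expected cost, not by probability alone.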

A politician who proposed a campaign of preventing asteroid collisions with Earth, for example, would be ridiculed and probably voted out of office. Yet, planetary scientist John S. Lewis has estimated that there is a 1 percent chance of an asteroid of one or more kilometers in diameter hitting the Earth in a millennium, and that such a hit would probably kill on the order of 1 billion people. That works out to 10,000 deaths per year, far exceeding the annual deaths from airplane crashes.

That last bit has us looking forward to the real highest \mu_x at some point in time.... [referring to the thread on the highest mortality second in the past 100 years]
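For reference, the 10,000-deaths-per-year figure in the quoted column falls out of Lewis's estimate directly (the inputs are from the quote, the variable names are mine):

```python
# Lewis's estimate, per the column: a 1 percent chance per millennium of a
# >= 1 km asteroid strike, killing on the order of 1 billion people.
prob_per_millennium = 0.01
deaths_if_hit = 1e9
years_per_millennium = 1000

# Expected deaths per year = (probability per year) * (deaths if it happens)
expected_deaths_per_year = (prob_per_millennium / years_per_millennium) * deaths_if_hit
print(expected_deaths_per_year)  # 10000.0
```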

Anyway, I found it hard to excerpt as the whole thing is good...and I'm using it as material for an article I'm writing [along with some other stuff].