
A conversation with Lowell Bryan and Richard Rumelt – McKinsey Quarterly – Strategy – Strategic Thinking


I enjoyed the McKinsey Quarterly podcast A conversation with Lowell Bryan and Richard Rumelt – McKinsey Quarterly – Strategy – Strategic Thinking (the link is to a transcript of the interview).

They make the point that we were all looking at the wrong metrics before the mortgage/credit crisis. Measures such as GDP had no correlation with the much larger mistake being made: packaging high-risk loans as low-risk securities.

Here’s a really good analogy:

At the heart of this failure is what I call the “smooth sailing” fallacy. Back in the 1930s, the Graf Zeppelin and the Hindenburg were the largest aircraft that had ever flown. The Hindenburg was as big as the Titanic. Together these vehicles had made 620-odd successful flights when one evening the Hindenburg suddenly burst into flames and fell to the ground in New Jersey. That was May 1937.

Years ago, I had the chance to chat with a guy who had actually flown over Europe in the Hindenburg. And he had this wistful memory that it was a wonderful ride. He said, “It seemed so safe. It was smooth, not like the bumpy rides you get in airplanes today.” Well, the ride in the Hindenburg was smooth, until it exploded. And the risk the passengers took wasn’t related to the bumps in the ride or to its smoothness. If you had a modern econometrician on board, no matter how hard he studied those bumps and wiggles in the ride, he wouldn’t have been able to predict the disaster. The fallacy is the idea that you can predict disaster risk by looking at the bumps and wiggles in current results.

The history of bumps and wiggles—and of GDP and prices—didn’t predict economic disaster. When people talk about Six Sigma events or tail risk or Black Swan, they’re showing that they don’t really get it. What happened to the Hindenburg that night was not a surprisingly large bump. It was a design flaw.

This theory of large disasters makes a lot of sense to me: it almost seems a necessary condition for a large disaster that conventional metrics fail to predict it. We're not all stupid; if some metric predicted disaster, someone would take advantage of it, and in free markets each opportunistic actor forms a feedback loop that corrects the original inefficiency (in this case, averting disaster by gradually devaluing mortgage-backed securities).

The interviewees go on to say that the systemic design flaw was treating correlated securities as having independent risk. That seems like a contradiction to me: aren't correlation and risk well-established metrics? If so, couldn't the metrics available at the time have predicted this disaster?
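To see why that one assumption matters so much, here's a back-of-the-envelope simulation (my own illustration, not from the interview). It compares the chance of catastrophic losses in a pool of loans when defaults are independent versus when they share a common economic factor; the pool size, default probability, correlation, and 20% loss threshold are all made-up parameters.

import numpy as np

rng = np.random.default_rng(0)
n_loans, p_default, n_trials = 100, 0.05, 100_000

# Independent defaults: each loan is its own coin flip.
independent = rng.random((n_trials, n_loans)) < p_default

# Correlated defaults via a one-factor model: one shared "economy" draw
# pushes every loan in a trial toward (or away from) default together.
rho = 0.3                # assumed asset correlation (made up)
threshold = -1.6449      # standard normal 5% quantile, matching p_default
common = rng.standard_normal((n_trials, 1))
idiosyncratic = rng.standard_normal((n_trials, n_loans))
correlated = np.sqrt(rho) * common + np.sqrt(1 - rho) * idiosyncratic < threshold

# Chance that more than 20% of the pool defaults at once -- the kind of
# event that wipes out a supposedly safe senior tranche.
for name, sims in (("independent", independent), ("correlated", correlated)):
    pool_loss = sims.mean(axis=1)
    print(f"{name}: P(pool losses > 20%) = {(pool_loss > 0.20).mean():.4f}")

Under independence, the "more than 20% default" event essentially never shows up in the simulation, which is roughly what pre-crisis models said about senior tranches; with even moderate correlation, it happens a few percent of the time. Same loans, same individual default rates, completely different tail.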

According to the interviewees, these metrics weren't being analyzed within the scope of overall economic stability; instead, GDP (and GDP volatility) was being tracked. I don't know enough to say whether they are right or wrong, but it makes for thought-provoking reading.

Written by PoojanWagh

July 6th, 2009 at 7:36 am

Posted in finance
