When you're putting all your confidence in models that couldn't possibly model the real world perfectly, you're not going to catch every variable that could cause your model to fail. This is exactly why Wall Street needs to be heavily regulated. If they weren't allowed to use so much leverage, had bigger reserves to cushion blows, and faced harsh clawbacks of bonuses to serve as the right kind of incentive, we wouldn't have to deal with this shit. Instead, these execs are using models they don't understand.
http://www.ft.com/cms/s/0/77bf5f98-4441-11e0-931d-00144feab49a.html#axzz1FS9a2o5L
For those who cannot read the article due to the stupid FT paywall:
Don’t blame luck when your models misfire
By John Kay | Published: March 1 2011 22:31
When the financial crisis broke in August 2007, David Viniar, chief financial officer of Goldman Sachs, famously commented that 25-standard deviation events had occurred on several successive days. If you marked your position to market every day for a million years, there would still be a less than one in a million chance of experiencing a 25-standard deviation event. None had occurred. What had happened was that the models Goldman used to manage risk failed to describe the world in which it operated.
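Quick sanity check on just how absurd that claim is if you take the Gaussian assumption at face value. A rough sketch; scipy and the 252-trading-day year are my choices, not anything from the article:

```python
from scipy.stats import norm

# Probability of a single daily move 25 standard deviations below the mean,
# if daily returns really were Gaussian (the assumption baked into the models).
p = norm.sf(25)              # one-sided tail, same as P(Z < -25)
print(p)                     # ~3e-138

# Roughly 252 trading days a year, marked to market for a million years.
days = 252 * 1_000_000
print(p * days)              # ~8e-130 expected events -- effectively zero

# Several such "events" in one week means the model, not the world, broke.
```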
If the water in your glass turns to wine, you should consider more prosaic explanations before announcing a miracle. If your coin comes up heads 10 times in a row – a one in a thousand probability – it may be your lucky day. But the more likely reason is that the coin is biased, or the person who flips the penny or reports the result is cheating. The source of most extreme outcomes is not the fulfilment of possible but improbable predictions within models, but events that are outside the scope of these models.
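To make the coin bit concrete, here's a toy Bayes calculation; the 1% prior on a trick coin is an invented number, purely illustrative:

```python
# Ten heads in a row from a "fair" coin: Kay's one-in-a-thousand.
p_ten_heads_fair = 0.5 ** 10
print(p_ten_heads_fair)           # 0.0009765625

# Toy Bayes update: give even a 1% prior to the prosaic explanation
# (a two-headed trick coin, which shows ten heads with certainty).
prior_trick = 0.01
posterior_trick = (prior_trick * 1.0) / (
    prior_trick * 1.0 + (1 - prior_trick) * p_ten_heads_fair
)
print(round(posterior_trick, 3))  # ~0.912 -- the boring explanation wins
```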
Sixty years ago, a French economist described the Allais paradox, based on the discovery that most people treat very high probabilities quite differently from certainties. Not only do normal people think this way, but they are right to do so. There are no 99 per cent probabilities in the real world. Very high and very low probabilities are artifices of models, and the probability that any model perfectly describes the world is much less than one. Once you compound the probabilities delivered by the model with the unknown but large probability of model failure, the reassurance you crave disappears.
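That compounding argument is just arithmetic. A sketch with made-up numbers (the 95% and 10% figures are mine, not Kay's):

```python
# A model reports a 0.1% chance of disaster. The model itself can be wrong.
p_bad_if_model_right = 0.001   # what the model tells you
p_model_right = 0.95           # generously assume the model is right 95% of the time
p_bad_if_model_wrong = 0.10    # if the model is wrong, who knows -- call it 10%

p_bad = (p_bad_if_model_right * p_model_right
         + p_bad_if_model_wrong * (1 - p_model_right))
print(p_bad)   # ~0.006 -- six times what the "99.9% safe" model promised
```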
Techniques such as value at risk modelling – the principal methodology used by banks and pressed on them by their regulators – may be of help in monitoring the day-to-day volatility of returns. But they are useless for understanding extreme events, which is, unfortunately, the main purpose for which they are employed. This is what Mr Viniar and others learnt, or should have learnt, in 2007.
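For anyone who hasn't seen it, this is roughly what a one-day parametric VaR calculation amounts to. It's a sketch under the usual normality assumption, which is exactly why it describes ordinary wiggles and says nothing about the blowups:

```python
import numpy as np
from scipy.stats import norm

def parametric_var(daily_returns, confidence=0.99, portfolio_value=1_000_000):
    """One-day value at risk, assuming returns are normally distributed.

    Reasonable for describing ordinary day-to-day volatility; silent about
    the fat-tailed, regime-changing events that actually sink institutions.
    """
    mu = np.mean(daily_returns)
    sigma = np.std(daily_returns, ddof=1)
    # Loss exceeded on only (1 - confidence) of days *if* the model holds.
    return -(mu + norm.ppf(1 - confidence) * sigma) * portfolio_value

# Toy usage: returns simulated from a calm period (illustrative only).
rng = np.random.default_rng(0)
calm_returns = rng.normal(0.0003, 0.01, size=500)
print(f"99% one-day VaR: ${parametric_var(calm_returns):,.0f}")
```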
Yet the use of risk models of this type is one of many areas of finance in which nothing much has changed. The European Union is ploughing ahead with its Solvency II directive for insurers, which – incredibly – is explicitly modelled on the failed Basel II agreements for monitoring bank solvency. Solvency II requires that businesses develop models that show the probability of imminent collapse is below 0.5 per cent.
Insurance companies do fail, but not for the reasons described in such models. They fail because of events that were unanticipated or ignored, such as the long-hidden danger from asbestos exposure, or the House of Lords judgment on Equitable Life. They fail because underwriters misunderstood the risk characteristics of their policies, as at AIG, or because of fraud, as at Equity Funding.
Multiple sigma outcomes do not happen in real life. When all the Merchant of Venice’s ships are lost at sea during the interval, we know that we are watching a play, not an account of history. Shakespeare, no fool, knew that too. In Act V Antonio was able to write back his loss provisions in full even if it was too late to fulfil his banking covenant to Shylock.
But today the modellers are in charge, not the poets. Like practitioners of alchemy and quack medicine, these modellers thrive on our desire to believe impossible things. But the search for objective means of controlling risks that can reliably be monitored externally is as fruitless as the quest to turn base metal into gold. Like the alchemists and the quacks, the risk modellers have created an industry whose intense technical debates with each other lead gullible outsiders to believe that this is a profession with genuine expertise.
We will succeed in managing financial risk better only when we come to recognise the limitations of formal modelling. Control of risk is almost entirely a matter of management competence, well-crafted incentives, robust structures and systems, and simplicity and transparency of design.
Just look at what led to the collapse: some math whiz came up with a model to assess risk, and Wall Street started using it to package mortgages into securities because it gave them positive feedback that all their risky trades would work out:
http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all
Of course, they a) had no idea how it worked and b) didn't heed warnings that there might be limits to what the model could tell you, and everything crashed.
The damage was foreseeable and, in fact, foreseen. In 1998, before Li had even invented his copula function, Paul Wilmott wrote that "the correlations between financial quantities are notoriously unstable." Wilmott, a quantitative-finance consultant and lecturer, argued that no theory should be built on such unpredictable parameters. And he wasn't alone. During the boom years, everybody could reel off reasons why the Gaussian copula function wasn't perfect. Li's approach made no allowance for unpredictability: It assumed that correlation was a constant rather than something mercurial. Investment banks would regularly phone Stanford's Duffie and ask him to come in and talk to them about exactly what Li's copula was. Every time, he would warn them that it was not suitable for use in risk management or valuation.
In hindsight, ignoring those warnings looks foolhardy. But at the time, it was easy. Banks dismissed them, partly because the managers empowered to apply the brakes didn't understand the arguments between various arms of the quant universe. Besides, they were making too much money to stop.
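If you're wondering what "assumed that correlation was a constant" looks like in practice, here's a toy two-name Gaussian copula sketch (my own illustration, not Li's actual pricing formula); the whole joint-default number hangs on that one fixed rho:

```python
import numpy as np
from scipy.stats import norm

def joint_default_prob(p_default=0.05, rho=0.3, n_sims=500_000, seed=0):
    """Toy two-name Gaussian copula: chance that BOTH names default,
    given identical marginal default probabilities and one fixed correlation."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n_sims)
    threshold = norm.ppf(p_default)   # default when the latent variable is this low
    defaults = z < threshold
    return np.mean(defaults[:, 0] & defaults[:, 1])

for rho in (0.0, 0.3, 0.6, 0.9):
    print(rho, joint_default_prob(rho=rho))
# Joint default risk climbs steeply with rho -- and the model treats rho as a
# known constant, when in a crisis correlations lurch toward 1.
```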
Fuck Wall Street.
