Common Sense and Risk Modeling, It's Just Human Nature
Posted by Bull Bear Trader | 11/05/2008 08:21:00 AM | Financial Engineering, Modeling, Quants, Risk Management, Stochastic Models | 0 comments »
A recent New York Times article follows up on a previous discussion (see blog post) regarding modeling and risk management. While financial engineering and quants will continue to receive some criticism for the recent problems in the markets, along with the development of generally inadequate risk management models, the NY Times article further explores whether the models themselves or human failure is to blame for the current financial situation. There is no doubt that some models, especially black box models, have created some of the problems, but we cannot really fix the problem until we know the root causes. Is it simply a matter of not having sophisticated enough modeling techniques, or are poor assumptions, inadequate data, and lack of oversight also to blame?
Research at the IMF found that quantitative methods underestimated defaults for subprime borrowers, often relying on computerized credit-scoring models instead of human judgment (then again, I am not sure Moody's or S&P were much better or more timely, but I digress). On the other hand, economists at the Fed concluded that risk models had correctly predicted that a drop in real estate prices of 10-20 percent would be bad for subprime mortgage-backed securities (not a surprise), but that analysts themselves assigned a very low probability to this happening. In fact, the Fed study might point to the heart of the problem: human behavior.
As mentioned in the article, asset prices depend not only on our own beliefs, but also on the beliefs of others. In an "efficient" market, participants expect the price to reflect the true (or near true) value, even if the belief of any one person is far from that value. Of course, it is hard to model the beliefs of the market, so the beliefs, hopes, or profit motives of individual participants often come into play, at times with disastrous results. The problem is compounded when risk management models are assumed to follow some natural law, when in fact both the theory that defines a model and the inputs provided to it are stochastic in nature.
As I have argued before, we need risk models, even those based on imperfect mathematics and assumptions, but we must always consider what could happen if we are wrong. If our assumptions are too optimistic or too pessimistic, what is the fallout? We should be asking ourselves how confident we are in the mathematics used AND the assumptions made. Are there assumption scenarios that could bring a company to its knees? What models best help us understand all the possible outcomes we should be worried about? These are the questions risk managers need to keep asking. Yet in many instances, black box models with fixed assumptions were trusted blindly. Unfortunately, I suspect that even with the recent problems and their consequences hanging over our heads, we will once again start asking which models and assumptions point to the highest profits or the lowest levels of regulatory capital. After all, it's just human nature. Of course, avoiding pain is also a natural human instinct. Maybe there is a lesson to be learned there as well when the next bailout is voted on.
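To make the point about assumptions concrete, here is a toy sketch in Python of the kind of "what is the fallout if we are wrong?" exercise described above. Every number in it, from the portfolio size to the default rates, is hypothetical, and the loss model is deliberately simplistic; the only purpose is to show how quickly the estimated damage changes as assumptions move from optimistic to pessimistic.

# A toy "what if our assumptions are wrong?" stress test.
# All numbers below are hypothetical and the loss model is deliberately simplistic.

portfolio_value = 100e6   # hypothetical $100 million of mortgage-related exposure
loss_severity = 0.40      # assumed loss given default (40 cents on the dollar)

# Each scenario pairs an assumed home-price decline with an assumed default rate.
scenarios = {
    "optimistic":  {"price_drop": 0.05, "default_rate": 0.02},
    "base case":   {"price_drop": 0.10, "default_rate": 0.05},
    "pessimistic": {"price_drop": 0.20, "default_rate": 0.15},
}

for name, s in scenarios.items():
    # Toy assumption: defaults rise in proportion to the assumed price decline.
    stressed_defaults = s["default_rate"] * (1 + s["price_drop"] / 0.10)
    loss = portfolio_value * stressed_defaults * loss_severity
    print(name, "-> assumed price drop", s["price_drop"],
          "stressed default rate", round(stressed_defaults, 3),
          "estimated loss ($M)", round(loss / 1e6, 1))

Even in this cartoon version, the pessimistic assumptions produce losses more than an order of magnitude larger than the optimistic ones, which is exactly the kind of gap worth examining before, not after, it shows up in the market.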
Comparing Implied Volatility With Historical Volatility
Posted by Bull Bear Trader | 4/27/2008 11:16:00 PM | GARCH, Historical Volatility, Implied Volatility, Stochastic Models | 0 comments »
As I was browsing various blog sites I came across an article entitled "What is High Implied Volatility?" at the VIX And More blog. The article is worth a read, and is linked above, but I wanted to mention and elaborate on a simple concept from the article that often gets overlooked. As traders and professors, we often instruct students and others on how implied volatility can be compared to historical volatility to see whether an option is overpriced or underpriced, assuming that historical volatility is constant and will stay that way into the future. Of course, comparing the two in real life without such assumptions can be a little more complex.
As described at VIX and More, we need to compare the current implied volatility with a defined range of either historical volatility or implied volatility. Of importance are the time frames used, the type of past volatility (historical or implied), and the comparison universe (the same stock and/or similar stocks). Furthermore, and this is the key, we must not forget that historical volatility is, by definition and calculation, backward looking. Implied volatility, on the other hand, is forward looking and considers the market's expectation of, and potential reaction to, news and events such as earnings, dividends, FDA phase testing results, Federal Reserve actions, etc.
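For readers who want to see the backward-looking versus forward-looking distinction in code, here is a minimal Python sketch. The closing prices and the 35 percent implied volatility are made-up placeholders; in practice the prices would come from your data feed and the implied volatility from an option quote.

# Compare backward-looking historical (realized) volatility with a quoted
# (forward-looking) implied volatility. Prices and the implied vol are placeholders.

import math

closes = [100.0, 101.5, 99.8, 102.3, 103.1, 101.0, 104.2, 105.0, 103.7, 106.1]

# Daily log returns
returns = [math.log(c1 / c0) for c0, c1 in zip(closes[:-1], closes[1:])]

# Sample standard deviation of daily returns, annualized with ~252 trading days
mean_r = sum(returns) / len(returns)
var_r = sum((r - mean_r) ** 2 for r in returns) / (len(returns) - 1)
historical_vol = math.sqrt(var_r) * math.sqrt(252)

implied_vol = 0.35  # hypothetical implied volatility pulled from an option quote

print(f"Historical (realized) vol: {historical_vol:.1%}")
print(f"Implied vol:               {implied_vol:.1%}")
if implied_vol > historical_vol:
    print("Options are pricing in more movement than the recent past delivered.")
else:
    print("Options are pricing in less movement than the recent past delivered.")

The comparison only tells us that the options market expects more (or less) movement than the recent past delivered; whether that expectation is "too high" is the judgment call the original article is getting at.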
To make life easier, a relative measure, even one that considers a range or moving average, is sometimes useful. For those with a little more background and interest in mathematics and modeling, one of the many variations of the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model, or a stochastic model of the implied volatility surface, can be used. Other models also exist. I have used GARCH and find it useful when constructing option spreads, although careful parameter selection is necessary. With the right software, even a pre-programmed Excel spreadsheet, the analysis can be implemented with less pain than expected, and sometimes incorporated into defined trading rules for those platforms that allow such integration.
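As a rough illustration of what a GARCH-based forecast looks like, here is a minimal GARCH(1,1) sketch in Python. The daily returns and the parameters (omega, alpha, beta) are hand-picked placeholders rather than fitted values; in practice the parameters would be estimated from data, for example by maximum likelihood, and the resulting forecast compared against quoted implied volatility.

# Minimal GARCH(1,1) one-step-ahead volatility forecast with hand-picked
# (not fitted) parameters, purely to show the recursion.

import math

# Made-up daily returns
returns = [0.012, -0.008, 0.015, -0.021, 0.005, 0.018, -0.013, 0.007]

# Hypothetical GARCH(1,1) parameters: long-run weight, ARCH term, GARCH term
omega, alpha, beta = 0.000002, 0.08, 0.90

var_t = sum(r * r for r in returns) / len(returns)  # seed with the sample variance
for r in returns:
    var_t = omega + alpha * r * r + beta * var_t    # conditional variance recursion

forecast_vol_annualized = math.sqrt(var_t * 252)
print(f"One-step-ahead GARCH volatility forecast: {forecast_vol_annualized:.1%}")

A one-step forecast like this is the simplest possible use; multi-step forecasts and the many GARCH variants (EGARCH, GJR-GARCH, and others) build on the same basic recursion.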