Nobel Laureate Robert Engle on VaR, Systemic Risk, and Liquidity
I sat down with Nobel laureate Robert Engle in Tokyo last month to discuss an amazingly wide range of applications that he and industry practitioners have found for ARCH (autoregressive conditional heteroscedasticity) models and related research. Engle is a professor at the NYU Stern School of Business and his work is widely cited in academic journals and referenced by practitioners from hedge fund managers to risk management professionals.
This is the first part of a two-part series. In this installment, we will cover the development of the ARCH model, the global financial crisis, systemic risk, and forecasting liquidity with ARCH models.
CFA Institute: Let’s start from the beginning. What was the motivation behind the ARCH model, and how did you come up with the idea?
Robert Engle: Well, the ARCH model was something that I studied and developed when I was on sabbatical from the University of California, San Diego, at the London School of Economics.
It must be hard to concentrate on work in San Diego.
[Laughter] When you are on sabbatical, you have a chance to let your mind wander towards what you think is the most interesting question. The question that I was really interested in then was macroeconomics. This was the time when rational expectations models were thought to imply that policy couldn’t do anything, the impotence of policy. That’s because if you anticipate what policy is going to do, the private sector will undo it. [The hypothesis I had was that] if you have some uncertainty about how this policy will work, or uncertainty about whether the policy is actually going to be put in place, then you wouldn’t undo it. You wouldn’t undo it completely anyway. So I thought what we really need to do is to build models which allow uncertainty that changes over time. But in fact, the only models that were around were models in which people looked at variance. And the variance is just a number. So how do you go from a variance that is a constant to a variance that is a whole time series? And that’s what I was trying to figure out.
So how did you go about solving that problem?
I usually describe this as three totally unrelated ideas coming together. The first was the literature on Kalman filters, which continuously build models conditional upon what we know today; I was very influenced by it. The second was that conditional means are important, which offered a very nice insight: we can talk about conditional variances in the same sort of way. And the third idea was bilinear models. Clive Granger had been interested in bilinear models, and he had developed a test. One time I was sitting at a computer at San Diego and he said, “Take a look at the residuals from this model.” So I showed him the plot. He said, “Square them. Fit an autoregression.” It was very significant! Even though there was no correlation in the residuals, there was correlation in the squared residuals. I knew that empirically you might find that sort of thing, but I didn’t really think that was the perfect test for bilinear models. It was the perfect test for something else, and I had to figure out what it was. That’s how I came up with the ARCH model.
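For readers who want to try the diagnostic Granger suggested, here is a minimal sketch in Python: square a model’s residuals and regress them on their own lags, which is essentially the ARCH LM test. The simulated return series, the trivial mean model, and the five-lag choice are illustrative assumptions.

```python
# Sketch of "square the residuals, fit an autoregression".
# Data, mean model, and lag order are illustrative placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
returns = rng.standard_normal(1000) * 0.01     # stand-in for a real return series

resid = returns - returns.mean()               # residuals from a constant-mean model
sq = resid ** 2

lags = 5
y = sq[lags:]
X = sm.add_constant(
    np.column_stack([sq[lags - k:len(sq) - k] for k in range(1, lags + 1)])
)

ols = sm.OLS(y, X).fit()
lm_stat = len(y) * ols.rsquared                # Engle's LM statistic: T * R^2
print(f"ARCH LM statistic: {lm_stat:.2f}")
# statsmodels.stats.diagnostic.het_arch performs the same test in one call.
```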
Fascinating. I think this is really something that turns out to have many profound implications. I remember that, about 10 years ago, the developer of arguably the most popular risk analytics software launched a new version of its risk models, changing the whole forecasting process to a GARCH process. So this is something that as recently as 10 years ago was still considered a profound innovation in the industry, even though much of the work was done 25, 30 years ago.
[The developer] talked to me quite a bit about it. I teach my students on Wall Street [the basic ARCH model] every day. They all feel like they’ve learned something that they can use, and I get all these emails saying, “I showed this to my boss,” or “We had a problem at work, I fit this model, and people were very interested.” So it is not fully absorbed in financial practice yet.
I think one very popular application for ARCH analysis is the value at risk (VaR) model. Before the global financial crisis, people generally thought if they got a value at risk model, they knew where they were in terms of risk exposures. The financial crisis changed that perception to a large extent.
There were certainly lots of things that stopped working right during the financial crisis. Risk models are clearly on that list.
We have this website called V-Lab, which publishes volatility forecasts every day. We can go back to what it was publishing during the financial crisis to see how its calculations actually survived the financial crisis. It turns out there was almost no deterioration in the value at risk performance during the financial crisis. It was able to predict volatility and extreme quantiles quite well.
The problem with it is that it only gives you one day’s notice. So the simple answer is: yes, I can predict a financial crisis one day ahead. That is what value at risk measures. So the problem with VaR is not only that it’s a little bit complicated to calculate but also that it doesn’t summarize the risks very well. The risks that we’re interested in are not just one-day-ahead risks, but much-longer-horizon risks. I think that’s an important lesson from the financial crisis.
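As a rough illustration of a one-day-ahead VaR of the kind V-Lab reports, the sketch below fits a GARCH(1,1) model using Kevin Sheppard’s arch package and converts the volatility forecast into a 99% one-day VaR under a normal-innovation assumption. The simulated return series and the confidence level are illustrative choices, not the V-Lab methodology itself.

```python
# Minimal sketch: one-day-ahead VaR from a GARCH(1,1) volatility forecast.
# The return series is simulated, so the numbers are purely illustrative.
import numpy as np
from scipy.stats import norm
from arch import arch_model

rng = np.random.default_rng(1)
returns = rng.standard_normal(2000)            # placeholder daily returns, in percent

res = arch_model(returns, vol="GARCH", p=1, q=1, mean="Constant").fit(disp="off")

fcast = res.forecast(horizon=1)
sigma = np.sqrt(fcast.variance.values[-1, 0])  # one-day-ahead volatility forecast

alpha = 0.01                                   # 99% confidence level
var_1d = -(res.params["mu"] + norm.ppf(alpha) * sigma)
print(f"One-day 99% VaR: {var_1d:.2f}% of portfolio value")
```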
So how can we address that? Was this the inspiration that led to your more recent research in systemic risk?
What we learned from the financial crisis is that the failure of financial firms can later have a serious impact on the real economy. And consequently we don’t want to leave it to chance whether such financial institutions fail. We have set up new and improved regulatory structures to try to assess the financial health of these financial institutions. What we have done, again in V-Lab, is to try to come up with a way of doing this using only publicly available information on the firms [applying] statistical methods based on the ARCH model. The question is whether these financial institutions have sufficient capital buffers so that they can withstand a financial crisis. How much capital does a financial institution need to raise in order to continue to function normally if we have another financial crisis? And we give this a name, SRISK, for systemic risk.
So how do you measure that?
That’s the heart of it. We look at the relationship between the market cap and the book value of liabilities. That relationship is the key relationship. If a firm’s equity, which is sort of its loss-absorbing cushion, is too small relative to its liabilities, this firm no longer functions normally.
We have a way of measuring correlations and volatility, so we construct a time-varying beta, which tells us what happens to this firm’s market cap if the market is in a crisis, say, if it collapses by 40% over the next six months.
We then look at the firm’s market cap relative to its liabilities to see whether it has fallen too far and how much capital the firm needs to raise to bring it back up to a normal level.
We calculate this weekly for 1,200 firms around the world. We add them all up to get an estimate of what the cost of bailing out the financial sector would be for the entire global economy if we have another crisis.
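The published SRISK formula reduces this to a simple capital-shortfall calculation. The sketch below shows that arithmetic only; the 8% prudential capital ratio and the LRMES input (the expected fractional loss in equity if the market falls roughly 40% over six months) are assumptions for illustration, since V-Lab estimates LRMES from volatility and correlation models.

```python
# Capital-shortfall arithmetic behind SRISK (sketch only).
# The 8% prudential ratio and the LRMES figure are illustrative assumptions.

def srisk(market_cap: float, book_liabilities: float,
          lrmes: float, k: float = 0.08) -> float:
    """Expected capital shortfall of a firm in a crisis.

    market_cap       -- current equity market value
    book_liabilities -- book value of liabilities
    lrmes            -- expected fractional equity loss in a ~40%, six-month
                        market decline (between 0 and 1)
    k                -- prudential capital ratio (8% assumed here)
    """
    stressed_equity = (1.0 - lrmes) * market_cap
    shortfall = k * book_liabilities - (1.0 - k) * stressed_equity
    return max(shortfall, 0.0)      # a negative shortfall means a capital surplus

# Illustrative firm: $50bn market cap, $900bn liabilities, 60% expected equity loss.
print(f"SRISK: ${srisk(50e9, 900e9, 0.60) / 1e9:.0f}bn")
```

Summing these firm-level shortfalls across institutions gives the aggregate estimate described above.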
Would you say the SRISK is a response to the financial crisis? As we have been saying, VaR may not be the appropriate measure. People may not have used it properly.
Right. SRISK is more like a stress test that is applied to financial firms. It is used to measure systemic risk, which is not the same as firm or portfolio risk.
So this should probably be of the utmost importance to financial regulators.
That’s right.
What are some of the new frontiers in applying the ARCH model?
I think the measurement and forecasting of liquidity is a very important frontier.
When we talk about illiquidity, the S&P 500 is maybe not what we really have in mind. I think we should talk about CDS, or municipal bonds, or emerging market bonds. [In these markets], if you decide to put out a big order, you will really change the price. I think we have a lot of things still to do on that front, and some of the same tools that are useful for measuring volatility are also useful for [measuring liquidity].
In fact, liquidity has changed enormously over the last couple of decades. It has improved dramatically from the old days of trading in quarters, eighths, and sixteenths to the kinds of liquidity we see today. But there are a lot of people who feel the markets are not as easy to trade as they used to be, so maybe the measurement of liquidity isn’t being done in exactly the right way.
A variety of measures of liquidity are widely used but I think ultimately it’s really the pricing impact that you want to know about. One of the most popular measures of liquidity is the Amihud measure. Yakov [Amihud] uses the absolute value of returns over some period, say a day, divided by the volume [to measure illiquidity]. If absolute returns change a lot on not much volume, it’s an illiquid market. If you can trade a lot of volume without much change in price, that’s a liquid market. A lot of people have used this and have really had pretty good success in finding that [liquidity defined in this manner] explains important things like the liquidity premium.
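For concreteness, here is a minimal sketch of the Amihud calculation as described: the daily ratio of absolute return to dollar volume, averaged over a window. The simulated price and volume series are placeholders.

```python
# Amihud illiquidity: average of |daily return| / daily dollar volume.
# Prices and volumes below are simulated placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 252                                              # roughly one year of trading days
prices = 100 * np.exp(np.cumsum(rng.standard_normal(n) * 0.01))
volume = rng.integers(100_000, 1_000_000, size=n)    # shares traded per day

df = pd.DataFrame({"close": prices, "volume": volume})
df["return"] = df["close"].pct_change()
df["dollar_volume"] = df["close"] * df["volume"]

amihud = (df["return"].abs() / df["dollar_volume"]).mean()
print(f"Amihud illiquidity: {amihud:.3e}")           # higher means less liquid
```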
That seems to be the focus of current research, using liquidity as a characteristic, or factor, in explaining portfolio returns. Being able to forecast liquidity obviously has important implications in that context.
You don’t want to fool yourself into thinking that whatever it is today is what it will be.
Volatility is at the heart of forecasting this kind of illiquidity. But you also have to forecast volume and the correlation between volume and volatility. This is one of the new things we are doing in V-Lab. We are planning on expanding [to include illiquidity estimates for a few hundred assets every day].
Please note that the content of this site should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute.
The ongoing research on the impact of liquidity is headed in the right direction and calls for a collective effort to spare the world another financial crisis. Prof. Engle would like to see a better risk measure that gives an estimate over a much longer horizon than a one-day-ahead measure like VaR. I’m thrilled by that, knowing that the future holds uncertainty.
Fantastic thoughts by Prof. Engle.
Hi Olufemi,
I was very impressed by Prof. Engle’s insights as well and am very excited to share them with my fellow CFA charterholders and investors. Watch out for the next installment, where we will discuss the application of risk modeling in portfolio management and high-frequency trading.
Warm regards,
Larry
It would be interesting if he could have an in-depth discussion on how practitioners can make use of V-Lab!
Hi Sheldon,
Thank you very much for your interest. I am sure that Prof. Engle will be very happy to know that as well.
If there are others out there who share Sheldon’s sentiment, please voice your interest here as well. We’ll certainly take that into account when planning our future coverage.
Warm regards,
Larry
Sir
Wonderful learning. The relationship of market cap to outstanding liabilities is new learning for me. I would like to know how liquidity can be measured.
Regards
Jyoti Ranjan
Dear Jyoti,
I’m glad you found the blog post helpful. The Amihud measure of stock illiquidity mentioned in the blog post is defined as the daily ratio of absolute stock return to its dollar volume, averaged over some period, say, a year. Hope this answers your question.
Warm regards,
Larry
Good insights.
Dear Venkata,
I’m glad you found the blog post helpful. The next installment of this series will be published next week. Stay tuned!
Warm regards,
Larry