Practical analysis for investment professionals
27 August 2012

Fact File: S&P 500 Sigma Events

A seemingly endless battle is waged between believers in the efficient market hypothesis, such as Eugene Fama, and believers in behavioral finance, such as Daniel Kahneman. Regardless of your perspective, an analysis of the S&P 500’s history of sigma events provides interesting ground on which to wage that battle.

For example, from 3 January 1950 through 31 July 2012, the average daily return of the S&P 500 was 0.03%, and the standard deviation of daily returns was 0.98% (source: Yahoo Finance, CFA Institute). In percentage terms, these figures are remarkably close to the standard normal distribution’s mean of 0 and standard deviation of 1. This suggests that daily returns for the S&P 500 closely approximate the normal distribution and that returns follow a random walk.
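For readers who want to reproduce these summary statistics, here is a minimal sketch in Python, assuming a CSV of daily S&P 500 closing prices; the file name and column labels are placeholders, and the exact figures will vary slightly with the data source:

```python
import pandas as pd

# Hypothetical input (an assumption): daily S&P 500 closes, 3 Jan 1950 - 31 Jul 2012,
# with "Date" and "Close" columns, e.g., exported from Yahoo Finance.
prices = pd.read_csv("sp500_daily_closes.csv", parse_dates=["Date"], index_col="Date")

returns = prices["Close"].pct_change().dropna() * 100   # daily returns, in percent

print(f"Mean daily return:  {returns.mean():.2f}%")     # roughly 0.03%
print(f"Standard deviation: {returns.std():.2f}%")      # roughly 0.98%
```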


What if you feel that mean and standard deviation are not the only ways to describe a probability distribution? Then these data do not tell the entire story. Many researchers have noted anomalies in return data, such as extreme positive and negative daily returns: the proverbial “fat” tails that characterize stock market returns.

Here is a look at the distribution of the S&P 500’s daily returns categorized by how extremely those returns deviated from the average daily return of 0.03%.


Number of S&P 500 Sigma Events (3 January 1950 – 31 July 2012)

Source: CFA Institute.


As you can see, the overwhelming majority of daily returns fall within one standard deviation, or sigma, of the mean return of 0.03% per day. This is a characteristic discussed less frequently than the stock market’s “fat tails”: daily returns are sharply peaked, or leptokurtic, around the mean. The normal distribution holds that roughly 68% of returns should occur within one standard deviation of the mean, yet the actual figure is a gigantic 95.6%.
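As a rough check on that 95.6% figure, one can express each day’s return as a deviation in sigmas and compare the empirical share within one sigma with what a normal distribution predicts. A sketch under the same assumptions as the earlier snippet (hypothetical file and column names):

```python
import numpy as np
import pandas as pd
from scipy.stats import norm

prices = pd.read_csv("sp500_daily_closes.csv", parse_dates=["Date"], index_col="Date")
returns = prices["Close"].pct_change().dropna() * 100
z = (returns - returns.mean()) / returns.std()          # each day's deviation, in sigmas

print(f"Empirical share within 1 sigma: {(z.abs() < 1).mean():.1%}")         # ~95.6% per the article
print(f"Normal share within 1 sigma:    {norm.cdf(1) - norm.cdf(-1):.1%}")   # ~68.3%

# Counts by whole-sigma bucket, mirroring the chart above
# (bucket 0 = within 1 sigma, bucket 1 = between 1 and 2 sigma, and so on).
print(np.floor(z.abs()).astype(int).value_counts().sort_index())
```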

Here is the numerical breakdown of the graph above:

S&P 500 Sigma Events

Source: CFA Institute.

Market observers have noted that financial markets have become more volatile over time. A look at the number of sigma events by decade makes that clear.

Sigma Events by Decade

Source: CFA Institute.

For example, the number of normal trading days, as measured by the percentage of trading days that register as less than a one sigma event, has fallen sharply since 2000: the 2000s saw such days only 89.54% of the time, compared with the full-period average of 95.56% and the 1950s peak of 98.61%. That said, the most “normal” decade, as measured by the smallest deviations from that average, was a recent one: the 1990s.

There has also been a doubling of two sigma events, a tripling of three sigma events, and so forth. Careful scrutiny, however, reveals something extraordinarily interesting: Just two years of daily market activity, 1987 and 2008, account for 56% of all five sigma and above events! In 1987 there were six events of five sigma or greater, and in 2008 there were 18 such occurrences. Wow! These numbers compare with an average of 0.68 five sigma and above events per year. So, in addition to daily sigma events to be cautious of, there are clearly high sigma years for investors to be wary of, too.
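A quick way to see how concentrated those extreme days are is to count five-sigma-or-greater days by calendar year. A sketch, again under the same data assumptions as the snippets above:

```python
import pandas as pd

prices = pd.read_csv("sp500_daily_closes.csv", parse_dates=["Date"], index_col="Date")
returns = prices["Close"].pct_change().dropna() * 100
z = (returns - returns.mean()) / returns.std()

extreme = z[z.abs() >= 5]                                # days at five sigma or beyond
print(extreme.groupby(extreme.index.year).size())        # 1987 and 2008 should dominate
print(f"Average per year: {len(extreme) / 62.58:.2f}")   # roughly 0.68
```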

What about the expected daily occurrence of sigma events? Here is the historical record:

Historical Record of Sigma Events

Source: CFA Institute.

What the above chart shows is that, out of the 251.62 trading days in an average year, there are 129.2 days, on average, on which your return falls between 0.03% and 1.02%. Similarly, there are, on average, 3.85 days per year on which your loss falls between −0.98% and −1.99%, that is, between a one sigma and a two sigma loss.
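The per-year figures in the table follow directly from the raw bucket counts: divide the number of days in a bucket by the 62.58 years in the sample (equivalently, divide by the total number of trading days and scale by 251.62 trading days per year). A small sketch; the example count is back-calculated from the article’s 129.2 figure and is illustrative only:

```python
YEARS_IN_SAMPLE = 62.58   # 3 January 1950 through 31 July 2012

def expected_days_per_year(count_in_bucket: float) -> float:
    """Expected number of days per year that fall in a given sigma bucket."""
    return count_in_bucket / YEARS_IN_SAMPLE

# A bucket holding roughly 8,085 of the ~15,746 trading days works out to ~129.2 days a year.
print(f"{expected_days_per_year(8085):.1f}")
```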

Beyond two sigma, the expected frequencies shrink to fractions of a day per year and become harder to interpret, so here are the data rescaled to years instead of days:

Sigma Events Scaled by Years

Source: CFA Institute.

Here you can see that a seven sigma up day can be expected once every 31.29 years, and a 10 sigma or greater down day can be expected once every 62.58 years.
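That rescaling is simply the inverse frequency: the 62.58 years in the sample divided by the number of times the event occurred. Two seven-sigma up days over the period imply one every 31.29 years; a single 10-sigma-or-greater down day implies one every 62.58 years. In code (the counts are back-implied from the article’s figures):

```python
YEARS_IN_SAMPLE = 62.58

def once_every_n_years(event_count: int) -> float:
    """Average number of years between occurrences of a given sigma event."""
    return YEARS_IN_SAMPLE / event_count

print(once_every_n_years(2))   # 31.29 years, e.g., seven sigma up days
print(once_every_n_years(1))   # 62.58 years, e.g., a 10+ sigma down day
```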

One famous piece of oft-repeated wisdom doled out by the buy-and-hold community is that missing just the 10 best up days results in a significantly lower total return; consequently, you should always stay invested, lest you miss those days. Indeed, when framed this way, there is truth to the statement. One dollar invested on 3 January 1950 would have turned into $81.79 by 31 July 2012. Had you missed the 10 best-performing days, however, you would have only $38.95 instead of $81.79.

But this is only half the story. What if you were in fact a brilliant market timer and managed to miss just the 10 worst-performing days in market history? Your $81.79 would actually be a whopping $214.41. In this scenario you would have experienced each of the 10 best-performing days yet missed all 10 of the worst trading days.


So what is the result of missing both the 10 best and the 10 worst trading days? Investors’ $1 would have grown to $102.94. Because this result is well above the $81.79 earned by buying and holding, it does not make sense to justify a buy-and-hold strategy solely on the premise that it makes more money than market timing.
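Here is a minimal sketch of how these scenario figures can be computed from the daily series, under the same hypothetical input as the earlier snippets (exact dollar values will vary slightly with the data source):

```python
import pandas as pd

prices = pd.read_csv("sp500_daily_closes.csv", parse_dates=["Date"], index_col="Date")
returns = prices["Close"].pct_change().dropna()          # daily returns, as decimals
growth = 1 + returns                                      # daily growth factors

best10 = returns.nlargest(10).index
worst10 = returns.nsmallest(10).index

print(growth.prod())                                      # buy and hold: roughly $82 per $1
print(growth.drop(best10).prod())                         # miss the 10 best days
print(growth.drop(worst10).prod())                        # miss the 10 worst days
print(growth.drop(best10.union(worst10)).prod())          # miss both sets
```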

And just for giggles, what if you had perfect market timing and were only invested on up days? Your $1 investment would have grown to be:

$335,288,501,296,558,000,000,000.

For those of you not up on your large numbers, that is roughly $335 sextillion, or about a third of a trillion trillion.
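In this framing, perfect timing simply compounds the positive days only (and its mirror image compounds only the negative ones). A sketch under the same assumptions as above; the precise figure depends on the data source:

```python
import pandas as pd

prices = pd.read_csv("sp500_daily_closes.csv", parse_dates=["Date"], index_col="Date")
returns = prices["Close"].pct_change().dropna()
growth = 1 + returns

print(f"{growth[returns > 0].prod():,.0f}")   # invested only on up days: on the order of 10**23
print(growth[returns < 0].prod())             # invested only on down days: vanishingly small
```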

Last, the largest positive sigma event of all time occurred on 13 October 2008, when the S&P 500 surged upward, registering an 11.82 sigma event. Meanwhile, the largest negative sigma event was the famous crash of 19 October 1987, a whopping 20.98 sigma event!
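Identifying those record days is a one-liner once each day’s return is expressed in sigmas, again under the same hypothetical input:

```python
import pandas as pd

prices = pd.read_csv("sp500_daily_closes.csv", parse_dates=["Date"], index_col="Date")
returns = prices["Close"].pct_change().dropna() * 100
z = (returns - returns.mean()) / returns.std()

print(z.idxmax(), round(z.max(), 2))   # expected: 13 October 2008, roughly +11.8 sigma
print(z.idxmin(), round(z.min(), 2))   # expected: 19 October 1987, roughly -21.0 sigma
```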



All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.


About the Author(s)
Jason Voss, CFA

Jason Voss, CFA, tirelessly focuses on improving the ability of investors to better serve end clients. He is the author of the Foreword Reviews Business Book of the Year finalist The Intuitive Investor and the CEO of Active Investment Management (AIM) Consulting. Voss also sub-contracts for the well-known firm Focus Consulting Group. Previously, he was a portfolio manager at Davis Selected Advisers, L.P., where he co-managed the Davis Appreciation and Income Fund to noteworthy returns. Voss holds a BA in economics and an MBA in finance and accounting from the University of Colorado.

Ethics Statement

My statement of ethics is very simple, really: I treat others as I would like to be treated. In my opinion, all systems of ethics distill to this simple statement. If you believe I have deviated from this standard, I would love to hear from you: [email protected]

42 thoughts on “Fact File: S&P 500 Sigma Events”

  1. M says:

    Could the author tell us how often 7, 8, 9 and 10 sigma events should occur if equity prices really were normally distributed?

  2. Hi M,

    Here are the probability percentages of these events occurring if stock returns were normally distributed:

    SIGMA/PROBABILITY OF OCCURRENCE
    1 68.270%
    2 95.450%
    3 99.730%
    4 99.994%
    5 99.999%
    6-10 increasingly asymptotic to 100%

    Hope that helps.

    With smiles,

    Jason A. Voss, CFA

    1. Alexander says:

      You have included 1 and 2 sigma events, both in the 1st deviation. That’s why you got this big number…

      1. Hello Alexander,

        To which number are you referring?

        With smiles,

        Jason

  3. Jason says:

    Assuming a normal distribution, one would expect a 7-sigma occurrence once every 3,105,395,365 years.

    Using Excel and MATLAB, a paper from UCD Business School calculated the probability of up to 25-sigma events. Needless to say, the calculations are tongue-in-cheek, given that they quickly become meaninglessly large.

    http://www.smurfitschool.ie/academicsampresearch/workingpapers/wp_08_13.pdf

  4. Hi Jason,

    Thank you for sharing the data about the 25-sigma event. I actually have the formula built into Excel, which is what I used to calculate the 10-sigma event. But you are correct that the numbers become meaningless. In fact, they are essentially meaningless at the 10-sigma level, and yet reality keeps intruding on the perfect world of the determinists.

    I hope that you enjoyed the data reported above. I loved putting the numbers together.

    With smiles!

    Jason
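For readers who want to check recurrence figures like the seven-sigma estimate discussed in the exchange above, here is a minimal sketch assuming normally distributed daily returns and roughly 252 trading days per year; the exact answer is quite sensitive to both assumptions:

```python
from scipy.stats import norm

def years_between_events(k_sigma: float, trading_days_per_year: float = 252.0) -> float:
    """Expected years between up moves of at least k sigma, if returns were normal."""
    p = norm.sf(k_sigma)                      # one-sided tail probability P(Z >= k)
    return 1.0 / (p * trading_days_per_year)

print(f"{years_between_events(7):.2e}")       # on the order of 3 x 10**9 years
```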

  5. Ashok says:

    Really simple and powerful way to analyse asset class returns. However, does the use of standard deviation result in a self-fulfilling statistic of a normal distribution pattern? I worked out plus and minus 1 sigma and found that, on average, 77% of returns lie within this range. Is this somehow a result of using standard deviation in the first place?
    Looking forward to your response.

  6. Hello Ashok,

    The standard deviation reported is for daily returns. Because your 1 sigma range is different from the 1 sigma range I got, I am not sure what numbers you used. Separately, in the piece I stated that the mean and standard deviation approach 0 and 1 (0.03% and 0.98%), which suggests returns are normally distributed. However, I then point out that the actual distribution of returns is strongly leptokurtic (i.e., peaked) and that the tails are indeed “fat.” That would argue that the distribution of returns for the S&P 500 is not normal.

    Separately, I am not sure I understand your point about the self-fulfilling nature of using standard deviation. Please feel free to clarify.

    With smiles,

    Jason

    1. Ashok says:

      Jason, thanks for your response. I used the equity benchmark index for the Indian equity markets. Yes, I did use daily mean and standard deviation. The numbers I got are different from the S&P 500’s, obviously, but they are not too different, I guess. My question is largely to do with the basic construct of the normal distribution. I calculated the number of times the returns fell into the range of +1 and −1 standard deviations, and I was interested in seeing this on a yearly basis. Yes, most of the time the distribution was leptokurtic. The +1/−1 SD occurrences range between 75% and 85% (I am rounding off quite a bit here for simplicity). What I am happy about is that almost all of the time the distributions can pass for a normal distribution. Did the chances of getting a normal distribution improve because I used ‘standard deviation’? After all, isn’t standard deviation computed from the mean and the sum of squared deviations? If I used anything else to measure risk, other than standard deviation, would I still get such near-perfect central tendencies?

  7. Hello again Ashok,

    Ahh, thank you for clarifying that you were looking at the returns of Indian equity markets.

    If you look at the formula for the standard normal curve, you will see that you cannot escape standard deviation in shaping the height and width of the curve. So the concept of standard deviation is inextricably linked with the idea of describing nature using curves and calculus to calculate the area under the curve, that is, the probability.

    The idea of what the proper measure of ‘risk’ is constitutes an entirely different discussion. Here I would agree with you that your distribution would look different if a more realistic, asymmetric definition of risk (i.e., the chance of loss) were used. Instead of a symmetric distribution you would have an asymmetric one. But this is a subject well outside the bounds of the discussion.

    Thanks for your points of view and your questions!

    Jason

  8. Ashok says:

    Thanks Jason for your response.

  9. Hi Ashok,

    I might add that how to properly measure risk is one of my very favorite topics. In fact, I wrote my masters thesis on this very subject. It sounds as if it is a pet topic of yours, too?! If so, I would love to hear more about your work here.

    In short, standard deviation and beta are not descriptors of risk, in my opinion. They are statistical measures that describe variation and slope in curvilinear geometry and linear geometry, nothing more.

    With smiles,

    Jason

    1. Ashok says:

      Hi Jason – Yes, I am currently working on developing an asset allocation model under the MVO framework. But first I am trying to understand, rather qualitatively, the risk of asset classes and their behavior in pairs, i.e., covariance. I think risk, and a statistical measure of it such as standard deviation, has its utility largely as an input to MVO, and the theory behind it is that risks are not additive.

      I think there is low correlation even among extreme events. For example, I took a vector of the 15 worst returns between India Equity and MSCI EM ex Japan (arrange the daily returns of these two in a single continuous array and pick the worst, using a rank function). Then, for every such return, I picked the other asset class’s return on the same day. I find the correlation between these two arrays is +0.36. I think that is low, given that I am testing the tail.

      Are there any other tests (especially non-parametric ones) that you think are useful for testing and understanding risks between asset classes? What I like about non-parametric tests is that they are intuitive and easy to explain.

      1. Hans says:

        Hello,

        You said that your focus is also on altering volatility and beta to reflect chances of underperformance. Can you point me to some links/work that expand on this? You can mail me on [email protected]

        Thanks so much!
        Regards
        Hans

  10. Hi Ashok,

    I have never built an asset allocation model before and don’t look at risk in this way so I am unable to help you out here. My work on risk has focused on:

    * Defining risk – in every industry, except finance, risk is defined as ‘the chance of loss’

    * Altering standard deviation and beta so that the numbers only reflect underperformance

    * Development of new Sharpe and Treynor ratios that incorporate actual risk measures

    * An examination of different ways of measuring alpha

    * Development of risk categories

    * Development of qualitative measures of risk

    Stay tuned to The Enterprising Investor and you may hear more about these subjects.

    Be well!

    Jason

  11. Ashok says:

    Thanks Jason. Good to know that similar concepts of risk are being used in other industries as well. I have always been thinking about it in the context of stock or asset class prices. Maybe it is better used and understood in other industries than in finance!

    Regards
    Ashok

  12. Jimmy Dotiwala says:

    Hello Jason,

    This is a fantastic article on my topic of interest. The human mind simply takes numbers at face value. The problem with probabilistic measures of risk is the tendency to ignore the size of the underlying risk that ‘hides’ behind the probability of an event. The sigma level does not capture this aspect of the risk that comes with fat tails; ironically, it creates a false sense of security rather than an alarm. I think you explained that beautifully in your article. Do you ever intend to shift your focus and explain non-linear measures? That would be very interesting, because beta, standard deviation, and regression models are already much debated.

    Regards,
    Jimmy

  13. Hi Jimmy,

    Thanks very much for your feedback – I am very pleased that you enjoyed the piece.

    As for the non-linear measures of risk…maybe. At one point I considered myself to be very knowledgeable and cutting edge about such things. However, because the limitations of conventional risk measures are fairly obvious by now, I am guessing that research has to have been written that addresses some of these shortcomings.

    So before commenting further I would want to fill in my 15 year knowledge gap. In other words, I would want to respectfully read the current cutting edge research on the subject before commenting. It may very well be the case that the material I would want to mention is someone else’s entire research interest.

    If you have resources you would like to point me to, I would love it.

    With smiles!

    Jason

  14. Jimmy Dotiwala says:

    Hi Jason,

    I am not aware of such a resource but I will certainly let you know if I come across one.

    Regards,
    Jimmy

  15. Eden says:

    Great piece. Have you performed this study for Monthly returns as well? The results would be intriguing.

  16. Hi Eden,

    I have not performed the study for monthly returns, but may do so in the future.

    Thank you for the feedback!

    Jason

  17. Ashok says:

    If I understand the concept correctly, I don’t think it is the 1M or 6M returns that matter. The point is that returns tend to converge when you deal with a higher and higher number of samples. Even with 6M returns, the tendency toward a normal curve would appear, and that has to do with basic central tendency. The simplicity of 1D returns is that you get a mean of 0 and an SD of 1, which is straight out of the statistics textbook. If you’re running 6M and comparing with 1D, take care to adjust the sample size, n.
    Jason – do you reason similarly, based on your experience with other industries?
    Thanks in advance.
    Regards
    M Ashok, CFA

  18. Hello Ashok,

    In general, I feel that you are correct. I would add that just because the mean and standard deviation approximate normal does not necessarily mean that the entire curve approximates normal. After all, mean and standard deviation are essentially two point estimates of a two-dimensional curve. I can think of many geometrical shapes that would have similar points but radically different shapes. In fact, that is what we have with the S&P 500’s returns. To me, the most surprising thing about the above data is the extreme leptokurtic quality of the returns. Typically we only hear about the “fat tails” of market returns, but what about the giraffe head? This is strongly non-normal.

    I would also point out that I feel changing the time scale will change the results, just as changing your sieve changes the texture of your refined spice. Yet the spice will taste the same even if it does not look the same.

    Ashok, you point out the importance of adjusting for dispersion (n) in making comparisons. Yet this adjustment for Brownian motion is essentially a normal distribution/stochastic/random transform, and applying such a transform to a curve that is certainly non-normal is probably tough to justify. Take a look at this other Enterprising Investor piece on this topic: http://blogs.cfainstitute.org/investor/2012/05/29/holes-in-some-of-finances-critical-assumptions-an-interview-with-massif-partners-kevin-harney-part-one/

    As always, thanks for your invaluable contribution!

    Jason

  19. Ashok says:

    Agreed on the sampling part. Yes, the leptokurtic distribution (over the long term) did take me by surprise. The fat tails are compensating as well, in shorter time frames; I mean that a negative fat tail event is most often followed by a positive fat tail event. The interval could vary, but it is not very long. My point is that one always hears about tail risk, but people don’t discuss the upside tail risk, do they?

  20. Emlyn says:

    Nice piece of research here, and almost more so the generated discussion within the commentary.

    If I may weigh in on a few of the raised issues.

    The term of your returns will most certainly have a large impact on the obtained return distribution. And, unfortunately, this need not approach normality. An oft-quoted but somewhat misinterpreted stylized market fact à la Cont (2001) is that of aggregational Gaussianity, with the rule of thumb being that returns can be considered close enough to normality from 1M onwards (Bingham & Kiesel (2004)). This is definitely not true in certain markets, under both rolling and resampled x-month returns. In your case, your dataset is large enough to choose independent periods, thus alleviating any autocorrelation issues.

    In addition, depending on the period chosen, one will find quite severely differing results, even if the chosen periods are somewhat overlapping.

    That said, what one can posit is that returns are ergodic. However, ergodicity is very difficult to

    In terms of non-linear risk measures, I would suggest coupling this type of analysis with simple VaR and CVaR measures. The nice point here is that, due to the size of your dataset, you can quite easily use a kernel density estimator to find the specified percentile and the mean below it without having to worry too much about estimation/sample-size effects.

    I would also suggest considering Omega. If you are not aware, there is a great picture of two extremely different distributions superimposed with the statement: ‘these distributions have the same mean and variance’. This would capture the potential differences between negative and positive ‘fat tail’ events, and would also allow you to quantify with a bit more rigour what ‘fat tail’ really means and the extent of its effects.

    Finally, in terms of the statement that a negative fat tailed event is most often followed by a positive fat tailed event – I am not so sure. One is uniquely aware at a base level of the gain/loss asymmetry within returns which immediately points to there being more negative extreme events than corresponding positives. However, in order to properly analyse this type of statement, one should really make use of survival (reliability) analysis techniques. While typical survival analysis models the time until ‘death’ of a population for example, one can quite easily define survival as being within certain sigma bounds and ‘death’ being an extreme value. Thus one can accurately capture the dynamics of the recurrence times between extreme negative events or between extreme positive events and, more importantly, the recurrence time between moving from a neg (pos) extreme to a pos (neg) extreme. In essence, one would focus on the probability of moving from one extreme event to the next extreme (for example, down-to-up), conditional upon past survival (no extremes). The hazard function considers exactly this.

    I am always surprised by how under-utilised this type of analysis is in financial research.

    Yours in research,
    Emlyn

  21. Emlyn says:

    Apologies on the unfinished lines in my previous comment. The correction lines below:

    “That said, what one can posit is that returns are ergodic. However, ergodicity is very difficult to prove for dynamic systems, of which the financial world is most certainly one. Another confounding factor is that ergodicity is most usually associated with systems in statistical mechanics, where one’s scale of observations is close to Avogadro’s constant (6.023 x 10^23!), rather than only 13000 returns.

    1. Hi Emlyn,

      Thank you very much for taking the time to share your thoughts about the above data. We all have our favorite aspects of interesting data results. My favorite from the work I did above was suggested by my colleague Ron Rimkus. He suggested that by taking the ratio of the value of one dollar invested in a buy-and-hold strategy to the result of perfect market timing, you could get a sense of the market’s ability to predict the future. Essentially, and obviously, that ratio is zero.

      I did not report it in the above piece, but absolutely perfect bad market timing (being invested only on the down days) turns your one dollar into $0.02 x 10^-23!

      Emlyn and Ashok, you both may be interested in my most recent “Fact File” piece published today on The Enterprising Investor: http://blogs.cfainstitute.org/investor/2012/09/10/fact-file-the-size-of-the-market/

      With smiles!

      Jason

      1. Ashok says:

        Interesting points, Jason. I was convinced about the futility of market timing strategies after looking at the average of daily overlapping returns, which is very close to zero. If you analyse buy-and-hold strategies (by looking at increasing durations of buy and hold), you will be convinced that higher-duration returns statistically improve with time. The chance of loss, or VaR, improves with increasing duration.

  22. Ashok says:

    Wow, this discussion has been taken to another level with Emlyn’s comments. I am a nobody in advanced stats, but this opens up thinking.
    As for your comments on utilizing basic and advanced statistics in financial research, I am going to agree with you. The closer you reach toward self-actualization mode, the more disconnected you feel from the rest!
    Thanks much for your inputs.

    Regards
    Ashok, CFA

  23. Floyd Vest says:

    Dear Jason: I wrote an article for my students based on your “Fact File: S&P 500 Sigma Events” which includes data on monthly and annual returns. I can send you a copy if you wish. Dr. Floyd Vest, Retired Professor of Mathematics and Education, Mathematics Department, University of North Texas, 940-387-2137, 1103 Brightwood, Denton, TX 76209, [email protected]

    1. Hello Floyd,

      I would love to see it! I will reach out via e-mail.

      With smiles!

      Jason

  24. Mark Etwiz says:

    Interesting discovery by AI on predicting multi-sigma events

  25. Jack O'Brien says:

    Jason,

    Great piece! Any chance of revisiting and updating the data through 2021?

    Thanks,

    Jack

  26. Russell says:

    This exists as just a thought exercise at this time; I wouldn’t know how to begin measuring it!

    I’m starting to form the opinion that all stock prices are equally possible in all time periods, including 1 sigma and higher sigma events. So much for normal and other distributions!

    The reason we don’t see more higher sigma events (but still way more than any distribution probability would predict) is because WE make it that way – we want it to be normal and so it is. After all, the market is just two parties determining a price based on internal and external factors and personal opinions at that time. How does that fit into any distribution: normal, log normal, or whatever?

    Normal distribution, of course, is fine for marks in class or heights of US males, where there is no personal or human psychology involved, it just is what it is.

    But you gotta measure something right…

    What are your thoughts?

  27. Ana Carol says:

    Great post on the S&P 500 and sigma events! It’s interesting to see how the normal distribution holds for daily returns for the S&P 500, and how the overwhelming majority of daily returns fall within one standard deviation from the mean return. The graphs and breakdowns of sigma events by decade and expected daily occurrence are informative and well-presented. Congratulations on a well-researched and well-written article!
