# Rescaled Range Analysis: A Method for Detecting Persistence, Randomness, or Mean Reversion in Financial Markets

*Editor’s note: Thanks to the diligence of Armin Grueneich this post has been amended to reflect the addition of step #5, below, in the calculation of the rescaled range.*

Rescaled range analysis is a statistical technique designed to assess the nature and magnitude of variability in data over time. In investing, rescaled range analysis has been used to detect and evaluate the amount of persistence, randomness, or mean reversion in financial market time series data. Insight of this kind into financial data naturally suggests investment strategies.

Originally invented for the field of hydrology by Harold Edwin Hurst, the technique was developed to predict Nile River flooding in advance of the construction of the Aswan High Dam. The dam needed to fulfill multiple and divergent purposes, including serving as both a store of water to protect against drought for farmers downriver, and as flood protection for those same farmers during typical annual flooding. Rainfall levels in Central Africa were seemingly random each year, yet the Nile River flows seemed to show autocorrelation. That is, the flow in one time period seemed to influence flows in subsequent periods. Hurst needed to be able to see if there was a hidden long-term trend — statistically known as a *long-memory process* — in the Nile River data that might guide him in building a better dam for Egypt.

Does this sound familiar? A time series of varying levels that is seemingly random but in which it is suspected that there might also be a long-term, hidden trend. Not surprisingly, rescaled range analysis had its moment in the financial analysis sun in the mid-1990s, when chaos theory, as applied to financial markets, was a hot topic. Chaos theory is a branch of science that studies the interconnectedness of events that otherwise, on the surface, seem random.

Closely associated with rescaled range analysis is the Hurst exponent, indicated by *H*, also known as the “index of dependence” or the “index of long-range dependence.” A Hurst exponent ranges between 0 and 1, and measures three types of trends in a time series: persistence, randomness, or mean reversion.

- If a time series is persistent, with *H* > 0.5, then a future data point is likely to resemble the data points preceding it. So an equity with an *H* of 0.77 that has been up for the past week is more likely to be up next week as well, because its Hurst exponent is greater than 0.5.
- If the Hurst exponent of a time series is *H* < 0.5, then it is likely to reverse trend over the time frame considered. Thus, an equity with *H* = 0.26 that was up last month is more likely than chance to be down next month.
- Time series with Hurst exponents near 0.5 display a random (i.e., stochastic) process, in which knowing one data point provides no insight into predicting future data points in the series.
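These three regimes can be sketched as a small classifier. This is purely illustrative: the helper name and the 0.05 tolerance band around 0.5 are my own assumptions, not part of the method.

```python
def interpret_hurst(h, tol=0.05):
    """Classify a Hurst exponent into the three regimes described above.

    The tolerance band around 0.5 is an illustrative assumption; in
    practice, deciding how close to 0.5 counts as "random" is a
    judgment call for the analyst.
    """
    if h > 0.5 + tol:
        return "persistent"
    if h < 0.5 - tol:
        return "mean-reverting"
    return "random"

print(interpret_hurst(0.77))  # persistent
print(interpret_hurst(0.26))  # mean-reverting
print(interpret_hurst(0.49))  # random
```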

So what are the steps to conducting a rescaled range analysis and to estimating the Hurst exponent? As an instructional example, please reference the spreadsheet of the rescaled range analysis of daily return data for the S&P 500 Index from 3 January 1950 through 15 November 2012.

**Rescaled Range Analysis Steps**

1. **Choose your time series.** Do you want to analyze fluctuations in the yield curve? West Texas sweet crude? Apple (AAPL) or Google (GOOG) stock? Or the Dow Jones Industrial Average (DJIA)? Here I am going to select the S&P 500’s daily returns.

2. **Choose your ranges.** Rescaled range analysis requires analyzing the data over multiple lengths of time (i.e., ranges), chosen at the analyst's discretion. In the example of the S&P 500, there are 15,821 daily returns. So I chose the following ranges, all powers of two:

**a.** Size of range is the entire data series = one range of 15,821 daily returns.

**b.** Size of each range is 1/2 of the entire data series = 15,821 ÷ 2 = two ranges of either 7,911 or 7,910 daily returns.

**c.** Size of each range is 1/4 of the entire data series = 15,821 ÷ 4 = four ranges of either 3,956 or 3,955 daily returns.

**d.** Size of each range is 1/8 of the entire data series = 15,821 ÷ 8 = eight ranges of either 1,978 or 1,977 daily returns.

**e.** Size of each range is 1/16 of the entire data series = 15,821 ÷ 16 = sixteen ranges of either 989 or 988 daily returns.

**f.** Size of each range is 1/32 of the entire data series = 15,821 ÷ 32 = thirty-two ranges of either 495 or 494 daily returns.
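The halving scheme above can be sketched in Python (the function name is my own; `numpy.array_split` allows unequal pieces, which reproduces the "either/or" range sizes listed):

```python
import numpy as np

def split_into_ranges(returns, max_power=5):
    """Step 2 sketch: map the number of ranges (1, 2, 4, ..., 32)
    to the list of sub-arrays for that category of range."""
    ranges = {}
    for p in range(max_power + 1):
        k = 2 ** p
        # array_split tolerates a length that is not evenly divisible,
        # producing pieces that differ in size by at most one element
        ranges[k] = np.array_split(returns, k)
    return ranges

# A toy series standing in for the 15,821 S&P 500 daily returns:
cats = split_into_ranges(np.arange(15821))
print([len(a) for a in cats[2]])   # [7911, 7910]
```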

3. **Calculate the mean for each range.** For each of the ranges, calculate a mean per the formula below.

*m* = (*X*₁ + *X*₂ + … + *X*ₙ) ÷ *n*

*Note: In the above example of the S&P 500 there are 1 + 2 + 4 + 8 + 16 + 32 = 63 means calculated, one for each range.*

Where:

*s* = series (series 1 is the whole data series for the S&P 500, or 15,821 daily returns; series 5 is 16 ranges of either 989 or 988 daily returns)

*n* = the size of the range for which you are calculating the mean

*X* = the value of one element in the range

4. **Create a series of deviations for each range.** Create another time series of deviations using the mean for each range, per the formula below. *Note: In the case of the S&P 500, there will be six new “deviations from the mean” ranges, given the six categories of ranges chosen in Step 2 above (i.e., ranges a, b, c, d, e, and f).*

*Y*ᵢ = *X*ᵢ − *m*

Where:

*Y* = the new time series adjusted for deviations from the mean

*X* = the value of one element in the range

*m* = the mean for the range calculated in Step 3 above

5. **Create a series that is the running total of the deviations from the mean.** Now that you have a series of deviations from the mean for each range, you need to calculate a running total for each range’s deviations from the mean.

*y*ₜ = *Y*₁ + *Y*₂ + … + *Y*ₜ

Where:

*y* = the running total of the deviations from the mean for each series

*Y* = the time series adjusted for deviations from the mean
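Steps 3 through 5 for a single range can be sketched in a few lines of Python (a hedged illustration; `numpy` stands in for the spreadsheet formulas, and the function name is my own):

```python
import numpy as np

def cumulative_deviations(x):
    """Steps 3-5 for one range: mean, deviations, running total."""
    m = x.mean()          # Step 3: mean of the range
    Y = x - m             # Step 4: deviations from the mean
    y = np.cumsum(Y)      # Step 5: running total of the deviations
    return y

# The deviations sum to zero by construction, so the running total
# always ends at (numerically) zero:
x = np.array([0.01, -0.02, 0.03, -0.01])
print(cumulative_deviations(x))
```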

6. **Calculate the widest difference in the running total of deviations.** Find both the maximum and minimum values of the running total of deviations (from Step 5) for each range. Take the difference between the maximum and minimum in order to calculate the widest difference. *Note: For the S&P 500 example, there are 63 calculations, one for each of the 63 ranges.*

*R* = max(*y*) − min(*y*)

Where:

*R* = the widest spread in each range

*y* = the value of one element in the running total of the deviations from the mean

7. **Calculate the standard deviation for each range.** *Note: There will be 63 standard deviations, one for each range.*

8. **Calculate the rescaled range for each range in the time series.** This step creates a new measure for each range in the time series that shows how wide the range is, measured in standard deviations.

*R*/*S* = *R* ÷ *σ*

Where:

*R*/*S* = the rescaled range for each range in the time series

*R* = the widest spread created in Step 6 above

*σ* = the standard deviation for the range under consideration
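Steps 6 through 8 combine into a single rescaled-range calculation per range. A minimal sketch follows; note the assumption that the population standard deviation is intended (a sample standard deviation would change the numbers slightly):

```python
import numpy as np

def rescaled_range(x):
    """R/S for one range: widest spread of the running total of
    deviations (Steps 5-6) divided by the standard deviation of the
    raw values in the range (Step 7)."""
    y = np.cumsum(x - x.mean())   # running total of deviations
    R = y.max() - y.min()         # widest spread
    S = x.std()                   # population standard deviation (assumption)
    return R / S

x = np.array([0.01, -0.02, 0.03, -0.01, 0.02])
print(rescaled_range(x))
```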

9. **Average the rescaled range values within each category of ranges.** For each category, average the rescaled range (*R*/*S*) values of its ranges. Using the S&P 500 data as an example, we have the following *R*/*S* values for each of the four ranges of ~3,955 daily returns:

“Range 1/4”, part 1, *R*/*S*: 83.04

“Range 1/4”, part 2, *R*/*S*: 63.51

“Range 1/4”, part 3, *R*/*S*: 84.16

“Range 1/4”, part 4, *R*/*S*: 88.09

Average of the four *R*/*S* values for “Range 1/4” = (83.04 + 63.51 + 84.16 + 88.09) ÷ 4 = 79.70

For the S&P 500 we have the following values for the rescaled ranges:

Now that you have rescaled each range in the time series, you can calculate the Hurst exponent, *H*, that will summarize in one number the degree of persistence, randomness, or mean reversion in your time series.

**Calculating the Hurst Exponent Steps**

1. **Calculate the logarithmic values for the size of each region and for each region’s rescaled range.** For example, consider the above S&P 500 data:

2. **Plot the logarithm of the size (*x*-axis) of each series versus the logarithm of the rescaled range (*y*-axis).** This results in a graph that looks something like this one for the S&P 500:

**Rescaled Range Analysis of the S&P 500 (3 January 1950 to 15 November 2012)**

3. **Calculate the slope of the data to find the Hurst exponent.** *H* is the slope of the plot of each range’s log (*R*/*S*) versus each range’s log (size). For the S&P 500 for 3 January 1950 to 15 November 2012, *H* is 0.49. Recall that this means that the S&P 500 demonstrates randomness.

Knowing *H* suggests some hypothetical trading strategies. For example, stocks with *H* > 0.5 — that is, persistence — and positive price appreciation would be attractive to a growth manager wanting future capital appreciation. Stocks with *H* < 0.5 whose prices have been declining for some time, on the other hand, suggest an eventual price trend reversal to a value investor.

*Please note that the content of this site should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute.*

Photo credit: ©iStockphoto.com/ugurhan

I have read your write-up with great interest. Have you carried out the Hurst exponent calculations, and if so, for which stocks and markets? How can I use this for shares listed on the Colombo Stock Exchange? Just one more question: is this something like mean reversion?

Hello Hisham,

Thanks for your comment. Yes, the Hurst exponent calculations for the S&P 500 will appear in a subsequent post here on CFA Institute’s The Enterprising Investor blog.

In order to use rescaled range analysis for the Colombo Stock Exchange you simply need long time series of data. I would recommend daily closing prices over the course of at least a decade. Then follow the steps I describe in the above blog post. Also, there is a link in the above post that allows you to download an Excel spreadsheet that demonstrates each of the calculations.

A Hurst exponent of less than 0.5 suggests mean reversion. The lower the Hurst exponent the greater the mean reversion.

I hope that helps!

With smiles,

Jason

Hello Jason,

Very interesting work. I am preparing an article to publish. I was interested to know if you could run the Hurst technique on the S&p for me for specific time periods? Happy to provide you with more info, and happy to cite you should the results be supportive.

Paul

For those familiar with R, the Hurst exponent calculation is implemented in a few packages:

https://r-forge.r-project.org/scm/viewvc.php/pkg/PerformanceAnalytics/R/HurstIndex.R?view=markup&root=returnanalytics

http://www.inside-r.org/packages/cran/pracma/docs/hurst

Here’s an interesting discussion of its evolution over time:

http://www.r-bloggers.com/exploring-the-market-with-hurst/

Hi Paul,

Thank you for the kind offer, however I am going to decline and blame it on my lack of time.

If you follow the link in the article above – http://cfa.wpengine.netdna-cdn.com/investor/files/2013/01/SP-500-Rescaled-Range-Analysis.xlsx – you can download the spreadsheet that contains all of my data as well as all of the above calculations. Hopefully from that you can conduct the analysis for the time periods you need for your research.

With smiles!

Jason

Hello Jason

Your article and accompanying workbook is well done and very helpful.

I am testing a quant strategy using all listed ETFs in the US, and I was looking for a supplementary measure to help predict short-term price movements.

I will let you know if this technique works consistently across the 12 years that I am doing my backtesting.

Thanks again for your time in putting this together.

Best wishes

Savio

Hello Savio,

Thanks very much for your comments. I hope that you found this piece useful. And please do let us know what you find when applying ‘rescaled range analysis’ to your time series.

With smiles!

Jason

Excellent article, and very well explained. The attached example was really helpful too. Thank you very much.

Hi Jason.

Quick question. At the end of point 9 we have a matrix with region count, average data point count, and average rescaled range.

If I understand correctly, by using these data and logarithms we should be able to get the data in point one under “Calculating the Hurst Exponent Steps.”

But I don’t understand what I have to do to get 4.2 and 2.18 from 1, 15821, and 151.77.

Do you see where my problem is and can you help me?

Thank you.

Hi,

Can you upload the document that contains all of your data as well as all of the calculations?

Thanks

Hello Andris and Dani,

The downloadable spreadsheet has been a part of the post from the beginning. Click on the language in the post above that says, “the spreadsheet of the rescaled range analysis of daily return data for the S&P 500 Index from 3 January 1950 through 15 November 2012.” Once you open the spreadsheet you should be able to see all of the data – many thousands of daily returns for the S&P 500 – plus all of the calculations. I just downloaded it myself to confirm this is possible.

If that doesn’t work, please let me know and I will see about giving a more in-depth answer.

Thanks for reading!

Jason

Hi Jason.

Why do you use your calculations on a Returns column? What`s the reason not to use Adj Close column?

Thank you.

Andris.

In your opinnion, is there anything better out there than Hurst for determining trend?

Hi Jason,

Thank you for your clear explanation as to how to calculate the rescaled range values and the Hurst exponent.

I saw that you had included a link to an Excel spreadsheet that allowed to calculate the Hurst exponent, but the link you provided unfortunately no longer works.

Could you please update this link? I’m trying to calculate R/S values for different currency pairs, and it would be a big help to have a professional spreadsheet like yours to do the calculations.

Thanks,

Adrian

Hi Adrian,

Thank you for pointing out that the link did not work. I will endeavor to update the spreadsheet and ensure that the link is restored.

With smiles,

Jason

Hello Jason,

Thanks for the spreadsheet, very helpful. Just one query, Peters (1991) has the fractal as 1/H rather than 2-H. Is this subjective?

Thanks,

Aaron.

Hello Aaron,

I have not read Peters’ work since about 1995 so am not sure what he uses to estimate the fractal dimension. In preparing the above blog post I used several references that were in agreement with the method I described above. However, I have also found other folks using an entirely different method to estimate the fractal dimension. So agreement here is not unanimous.

I think that the important point here is the concept of a fractal dimension. Namely, that the geometry we were taught is not entirely descriptive of real-world phenomena.

I hope this helps!

Jason

That should read ‘fractal dimension’ rather than ‘fractal’

Why do you calculate y in step 5, but don’t use it for anything?

In the Wikipedia article you link to (http://en.wikipedia.org/wiki/Hurst_exponent) they use this cumulative sum (what you call “the running total”) in the calculation of R, but you use the deviations Y.

Hi Filip,

Thanks for your comment. Take a look at the spreadsheet that I provided to see how that series is utilized.

With smiles,

Jason

Hi,

Thank you for the information on this page.

I was wondering if you should shed any light on this graph:

Graph: http://www.bearcave.com/misl/misl_tech/wavelets/hurst/moving_hurst.jpg

From website: http://www.bearcave.com/misl/misl_tech/wavelets/hurst/

Say for a 15-day return, would you:

1) Just take the stock price every 15 days and calculate returns from that

2) or make the width of your 1/32 component above to be 15 days so the whole range is 480 trading days (15 x 32).

I presume it is 1) above otherwise it would be impossible to calculate a 2 day Hurst exponent using this method (as 1/32 would be 1.5hr!).

Anyway if you could clarify that would be great.

Thank you

Hi Tejay,

The decision of what ranges to use is entirely subjective and up to the analyst. Each analyst will be interested in persistence over time horizons unique to their individual analytical work, seeking a signal from amongst the noise. That said, Hurst developed rescaled range analysis to look at very long-term data on the level of the Nile River. But in a world of high-frequency trades being executed in picoseconds, a minute seems like an eternity. If pressed for a recommendation, I would say that if your consciousness can comprehend an insight for a chosen time scale (picoseconds all the way up to millennia) then you should be able to use rescaled range analysis. The conditional factor here is not the length of time, but whether or not there is meaningful data in dividing the time frame up into ever smaller bits of time.

Hope that helps!

Jason

Hi Jason.

The Hurst algorithm takes a time series F1, …, FN. A financial time series looks something like a Brownian random walk: http://upload.wikimedia.org/wikipedia/commons/d/da/Random_Walk_example.svg

But you have transformed the Brownian signal into something like Gaussian noise by (FN+1/FN) − 1. This is confusing for me. Could you explain why you did that? Why is that necessary?

The results are different for the same time series, so I assume it is important.

What type of input did Hurst use for the Nile?

Thank you.

Hello Kovalevskis,

So sorry for the delayed response on your question. Somehow my automatic notification of comments is broken on this post so just now saw your question. Apologies.

In answer to your question about why I calculated something the way I calculated it…I was following the procedure outlined by Hurst himself nearly 100 years ago.

Hurst’s input for the Nile, if I remember correctly, was the annual level of flooding, probably measured in meters. If I remember correctly he was trying to help build a dam for the river and he needed to know how high to make the dam in order to ensure there was no downstream flooding caused by having too short a dam height.

Cheers!

Jason

the sub-samples are taken without replacement.

There are some flaws in rescaled range analysis as noted in this paper

http://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=2220942

Sorry, here is the correct link to my rescaled range analysis paper

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2448648

Two questions…

In light of the comments of July 4, 2014, is the spreadsheet with data through November 2012 still correct?

I am updating the spreadsheet with more recent data, and I noticed that there are hardcoded values spread from cell J15829 to J16289. They are not used anywhere. Can you shed some light as to whether they are needed (I suspect not) or what function they perform?

Thanks in advance

Hello San Fran Sam,

To the best of my memory those hardcoded data are the values that were originally calculated for the maximums and minimums in the ranges in the first iteration of the spreadsheet. If you take a look at the top of this post describing rescaled range analysis you will see that someone pointed out an error in my calculations in the spreadsheet. In order to show the readers of this piece the difference that made, I hardcoded the old (mistaken) values to the right so as to eliminate confusion for those that had different versions of the spreadsheet.

I hope that helps!

Jason

Dear Jason,

Your explanation is very easy to understand. I am a master student and doing my final thesis which is studied about ” Dynamic characteristics analysis of Malaysian crude palm oil price using fractal theory”.

I am not so sure whether I need to use original data or return data in order to find Hurst exponent value. I have some problems related to the process of data analysis. Would you mind kindly to give your suggestion?

1. To find Hurst exponent, I should use original data or return data.

2. Do you know the code for plot the fluctuation of the price in matlab?

3. How to plot the log(R/S)n and longn and linear regression in matlab?

4. How to find V-statistic (Vn) in order to find memory term and plot the picture in matlab?

I really apologize to disturb your time.

I am looking forward to hearing from you as soon as possible.

Thanks and Best Regards,

Molida

Hello Molida,

Thank you so much for your comment – I am pleased that the explanation was easy to understand.

Rescaled range analysis and the Hurst exponent were developed for time series data related to the annual flooding cycle of the Nile River. This time series is very similar to the returns for financial assets. You should be able to use return data. In fact, if you have access to a Bloomberg terminal the function ‘KAOS’ calculates the Hurst exponent for a series of closing prices for assets or a return series (if I am not mistaken).

I do not use Matlab so unfortunately cannot help you. There may be a help group for Matlab on LinkedIn.

Yours, in service,

Jason

Hi

I am currently working on a project about emotion recognition .

I have a little information from Hurst exponent .

I want to use them in my project .

I want a program to extract Hurst exponent of the QRS complex using RRS(Rescaled Range Statistics) & FVS (Finite Variance Scaling) methods.

You can help me?

Hello Mahvash,

The only program I am aware of that calculates the Hurst exponent automatically is Bloomberg and its famous terminal. Specifically, the function ‘KAOS’ calculates the Hurst exponent for any time series.

Yours, in service,

Jason

there is a downloadable spreadsheet included in the data archive for this paper that you may be able to use.

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2448648

Hi Jason

Could you provide a similar Excel calculator for Hurst “fractional Brownian motion” and DFA?

Regards

Biplab

Hello, Jason

I think your work is very clear. However, I have some questions for choosing the range, for example, A=N/n, here A is the group we can get, N is the total length of the time series, n is the subseries’ length, like you put 2,4,8,16,32. This is one method to select the length of subseries, and do you have other ways to separate the time series? for instance, scale=2,4,6,8,16…(2a).

Anyway, I test both ways and get different Hurst exponents, can you give more information about the effects of subseries’ length on Hurst exponent?

Thank you very much!

Olivia

Hello Olivia,

The choice of the ranges is entirely arbitrary and is up to the analyst. One suggestion would be to use ranges that match your investment time horizon. For example, if your firm normally has a 2 year investment time horizon then make the ranges multiples or fractions of the 2 year time horizon. Or if your firm has an investment time horizon of 3 months then make your ranges multiples or fractions of that time horizon. One other suggestion would be to create an algorithm that looks for sustained Hurst exponents by utilizing thousands of different time horizons.

Yours, in service,

Jason

Hello,

Excellent article. There is a lot of information of this ratio, but this is pretty clear.

Can you please explain a bit more the investment horizon you explain in the previous comment? For example, if I work with a two year time horizon investment I should generate data for the previous two years and if it is mean reverting expect to reverse the next two years? or if it is mean reverting is probably going to revert quicker? For example, generate two years data and expect to revert in the next two months.

Do you know if there is any study about this?

Thank you very much. It is very helpful.

Andres

Hello Andres,

I’m glad that you find this article useful. In answer to your question, I was referring to step #2 in rescaled range analysis where the analyst selects her/his ranges. This choice is arbitrary and is up to the analyst.

I said “two years” in answer to the above question from Olivia because if an investor has a two year investment time horizon that means that their unbiased estimator of the future will be the preceding two years worth of data for the time series on which they want to conduct a rescaled range analysis. If your investment time horizon is 5 years then the unbiased estimate is the preceding 5 years of data. And so on.

The interpretation of the Hurst exponent is as you described it. If for the preceding two years an asset’s price has been down, and the Hurst exponent indicates mean reversion (i.e., H < 0.50), then an analyst would expect the price to move back to its long-term average.

As for your question about “quicker” (that is, if it is already mean reverting, do you expect it to revert back more quickly): I have no idea! : ) Given that the original use of rescaled range analysis (RRA) was to build a dam (i.e., the Aswan) that could contend with any scenario thrown at it, I would have to say that RRA does not say anything about second-order/second-moment/accelerating influences.

I do not know of a study that addresses these issues. Perhaps someone else on this forum knows the answer.

Yours, in service,

Jason

i wrote up a little article on “Methods for Estimating the Hurst Exponent of Stock Returns” that is just as applicable to other time series data. This link for the download button is

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2564916

Hi Jamal!

Thank you so much for sharing your work with our fellows!

Yours, in service,

Jason

Thank you so much for you quick answer.

I have another question. It looks a bit stupid, but just to clarify. In step 5, the Yt, I guess, it should be in absolute value, isn’t it?

For example, if I do the formula without absolute value for the following data: 1300, 1450, 1600, 1850 (mean = 1550).

The deviation without absolute value is 0.

The deviation with absolute value is 700 (the same value if you use AVEDEV formula x 4 in excel).

Thanks again and regards,

Hello Andres,

Not a stupid question at all. The correct method is to NOT use absolute values. Yes, at the end of every series they will sum to zero.

Yours, in service,

Jason

Hi dear

When I want to create a model for predicting a time series, does N^H give me the horizon of prediction?

If yes, which N do I use?

Is the N for the biggest H correct?

Hi.

Am a masters student in Kenya Studying Msc in applied Statistics.

I have followed the process and made my decision on the type of project.

Thanks

Hi ,

I am not able to open the page having spreadsheet. So, can you just help me on that. Actually I need the data that you used for calculation of Hurst exponent here, as I want to calculate ‘H’ using a software(matlab) and compare my results with yours. I’ll be sure then whether I am on the right track or not.

Hello NJ,

I just tested the ability of the spreadsheet to be downloaded using Chrome, Firefox, and Internet Explorer and had no troubles downloading it. If you are not using one of these three browsers then I would recommend downloading using one of them. If you are using one of those browsers maybe your service provider has a limit on the size of a download – the file is over 5MB in size.

Best wishes for success!

Yours, in service,

Jason

Hello Jason,

First of all thanks for replying. Coming to the problem, the moment I am clicking on the link I am getting a page of CFA Institute stating ‘Nothing Found’. I am using chrome and also my service provider’s limit is not restricted. If there is some other link that you can send or if you could just mail me the data I would be grateful to you.

My email id is jamwalnaina.er@gmail.com.

Thank you so much. You have really great talent.

I have a little question. If the calculated Hurst exponent is greater than one, what does it mean?

I understood:

0 < H < 0.5: mean reverting

H = 0.5: random walk

0.5 < H < 1: persistent

H > 1: ?

Hello John,

I believe that having a Hurst exponent greater than 1.0 is an impossibility because of how the number is calculated. But I could be wrong. Besides my explanation of how to calculate the Hurst, there are other websites that also demonstrate the technique, including Wikipedia. I would check your work relative to one of those sites. If you executed your work in a spreadsheet or in a maths software, then in all likelihood it is just a calculation error somewhere.

Yours, in service,

Jason

Hi dear Jason

What is the relation between long memory and the Hurst exponent?

Some methodological issues in re-scaled range analysis

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2695753

What a fair coin with memory looks like

Hello Jamal,

Very cool research! Thank you for providing the link. Did you actually flip a coin, or are these simply computer model “flips” where there is a 50:50 probability? Also, separately, how many trials does each “run” of coin-flipping represent?

Yours, in service,

Jason

hi jason

it’s an excel simulation of coin tosses

i can send you the excel file if you would like

the graph that you see represents 10,000 coin tosses

Hi Jamal,

I do not need to see the research, thank you for offering to send it on. I am not a computing expert, but wonder if the “memory” you documented is the result of the fact that most computers’ random number generators are not perfectly random. In other words, I wonder if the ‘memory’ would be present in actual, physical world, coin flips? Obviously, this would be an arduous task to replicate the number of trials.

Yours, in service,

Jason

i programmed the memory in to mimic persistence.

Wow! You are so thorough! Well done!

The Hurst exponent of ozone

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2748016

Hello Jason,

Loved your work, such a clear description of the R/S analysis is rare in the literature.

I’ve implemented this method to data that is sorted by depth rather than time (in geology time and depth are analogues) but my H seems to be too large by a factor of ten, i.e. while I’m expecting 0.73 my slope is 7.3.

I understand that this method looks at the range of the data, and shouldn’t really matter what units your range is….

I was wondering if you have any Ideas?

Hello Roozbeh,

Thank you for your kind words…I tried very hard to make the instructions clear. I am sorry that they have failed you in this instance : )

As you know, since you have built out your own R/S model, the calculation is very involved. So it is very difficult for me to evaluate what might be wrong with your calculation based on the limited amount of information you provided. My advice would be to download the spreadsheet that is associated with this article and to work through your calculation as compared with mine. I have a hunch that your difficulty may be in the very final step where slope is calculated based on the number of ranges you have selected.

Yours, in service,

Jason

Hi, Jason, thanks for the work on RRA. It is very simplified unlike other sources I came across. Just one question: is it possible to test a hypothesis based on your data or construct a confidence interval?

Hi Robert,

I am glad that the explanation of how to do rescaled range analysis was easy to understand for you. Yea!

Thank you for your question. I am no statistician, however, I do know that both hypothesis testing and confidence intervals rely on knowing the shape of the distribution of the data. After all, these techniques work because we know the mathematics that describe a distribution. We then make comparisons to these distributions, assuming they are the benchmark. So not knowing the shape of the distributions makes these techniques impossible.

The most common distribution is of course the standard normal distribution, which supposedly describes random processes. In rescaled range analysis only a Hurst exponent of 0.5 describes a random sample of data. Anything other than that is considered non-random. So the usual hypothesis testing and confidence intervals are not going to apply except in that special circumstance. Knowing the shape of the distributions of non-0.50 Hurst exponent samples is too tough for me to answer.

Does anyone in the audience have any information on distribution shapes for non-0.50 Hurst exponent data?

Yours, in service,

Jason

You’re right, Jason. No asymptotic distribution has been developed for RRA. Therefore it makes it difficult to conduct a hypothesis test, although Weron (2002) has constructed confidence intervals using Monte Carlo simulation.
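For readers who want to experiment, here is a hedged sketch of the Monte Carlo idea: simulate many random series (for which H should be near 0.5), estimate H for each with the R/S procedure described in the post, and take empirical quantiles of those estimates as an approximate confidence band. This illustrates the general approach only; it is not Weron's exact procedure.

```python
import numpy as np

def hurst_exponent(x, max_power=4):
    """R/S-based Hurst estimate as described in the post above."""
    sizes, rs = [], []
    for p in range(max_power + 1):
        chunks = np.array_split(x, 2 ** p)
        sizes.append(np.mean([len(c) for c in chunks]))
        rs.append(np.mean([np.ptp(np.cumsum(c - c.mean())) / c.std()
                           for c in chunks]))
    return np.polyfit(np.log10(sizes), np.log10(rs), 1)[0]

# Null distribution of H under pure randomness (200 simulated series):
rng = np.random.default_rng(42)
null_h = [hurst_exponent(rng.normal(size=2048)) for _ in range(200)]
lo, hi = np.quantile(null_h, [0.025, 0.975])
print(f"approximate 95% band under randomness: [{lo:.2f}, {hi:.2f}]")
```

An observed H outside this band would then be (informally) inconsistent with randomness at roughly the 5% level, for this series length and choice of ranges.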

I have been looking for an explanation like this for a couple of weeks now, thank you…

I have 2 Questions:

1) Step 5: Do we use it to calculate the standard deviation? Because I got the standard deviation from a library, I presume I could skip this step.

2) I tried clicking the Excel sheet, and it took me to a “Page not found” message. Is there a place I can find the spreadsheet?