Do Returns Matter More When You Watch? The Reason to “Monitor” a Fund Before Investing


When I was four, my grandmother would often supervise me at mealtime. Like many grandmothers, she believed her mission was to get me to finish all the food on my plate whether I was hungry or not. First, she would let me eat on my own. Then, when I’d try to leave the table, she’d arch an eyebrow and ask where I was going.

“I’m done eating,” I’d reply.
“But your plate is still full!” she would say. It wasn’t.
“No — see, I ate some of the oatmeal.”
“I didn’t see.” She would sit next to me and fold her hands. “Show me.”

At that point, no evidence I could muster would be enough to convince my grandmother. She would have to witness my eating firsthand.

Institutional investors behave in a similar way to my grandmother when selecting fund managers. After picking a manager for a shortlist, presumably based on the manager’s long-term track record, an institutional investor will “monitor” the manager’s performance for six months, a year, or longer before making the investment. As a fund manager, I used to find this behavior puzzling and a little frustrating. What exactly are they looking for? If the manager performs poorly during the relatively brief monitoring period, does this tell the institutional investor more than the long-term track record did before the monitoring started? In other words, do returns matter more when an investor is watching?

To answer, let’s take a short mathematical detour. Suppose you found a manager that outperformed its benchmark (say, the S&P 500 Index) in seven of the last eight years. This manager, let’s call it Pricey Capital Advisers, has accumulated a certain mystique and not a little glamor because of its exceptional track record. Pricey has high-profile clients and high fees to match. But are the fees worth it? What are the chances Pricey achieved this performance through sheer luck (i.e., without any superior skill)?

Let’s make the simplifying assumption that a manager without skill (we’ll call it Random Capital) can outperform its benchmark in any given year with a probability of 50%.¹ Random’s probability of outperforming eight years in a row is about 0.39% (0.5^8). Its probability of outperforming in exactly seven of the last eight years is about 3.13% (8 × 0.5^8).² Therefore, the probability that Random outperformed in at least seven of the last eight years is 3.52% (0.39% + 3.13%). That is, Random has only about a 3.5% chance of matching Pricey’s track record. Put another way, it would be quite unlikely for a manager with no skill at all to look as good as Pricey does. That’s a very low chance, so you feel pretty good about Pricey so far.
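
For readers who want to check the arithmetic, here is a minimal Python sketch of the same coin-flip calculation. It assumes, as in the text, that an unskilled manager beats its benchmark in any given year with an independent 50% probability; the variable names are purely illustrative.

```python
# Sketch of the single-manager arithmetic above, assuming each year is an
# independent coin flip with a 50% chance of beating the benchmark.
from math import comb

p = 0.5   # assumed per-year probability that an unskilled manager outperforms
n = 8     # years in the track record

p_eight_of_eight = p ** n                    # 0.5^8, about 0.39%
p_exactly_seven = comb(n, 7) * p ** n        # 8 equally likely scenarios, about 3.13%
p_at_least_seven = p_eight_of_eight + p_exactly_seven   # about 3.52%

print(f"8 of 8 years:    {p_eight_of_eight:.2%}")
print(f"exactly 7 of 8:  {p_exactly_seven:.2%}")
print(f"at least 7 of 8: {p_at_least_seven:.2%}")
```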

Now let’s add a fact: suppose you picked Pricey because it had the best eight-year performance relative to its benchmark on a list of 100 “top” managers profiled in an article. Does this change the odds? In fact, it does. Suppose none of the 100 top managers has any skill. What are the odds that at least one of them would beat its benchmark in at least seven of the last eight years? Random has a 3.52% chance of achieving this result, so it has a 96.48% (1 – 3.52%) chance of not achieving it. In a group of 100 Randoms, the chance that none of them achieves the desired result is 2.79% (96.48%^100). So the chance that at least one of them achieves it is 97.2% (1 – 2.79%).³ In other words, a track record like Pricey’s is almost exactly what we would expect to find even if every manager on the list were just another Random.
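
Extending the same sketch, under the same coin-flip assumption, to a field of 100 managers shows the effect of the selection step:

```python
# Chance that at least one of 100 unskilled managers would post a record of
# at least 7 outperforming years out of 8 purely by luck.
p_record = 0.03515625            # at-least-7-of-8 probability from the sketch above
n_managers = 100

p_none = (1 - p_record) ** n_managers   # about 2.79%: no unskilled manager matches the record
p_at_least_one = 1 - p_none             # about 97.2%: at least one does

print(f"no lucky match among {n_managers} managers: {p_none:.2%}")
print(f"at least one lucky match: {p_at_least_one:.1%}")
```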

What’s happened here? We did not change Pricey’s track record, only the context of the manager selection process. By picking the best-performing manager from a large group, we have introduced a powerful selection bias that renders historical performance all but useless as a measure of skill. One way to reduce this bias is to monitor the manager after putting it on the shortlist. The monitoring period lies outside the window used to select Pricey, so an unskilled Pricey has only a 50% chance of beating its benchmark in each additional year we watch, regardless of its past record. If we monitor Pricey and it continues to beat its benchmark the following year, the chance that its entire performance was generated through sheer luck drops by half, from 97.2% to 48.6%. We might do even better by measuring quarterly rather than annual performance: more frequent measurements may allow us to reduce the bias more quickly during the monitoring period, as long as the measurements are aligned with the time horizon of the investment strategy. So monitoring is not quite as unreasonable as it appears. Perhaps my grandmother had a point after all….
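
A rough sketch of how monitoring chips away at that 97.2% figure, under the same assumptions: each additional period of outperformance by the shortlisted manager halves it, because an unskilled manager’s result during monitoring is independent of the track record that got it shortlisted. Reading a “period” as a quarter rather than a year corresponds to the more frequent measurement mentioned above.

```python
# How the chance that the observed history could be pure luck shrinks as the
# shortlisted manager keeps outperforming during the monitoring period.
p_luck = 1 - (1 - 0.03515625) ** 100   # about 97.2% before any monitoring

for periods in range(5):
    print(f"after {periods} monitored periods of outperformance: {p_luck * 0.5 ** periods:.1%}")
```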


1. This assumption is probably heroic because it ignores fees, transaction costs, and behavioral biases that would likely make an unskilled manager’s security selection considerably worse than random.



2. The single year of underperformance could have happened in any one of the last eight years, so there are eight scenarios, each with a probability of 0.5^8. More generally, the probability of outperforming in exactly k of n independent years can be calculated with the binomial distribution: C(n, k) × 0.5^n when the chance of outperforming in each year is 50%.



3. Another big and probably unwarranted assumption here is that each manager’s performance is independent of the others’.




