Can Quantamental Save Active Investing?
“Investment is most intelligent when it is most businesslike.” — Ben Graham, The Intelligent Investor
It’s not easy being an active investor these days. More than 93% of active US equity managers underperformed their benchmark over the past 15 years. International and fixed-income managers have also underperformed. A third of all assets in the United States are now in passive funds, up from a fifth a decade ago. Freakonomics recently did a podcast on active management titled “The Stupidest Thing You Can Do with Your Money.”
Ouch!
Freakonomics cites an old study that found only 2% to 3% of mutual fund managers had enough skill to cover their costs. Perhaps the best managers move to hedge funds, where they enjoy fewer investment constraints and better pay? Sadly, this argument has lost steam as some of the most respected hedge fund managers underperformed or quit in 2017. Warren Buffett’s victory in his high-profile 10-year bet against a basket of hedge funds hasn’t helped either.
While money is fleeing active funds, quant strategies have attracted $1.5 trillion in assets and continue to grow. Following the money, some of the largest fundamental managers have tried to marry their approach with a quantitative framework. This is how the term “quantamental” was coined. Integration has not been easy, however, and several of these marriages have turned rocky.
Traditional fundamental managers are missing the point. Investors are fleeing because returns no longer justify the fees. Managers should not blame external factors like the long bull market, quantitative easing (QE), or the rise of exchange-traded funds (ETFs). Even if these were the prime culprits, they are far beyond the ability of managers to control.
Managers should instead take a long, critical look at their investment process. Instead of searching for ways to bolt on new technology, they should identify weaknesses in their process and fix them. Instead of starting with “alternative datasets,” they should start with in-house data on investment decisions. There are several reasons for this:
- In-house data is the only truly “proprietary” dataset — no other investor has it.
- In-house data offers a competitive advantage to large fundamental managers with many analysts and a long track record of decisions to analyze.
- In-house data provides the best chance to improve the manager’s “edge.” That’s a bigger prize than any single investment or strategy can offer.
Quantamental is a powerful way to assess the investment process with objectivity and speed. Machine learning and big data technologies are new, but the framework for improving a process doesn’t have to be reinvented. A simple, old-school tool like Six Sigma’s DMAIC (define, measure, analyze, improve, control) works just fine.
The first step is to define the purpose of the investment process. That sounds simple and obvious, but managers need a quantitative goal consistent with the product’s objective. Are they after relative or absolute returns? Over what time period? Are they minimizing the drawdown? Or are they maximizing the Sharpe, Sortino, or information ratio?
To avoid an academic detour, the goal can often be sourced from investor communications. For example, Greenlight Capital’s most recent quarterly report begins with the fund’s absolute returns and those of the S&P 500. This suggests answers to many of the questions above.
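To make the "define" step concrete, the sketch below computes three of the candidate objectives mentioned above from a series of periodic returns. It is a minimal illustration, not a prescription: the assumption of monthly data and the annualization convention are mine, not the article's.

```python
import numpy as np
import pandas as pd

def sharpe_ratio(returns: pd.Series, risk_free: float = 0.0, periods: int = 12) -> float:
    """Annualized Sharpe ratio from periodic (here, monthly) returns."""
    excess = returns - risk_free
    return np.sqrt(periods) * excess.mean() / excess.std(ddof=1)

def sortino_ratio(returns: pd.Series, target: float = 0.0, periods: int = 12) -> float:
    """Like Sharpe, but penalizes only downside deviation below the target."""
    excess = returns - target
    downside = np.sqrt(np.mean(np.minimum(excess, 0.0) ** 2))
    return np.sqrt(periods) * excess.mean() / downside

def information_ratio(returns: pd.Series, benchmark: pd.Series, periods: int = 12) -> float:
    """Active return per unit of tracking error versus a benchmark."""
    active = returns - benchmark
    return np.sqrt(periods) * active.mean() / active.std(ddof=1)
```

Whichever ratio the fund documents point to, wiring it in as an explicit, measurable target is what turns "define" from a slogan into a specification.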
The typical fundamental investment process can be divided into several steps to make measurement and analysis tractable. Each step consists of a set of decisions. In an optimal process, each decision adds value. The optimal may not be attainable — nobody’s perfect — but it is helpful as a mental model.
For example, during the idea sourcing stage, analysts allocate their limited attention across a wide funnel of ideas to decide which to pursue further. This is similar to a triage process. The analysts’ decisions add value if the “right” ideas are selected for further research and due diligence. A common error at this stage is to overweight the specifics of the analyzed investment and underweight the baseline performance associated with the type or “class” of investment. Daniel Kahneman describes this in Thinking, Fast and Slow. Consider the following puzzle from the book:
“Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail. Is Steve more likely to be a librarian or a farmer?”
Most respondents guess librarian based on the story presented in the question. However, male farmers outnumber male librarians 20 to 1 in the United States. Base rates matter. Steve is much more likely to be a farmer.
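A quick Bayesian back-of-the-envelope calculation shows why. Only the 20-to-1 base rate comes from the text; the likelihoods of the description fitting each group are assumptions chosen for illustration.

```python
# Prior: male farmers outnumber male librarians roughly 20 to 1.
p_librarian, p_farmer = 1 / 21, 20 / 21

# Assumed likelihoods that the "meek and tidy" description fits each group.
fit_librarian, fit_farmer = 0.80, 0.10

posterior_librarian = (fit_librarian * p_librarian) / (
    fit_librarian * p_librarian + fit_farmer * p_farmer
)
print(f"P(librarian | description) = {posterior_librarian:.0%}")  # about 29%
```

Even when the description is eight times more likely to fit a librarian, the base rate still leaves Steve roughly a 70% favorite to be a farmer.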
A quantamental or statistical approach combats this bias by providing “base rate” performance for the ideas or securities in the funnel. The chart below, for example, helps analysts concentrate on the most productive idea sources.
Other relevant base rates include those by security type (preferred vs. common, secured vs. unsecured), geography, sector, and fundamental characteristic (leverage, profitability, growth, valuation).
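Here is a minimal sketch, assuming the firm keeps an in-house log of every researched idea, of how such base rates might be tabulated. The file name and column names (source, excess_return) are illustrative assumptions.

```python
import pandas as pd

# One row per researched idea from the in-house decision log (assumed schema).
ideas = pd.read_csv("idea_log.csv")  # columns: source, sector, excess_return, ...

# Base rate of success and average excess return by idea source.
by_source = ideas.groupby("source")["excess_return"].agg(
    hit_rate=lambda r: (r > 0).mean(),  # share of ideas that beat the benchmark
    avg_excess="mean",
    n="size",
)
print(by_source.sort_values("avg_excess", ascending=False))
```

The same grouping works for any of the cuts listed above: security type, geography, sector, or fundamental characteristic.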
In addition to providing base rates, quantitative analysis works with a priori data, focusing on process rather than outcome. Here are three applications:
- Show analysts whether they are picking the best available ideas from their funnel. “Best” is defined quantitatively, measuring alignment with the product’s stated investment strategy. For example, a manager we worked with had “value” in the title of its flagship fund but chose “expensive” securities for its portfolio. Reconciling the discrepancy helped its analysts understand what to look for.
- Handicap which ideas are most likely to be accepted by the investment committee based on a priori characteristics (a rough sketch follows this list). If the committee has never accepted a telecom stock trading at greater than 30-times trailing earnings, for example, this is probably worth taking into account.
- Identify common patterns among securities the analysts select or reject for further analysis. The goal here is to surface potential bias at the individual, team, or organizational level. Not all biases are bad, but all should be intentional.
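As a rough sketch of the second application, the snippet below fits a simple logistic regression to a firm’s historical pitch log to estimate acceptance odds for new ideas. The scikit-learn approach, file names, and features are assumptions for illustration, not anyone’s actual model.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Assumed in-house log: one row per pitched idea, with a priori characteristics
# and a 0/1 flag for whether the investment committee accepted it.
pitches = pd.read_csv("pitch_log.csv")
features = ["trailing_pe", "leverage", "revenue_growth", "market_cap"]

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(pitches[features], pitches["accepted"])

# Estimated probability that the committee accepts each idea currently in the funnel.
funnel = pd.read_csv("current_funnel.csv")
funnel["p_accept"] = model.predict_proba(funnel[features])[:, 1]
print(funnel.sort_values("p_accept", ascending=False).head())
```

An analyst who sees a very low acceptance probability for a name can either save the work or come to the committee with an unusually strong argument.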
That’s the idea sourcing stage. But what about the opposite end of the process: portfolio construction? One of the key decisions here is how to size new positions. Fundamental managers typically size them in proportion to the conviction they have developed in the investment. More “promising” ideas are sized larger.
This makes sense in principle, but does it work in practice? Quantamental analysis helps evaluate whether the sizing decisions help or hurt performance and whether the conviction is warranted.
For the manager charted below, this portfolio construction framework is reasonably effective: better-performing ideas receive significantly larger allocations (t-statistic of 3.09). Some managers display little (or negative) correlation between size and performance. They have an opportunity to improve performance by switching to equal-size position targets for new ideas.
[Chart: Performance vs. Initial Size in Portfolio (2015)]
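Here is a sketch of the analysis behind a chart like the one above, assuming an in-house log of initial position weights and subsequent idea-level excess returns. The column names are illustrative, and the 3.09 figure comes from the manager’s own data, not from this code.

```python
import pandas as pd
from scipy import stats

# Assumed columns: initial_weight (position size at initiation) and
# excess_return (the idea's subsequent performance versus the benchmark).
positions = pd.read_csv("position_log.csv")

fit = stats.linregress(positions["initial_weight"], positions["excess_return"])
t_stat = fit.slope / fit.stderr  # t-statistic for the size/performance relationship
print(f"slope = {fit.slope:.3f}, t-stat = {t_stat:.2f}, p-value = {fit.pvalue:.3f}")
```

A t-statistic near zero, or a negative one, is the signal that conviction-based sizing is not paying for itself and that equal-size initial positions are worth considering.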
Quantamental analysis works throughout the investment process. Each stage raises questions that lend themselves to a specific quantitative analysis, similar to the two we reviewed.
The goal of this approach is not to replace human analysts with machines, but to leverage the strengths of each. We are better at asking questions. Machines can help with the answers.
All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.
Image credit: ©Getty Images/erhui1979
Your article today, April 19, was very good.
Diligence and logic are lacking with many advisors and portfolio managers.
Alon, it would be interesting to know your opinion on Ray Dalio’s “believability-weighted decisions” using internal data at Bridgewater and their app “Dots”. Also, what do you think about the Brier score as a measure of analysts’ performance?
Hi Alexei. I haven’t read Dalio’s Principles yet – it’s on my list. In general, it’s hard to argue with a process where you get a bunch of people to debate a point and arrive at a consensus. In a perfect world, the “believability weights” should be based on the facts marshaled in the argument rather than on the person doing the marshaling (i.e., don’t shoot the messenger), and the way one updates the “believability” weights might be modeled in a Bayesian framework (the more data, the less flexibility to update).
As for Brier scores, I find them helpful in assessing probabilistic predictions for binary outcomes, but outcomes in capital markets tend to be continuous, i.e., alpha over a certain period. Maybe there’s a Brier extension for continuous outcomes?
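For reference, the Brier score discussed above is just the mean squared error between predicted probabilities and realized 0/1 outcomes; a minimal sketch, with made-up example numbers:

```python
import numpy as np

def brier_score(forecasts, outcomes) -> float:
    """Mean squared error of probability forecasts against binary outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    forecasts, outcomes = np.asarray(forecasts, float), np.asarray(outcomes, float)
    return float(np.mean((forecasts - outcomes) ** 2))

# Example: an analyst's probability calls on three binary events.
print(brier_score([0.9, 0.7, 0.2], [1, 1, 0]))  # ~0.047
```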