Enterprising Investor
Practical analysis for investment professionals
28 September 2020

Human–Machine Collaboration and Model-Aware Investing

It’s been said that people don’t become wiser with age; they just become “more so.”

Whatever we did well — and more importantly, whatever we did poorly — is magnified. The same is true when we add computers and data to human decision making.

Algorithmic, machine learning, and artificial intelligence (AI) tools are increasingly ubiquitous in the investing world. They help assess investors’ risk tolerance in portfolio management and are applied to alternative data selection as well as to actual securities selection, among other tasks.


The debate about whether to “use AI” is thus a touch naïve: These tools will surface in even the most fundamentals-oriented discretionary buy-and-hold investor’s research process. The right focus, then, is on “model awareness”: How can we leverage the fact that machine learning, alternative data, and AI are not only widespread but also increasing in influence?

Model-Aware Investing

Model awareness is our term for thinking about machine learning, AI, large data sets, and the like as a single category: a spectrum of rule-, machine-, and data-driven processes driving the capital markets. To be model aware, every fiduciary, allocator, and manager should start with a holistic focus on the process question: Where is the most opportunity and risk?

It lies with people.

Remove human drivers and pedestrians from the roads and self-driving cars would perform flawlessly. The collaboration between humans and machines is the “lowest bandwidth” connection either one has. Think about how easily we can turn a doorknob and walk outside, or how effortlessly a computer can render a complex image. Compare that to how hard it is to represent our problem to a machine or to obtain feedback about its results. Human–machine collaboration is both the key to success and an opportunity vector to exploit.


Human–Machine Collaboration

The problem and the opportunity lie in how we view computer- and model-based approaches in the markets: They are either on our team or on the other team.

Humans and machines can audit each other’s approaches: Can we replicate existing human results with a machine-learned model? And if so, what do our standard tools tell us about the resulting model’s flaws?
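To make that audit concrete, here is a minimal sketch, using synthetic data and hypothetical feature names, of replicating a log of human buy/pass decisions with a deliberately simple surrogate model and then letting standard diagnostics show what the replicated process actually leans on:

```python
# A minimal sketch: fit a surrogate model to historical human decisions, then audit it.
# The data is synthetic and the feature names are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["value_score", "momentum_12m", "analyst_revision", "news_sentiment"]
X = rng.normal(size=(2_000, len(features)))

# Stand-in for a log of human buy (1) / pass (0) decisions; in practice these
# labels would come from the team's actual research records.
human_calls = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, human_calls, random_state=0)

# A deliberately simple, inspectable surrogate of the human process.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"machine replication of the human calls: {surrogate.score(X_test, y_test):.2f}")

# Standard tools then show what the replicated process relies on, and what it ignores.
imp = permutation_importance(surrogate, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>16}: {score:.3f}")
```

If the surrogate tracks the human calls closely but relies on only one or two inputs, that concentration is exactly the kind of flaw such an audit is meant to surface.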

We can “counter” the models that computers build and reliably predict relationships they will like or dislike.

The concept of “alpha decay” is real: Something is always coming to take our alpha generation away. We can turn that problem into an opportunity by treating humans and machines as adversaries and exploiting the flaws in their collaboration.

Adversarial machine learning is a suite of tools and techniques that seeks to overcome intelligent opposition. For example, a group of researchers used image-perturbing eyeglass frames to make sophisticated deep learning networks identify Reese Witherspoon as Russell Crowe.
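For intuition only, and not the researchers’ actual method, here is a toy version of such an attack: a small logistic model trained on synthetic data, and a fast-gradient-style perturbation that nudges one input along the sign of the loss gradient until a confident classification flips:

```python
# A toy adversarial perturbation against a small logistic model (synthetic data).
# Real attacks apply the same idea to far larger networks.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Train a tiny logistic classifier with plain gradient descent.
X = rng.normal(size=(500, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)

w, b = np.zeros(10), 0.0
for _ in range(2_000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Pick the most confidently and correctly classified example.
logits = X @ w + b
ok = (logits > 0) == (y > 0.5)
i = int(np.argmax(np.abs(logits) * ok))
x, label = X[i], y[i]

# Nudge the input along the sign of the loss gradient, sized just past the boundary.
grad = (sigmoid(x @ w + b) - label) * w        # d(loss)/d(input) at this example
eps = 1.2 * abs(x @ w + b) / np.sum(np.abs(w))
x_adv = x + eps * np.sign(grad)

print(f"original score:  {sigmoid(x @ w + b):.3f} (true label {int(label)})")
print(f"perturbed score: {sigmoid(x_adv @ w + b):.3f}")
```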

Even the most advanced, well-defined problem space can be countered. What can we learn from this? That it is critical to supervise and adjust models to address “intelligent opposition” behavior. A simple, actionable method is to create a model-driven “red team” for an existing discretionary approach or a human red team to counter a model- or rule-based strategy.

The “red team” concept is borrowed from espionage and military organizations. It means creating an internal opposing team to read the same facts, play devil’s advocate, and support the opposite conclusions. We all have our own informal versions of red teams: We worry about manipulations in GAAP / IFRS earnings vs. cash or about slippage from large block trades and modify our analyses and plans accordingly.

To formalize such a red team model, we might incorporate these opposing approaches, along with the additional “counterfactual” data points they generate, into our data sets and act as though an intelligent opponent were seeking to counter us. This echoes Nassim Taleb’s clarion call to think about how our methods would fare in “all possible worlds,” not just the one world we had in mind. This way we can build out strategies that profit from decay and disorder.
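As a rough illustration of that augment-and-retrain loop, the sketch below uses synthetic data and a hypothetical perturbation rule as a stand-in for whatever worlds a real red team would generate: score the existing model against the countered data, then retrain with those counterfactual points included:

```python
# A toy sketch of folding "counterfactual" data points into a training set and
# retraining, as if an intelligent opponent were targeting our long book.
# Synthetic data; the perturbation rule stands in for real red-team scenarios.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1_000, 5))
y = (X[:, 0] - X[:, 1] > 0).astype(int)          # 1 = names we would go long

base = LogisticRegression().fit(X, y)

# Red team: push each long in the direction that most hurts the current model.
eps = 0.6
longs = X[y == 1]
countered = longs - eps * np.sign(base.coef_[0])
all_long = np.ones(len(longs), dtype=int)

print(f"base model on original longs:    {base.score(longs, all_long):.2f}")
print(f"base model on countered longs:   {base.score(countered, all_long):.2f}")

# Blue team: retrain with the countered world represented in the data set.
X_aug = np.vstack([X, countered])
y_aug = np.concatenate([y, all_long])
robust = LogisticRegression().fit(X_aug, y_aug)

print(f"robust model on countered longs: {robust.score(countered, all_long):.2f}")
```

The same cycle can absorb whatever scenarios a red team produces, which is one practical way to test a strategy against more than the one world we had in mind.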


Hybrid Human–Machine Behaviors

After we separate ourselves from the machines and “audit” each other, we should remember that humans and machines are not really that separate. Machines often replicate human social biases. Human–machine collaboration may mitigate certain biases, but it can also worsen, create, or transform others:

  • Mitigate: Taking decisions out of human hands can alleviate or even solve some behavioral biases. For example, loss aversion — feeling losses more acutely than gains — is not a problem for a well-configured algorithm.
  • Worsen: How models are designed — their assumptions, parameters, hyperparameters, and interactions with people — may exacerbate some issues. Correlated volatility spikes across markets and asset classes are tightly tied to this amplification effect. Computers approach and retreat from the asymptotes of their parameters quickly, almost like a mathematical “reflecting boundary.”
  • Create: The continuing rise of, and reliance on, model- and rule-based processes and new data sources have led to new behavioral biases. “Hybrid” human–machine issues include black box effects: inexplicable outcomes — correlated volatility swings, for example — that develop out of nowhere and disappear just as mysteriously. Hidden machine–machine interactions can also emerge, such as “machine learning collusion,” wherein machines coordinate with one another without human direction.
  • Transform: Human behavioral dimensions take on new forms when they are bound to computing or data sets. The peak-end rule, in which the most intense point and the end of an experience are remembered more vividly than the rest, presents in novel ways when people and machines collaborate.

What can we do today? We can start by thinking about how this set of collaboration gaps affects our strategies. Can we “red team” or “counter” our models and human processes? What hybrid behavioral dimensions will alter our key assumptions about how humans view the world?



All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.


About the Author(s)
Uzi Hadar, CFA

Uzi Hadar, CFA, is a portfolio manager at Duo Reges Capital Management, a Seattle-based long-short quantamental investment management firm that seeks to exploit human-machine collaboration gaps created by rule- or model-based trading. Duo Reges, which means "two kings" in Latin, focuses on the “hard edges” of how humans and machines collaborate in the financial markets, both successfully and unsuccessfully. Its core strategy is to forecast the resulting long and short momentum by clustering market participants into “personas” to which it recommends securities they will like (longs) or dislike (shorts). Hadar has 20 years' experience as a seasoned alternative investments executive leading both liquid and illiquid strategies, including as a private equity sponsor and advisor. He also has a background in investment banking and has advised and collaborated extensively with emerging growth companies, industry leaders, alternative investment firms, family offices, and institutional investors. Hadar earned his MBA from the Darden School at the University of Virginia.

Andy Chakraborty

Andy Chakraborty is a portfolio manager at Duo Reges Capital Management. Chakraborty has 15 years of corporate investment and statistical model development experience as a financial and data science leader for Amazon, most recently as chief data scientist for AWS S3 and Amazon Retail Systems. He has held various corporate analytics and investment roles at Microsoft and Sprint. He also has five years of experience running complex semiconductor fab operations for Intel. Chakraborty earned his MBA from the Darden School at the University of Virginia.
