Practical analysis for investment professionals
26 November 2021

Machine Learning: Explain It or Bust

“If you can’t explain it simply, you don’t understand it.”

And so it is with complex machine learning (ML).

ML now measures environmental, social, and governance (ESG) risk, executes trades, and can drive stock selection and portfolio construction, yet the most powerful models remain black boxes.

ML’s accelerating expansion across the investment industry creates completely novel concerns about reduced transparency and how to explain investment decisions. Frankly, “unexplainable ML algorithms [ . . . ] expose the firm to unacceptable levels of legal and regulatory risk.”

In plain English, that means if you can’t explain your investment decision making, you, your firm, and your stakeholders are in deep trouble. Explanations — or better still, direct interpretation — are therefore essential.


Great minds in the other major industries that have deployed artificial intelligence (AI) and machine learning have wrestled with this challenge. Their experience changes everything for those in our sector who would favor computer scientists over investment professionals or who would throw naïve, out-of-the-box ML applications into investment decision making.

There are currently two types of machine learning solutions on offer:

  1. Interpretable AI uses less complex ML that can be directly read and interpreted.
  2. Explainable AI (XAI) employs complex ML and attempts to explain it.

XAI could be the solution of the future. But that’s the future. For the present and the foreseeable future, based on 20 years of quantitative investing and ML research, I believe interpretability is where you should look to harness the power of machine learning and AI.

Let me explain why.

Finance’s Second Tech Revolution

ML will form a material part of the future of modern investment management. That is the broad consensus. It promises to reduce expensive front-office headcount, replace legacy factor models, lever vast and growing data pools, and ultimately achieve asset owner objectives in a more targeted, bespoke way.

The slow take-up of technology in investment management is an old story, however, and ML has been no exception. That is, until recently.

The rise of ESG over the past 18 months and the scouring of the vast data pools needed to assess it have been key forces that have turbo-charged the transition to ML.

The demand for this new expertise and these new solutions has outstripped anything I have witnessed over the last decade, or since the last major tech revolution hit finance in the mid-1990s.

The pace of the ML arms race is a cause for concern. The apparent uptake of newly self-minted experts is alarming. That this revolution may be co-opted by computer scientists rather than the business may be the most worrisome possibility of all. Explanations for investment decisions will always lie in the hard rationales of the business.


Interpretable Simplicity? Or Explainable Complexity?

Interpretable AI, also called symbolic AI (SAI), or “good old-fashioned AI,” has its roots in the 1960s, but is again at the forefront of AI research.

Interpretable AI systems tend to be rules based, almost like decision trees. Of course, while decision trees can help us understand what has happened in the past, they are terrible forecasting tools and typically overfit the data. Interpretable AI systems, however, now have far more powerful and sophisticated processes for rule learning.

These rules are what should be applied to the data. They can be directly examined, scrutinized, and interpreted, just like Benjamin Graham and David Dodd’s investment rules. They are simple perhaps, but powerful, and, if the rule learning has been done well, safe.
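
To make the idea concrete, here is a minimal, hypothetical sketch of rule learning in Python: a deliberately shallow decision tree is fit to a toy set of stock fundamentals and its learned rules are printed for direct inspection. The data, features, and thresholds are illustrative assumptions only, and the rule-learning processes referred to above are more sophisticated than a plain decision tree.

    # A minimal sketch of directly readable rules, using a shallow decision
    # tree as a stand-in for more sophisticated rule learning. All data here
    # is hypothetical and for illustration only.
    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy training set: simple fundamentals and a binary "outperformed" label.
    X = pd.DataFrame({
        "price_to_book": [0.8, 2.5, 1.1, 3.0, 0.6, 1.9],
        "debt_to_equity": [0.4, 1.2, 0.3, 0.9, 0.5, 1.5],
    })
    y = [1, 0, 1, 0, 1, 0]  # 1 = outperformed the benchmark, 0 = did not

    # Keeping the model shallow keeps every rule readable.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The learned rules can be examined and scrutinized line by line,
    # much like a Graham-and-Dodd style screen.
    print(export_text(tree, feature_names=list(X.columns)))

The point is not this particular model but the form of the output: explicit if/then rules that a portfolio manager or compliance officer can read without any further explanatory layer.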

The alternative, explainable AI, or XAI, is completely different. XAI attempts to find an explanation for the inner workings of black-box models that are impossible to directly interpret. For black boxes, inputs and outcomes can be observed, but the processes in between are opaque and can only be guessed at.

This is what XAI generally attempts: to guess and test its way to an explanation of the black-box processes. It employs visualizations to show how different inputs might influence outcomes.

XAI is still in its early days and has proved a challenging discipline. These are two very good reasons to defer judgment on it and to go interpretable when it comes to machine learning applications.


Interpret or Explain?

Image depicting different artificial intelligence applications

One of the more common XAI applications in finance is SHAP (SHapley Additive exPlanations). SHAP has its origins in game theory’s Shapley values and was fairly recently developed by researchers at the University of Washington.

The illustration below shows the SHAP explanation of a stock selection model that results from only a few lines of Python code. But it is an explanation that needs its own explanation.

It is a super idea and very useful for developing ML systems, but it would take a brave PM to rely on it to explain a trading error to a compliance executive.


One for Your Compliance Executive? Using Shapley Values to Explain a Neural Network

Note: This is the SHAP explanation for a random forest model designed to select higher alpha stocks in an emerging market equities universe. It uses past free cash flow, market beta, return on equity, and other inputs. The right side explains how the inputs impact the output.
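
For context, the following is a minimal, hypothetical sketch of the “few lines of Python” that typically sit behind such a SHAP summary. The features, data, and model below are illustrative assumptions, not the model behind the figure above.

    # A hypothetical sketch of producing a SHAP explanation for a black-box
    # stock selection model. Data, features, and model are illustrative only.
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestRegressor

    # Toy cross-section: rows are stocks, columns are inputs like those in the note.
    features = ["free_cash_flow_yield", "market_beta", "return_on_equity"]
    X = pd.DataFrame(
        [[0.08, 1.1, 0.15],
         [0.03, 0.9, 0.22],
         [0.12, 1.4, 0.05],
         [0.05, 1.0, 0.18]],
        columns=features,
    )
    y = [0.04, 0.07, -0.02, 0.03]  # illustrative forward excess returns

    # Fit the black box, then attribute its predictions to the inputs
    # using Shapley values.
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Summary plot: which inputs push the predicted alpha up or down.
    shap.summary_plot(shap_values, X)

Even here, the output is an attribution of the model’s behavior rather than the model itself: the plot still needs its own explanation before it can stand in front of a compliance executive.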

Drones, Nuclear Weapons, Cancer Diagnoses . . . and Stock Selection?

Medical researchers and the defense industry have been exploring the question of explain or interpret for much longer than the finance sector. They have achieved powerful application-specific solutions but have yet to reach any general conclusion.

The US Defense Advanced Research Projects Agency (DARPA) has conducted thought-leading research and has characterized interpretability as a cost that hobbles the power of machine learning systems.

The graphic below illustrates this conclusion for various ML approaches. In this analysis, the more interpretable an approach, the less complex and, therefore, the less accurate it is assumed to be. That would certainly be true if complexity were synonymous with accuracy, but the principle of parsimony, and some heavyweight researchers in the field, beg to differ. This suggests the right side of the diagram may better represent reality.


Does Interpretability Really Reduce Accuracy?

Chart showing differences between interpretable and accurate AI applications
Note: Cynthia Rudin states accuracy is not as related to interpretability (right) as XAI proponents contend (left).

Complexity Bias in the C-Suite

“The false dichotomy between the accurate black box and the not-so accurate transparent model has gone too far. When hundreds of leading scientists and financial company executives are misled by this dichotomy, imagine how the rest of the world might be fooled as well.” — Cynthia Rudin

The assumption baked into the explainability camp — that complexity is warranted — may be true in applications where deep learning is critical, such as predicting protein folding. But it may not be so essential in other applications, stock selection among them.

An upset at the 2018 Explainable Machine Learning Challenge demonstrated this. It was supposed to be a black-box challenge for neural networks, but superstar AI researcher Cynthia Rudin and her team had different ideas. They proposed an interpretable — read: simpler — machine learning model. Since it wasn’t neural net–based, it did not require any explanation. It was already interpretable.

Perhaps Rudin’s most striking comment is that “trusting a black box model means that you trust not only the model’s equations, but also the entire database that it was built from.”

Her point should be familiar to those with backgrounds in behavioral finance. Rudin is recognizing yet another behavioral bias: complexity bias. We tend to find the complex more appealing than the simple. Her approach, as she explained at the recent WBS webinar on interpretable vs. explainable AI, is to use black-box models only as an accuracy benchmark against which to develop interpretable models of similar accuracy.
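
A hypothetical sketch of that workflow in Python: a black-box model is fit purely to establish an accuracy benchmark, and a much simpler, directly interpretable model is then checked against it. The dataset and models are illustrative assumptions, not Rudin’s own examples.

    # A hypothetical sketch of benchmarking: fit a black box only to see how
    # much accuracy is on the table, then check how close a simple,
    # interpretable model gets. Data and models are illustrative only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # Black box: used as an accuracy benchmark, not as the deployed model.
    benchmark = cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=5).mean()

    # Interpretable candidate: a sparse logistic regression whose
    # coefficients can be read and audited directly.
    simple = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    candidate = cross_val_score(simple, X, y, cv=5).mean()

    print(f"black-box benchmark accuracy: {benchmark:.3f}")
    print(f"interpretable model accuracy: {candidate:.3f}")
    # If the gap is small, the extra complexity buys little and the
    # interpretable model can be used in its place.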

The C-suites driving the AI arms race might want to pause and reflect on this before continuing their all-out quest for excessive complexity.


Interpretable, Auditable Machine Learning for Stock Selection

While some objectives demand complexity, others suffer from it.

Stock selection is one such example. In “Interpretable, Transparent, and Auditable Machine Learning,” David Tilles, Timothy Law, and I present interpretable AI as a scalable alternative to factor investing for stock selection in equities investment management. Our application learns simple, interpretable investment rules using the non-linear power of a simple ML approach.

The novelty is that it is uncomplicated, interpretable, scalable, and could — we believe — succeed and far exceed factor investing. Indeed, our application does almost as well as the far more complex black-box approaches that we have experimented with over the years.

The transparency of our application means it is auditable and can be communicated to and understood by stakeholders who may not have an advanced degree in computer science. XAI is not required to explain it. It is directly interpretable.

We were motivated to go public with this research by our long-held belief that excessive complexity is unnecessary for stock selection. In fact, such complexity almost certainly harms stock selection.

Interpretability is paramount in machine learning. The alternative is a complexity so circular that every explanation requires an explanation for the explanation ad infinitum.

Where does it end?

One to the Humans

So which is it? Explain or interpret? The debate is raging. Hundreds of millions of dollars are being spent on research to support the machine learning surge in the most forward-thinking financial companies.

As with any cutting-edge technology, false starts, blow ups, and wasted capital are inevitable. But for now and the foreseeable future, the solution is interpretable AI.

Consider two truisms: The more complex the matter, the greater the need for an explanation; the more readily interpretable a matter, the less the need for an explanation.


In the future, XAI will be better established and understood, and much more powerful. For now, it is in its infancy, and it is too much to ask an investment manager to expose their firm and stakeholders to the chance of unacceptable levels of legal and regulatory risk.

General purpose XAI does not currently provide a simple explanation, and as the saying goes:

“If you can’t explain it simply, you don’t understand it.”



All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.

Image credit: ©Getty Images / MR.Cole_Photographer



About the Author(s)
Dan Philps, PhD, CFA

Dan Philps, PhD, CFA, is head of Rothko Investment Strategies and is an artificial intelligence (AI) researcher. He has 20 years of quantitative investment experience. Prior to Rothko, he was a senior portfolio manager at Mondrian Investment Partners. Before 1998, Philps worked at a number of investment banks, specializing in the design and development of trading and risk models. He has a PhD in artificial intelligence and computer science from City, University of London, a BSc (Hons) from King’s College London, is a CFA charterholder, a member of CFA Society of the UK, and is an honorary research fellow at the University of Warwick.

4 thoughts on “Machine Learning: Explain It or Bust”

  1. David Botbol says:

    Many thanks for this well informed post.
    Could we say that the slow pace of AI adoption may also be a testimony to us – financial pros – having learned at least one lesson from “when genius failed” : “If you can’t explain it simply, you don’t understand it” ?

    1. Dan Philps says:

      Thanks for your comments David. I partly agree, but there has also been… 1) inertia in the face of the dramatic regime change regarding available investment information, and the tech to deal with it (ie ML); 2) a fear of style drift, as many quants (specifically) have intricately tied themselves to linear factor based investing, a fundamentally different approach to most ML.
      (Separately, see our Nov 2021 JFDS paper linked above for more on the relative advantages of interpretable-AI vs factor based investing.)

  2. Paul OBrien says:

    This all makes sense but leaves me with a question: What do you mean by “explain” and “understand”? Sure, a human can “explain” why they made a decision, but you cannot verify that they are giving an accurate explanation. The human mind is the ultimate black box. Are you aiming to hold AI to higher standard than you hold people?

  3. Dan Philps says:

    Thanks Paul. Can I take a different angle on this…

    The acid test is, when things go wrong, will your explanation be sufficient for your stakeholders to understand why?

    Stakeholder knowledge, the complexity of what you’re trying to explain, and your ability to explain it, are the 3 key parameters.

    The explosion in data we’ve seen in the last 10 years and the revolution in computing power that now allows us to draw deeper inferences from it (ie AI), means we need some complexity, but my message is, keep it simple where possible, and come up with better explanations. You could also better educate your stakeholders (the final parameter), but I gather the CFA Inst have that covered!
