Enterprising Investor
Practical analysis for investment professionals
05 February 2026

Where AI Ends and Investment Judgment Begins

Artificial intelligence is reshaping how investment professionals generate ideas and analyze investment opportunities. Not only can AI now pass all three CFA exam levels, it can complete long, complex investment analysis tasks autonomously. Yet, striking as these advances are, a closer reading of current research, reinforced by Yann LeCun’s recent testimony to the UK Parliament, points to a more nuanced picture for professional investors and a more structural shift.

Across academic papers, company studies, and regulatory reports, three structural themes recur. Together, they suggest that AI will not simply enhance investor skill. Instead, it will reprice expertise, elevate the importance of process design, and shift competitive advantages toward those who understand AI’s technical, institutional, and cognitive constraints.

This post is the fourth installment in a quarterly series on AI developments relevant to investment management professionals. Drawing on insights from contributors to the bi-monthly newsletter, Augmented Intelligence in Investment Management, it builds on earlier articles to take a more nuanced view of AI’s evolving role in the industry.

Capability Is Outpacing Reliability

The first observation is the widening gap between capability and reliability. Recent studies show that frontier reasoning models can clear CFA Level I to III mock exams with exceptionally high scores, undermining the idea that memorization-heavy knowledge confers durable advantage (Columbia University et al., 2025). Similarly, large language models increasingly perform well across benchmarks for reasoning, math, and structured problem solving, as reflected in new cognitive scoring frameworks for AGI (Center for AI Safety et al., 2025).

However, a body of research warns that benchmark success masks fragility in real-world scenarios. OpenAI and Georgia Tech (2025) show that hallucinations reflect a structural trade-off: efforts to reduce false or fabricated responses inherently constrain a model’s ability to answer rare, ambiguous, or under-specified questions. Related work on causal extraction from large language models further indicates that strong performance in symbolic or linguistic reasoning does not translate into robust causal understanding of real-world systems (Adobe Research & UMass Amherst, 2025).

For the investment industry, this distinction is critical. Investment analysis, portfolio construction, and risk management do not operate with stable ground truths. Outcomes are regime-dependent, probabilistic, and highly sensitive to tail risks. In such environments, outputs that appear coherent and authoritative, yet are incorrect, can carry disproportionate consequences.

The implication for investment professionals is that AI risk increasingly resembles model risk. Just as backtests routinely overstate real-world performance, AI benchmarks tend to overstate decision reliability. Firms that deploy AI without adequate validation, grounding, and control frameworks risk embedding latent fragilities directly into their investment processes.

From Individual Skill to Institutional Decision Quality

The second theme is that AI is commoditizing investment knowledge while increasing the value of the investment decision process. Evidence from AI use in production environments makes this clear. The first large-scale study of AI agents in production finds that successful deployments are simple, tightly constrained, and continuously supervised. In other words, AI agents today are neither autonomous nor causally “intelligent” (UC Berkeley, Stanford, IBM Research, 2025). In regulated workflows, smaller models are often preferred because they are more auditable, predictable, and stable.
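To make “simple, tightly constrained, and continuously supervised” concrete, the sketch below shows one way such a workflow can be structured: an explicit action whitelist, a human approval gate before anything executes, and an audit trail of every step. It is a minimal, purely illustrative example; the class and function names are hypothetical and it is not drawn from the cited study’s implementations.

from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

# Tight constraint: the model may only trigger actions from an explicit whitelist.
# The stubs stand in for real research or routing functions.
ALLOWED_ACTIONS: dict[str, Callable[[str], str]] = {
    "summarize_filing": lambda ticker: f"[stub] summary of latest filing for {ticker}",
    "flag_for_analyst": lambda note: f"[stub] routed to analyst queue: {note}",
}

@dataclass
class AuditRecord:
    """Every step is logged so the workflow stays auditable and reviewable."""
    timestamp: str
    action: str
    payload: str
    approved_by_human: bool

@dataclass
class SupervisedWorkflow:
    audit_log: list[AuditRecord] = field(default_factory=list)

    def run_step(
        self,
        action: str,
        payload: str,
        human_approves: Callable[[str, str], bool],
    ) -> str | None:
        # Reject anything outside the whitelist: constraint, not autonomy.
        if action not in ALLOWED_ACTIONS:
            raise ValueError(f"Action '{action}' is not permitted in this workflow")
        # Continuous supervision: a human gate before the action executes.
        approved = human_approves(action, payload)
        self.audit_log.append(
            AuditRecord(
                timestamp=datetime.now(timezone.utc).isoformat(),
                action=action,
                payload=payload,
                approved_by_human=approved,
            )
        )
        if not approved:
            return None  # a blocked step is still logged for later review
        return ALLOWED_ACTIONS[action](payload)

if __name__ == "__main__":
    wf = SupervisedWorkflow()
    # In production the approval callback would be a review interface; here it is a stub.
    result = wf.run_step("summarize_filing", "ACME", human_approves=lambda a, p: True)
    print(result)
    print(wf.audit_log)

The design point is that the model never acts directly: everything routes through the whitelist and the approval callback, which is what keeps the workflow auditable and predictable in the sense the research describes.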


Behavioral research reinforces this conclusion. Kellogg School of Management (2025) shows that professionals under-use AI when its use is visible to supervisors, even when it improves accuracy. Gerlich (2025) finds that frequent AI use can reduce critical thinking through cognitive offloading. Left unmanaged, AI therefore introduces a dual risk of both under-utilization and over-reliance.

For investment organizations, the lesson is therefore structural: the benefits of AI accrue not to individuals but to investment processes. Leading firms are already embedding AI directly into standardized research templates, monitoring dashboards, and risk workflows. Governance, validation, and documentation increasingly matter more than raw analytical firepower, especially as supervisors adopt AI-enabled oversight themselves (State of SupTech Report, 2025).

In this environment, the traditional notion of the “star analyst” also weakens. Repeatability, auditability, and institutional learning may become the true sources of sustainable investment success. Such an environment requires a distinct shift in how investment processes are designed. In the aftermath of the Global Financial Crisis (GFC), investment processes were largely standardized with a strong focus on compliance.

The emerging environment, however, requires investment processes to be optimized for decision quality. This shift is significant in scope and difficult to achieve, because it depends on managing individual behavioral change as a foundational layer of organizational adaptive capacity. The industry has often sought to sidestep that challenge through impersonal standardization and automation, and it is now attempting to do so again through AI integration, mischaracterizing a behavioral challenge as a technological one.

Why AI’s Constraints Determine Who Captures Value

The third theme concerns the limitations of AI rather than the technological race itself. On the physical side, infrastructure limits are becoming binding. Research highlights that only a small fraction of announced US data center capacity is actually under construction, with grid access, power generation, and transmission timelines measured in years, not quarters (JPMorgan, 2025).

Economic models reinforce why this matters. Restrepo (2025) shows that in an artificial general intelligence (AGI)-driven economy, output becomes linear in compute, not labor. Economic returns therefore accrue to the owners of compute infrastructure: chips, data centers, energy, and the platforms that manage their allocation. As labor drops out of the growth equation, control of that infrastructure becomes the decisive factor in capturing value.
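The intuition can be made concrete with a stylized contrast; the functional forms below are an illustrative simplification, not Restrepo’s formal specification:

\[
\text{Labor-constrained economy: } Y = A\,L^{\alpha}K^{1-\alpha}, \quad 0<\alpha<1
\qquad\qquad
\text{AGI economy (stylized): } Y \approx A\,C
\]

Here \(L\) is human labor, \(K\) conventional capital, and \(C\) effective compute. Once output scales with \(C\) rather than \(L\), additional growth is bought with more chips, data centers, and energy, which is why value capture concentrates with the owners of those inputs rather than with suppliers of analytical labor.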

Institutional constraints also demand closer attention. Regulators are significantly expanding their own AI capabilities, raising expectations for explainability, traceability, and control in the investment industry’s use of AI (State of SupTech Report, 2025).

Finally, cognitive constraints loom large. As AI-generated research proliferates, consensus forms faster. Chu and Evans (2021) warn that algorithmic systems tend to reinforce dominant paradigms, increasing the risk of intellectual stagnation. When everyone optimizes on similar data and models, differentiation disappears.

For professional investors, widespread AI adoption elevates the value of independent judgment and process diversity by making both increasingly scarce.

Implications for the Investment Industry

AI’s growing role in automating investment workflows clarifies what it cannot remove: uncertainty, judgment, and accountability. Firms that design their organizations around that reality are more likely to remain successful in the decade ahead.

Taken together, the evidence suggests that AI will act as a differentiator rather than a universal uplift, widening the gap between firms that design for reliability, governance, and constraint, and those that do not.

At a deeper level, the research points to a philosophical shift. AI’s greatest value may lie less in prediction than in reflection—challenging assumptions, surfacing disagreement, and forcing better questions rather than simply delivering faster answers.


References

Almog, D., AI Recommendations and Non-instrumental Image Concerns, preliminary working paper, Kellogg School of Management, Northwestern University, April 2025

di Castri, S., et al., State of SupTech Report 2025, December 2025

Chu, J., and J. Evans, Slowed Canonical Progress in Large Fields of Science, PNAS, October 2021

Gerlich, M., AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking, Center for Strategic Corporate Foresight and Sustainability, 2025

Hendrycks, D., et al., A Definition of AGI, https://arxiv.org/pdf/2510.18212, October 2025

Kalai, A., et al., Why Language Models Hallucinate, OpenAI, arXiv:2509.04664, 2025

Mahadevan, S., Large Causal Models from Large Language Models, Adobe Research, https://arxiv.org/abs/2512.07796, December 2025

Patel, J., Reasoning Models Ace the CFA Exams, Columbia University, December 2025

Restrepo, P., We Won’t Be Missed: Work and Growth in the Era of AGI, NBER Chapters, July 2025

UC Berkeley, Intesa Sanpaolo, Stanford, IBM Research, Measuring Agents in Production, https://arxiv.org/pdf/2512.04123, December 2025




All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.

Image credit: ©Getty Images / Ascent / PKS Media Inc.


Professional Learning for CFA Institute Members

CFA Institute members are empowered to self-determine and self-report professional learning (PL) credits earned, including content on Enterprising Investor. Members can record credits easily using their online PL tracker.

About the Author(s)
Markus Schuller

Markus Schuller is the founder and managing partner of Panthera Group, a multi-award-winning firm recognized for pioneering the practical integration of Human and Artificial Intelligence into institutional investment processes. Panthera’s proprietary Decision GPS platform blends behavioral science and quantitative finance with machine intelligence to optimize investment decision-making. Alongside his entrepreneurial work, Schuller serves as an adjunct and visiting professor at EDHEC Business School, IE Business School, and the International University of Monaco, where he teaches in top-ranked Master in Finance programs. A former hedge fund manager, equity trader, and derivatives trader, he is a published researcher, keynote speaker, and regular contributor to leading academic and professional journals.

Michelle Sisto, PhD

Michelle Sisto, PhD, serves as associate dean of the EDHEC AI Center and associate professor of AI and Decision Sciences. She leads EDHEC's AI integration strategy across programs and spearheads research on AI-driven professional transformation. Her 30-year career in international higher education encompasses leadership roles including associate dean of Graduate Programs, MBA director, and strategic advisor to the Dean on Teaching and Learning. Sisto also serves on the boards of QTEM and the Responsible AI Consortium, helping shape ethical AI adoption in business education. Originally from Washington, D.C., she holds a BS in Mathematics from Georgetown University, an MSc in Mathematics from Université Côte d'Azur, and a PhD in Finance from EDHEC.

Wojtek Wojaczek, PhD

Wojtek Wojaczek, PhD, is a finance executive, educator, and ecosystem strategist. He serves as an adjunct professor at EM Lyon and a guest lecturer in entrepreneurship at Cambridge Judge Business School. A Fellow Chartered Accountant and graduate of the Cambridge Executive MBA, Wojaczek has held senior leadership roles at multinational firms including KPMG, Novartis, and The Adecco Group. He is the founder and director of the Innovation Policy Ignition Programme at Hughes Hall, University of Cambridge, an initiative that supports regional leaders in designing inclusive innovation action plans.

Franz Mohr

Franz Mohr serves as an economist at the Department for Innovation, Data Management and Financial Stability at the Austrian Financial Market Authority, where he specializes in monitoring systemic risks across derivatives, securities financing, and securitization markets. His expertise extends to pioneering data-driven supervisory approaches, leveraging big data technologies, advanced analytics, machine learning, and artificial intelligence.

Patrick J. Wierckx, CFA

Patrick J. Wierckx, CFA, is a Dutch investment professional with more than 25 years of experience in managing institutional equity portfolios. He has held senior positions, including head of Equities at one of the Netherlands' largest asset managers, and is the author of Investing in Hidden Monopolies: Why Customer Loyalty Creates Superior Moats and How You Can Profit. Wierckx is a CFA charterholder, an ESMA-registered Institutional Investment Advisor, and a member of CFA Institute. He holds a master’s degree in business economics.

Jurgen Janssens

Jurgen Janssens is a director at asUgo. He is a transformation specialist and holds various board memberships. Human-centered, digitally fluent, and internationally seasoned, Janssens specializes in shaping customer strategy for companies, social profit institutions, and philanthropic organizations. With hands-on experience across sectors such as energy, media, manufacturing, mobility, healthcare, and financial services, he operates from local to global scale (BE, LUX, FR, DE, CH, CZ, IT, PT, BRA and beyond). He leads strategic transformation and digital innovation programs, with a pragmatic, experience-based approach rooted in collaborative design and a sharp focus on the big picture. He is passionate about organizational evolution and social impact and is deeply engaged in NGO and philanthropic initiatives. He is an enthusiastic writer and AI aficionado.
