Enterprising Investor
Practical analysis for investment professionals
06 April 2022

The EU Artificial Intelligence Act and Financial Services

Is artificial intelligence (AI) currently regulated in the financial services industry? “No” tends to be the intuitive reply.

But a deeper look reveals bits and pieces of existing financial regulations that implicitly or explicitly apply to AI — for example, the treatment of automated decisions in GDPR, algorithmic trading in MiFID II, algorithm governance in RTS 6, and many provisions of various cloud regulations.

While some of these statutes are very forward-looking and future-proof — particularly GDPR and RTS 6 — they were all written before the most recent explosion in AI capabilities and adoption. As a result, they are what I call “pre-AI.” Moreover, AI-specific regulations have been under discussion for at least a couple of years now, and various regulatory and industry bodies have produced high-profile white papers and guidance but no official regulations per se.

But that all changed in April 2021 when the European Commission issued its Artificial Intelligence Act (AI Act) proposal. The current text applies to all sectors, but as a proposal, it is non-binding and its final language may differ from the 2021 version. While the act strives for a horizontal and universal structure, certain industries and applications are explicitly itemized.

The act takes a risk-based “pyramid” approach to AI regulation. At the top of the pyramid are prohibited uses of AI: subliminal manipulation (deepfakes, for example), exploitation of vulnerable persons and groups, social credit scoring, real-time biometric identification in public spaces (with certain exceptions for law enforcement purposes), and so on. Below that are high-risk AI systems that affect basic rights, safety, and well-being, in such areas as aviation, critical infrastructure, law enforcement, and health care. Then come several types of AI applications on which the act imposes certain transparency requirements. Finally, there is the unregulated “everything else” category, covering, by default, more everyday AI solutions like chatbots, banking systems, social media, and web search.
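
To make the pyramid concrete, here is a minimal sketch in Python of how the tiers might be modeled. The tier names mirror the act's structure, but the example use cases and the classification helper are my own simplification, not language from the proposal.

    from enum import Enum

    class RiskTier(Enum):
        """Simplified tiers of the AI Act's risk pyramid (illustrative only)."""
        PROHIBITED = 1    # e.g., social credit scoring, subliminal manipulation
        HIGH_RISK = 2     # e.g., critical infrastructure, law enforcement
        TRANSPARENCY = 3  # applications subject only to disclosure duties
        MINIMAL = 4       # the unregulated "everything else" default

    # Hypothetical lookup table; the act itself enumerates use cases in its
    # annexes, not in a structure like this.
    EXAMPLE_TIERS = {
        "social_credit_scoring": RiskTier.PROHIBITED,
        "critical_infrastructure": RiskTier.HIGH_RISK,
        "web_search": RiskTier.MINIMAL,
    }

    def tier_for(use_case: str) -> RiskTier:
        # Anything not explicitly listed falls into the minimal tier,
        # mirroring the act's residual category.
        return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)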

While we all understand the importance of regulating AI in areas that are foundational to our lives, such regulations could hardly be universal. Fortunately, regulators in Brussels included a catchall, Article 69, which encourages vendors and users of lower-risk AI systems to voluntarily observe, on a proportional basis, the same standards that apply to high-risk systems.

Liability is not a component of the AI Act, but the European Commission notes that future initiatives will address liability and will be complementary to the act.

The AI Act and Financial Services

The financial services sector occupies a gray area in the act’s list of sensitive industries. This is something a future draft should clarify.

  • The explanatory memorandum describes financial services as a “high-impact” rather than a “high-risk” sector like aviation or health care. Whether this is just a matter of semantics remains unclear.
  • Finance is not included among the high-risk systems in Annexes II and III.
  • “Credit institutions,” or banks, are referenced in various sections.
  • Credit scoring is listed as a high-risk use case. But the explanatory text frames this in the context of access to essential services, like housing and electricity, and such fundamental rights as non-discrimination. Overall, this ties more closely to the prohibited practice of social credit scoring than to financial services per se. Still, the final draft of the act ought to clear this up.

The act’s position on financial services leaves room for interpretation. Currently, financial services would fall under Article 69 by default. The AI Act is explicit about proportionality, which strengthens the case for applying Article 69 to financial services.

The primary stakeholder functions specified in the act are “provider,” or the vendor, and “user.” This terminology is consistent with the AI-related soft laws, whether guidance or best practices, published in recent years. “Operator” is a common designation in AI parlance, and the act provides its own definition that covers providers, users, and all other actors in the AI supply chain. Of course, in the real world, the AI supply chain is much more complex: Third parties are providers of AI systems for financial firms, and financial firms are providers of the same systems for their clients.

The European Commission estimates the cost of AI Act compliance at €6,000 to €7,000 for vendors, presumably as a one-off per system, and €5,000 to €8,000 per annum for users. Of course, given the diversity of these systems, one set of numbers could hardly apply across all industries, so these estimates are of limited value. Indeed, they may become an anchor against which the actual costs of compliance in different sectors are compared. Inevitably, some AI systems will require such tight oversight of both vendor and user that the actual costs will run much higher, creating unnecessary dissonance with these estimates.
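
As a back-of-the-envelope illustration of how these per-system figures compound for a user firm, consider the sketch below; only the per-system estimates come from the Commission, while the portfolio size and time horizon are hypothetical.

    # User-side compliance cost, using the European Commission's per-system
    # estimates; the number of systems and the time horizon are hypothetical.
    user_annual_low, user_annual_high = 5_000, 8_000  # EUR per system per year

    systems, years = 10, 5

    low = systems * years * user_annual_low    # EUR 250,000
    high = systems * years * user_annual_high  # EUR 400,000
    print(f"Estimated user-side cost: EUR {low:,} to EUR {high:,}")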

Governance and Compliance

The AI Act introduces a detailed, comprehensive, and novel governance framework: The proposed European Artificial Intelligence Board would supervise the individual national authorities. Each EU member state can either designate an existing national body to take over AI oversight or, as Spain recently opted to do, create a new one. Either way, this is a huge undertaking. AI providers will be obliged to report incidents to their national authority.

The act sets out many regulatory compliance requirements that are applicable to financial services, among them the following (an illustrative tracking sketch appears after the list):

  • Ongoing risk-management processes
  • Data and data governance requirements
  • Technical documentation and record-keeping
  • Transparency and provision of information to users
  • Knowledge and competence
  • Accuracy, robustness, and cybersecurity
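
Mapped into an internal control framework, these obligations might be tracked with something as simple as the following sketch in Python; the field names, owners, and statuses are hypothetical, not terminology from the act.

    from dataclasses import dataclass, field

    @dataclass
    class Obligation:
        """One AI Act requirement mapped to an internal control (illustrative)."""
        name: str
        owner: str = "unassigned"  # hypothetical internal owner
        evidence: list[str] = field(default_factory=list)  # e.g., document links
        status: str = "open"  # open / in_progress / done

    register = [
        Obligation("Ongoing risk-management processes"),
        Obligation("Data and data governance"),
        Obligation("Technical documentation and record-keeping"),
        Obligation("Transparency and provision of information to users"),
        Obligation("Knowledge and competence"),
        Obligation("Accuracy, robustness, and cybersecurity"),
    ]

    # Example update as controls are assigned and work progresses.
    register[0].owner, register[0].status = "Model Risk", "in_progress"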

By introducing a detailed and strict penalty regime for non-compliance, the AI Act aligns with GDPR and MiFID II. Depending on the severity of the breach, the penalty might be as high as 6% of the offending company’s global annual revenue. For a multinational tech or finance company, that could amount to billions of US dollars. In fact, the AI Act’s sanctions occupy the middle ground between those of GDPR and MiFID II, whose fines max out at 4% and 10%, respectively.
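
For a sense of scale, a short sketch comparing the headline caps; the revenue figure is hypothetical, and actual penalties also depend on fixed-amount alternatives and each regime's precise definition of turnover.

    # Headline maximum fines as a share of global annual revenue.
    MAX_FINE_PCT = {"GDPR": 0.04, "AI Act (proposed)": 0.06, "MiFID II": 0.10}

    revenue_eur = 50_000_000_000  # hypothetical multinational: EUR 50 billion

    for regime, pct in sorted(MAX_FINE_PCT.items(), key=lambda kv: kv[1]):
        print(f"{regime}: up to EUR {revenue_eur * pct:,.0f}")
    # GDPR: up to EUR 2,000,000,000
    # AI Act (proposed): up to EUR 3,000,000,000
    # MiFID II: up to EUR 5,000,000,000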

What’s Next?

Just as GDPR became a benchmark for data protection regulations, the EU AI Act is likely to become a blueprint for similar AI regulations worldwide.

With no regulatory precedents to build on, the AI Act suffers from a certain “first-mover disadvantage.” However, it has been through thorough consultation, and its publication sparked energetic discussions in legal and financial circles, which will hopefully inform the final version.

One immediate challenge is the act’s overly broad definition of AI: The one proposed by the European Commission includes statistical approaches, Bayesian estimation, and potentially even Excel calculations. As the law firm Clifford Chance commented, “This definition could capture almost any business software, even if it does not involve any recognizable form of artificial intelligence.”

Another challenge is the act’s proposed regulatory framework. A single national regulator would have to cover all sectors. That could create a splintering effect wherein a dedicated regulator would oversee all aspects of certain industries except for AI-related matters, which would fall under the separate, AI Act-mandated regulator. Such an approach would hardly be optimal.

In AI, one size might not fit all.

Moreover, the interpretation of the act at the individual industry level is almost as important as the language of the act itself. Either existing financial regulators or newly created and designated AI regulators should provide the financial services sector with guidance on how to interpret and implement the act. These interpretations should be consistent across all EU member countries.

While the AI Act will become a legally binding hard law if and when it is enacted, unless Article 69 materially changes, its provisions will be soft laws, or recommended best practices, for all industries and applications except those explicitly listed. That seems like an intelligent and flexible approach.

With the publication of the AI Act, the EU has boldly gone where no other regulator has gone before. Now we need to wait — and hopefully not for long — to see what regulatory proposals are made in other technologically advanced jurisdictions.

Will they leave AI regulation to individual industries? Will their regulations promote democratic values or strengthen state control? Might some jurisdictions opt for little or no regulation? Will AI regulations coalesce into a universal set of global rules, or will they be “balkanized” by region or industry? Only time will tell. But I believe AI regulation will be a net positive for financial services: It will disambiguate the current regulatory landscape and hopefully help bring solutions to some of the sector’s most pressing challenges.

If you liked this post, don’t forget to subscribe to the Enterprising Investor.


All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.

Image credit: ©Getty Images / mixmagic


Professional Learning for CFA Institute Members

CFA Institute members are empowered to self-determine and self-report professional learning (PL) credits earned, including content on Enterprising Investor. Members can record credits easily using their online PL tracker.

About the Author(s)
Wojtek Buczynski, CFA

Wojtek Buczynski, CFA, FRM, is a finance professional focused on the strategy, regulation, and ethics of emerging technologies (cloud and AI) in the financial services industry. He is a graduate of the London Business School's Master’s in Finance program and has been a CFA charterholder since 2016. He works as a finance professional in London while pursuing a part-time PhD at Cambridge, researching the ethics and applications of AI in financial services. You can e-mail him at [email protected].
