Views on improving the integrity of global capital markets
17 July 2018

Artificial Intelligence: The Future Is Human

Posted In: Fintech

In our previous blog on artificial intelligence, we suggested that the current wave of hype around deep learning, machine learning, and artificial intelligence (AI) may well be the culmination of a process that began in the 1980s rather than the dawn of something truly revolutionary. This article in The Spectator on the dying dream of driverless cars provides further impetus to explain why the current wave of AI hype is about to break.

Intelligence for me but not for thee

The lack of any coherent or precise definition of AI encourages the same kind of hype bubbles we have seen with “fintech” and “blockchain”: all three concepts can mean almost anything depending on the interpretation. The best-known attempt to define AI, the Turing test, requires that a computer be able to fool a human into thinking he or she is interacting with another human. As Filip Piekniewski notes in his excellent blog, this has caused AI research to be framed as the solution to a game in which a human is the judge of success. Hence, deep-learning algorithms that can play the game Go are deemed artificially intelligent because they beat humans at a human-designed game; animals, however, are not considered intelligent even though they are clearly a miracle of the universe.

It is probably fair to say that current AI research is mostly deep-learning research focused on solving some finite-domain task or game. Image recognition, natural-language processing, and driverless cars are all instances of deep-learning algorithms solving for an outcome (e.g., dog or not dog, Hello Siri, do not crash into pedestrians). As Piekniewski explains, AI research typically involves verbalizing the rules of some game designed to mimic an aspect of real-world human activity (e.g., driving) and then running deep-learning algorithms to “solve” that game.
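
To make the “finite-domain game” framing concrete, here is a minimal sketch, not taken from any of the research discussed above, of the “dog or not dog” setup in PyTorch. The network architecture, the random placeholder tensors standing in for labelled images, and the training loop are all illustrative assumptions; the point is only the shape of the approach: one network optimised to win one narrowly defined game.

```python
# A minimal sketch of the "dog or not dog" pattern: a deep network
# trained on a single finite-domain task. The data here is random
# placeholder tensors standing in for labelled images.
import torch
import torch.nn as nn

# Tiny convolutional net mapping a 3x64x64 image to one logit
# ("dog" vs. "not dog"). Real systems are far deeper, but the
# framing is the same: one network, one narrowly defined game.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),
)

images = torch.randn(8, 3, 64, 64)            # stand-in for a labelled batch
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = dog, 0 = not dog

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):  # real training runs over millions of images
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Nothing in this loop knows what a dog is; it only adjusts weights until the score on this one game improves, which is precisely the distinction the post goes on to draw.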

This can, and does, result in extremely useful automation, as our last blog explained. For example, a machine-learning algorithm that can dynamically autofocus an electron microscope greatly improves the effectiveness of that device. What it does not produce, however, is intelligence.
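
As a rough illustration of what such automation looks like, and emphatically not the actual microscope system, here is a toy sketch that frames autofocus as optimisation: sweep a focus setting and keep the image that scores highest on a simple sharpness metric. The `capture_image` interface and the simulated optics are hypothetical stand-ins; real learned systems infer this mapping from data rather than hard-coding it.

```python
# Toy illustration of autofocus as optimisation. `capture_image` is a
# hypothetical stand-in for the microscope interface; its simulated
# sharpness peaks when `focus` is near the "true" value 0.3.
import numpy as np

def capture_image(focus: float) -> np.ndarray:
    rng = np.random.default_rng(0)
    blur = abs(focus - 0.3) + 0.05
    # Sharper images have higher-contrast pixel noise in this toy model.
    return rng.normal(scale=1.0 / blur, size=(64, 64))

def sharpness(image: np.ndarray) -> float:
    # Variance of pixel intensities: a crude but common focus metric.
    return float(image.var())

# Scan candidate focus settings and keep the sharpest result.
candidates = np.linspace(0.0, 1.0, 21)
best_focus = max(candidates, key=lambda f: sharpness(capture_image(f)))
print(f"selected focus setting: {best_focus:.2f}")
```

The automation is genuinely useful, yet the program has no notion of microscopy, only a number to maximise, which is the post's point about automation versus intelligence.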

It’s intelligence, Jim, but not as we know it

Even examples of stellar deep-learning success, such as image recognition, come at the price of solutions and behaviours completely different from how a human would perform the same task. Toddlers do not need to be trained on millions of images of dogs before they can recognise a dog in a picture; more importantly, toddlers immediately grasp the abstract concept of a dog, which they can then use to identify dogs in different contexts. In an article in Axios, Geoffrey Hinton, considered the father of deep learning, suggests AI researchers will have to “throw it all away and start again.”

The reason for this is what’s known as Moravec’s Paradox, which states that trivial instances of reality are unfathomably more complex than the most complex games. Phrased differently, high-level reasoning such as solving a game of “Go” requires relatively little computation when compared to even low-level sensorimotor skills.

Things humans perceive as difficult or requiring intelligence are those that our brain has evolved to master only relatively recently. Examples are abstract thought, games (such as chess and Go), and logical reasoning. Ironically, these are the very things that are easy for machines (e.g., deep learning) to replicate precisely because evolution has not spent that much time on them. By comparison, the computations involved for football star Cristiano Ronaldo to unconsciously detect a rapidly travelling object against a background of thousands of faces and spotlights and fire hundreds of different muscles with perfect timing to leap into the air and score a goal with an inch-perfect overhead kick at a contact height of 2.7 metres would probably require a cosmic computer powered by a black hole.

Even trivial activities performed by an infant (e.g., recognising faces, distinguishing between objects, moving around in 3D space) are almost impossible for machines. These unconscious activities, along with innate human “common sense,” are what Piekniewski believes are crucial for intelligence; they constitute a giant blind spot that is leading current deep-learning research into a dead end.

Keep calm, no intelligence required

Steven Pinker, everyone’s favourite positivity contrarian, noted that while stock analysts and engineers may be replaced by machines, gardeners, receptionists, and cooks are secure for decades to come because their work is not based on processes or linear logical reasoning. For CFA Institute members, the relevant takeaway is that Pinker likely underestimates the importance of interpersonal skills in the day-to-day job of a stock analyst or investment adviser, overestimates the amount of logical reasoning and statistical analysis involved, and thus also overestimates the rate at which advisers will be replaced!

Governments are bending over backward to win the race for the future of AI. It is unclear what, exactly, they are afraid of. Systematic use of invasive and intrusive surveillance data is far more likely to be a problem than a French version of Skynet. Similarly, capital-intensive economies with low demand for unskilled labour are already here.

We will know we have created artificial intelligence when a computer is able to confidently make outrageous promises about the future, based largely on hearsay and groupthink, have its expectations crushed by reality, and then throw the technological baby out with the bathwater from sheer disappointment. This machine will probably be operating on a blockchain.

If you liked this post, consider subscribing to Market Integrity Insights.


Photo Credit: ©Getty Images/Alfred Paseka/Science Photo Library

About the Author(s)
Sviatoslav Rosov, PhD, CFA

Sviatoslav Rosov, PhD, CFA, is Director, Capital Markets Policy EMEA at CFA Institute. He is responsible for developing research projects, policy papers, articles, and regulatory consultations that advance CFA Institute policy positions, focusing on market structure and wider financial market integrity issues.

1 thought on “Artificial Intelligence: The Future Is Human”

  1. Desmond Opare-Agyekum says:

    So what does the future hold for CFA investment professionals, given the widely projected use of AI in the investment business? Will a CFA’s job remain relevant amid these technological breakthroughs in artificial intelligence?
