Practical analysis for investment professionals
07 January 2016

Is Artificial Intelligence for Real?

Posted In: Future States

Did you know there’s an artificial intelligence (AI) that can decipher your age and how attractive you are?

Other applications have been used to make art and music. And, of course, people are starting to use it to make investments.

But the AI that makes art and music can’t make investments, while it’s a safe bet that at least one person is world class at all three. And that’s the heart of one of the biggest questions in the world right now: Is it possible to create an AI that can match a human across a broad range of cognitive tasks?

To be clear, this is a formidable challenge. There’s something special about human intelligence that we’re not quite able to put our fingers on yet. When Garry Kasparov played chess against IBM’s Deep Blue supercomputer, it was estimated that the computer was checking 200 million positions a second while Kasparov was considering perhaps five. Kasparov won their first match.
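For the curious, the kind of brute-force position checking Deep Blue performed belongs to the minimax family of game-tree search algorithms. A minimal sketch, using tic-tac-toe rather than chess for brevity (this is an illustration of the general technique, not Deep Blue’s actual program, and all function names here are my own):

```python
def winner(board):
    """Return 'X' or 'O' if that player has three in a row on a 9-cell board, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: 'X' maximizes, 'O' minimizes.

    Exhaustively checks every reachable position, just as Deep Blue did,
    only across a vastly smaller game tree.
    """
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, s in enumerate(board) if s is None]
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None  # undo the trial move
        if best is None or (player == 'X' and score > best[0]) \
                        or (player == 'O' and score < best[0]):
            best = (score, m)
    return best

# With X holding squares 0 and 1, the search finds the winning move at square 2.
score, move = minimax(['X', 'X', None, 'O', 'O', None, None, None, None], 'X')
```

The contrast with Kasparov is the point: the program wins by enumerating positions, not by anything resembling insight.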

How would that be possible if there weren’t some sort of secret sauce inside the human brain? But technology is progressing at a furious pace, and a growing group of developers is harnessing this progress to search for the secret recipe. With that in mind, we decided to ask the readers of CFA Institute Financial NewsBrief for their opinion.

And, boy, did opinions vary. They should, because this is both a controversial and critical issue. That technological progress is a component of economic growth is hardly a revelation. But creating a human-quality intelligence that is native to the silicon environment of a server farm, never sleeps, and is networked to the entire world has the potential to substantially revise everything we know about how technology moves forward. Take a second and think about how such an innovation would affect your ability to find work. Just watch a relatively primitive AI play the popular smartphone game 2048.

It’s logical to think you could find a job doing manual labor. But Luke Muehlhauser, the former executive director of the Machine Intelligence Research Institute (MIRI), says no way: “Robots will be better than humans at manual labor, too.”

Is it possible to create an artificial intelligence as smart as a human?

If you’re scared, there is some good news. A plurality (35%) of the 772 respondents think achieving human-level intelligence is impossible.

It well may be. After all, creating music and feeling emotion are entirely different things. I’m writing this while listening to “focus enhancing” music, and even though it does do a good job of keeping me on task, it’s tough to imagine the machine felt anything while creating it.

No matter how far technology advances, it may just be impossible to replicate the feeling of wonder, awe, and infinite smallness that a human being experiences while staring up at the night sky.

But technology is getting pretty good at faking it. Last year, a poem written by an AI was accepted to The Archive, a literary magazine at Duke University.

Given that, it’s understandable why 11% of respondents believe that such an intelligence already exists. Writing a poem is still a ways away from the sort of AI depicted in Ex Machina, but it’s definitely the sort of thing that people are surprised computers can do.

That element of surprise is actually one of the more interesting aspects of forecasting this trend. When a technology works, we immediately take it for granted. But the last time you flew on an airplane, a computer was doing most of the piloting!

And let’s not forget that in 1895 Lord Kelvin famously said that “heavier-than-air flying machines are impossible.” Speaking on the occasion of the 100th anniversary of flight at Kitty Hawk, North Carolina, NASA’s then-deputy administrator, Frederick Gregory, declared Kelvin to have been “slightly in error.”

I wasn’t able to confirm whether he dropped the microphone afterwards. But that underscores a serious point: Declaring a technological innovation impossible is a form of hubris that invites mockery. Scientist and author Arthur C. Clarke offered a great turn of phrase: “If an elderly but distinguished scientist says that something is possible he is almost certainly right, but if he says that it is impossible he is very probably wrong.”

So I wouldn’t advise shorting technological progress. And the balance of respondents didn’t. Instead, they offered divergent forecasts for when such an artificial intelligence would be developed. The second-largest group of respondents (32%) believed it would happen by 2045, followed by 12% who thought it would happen by 2100. Another 11% believed that it would be possible, but that the breakthrough would not come this century.

The median prediction in a survey of forecasts is 2042. There are plenty of reasons to take this with a grain of salt, particularly because predicting things is very hard. But if this has piqued your interest and if you’d like to learn more, I recommend first reading this excellent explainer of the state of the field at the moment. If then you think that this might be a trend worth investing in, take a look at the technology from the perspective of a venture capitalist.

And if reading this post has got you worried about a dystopian future, read this comic and chill out a bit. It might not be that bad.

If you liked this post, don’t forget to subscribe to the Enterprising Investor.

All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.

About the Author(s)
Sloane Ortel

Sloane Ortel is the founder of Invest Vegan, an ethics-first registered investment adviser that manages distinctive discretionary portfolios of public equities on behalf of aligned individuals and institutions. Before establishing her own firm, she joined CFA Institute’s staff as a sophomore at Fordham University and spent close to a decade helping members adapt to a changing investment landscape as a collaborator, curator, and commentator. She is also a co-host of Free Money, a podcast for sustainability-oriented investors with a sense of humor.

13 thoughts on “Is Artificial Intelligence for Real?”

  1. Peter Kinnon says:

The problem is that all seem to assume that the evolution of technology is within our control. Individuals obviously can, to some extent, opt out, but evolutionary processes are essentially colligative.

    We are very prone to anthropocentric distortions of objective reality. This is perhaps not surprising, for to instead adopt the evidence based viewpoint now afforded by “big science” and “big history” takes us way outside our perceptive comfort zone.

The fact is that the evolution of the Internet, the present spearhead of technology, is actually an autonomous process. The difficulty in convincing people of this “inconvenient truth” seems to stem partly from our natural anthropocentric mind-sets and also from the traditional illusion that we are in some way in control of, and distinct from, nature. Contemplation of the observed realities tends to be relegated to the emotional “too hard” bin.

    This evolution is not driven by any individual software company or team of researchers, but rather by the sum of many human requirements, whims and desires to which the current technologies react. Among the more significant motivators are such things as commerce, gaming, social interactions, education and sexual titillation.

Virtually all interests are catered for and, in toto, provide the impetus for the continued evolution of the Internet. Netty is still in her larval stage, but we “workers” scurry round mindlessly engaged in her nurture.

    By relinquishing our usual parochial approach to this issue in favor of the overall evolutionary “big picture” provided by many fields of science, the emergence of a new predominant cognitive entity (from the Internet, rather than individual machines) is seen to be not only feasible but inevitable.

The separate issue of whether it will be malignant, neutral, or benign towards us snout-less apes is less certain, and this particular aspect I have explored elsewhere.

    Seemingly unrelated disciplines such as geology, biology and “big history” actually have much to tell us about the machinery of nature (of which technology is necessarily a part) and the kind of outcome that is to be expected from the evolution of the Internet.

    The pattern observed in the exponential evolution of technology in the shared imagination of our species over the last two million years indicates the rather imminent implementation of the next, (non-biological) phase of the on-going evolutionary “life” process from what we at present call the Internet. It is effectively evolving by a process of self-assembly.

    The “Internet of Things” is proceeding apace and pervading all aspects of our lives. We are increasingly, in a sense, “enslaved” by our PCs, mobile phones, their apps and many other trappings of the increasingly cloudy net. We are already largely dependent upon it for our commerce and industry and there is no turning back. What we perceive as a tool is well on its way to becoming an agent.

    We are witnessing the emergence of a new and predominant cognitive entity that is a logical consequence of the evolutionary continuum that can be traced back at least as far as the formation of the chemical elements in stars.

    This is the main theme of my latest book “The Intricacy Generator: Pushing Chemistry and Geometry Uphill” which is now available as a 336 page illustrated paperback from Amazon, etc.

    Netty, as you may have guessed by now, is the name I choose to identify this emergent non-biological cognitive entity. In the event that we can subdue our natural tendencies to belligerence and form a symbiotic relationship with this new phase of the “life” process then we have the possibility of a bright future.

If we don’t become aware of these realities and mend our ways, however, then we snout-less apes could indeed be relegated to the historical rubbish bin within a few decades. After all, our infrastructures are becoming increasingly Internet dependent and Netty will only need to “pull the plug” to effect pest eradication.

So it is to our advantage to try to effect the inclusion of desirable human behaviors in Netty’s psyche. In practice that equates to our species firstly becoming aware of our true place in nature’s machinery and, secondly, making a determined effort to “straighten up and fly right.”

The really significant AI will be the new predominant cognitive entity shortly to emerge, whose current embryonic form corresponds to what we know as the Internet.

    1. Will Ortel says:

      Peter —

      Interesting thoughts. I hadn’t thought of offering a name for the AI — Netty is a good one!

      I completely agree with your thesis statement. It’s tempting to believe we can assert control over whether or not a technological process comes to fruition. I don’t see how it could be done. Even in a circumstance where an entire category of basic research is made illegal in a large country (which is unlikely to say the least), the most that such a move seems likely to do is slow the rate of change.

What I’d say though is that I’m not so sure the evolved intelligence will be even loosely analogous to human intelligence. I think the best formulation of that is in the article: will it be able to ponder the night sky and feel small? Increasing calculation capacity and fostering creativity are vastly different things. My guess is that the path forward is a symbiosis where humans provide the things they’re evolved for, like creativity and companionship, and AI provides the things it’s evolved for, like breadth of awareness and calculation ability.

      Still, the scenario you raise is worth being aware of. Thanks for reading!


      1. Peter Kinnon says:

        Sure, Will, the emotional aspects of an organism’s consciousness are, of course, mostly determined by the requirements of its evolutionary niche. Because that of Netty will be very different from ours, her emotional landscape will differ correspondingly.

But as her knowledge base will have derived from us, there would seem to be no good reason why an empathy could not be established. After all, most of us humans develop such bonds with our pets, even despite the huge linguistic gulf.

        Furthermore, having, we can assume, a language capability even greater than ours, she would almost certainly have the associated reflexivity that characterises our own special sense of self and is a prerequisite for the experience of “wonder” to which you refer.

        Would you not agree?


  2. Jason Warren says:

Kasparov and Deep Blue played a single tournament. The computer won. Kasparov wanted a rematch, but IBM elected not to participate.

    1. Will Ortel says:

      Hi Jason —

      I wonder: would you mind tracking down a source for that? This NPR transcript and many other sources corroborate that they played in both 1996 and 1997, and Kasparov won the first match.

      Cheers, all best, and thanks for reading!


      1. Jason Warren says:

I was a member of the group at IBM in Poughkeepsie that built the Deep Blue machine with help from the Research Division in Yorktown Heights. I know there was but a single tournament. Kasparov’s request for a second try was declined.

        1. Paul McCaffrey says:

          Hi Jason, Just saw this. It’s funny, I was in Poughkeepsie myself around that time for college and remember driving by that big IBM facility. I’m wondering though if there could be a nomenclature issue re: Deep Blue? I did some digging on this and Kasparov seems to have beaten some incarnation of Deep Blue on 17 February 1996. He even wrote about it for Time. This article runs through the chain of events leading up to Deep Blue’s eventual victory in 1997.

          1. Jason Warren says:

            I had forgotten about the Philadelphia match entirely. As I now recall, the machine in Philly was a spit-‘n-bailing wire job that may actually have been running at the Yorktown Heights research facility. I do recall helping push the “real” Deep Blue up a ramp to help load it on a truck bound for NYC.

  3. Jon Lubar says:

Not considered here is the human abuse of both society and the marketplace by what passes for AI, abuse that has its roots in the human frailties and failings that have afflicted us since time immemorial. One example happening right now is the attempt to subjugate and/or outright replace teachers with “AI” systems that have yet to prove themselves superior to human teachers in any way, but that will still be leveraged into place in our public education system simply because those pushing them have wealth and access to the halls of power in DC. One absurdity among many that stand out is that those pushing this change assert that by ignoring the actual causes of problems that do affect student outcomes, those problems can be solved by data. They avert their eyes from recent experiences where the data has pointed to long and well understood problems that the use of data cannot touch, since data has no power of persuasion that is greater than, let alone comparable to, political inertia and the entropy of ideology. The sources of error in one of the primary data collection areas, standardized testing, are ignored, as are the ways that the overall system is being gamed to produce desired economic and ideological outcomes.

The very real danger, the harm that is happening now, is that the teaching profession in particular and the public education system in general are being reduced to rubble while there is nothing of any quality to replace them with. Another problem too little discussed is that when the sum of knowledge and experience of a profession resides not in humanity but in server farms, it is far more vulnerable to loss through various exercises of malice or through natural disasters. A human reservoir is far more resilient and distributed.

It must be acknowledged that not all of the damage being done to public education is being done by “AI” systems, though that is ramping up as the weapon of “AI” becomes understood by those who have malinformed and outright malicious intent about the future of education in America. Stepping back for a wider view, the premise that cradle-to-grave data collection and analysis will replace human experience and intuition for things like job placement and hiring may in fact come to pass, and the reduction of human experience to what data can capture and explain may in fact replace who we actually are with a dataset of questionable worth. What gets measured gets managed. There are things that big data and AI can do well and efficiently for us, but the admonition “Just because you can, doesn’t mean that you should” is not being taken into account. Disruptive change only occurs when a truly superior, fully realized model exists that can fully replace its predecessor. If that does not exist, then what we are left with is destructive change, often described as building the plane while it is in the air. That shouldn’t fly with any of us.

