Since the term AI was coined, the field has cycled through periods usually referred to as "AI springs" and "AI winters". During AI springs, we have seen massive investments, over-confident predictions, and over-promises. Once those promises are not delivered, an AI winter follows. During AI winters, we have seen budget cuts and falling confidence in the development of AI.

In this article, I will dive into the paper "Why AI is Harder Than We Think" by Melanie Mitchell [1]. In it, she describes four fallacies made by AI researchers that lead to over-confident predictions.

Examples of Over-Confident Predictions

The paper starts with some examples of over-confident predictions:

  • The year 2020 was supposed to herald the arrival of self-driving cars. Five years earlier, a headline in The Guardian predicted that “From 2020 you will become a permanent backseat driver”
  • In 2016 Business Insider assured us that “10 million self-driving cars will be on the road by 2020”
  • Tesla Motors CEO Elon Musk promised in 2019 that “A year from now, we’ll have over a million cars with full self-driving, software…everything”

As I write this article in 2022, we know that self-driving cars have not taken over the world. The paper continues with examples from the 1950s and 1960s:

  • In 1958, for example, the New York Times reported …: “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence”
  • In 1960 Herbert Simon declared that “Machines will be capable, within twenty years, of doing any work that a man can do”
  • The following year, Claude Shannon …: “I confidently expect that within a matter of 10 or 15 years, something will emerge from the laboratory which is not too far from the robot of science fiction fame”

And, as we now know, we do not have machines capable of doing any work a man can do. So what is going on? Why do researchers keep making such predictions?

The author of the paper identifies four fallacies:

  • Fallacy 1: Narrow intelligence is on a continuum with general intelligence
  • Fallacy 2: Easy things are easy and hard things are hard
  • Fallacy 3: The lure of wishful mnemonics
  • Fallacy 4: Intelligence is all in the brain

In the next sections, we will go through these four fallacies as the author explains them in the paper.

Fallacy 1: Narrow intelligence is on a continuum with general intelligence

Whenever there is an advance on a specific task where AI is applied, we tend to see it as a "first step" towards general AI. Deep Blue, IBM's Watson, and OpenAI's GPT-3 were all hailed as "a step toward general intelligence". When people see a computer system doing something amazing, they assume that general artificial intelligence is coming.

The paper quotes engineer Stuart Dreyfus, brother of philosopher Hubert Dreyfus:

“It was like claiming that the first monkey that climbed a tree was making progress towards landing on the moon”

Fallacy 2: Easy things are easy and hard things are hard

People assume that what is easy for humans will be easy for machines and that what is hard for humans will be hard for machines. The paper says:

That is, the things that we humans do without much thought — looking out in the world and making sense of what we see, carrying on a conversation, walking down a crowded sidewalk without bumping into anyone — turn out to be the hardest challenges for machines.

The author mentions Google DeepMind’s claim that AlphaGo solved “the most challenging of domains”. The paper asks this question:

Challenging for whom?

A problem that is hard for people is not necessarily hard for computers.

Fallacy 3: The lure of wishful mnemonics

We keep using words like "understanding", "goal", and "learning" to describe machines. Machines do not understand, they do not have goals, and they do not learn in the sense of human learning. The more we use these words, the deeper we fall into the fallacy. The author gives examples:

  • … one of IBM’s top executives proclaimed that “Watson can read all of the health-care texts in the world in seconds” and IBM’s website claims that its Watson program “understands context and nuance in seven languages” …
  • … DeepMind co-founder Demis Hassabis tells us that “AlphaGo’s goal is to beat the best human players not just mimic them” …
  • … AlphaGo’s lead researcher David Silver described one of the program’s matches thus: “We can always ask AlphaGo how well it thinks it’s doing during the game. …It was only towards the end of the game that AlphaGo thought it would win” …

Machines are not reading, understanding, winning, or thinking in the way that people do.

Not only that, but the datasets we use also follow this kind of naming convention, which can lead to misunderstandings: “Stanford Question Answering Dataset”, “RACE Reading Comprehension Dataset”, “General Language Understanding Evaluation”, … When an algorithmic advance increases accuracy on one of these datasets, we see headlines like “Computers are getting better than humans at reading” and “Microsoft’s AI model has outperformed humans in natural-language understanding”. Such headlines contribute to over-confidence.

Fallacy 4: Intelligence is all in the brain

We tend to think of intelligence as being all about the brain, but what about the body? Does the body not influence intelligence? The author says:

The assumption that intelligence is all in the brain has led to speculation that, to achieve human-level AI, we simply need to scale up machines to match the brain’s “computing capacity” and then develop the appropriate “software” for this brain-matching “hardware.”

The paper also includes a quote that may explain why people think computing power is the key to intelligence, even without a body:

… computer scientist Rod Brooks argues, “The reason for why we got stuck in this cul-de-sac for so long was because Moore’s law just kept feeding us, and we kept thinking, ‘Oh, we’re making progress, we’re making progress, we’re making progress.’ But maybe we haven’t been”

Conclusion

In this article, we talked about the four fallacies that AI researchers fall victim to. The author finishes her paper with some questions, and that is how we will finish this article too:

These fallacies raise several questions for AI researchers. How can we assess actual progress toward “general” or “human-level” AI? How can we assess the difficulty of a particular domain for AI as compared with humans? How should we describe the actual abilities of AI systems without fooling ourselves and others with wishful mnemonics? To what extent can the various dimensions of human cognition (including cognitive biases, emotions, objectives, and embodiment) be disentangled? How can we improve our intuitions about what intelligence is?

References

[1] Melanie Mitchell, “Why AI is Harder Than We Think”: https://arxiv.org/pdf/2104.12871.pdf