AI is often portrayed as either the ultimate solution to all our problems or a looming threat that must be handled with extreme caution. These are the two polar extremes of a debate that surrounds any transformative technology, and the reality likely lies somewhere in the middle.

At the recent 2024 MIT Sloan CIO Symposium, AI was the central theme, with numerous keynotes and panels devoted to the topic. The event also featured informal roundtable discussions that touched on legal risks in AI deployment, AI as a driver for productivity, and the evolving role of humans in AI-augmented workplaces.

Time to Reset AI Expectations

A standout moment was the closing keynote, “What Works and Doesn’t Work with AI,” delivered by MIT professor emeritus Rodney Brooks. Brooks, who directed the MIT AI Lab from 1997 to 2003 and was the founding director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) until 2007, offered his perspective on separating AI hype from AI reality. A seasoned robotics entrepreneur, Brooks founded several companies, including iRobot, Rethink Robotics, and Robust.AI.

In his keynote, Brooks introduced his “Three Laws of Artificial Intelligence,” which serve to ground our understanding of AI:

  1. When an AI system performs a task, human observers immediately estimate its competence in areas that seem related. These estimates are usually wildly over-optimistic.
  2. The most successful AI deployments involve a human in the loop, whose intelligence smooths out the system’s rough edges.
  3. Without carefully defining the boundaries of an AI system’s deployment, there is always a long tail of special cases that may take decades to discover and address.

Brooks reminded the audience that AI has been a formal academic discipline since the 1950s, when its pioneers believed that nearly every aspect of human intelligence could, in principle, be encoded as software and executed by increasingly powerful computers.

Decades of Effort

In the 1980s, leading AI researchers were confident that AI systems with human-like cognitive abilities could be developed within a generation, and they secured government funding to pursue this vision. However, these projects underestimated the complexity of replicating human intelligence in software, particularly cognitive functions such as language, thinking, and reasoning. After years of unmet expectations, the projects were largely abandoned, leading to the so-called AI winter, a period of reduced interest and funding in AI.

AI experienced a resurgence in the 1990s with a shift towards a statistical approach that analyzed patterns in vast amounts of data using sophisticated algorithms and powerful computers. This data-driven approach yielded results that approximated intelligence and scaled far better than the earlier rule-based, hand-programmed systems.

Over the next few decades, AI achieved significant milestones, including Deep Blue’s 1997 victory over chess grandmaster Garry Kasparov, Watson’s 2011 win on Jeopardy!, and AlphaGo’s 2016 triumph over Lee Sedol, one of the world’s top Go players. AI also made strides in autonomous vehicles, as evidenced by the successful completion of the 2007 DARPA Urban Challenge, and in disaster-response robotics with the DARPA Robotics Challenge, launched in 2012.

Is It Different Now?

Following these achievements, AI seemed poised to “change everything,” according to Brooks. But is it really? Since 2017, Brooks has published an annual Predictions Scorecard, in which he tracks his predictions about future milestones in robotics, AI, machine learning, self-driving cars, and human space travel against actual progress.

“I made my predictions because, then as now, I saw an immense amount of hype surrounding these topics,” Brooks said. He observed that the media and public were making premature conclusions about the impact of AI on jobs, road safety, space exploration, and more.

“My predictions, complete with timelines, were meant to temper expectations and inject some reality into what I saw as irrational exuberance.”

So why have so many AI predictions missed the mark? Brooks, who has a penchant for lists, attributes this to what he calls the Seven Deadly Sins of Predicting the Future of AI. In a 2017 essay, he described these “sins”:

  1. Overestimating and Underestimating: This relates to Amara’s Law, which states that we tend to overestimate the effect of a technology in the short run and underestimate it in the long run. “Artificial Intelligence has the distinction of being overestimated repeatedly—in the 1960s, the 1980s, and I believe again now,” wrote Brooks.
  2. Indistinguishable from Magic: This sin is tied to Arthur C. Clarke’s third law: Any sufficiently advanced technology is indistinguishable from magic. Brooks noted, “If a technology is far enough removed from what we currently understand, it becomes indistinguishable from magic, which clouds our understanding of its limitations.”
  3. Exponentials: Brooks cautioned against “exponentialism,” the belief that exponential growth, like that seen with Moore’s Law, will continue indefinitely. “Exponential arguments often fail to account for the physical and economic limits that eventually slow down or halt progress.”
  4. Performance versus Competence: People often generalize from an AI’s performance on a specific task to its overall competence, but today’s AI systems are extremely narrow in their capabilities. Human-style generalizations do not apply.
  5. Speed of Deployment: Brooks argued that many people mistakenly believe that new AI systems will be quickly and seamlessly integrated into the real world. In reality, innovations in AI and robotics take much longer to deploy widely than most expect.
  6. Hollywood Scenarios: Brooks pointed out that if we do develop super-intelligent AI, it won’t come as a sudden shock. Such advancements will occur gradually, and by then, our world will be filled with various forms of intelligence.
  7. Suitcase Words: Coined by MIT AI pioneer Marvin Minsky, “suitcase words” are terms with multiple meanings that lead to confusion. “Learning,” for instance, means something entirely different in the context of machine learning than in human learning. Brooks warned that such words can mislead people about the true capabilities of AI.

The takeaway? While AI has made remarkable progress, there’s still a long journey ahead. It’s time to reset AI expectations.
