Author Jeremy Wagstaff wrote a very thought-provoking article on the future of AI, and how much of it we could predict based on the past. This insight expands on that article.

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn. These machines can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. Many people think of artificial intelligence only in terms of how they personally use it; some don’t even realize when they are using it.

Artificial intelligence has long been a concept in human mythology and literature. Our imaginations have been captured by the thought of sentient machines constructed by humans, from Talos, the enormous bronze automaton (self-operating machine) that safeguarded the island of Crete in Greek mythology, to the spacecraft-controlling HAL in 2001: A Space Odyssey.

Artificial Intelligence comes in a variety of flavors, if you will.

Artificial intelligence can be categorized in several ways, including by capability and functionality:

  • Narrow AI
    • Also known as weak AI, this type of AI is trained for a specific task or a limited range of tasks.
  • Strong AI
    • Also known as general AI or artificial general intelligence (AGI), this type of AI is designed to perform any intellectual task that a human can.
  • Limited memory AI
    • Also known as semi-autonomous AI, this type of AI can remember past events and use that information to inform future decisions. Google’s self-driving car is an example of limited memory AI.
  • Reactive machines (reactive AI)
    • This is the oldest and most basic form of AI, and these machines are very limited. They can’t create memories or use past experiences to shape current decisions, so they can’t learn and improve over time; given identical inputs, they produce the same output every time, and they can’t adopt intuitive functionalities.
  • Augmented intelligence
    • Also known as intelligence amplification (IA), this type of AI uses machines to enhance the capabilities of human workers.
  • Machine learning
    • This type of AI allows computers to learn without being explicitly programmed, by building algorithms that can receive input data and use statistics to predict an output (see the sketch after this list).
      • Supervised learning involves training a program on labeled examples, repeatedly correcting its output until it learns to produce the right answers on its own.
      • Unsupervised learning involves giving a program a large amount of data and allowing it to make its own connections.
  • Deep learning
    • This subset of machine learning uses layered neural networks to learn from large data sets and perform complex tasks like image recognition or speech analysis.
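
To make the supervised/unsupervised distinction concrete, here is a minimal sketch in Python, assuming scikit-learn is installed. The toy dataset and the choice of logistic regression and k-means are illustrative assumptions, not a prescription.

```python
# Minimal sketch contrasting supervised and unsupervised learning
# on the same toy data, using scikit-learn (assumed installed).
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy dataset: 200 two-dimensional points drawn from three clusters.
X, y = make_blobs(n_samples=200, centers=3, random_state=42)

# Supervised learning: we hand the model the "correct outputs" (labels y)
# and it learns a mapping from inputs to those labels.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised prediction for one point:", clf.predict(X[:1]))

# Unsupervised learning: the model sees only the raw data X and must
# find structure (here, clusters) on its own, with no labels given.
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("Unsupervised cluster for the same point:", km.predict(X[:1]))
```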

You likely weren’t even aware of all of the above categorizations of artificial intelligence. Most of us would still sort it into generative AI (itself a subset of narrow AI), predictive AI, and reactive AI. Reflect on the AI journey through the Three C’s (Computation, Cognition, and Communication) as guiding pillars for understanding AI’s transformative potential, and consider how these concepts converge to shape the future of technology.

Beyond a definition, what really is artificial intelligence? Who makes it, who uses it, what does it do, and how?

Artificial Intelligence Companies – A Sampling

  1. OpenAI
  2. DeepMind
  3. IBM Watson
  4. NVIDIA
  5. Microsoft AI
  6. Amazon Web Services (AWS) AI
  7. Google AI
  8. Intel AI
  9. Baidu AI
  10. Apple AI

AI and Its Challenges

Artificial intelligence (AI) presents a novel and significant challenge to the fundamental ideas underpinning the modern state, affecting governance, social and mental health, the balance between capitalism and individual protection, and international cooperation and commerce. Addressing this amorphous technology, which lacks a clear definition yet pervades increasing facets of life, is complex and daunting. It is essential to recognize what should not be done, drawing lessons from past mistakes that may not be reversible this time.

In the 1920s, the concept of a street was fluid. People viewed city streets as public spaces open to anyone not endangering or obstructing others. However, conflicts between ‘joy riders’ and ‘jay walkers’ began to emerge, with judges often siding with pedestrians in lawsuits. Motorist associations and the car industry lobbied to prioritize vehicles, leading to the construction of vehicle-only thoroughfares. The dominance of cars prevailed for a century, but recent efforts have sought to reverse this trend with ‘complete streets,’ bicycle and pedestrian infrastructure, and traffic calming measures. Technology, such as electric micro-mobility and improved VR/AR for street design, plays a role in this transformation. The guy digging out a road bed for chariots and Roman armies likely considered none of this.

Addressing new technology is not easy to do, and it’s taken changes to our planet’s climate, a pandemic, and the deaths of tens of millions of people in traffic accidents (3.6 million in the U.S. since 1899). If we had better understood the implications of the first automobile technology, perhaps we could have made better decisions.

Similarly, society should avoid repeating past mistakes with AI. The market has driven AI’s development, often prioritizing those who stand to profit over consumers. You know, capitalism. The rapid adoption and expansion of AI, driven by commercial and nationalist competition, have created significant distortions. Companies like Nvidia have soared in value due to AI chip sales, and governments are heavily investing in AI technology to gain competitive advantages.

Listening to AI experts highlights the enormity of the commitment being made and reveals that these experts, despite their knowledge, may not be the best sources for AI guidance. The size and impact of AI are already redirecting massive resources and creating new challenges. For example, AI’s demand for energy, chips, memory, and talent is immense, and the future of AI-driven applications depends on the availability of computing resources.

The rise in demand for AI has already led to significant industry changes. Data centers are transforming into ‘AI data centers,’ and the demand for specialized AI chips and memory is skyrocketing. The U.S. government is investing billions to boost its position in AI, and countries like China are rapidly advancing in AI expertise.

China may be behind in physical assets, but it is moving fast on expertise, generating almost half of the world’s top AI researchers (Source: New York Times).

The U.S. has just announced it will provide chip maker Intel with $20 billion in grants and loans to boost the country’s position in AI.

Nvidia is now the third-largest company in the world, largely because its specialized chips account for more than 70 percent of AI chip sales.

Memory-maker Micron has mostly run out of high-bandwidth memory (HBM) stock because of the chips’ usage in AI; one customer paid $600 million up-front to lock in supply, according to a story by The Stack.

Back in January, the International Energy Agency forecast that data centers may more than double their electrical consumption by 2026 (Source: Sandra MacGregor, Data Center Knowledge).

AI is sucking up all the payroll: tech workers who don’t have AI skills are finding fewer roles and lower salaries, or losing their jobs entirely to automation and AI (Source: Belle Lin at WSJ).

Sam Altman of OpenAI sees a future where demand for AI-driven apps is limited only by the amount of computing available at a price the consumer is willing to pay.

“Compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world, and I think we should be investing heavily to make a lot more compute.”

Sam Altman, OpenAI CEO

This AI buildup is reminiscent of past technological transformations, where powerful interests shaped outcomes, often at the expense of broader societal considerations. Consider early car manufacturers: they focused on the need for factories, components, and roads. AI’s rapid integration into various aspects of life necessitates careful scrutiny and a discussion about the kind of world society wants to create. This conversation should not be dominated by those who build AI but should involve diverse perspectives to ensure a balanced and thoughtful approach to AI’s future.

Put a pin in that. Think about weapons technology. Build better guns. Build better cannons. Build better tanks, planes, ships, and now drones. Let’s not forget about the proliferation of nuclear arms in the midst of all this. The “build a better mousetrap” journey led to the creation of a tool to essentially eliminate the entire planet, rather than win a war. A bit like losing sight of the forest for the trees.

Default here to the great Dr. Seuss. Among his wonderful children’s stories, full of characters and colors and rhymes, there was always a gentle moral. The moral of “The Butter Battle Book” is the danger of retaliation and the importance of accepting differences. It was inspired by the Cold War between the United States and the Soviet Union. The Butter Battle Book may not seem controversial at first glance, given the innocent alliteration on the cover, but it was denounced by some, and banned, for attempting to speak to kids and adults about an escalating, bitter conflict between opposite corners of the globe. The issue fueling the wartime tension between the Zooks and the Yooks is, of course, which side of the bread to butter. Ironically, the situation intensifies all the way to a standoff of mutually assured devastation for reasons having nothing to do with butter at all.

By the culmination of the great butter debate, technology had far outrun the original issue of bread and butter, arriving at a weapon of total devastation. How does this relate to artificial intelligence?

Historically, societies have failed to adequately prepare for technological advancements, leading to negative consequences. For instance, the health risks of asbestos, cigarette smoke, pesticides, and leaded gasoline were known but ignored due to powerful industry interests. Lead wasn’t even naturally occurring in gasoline. For four decades, all scientific research into leaded gasoline was underwritten by industry giants DuPont, GM, and Standard Oil. The fox was in the hen house, friends.

AI’s potential harms to mental health, employment, the environment, privacy, and even existential threats should be carefully considered and addressed.

The term artificial intelligence was first used in 1956. It describes how human intellectual processes are simulated by machines, especially computer systems. AI, to put it simply, is the process of building machines with human-like thinking, learning, and problem-solving abilities. As the concept evolved over time, AI started to move from science fiction to reality in the late 20th and early 21st centuries.

That shift was made possible by developments in machine learning and data storage technologies. AI is becoming a necessary component of today’s corporate environment. It is employed to tailor client experiences, identify trends in data, and automate operations. Businesses may use AI to make decisions more quickly and intelligently, freeing up human resources for more difficult jobs. Gaining a thorough understanding of its capabilities, including generative and predictive functions, is increasingly important for companies looking to stay on the cutting edge.

We still don’t fully understand how and why AI models work, and we’re increasingly outsourcing the process of improving AI to the AI itself. For instance, Evolutionary Model Merge is becoming a popular technique for combining multiple existing models to create a new one. This adds further complexity and opacity to an already intricate system. Harry Law, in his Learning from Examples newsletter, notes that “the central insight here is that AI could do a better job of determining which models to merge than humans, especially when it comes to merging across multiple generations of models.”
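
As a toy illustration of the model-merging idea, here is a deliberately simplified Python sketch: the two “models” are stand-in weight dictionaries, the fitness function is a made-up score, and the evolutionary search covers only a single mixing coefficient. This is not Sakana AI’s actual Evolutionary Model Merge implementation, just a sketch of the search-over-merges concept.

```python
# Simplified illustration of evolutionary search over a model merge.
# Real systems merge billions of weights across many layers and evaluate
# fitness on benchmarks; here everything is a toy stand-in.
import random

# Two hypothetical "parent" models, represented as flat weight dictionaries.
model_a = {"w1": 0.9, "w2": -0.3}
model_b = {"w1": 0.1, "w2": 0.8}

def merge(a, b, alpha):
    """Interpolate weights: alpha=1 is pure model A, alpha=0 is pure model B."""
    return {k: alpha * a[k] + (1 - alpha) * b[k] for k in a}

def fitness(model):
    """Toy stand-in for evaluating the merged model on a benchmark."""
    return -((model["w1"] - 0.5) ** 2 + (model["w2"] - 0.2) ** 2)

# Evolutionary loop: mutate the mixing coefficient, keep the best candidate.
alpha = 0.5
best = fitness(merge(model_a, model_b, alpha))
for _ in range(100):
    candidate = min(max(alpha + random.gauss(0, 0.1), 0.0), 1.0)
    score = fitness(merge(model_a, model_b, candidate))
    if score > best:
        alpha, best = candidate, score

print(f"Best mixing coefficient found: {alpha:.2f}")
```

The point of the sketch is the one Law makes: the search over which models to merge, and how, is itself delegated to an automated optimizer rather than to human judgment.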

We haven’t even agreed on what AI is. There is no established science of AI, and we lack a consensus on definitions. The interdisciplinary nature of AI means it can be approached from different angles, with varying frameworks, assumptions, and objectives. While this diversity could be exciting and refreshing, it simultaneously impacts us all in rapidly evolving ways.

We also don’t really know who owns what in the AI landscape. One thing is certain: the major players dominate. As Martin Peers wrote in The Information’s Briefing, “It is an uncomfortable truth of the technology industry currently that we really have little clue what lies behind some of the world’s most important partnerships.” Big Tech, wary of regulators, is avoiding acquisitions by forming partnerships that “involve access to a lot of the things one gets in acquisitions.” For example, consider Microsoft’s multi-billion dollar alliance with OpenAI and, more recently, its partnership with Inflection AI.

AI in corporate and nationalist pursuits may be another in a long list of dangers ignored. Totally not dissing AI here; rather, wondering where governance and guidance will be implemented, and whether it will be soon enough. In the past few years, our understanding of AI has increased as programs like ChatGPT, DALL-E, and Midjourney have become normal tools in our daily lives.

Economist David Williams sees AI as the dehumanization of curiosity.

In the early days of the internet, we had to specify what we wanted from Google searches, whom and what we wanted to spend our day with on Facebook and other social media platforms, and what we wanted to discuss on Twitter. We also had to be witty, intelligent, and sharp. All we had to do on TikTok was scroll. In the end, it became clear that the primary point of social media was not for us to produce and distribute knowledge-rich content, but rather for the platforms to contract out the costly part, creating entertainment, to people prepared to sell themselves or do pranks.

AI has not yet reached that stage. Still, in this early summer of Google, we need to consider what we want our technology to accomplish for us; the search prompt waits, cursor blinking, much as it does in ChatGPT or Claude. But this is only a stage. Soon, generative AI will be able to predict our desires, or at least a distorted, lowest-common-denominator version of them. Because it will no longer require us to spell things out, it will take away our ability to calculate and think, as well as our desire to perform hard tasks. At least then we will see, in text, how much of our time is being wasted.

Personally, I won’t be totally impressed until AI can fix my typos and predict what I really wanted in my internet search.

While its roots go back decades, modern AI has really accelerated in the last few years. As AI advances and seeps into the technology we use, the lines between different types of AI can be hard to distinguish.

The Great AI Debate and What It Should Be

Debates about AI are typically carried on by people who understand it too well and people who don’t grasp it at all. The former, those who create it, frequently assert that they alone are capable of mapping our future; the latter tend to hunt for sound bites. What’s lacking is a conversation about what we want technology to do for us. This is a conversation about the direction we want to take our world, not about artificial intelligence. Although it should be obvious, that conversation almost never happens. This is due in part to our obsession with technology and the unwillingness of vested interests to be open about potential consequences. The harmful impacts of social media created in the West have never been properly discussed, yet our politicians seem content.

AI is fundamentally changing lives at an unprecedented pace. Allowing those developing AI to lead the debate about its future without broader societal input is a mistake that may not be correctable. Society must engage in a comprehensive discussion about AI’s role and implications, ensuring that technological progress aligns with human values and goals.
