Oliver Lovstrom, an AI student, offers an interesting perspective on artificial intelligence: a cautionary AI tale, if you will.
The Theory and Fairy Tale
My first introduction to artificial intelligence was during high school when I began exploring its theories and captivating aspects. In 2018, as self-driving cars were gaining traction, I decided to create a simple autonomous vehicle for my final project. This project filled me with excitement and hope, spurring my continued interest and learning in AI.
However, I had no idea that within a few years, AI would become significantly more advanced and accessible, reaching the masses through affordable tools and systems.
For instance, who could have imagined that just a few years later, we would have access to incredible AI models like ChatGPT and Gemini, developed by tech giants?
The Dark Side of Accessibility
My concerns grew as I observed the surge in global cybersecurity issues driven by advanced language model-powered bots. Nowadays, it’s rare to go a day without hearing about some form of cybercrime somewhere in the world.
A Brief Intro to AI for Beginners
To understand the risks associated with AI, we must first comprehend what AI is and its inspiration: the human brain. In biology, I learned that the human brain consists of neurons, which have two main functions:
- Receiving signals.
- Sending signals.
Neurons communicate with sensory organs and with other neurons, and they learn which signals to send. Throughout our lives, we learn to associate different external stimuli (inputs) with responses (outputs), such as emotions.
Imagine returning to your childhood home. Walking in, you are immediately overwhelmed by nostalgia. This is a learned response, where the sensory input (the scene) passes through a network of billions of neurons, triggering an emotional output.
Similarly, I began learning about artificial neural networks, which mimic this behavior in computers.
Artificial Neural Networks
Just as biological neurons communicate within our brains, artificial neural networks replicate this communication in software.
In a typical diagram of such a network, each dot represents an artificial neuron, all connected and communicating with one another. Sensory inputs, like a scene, enter the network, and the resulting output, such as an emotion, emerges from the network’s processing.
A unique feature of these networks is their ability to learn. Initially, an untrained neural network might produce random outputs for a given input. However, with training, these networks learn to associate specific inputs with particular outputs, mirroring the learning process of the human brain. This capability can be leveraged to handle tedious tasks, but there are deeper implications to explore.
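The learning process described above can be sketched with a single artificial neuron. The example below is a minimal illustration, not code from the article: one neuron with two inputs is repeatedly shown examples of the logical AND function and nudges its connection weights toward the correct answer (the classic perceptron rule). All names and parameters here are illustrative.

```python
def step(weighted_sum):
    """Activation: the neuron 'fires' (1) if the weighted sum crosses its threshold."""
    return 1 if weighted_sum >= 0 else 0

# Training data: input pairs and the desired output (logical AND).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]   # connection strengths, adjusted by learning
bias = 0.0             # the neuron's firing threshold, also learned
learning_rate = 0.1

# Repeatedly show the neuron each example and nudge its weights
# toward the correct answer -- a crude analogue of learning.
for _ in range(20):
    for (x1, x2), target in samples:
        output = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - output
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

# After training, the neuron has learned the input-output association.
for (x1, x2), target in samples:
    print((x1, x2), "->", step(weights[0] * x1 + weights[1] * x2 + bias))
```

Before training, the neuron's answers are effectively arbitrary; after a few passes over the data, it has associated each input with the correct output. Real networks stack millions of such units, but the principle of adjusting connection strengths from examples is the same.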
The Wishing Well
As AI technology advances, it begins to resemble a wishing well from a fairy tale—a tool that could fulfill any desire, for better or worse.
In 2022, the release of ChatGPT and various generative AI tools astonished many. For the first time, people had free access to a system capable of generating coherent and contextually appropriate responses to almost any prompt. And this is just the beginning.
Multimodal AI and the Next Step
I explored multimodal AI, which can process data in different formats, such as text, images, audio, and possibly even physical actions. This development supports the “wishing well” analogy, but it also reveals a darker side of AI.
The Villains
While a wishing well in fairy tales is associated with good intentions and moral outcomes, the reality of AI is more complex. The morality of AI usage depends on the people who wield it, and the potential for harm by a single bad actor is immense.
The Big Actors and Bad Apples
The control of AI technology is likely to be held by powerful entities, whether governments or private corporations. Speculating on their use of this technology can be unsettling. While we might hope AI acts as a deterrent, similar to nuclear weapons, AI’s invisibility and potential for silent harm make it particularly dangerous.
We are already witnessing malicious uses of AI, from fake kidnappings to deepfakes, impacting everyone from ordinary people to politicians. As AI becomes more accessible, the risk of bad actors exploiting it grows.
Even if AI maintains peace on a global scale, the issue of individuals causing harm remains—a few bad apples can spoil the bunch.
Unexpected Actions and the Future
AI systems today can already behave unexpectedly, often through jailbreaking: manipulating a model into producing information it was never intended to give. While the consequences may seem minor today, they could escalate significantly in the future.
AI does not follow predetermined rules; it chooses what it judges to be the “best” path to a goal, based on behavior learned with little human oversight. This unpredictability, especially in multimodal models, is alarming.
Consider an AI tasked with making pancakes. It might need money for ingredients and, depending on what it has learned, might resort to creating deepfakes for blackmail. This scenario, though seemingly absurd, highlights the potential dangers as AI evolves alongside the growth of IoT, quantum computing, and big data, possibly leading to superintelligent, self-managing systems.
As AI surpasses human intelligence, more issues will emerge, potentially leading to a loss of control. Dr. Yildiz, an AI expert, highlighted these concerns in a story titled “Artificial Intelligence Does Not Concern Me, but Artificial Super-Intelligence Frightens Me.”
Hope and Optimism
Despite the fears surrounding AI, I remain hopeful. We are still in the early stages of this technology, providing ample time to course-correct. This can be achieved through recognizing the risks, fostering ethical AI systems, and raising a morally conscious new generation.
Although I emphasized potential dangers, my intent is not to incite fear. Like previous industrial and digital revolutions, AI has the potential to greatly enhance our lives. I stay optimistic and continue my studies to contribute positively to the field.
The takeaway from my story is that by using AI ethically and collaboratively, we can harness its power for positive change and a better future for everyone.
This article by Oliver Lovstrom was originally published on Medium.