About a month ago, Jon Stewart did a segment on AI causing people to lose their jobs. He spoke against it. Well, his words were against it, but deep down he's for it, and so are you, whether you realize it or not. AI's impact on the workforce is real, but is it good or bad?
The fact that Jon Stewart can go on TV to discuss cutting-edge technology like large language models is itself a product of earlier technology displacing jobs. Lots of jobs. What probably felt like most jobs. Remember, for most of human history, 80–90% of people were farmers. The few who weren't practiced essential trades like blacksmithing or tailoring. They didn't have TV personalities, TV executives, or even TVs.
Had you been born hundreds of years ago, chances are you would have been a farmer, too. You might have died from an infection. But as scientific and technological progress reduced the need for farmers, it also gave us doctors and scientists who discovered, manufactured, and distributed cures for diseases like the plague. Innovation begets innovation. Generative AI is just the current state of the art, leading the next cycle of change.
The Core Issue
This doesn't mean everything will go smoothly. While many tech CEOs tout the positive impacts of AI, these benefits will take time. Consider the automobile: Carl Benz patented the motorized vehicle in 1886. By 1900, there were only about 8,000 cars in the US. By 1910, there were 500,000. That's roughly 25 years, and even then, only about 0.5% of people in the US had a car. The first stop sign wasn't used until 1915, giving society time to establish formal regulations and norms as the technology spread.
Lessons from History
Social media, however, saw negligible usage until 2008, when Facebook began to grow rapidly. In roughly four years, it soared from around 100 million users to a billion. Social media has been linked to cyberbullying, self-esteem issues, depression, and misinformation. The risks became apparent only after widespread adoption, unlike with cars, where risks were identified early and mitigated with regulations like stop signs and driver's licenses.
Nuclear weapons, developed in 1945, also illustrate this point. Initially, only a few countries possessed them, understanding the catastrophic risks and exercising restraint. However, if a terrorist cell obtained such weapons, the consequences could be dire. Similarly, if AI tools are misused, the outcomes could be harmful.
Just this morning, a news channel covered an AI bot making robocalls. Can you imagine the increase in telemarketing calls that could create? Especially in an election year?
AI and Its Rapid Adoption
AI isn’t a nuclear weapon, but it is a powerful tool that can do harm. Unlike past technologies that took years or decades to adopt, AI adoption is happening much faster. We lack comprehensive safety warnings for AI because we don’t fully understand it yet. If in 1900, 50% of Americans had suddenly gained access to cars without regulations, the result would have been chaos. Similarly, rapid AI adoption without understanding its risks can lead to unintended consequences.
The adoption rate, impact radius (the scope of influence), and learning curve (how quickly we understand its effects) are crucial. If the adoption rate surpasses our ability to understand and manage its impact, we face excessive risk.
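The relationship between these three variables can be made concrete with a toy model. This is a hypothetical sketch for illustration only, not a validated risk formula; the function name, inputs, and the rough numbers plugged in for cars and AI are all assumptions.

```python
def relative_risk(adoption_rate: float, impact_radius: float,
                  learning_rate: float) -> float:
    """Toy score for how far a technology's spread outpaces our understanding.

    adoption_rate  -- fraction of the population adopting per year (assumed)
    impact_radius  -- scope of influence, scaled 0 to 1 (assumed)
    learning_rate  -- fraction of the technology's effects we come to
                      understand per year (assumed)

    Risk grows with adoption and reach, and shrinks as learning keeps pace.
    """
    return adoption_rate * impact_radius / learning_rate


# Cars, roughly: ~0.5% adoption spread over ~25 years, with effects
# learned slowly but steadily (illustrative numbers).
cars = relative_risk(adoption_rate=0.005 / 25, impact_radius=0.6,
                     learning_rate=0.04)

# Generative AI, roughly: mass adoption within a couple of years, broad
# reach, effects still poorly understood (illustrative numbers).
ai = relative_risk(adoption_rate=0.5 / 2, impact_radius=0.9,
                   learning_rate=0.02)

print(f"cars: {cars:.4f}, AI: {ai:.2f}")
# The AI score comes out orders of magnitude higher: same formula,
# but adoption races far ahead of the learning curve.
```

The exact numbers don't matter; the point is the ratio. When the numerator (how fast and how broadly a technology spreads) outruns the denominator (how fast we learn what it does), the score blows up, which is the "excessive risk" condition described above.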
Proceeding with Caution
Innovation should not be stifled, but it must be approached with caution. Consider historical examples like X-rays, which were once used in shoe-store fitting machines without any understanding of their harmful effects, or the Industrial Revolution, which caused significant environmental degradation. Early regulation could have mitigated many negative impacts.
AI is transformative, but until we fully understand its risks, we must proceed cautiously. The potential for harm isn’t a reason to avoid it altogether. Like cars, which we accept despite their risks because we understand and manage them, we need to learn about AI’s risks. However, we don’t need to rush into widespread adoption without safeguards. It’s easier to loosen restrictions later than to impose them after damage has been done.
Let’s innovate, but with foresight. Regulation doesn’t kill innovation; it can inspire it. We should learn from the past and ensure AI development is responsible and measured. We study history to avoid repeating mistakes—let’s apply that wisdom to AI.
Content updated July 2024.