The rapid evolution and widespread adoption of AI tools have policymakers racing to establish effective regulations and laws. The road to AI regulation will be rocky at times and will create new challenges for businesses. Law professor Michael Bennett explores the state of AI regulation in 2024.
The concept of artificial intelligence, or synthetic minds capable of thinking and reasoning like humans, has been around for centuries. Ancient cultures often expressed ideas and pursued goals similar to AI, and in the early 20th century, science fiction brought these notions to modern audiences. Works like The Wizard of Oz and films such as Metropolis resonated globally, laying the groundwork for contemporary AI discussions.
In 1956, John McCarthy and Marvin Minsky organized the Dartmouth Summer Research Project on Artificial Intelligence, coining the term “artificial intelligence” and setting the stage for serious efforts to realize this long-standing dream. Over the next five decades, AI’s development ebbed and flowed, but with the Digital Age’s exponential growth in computational power and plummeting costs, AI moved from speculative fiction to technological reality.
By the early 2000s, significant funding accelerated AI advancements, particularly in machine learning, leading to breakthroughs that have integrated AI into business operations.
Generative AI Tools Go Mainstream
In 2023, generative AI systems—tools based on machine learning algorithms that create new content like images, text, or audio—became a major topic of public discourse. Companies scrambled to understand and implement tools like OpenAI's ChatGPT, powered by GPT-4, and other large language models. The potential benefits became clear: increased efficiency, reduced human error, cost savings through automation, and the discovery of unexpected insights in vast data sets.
As AI’s capabilities and business benefits grew, so did its complexities and risks, prompting governments worldwide to find ways to protect the public without stifling innovation.
The Urgency of AI Regulation
Governments are now intensely focused on regulating AI, driven by perennial concerns like consumer protection, civil liberties, intellectual property rights, and fair business practices. However, the competition for AI supremacy is just as crucial. To attract talent and businesses, governments must create predictable regulatory environments where AI enterprises can flourish.
This places governments in a difficult position, balancing the need to protect citizens from AI’s potential downsides while fostering the development of these transformative technologies.
A Snapshot of AI Regulation
Lawmakers are under increasing pressure to regulate AI, with regulations and proposals spreading almost as quickly as AI applications.
U.S. Regulatory Protections and Policies
In the U.S., AI regulation is a priority at all government levels. Federal efforts focus on AI risk assessment, particularly regarding the creation and outcomes of algorithms, often described as “black box” systems that are challenging to regulate. The Algorithmic Accountability Act, currently under debate in Congress, could require entities using AI in critical decisions to assess its impacts before and after deployment. Other proposed laws, like the DEEP FAKES Accountability Act and the Digital Services Oversight and Safety Act, aim to increase transparency and accountability in AI-generated content.
Significantly, the Biden Administration’s Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence marks a new level of commitment to responsible AI deployment. This order sets out eight goals, including mitigating AI risks, protecting workers, enhancing consumer protections, and positioning the U.S. as a global leader in AI governance.
States and cities are also stepping up. California, Connecticut, Texas, and Illinois are developing their own AI regulations, while New York City leads with Local Law 144, focusing on AI in employment decisions.
Global AI Regulations
The European Union’s AI Act, effective as of August 1, 2024, applies to all 27 member countries and is expected to create significant regulatory reverberations globally. It mandates developers working with high-risk AI to test, document, and mitigate associated risks.
China’s Interim Measures for the Management of Generative Artificial Intelligence Services, enacted in August 2023, and Canada’s proposed Artificial Intelligence and Data Act are other examples of global regulatory efforts. Across the Americas and Asia, at least eight other countries are developing their own AI regulations.
Potential Impact of AI Regulation on Companies
For businesses operating across multiple jurisdictions, the proliferation of AI regulations poses significant challenges. Over the next 12 to 18 months, companies should prepare for an increase in regulatory proposals and enforced laws. This will likely lead to greater complexity in business operations and higher compliance costs, requiring new expertise and regular updates from AI law specialists.
One major challenge will be navigating the interaction between AI regulations and existing laws. For instance, in the U.S., the Federal Trade Commission has signaled its intent to regulate exaggerated AI claims, meaning companies must understand how old laws apply to new technologies.
Companies will also need to be cautious in choosing vendors for AI certification, ensuring they comply with new regulations and demonstrate trustworthiness to consumers and insurers.
The road ahead for AI regulation will be challenging, but companies that stay focused on their core missions, embrace AI responsibly, and seek high-quality legal advice will be well-positioned to navigate this rapidly evolving business landscape.