Background – Who Gets to Call AI Ethical?
On March 13, 2024, the European Parliament approved the EU AI Act, a move that some argue has weakened the bloc’s position in the global AI race. The legislation aims to harmonize the development and deployment of AI within the European Union (EU), but it reads as more restrictive than progressive. Rather than fostering innovation, the act focuses on governance, which may not be sufficient for maintaining a competitive edge.
The EU AI Act embodies the EU’s stance on Ethical AI, a concept that has been met with skepticism. Critics argue that Ethical AI is, at best, widely misinterpreted and, at worst, a monetizable construct. In contrast, Responsible AI, which emphasizes ensuring that products perform as intended without causing harm, is seen as the more practical approach. It relies on methodologies such as red-teaming and penetration testing to stress-test products.
This critique of Ethical AI forms the basis of this insight, which draws on Eric Sandosham’s article on the subject.
The EU AI Act
To understand the implications of the EU AI Act, it is worth summarizing its key components before addressing the broader issues with the concept of Ethical AI.
The EU defines AI as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment. It infers from the input it receives to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
Based on this definition, the EU AI Act can be summarized into several key points:
- Risk Classification: AI use cases that affect people are classified by risk tier as prohibited (unacceptable risk), high-risk, or minimal-risk, with transparency obligations for a limited-risk tier in between (see the sketch after this list).
- General-Purpose AI Models: Solutions such as OpenAI’s ChatGPT and Google’s Gemini are regulated separately, with additional obligations for models deemed to pose systemic risk.
- Prohibited AI Solutions: These include AI for social scoring, emotion recognition in the workplace, inference of sensitive or discriminatory attributes, and predictive policing.
- High-Risk AI Solutions: These involve AI used in contexts such as vehicular safety, medical devices, critical infrastructure, recruitment, biometric surveillance, financial access, and healthcare access.
- Minimal-Risk AI Solutions: Examples include chatbots, AI-generated content, spam filtering, product recommendation systems, and personal/administrative task automation.
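To make the tiering concrete, here is a minimal Python sketch of how a team might triage use cases against the Act’s categories. The tier lookup and the conservative default are illustrative assumptions for this post, not the Act’s legal criteria.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # unacceptable risk: banned outright
    HIGH = "high"              # permitted, subject to conformity obligations
    MINIMAL = "minimal"        # largely unregulated

# Illustrative lookup only: the Act assigns tiers by legal criteria,
# not by a table of use-case names.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "workplace_emotion_recognition": RiskTier.PROHIBITED,
    "predictive_policing": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "medical_device_control": RiskTier.HIGH,
    "spam_filtering": RiskTier.MINIMAL,
    "product_recommendation": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative tier for a named use case.

    Unknown cases default to high-risk pending review -- a
    conservative assumption made for this sketch.
    """
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for case in ("social_scoring", "recruitment_screening", "spam_filtering"):
    print(f"{case}: {classify(case).value}")
```

In practice, this kind of triage is only a starting point for legal review, not a substitute for it.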
Fear of AI
The EU AI Act appears to be driven by fears of AI being weaponized or becoming uncontrollable. It is unclear whether the act aims to prevent job disruption or to protect against misuse. Yet AI largely automates and enhances tasks that humans already perform, such as social scoring, predictive policing, and background checks, and it performs them more consistently, more reliably, and faster than human efforts.
Existing regulations already cover vehicular safety, healthcare safety, and infrastructure safety, raising the question of why AI-specific regulations are necessary. AI solutions automate decision-making, but the parameters and outcomes are still human-designed. The fear of AI becoming uncontrollable lacks evidence, and the path to artificial general intelligence (AGI) remains distant.
Ethical AI as a Red Herring
In AI research and development, the terms Ethical AI and Responsible AI are often used interchangeably, but they are distinct. Ethics refers to systematized rules of right and wrong, often with legal implications; morality is informed by cultural and religious beliefs; responsibility is about accountability and obligation. These constructs evolve continuously, and the ethics and rights surrounding technology and AI must evolve with them.
Promoting AI development and broad adoption can improve governance organically through market forces, transparency, and competition: profit-driven organizations are incentivized to enhance AI’s positive utility. The focus should therefore be on defining responsible use of AI, especially for non-profit and government agencies, where such market pressures are weaker.
Towards Responsible AI
Responsible AI emphasizes accountability and obligation. It involves defining safeguards against misuse rather than prohibiting use cases out of fear. This aligns with responsible product development, where existing legal frameworks already ensure that products work as intended and that misuse risks are minimized. AI can improve processes such as recruitment by reducing error rates compared with human decision-making.
AI’s role is to make distinctions based on data attributes, striving for accuracy. The real concern is erroneous discrimination, which can be mitigated through rigorous bias testing as part of product quality assurance.
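As an example of what bias testing as quality assurance can look like, below is a minimal sketch of a disparate-impact check on hypothetical screening outcomes. The groups, outcomes, and the four-fifths threshold are illustrative assumptions, not requirements drawn from the Act.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group selection rates from (group, selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / tot for g, (sel, tot) in counts.items()}

def disparate_impact_ratio(decisions):
    """Lowest group selection rate divided by the highest; values
    below 0.8 are commonly flagged (the 'four-fifths rule')."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (applicant group, passed screen)
outcomes = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", True), ("B", False), ("B", False),
]

ratio = disparate_impact_ratio(outcomes)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag: potential adverse impact -- investigate before release")
else:
    print("pass: selection rates within the four-fifths threshold")
```

In a real QA pipeline, a check like this would run over held-out evaluation data for each protected attribute, alongside other fairness metrics, before release.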
Conclusion
The EU AI Act is unlikely to become a global standard. It may slow AI research, development, and implementation within the EU, hindering adoption in the region and causing long-term harm. Humanity has an obligation to push the boundaries of AI innovation. For a species facing eventual extinction from any number of threats, AI could represent a means of survival and advancement beyond our biological limitations.