Grammarly’s AI Regulatory Master Class: Key Insights on Navigating AI Compliance

On August 27, 2024, Grammarly hosted an AI Regulatory Master Class webinar featuring Scout Moran, Senior Product Counsel, and Alan Luk, Head of Governance, Risk, and Compliance (GRC). The session provided a comprehensive overview of current and upcoming AI regulations affecting organizations’ AI strategies, along with guidance on evaluating AI solution providers, including those offering generative AI.

While the webinar avoided deep legal analysis and did not serve as legal advice, Moran and Luk spotlighted key regulations emerging from both the U.S. and European Union (EU), highlighting the rapid response of regulatory bodies to AI’s growth.

Overview of AI Regulations

The AI regulatory landscape is changing quickly. A May 2024 report from law firm Davis & Gilbert noted that nearly 200 AI-related bills had been proposed across U.S. states. Grammarly’s presentation emphasized the need for organizations to stay current, as both U.S. and EU regulations are shaping the future of AI governance.

The EU AI Act: A New Regulatory Framework

The EU AI Act, which entered into force on August 1, 2024, applies to AI system providers, importers, distributors, and others connected to the EU market, regardless of where they are based. As Moran pointed out, the Act is designed to ensure AI systems are deployed safely, and unsafe systems can be removed from the market. It establishes a regulatory baseline that individual EU member states can strengthen with more stringent measures.

However, the Act does not fully define “safety.” Legal experts Hadrien Pouget and Ranj Zuhdi noted that while safety requirements are crucial to the Act, they are currently broad, allowing room for further development of standards.

The Act prohibits certain AI practices outright, such as manipulative systems, those exploiting personal vulnerabilities, and AI used to assess or predict criminal risk. AI systems are categorized into four risk levels: unacceptable, high, limited, and minimal. High-risk systems, such as those used in critical infrastructure or public services, face stricter regulation, while minimal-risk systems like spam filters carry few requirements. The Act’s obligations phase in beginning in 2025, with most provisions fully applicable by August 2026.
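To make the tiering concrete, here is a minimal sketch of how the four-level scheme might be represented in code. The tier summaries compress the examples above and are illustrative only, not a statement of the Act’s actual obligations:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative sketch of the EU AI Act's four risk tiers (not legal guidance)."""
    UNACCEPTABLE = "prohibited outright (e.g., manipulative systems, criminal-risk prediction)"
    HIGH = "strict obligations (e.g., critical infrastructure, public services)"
    LIMITED = "lighter transparency duties (e.g., disclosing that content is AI-generated)"
    MINIMAL = "few added requirements (e.g., spam filters)"

# Print a one-line summary of the regulatory burden at each tier.
for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```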

U.S. AI Regulations

Unlike the EU, the U.S. has focused more on national security than on consumer safety in its approach to AI regulation, as reflected in the October 2023 Executive Order on Safe, Secure, and Trustworthy AI. At the state level, Moran highlighted trends such as requiring clear disclosure when users interact with AI and giving individuals the right to opt out of having their data used to train AI models. States like California and Utah are leading the way with legislation (SB 1047 and SB 149, respectively) addressing accountability and disclosure in AI use.

Key Considerations When Selecting AI Vendors

Moran stressed the importance of thoroughly vetting AI vendors. Organizations should confirm that vendors meet cybersecurity standards such as SOC 2 and clearly define how customer data will be used, particularly whether it will be used to train large language models (LLMs). “Eyes off” agreements, which bar vendor employees from accessing customer data, are also worth considering.
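For teams turning this advice into a repeatable process, the screening questions above might be captured in a simple checklist structure. This is a hypothetical sketch: the field names and pass criteria are assumptions drawn from the webinar’s points, not a published standard:

```python
from dataclasses import dataclass, field

@dataclass
class VendorAssessment:
    """Hypothetical due-diligence checklist for an AI vendor."""
    vendor_name: str
    soc2_attested: bool = False                # holds a current SOC 2 report
    data_use_documented: bool = False          # clearly defines how customer data is used
    trains_llms_on_customer_data: bool = True  # assume the worst until verified
    eyes_off_agreement: bool = False           # vendor staff cannot view customer data
    open_questions: list[str] = field(default_factory=list)

    def passes_screen(self) -> bool:
        """Minimal bar implied by the guidance above."""
        return (self.soc2_attested
                and self.data_use_documented
                and not self.trains_llms_on_customer_data
                and self.eyes_off_agreement)

candidate = VendorAssessment("ExampleAI", soc2_attested=True, data_use_documented=True)
candidate.open_questions.append("Is customer data used to train LLMs?")
print(candidate.passes_screen())  # False until every safeguard is confirmed
```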

Martha Buyer, a frequent contributor to No Jitter, emphasized verifying the originality of AI-generated content from providers like Grammarly or Microsoft Copilot, urging organizations to confirm the ownership and authenticity of AI-assisted outputs.

The Importance of Strong Third-Party Agreements

Luk highlighted Grammarly’s commitment to data privacy, noting that the company neither sells customer data nor uses it to train models. Additionally, Grammarly enforces agreements preventing its third-party LLM providers from doing so. These contractual protections are crucial for safeguarding customer data.

Organizations should also ensure third-party vendors adhere to strict guidelines, including securing customer data, encrypting it, and preventing unauthorized access. Vendors should maintain updated security certifications and manage risks like bias, which, while not entirely avoidable, must be actively addressed.

Staying Ahead in a Changing Regulatory Environment

Both Moran and Luk stressed the importance of ongoing monitoring. Organizations need to regularly reassess whether vendors are complying with agreed data-sharing policies and meeting evolving regulatory standards. As AI technology and regulation continue to evolve, staying informed and agile will be critical for compliance and risk mitigation.
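As a final illustration, reassessment could be put on a fixed cadence rather than handled ad hoc. The 90-day interval and vendor records below are hypothetical; any real schedule would follow the organization’s own policy:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # assumed cadence; set per internal policy

# Hypothetical record of when each vendor was last reviewed for compliance.
last_reviewed = {
    "ExampleAI": date(2024, 6, 1),
    "OtherVendor": date(2024, 8, 15),
}

def due_for_review(reviewed: date, today: date) -> bool:
    """Flag vendors whose last review is older than the chosen interval."""
    return today - reviewed >= REVIEW_INTERVAL

today = date(2024, 9, 1)
for name, reviewed in last_reviewed.items():
    if due_for_review(reviewed, today):
        print(f"{name}: schedule a compliance reassessment")
```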

In conclusion, organizations adopting AI-powered solutions must navigate a dynamic regulatory environment. As AI advances and regulations become more comprehensive, remaining vigilant and asking the right questions will be key to ensuring compliance and reducing risks.
