Apple Developing Update After AI System Generates Inaccurate News Summaries
Apple is working on a software update to address inaccuracies generated by its Apple Intelligence system after multiple instances of false news summaries were reported.
The BBC first alerted Apple in mid-December to significant errors in the system, including a fabricated summary that falsely attributed a statement to BBC News. The summary claimed that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself, an assertion that was entirely false.
Other publishers, such as ProPublica, also raised concerns about Apple Intelligence producing misleading summaries.
While Apple did not respond immediately to the BBC’s December report, it issued a statement after pressure mounted from groups such as the National Union of Journalists and Reporters Without Borders, both of which called for the feature’s removal. Apple assured stakeholders that it is working to refine the technology.
A Widespread AI Issue: Hallucinations
Apple joins other AI vendors struggling with generative AI hallucinations: instances in which a model produces false or misleading information.
In October 2024, Perplexity AI faced a lawsuit from Dow Jones & Co. and the New York Post over fabricated news content attributed to their publications. Google likewise had to rework its AI Overviews search summaries after they served users inaccurate answers.
On January 16, Apple temporarily disabled AI-generated summaries for news apps on iPhone, iPad, and Mac devices.
Why Hallucination Is Hard to Fix
Chirag Shah, a professor of information science at the University of Washington, emphasized that hallucination is inherent to how large language models (LLMs) work.
“The nature of AI models is to generate, synthesize, and summarize, which makes them prone to mistakes,” Shah explained. “This isn’t something you can debug easily—it’s intrinsic to how LLMs operate.”
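Shah’s point can be made concrete with a toy model. The sketch below is an illustration only: the bigram table, words, and probabilities are all invented and have no connection to Apple’s actual system. It generates a “summary” one word at a time by sampling from learned continuation probabilities, and nothing in the loop consults a source article, which is why a fluent but false sentence is a normal output rather than a bug that can simply be patched.

```python
import random

# Invented next-word probabilities for a toy bigram "summarizer".
# Both continuations of "shot" are grammatical; only one might be true.
NEXT_WORD = {
    "<start>":     {"The": 1.0},
    "The":         {"suspect": 1.0},
    "suspect":     {"surrendered": 0.5, "shot": 0.5},
    "shot":        {"himself": 0.7, "a": 0.3},
    "a":           {"video": 1.0},
    "video":       {"<end>": 1.0},
    "surrendered": {"<end>": 1.0},
    "himself":     {"<end>": 1.0},
}

def sample_next(word: str) -> str:
    """Pick a continuation in proportion to its learned probability."""
    options = NEXT_WORD[word]
    return random.choices(list(options), weights=list(options.values()))[0]

def generate_summary() -> str:
    """Emit words until the end token is sampled. No step checks the
    output against a source article, so "The suspect shot himself" and
    "The suspect shot a video" are equally legitimate outputs."""
    words, current = [], "<start>"
    while (current := sample_next(current)) != "<end>":
        words.append(current)
    return " ".join(words)

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        print(generate_summary())  # fluent every time, true only by chance
```

There is no faulty line of code to find here, which is what Shah means by saying hallucination cannot simply be debugged; mitigations such as grounding the output in retrieved source text can lower the error rate but do not eliminate it.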
While Apple plans to introduce an update that clearly labels summaries as AI-generated, Shah believes this measure falls short. “Most people don’t understand how these headlines or summaries are created. The responsible approach is to pause the technology until it’s better understood and mitigation strategies are in place,” he said.
Legal and Brand Implications for Apple
The hallucinated summaries pose significant reputational and legal risks for Apple, according to Michael Bennett, an AI adviser at Northeastern University.
Before launching Apple Intelligence, the company was seen as lagging in the AI race, and the system’s release was intended to position it as a leader. Instead, the inaccuracies have damaged its credibility.
“This type of hallucinated summarization is both an embarrassment and a serious legal liability,” Bennett said. “These errors could form the basis for defamation claims, as Apple Intelligence misattributes false information to reputable news sources.”
Bennett criticized Apple’s seemingly minimal response. “It’s surprising how casual Apple’s reaction has been. This is a major issue for their brand and could expose them to significant legal consequences,” he added.
Opportunity for Publishers
The incident highlights the need for publishers to protect their interests when partnering with AI vendors like Apple and Google.
Publishers should demand stronger safeguards to prevent false attributions and negotiate new contractual clauses to minimize brand risk.
“This is an opportunity for publishers to lead the charge, pushing AI companies to refine their models or stop attributing false summaries to news sources,” Bennett said. He suggested legal action as a potential recourse if vendors fail to address these issues.
Potential Regulatory Action
The Federal Trade Commission (FTC) may also scrutinize the issue: consumers who paid for AI-capable products such as the iPhone could argue they are not receiving the service they were promised.
However, Bennett believes Apple will likely act to resolve the problem before regulatory involvement becomes necessary.