Apple’s AI summarizer feature, which condenses news into digestible headline notifications, has sparked controversy after generating false information. Reporters Without Borders (RSF), a global press freedom organization, has called for the feature’s removal, citing risks to media credibility and public trust.
The issue came to light when Apple’s AI tool incorrectly summarized a BBC report, sending users a notification claiming that Luigi Mangione, a suspect in a high-profile murder case, had died by suicide. No such claim appeared in the BBC’s reporting. The BBC raised its concerns with Apple and demanded action, but the tech giant has yet to respond.
Vincent Berthier, head of RSF’s technology and journalism desk, criticized the tool, stating, “AI systems operate on probabilities, not facts. Such errors harm media credibility and misinform the public.” RSF warned that the immature state of AI makes it unreliable for generating accurate news summaries and urged Apple to act responsibly.
The tool, introduced in June, allows users to receive condensed news summaries in various formats, such as paragraphs or bullet points. Its limitations, however, have been evident since its public rollout in October. In another mishap, the AI summarizer inaccurately reported that Israeli Prime Minister Benjamin Netanyahu had been arrested. In fact, the news concerned an arrest warrant issued by the International Criminal Court; the notification oversimplified this distinction, leading to widespread confusion.
Critics argue that the AI summaries pose significant risks to both media outlets and public trust. Because the summaries appear under the publisher’s banner, audiences may wrongly attribute inaccuracies to the news outlet itself, jeopardizing both the outlet’s reputation and the integrity of journalism.
AI tools have been a contentious topic in the news industry. While some outlets cautiously integrate AI for content creation, others fear the implications of tools like Apple’s, which operate independently of editorial oversight. The issue is compounded by allegations that many AI systems are trained on copyrighted materials without permission. Major news organizations, including The New York Times, have taken legal action, while others, like Axel Springer, have opted for licensing deals with AI developers.
RSF emphasized the broader dangers posed by premature AI adoption, highlighting the need for stringent safeguards before deploying such tools. Meanwhile, the BBC reaffirmed its commitment to providing trustworthy information and urged Apple to address the flaws in its AI system.
Apple has yet to release an official statement regarding the controversy, leaving questions about the future of its AI summarizer unresolved. The situation underscores the growing tension between technological innovation and ethical journalism, as media outlets navigate the challenges of AI integration.