Apple Intelligence Falsely Attributes Fake News To BBC, Broadcaster Files Complaint
An AI-generated summary inaccurately reported that Luigi Mangione had died by suicide, even citing the BBC’s news website as the source of the fictitious article. In reality, Mangione is in US custody.
Just a week after its UK debut, Apple Intelligence has become embroiled in a hallucination controversy, prompting a formal complaint from the British Broadcasting Corporation (BBC). The trouble began when a fabricated, AI-generated news item was falsely attributed to the broadcaster. Apple Intelligence, which uses generative AI to summarise and consolidate notifications, webpages, and messages for users, appears to have mishandled the content in this instance.
According to the BBC, the AI-generated summary inaccurately reported that Luigi Mangione had died by suicide, even citing the BBC’s news website as the source of the fictitious article. In reality, Mangione, who was charged last week with the murder of UnitedHealthcare CEO Brian Thompson, is currently in US custody.
A BBC spokesperson said, “BBC News is the most trusted news media in the world. It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications.” The broadcaster added that it has contacted Apple to raise the concern and have the issue fixed.
Can't Trust AI?
The BBC also highlighted another issue with Apple Intelligence’s summarisation feature, reporting that it had misrepresented content from articles published by The New York Times. In one instance, an AI-generated summary inaccurately stated, “Netanyahu arrested,” even though the Israeli Prime Minister has not been arrested. In fact, on November 21, 2024, the International Criminal Court (ICC) issued an arrest warrant for him and two others.
Adding to the concerns, a recent study conducted by Columbia Journalism School revealed multiple cases in which publishers’ content was misattributed or taken out of context. Researchers asked ChatGPT to identify the sources of block quotes from 200 news articles published by outlets including The New York Times, The Washington Post, and the Financial Times, and found frequent instances of misrepresentation.