BBC accuses Perplexity AI of reproducing content without permission

The BBC has issued a legal warning to US-based artificial intelligence firm Perplexity, accusing it of reproducing BBC content “verbatim” without permission. In a letter addressed to Perplexity CEO Aravind Srinivas, the broadcaster demanded that the company cease the use of its material, delete any existing content, and offer financial compensation for what has already been used.
This marks the first time the BBC has taken such formal action against an AI company. The corporation described Perplexity’s actions as a breach of copyright under UK law and a violation of the BBC’s terms of use.
“This constitutes copyright infringement in the UK and breach of the BBC’s terms of use,” the letter stated.
The move follows research the BBC published earlier this year, which found that four popular AI chatbots, including Perplexity AI, were inaccurately summarising news stories, among them BBC articles. The broadcaster argued that such misrepresentation falls short of its editorial standards, which emphasise accuracy and impartiality.
“It is therefore highly damaging to the BBC, injuring the BBC’s reputation with audiences – including UK licence fee payers who fund the BBC – and undermining their trust in the BBC,” the letter added.
Perplexity responded with a statement, saying, “The BBC's claims are just one more part of the overwhelming evidence that the BBC will do anything to preserve Google's illegal monopoly.” The company did not clarify the connection it sees between the BBC’s actions and Google, nor did it provide further comment.
The dispute centres on the broader and increasingly contentious issue of how AI models obtain their data. Many AI platforms, including chatbots and image generators, rely on a process known as web scraping, which uses automated bots to extract data from websites. While websites can use the “robots.txt” file to signal which parts of their content should not be accessed by such tools, compliance with this mechanism is voluntary and not always observed.
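The voluntary nature of the robots.txt mechanism can be illustrated with a minimal sketch using Python's standard `urllib.robotparser` module. The crawler name `ExampleAICrawler` and the robots.txt content below are hypothetical; the point is that the file only expresses a publisher's wishes, and nothing forces a bot to consult it before fetching pages.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a publisher might serve: it disallows a
# named AI crawler from the whole site while permitting other agents.
robots_txt = """\
User-agent: ExampleAICrawler
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler checks can_fetch() before requesting a URL;
# a non-compliant one can simply skip this step, since robots.txt
# is a convention, not an enforcement mechanism.
print(parser.can_fetch("ExampleAICrawler", "https://example.com/news/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/news/article"))      # True
```

In practice a site would be read with `parser.set_url(...)` and `parser.read()`, but the parsing and matching logic shown here is the same.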
The BBC stated in its letter that, although its robots.txt file blocks two of Perplexity’s crawlers, the company had nonetheless ignored those directives. Mr Srinivas previously denied this in an interview with Fast Company, claiming that the firm’s crawlers respected such restrictions.
Perplexity has also said it does not use scraped website content to train foundational AI models, arguing instead that it synthesises real-time information from trusted sources to generate its chatbot responses. The platform, which describes itself as an “answer engine”, has gained popularity for delivering concise, web-based summaries in response to user queries.
The Professional Publishers Association (PPA), representing more than 300 media brands, voiced its support for the BBC, stating it was “deeply concerned that AI platforms are currently failing to uphold UK copyright law.” The PPA warned that the widespread practice of scraping publisher content without consent “directly threatens the UK’s £4.4 billion publishing industry and the 55,000 people it employs.”
The row highlights growing friction between AI developers and content producers over copyright, content usage, and the economic value of journalistic work. In January, Apple suspended an AI feature that generated misleading headlines from BBC News notifications after the broadcaster raised complaints.
The BBC’s action adds to mounting scrutiny of how AI tools interact with copyrighted material, as calls increase for clearer regulation and accountability in the deployment of generative AI technologies.