RSF calls on the European Union to bolster safeguards for reliable information in the AI Act’s Code of Practice despite pressure from industry giants

The European AI Office, which is responsible for implementing the European AI Act, is being pressured by tech companies to weaken the code of practice that helps regulate general-purpose artificial intelligence (AI). Reporters Without Borders (RSF) urges the European AI Office to stand up to these corporations and strengthen the text by addressing its shortcomings in protecting journalism and the right to reliable information.
When asked to weigh in on the initial version of the General-Purpose AI Code of Practice in late November 2024, RSF voiced concerns about the text’s lack of concrete measures to protect journalism and reliable information. Under the European AI Act, the code will serve as a self-regulation tool for AI providers. In the second round of negotiations, the European AI Office submitted a draft of the code with even weaker protections for journalism. Now, the European Commission has delayed the third version of the code as tech companies seek to further dilute the text and threaten not to sign it. The battle over the AI Code of Practice is a crucial regulatory issue, and the European Commission must stand firm.
“Every week, the threats to reliable information posed by unregulated AI become more evident. In Europe, the German parliamentary elections were marked by disinformation campaigns that used generative AI to create fake news sites. Instead of taking note and stepping up its efforts, the European AI Office stepped back, removing the sole mention of AI’s impact on the media that was present in the initial version of the code. We call on the AI Office to reverse this decision and reiterate our demand: classify the infringement of European citizens’ right to reliable information as a ‘systemic risk,’ which would oblige AI providers to exercise the utmost vigilance.”
The code’s classification of “systemic risks,” which outlines and categorises the risks its signatories must assess and mitigate, makes no mention of the media, journalism, or reliable information. The list cites only one related risk: “the facilitation of large-scale manipulation.” In its current form, the code narrowly targets intentional, large-scale manoeuvres and excludes from its scope the many real, documented threats already afflicting journalism and the right to information.
What’s more, the latest version of the code overlooks an extraordinary range of dangers: deepfakes that damage journalists’ reputations, AI-generated fake news sites, chatbots that disseminate propaganda, the systematic production of inaccurate information by large language models, and accusations that traditional news outlets are behind the rise in false information, all of which contribute to the mass erosion of trust in the media. This gap in the text is broad and dangerous, and it must be urgently addressed.