AI Act: a first text that fails to address the information space

As the European Parliament prepares to vote on the proposed Artificial Intelligence Act (AI Act), Reporters Without Borders (RSF) regrets that this foundational law fails to protect the right to information by not classifying algorithms involved in the production and distribution of information as “high risk” systems requiring stricter regulation before being placed on the market.

The AI Act constitutes a significant advance by including such measures as the mandatory labelling of AI-generated content, for which RSF has advocated, and transparency about the use of copyright-protected content in language model training databases. But the bill on which MEPs will vote on 13 March is incomplete. Its measures are not enough to protect the right to information, and the absence from the final version of provisions specifically addressing the information space is a major shortcoming.

AI systems that produce information should be subject to the same safeguards as systems used in such crucial areas as education, justice and elections, by being classified as “high risk.” Systems in this category are subject to close scrutiny before they can be placed on the market.

“Access to information is one of the pillars of democracy, and failing to regard systems affecting this right as ‘high risk’ is therefore incomprehensible. This lack of protection is all the more worrying given that more than half of the world’s population relies on artificial intelligence algorithms to obtain news and information via social media. The AI Act is an ideal legislative mechanism for requiring those who build these systems to be transparent and to put these AIs at the service of the right to information. The foundations laid by the AI Act, on which MEPs are voting this month, will have to be broadened and consolidated to fill this gap.”

Vincent Berthier

Head of RSF’s Tech Desk 

The AI Act classifies systems according to the risks they pose to democracy. The two strictest categories are “unacceptable risk,” which designates systems that are banned outright in the EU, and “high risk,” which designates systems allowed onto the EU market only if they meet strict requirements regarding data quality, risk assessment, transparency and human oversight.

The content curation and recommendation algorithms used on major online information platforms should be subject to such requirements, as should chatbots, search engines and content production tools such as ChatGPT.

The Forum on Information and Democracy, of which RSF is a co-founder, has produced a comprehensive report giving governments and AI designers specific recommendations on the democratic safeguards needed to protect the online information space. The report recommends, for example, that the content of training databases be made accessible through user-friendly interfaces, such as search engines. It also calls on governments to support the development of databases for languages and cultures that are poorly represented in AI systems, in particular by funding media that produce content in these languages.
