Four recommendations for combating the threat to the right to information posed by deepfakes

Deepfakes – convincing but fake digital representations of persons created by means of artificial intelligence (AI) – constitute a direct threat not only to the reputation of the people targeted but also to the public’s right to reliable and trustworthy information. Reporters Without Borders (RSF) recommends four ways to combat dangerous deepfakes.

Deepfakes give the illusion of content taken from the real world, such as photographs, video or audio recordings. When created to convincingly imitate a person without that person’s consent and to mislead the public about factual reality, deepfakes constitute a direct threat to the right to reliable news and information and must be considered dangerous.

In response to the spread of dangerous deepfakes for disinformation purposes, RSF calls for the right to information to be protected by means of ad hoc legislative and regulatory measures, by deploying appropriate technical standards in the media and on digital platforms, and by updating professional journalistic standards.

RSF’s four key recommendations for combating the spread of deepfakes that endanger the right to information:

  • RSF recommends establishing criminal penalties for any person or organisation that creates or disseminates deepfakes with the intent or effect of deceiving or manipulating others. Affecting the outcome of an election would be regarded as an aggravating circumstance, while provision could be made for good-faith exceptions.
  • RSF recommends that individuals and organisations that develop or provide access to generative AI systems be required to implement all necessary technical precautions to prevent the direct or indirect use of their systems to create deepfakes that endanger the right to information. They must be held legally responsible when they fail to comply with this requirement.
  • RSF encourages the development and rapid adoption, by media and digital platforms, of technical standards guaranteeing the origin and traceability of audio-visual content, such as those developed by the Coalition for Content Provenance and Authenticity (C2PA); a brief illustration of such provenance checks follows this list.
  • RSF encourages journalists and media outlets to maintain a clear and transparent distinction between authentic and synthetic audio-visual content. The media must favour the use of authentic footage and recordings to depict actual events, as urged by the Paris Charter on AI and Journalism.
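To make the third recommendation concrete, here is a minimal sketch of how a newsroom might inspect a file’s C2PA content credentials. It assumes the open-source c2pa-python bindings (pip install c2pa-python); the Reader.from_file call follows that project’s published examples but may vary between versions, and the file name is purely illustrative.

```python
# Minimal sketch: checking C2PA content credentials on a media file.
# Assumes the open-source c2pa-python bindings (pip install c2pa-python);
# Reader.from_file and reader.json() follow that project's documented
# examples but may differ between versions.
from c2pa import Reader


def describe_provenance(path: str) -> None:
    """Print the C2PA manifest attached to a media file, if any."""
    try:
        reader = Reader.from_file(path)
        # The manifest records who signed the content and which tools
        # (including generative AI systems) produced or edited it.
        print(reader.json())
    except Exception as err:
        # No manifest was found, or the credentials failed validation.
        print(f"No verifiable provenance for {path}: {err}")


describe_provenance("photo.jpg")  # illustrative file name
```

Note that the absence of a manifest does not in itself prove that content is synthetic; provenance standards complement, rather than replace, the legal measures described above.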

More and more public figures, including journalists, have had their voices or physical appearance convincingly simulated by deepfakes in recent months. As a result of rapid advances in AI, deepfakes simulating photographs or sound recordings have become indistinguishable from authentic content, while tools for creating video deepfakes are on the verge of becoming publicly available.

International initiatives for better regulation

RSF joins the hundreds of prominent figures from the worlds of technology and culture who are demanding regulation to effectively combat the creation and distribution of dangerous deepfakes. RSF’s secretary-general, Christophe Deloire, signed an open letter published on 21 February requesting that criminal penalties be established for anyone who knowingly creates or knowingly facilitates the spread of harmful deepfakes, and that AI software providers be required to implement measures to prevent the creation of such content. The signatories included Frances Haugen, the US whistleblower behind the “Facebook Files”; Marietje Schaake, the Dutch expert in high-tech regulation; and Stuart Russell and Gary Marcus, two US academics who specialise in AI.

Many experts and civil society organisations, including RSF, have already been sounding the alarm about the dangers that deepfakes pose to the integrity of information and democracy. Published last November, the Paris Charter on AI and Journalism, which was initiated by RSF and is supported by 16 international organisations, calls on media and journalists to “refrain from creating or using AI-generated content mimicking real-world captures and recordings or realistically impersonating actual individuals.”

At the European level, the Digital Services Act (DSA) requires very large platforms and search engines to ensure “that an item of information, whether it constitutes a generated or manipulated image, audio or video […] is distinguishable through prominent markings.”

On 16 February, in an initiative welcomed by RSF, Google, Microsoft, TikTok and Meta undertook to label AI-generated images. Although necessary, such measures must be reinforced at the technical level and complemented at the legal level in order to be effective: detection systems are fallible, and malicious actors will always be able to develop and use illicit tools that do not watermark synthetic images.

“Aside from the disinformation resulting directly from their illusory portrayal of reality, dangerous deepfakes threaten our capacity to trust all information. Their proliferation is liable to generate suspicion about the authenticity of all content, including journalistic content. Such a situation would be perilous for democracy, especially during elections. Effectively combating deepfakes requires action at all levels, from their creation to their dissemination, including by means of the law.”

Arthur Grimonpont
Head of RSF’s AI and Global Issues Desk