Big Tech Ramps Up Content Moderation Amid EU Pressure

Major social media platforms like Meta and TikTok are scrambling to demonstrate their commitment to removing harmful and illegal content, following stern warnings from European Commissioner Thierry Breton.

Breton recently sent letters to the companies' CEOs, urging more action to curb the spread of violent, terror-related and election disinformation content. His letters referenced the platforms' responsibilities under the EU's Digital Services Act (DSA) to swiftly moderate illegal material when notified.

The regulations, which took full effect this past August, require platforms with more than 45 million EU users to adopt more rigorous content monitoring or face fines of up to 6 percent of their global revenue.

Both companies moved quickly to publicize new measures in the wake of Breton's letters. Meta announced the creation of operations centers with Arabic and Hebrew experts to monitor real-time content related to the Israel-Hamas conflict. The company says it has removed over 795,000 violating posts and increased takedowns of content supporting dangerous organizations.

Specific actions include prioritizing the removal of content that incites violence or endangers kidnapping victims, restricting problematic hashtags, temporarily withholding strikes against accounts whose content is removed, and cooperating with family requests to memorialize deceased users' accounts.

Similarly, TikTok said in a blog post that it has mobilized significant resources and personnel in response to recent events in Israel and Palestine. This includes establishing a dedicated command center to monitor emerging threats and rapidly take action against violative content.

TikTok also detailed the rollout of automated detection systems, additional content moderators, and restrictions around livestreaming and hashtags. The company claims over 500,000 videos have been taken down for policy violations amid the recent violence.

Last Friday, Breton also penned a letter to Alphabet CEO Sundar Pichai, addressing the surge of illegal content and disinformation being disseminated on YouTube following the terrorist attacks carried out by Hamas against Israel and the latter's military response.

Breton highlighted in particular the platform's duty to protect minors from inappropriate videos and to maintain robust age verification measures.

"I would firstly like to remind you that you have a particular obligation to protect the millions of children and teenagers using your platforms in the EU from violent content depicting hostage taking and other graphic videos," he wrote.

Concerns go beyond the Israel-Hamas war and the protection of minors to another pressing issue: disinformation in the context of elections. Elections widely regarded by observers as "critical" for the future of the European Union are currently underway in Poland, where the ruling party has been accused of using disinformation techniques to help secure its previous electoral victory.

Voters will also soon go to the polls in Belgium, Croatia, Romania, Lithuania, the Netherlands and Austria, not to mention the 2024 European Parliament elections. With deepfakes and manipulated content threatening to sway voter sentiment, it's crucial that tech companies step up their content moderation efforts.
