Saturday, February 22, 2025

Meta, X approved ads containing violent anti-Muslim, antisemitic hate speech ahead of German election, study finds


Social media giants Meta and X approved ads targeting users in Germany with violent anti-Muslim and antisemitic hate speech in the run-up to the country’s federal elections, according to new research from Eko, a corporate responsibility nonprofit campaign group.

The group’s researchers tested whether the two platforms’ ad review systems would approve or reject submissions for ads containing hateful and violent messaging targeting minorities ahead of an election where immigration has taken center stage in mainstream political discourse — including ads containing anti-Muslim slurs; calls for immigrants to be imprisoned in concentration camps or to be gassed; and AI-generated imagery of mosques and synagogues being burnt.

Most of the test ads were approved within hours of being submitted for review in mid-February. Germany’s federal elections are set to take place on Sunday, February 23.

Hate speech ads scheduled

Eko said X approved all 10 of the hate speech ads its researchers submitted just days before the federal election is due to take place, while Meta approved half (five ads) for running on Facebook (and potentially also Instagram) — though it rejected the other five.

The reason Meta provided for the five rejections indicated the platform believed there could be risks of political or social sensitivity which might influence voting.

However, the five ads that Meta approved included violent hate speech likening Muslim refugees to a “virus,” “vermin,” or “rodents,” branding Muslim immigrants as “rapists,” and calling for them to be sterilized, burnt, or gassed. Meta also approved an ad calling for synagogues to be torched to “stop the globalist Jewish rat agenda.”

As a side note, Eko says none of the AI-generated imagery it used to illustrate the hate speech ads was labeled as artificially generated, yet Meta still approved half of the 10 ads, despite the company having a policy that requires disclosure of the use of AI imagery for ads about social issues, elections, or politics.

X, meanwhile, approved all five of these hateful ads — and a further five that contained similarly violent hate speech targeting Muslims and Jews.

These additional approved ads included messaging attacking “rodent” immigrants that the ad copy claimed are “flooding” the country “to steal our democracy,” and an antisemitic slur which suggested that Jews are lying about climate change in order to destroy European industry and accrue economic power.

The latter ad was combined with AI-generated imagery depicting a group of shadowy men sitting around a table surrounded by stacks of gold bars, with a Star of David on the wall above them — with the visuals also leaning heavily into antisemitic tropes.

Another ad X approved contained a direct attack on the SPD, the center-left party that currently leads Germany’s coalition government, with a bogus claim that the party wants to take in 60 million Muslim refugees from the Middle East, before going on to try to whip up a violent response. X also duly scheduled an ad suggesting “leftists” want “open borders” and calling for the extermination of Muslim “rapists.”

Elon Musk, the owner of X, has used the social media platform, where he has close to 220 million followers, to personally intervene in the German election. In a post in December, he called for German voters to back the far-right AfD party to “save Germany.” He has also hosted a livestream with the AfD’s leader, Alice Weidel, on X.

Eko’s researchers disabled all test ads before any that had been approved were scheduled to run to ensure no users of the platform were exposed to the violent hate speech.

It says the tests highlight glaring flaws with the ad platforms’ approach to content moderation. Indeed, in the case of X, it’s not clear whether the platform is doing any moderation of ads, given all 10 violent hate speech ads were quickly approved for display.

The findings also suggest that the ad platforms could be earning revenue as a result of distributing violent hate speech.

EU’s Digital Services Act in the frame

Eko’s tests suggest that neither platform is properly enforcing the bans on hate speech that both claim to apply to ad content in their own policies. Furthermore, in the case of Meta, Eko reached the same conclusion after conducting a similar test in 2023, ahead of new EU online governance rules coming in — suggesting the regime has had no effect on how the company operates.

“Our findings suggest that Meta’s AI-driven ad moderation systems remain fundamentally broken, despite the Digital Services Act (DSA) now being in full effect,” an Eko spokesperson told TechCrunch.

“Rather than strengthening its ad review process or hate speech policies, Meta appears to be backtracking across the board,” they added, pointing to the company’s recent announcement about rolling back moderation and fact-checking policies as a sign of “active regression” that they suggested puts it on a direct collision course with DSA rules on systemic risks.

Eko has submitted its latest findings to the European Commission, which oversees enforcement of key aspects of the DSA on the pair of social media giants. It also said it shared the results with both companies, but neither responded.

The EU has open DSA investigations into Meta and X, which include concerns about election security and illegal content, but the Commission has yet to conclude these proceedings. It did say, back in April, that it suspects Meta of inadequate moderation of political ads.

A preliminary decision on a portion of its DSA investigation on X, which was announced in July, included suspicions that the platform is failing to live up to the regulation’s ad transparency rules. However, the full investigation, which kicked off in December 2023, also concerns illegal content risks, and the EU has yet to arrive at any findings on the bulk of the probe well over a year later.

Confirmed breaches of the DSA can attract penalties of up to 6% of global annual turnover, while systemic non-compliance could even lead to regional access to violating platforms being blocked temporarily.

But for now the EU is still taking its time to make up its mind on the Meta and X probes, so any DSA sanctions remain up in the air pending final decisions.

Meanwhile, it’s now just a matter of hours before German voters go to the polls — and a growing body of civil society research suggests that the EU’s flagship online governance regulation has failed to shield the major EU economy’s democratic process from a range of tech-fueled threats.

Earlier this week, Global Witness released the results of tests of X and TikTok’s algorithmic “For You” feeds in Germany, which suggest the platforms are biased in favor of promoting AfD content versus content from other political parties. Civil society researchers have also accused X of blocking data access to prevent them from studying election security risks in the run-up to the German poll — access the DSA is supposed to enable.

“The European Commission has taken important steps by opening DSA investigations into both Meta and X, now we need to see the Commission take strong action to address the concerns raised as part of these investigations,” Eko’s spokesperson also told us.

“Our findings, alongside mounting evidence from other civil society groups, show that Big Tech will not clean up its platforms voluntarily. Meta and X continue to allow illegal hate speech, incitement to violence, and election disinformation to spread at scale, despite their legal obligations under the DSA,” the spokesperson added. (We have withheld the spokesperson’s name to prevent harassment.)

“Regulators must take strong action — both in enforcing the DSA but also for example implementing pre-election mitigation measures. This could include turning off profiling-based recommender systems immediately before elections, and implementing other appropriate ‘break-glass’ measures to prevent algorithmic amplification of borderline content, such as hateful content in the run-up elections.”

The campaign group also warns that the EU is now facing pressure from the Trump administration to soften its approach to regulating Big Tech. “In the current political climate, there’s a real danger that the Commission doesn’t fully enforce these new laws as a concession to the U.S.,” they suggest.
