Europe’s new law designed to tame illegal online content is off to a slow start – and faces giant obstacles.
By Anda Bologa, for CEPA
Europe’s Digital Services Act promised to safeguard democracy from the digital world’s dangerous hazards: disinformation, hate speech, and manipulative online campaigns targeting elections. But the law’s high-minded ambitions have collided with the challenges of on-the-ground enforcement – and look set to clash with the incoming Trump administration.
Romanian authorities suspect China’s TikTok of meddling in the country’s presidential elections. US platforms are veering away from content moderation, attempting to woo the victorious President-elect Donald Trump. Elon Musk has slashed X’s safeguards, while Mark Zuckerberg’s Meta retreats from fact-checking.
A stark difference exists in how Europe and the US police content. Section 230 of the US Communications Decency Act gives American social media networks broad immunity for user-generated posts. A new Trump administration looks set to reinforce a hands-off approach.
Yet Europe’s DSA demands the opposite – increased accountability and active enforcement. With no equivalent law in sight in the US, tech companies face minimal incentive to adopt Europe’s strict rules.
The result will be a fragmented democratic internet and a transatlantic split. Platforms might adopt stricter moderation for Europe while allowing incendiary or extreme content in the US. But disinformation does not respect borders. Officials in Brussels must contend with US business models shaped by free-speech norms that clash with European interventionism.
Nowhere is the DSA’s struggle more visible than in Meta’s abrupt decision to end its US-based third-party fact-checking program in favor of a crowd-sourced “Community Notes.” Chief Global Affairs Officer Joel Kaplan argues that Meta’s previous fact-checking system became biased and restricted too much speech. Critics fear this shift means Meta will adopt a hands-off stance in other regions, including the EU.
On paper, the DSA subjects Meta and other Very Large Online Platforms (VLOPs) – services with at least 45 million monthly EU users – to strict oversight. They must perform detailed risk assessments. They must be transparent about content moderation. And they must cooperate with vetted third parties (including fact-checkers). But nothing in the law requires Meta to fund or maintain independent fact-checkers. If Meta continues to file risk assessments and offer transparency data, it could still claim compliance – all while allowing lies, hate speech, and disinformation to spread.
Romania’s election interference probe into TikTok highlights how small EU states struggle to rein in global platforms. Bucharest believes automated influencer campaigns skewed public sentiment before the last presidential election. Although TikTok claims full cooperation, its global reach outmatches Romania’s limited enforcement capacity, prompting the Brussels-based European Commission to open a formal DSA probe into the platform’s recommender systems and political advertising.
If the Commission finds that TikTok failed to address these systemic risks, it could levy substantial fines – up to 6% of global annual turnover – and require drastic changes to business practices. But a financial penalty, however robust, might not prevent interference in future elections. And while the US is on the verge of banning TikTok, Europe has not even begun to consider such a step.
Elon Musk’s X (formerly Twitter) offers a third DSA test. After acquiring the social media network, Musk shrank moderation teams and rolled back transparency tools that once helped researchers track misinformation. Critics say it has made it easier for hate speech and disinformation to flourish.
The Commission is investigating, but proving that X repeatedly failed to take down problematic content or mitigate disinformation is difficult. It takes time, resources, and official clarity on which posts qualify as illegal – all while Musk frames tight moderation as censorship. If the Commission acts too slowly or too timidly, disinformation might shape public debate long before regulators can intervene.
US tech CEOs are on the offensive against the DSA. “The DSA IS misinformation,” Musk fumed on X. Meta’s Zuckerberg describes it as “institutionalizing censorship and making it difficult to build anything innovative there.” The criticism infuriates EU leaders. “We know that it’s not true,” European Commission Vice President Henna Virkkunen told POLITICO. “In Europe freedom of speech is one of our fundamental values and it’s also respected and protected [in] our Digital Services Act. So it’s very misleading.”
If the EU wants the DSA to succeed, it needs fast and robust oversight. A rapid-response Commission unit dedicated to election crises would help; quick involvement is vital, because once disinformation is entrenched, it is hard to contain. The Commission should also require standardized transparency disclosures and mandatory external audits, which would help reveal whether Meta, TikTok, and X are quietly scaling back their moderation efforts or downplaying red flags.
A balanced penalty system might be needed because lengthy investigations let repeat violations slide. Incremental fines and other interim measures could deter slow-walking on compliance. A shared enforcement fund and cross-border collaboration would bolster national agencies that lack high-end digital forensics.
The DSA aims to prove that democracies can protect free expression while requiring serious accountability. EU regulators must act quickly and decisively against platforms’ recent moves to weaken content moderation, or they risk letting disinformation distort public discourse long before they can respond. Unless the EU adopts sharper tools, better funding, and closer cooperation among member states, the relentless churn of viral falsehoods threatens to overwhelm it.
Anda Bologa is a non-resident Fellow with the Tech Policy Program at the Center for European Policy Analysis. During her tenure at the European Union Delegation to the United Nations, she was responsible for high-level negotiations on artificial intelligence resolutions and the United Nations Global Digital Compact.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.