The Digital Services Act represents the democratic world’s most ambitious effort to regulate social media. It’s off to a slow start.
By Bill Echikson, for CEPA
The European Union designed the Digital Services Act (DSA), which took effect in 2023, to force Google, Facebook, Twitter, Telegram, and other digital platforms to combat disinformation, online extremism, and outright scams. And yet, when France arrested Telegram founder Pavel Durov, it could not bring charges under the landmark act.
While the sprawling DSA marks a vital turning point from unbridled Internet freedom toward intense regulatory scrutiny of social media, its early months of implementation demonstrate the difficulty of balancing free expression and government oversight. How the DSA will work, how it will be enforced, and whether it will provoke a transatlantic split remain giant question marks.
Start with the Telegram paradox. Durov’s Telegram, despite having almost one billion users worldwide, serves fewer than 45 million in the EU, below the threshold for designation under the DSA as a Very Large Online Platform. France arrested the Russian entrepreneur under its national speech laws, accusing him of running an encrypted platform for pedophiles, cybercriminals, and terrorists. Yet Telegram has proved invaluable for government opponents, allowing them to communicate under dictatorial regimes.
The line between acceptable and unacceptable speech, between censorship and platform responsibility, is fuzzy. When I worked at Google, Italy indicted three of the company’s top executives for violating a disabled Italian boy’s privacy. The so-called Vividown case stemmed from the upload of a terrible video showing a group of boys bullying an autistic child. Although Google took down the video once notified, the executives faced criminal charges. Free expression collided with privacy.
Google lost the first trial. My own efforts to make the case into a fight for free expression failed. Google finally hired the lawyer who defended wrongly convicted American student Amanda Knox’s boyfriend and won on appeal. No Google executive went to prison in Italy.
Yet the case represented a vital warning: the pendulum was swinging against the Internet’s unbridled freedom. In the ensuing years, pressure would grow to crack down on controversial content, and companies would be required to demonstrate “responsibility.”
When the Internet was developed, the US and Europe set clear limits on liability for digital platforms. Platforms weren’t held responsible for illegal material uploaded to their sites; they were obliged only to take down illegal material once notified. In Europe, the rules were enshrined in the e-Commerce Directive; in the US, in Section 230 of the Communications Decency Act.
Without these legal safe harbors, many of the internet’s success stories would never have gotten off the ground. Imagine if YouTube were held responsible for every upload, Blogger for every blog post, and TripAdvisor for each restaurant or hotel review. User-generated content would have been too dangerous to publish.
Today, governments, courts, and public opinion demand that internet firms police and prevent illegal material from being posted on their platforms. Copyright owners believe the net feeds piracy. Police and intelligence services think it feeds extremist terrorism and want access to data from suspects. Politicians fear false news could tarnish elections and even force them from power.
While the European Union passed the DSA, the US has hesitated. The First Amendment protects far more speech than European law, which criminalizes subjects such as Holocaust denial. The Supreme Court, so radical in its approach to many other issues, has avoided imposing clear new rules. And Congress remains blocked on major federal tech legislation, even though bipartisan momentum is building behind a new child safety act.
It is difficult to find agreement on how to reform the e-Commerce Directive or Section 230. If the crackdown is too severe, free expression suffers. Europe’s DSA attempts to strike a balance by preserving the e-Commerce Directive’s ban on proactive general monitoring. But it requires platforms to take a series of measures to control what people can post, what they sell, and what advertisements they see, all in the name of protecting others online. Firms that fail to comply face fines of up to 6% of their worldwide revenue.
Even armed with this potent weapon, regulators face significant enforcement challenges. The Brussels-based team in charge of DSA implementation counts only about a dozen members, and much of its time is spent ensuring that new EU legislation in areas such as product safety does not overlap with the act.
National EU governments, which are supposed to support the EU team, have delayed appointing their DSA officials, and many of those appointed still await guidance from Brussels. Regulators are conducting new consultations on how to use the law to strengthen child safety and how to appoint “trusted flaggers” to identify illegal content.
Enforcement has focused on a few high-profile cases. Brussels regulators forced TikTok to shut down a program that rewarded users for spending time on the platform, and they have opened a probe into Meta’s decision to close its CrowdTangle transparency tool.
The most significant conflict is with Elon Musk’s X, which, unlike Telegram, has more than 45 million EU users and is designated a Very Large Online Platform. European regulators have issued a preliminary finding that X’s content moderation policies breach the DSA. Before Musk’s recent interview with Republican presidential candidate Donald Trump, European Commissioner Thierry Breton warned him, ironically in a post on X itself, not to violate the DSA.
Musk responded with an expletive, decrying what he called censorship; he had previously derided the DSA itself as “misinformation.” X could face billions in fines, a showdown that would highlight how far the US and EU have diverged over regulating Internet freedom.
Instead of fighting about regulation, the best path forward might be to encourage bottom-up rather than top-down approaches to policing content. Wikipedia and Reddit are positive models: both sites have made the arduous journey from permitting hate speech to becoming trusted sources. They self-police.
European officials administering the DSA don’t expect platforms to prevent all hate speech; they just want to see strong programs to limit the dangers. While the US may not legislate such requirements, these programs reassure users and are good for business. Who wants to advertise in a toxic cesspool?
Bill Echikson is a non-resident Senior Fellow for CEPA’s Digital Innovation Initiative and editor of Bandwidth.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.