Combating Online Misinformation: Diverging Approaches in the United States and European Union
As Big Tech gets more scrutiny for its role in spreading misinformation via online platforms, the United States and European Union pursue different paths.
By: Saaket Pradhan, Staff Member
Both the United States and European Union have in place longstanding rules that protect operators of social media companies and other digital platforms from liability for what users post on their sites. These rules were designed over twenty years ago to promote growth in the then-nascent internet sector, and they underpin how the web works today.
With misinformation about political elections and COVID-19 spreading on unregulated social media platforms and causing real-world harm, both the United States and the EU are re-examining their rules and considering whether to rein in powerful and wide-reaching platforms such as Facebook, Twitter, and YouTube. The key question for policymakers is whether and how these platforms should be required to moderate user-generated content, and what legal ramifications the platforms should face if they are found to be in violation of these hypothetical future standards.
The American Approach
For the United States, the 2016 presidential election awakened many to the threat of disinformation on social media platforms. For example, Russia interfered in the American election by spreading disinformation on Facebook. However, unlike the European Union, the United States has not enacted regulations to prohibit false or misleading political advertisements on social media.
In the absence of federal regulation of social media platforms, there is little action the federal government can take regarding content posted on them. With Twitter, for example, the government frequently flags content and requests its removal from the social network, but because Twitter is under no legal obligation to comply, it routinely denies these requests.
With a lack of government regulation or guidance informing American social media companies’ response to the spread of disinformation, some companies have resorted to answering the question of what speech should be permitted and what should be banned on their own.
Social media platforms may moderate content under Section 230 of the Communications Decency Act (CDA), which gives online intermediaries broad immunity from liability for user-generated content posted on their sites. The purpose of this grant of immunity was to encourage platforms to take active roles in removing offensive content as “Good Samaritans” as well as to avoid free speech problems of censorship.
The European Approach
The European Union has handled internet regulation much more aggressively. Europeans’ fears about the harm caused by disinformation on social media have been growing over the past few years. The 2015 refugee crisis, the Brexit referendum in 2016, and a series of major elections, notably the German federal and French presidential elections in 2017, raised concerns about the scale of disinformation and its threat to European democracy. In response, Europe has been developing a working framework to try to rein in the power of the largest social media platforms.
European laws have already started chipping away at the longstanding legal protections provided to tech platforms, such as requiring platforms to obtain licenses for copyrighted content before user posts are uploaded. In Germany and France, platforms are subject to fines if they fail to remove illegal “hate speech” as defined by those countries, as well as other harmful content, within a certain amount of time. The EU has also developed a voluntary Code of Conduct, with many of the largest tech platforms as signatories, which commits platforms to remove hate speech and demote disinformation.
In December 2020, the European Commission went a step further and introduced its most significant effort to update its digital regulations: the Digital Services Act (DSA) and the Digital Markets Act (DMA). These two proposed Acts, if adopted by the 27 Member States, would be the most sweeping overhaul of digital regulations in twenty years and enhance the EU’s reputation as having the most stringent regulations over Big Tech.
The Digital Services Act is a major overhaul of the EU’s digital rulebook and grants regulators new powers to fine social media giants that fail to clamp down on online falsehoods. Under its definitions, “large platforms” are those with more than 45 million users, equivalent to approximately 10% of the European Union’s population. Large platforms will have to appoint one or more compliance officers to ensure compliance with the DSA. Failure to comply could result in fines of up to 6% of total turnover in the previous fiscal year, with the amount depending on the severity of the violation, how long it has been ongoing, and whether it recurs. The companion Digital Markets Act goes further still: violations of its rules could lead to hefty fines of up to ten percent of a company’s global revenue, or threats to break up firms that repeatedly break the new rules. European Commission Executive Vice President Margrethe Vestager has estimated that the new rules will take two years to come into force.
Looking Forward
During the Trump Administration, the failure to pass overarching regulation of Big Tech contrasted with the president’s anger at many of the social media platforms for flagging his posts in what he alleged was an act of political censorship. In response to Twitter putting a fact-checking label on several of his tweets, then-President Trump unveiled an executive order aimed at scrapping Section 230 protections for social media sites seen as engaging in political censorship.
Unlike some of his opponents in the Democratic primary, Joe Biden did not make regulation of the tech giants a centerpiece of his campaign. However, calls from Democratic Party activists for robust tech regulation have grown more urgent, especially in the aftermath of the January 6 Capitol riots, which left five people dead and exposed the dangers of unregulated speech on social media in a jarring way. The Biden Administration has signaled an interest in rethinking Section 230 and either revising or repealing it. Senator Richard Blumenthal (D-CT) believes that the riots, which left one police officer and four rioters dead, could further embolden the White House and Congress and encourage them to act.
In the waning days of the Trump Administration, antitrust lawsuits were filed against Google by the U.S. Department of Justice and against Facebook by the Federal Trade Commission. The Biden Administration seems poised to continue these actions, and its Democratic congressional allies believe there will be bipartisan support for such measures.
In the meantime, the European Union seeks to work with the Biden Administration on Big Tech regulation. The head of the European Commission, Ursula von der Leyen, called for cooperation between the EU and United States, pointing to the January 6 riots as an example of why social media needs to be regulated. She warned that such internet-inspired violence could happen in Europe as well.
In addition, von der Leyen views the unilateral ability of private companies to decide whom to censor as problematic. “No matter how right it may have been for Twitter to switch off Donald Trump's account five minutes after midnight, such serious interference with freedom of expression should be based on laws and not on company rules. It should be based on decisions of parliaments and politicians and not of Silicon Valley managers,” she said. As a first step toward such cooperation, von der Leyen hopes to establish a joint technology council to find common ground in writing new regulations.
Overall, Europe’s existing internet regulations and policies are preparing it to better combat disinformation spread on social media. With the transformative proposal of the Digital Services Act, the European Union has taken an aggressive approach to regulating Big Tech. While the United States is far behind the EU with regard to establishing a framework for the modern online age, President Joe Biden has indicated his willingness to take a stronger approach to Big Tech regulation. However, it remains to be seen what legislation will be passed, or what the outcome of the antitrust lawsuits against Facebook and Google will be. In the meantime, the prospect of the United States and the European Union working together to establish new governing rules for Big Tech has the potential to take what is currently a divergence in regulation and turn it into a convergence.
Saaket Pradhan is a second-year student at Columbia Law School and a Staff member of the Columbia Journal of Transnational Law. He graduated from Columbia College in 2016 with a B.A. in Economics and Political Science. Prior to law school, Saaket worked as a Project Analyst at Mintz Levin.