The US Government’s Role in Regulating Social Media Disinformation

This article was written prior to Elon Musk’s offer to purchase Twitter.

In a 2020 report, the Department of Homeland Security named White supremacists the single greatest domestic terror threat facing the US. On January 6, 2021, QAnon supporters, the Proud Boys, and other extremist groups stormed the US Capitol in a direct attack on democracy, carried out in plain sight. These groups didn’t rely on covert codes or clandestine channels to communicate: their recruitment and organization took place openly on social media.

Almost a year later, many still wonder: has anything been done to prevent such an egregious attack from happening again?

Americans who disliked the former administration celebrated when Twitter suspended then-President Donald Trump after the January 6 attack. And rightfully so: there is growing recognition that Twitter played a role in instigating the riots. Social media posts can cause tangible damage when companies set their own content standards and obfuscate the truth. Disinformation about the election festered and spread. Media corporations allowed certain profitable narratives to grow stronger, such as “The Big Lie,” Trump’s false claim that the presidential election was fraudulent and stolen. So the prevailing question going forward is: what role should the federal government play in regulating the content on social media platforms? Sadly, little progress has been made since January 6.

The Federal Communications Commission (FCC) is the government body responsible for regulating social media. However, the First Amendment prevents the Commission from exercising greater authority over content posted online. The regulation of other media, like broadcasting, has been the subject of constitutional debate in many landmark Supreme Court cases. In FCC v. Pacifica Foundation (1978), for instance, the Court ruled that the government could regulate indecent speech (language or material deemed offensive) in broadcasting.

For social media, however, there is a loophole to First Amendment restrictions found in Section 230 of the Communications Decency Act. Section 230 shields companies from liability for content posted by users on their platforms and protects them when they make a “good faith” attempt to remove illicit or “otherwise objectionable” material. The statute never defines what “good faith” means. Legislators intended this “Good Samaritan” provision to give platforms the flexibility to curate their content freely without immediate legal repercussions or restrictions. Furthermore, Section 230 grants the same immunity to the neutral “platforms” that host content and to the “publishers” that create their own.

In May 2020, months before the attack on the Capitol, then-President Trump signed an executive order directing the National Telecommunications and Information Administration to petition the FCC to reinterpret Section 230. In response, the FCC chairman said that social media companies “do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.” Additionally, in 2021, the Texas Supreme Court began limiting Section 230 protections for social media platforms, ruling that Facebook was not shielded from liability for sex-trafficking recruitment occurring on its site. These seem to be steps in the right direction, but lawmakers have otherwise been inactive.

Congress updated the 1934 Communications Act more than sixty years later with the Telecommunications Act of 1996, but new legislation is needed to address the threat that social media poses to our democracy. Technological advancements since 1996 have been significant, and Congress needs to adjust to the moment. To qualify for the protections of Section 230, social media companies should have their private regulatory mechanisms and practices approved by the FCC. The FCC should define a “good faith” effort by setting universal standards of conduct for the review and removal of certain content. The most popular social media companies, most of them based in the US, have similar content-moderation policies but apply them in different ways and at their full discretion. A law similar to Germany’s NetzDG, which requires social media companies to quickly remove content like hate speech or face fines, would ensure proper “good faith” implementation.

Much of the US population, particularly older people, is vulnerable to online threats because they don’t understand how social media operates. In addition to FCC regulation, it is crucial to fund national educational programs that raise awareness of how harmful information spreads online, who perpetrates it, and what individuals can do to protect themselves. The Estonian government, for example, has made media literacy and online safety (via formal education and lifelong learning) a key policy goal, and Estonia has seen a marked improvement in digital competence through the prioritization of such skills. Improving Americans’ media literacy would build public demand for further government action, making it more likely that social media regulation is implemented in the long term.

Tangible events that bring public attention to disinformation seem to drive the most urgent action. The most recent case of the US government working to combat insidious social media content involves Russian disinformation about Ukraine. Rather than targeting social media companies themselves, the Biden administration is fighting Russian propaganda by sharing intelligence to expose and discredit disinformation. Preemptively discrediting disinformation may prove effective in this instance; however, a multipronged approach is necessary for more general cases.

Unless regulated properly and comprehensively over time, insidious social media content will lead to further attacks on democracy. Congress needs to pass legislation granting the FCC the authority to examine Section 230 protections and to enforce universal regulatory standards for social media companies. The government also needs to put national programs in place to increase awareness and build public support for social media regulation. Even as Facebook faces mounting pressure to counter misinformation, it lacks generally applicable standards. No private company should have the ability to dictate what the truth is in America without being held accountable.
