Most Americans do not trust social media companies to make the right decisions about what should be allowed on their platforms, but trust the government even less to make those choices, according to a poll released on Tuesday by Gallup and the Knight Foundation.
The debate over online content moderation, already in the spotlight during the COVID-19 pandemic and run-up to the U.S. election, has intensified in recent weeks as Twitter and Facebook diverged on how to handle inflammatory posts by President Donald Trump.
Twitter last month began to place labels on some of Trump's controversial tweets, while Facebook ignored its employees' request to take down his posts and even fired an employee who continued to protest against the company's inaction.
The new poll found nearly two-thirds of Americans favor letting people freely express their views on social media, including views that are offensive.
However, 85 percent of respondents favored removing intentionally false or misleading health information and 81 percent supported removing intentionally misleading claims about elections or other political issues.
Respondents were more critical of companies doing too little than too much in policing harmful content. Seventy-one percent of Democrats and 54 percent of independents thought companies were not tough enough, whereas Republicans were more divided.
Eight in 10 respondents said they do not trust Big Tech to make the right decisions on content. Most preferred companies making these rules over the government, though a slim majority of Democrats favored the government setting content limits or guidance.
Respondents tended to prefer the idea of having independent content oversight boards to govern policies, with 81 percent saying such boards were a good idea. Facebook is in the process of setting up an oversight board, which will hear a small number of content cases and make policy recommendations.
Almost two-thirds of respondents said they supported in principle the law that shields major internet companies from liability for users' content, Section 230 of the Communications Decency Act, which Trump and many lawmakers are pushing to pare back.
Long in the making
Observers say Twitter's recent actions on Trump's posts have the potential to open the floodgates for unprecedented regulation of the tech industry, but the roots of the company's new approach were sown as early as 2016.
After that year's election was widely believed to have been manipulated by Russian operatives who flagrantly abused Twitter, Facebook and other social media platforms, Congress and the American public began to hold the companies accountable for failing to act on these abuses. Following a congressional hearing in 2017, both companies launched programs to fight posts deemed to be contaminating their platforms.
The two social media giants went to great lengths on self-policing, with Facebook hiring tens of thousands of content moderators to run "fact-checking" and Twitter beginning to remove fake accounts and insulting posts en masse.
These moderation programs, however, exempted policymakers because their messages were considered too "newsworthy" to remove. But Twitter, after weighing the pros and cons of leaving such public statements untouched, began to feel it was inappropriate to make exceptions for politicians, people familiar with the matter told the Washington Post.
This thinking took shape after many members of the public complained to Twitter about the exception, and after criticism mounted against Facebook in 2018 over its reluctance to act on government officials' inflammatory messages.
However, without a fact-checking tool like Facebook's, Twitter had only two options for handling content: leave it up or take it down. Since the newsworthiness of politicians' statements discouraged outright removal, the company needed a third option that would let it fulfill its obligations while limiting the loss of newsworthy content.
Labeling hence became Twitter's strategy; it was first applied to comments made by politicians outside the United States last year.
Last month, Twitter said it would begin to apply fact-check labels to misinformation about the coronavirus and its first flagging of Trump's tweet followed later in May.
(With input from agencies)
(Cover: Twitter and Facebook logos along with binary cyber codes are seen in this illustration. /Reuters)