Social media companies are taking steps to curb online disinformation and fake news on their platforms ahead of the U.S. elections in November when the stakes couldn't be higher.
Last week, Facebook said it would restrict political ads in the week before the election and remove posts containing false information about COVID-19 and the 2020 election.
The announcement came after the social network took down a fake news operation believed to be run by the Russian group that allegedly sowed chaos and confusion in the lead-up to the 2016 election, raising fresh concerns about foreign interference.
Facebook CEO Mark Zuckerberg said the company had removed more than 100 networks worldwide engaging in election interference over the last few years. "This election is not going to be business as usual. We all have a responsibility to protect our democracy," Zuckerberg said earlier this month.
"Fake news" became a thing after the last U.S. election. Viral social media posts containing outlandish claims about U.S. presidential candidates, published by hyper-partisan accounts and sketchy click-bait websites in the weeks before the 2016 election, were believed to have influenced voter sentiment.
It later emerged that the bulk of the fabricated stories had been the work of "content farms," run by young people in the Balkans who were in it for quick ad bucks. Though widely accepted, public evidence for the alleged Russian connection was barely there.
But those allegations became the focus of U.S. political energy in the following two years, while tech giants like Facebook and Twitter landed in the hot seat in Congress for allowing false information to spread on their platforms.
Facebook was caught in scandals around fake news and election interference after the 2016 U.S. election. /VCG
In just a few years, "fake news" has come to be recognized as a real threat to democracy around the world. Dozens of countries have already enacted "fake news" laws of varying severity, but neither government nor industry-led measures have managed to tackle the problem without controversy.
The 'infodemic'
In 2020, the global pandemic combined with the most contentious race in decades creates the perfect storm as fearmongers and conspiracy theorists from every corner of the internet come out in force.
In April, the United Nations Secretary-General formally identified a "mis-infodemic" – the spread of false news about the COVID-19 pandemic. In a world preoccupied with the coronavirus, conspiracy theories about the origins of the virus, dangerous fake health advice, and rampant discrimination and stigma related to the disease have spread like wildfire online.
It got so bad that the World Health Organization (WHO) had to add a "myth busters" section to its website. Major social media sites have been directing users searching for virus-related information to the WHO's website atop their front pages.
But all this didn't stop the pandemic from becoming heavily politicized in the U.S., where epidemic prevention and control measures, including sheltering at home and mask-wearing, have turned into points of contention this election season. Far-right conspiracy theorists like "QAnon" are commanding unprecedented attention.
Fake news about the coronavirus is further proof that what is said online doesn't stay online: believing the virus is a "hoax," drinking bleach and burning down 5G towers all have real-life consequences.
A supporter of far-right conspiracy group 'QAnon' at a Trump rally. /AP
More than a platform
Big tech's role in the proliferation of fake news has raised serious questions about how online information should be regulated. Companies are constantly accused both of not doing enough and of overstepping, forcing them to walk a delicate line between social responsibility and free speech.
During the George Floyd protests in June, Facebook triggered an advertiser revolt by allowing an inflammatory post from President Donald Trump to stay online. Then in August, it failed to stop an armed militia from organizing on the platform ahead of a fatal shooting in Kenosha, Wisconsin, involving one of the group's members.
Zuckerberg has always maintained that Facebook is a platform, not a media company. But these days, the platform has grown more powerful than most news organizations in distributing information.
Since 2016, everything from mainstream news outlets to unwelcome opinions has been labeled "fake news." When calling biased media reports, or simply views one disagrees with, "fake news" becomes the norm – a usage popularized by none other than Trump – the line between fighting fake news and censorship blurs.
In its efforts to fact-check information and weed out fake news, including falsehoods propagated by government officials, the industry continues to face scrutiny over its handling of political speech, especially during election seasons.
After Twitter flagged some of Trump's tweets, the White House threatened to strip internet companies of the protection afforded by Section 230 of the 1996 Communications Decency Act, which stipulates that online service providers shall not be treated as publishers of content posted by their users.
Read more: Twitter labels China and Russia 'state-affiliated media' accounts but not BBC, NPR & VOA
Trump supporters hold up a sign calling mainstream news media "fake news" during a rally in 2018. /AP
Algorithm the real culprit?
A 2018 MIT study published in the journal Science found that on Twitter, false news spreads six times faster than real stories and is about 70 percent more likely to be retweeted.
Cindy Otis, a former CIA analyst and author of True or False: A CIA Analyst's Guide to Spotting Fake News, said that the highly charged nature of social media amplifies online disinformation. "It's never really the 'let's step back and pause' kind of takes that go viral. It's the emotionally charged, pithily worded hot takes that go viral," she said.
In addition, social media algorithms are designed to keep users glued to their screens, creating digital echo chambers that make people victims of their own browsing habits. As a result, users become more prone to believing things that confirm their partisan biases and rejecting everything outside their bubbles, further polarizing public opinion on everything from face masks to racism.
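The feedback loop described above can be illustrated with a toy simulation – this is a hypothetical sketch, not any platform's actual ranking system, and the function name, parameters and "slant" scale are all illustrative. A ranker serves whichever candidate item is closest to its running estimate of the user's taste, and engagement nudges that estimate toward the user's true lean, so the served feed narrows over time:

```python
import random

def run_feed_simulation(steps=300, lean=0.7, lr=0.3, seed=1):
    """Toy model of an engagement-ranked feed drifting toward a user's bias.

    Each item has a political 'slant' in [-1, 1]. The ranker keeps a
    running estimate of the slant the user engages with and always
    serves the candidate closest to that estimate; engagement then
    updates the estimate, so served content narrows toward the lean.
    (Purely illustrative parameters, not a real ranking algorithm.)
    """
    rng = random.Random(seed)
    estimate = 0.0          # ranker's belief about the user's preference
    served = []
    for _ in range(steps):
        candidates = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        # Serve the item predicted to be most engaging: nearest the estimate.
        item = min(candidates, key=lambda s: abs(s - estimate))
        served.append(item)
        # Engagement is highest for items near the user's true lean.
        engagement = max(0.0, 1.0 - abs(item - lean))
        estimate += lr * engagement * (item - estimate)
    return served, estimate

feed, final_estimate = run_feed_simulation()
early_avg = sum(feed[:50]) / 50    # average slant of early items
late_avg = sum(feed[-50:]) / 50    # average slant of late items
```

Comparing the average slant of the first and last 50 served items shows the drift: the feed starts near neutral and ends clustered around the user's lean, even though the ranker never explicitly asked for their politics.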
Once people start believing something, setting the record straight is very difficult, said Hany Farid, a professor from the School of Information at UC Berkeley. When it comes to conspiracy theories, Farid pointed to the phenomenon called the "boomerang effect" – when you tell someone that what they think is wrong, it only entrenches them further.
"This is the world we live in now – where facts have become something other than what they should be," he said.