On January 6, hours after President Donald Trump repeated falsehoods about the 2020 election to a crowd of his supporters in Washington, D.C., a mob stormed and vandalized the US Capitol, forcing the evacuation of legislators and interrupting their work to certify the presidential election result. Five people were killed, one of them shot by a Capitol Police officer. The shocking incident appears to have been the last straw for the major social media companies, which responded with their strongest action yet against Trump and the harmful disinformation he has spread. Twitter announced a 12-hour suspension, Facebook and Instagram banned the president’s profile for the rest of his term, and YouTube said it would begin restricting accounts that post videos containing false information about the election.
With this sweeping action, Silicon Valley’s leading social media firms went some way toward answering the chorus of voices calling for the president to lose the digital platforms through which he has aimed to discredit the 2020 election’s results and, some say, to incite violence.
In many ways, the actions also marked a reversal for the social media companies, albeit one they had been building toward gradually. In a 2018 blog post, Twitter said that blocking a world leader from the platform or removing their tweets “would hide important information people should be able to see and debate.” Of course, by the time it temporarily suspended Trump this week, Twitter had already taken to labeling, limiting or removing his offending tweets, a step it first took in May. Facebook, too, had previously balked at limiting Trump’s reach on its platform, with CEO Mark Zuckerberg citing Facebook’s strong “free expression” principles.
We don’t know yet what the big picture of disinformation will look like after Trump leaves office on January 20.
There’s already widespread anticipation of further action against Trump and the disinformation he has fomented. Kevin Roose of The New York Times reported that several Twitter and Facebook employees expect the platforms’ bans to be extended beyond Trump’s term.
If he loses his bully pulpit on social media, an obvious short-term fix for Trump might be to migrate to Parler or Gab, two smaller social media services seen as friendlier to right-wing figures because of their less strict moderation. This would surely limit Trump’s audience, but as Roose points out, the president has benefited from the close attention mainstream newsrooms pay to his tweets. Whatever happens between now and Inauguration Day, and afterward, one thing seems clear: Trump will remain a newsworthy figure. How will newsrooms cover Trump when he is never more than a few keystrokes away, whether on Twitter or in the comparative niches of Gab or Parler? And how will companies like Facebook continue to fulfill the role they tacitly accepted in the wake of the storming of the Capitol, when harmful messaging is sure to keep circulating?
The platforms have set a new standard. Can they enforce it worldwide?
The platforms’ enforcement actions this week were a response to longstanding concerns that they do too little to police disinformation. By acknowledging more explicitly than ever their aim to limit the spread of harmful information, the companies have also raised questions about their worldwide reach, and their worldwide responsibility.
Brazil’s president, Jair Bolsonaro, has often been compared to Trump, particularly in his use of social media to bypass journalists he views as hostile and speak directly to his base. He has also been the subject of enforcement action: in March, Twitter, Facebook and YouTube removed some of his posts containing misinformation about the coronavirus. If Brazil finds itself in a political crisis and Bolsonaro uses his digital presence to organize a rally that might turn violent, will the platforms be prepared to head off unrest?
Look elsewhere and a similar dilemma emerges for Facebook. In the Philippines, a country with an alarming level of violence against the press, President Rodrigo Duterte’s government has used official Facebook pages to target media outlets that criticize it. When social media platforms become a vehicle for threats and intimidation, will they take a closer look at the tension between a government’s right to communicate and the potential for dangerous incitement to violence, not just in the US but around the world?
Are the platforms equipped to prevent further political violence?
We already know that much of the chaos of January 6 was planned in wide-open online spaces.
Organizing a political rally online is legal on its own, of course, but inciting a violent riot isn’t. At First Draft, our monitoring team, like its counterparts at other organizations working to counter disinformation, found numerous calls for violence in fringe spaces such as The Donald, a gathering place for Trump supporters that was banned from Reddit in June. On Facebook, a Group with nearly 8,000 members planned travel to the January 6 rally.
Can social media platforms connect the dots between organizing that takes place on their services and the red flags that portend violence? What about when those red flags pop up elsewhere? Individual platforms can be focal points for harmful information, but its spread is complex and eludes the reach of any one service. On January 6, the authorities knew clashes were likely and prepared accordingly, although, as it turned out, inadequately. But it doesn’t take coordination on the massive scale seen in Washington to lead to violence. Social media has been, and will remain, a driver of disinformation, unrest and worse in venues large and small.
To ensure that the shocking and deadly events of January 6 aren’t repeated, social media platforms will need to reckon with the fragmentary nature and massive scale of harmful information online. That may be easier said than done.