While President Trump’s social media musings are often in the news, Twitter was the one making headlines over the summer when, for the first time, the platform flagged one of Trump’s tweets for glorifying violence, and fact-checked two others. The actions were a major turning point, said Colin Crowell ’86, who left Twitter last December after eight years as vice president of global public policy and corporate philanthropy. “Navigating the emerging and evolving online terrain of disinformation to protect the integrity of vital civic conversations, without succumbing to excessive censorship, is critical to safeguarding the internet as a vibrant platform for human expression,” Crowell said. We recently asked him about the social media giant’s new approach.
Commercial entities shouldn’t have the role of determining truth or falsehood in the online public square. Ideally, that is the job of journalists—to hold those in power accountable and provide context to readers, viewers, and voters. But, when you have a platform as large as Twitter, it is important to recognize when there is a need to help users understand what they are viewing with additional context. Twitter can and should take action when online speech risks offline harm, like disenfranchising a voter or spreading disinformation about COVID-19. Social media companies must remove coordinated accounts disseminating disinformation.
It was the first time, and it was a new tool. It provided a screen that said, basically, this tweet is in violation of Twitter rules but it is remaining on the service in the public interest, because the public has a right to see it. Before, Twitter only had a binary choice: It could delete a tweet or leave it up. In addition to noting that the tweet violated the rules, Twitter also took action to limit the tweet's ability to go viral. It allowed journalists to comment on it. It allowed the public to comment on it. But the ability of the core content itself to go viral was hobbled.
There is an inherent tension. The rules of Twitter apply to everybody, but the company also knows it's vital in a democracy to have an informed citizenry. What the president says is important to know, regardless of its content, because he is a democratically elected head of state. Arguably, if the content is provocative, inappropriate, or has the effect of misinforming voters, it's even more important that that content is seen so that people can debate it, support it, refute it, and dissect it publicly.
Section 230 says a few things, but mainly it states that online companies aren't liable for the speech of others on their platforms. If you get rid of that section, it creates a dilemma for the companies because they would have to over-censor and start taking down anything that might remotely run a liability risk, undermining the real-time nature of the internet. Before content could be posted, you'd have to run it by lawyers. It would have the unintended effect of making the largest companies more powerful because they could afford the legal teams and the legal bills. And any objections would then have to be fought out in the court system. It would really create a mess.
The fact that any individual anywhere in the world with a smartphone could bear witness to history and share it with the rest of the world instantaneously is, in itself, revolutionary. The thing that carried me through my Twitter experience was to see, on a fairly routine basis, how those historically less powerful or marginalized voices could be heard. Because out of these conversations come movements. And then out of the movements come the chance that societies advance and move forward.