How Can Platforms Deal with Toxic Content? Look to Wall Street

(The RAND Blog)

Photo by David Peperkamp/Getty Images

by James V. Marrone

May 26, 2023

Social media has a content problem. It's not the content itself, although the list of toxic material is long: misinformation, propaganda, conspiracy theories, hate speech, and incitement to violence, to name just a few. The problem is the sheer volume of content, more than any group of humans can review. Companies like Meta employ thousands of people and spend billions of dollars on moderation, but there is simply too much content for human teams to vet on their own; their work must be supplemented by computers.

Unfortunately, people are still far better than computers at this content moderation work, and people are expensive. Computer algorithms are trained to flag likely problematic content based on previous examples; people can recognize what is problematic without needing such patterns. People make mistakes, but so do algorithms, and algorithms can be outsmarted or tripped up by the nuances of human language. There is always a risk that toxic content will slip through the cracks.

The risk of toxic content is a constant, yet we navigate risks every day, sometimes without much thought. You might risk jaywalking on a city boulevard, but you probably won't risk walking across an interstate highway. Walking onto any street with cars will always be a little dangerous. The question is one of degrees, of how much risk one is willing to accept. When jaywalking, it's a question of the road and its cars—how many, how busy, how wide?

On social media, the question is quite similar: how much is too much? It might be relatively benign and even acceptable to allow a few pieces of political propaganda to slip through. But allowing a coordinated foreign propaganda campaign to spread across multiple platforms is almost certainly too much risk—although Russian operatives have repeatedly succeeded in doing just that, in 2016 (PDF), 2020, and again in 2022.

The problem tech companies face when assessing the risk of toxic content on their platforms is how to identify that threshold. What, in other words, presents too much risk? And what is an acceptable level of risk? Roads and cars have laws and traffic patterns. Content on the internet is much more slippery. Who defines the risk threshold, and on what basis? Those questions are at the heart of this increasingly vexing problem.

Fortunately, these questions aren't particularly new. The United States has been here before, just in a different sector. Leading up to the 2008 financial crisis, Wall Street foreshadowed the tech sector in several ways, most of all in its increasing reliance on algorithms. Whereas social media companies use algorithms to assist humans in content moderation and to determine whether an account is a bot, financial firms were using them (and still do) to calculate portfolio risk. Just like tech's algorithms today, the financial algorithms of 2008 had serious limitations. The newest financial products were so complex that their prices could not really be calculated, and even the ratings (PDF) that investors used to gauge riskiness were poorly understood. As a result, risk estimates were imprecise and opaque, even to the firms themselves.

This all took place in an era of deregulation, when markets were being trusted to develop and enforce their own standards through Self-Regulatory Organizations (SROs), such as stock exchanges. In the early 2000s, government officials—having granted them regulatory authority—believed that these SROs would properly address the risks of new financial products; 2008 proved them wrong.

Still, a faith in self-regulation seems to have reappeared, this time in discussions about regulating tech. Professors at Harvard and the University of Chicago have recently suggested that SROs are the best regulatory option for the tech sector. The success of that argument depends on the potential of still-nascent tech SROs, such as the Digital Trust and Safety Partnership, to define and monitor industry best practices for safety and content moderation. Yet tech SROs appear even more toothless than their financial counterparts: they are organized by the firms themselves and lack any government-sanctioned regulatory authority. Even so, some companies' recent actions, such as deplatforming Donald Trump in the wake of the January 6 insurrection, have made it appear as though self-regulation is already working.

But history has shown time and again (PDF) that self-regulation was insufficient to mitigate financial risks. So why should it work for tech? In fact, history is already repeating itself on social media. Gaps in content moderation allow malicious content to jump from fringe sites into the mainstream. Self-regulation isn't closing those gaps and likely won't in the future, because niche and fringe sites explicitly refuse to self-regulate. Such sites will always remain unbound by any industry standard, serving as ground zero for new conspiracy theories and misinformation. For example, the New York Times reported that election falsehoods posted on Donald Trump's fringe Truth Social platform were seen by over one million users on at least a dozen other sites, all within just 48 hours.

Instead of trusting in self-regulation, the United States could use its regulation of Wall Street after 2008 as a roadmap for regulating tech. The 2010 Dodd-Frank Act, for example, mandated (PDF) that financial systemic risk be monitored by a regulatory oversight body, now called the Financial Stability Oversight Council (FSOC). The Dodd-Frank framework offers several benefits for tech regulation. First, the framework mandates transparency, as audits preserve companies' privacy while ensuring fair comparisons by the regulator. And second, the framework offers flexibility, as regulatory standards can evolve over time to adapt to new technologies, such as deepfakes, and to new uses of existing technology, like the emoji drug code.

The European Union has already shown what a tech version of Dodd-Frank might look like with the passage of its landmark Digital Services Act. The DSA contains several stipulations that echo Dodd-Frank: the creation of an independent Board for Digital Services Coordinators, much like the FSOC; annual audits of tech platforms, which could resemble the Fed's stress tests; access to platforms' data, analogous to regulators' access to bank data; and the designation of Very Large Online Platforms that are subject to additional regulation, similar to the FSOC's designation of Systemically Important Financial Institutions.

Putting risk at the center of a regulatory framework forces society to accept what is feasible, rather than what is ideal: the possible, not the impossible. Online platforms connect users to people around the world, to the benefit of millions. But enjoying those benefits comes with an ever-present and ever-evolving risk of encountering toxic ideas and harmful speech. As a society, we might ask how much of that risk we are willing to tolerate, while drawing on recent history for lessons in regulating and managing it.


James Marrone is an economist at the nonprofit, nonpartisan RAND Corporation.

A version of this commentary originally appeared on Barron's on May 19, 2023.

Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.