Algorithm accountability is easier said than done


“Ethical Markets welcomes all these actions to rein in social media, following on my 5 steps in “Steering Social Media Toward Sanity” and our paper “Can AI Algorithms Be Ethical?”, referencing IT expert Dave Lauer on our global Advisory Board.”

~ Hazel Henderson, Editor


By Mathew Ingram

Over the past several years, Congress has held a seemingly never-ending series of hearings on “Big Tech,” the handful of companies that shape much of what we see and do online: Facebook, Twitter, and Google. Congressional committees have looked into whether the platforms allowed foreign agents to influence the 2016 election, whether their algorithms suppress certain kinds of speech, and whether their products harm young women; in many cases, the hearings have also been a forum for grandstanding. This week saw the latest in the series, a hearing by the House Energy and Commerce Committee called “Holding Big Tech Accountable: Targeted Reforms to Tech’s Legal Immunity.” The subject of the hearing was a piece of legislation that has been an ace in the hole for the platforms in all of their other congressional appearances: Section 230 of the Communications Decency Act.

Section 230 protects electronic service providers from liability for the content posted by their users, even if that content is harmful, hateful, or misleading. For the past few years, pressure has built in Washington for lawmakers to find a way around it. That pressure came to a head in 2020, when then-president Donald Trump, who had complained about alleged censorship of conservative speech on social media, signed an executive order asking the Federal Communications Commission to do something about Section 230 (even though the agency has no legal authority to do so). Before he became president, Joe Biden said he believed Section 230 should be revoked immediately; since he took office, legislators have put forward a number of proposals attempting to do just that. A recent proposal from Democratic Senator Amy Klobuchar would carve out an exception for medical misinformation during a health crisis, making the platforms liable for distributing anything the government defines as untrue.

Republican members of Congress have introduced their own proposals for a host of other Section 230 carve-outs, aimed at forcing platforms to keep certain kinds of content (mostly conservative speech) while requiring them to remove other kinds, such as cyberbullying. This week’s hearing was held to consider a number of other bills aimed at weakening or even dismantling Section 230. They include one supported by four of the top Democratic members of the Energy and Commerce Committee, called the “Protecting Americans From Dangerous Algorithms Act,” which would open the platforms to lawsuits for making personalized recommendations that cause users harm. At least some of the hearing was taken up, as many previous ones have been, with statements from Republican members about how platforms like Facebook and Twitter allegedly censor conservative content, a claim that studies have repeatedly shown to be false.

Frances Haugen, the former Facebook staffer turned whistleblower who leaked thousands of documents to the Wall Street Journal and then to a consortium of other media outlets, has helped fuel the desire to hold the platforms to account. During her testimony this week, she took time to remind the committee that well-meaning efforts to do so can have unintended side effects. The 2018 law known as FOSTA-SESTA, for example, was designed to prevent sex trafficking, but Haugen noted that it also made things more difficult for sex workers and other vulnerable people. “I encourage you to talk to human rights advocates who can help provide context on how the last reform of 230 had dramatic impacts on the safety of some of the most vulnerable people in our society but has been rarely used for its original purpose,” she said, according to Mashable.

This message was echoed by others who testified at the hearing (the first of two; the second is scheduled for next week). “It’s irresponsible and unconscionable for lawmakers to rush toward further changes to Section 230 while actively ignoring human rights experts and the communities that were most impacted by the last major change to Section 230,” Evan Greer, director of Fight for the Future, told the committee. “The last misguided legislation that changed Section 230 got people killed. Congress needs to do its due diligence and legislate responsibly. Lives are at stake.” According to a recent review of the legislation by human-rights experts, FOSTA-SESTA has had “a chilling effect on free speech, has created dangerous working conditions for sex-workers, and has made it more difficult for police to find trafficked individuals.”

A number of critics of the more recent legislative attempts to do an end-run around Section 230 have also pointed to the difficulty of targeting the things that algorithms do. Platforms use a multitude of algorithms for different purposes: to recommend content to users, for instance, but also to sort it and filter it, and defining which of those functions are harmful, and why, is not easy. “I agree in principle that there should be liability, but I don’t think we’ve found the right set of terms to describe the processes we’re concerned about,” Jonathan Stray, a visiting scholar at the Berkeley Center for Human-Compatible AI, told the House subcommittee. “What’s amplification, what’s enhancement, what’s personalization, what’s recommendation?” If scientists and tech scholars struggle to answer those questions, it seems unlikely that Congress will fare any better.
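To see why those terms blur together, consider a minimal sketch of a toy engagement-driven feed ranker. Everything here (the function names, scoring weights, and data model) is hypothetical and purely illustrative; no real platform’s system is this simple:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    predicted_engagement: float  # a model's guess at clicks and shares

def rank_feed(posts: list[Post],
              user_topic_affinity: dict[str, float],
              blocked_topics: set[str]) -> list[Post]:
    """A toy ranker whose few lines are simultaneously 'filtering',
    'personalization', 'recommendation', and 'amplification'."""
    # Filtering: drop content the platform has decided to suppress.
    visible = [p for p in posts if p.topic not in blocked_topics]

    # Personalization: weight each post by this user's inferred interests.
    def score(post: Post) -> float:
        return post.predicted_engagement * user_topic_affinity.get(post.topic, 0.1)

    # Recommendation, and amplification: whatever scores highest rises to
    # the top, gets seen more, and so generates still more engagement.
    return sorted(visible, key=score, reverse=True)

# Hypothetical usage: one user's feed, ranked.
feed = rank_feed(
    posts=[Post("a1", "politics", 0.9), Post("b2", "cooking", 0.4)],
    user_topic_affinity={"politics": 0.8, "cooking": 0.3},
    blocked_topics={"spam"},
)
```

Which line is the “recommendation” and which is the “amplification”? In a sketch like this, the single sorting step arguably does both at once, which is Stray’s point: a statute that attaches liability to one of those words has to specify where, in code like this, the regulated behavior actually lives.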