A recent news article published by CNN reports that Meta, Twitter, Microsoft and others have urged the Supreme Court not to allow lawsuits against tech algorithms.
Big Tech’s liability
A wide range of businesses, internet users, academics and even human rights experts defended Big Tech’s liability shield Thursday in a pivotal Supreme Court case about YouTube algorithms, with some arguing that excluding AI-driven recommendation engines from federal legal protections would cause sweeping changes to the open internet.
The diverse group weighing in at the Court ranged from major tech companies such as Meta, Twitter and Microsoft to some of Big Tech’s most vocal critics, including Yelp and the Electronic Frontier Foundation. Even Reddit and a collection of volunteer Reddit moderators got involved.
In friend-of-the-court filings, the companies, organizations and individuals said the federal law whose scope the Court could potentially narrow in the case — Section 230 of the Communications Decency Act — is vital to the basic function of the web. Section 230 has been used to shield all websites, not just social media platforms, from lawsuits over third-party content.
Gonzalez v. Google
The question at the heart of the case, Gonzalez v. Google, is whether Google can be sued for recommending pro-ISIS content to users through its YouTube algorithm; the company has argued that Section 230 precludes such litigation. But the plaintiffs in the case, the family members of a person killed in a 2015 ISIS attack in Paris, have argued that Google can be held liable under a US antiterrorism law for YouTube's algorithmic recommendations.
In their filing, Reddit and the Reddit moderators argued that a ruling enabling litigation against tech-industry algorithms could lead to future lawsuits against even non-algorithmic forms of recommendation, and potentially targeted lawsuits against individual internet users.
“The entire Reddit platform is built around users ‘recommending’ content for the benefit of others by taking actions like upvoting and pinning content,” their filing read. “There should be no mistaking the consequences of petitioners’ claim in this case: their theory would dramatically expand Internet users’ potential to be sued for their online interactions.”
Yelp, a longtime antagonist to Google, argued that its business depends on serving relevant and non-fraudulent reviews to its users, and that a ruling creating liability for recommendation algorithms could break Yelp’s core functions by effectively forcing it to stop curating all reviews, even those that may be manipulative or fake.
“If Yelp could not analyze and recommend reviews without facing liability, those costs of submitting fraudulent reviews would disappear,” Yelp wrote. “If Yelp had to display every submitted review … business owners could submit hundreds of positive reviews for their own business with little effort or risk of a penalty.”
Section 230 ensures platforms can moderate content in order to present the most relevant data to users out of the huge amounts of information that get added to the internet every day, Twitter argued.
“It would take an average user approximately 181 million years to download all data from the web today,” the company wrote.
A new interpretation of Section 230
If the Supreme Court were to advance a new interpretation of Section 230 that safeguarded platforms’ right to remove content, but excluded protections on their right to recommend content, it would open up broad new questions about what it means to recommend something online, Meta argued in its filing.
“If merely displaying third-party content in a user’s feed qualifies as ‘recommending’ it, then many services will face potential liability for virtually all the third-party content they host,” Meta wrote, “because nearly all decisions about how to sort, pick, organize, and display third-party content could be construed as ‘recommending’ that content.”
A ruling finding that tech platforms can be sued for their recommendation algorithms would jeopardize GitHub, the vast online code repository used by millions of programmers, said Microsoft.
“The feed uses algorithms to recommend software to users based on projects they have worked on or showed interest in previously,” Microsoft wrote. It added that for “a platform with 94 million developers, the consequences [of limiting Section 230] are potentially devastating for the world’s digital infrastructure.”
Microsoft’s search engine Bing and its social network, LinkedIn, also enjoy algorithmic protections under Section 230, the company said.
According to New York University’s Stern Center for Business and Human Rights, it is virtually impossible to design a rule that singles out algorithmic recommendation as a meaningful category for liability, and such a rule could even “result in the loss or obscuring of a massive amount of valuable speech,” particularly speech belonging to marginalized or minority groups.
“Websites use ‘targeted recommendations’ because those recommendations make their platforms usable and useful,” the NYU filing said. “Without a liability shield for recommendations, platforms will remove large categories of third-party content, remove all third-party content, or abandon their efforts to make the vast amount of user content on their platforms accessible. In any of these situations, valuable free speech will disappear—either because it is removed or because it is hidden amidst a poorly managed information dump.”
Source: CNN