Yes, Section 230 Should Protect ChatGPT And Other Generative AI Tools

from the no-this-wasn’t-written-by-chatgpt dept

Question Presented: Does Section 230 Protect Generative AI Products Like ChatGPT?

As the buzz around Section 230 and its application to algorithms intensifies in anticipation of the Supreme Court’s ruling, ‘generative AI’ has soared in popularity among users and developers, raising the question: does Section 230 protect generative AI products like ChatGPT? Matt Perault, a prominent technology policy scholar, thinks not, as he discussed in his recently published Lawfare article: Section 230 Won’t Protect ChatGPT.

Perault’s main argument runs as follows: because of the nature of generative AI, ChatGPT operates as a co-creator (or material contributor) of its outputs and therefore could be considered the ‘information content provider’ of problematic results, making it ineligible for Section 230 protection. The co-authors of Section 230, former Representative Chris Cox and Senator Ron Wyden, have also suggested that their law doesn’t grant immunity to generative AI.

I respectfully disagree with both the co-authors of Section 230 and Perault, and offer the counterargument: Section 230 does (and should) protect products like ChatGPT.

It is my opinion that generative AI does not demand exceptional treatment, especially since, as it currently stands, generative AI is not exceptional technology (an understandably provocative take to which we’ll soon return).

But first, a refresher on Section 230.

Section 230 Protects Algorithmic Curation and Augmentation of Third-Party Content 

Recall that Section 230 says websites and users are not liable for the content they did not create, in whole or in part. To evaluate whether the immunity applies, the Barnes v. Yahoo! Court provided a widely accepted three-part test:

  1. The defendant is an interactive computer service; 
  2. The plaintiff’s claim treats the defendant as a publisher or speaker; and
  3. The plaintiff’s claim derives from content the defendant did not create. 

The first prong is not typically contested. Indeed, the latter prongs are usually the flashpoints of most Section 230 cases. And in the case of ChatGPT, the third prong seems especially controversial.

Section 230’s statutory language states that a website becomes an information content provider when it is “responsible, in whole or in part, for the creation or development” of the content at issue. In their recent Supreme Court case challenging Section 230’s boundaries, the Gonzalez Petitioners assert that the use of algorithms to manipulate and display third-party content precludes Section 230 protection because the algorithms, as developed by the defendant website, convert the defendant into an information content provider. But existing precedent suggests otherwise.

For example, the Court in Fair Housing Council of San Fernando Valley v. Roommates.com (aka ‘the Roommates case’), a case often invoked to evade Section 230, held that it is not enough for a website to merely augment the content at issue to be considered a co-creator or developer. Rather, the website must have materially contributed to the content’s alleged unlawfulness. Or, as the majority put it, “[i]f you don’t encourage illegal content, or design your website to require users to input illegal content, you will be immune.”

The majority also expressly distinguished Roommates.com from “ordinary search engines,” noting that unlike Roommates.com, search engines like Google do not use unlawful criteria to limit the scope of searches conducted (or results delivered), nor are they designed to achieve illegal ends. In other words, the majority suggests that websites retain immunity when they provide neutral tools to facilitate user expression.

While “neutrality” brings its own slew of legal ambiguities, the Roommates Court offers some clarity, suggesting that websites with a more hands-off approach to content facilitation are safer than websites that guide, encourage, coerce, or demand that users produce unlawful content.

For example, while the Court rejected Roommates.com’s Section 230 defense for its allegedly discriminatory drop-down options, it simultaneously upheld Section 230’s application to the “additional comments” field offered to Roommates.com users. The “additional comments” were separately protected because Roommates.com did not solicit, encourage, or demand that users provide unlawful content via the web form. In other words, a blank web form that simply asks for user input is a neutral tool, eligible for Section 230 protection, regardless of how the user actually uses it.

The Barnes Court would later reiterate the neutral-tools argument, noting that providing neutral tools to carry out what may be unlawful or illicit searches does not amount to ‘development’ for the purposes of Section 230. Hence, while the ‘material contribution’ test is rather nebulous (especially for emerging technologies), it is relatively clear that a website must do something more than merely augment, curate, and display content (algorithmically or otherwise) to transform into the creator or developer of third-party content.

The Court in Kimzey v. Yelp offers further clarification: 

“[T]he material contribution test makes a ‘crucial distinction between, on the one hand, taking actions (traditional to publishers) that are necessary to the display of unwelcome and actionable content and, on the other hand, responsibility for what makes the displayed content illegal or actionable.’”

So, what does this mean for ChatGPT?

The Case For Extending Section 230 Protection to ChatGPT

In his line of questioning during the Gonzalez oral arguments, Justice Gorsuch called into question Section 230’s application to generative AI technologies. But before we can even address the question, we need to spend some time understanding the technology. 

Products like ChatGPT use large language models (LLMs) to produce reasonable continuations of human-sounding text. In other words, as discussed here by Stephen Wolfram, the renowned computer scientist, mathematician, and creator of WolframAlpha, ChatGPT’s core function is to “continue text in a reasonable way, based on what it’s seen from the training it’s had (which consists in looking at billions of pages of text from the web, etc).”
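
To see how mechanical that core function is, here is a deliberately tiny sketch of the continuation loop in Python. The vocabulary and scoring function are invented stand-ins; in a real LLM, a trained neural network scores tens of thousands of candidate tokens at every step:

```python
# Minimal sketch of the "reasonable continuation" loop Wolfram describes.
# The probability table below is a toy stand-in for the neural net.
import random

def next_token_probabilities(context: list[str]) -> dict[str, float]:
    # Stand-in for the model: in a real LLM this is a learned function
    # of the entire context window, not a hand-written rule.
    if context and context[-1] == "Section":
        return {"230": 0.9, "231": 0.1}
    return {"Section": 0.5, "the": 0.3, "a": 0.2}

def continue_text(prompt: list[str], max_tokens: int = 5) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probabilities(tokens)
        # Sample one next token in proportion to its probability,
        # append it, and feed the extended sequence back in. Repeat.
        choice = random.choices(list(probs), weights=list(probs.values()))[0]
        tokens.append(choice)
    return tokens

print(" ".join(continue_text(["Immunity", "under"])))
```

The structure is the point: the system picks one plausible next token at a time and feeds the result back in, over and over.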

While ChatGPT is impressive, the science behind it is not necessarily remarkable. Computing technology reduces complex mathematical computations into step-by-step functions that the computer can then solve at tremendous speeds. As humans, we do this all the time, just much slower than a computer. For example, when we’re asked to do non-trivial calculations in our heads, we start by breaking up the computation into smaller functions on which mental math is easily performed until we arrive at the answer.
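
As a trivial illustration of that reduction (with made-up numbers), the mental-math shortcut most of us use for multiplication decomposes the same way in code:

```python
# Reducing one "hard" computation into small steps that a machine (or a
# person doing mental math) can perform quickly: 47 * 23 becomes two
# easy multiplications and an addition.
def multiply_stepwise(a: int, b: int) -> int:
    tens, ones = divmod(b, 10)          # 23 -> (2, 3)
    partial_tens = a * tens * 10        # 47 * 20 = 940
    partial_ones = a * ones             # 47 * 3  = 141
    return partial_tens + partial_ones  # 940 + 141 = 1081

assert multiply_stepwise(47, 23) == 47 * 23
```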

Tasks that we assume are fundamentally impossible for computers to solve are said to involve ‘irreducible computations’ (i.e. computations that cannot simply be broken up into smaller mathematical functions without human aid). Artificial intelligence relies on neural networks to learn and then ‘solve’ such computations. ChatGPT approaches human queries the same way. Except, as Wolfram notes, it turns out that those queries are not as sophisticated to compute as we may have thought:

“In the past there were plenty of tasks—including writing essays—that we’ve assumed were somehow “fundamentally too hard” for computers. And now that we see them done by the likes of ChatGPT we tend to suddenly think that computers must have become vastly more powerful—in particular surpassing things they were already basically able to do (like progressively computing the behavior of computational systems like cellular automata).

But this isn’t the right conclusion to draw. Computationally irreducible processes are still computationally irreducible, and are still fundamentally hard for computers—even if computers can readily compute their individual steps. And instead what we should conclude is that tasks—like writing essays—that we humans could do, but we didn’t think computers could do, are actually in some sense computationally easier than we thought.

In other words, the reason a neural net can be successful in writing an essay is because writing an essay turns out to be a “computationally shallower” problem than we thought. And in a sense this takes us closer to “having a theory” of how we humans manage to do things like writing essays, or in general deal with language.”

In fact, ChatGPT is even less sophisticated under the hood. As Wolfram asserts:

“[In] ChatGPT as it currently is, the situation is actually much more extreme, because the neural net used to generate each token of output is a pure “feed-forward” network, without loops, and therefore has no ability to do any kind of computation with nontrivial “control flow.””

Put simply, ChatGPT uses predictive algorithms and an array of data made up entirely of publicly available information online to respond to user-created inputs. The technology is not sophisticated enough to operate outside of human-aided guidance and control. This means that ChatGPT (and similarly situated generative AI products) is functionally akin to “ordinary search engines” and predictive technologies like autocomplete.
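
For a concrete sense of the “pure feed-forward” point from the Wolfram quote above, here is a minimal two-layer pass in Python. The weights and inputs are toy values invented for illustration; they are not drawn from any real model:

```python
# A pure feed-forward pass in the sense Wolfram describes: data flows
# through fixed layers exactly once, with no loops or branching
# "control flow." All weights here are made-up toy values.
import math

def layer(inputs, weights, biases):
    # Each output is a weighted sum of the inputs pushed through a
    # fixed nonlinearity; the same arithmetic runs for every input.
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def feed_forward(x):
    hidden = layer(x, weights=[[0.5, -0.2], [0.1, 0.8]], biases=[0.0, 0.1])
    return layer(hidden, weights=[[1.0, -1.0]], biases=[0.2])

# One pass in, one result out; no iteration, memory, or deliberation.
print(feed_forward([0.3, 0.7]))
```

In a real LLM, each output token is produced by a single pass of this shape, just at vastly larger scale.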

Now we apply Section 230. 

For the most part, courts have consistently applied Section 230 to algorithmically generated outputs. For example, the Sixth Circuit in O’Kroley v. Fastcase Inc. upheld Section 230 for Google’s automatically generated snippets that summarize and accompany each Google result. The Court notes that even though Google’s snippets could be considered a separate creation of content, the snippets derive entirely from third-party information found at each result. Indeed, the Court concludes that contextualization of third-party content is in fact a function of an ordinary search engine.
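
The intuition is easy to capture in a hypothetical sketch: a snippet generator writes nothing new, it merely excerpts the third-party page around the query terms. The page text and function below are invented for illustration:

```python
# Sketch of snippet generation: the "new" text is just an excerpt of
# the third-party page surrounding the query terms.
def make_snippet(page_text: str, query: str, width: int = 40) -> str:
    i = page_text.lower().find(query.lower())
    if i == -1:
        return page_text[:width] + "..."
    start = max(0, i - width // 2)
    end = i + len(query) + width // 2
    return "..." + page_text[start:end] + "..."

page = ("The court held that Section 230 bars the claim because "
        "the snippet derives entirely from third-party content.")
print(make_snippet(page, "Section 230"))
```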

Similarly, in Obado v. Magedson, the Court applied Section 230 to search result snippets:

Plaintiff also argues that Defendants displayed through search results certain “defamatory search terms” like “Dennis Obado and criminal” or posted allegedly defamatory images with Plaintiff’s name. As Plaintiff himself has alleged, these images at issue originate from third-party websites on the Internet which are captured by an algorithm used by the search engine, which uses neutral and objective criteria. Significantly, this means that the images and links displayed in the search results simply point to content generated by third parties. Thus, Plaintiff’s allegations that certain search terms or images appear in response to a user-generated search for “Dennis Obado” into a search engine fails to establish any sort of liability for Defendants. These results are simply derived from third-party websites, based on information provided by an “information content provider.” The linking, displaying, or posting of this material by Defendants falls within CDA immunity.

The Court also nods to Roommates:

“None of the relevant Defendants used any sort of unlawful criteria to limit the scope of searches conducted on them; “[t]herefore, such search engines play no part in the ‘development’ of the unlawful searches” and are acting purely as an interactive computer service…

The Court goes further, extending Section 230 to autocomplete (i.e. when the service at issue uses predictive algorithms to suggest and preempt a user’s query): 

“suggested search terms auto-generated by a search engine do not remove that search engine from the CDA’s broad protection because such auto-generated terms “indicates only that other websites and users have connected plaintiff’s name” with certain terms.”
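
That holding is easy to see in miniature. In the hypothetical sketch below, every suggestion is simply a frequent past query from other users that shares the typed prefix; the query log is invented:

```python
# Sketch of why autocomplete output is "derived from third parties":
# suggestions are just the most frequent past user queries that share
# the typed prefix.
from collections import Counter

# Invented third-party query log: query -> how many users ran it.
query_log = Counter({
    "section 230 reform": 42,
    "section 230 cases": 17,
    "section 230 text": 9,
    "first amendment": 30,
})

def autocomplete(prefix: str, n: int = 2) -> list[str]:
    # Rank only the queries that other users actually submitted.
    matches = Counter({q: c for q, c in query_log.items()
                       if q.startswith(prefix)})
    return [q for q, _ in matches.most_common(n)]

print(autocomplete("section 230"))  # every suggestion originates with other users
```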

Like Google Search, ChatGPT is entirely driven by third-party input. In other words, ChatGPT does not invent, create, or develop outputs absent any prompting from an information content provider (i.e. a user). Further, nothing on the service expressly or impliedly encourages users to submit unlawful queries. In fact, OpenAI continues to implement guardrails that force ChatGPT to ignore requests that would demand problematic and/or unlawful responses. Compare this to Google Search, which may actually still provide a problematic or even unlawful result. Perhaps ChatGPT actually improves the baseline for ordinary search functionality.

Indeed, ChatGPT essentially functions like the “additional comments” web form in Roommates. And while ChatGPT may “transform” user input into a result that responds to the user-driven query, that output is entirely composed of third-party information scraped from the web. Without more, this transformation is simply an algorithmic augmentation of third-party content (much like Google’s snippets). And as discussed, algorithmic compilations or augmentations of third-party content are not enough to transform the service into an information content provider (e.g. Roommates; Batzel v. Smith; Dyroff v. The Ultimate Software Group, Inc.; Force v. Facebook). 

The Limit Does Exist

Of course, Section 230’s coverage is not without its limits. There’s no doubt that future generative AI defendants, like OpenAI, will face an uphill battle in persuading a court. Not only do defendants face the daunting challenge of explaining generative AI technologies to less technologically savvy judges, but the current judicial swirl around Section 230 and algorithms also does them no favors.

For example, the Supreme Court could very well hand down a convoluted opinion in Gonzalez that introduces ambiguity as to when Section 230 applies to algorithmic curation and augmentation. Such an opinion would only serve to undermine the precedent discussed above. Indeed, future defendants may find themselves embroiled in convoluted debates about AI’s capacity for neutrality. In fact, it would be intellectually dishonest to ignore emerging common law developments that withhold Section 230 protection from claims alleging dangerous or defective product designs (e.g. Lemmon v. Snap, A.M. v. Omegle, Oberdorf v. Amazon).

Further, the Fourth Circuit’s recent decision in Henderson v. Public Data could also prove problematic for future AI defendants, as it imposes liability for publisher activities that go beyond “traditional editorial functions” (which could include any and all publisher functions performed via algorithms).

Lastly, as we saw in the Meta/DOJ settlement regarding Meta’s discriminatory algorithmic targeting of housing advertisements, AI companies cannot easily avoid liability when they materially contribute to the unlawfulness of the result. If OpenAI were to hard-code ChatGPT with unlawful responses, Section 230 would likely be unavailable. However, as you might imagine, this is a non-trivial distinction.

Public Policy Demands Section 230 Protections for Generative AI Technologies

Section 230 was initially established with the recognition that the online world would undergo frequent advancements, and that the law must accommodate these changes to promote a thriving digital ecosystem. 

Generative AI is the latest iteration of web technology that has enormous potential to bring about substantial benefits for society and transform the way we use the Internet. And it’s already doing good. Generative AI is currently used in the healthcare industry, for instance, to improve medical imaging and to speed up drug discovery and development. 

As discussed, courts have developed precedent in favor of Section 230 immunity for online services that solicit or encourage users to create and provide content. Courts have also extended the immunity to online services that facilitate the submission of user-created content. From a legal standpoint, generative AI tools are no different from any other online service that encourages user interaction and contextualizes third-party results.

From a public policy perspective, it is crucial that courts uphold Section 230 immunity for generative AI products. Otherwise, we risk foreclosing the technology’s true potential. Today, there are tons of variations of ChatGPT-like products offered by independent developers and computer scientists who are likely unequipped to deal with the inundation of litigation that Section 230 typically preempts.

In fact, generative AI products are arguably more vulnerable to frivolous lawsuits because they depend entirely upon whatever queries or instructions their users may provide, malicious or otherwise. Without Section 230, developers of generative AI services must anticipate and guard against every type of query that could cause harm.

Indeed, thanks to Section 230, companies like OpenAI are doing just that by providing guardrails that limit ChatGPT’s responses to malicious queries. But those guardrails are neither comprehensive nor perfect. And as with all other efforts to moderate awful online content, the elimination of Section 230 could discourage generative AI companies from implementing said guardrails in the first place; a countermove that would enable users to prompt LLMs with malicious queries to bait out unlawful responses subject to litigation. In other words, plaintiffs could transform ChatGPT into their very own perpetual litigation machine.
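
For a sense of what even the crudest guardrail looks like, here is a hypothetical pre-response filter. The denylist, refusal message, and generate hook are all invented for illustration; real moderation systems (OpenAI’s included) rely on trained classifiers and policy models rather than substring matches:

```python
# Toy sketch of a pre-response guardrail: refuse queries matching a
# denylist before the model ever generates text. Hypothetical terms
# and refusal message; not OpenAI's actual implementation.
DENYLIST = ("build a weapon", "defame my")

def guarded_respond(query: str, generate) -> str:
    if any(term in query.lower() for term in DENYLIST):
        return "I can't help with that request."
    return generate(query)

# 'generate' stands in for the underlying model call.
print(guarded_respond("How do I defame my neighbor?", generate=lambda q: "..."))
```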

And as Perault rightfully warns: 

“If a company that deploys an LLM can be dragged into lengthy, costly litigation any time a user prompts the tool to generate text that creates legal risk, companies will narrow the scope and scale of deployment dramatically. Without Section 230 protection, the risk is vast: Platforms using LLMs would be subject to a wide array of suits under federal and state law. Section 230 was designed to allow internet companies to offer uniform products throughout the country, rather than needing to offer a different search engine in Texas and New York or a different social media app in California and Florida. In the absence of liability protections, platforms seeking to deploy LLMs would face a compliance minefield, potentially requiring them to alter their products on a state-by-state basis or even pull them out of certain states entirely…

…The result would be to limit expression—platforms seeking to limit legal risk will inevitably censor legitimate speech as well. Historically, limits on expression have frustrated both liberals and conservatives, with those on the left concerned that censorship disproportionately harms marginalized communities, and those on the right concerned that censorship disproportionately restricts conservative viewpoints.

The risk of liability could also impact competition in the LLM market. Because smaller companies lack the resources to bear legal costs like Google and Microsoft may, it is reasonable to assume that this risk would reduce startup activity.”

Hence, regardless of how we feel about Section 230’s applicability to AI, we will be forced to reckon with the latest iteration of Masnick’s Impossibility Theorem: there is no content moderation system that can meet the needs of all users. The boundless nature of human awfulness mirrors the constant challenge that social media companies face with content moderation. The question is whether LLMs can improve where social media cannot.
