According to an analysis by Matt Perault, a researcher at the University of North Carolina at Chapel Hill, OpenAI's wildly popular ChatGPT is unlikely to receive the same shield from legal liability over content that social networks enjoy. Under Section 230, social networks are not liable for content published on their platforms; that responsibility rests with the users who post it. But the law apparently will not extend to what OpenAI's creation generates.
“Courts will likely find that ChatGPT and other LLMs are content providers,” Perault explained, referring to the large language models (LLMs) that ChatGPT and many similar AI programs use. Citing Section 230 of Title 47 of the United States Code, enacted by Congress in 1996, he predicted:
“As a result, companies that use these generative AI tools, such as OpenAI, Microsoft, and Google, cannot use Section 230 when it comes to AI-generated content.”
Section 230 has so far been used by Meta and other tech companies as a shield to absolve themselves of legal liability for content posted by users.
As Perault explains, under current law “an interactive computer service (a content host) is not liable for content posted by an information content provider (a content creator),” because Section 230 provides that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
He went on to note that this arrangement has caused consternation among US lawmakers, who, for various reasons, are unhappy with the content moderation policies of Twitter and other companies. Even so, Perault believes ChatGPT’s AI model is unlikely to receive Section 230 protection. “Courts will likely find that ChatGPT and other LLMs are exempt from Section 230 protections,” he explains, “because they are information content providers rather than interactive computer services.”