When another lawyer told me recently that he used ChatGPT to get a jump start on a work assignment, I realized that while I had heard so much about this program, as an IP lawyer I had never actually used it. The next day, I opened up ChatGPT and asked my first question: What are you?
I am ChatGPT, an AI language model developed by OpenAI. I have been trained on a vast amount of text from the internet, allowing me to generate human-like responses to a wide range of questions and prompts. My purpose is to assist users in generating text-based content, answering questions, and engaging in conversations on various topics. Is there anything specific you would like to know or discuss?
I was fascinated, and as I began to ask it questions, give it assignments, and check the results, I realized why so many legal issues about generative AI have emerged. The versatility of ChatGPT is attractive, and the thoroughness of its human-like responses makes you want to trust its answers. But, as always, with the emergence of more sophisticated technology come new challenges.
For example, one problematic area is "hallucinations," or false information generated in response to broad generative AI queries. In the Southern District of New York, a lawyer is facing sanctions for including in his brief citations to fake cases generated by ChatGPT. Even more startling, ChatGPT repeatedly assured the lawyer that the cases were authentic when he asked it to confirm each one.
Generative AI's hallucination tendencies can also land companies in legal trouble. In fact, the first-ever defamation lawsuit against OpenAI was filed over the fabrication of false legal accusations. When a reporter asked ChatGPT to summarize a legal complaint he was reporting on, ChatGPT responded that Mark Walters, a radio host, was a party to the suit and was accused of defrauding and embezzling funds from a nonprofit. Not only were these statements false, but ChatGPT also created a bogus case number. The reporter did not include the information in his article, and he contacted another party to confirm that these details were false.
ChatGPT also presents a risk to intellectual property rights. More specifically, ChatGPT's ability to write code can threaten a company's ownership rights in its own product if ChatGPT is used to generate code for that product. For instance, it can call into question the status and protection of the generated content and who holds the right to use it. It can also call into question whether AI-generated work can be recognized as copyrightable at all, given that it does not satisfy a core requirement of copyright: that the work be a product of human creation.
Moreover, any intellectual property generated by ChatGPT could be in jeopardy if improperly obtained data was used during the generative process. The Federal Trade Commission (FTC) has implemented an enforcement mechanism called "algorithmic disgorgement." This mechanism requires a company to either delete illegally obtained data or delete the algorithm or models the company developed using the data.
Algorithmic disgorgement also extends to the output generated from these models, such as information generated by ChatGPT. In practice, this mechanism may force users to destroy work product created with AI models if those models were built on illegally obtained data.
The FTC first employed this mechanism in a settlement with Cambridge Analytica, which was alleged to have used deceptive practices to gather personal information from Facebook users for advertising purposes. Cambridge Analytica was ordered to destroy all information, including algorithms or equations, that originated from the improperly obtained data.
I now think that when I asked ChatGPT what it was, its description of itself was missing a few things. Namely, that it is still a young, though innovative, technology that should be used cautiously. If people intend to use the technology for anything beyond simple enjoyment, they should be aware of the risks and consult legal counsel.
The Brand and Content Protection Group in Venable's IP Division is continually studying and tracking these issues on behalf of our clients. Please reach out to us with any questions, and we will continue to bring you updates here on new developments.