Amid a lawsuit filed by the New York Times (NYT) alleging that the generative artificial intelligence (AI) chatbot ChatGPT infringed its copyrights, OpenAI has asked that the suit be dismissed, accusing the NYT of hacking.
According to Reuters and other sources on the 27th, OpenAI claimed in a filing submitted the previous day to the U.S. District Court for the Southern District of New York that “the NYT paid someone to hack OpenAI products such as ChatGPT and used this to generate 100 examples of copyright infringement.”
It went on to say that “it took the NYT tens of thousands of attempts to produce these highly anomalous results, and the hacking relied on deceptive prompts that blatantly violate our terms of use.”
CNBC, a U.S. business news outlet, explained that the “hacking” cited by OpenAI refers to “red teaming,” a method used by AI trust and safety teams, ethicists, academics, and tech companies to probe the vulnerabilities of AI systems.
“Red teaming,” a term from the security field, refers to simulated attacks carried out to assess and improve an organization’s security posture. OpenAI claimed that the NYT used this method to allege that ChatGPT infringed its copyrights.
The NYT has not commented on this matter.
OpenAI previously claimed in documents submitted to the court last month that the NYT deliberately manipulated ChatGPT to trigger bugs and used the results as the basis for its copyright infringement claims.
The company called the NYT’s lawsuit meritless, saying it respects the NYT’s long history and still hopes to build a constructive partnership with the paper.
The NYT filed the lawsuit last December, alleging that millions of articles it published were used to train ChatGPT and claiming that “the defendant is responsible for billions of dollars in statutory and actual damages related to the unauthorized reproduction and use of the NYT’s uniquely valuable copyrighted works.”