In a recent development, The New York Times has responded to OpenAI's claim that the newspaper "hacked" ChatGPT in order to build its lawsuit against the AI company. The Times strongly denied the accusation in a court filing, calling it false and irrelevant.
OpenAI alleged that The New York Times had manipulated ChatGPT into producing excerpts of NYT articles, a claim the newspaper rejected. The Times defended its approach, saying it had simply prompted ChatGPT to reproduce memorized training data, including NYT articles, in order to document potential copyright infringement.
The NYT further defended its tactics by pointing to instances in which users prompted ChatGPT to reproduce entire articles and bypass paywalls. However, the newspaper acknowledged that it does not know how many of its articles were used to train AI models, because OpenAI has not disclosed the datasets involved.
OpenAI had previously disabled a ChatGPT feature that allowed users to bypass paywalls, a move that drew complaints from users. The NYT cited public reports of users employing ChatGPT in this way as evidence that contradicts OpenAI's claims and reinforces the need for further discovery in the ongoing lawsuit.
Overall, The New York Times maintains that its intentions were not to “hack” ChatGPT but rather to monitor potential copyright infringement. The lawsuit between the newspaper and OpenAI continues to unfold as both parties stand by their respective positions. Stay tuned for further updates on this developing story.