
If they're reproducing NY Times articles in full, then that is non-transformative. That's the point of the case.


> That's the point of the case.

No, it's not. See the PDF of the actual case below.

The case is largely about OpenAI training on NY Times articles without permission. They do allege that it can reproduce their articles verbatim at times, but that's not the central allegation, as it's obviously a bug and not intentional infringement. You have to get all the way down to item 98 before they even allege it.

https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec20...


They allege it in point 4?

"Defendants have refused to recognize this protection. Powered by LLMs containing copies of Times content, Defendants’ GenAI tools can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style, as demonstrated by scores of examples. See Exhibit J. These tools also wrongly attribute false information to The Times."


You're right. No idea how I missed that. Thanks!

Still, that's a bug, not a feature. OpenAI will just respond that it's already been fixed and pay damages of $2.50 or something to cover the few times it happened under very specific conditions.


Just to double-check that it was fixed, I asked ChatGPT what was on the front page of the New York Times today, and I got a summary with paraphrased titles. It doesn't reproduce anything exactly (not even the headlines).

Interestingly, the summary is made by taking screenshots of a (probably unauthorized) PDF it found somewhere on the internet. It then cites that sketchy PDF as the source rather than linking back to the original NY Times articles.

If I were the NYT I would still be plenty pissed off.

ChatGPT's reference: https://d2dr22b2lm4tvw.cloudfront.net/ny_nyt/2025-11-13/fron... via https://frontpages.freedomforum.org/



