"Defendants have refused to recognize this protection. Powered by LLMs containing
copies of Times content, Defendants’ GenAI tools can generate output that recites Times content
verbatim, closely summarizes it, and mimics its expressive style, as demonstrated by scores of
examples. See Exhibit J. These tools also wrongly attribute false information to The Times."
Still, that's a bug, not a feature. OpenAI will just respond that it's already been fixed and pay them damages of $2.50 or something to cover the few times it happened under very specific conditions.
Just to double-check that it was fixed, I asked ChatGPT what was on the front page of The New York Times today, and I got a summary with paraphrased titles. It doesn't reproduce anything exactly (not even the headlines).
Interestingly, the summary is made by taking screenshots of a (probably illegal) PDF it found somewhere on the internet. It then cites that sketchy PDF as the source rather than linking back to the original NY Times articles.
If I were the NYT I would still be plenty pissed off.
"Defendants have refused to recognize this protection. Powered by LLMs containing copies of Times content, Defendants’ GenAI tools can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style, as demonstrated by scores of examples. See Exhibit J. These tools also wrongly attribute false information to The Times."