> humans simply didn’t write this way prior to recent years.
Aren’t LLMs evidence that humans did write this way? They’re literally trained to copy humans on vast swaths of human written content. What evidence do you have to back up your claim?
Decades of reading experience of blog posts and newspaper articles. They simply never contained this many section headers or bolded phrases after bullet points, and especially not of the "The [awkward noun phrase]" format heavily favored by LLMs.
So what would explain AI writing a certain way when there is no mechanism for it, given that the way LLMs work is to favor what humans do? LLM training includes far more writing samples than you’ve ever seen. Maybe you have a biased sample, or maybe you’re misremembering? The article’s style is called an outline; we were taught in school to write the way the author did.
Why did LLMs add tons of emoji to everything for a while, and then dial back on it more recently?
The problem is they were trained on everything, yet the common style for a blog post previously differed from the common style of a technical book, which differed from the common style of a throwaway Reddit post, etc.
There's a weird baseline assumption that AI outputs "good" or "professional" style, but this simply isn't the case. Good writing doesn't repeat the same basic phrasing for every section header, or insert tons of unnecessary headers in the first place.
Yes, training data is a plausible answer to your own question there, as well as to mine above. And that explanation does not support your claim that AI writes differently than humans; it only suggests training sets vary.
Repeating your thesis three times in slightly different words was taught in school. Using outline style and headings to make your points clear was taught in school. People have been writing like this for a long time.
If your argument depends on your subjective idea of “good writing”, that may explain why you think AI & blog styles are changing; they are changing. That still doesn’t suggest that LLMs veer from what they see.
All that aside, as other people have mentioned already, whether someone is using AI is irrelevant; believing you can detect it and accusing people of using AI is quickly becoming a lazy trope, and is often incorrect to boot.