Interesting, because I was taught in writing classes to always have a "conclusion" where you summarize and restate your thesis for emphasis and focus. That AI does this feels like a natural result of it being trained to emulate what humans do. If people think my writing is AI-driven because of that, that's quite unfortunate. If we have to start introducing errors or mistakes into our writing so people don't assume it's AI, that seems like a quick race to the bottom.
I wonder what the tech blog meta will shape up to be in a couple years.