> And people are also still clearly confusing "isn't human or conscious" with "can't possibly create new logical thoughts or come to new logical conclusions i.e. do intellectual labor" when there is a plethora of evidence at this point that the latter is, in fact, the truth.
I'm not sure if you mean that as a dig or not, but if you're referring to me, then I have these data points to discuss.
1. I have encountered a problem where AI will suggest 4 different "solutions" and, when I point out a problem with one, cycle on to the next, staying in that loop and repeating the same set of 4 over and over with no recollection of the previous refutation of each solution (this is a mix of poor context retention and the fact that its solution selection is limited to what has already been fully explored on the web - I had a 5th idea in mind which the AI failed to understand, but which worked well).
2. Yesterday I was discussing with AI the fact that I had three options for action, and it miscounted that as 4 options - a trivial arithmetic failure.
This demonstrates clearly that the AI didn't "understand" the points discussed, and was instead just correlating text with other text.
I really like where AI is at the moment and use it a lot - it's very helpful for debugging, for example - but as every vibe coder out there will attest, AI fails hard at standalone coding, and I submit that this is a symptom of its inability to understand what it's doing.
It's still a case of "correlation is not causation", and it demonstrates why correlation is so attractive: you can get quite far just knowing that there is a correlation between ice cream sales and shark attacks, but it takes work to understand that there is no causative link. (FTR I suspect the explanation is that ice cream sales go up in hot weather, more people are in the ocean during those hot periods, and therefore there's more opportunity for people to interact with sharks.)
Edit: Note how I use the word "suspect" when I talk about the cause of the correlation - it's VERY tempting to say that the weather is the cause, but that's still just a correlation, and the fact is, as humans have discovered, that actual research is required to verify whether that is indeed the cause or not - something AI might miss.
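To make the confounder point concrete, here's a minimal Go sketch - the numbers are entirely made up, purely for illustration. Temperature drives both series; neither series has any effect on the other; yet the measured correlation between them comes out strongly positive:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// pearson computes the Pearson correlation coefficient of two
// equal-length series.
func pearson(x, y []float64) float64 {
	n := float64(len(x))
	var sx, sy, sxx, syy, sxy float64
	for i := range x {
		sx += x[i]
		sy += y[i]
		sxx += x[i] * x[i]
		syy += y[i] * y[i]
		sxy += x[i] * y[i]
	}
	return (sxy - sx*sy/n) / math.Sqrt((sxx-sx*sx/n)*(syy-sy*sy/n))
}

func main() {
	rng := rand.New(rand.NewSource(1))
	const days = 365
	iceCream := make([]float64, days)
	attacks := make([]float64, days)
	for d := 0; d < days; d++ {
		// Temperature is the confounder: it drives BOTH series,
		// but neither series has any effect on the other.
		temp := 15 + 10*math.Sin(2*math.Pi*float64(d)/days) + rng.NormFloat64()*3
		iceCream[d] = 50 + 8*temp + rng.NormFloat64()*20 // sales rise in hot weather
		attacks[d] = 0.1*temp + rng.NormFloat64()*0.5    // more swimmers, more encounters
	}
	// Strongly positive, despite zero causal link between the two series.
	fmt.Printf("corr(ice cream sales, shark attacks) = %.2f\n", pearson(iceCream, attacks))
}
```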
Another data point has just arisen - I have a function (in Go) whose final parameter is a variadic `...interface{}`, i.e. it accepts an unpacked slice of `interface{}` (some people will now call that an unpacked slice of `any`).
I was calling that function with an unpacked slice of string.
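To illustrate the shape of the call, here's a minimal hypothetical sketch - `printAll` and the string values are stand-ins, not my actual code, and I'm assuming the strings were passed as individual arguments:

```go
package main

import "fmt"

// printAll stands in for the variadic function in question: its
// final parameter is ...interface{} (equivalently ...any), so it
// accepts any number of values of any type.
func printAll(args ...interface{}) {
	for _, a := range args {
		fmt.Println(a)
	}
}

func main() {
	// Passing string values directly compiles with no conversion:
	// each string is assignable to interface{} on its own.
	printAll("red", "green", "blue")

	// (The one real restriction in this area is that a whole
	// []string cannot be passed as strs... where ...interface{}
	// is expected, because []string is not assignable to
	// []interface{}; individual values are fine, as above.)
}
```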
The AI I was using (Claude, for the purposes of this discussion) incorrectly told me that I first needed to convert the slice of string to a `[]interface{}` before calling the function.
It argued with me when I said that I didn't, and demanded I TIAS (try it and see) to prove its point and report the compile-time errors.
Of course, I did, and there were no errors.
The issue was that the AI (Claude) did not understand that `interface{}` (or `any`) means that a value of any type can be passed there.
Claude is doing a fantastic job, but this is an example of it not actually understanding what's happening.