That's because there are at least five different definitions of AI:
- At its inception in 1955, it was "learning or any other feature of intelligence" simulated by a machine [1] (fun fact: both neural networks and computers using natural language were on the agenda back then)
- Following from that, we have the "all machine learning is AI" definition, which was prevalent about a decade ago
- Then there's the academic definition, roughly "computers acting in real or simulated environments", which includes such mundane and algorithmic things as pathfinding
- Then there's obviously AGI, or the closely related Hollywood/SciFi definition of AI
- Then there's just "things that the general public doesn't expect computers to be able to do". Back when chess computers used to be called AI, this was probably the closest fit. Clever salespeople also used to love to call prediction via simple linear regression "AI" (see the sketch below)
Notably, four out of five of these don't involve computers actually being intelligent. And just a couple of years ago we were still selling simple face detection as AI.
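To show how little that last one involved, here's a rough sketch of the kind of linear-regression "prediction" that used to get the AI label; the sales numbers are made up for the example:

```python
# Ordinary least-squares fit on a year of (made-up) monthly sales,
# then extrapolate one month ahead. That was the whole "AI".
import numpy as np

months = np.arange(1, 13)                                 # 12 months of history
sales = 100 + 8.5 * months + np.random.normal(0, 5, 12)   # fabricated example data

slope, intercept = np.polyfit(months, sales, deg=1)       # fit y = slope*x + intercept
print(f"Month 13 'AI prediction': {slope * 13 + intercept:.1f}")
```

A straight line through twelve points, extrapolated one step. That was the product.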
It's the opposite. It does the driving, but you really have to provide the lane assist, otherwise you hit a tree or start driving in the opposite direction.
Many people claim it's doing great because they've driven hundreds of kilometers, but they don't particularly care whether they arrived at the exact place and are happy with an approximate destination.
Is the siren song of "AI effect" so strong in your mind that you look at a system that writes short stories, solves advanced math problems and writes working code, and then immediately pronounce it "not intelligent"?
It doesn’t actually solve those math problems though, does it? It replies with a solution if it has seen one often enough in the training data, or with something that looks like a solution but isn’t. In the end, the human still needs to check it.
Same for short stories: it doesn’t actually write new stories, it rehashes stories it (probably illegally) ingested during training.
LLMs are good at mimicking the content they were trained on; they don’t actually acquire or extend the intelligence required to create that content in the first place.
Oh, I remember those talks. People actually checking whether an LLM's response is something that was in the training data, something that was online that it replicated, or something new.
They weren't finding a lot of matches. That was odd.
That was in the days of GPT-2. That was when the first weak signs of "LLMs aren't just naively rephrasing the training data" emerged. That finding was controversial, at the time. GPT-2 couldn't even solve "17 + 29". ChatGPT didn't exist yet. Most didn't believe that it was possible to build something like it with LLM tech.
I wish I could say I was among the people who had the foresight, but I wasn't. Got a harsh wake-up call on that.
And yet, here we are, in the year 20-fucking-25, where off-the-shelf, commercially available AIs burn through math competitions and one-shot coding tasks. And people still say "they just rehash the training data".
Because the alternative is admitting that we found an algorithm that crams abstract thinking into arrays of matrix math. That it's no longer exclusively human. And that seems to be completely unpalatable to many.
You and I must be using very different versions of Claude. As an infra/systems guy (non-coder), being able to develop some powerful tools simply by leveraging Claude has been nothing short of amazing. I started using Claude about 8 months ago and have since created about 20 tools ranging from simple USB detection scripts (for securely erasing SSDs) to complex tools like an Azure File Manager and a production-ready data migration tool (Azure to Snowflake). Yes, I know bash and some Python, but Claude has really helped me create tools that would have taken many weeks or months to build using the right technology stack. I am happy to pay for the Claude Max plan; it has paid huge dividends in productivity.
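To give a sense of what I mean by a simple USB detection script, here's a rough sketch along those lines (assuming a Linux host with lsblk available; the columns and the filtering are illustrative, not the actual tool):

```python
#!/usr/bin/env python3
# Rough sketch: list removable USB disks (candidates for a secure erase).
# Assumes a Linux host with lsblk (util-linux); the actual erase step is left out on purpose.
import json
import subprocess

def removable_usb_disks():
    # Machine-readable block-device listing from lsblk.
    out = subprocess.run(
        ["lsblk", "--json", "-o", "NAME,TYPE,TRAN,RM,SIZE,MODEL"],
        capture_output=True, text=True, check=True,
    ).stdout
    devices = json.loads(out)["blockdevices"]
    # Keep whole disks on the USB transport with the removable flag set
    # (older lsblk emits "1"/"0" strings, newer emits true/false).
    return [
        d for d in devices
        if d.get("type") == "disk"
        and d.get("tran") == "usb"
        and str(d.get("rm")).lower() in ("1", "true")
    ]

if __name__ == "__main__":
    for d in removable_usb_disks():
        print(f"/dev/{d['name']}  {d['size']}  {d.get('model') or '(unknown model)'}")
```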
And maybe that is the difference. Non-coders can use AI to help build MVPs and tooling they otherwise couldn't (or that would take them a long time to get done). Professional coders, on the other hand, see this as an intrusion into their domain, become very skeptical because it doesn't write code "their way" or introduces some bugs, and push back hard.
Yeah. You're not a coder, so you don't have the expertise to see the pitfalls and problems with the approach.
If you want to use concrete to anchor some poles in the ground, great. Build that gazebo. If it falls down, oh well.
If you want to use concrete to make a building that needs to be safe and maintained, it's critical that you use the right concrete mix, use rebar in the right way, and seal it properly.
Civil engineers aren't "threatened" by hobbyists building gazebos. Software engineers aren't "threatened" by AI. We're pointing out that the building's gonna fall over if you do it this way, which is what we're actually paid to do.
Sorry, but carefully read the comments on this thread and you will quickly realize "real" coders are very much threatened by this technology - especially junior coders. They are frightened that a new tool puts their jobs at stake and take a very anti-AI view of the entire domain - probably more so those who live in areas where wages are not high to begin with. People who come from a different perspective truly see the value of what these tools can help you do. To say all AI output is slop or garbage is just wrong.
The flip side of this is to understand and appreciate what the new tooling can help you do, and to adopt it. Sure, junior coders will face significant headwinds, but I guarantee you there are opportunities waiting to be uncovered. Just give it a couple of years...
Every HN thread about AI eventually has someone claiming the code it produces is “trash” or “non-working.” There are plenty of top-tier programmers here who dismiss anyone who actually finds LLM-generated code useful, even when it gets the job done.
I’m tempted to propose a new law—like Poe’s or Godwin’s—that goes something like: “Any discussion about AI will eventually lead to someone insisting it can’t match human programmers.”
Seeing an AI casually spit out an 800-line script that works on the first try is really fucking humbling to me, because I know I wouldn't be able to do that myself.
Sure, it's an area of AI advantage, and I still crush AI in complex codebases or embedded code. But AI is clearly not strictly worse than me. The fact that it already has this area of advantage should give you pause.
Humbling indeed. I am utterly amazed at Claude's breadth of knowledge and ability to understand the context of our conversations. Even if I misspell words, don't use the exact phrase, or call something a function instead of a thread, Claude understands what I want and helps make it happen. Not to mention the ability to read hundreds of lines of debug output and point out a tiny error that caused the bug.