Kudos for this identification, but at this point, calling the use of neural networks in statistical work "AI" is misleading at best. I know it won't stop, because claiming "AI" gets attention, but it's really depressing. Ultimately it's no different from all the talk about "mechanical brains" in the 50s, and it's just as tiresome.


John McCarthy coined the term AI in 1955 for a research project that included neural networks. He then founded the MIT AI Project (later the AI Lab) with one of the researchers who joined that project, Marvin Minsky, who had also created the first neural network, in 1951.

If NNs aren't AI, what is?

http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf


It implies agency on the part of the software that doesn't exist. It should really just say "researchers ran some math calculations and found X." The fact that the math runs on a computer, and involves functions found by heuristic-guided search with a function estimator rather than derived by human scientists from first principles, is surely relevant in some way. But this has been the case for at least a century, since Einstein's field equations required numerical approximations of PDE solutions to compute gravity at a point, and probably longer.

I don't want to say there is no qualitative difference between what the PDE solvers of 1910 could do and what a GPT can do, but until we don't need scientists running the software at all, and it can decide to do all of this on its own, knowing what to do and how to interpret the results, it feels misleading to use terminology like "AI" that in the public consciousness has always meant full autonomy. It's going to make people think someone just told a computer "hey, go do science" and it figured this out.


In PR materials, AI = computers were involved


That's the problem with the term now. Whenever a PR statement mentions AI, there isn't much meaning attached to it. Sometimes even simple algorithms are called AI; if there is a bit of statistics involved, even more so. I liked the term machine learning, because now, with AI-everything, I don't really know what anything is about.


I think the average person can safely call "the use of neural networks in statistical work" AI.


It's technically correct, but AI has become such an overloaded term that it's impossible to know it refers to "the use of neural networks" without saying so explicitly. So, you know, maybe just say that?


This debate is a classic. AI has always been an overloaded term and more of a marketing signifier than anything else.

The rule of thumb is, historically, "something is AI while it doesn't work". Originally, techniques like A* search were regarded as AI; they definitely wouldn't be now. Information retrieval, similarly. "Machine learning", as a brand, was an effort to get statistical techniques (like neural networks, though at the time it was more "linear regression and random forests") out from under the AI stigma; AI was "the thing that doesn't work".

But we're culturally optimistic about AI's prospects again, so all the machine learning work is merrily being rebranded as AI. The wheel will turn again, eventually.


... and once it works, it "earns" a name of its own, at least among people actually doing it. Even in 2024 there are Machine Learning Conferences of note.


ICLR, ICML, KDD, ICCV, ECCV and NeurIPS are all major "AI" conferences and none of them has AI in the name. NeurIPS comes closest. "AI" has historically not been a useful distinction for anyone in the field.

I actually think this is changing given the current rapid ascent of multimodal models.


> calling the use of neural networks in statistical work "AI" is misleading at best.

Neural Networks are not considered AI anymore?

That just reinforces my thesis that "AI" is an ever sliding window that means "something we don't yet have". Voice recognition used to be firmly in the "AI" camp and received grants even from the military. Now we have it on wrist watches (admittedly with some computation offloaded) and nobody cares. Expert systems were once very much "AI".

LLMs will suffer the same treatment pretty soon. Just wait.


The current usage of AI is a rather new marketing term.

Where would you draw the line? Is prediction via linear regression AI?
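
To make that concrete: "prediction via linear regression" is a couple of lines of arithmetic. A minimal sketch (toy numbers invented for illustration):

  import numpy as np

  # Ordinary least squares: fit y ~ a*x + b, then "predict".
  x = np.array([1.0, 2.0, 3.0, 4.0])
  y = np.array([2.1, 3.9, 6.2, 7.8])

  a, b = np.polyfit(x, y, deg=1)  # slope and intercept
  print(a * 5.0 + b)              # the "prediction" for x = 5

Whether that deserves the label is exactly the definitional problem.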

Also language is fuzzy and fluid, get used to it.


It was called machine learning 10 years ago since AI had bad connotations, but today people have forgotten and call it all AI again.


Maybe it's a good indicator of misuse that the paper doesn't mention "AI" or "intelligence" even once.

> my thesis that "AI" is an ever sliding window that means "something we don't yet have"

Or maybe it's the sliding window of "well, turns out this ain't it, there is more to intelligence than we wanted it to be".

If everything is intelligent, nothing is. If you define pattern recognition as intelligence, you'd be challenged to find unintelligent lifeforms, for example. You haven't learned to recognize faces, you are literally born with this ability. And well, life at least has agency. Is evolution itself intelligent? What about water slowly wearing down rock into canyons?


Pretty soon? I already regularly see people proudly stating that LLMs "aren't really AI" and are just "a Markov chain" (yeah sure, let's ignore the self-attention mechanism of transformers, which violates the Markov property).
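
For anyone who wants the distinction concrete: in a bigram Markov chain the next token depends only on the current token, while a transformer conditions on the entire preceding context through self-attention. A minimal sketch of the Markov side (a toy example, not any particular model):

  import random
  from collections import defaultdict

  def train_bigram(tokens):
      # Next token depends ONLY on the current token -- the Markov property.
      table = defaultdict(list)
      for cur, nxt in zip(tokens, tokens[1:]):
          table[cur].append(nxt)
      return table

  def sample(table, start, length=10):
      out = [start]
      for _ in range(length):
          options = table.get(out[-1])
          if not options:
              break
          out.append(random.choice(options))
      return " ".join(out)

  # A transformer instead attends over the whole prefix at every step,
  # so the next token depends on the entire context, not just the
  # previous token -- which is why "it's just a Markov chain" doesn't hold.

  corpus = "the cat sat on the mat and the cat ran".split()
  print(sample(train_bigram(corpus), "the"))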

For the sake of my sanity I've just started tuning out what anyone says about AI outside of specialist spaces and forums. I welcome educated disagreement from my positions, but I really can't take the antivaxx equivalent in machine learning anymore.


What if we hold off on calling it AI until it shows signs of intelligence?


Chess was a major topic of AI research for decades because playing a good game of chess was seen as a sign of intelligence. Until computers started playing better than people and we decided it didn't count for some reason. It reminds me of the (real) quote by I.I. Rabi that got used in Nolan's movie when Rabi was frustrated with how the committee was minimizing the accomplishments of Oppenheimer: "We have an A-bomb! What more do you want, mermaids?"


They chased chess because they thought that if they could solve chess, AGI would be close. They were wrong, so they moved the goalposts to something more complicated, thinking that new thing would lead to AGI. Repeat forever.

> we decided it didn't count for some reason

Optimists did move their goals once it became clear that solving chess didn't actually lead anywhere, and then they blamed the pessimists for moving the goalposts, even though the pessimists mostly stayed put throughout these AI hype waves. It is funny that the optimists are constantly wrong and have to move their goal like that, yes, but people tend to point the finger at the wrong group here.

The AI winter came from AI optimists constantly moving the goalposts like that, constantly saying "we are almost there, the goal is just that next thing and then we are basically done!" AI pessimists don't do that; all of that came from the optimists trying to get more funding.

And we see the exact same thing playing out today: a lot of AI optimists clamoring for massive amounts of money because they are close to AGI, just like what we have seen in the past. Maybe they are right this time, but now, just like back then, it is those optimists who are setting and moving the goalposts.


It turned out you can use some clever search algorithms rather than intelligence to play chess, yeah.


It also turns out that you can make machines that fly without having them flap their wings like flying animals. But it would be absurd to claim that airplanes don't fly for that reason.


Good thing I never claimed planes don't fly.

I think you'll find that defining "intelligence" is a bit harder than defining "flight", and that "a machine programmed to mechanically follow the steps of the minimax algorithm as applied to chess, and do nothing else" doesn't fit most people's definition of "intelligence", in the context of the philosophical question of what constitutes intelligence.
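
For reference, the "mechanical steps" in question really are just a short recursive procedure. A minimal depth-limited minimax sketch (the moves/apply_move/score helpers here are hypothetical stand-ins for the chess-specific parts):

  def minimax(state, depth, maximizing, moves, apply_move, score):
      # Pure mechanical search: try every legal move, assume the
      # opponent minimizes your evaluation, pick the best. No learning.
      legal = moves(state)
      if depth == 0 or not legal:
          return score(state), None
      best_val = float("-inf") if maximizing else float("inf")
      best_move = None
      for m in legal:
          val, _ = minimax(apply_move(state, m), depth - 1,
                           not maximizing, moves, apply_move, score)
          if (maximizing and val > best_val) or (not maximizing and val < best_val):
              best_val, best_move = val, m
      return best_val, best_move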


And what signs of intelligence are we looking for this year?


Some sign that it's more than autocomplete would be nice; maybe the ability to perform some kind of logical reasoning. ChatGPT does a good job of putting up an illusion of human-like intelligence, but speak to it for like 10 minutes and its nature as a plausible-text generator quickly makes itself apparent.


Another entry in the 'marketing and technical terms don't mean the same thing despite using the same words' saga.


AI has always meant Artificial Intelligence. Intelligent and capable of learning, like a person.

LLMs are not AI.


> LLMs are not AI.

Neither are neural networks, by that definition. Or "machine learning" in general. They have all been called "AI" at different points in time. Even expert systems – which are glorified IF statements – were supposed to replace doctors.
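
"Glorified IF statements" is roughly fair: the heart of a classic forward-chaining expert system is a loop over hand-written rules. A minimal sketch (the rules are toy examples invented for illustration, not from any real medical system):

  # Each rule: if all conditions are known facts, assert the conclusion.
  RULES = [
      ({"fever", "cough"}, "possible_flu"),
      ({"possible_flu", "short_of_breath"}, "refer_to_specialist"),
  ]

  def infer(facts):
      facts = set(facts)
      changed = True
      while changed:
          changed = False
          for conditions, conclusion in RULES:
              if conditions <= facts and conclusion not in facts:
                  facts.add(conclusion)  # the "glorified IF statement"
                  changed = True
      return facts

  print(infer({"fever", "cough", "short_of_breath"}))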


People thought those techniques would ultimately become something intelligent, hence "AI", but they fizzled out. That isn't the doubters moving the goalposts; that is the optimists moving them, always thinking that what we have now is the golden ticket to truly intelligent systems.


Correct, we don't have AI yet.


Some people are incapable of learning. Therefore, LLMs are AI?

As far as I recall, the Turing test was developed long ago to give a practical answer to what was and was not, practically speaking, artificial intelligence, because the debate over the definition is much older than we are.


Everyone is capable of learning, else they'd have died as a toddler, or at any time since, when trying to cross the road.


I think the Turing test is subjective, because the result depends on who was giving the test and for how long.


I mean. "AI" has meant "whatever shiny new computer thing is hot right now" in both common vernacular and every academic field besides AI research basically since the term was coined...


> but it's really depressing

Why do you feel it depressing?


The taxonomy of AI is the following:

  AI
  -> machine learning
       -> supervised
            -> neural networks
       -> unsupervised
       -> reinforcement
  -> massive if/then statements
  -> ...

That is to say, NNs fall under AI, but then nearly everything falls under AI.


Where did you pull this “taxonomy” from?


Why would it be depressing? Who cares?


Because regulating "AI" has the potential to encompass all software development. Many programs act as decision support. From the outside there's little difference between an application that uses conventional programming, ML, RNNs, or GPT.


Taking credit from the drivers of the tools to give credit to the tools themselves, as PR? Yes, depressing. No one says "Unreal Engine Presents Final Fantasy VII". It's an important tool, but not the creative mind behind the work.


Should the term AI just not be used until we have a Skynet-level AI?


No, it should only be used about things that don’t exist.



