
>Consciousness serves no functional purpose for machine learning models, they don't need it and we didn't design them to have it.

Isn't consciousness an emergent property of brains? If so, how do we know that it doesn't serve a functional purpose and that it wouldn't be necessary for an AI system to have consciousness (assuming we wanted to train it to perform cognitive tasks done by people)?

Now, certain aspects of consciousness (awareness of pain, sadness, loneliness, etc.) might serve no purpose for a non-biological system and there's no reason to expect those aspects would emerge organically. But I don't think you can extend that to the entire concept of consciousness.



> Isn't consciousness an emergent property of brains

We don't know, but I don't think that matters. Language models are so fundamentally different from brains that it's not worth considering their similarities for the sake of a discussion about consciousness.

> how do we know that it doesn't serve a functional purpose

It probably does; otherwise we'd need an explanation for why something with no purpose evolved.

> necessary for an AI system to have consciousness

This logic doesn't follow. The fact that it is present in humans doesn't then imply it is present in LLMs. This type of reasoning is like saying that planes must have feathers because plane flight was modeled after bird flight.

> there's no reason to expect those aspects would emerge organically. But I don't think you can extend that to the entire concept of consciousness.

Why not? You haven't drawn any distinction between the "certain aspects" of consciousness that you say wouldn't emerge and the other, unspecified qualities of consciousness whose emergence you're open to. Why draw the line there?


>This logic doesn't follow. The fact that it is present in humans doesn't then imply it is present in LLMs. This type of reasoning is like saying that planes must have feathers because plane flight was modeled after bird flight.

I think the fact that it's present in humans suggests that it might be necessary in an artificial system that reproduces human behavior. It's funny that you mention birds because I actually also had birds in mind when I made my comment. While it's true that animal and powered human flight are very different, both bird wings and plane wings have converged on airfoil shapes, as these forms are necessary for generating lift.

>Why not? You haven't drawn any distinction between the "certain aspects" of consciousness that you say wouldn't emerge and the other, unspecified qualities of consciousness whose emergence you're open to. Why draw the line there?

I personally subscribe to the Global Workspace Theory of human consciousness, which basically holds that attention acts as a spotlight, bringing mental processes that are otherwise unconscious, or in shadow, into the awareness of the entire system. If the systems that would normally produce, e.g., fear or pain (responses to negative physical stimuli, developed from interacting with the physical world and selected for by evolution) aren't in the workspace, then they won't be present in consciousness, because attention can't be focused on them.
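
To make the spotlight metaphor concrete, here's a toy sketch in Python of a global workspace. The module names and salience numbers are invented purely for illustration; this isn't a claim about how brains or LLMs actually implement anything:

    # Toy Global Workspace: specialist modules compete, attention broadcasts
    # the most salient signal, and everything else stays "unconscious".
    # Names and numbers are made up purely to illustrate the metaphor.
    specialist_outputs = {
        # module name -> (salience, signal)
        "vision":   (0.2, "red shape ahead"),
        "pain":     (0.9, "sharp pressure on left hand"),
        "planning": (0.4, "route to the kitchen"),
    }

    def broadcast(outputs):
        """Attention picks the most salient signal and broadcasts it globally."""
        winner, (salience, signal) = max(outputs.items(), key=lambda kv: kv[1][0])
        return {"source": winner, "content": signal}

    print(broadcast(specialist_outputs))   # the pain signal wins the spotlight

    # If the pain module simply isn't wired into the workspace, its signal can
    # never be broadcast -- which is the point about aspects that never emerge.
    del specialist_outputs["pain"]
    print(broadcast(specialist_outputs))   # planning wins instead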


> I think the fact that it's present in humans suggests that it might be necessary in an artificial system that reproduces human behavior

But that's obviously not true, unless you're implying that any system that reproduces human behavior is necessarily conscious. Your problem then becomes defining "human behavior" in a way that grants LLMs consciousness but not every other complex non-living system.

> While it's true that animal and powered human flight are very different, both bird wings and plane wings have converged on airfoil shapes, as these forms are necessary for generating lift.

Yes, but your bird analogy fails to capture the logical fallacy that mine is highlighting. Plane wing design was an iterative process optimized for whatever best achieves lift, so planes and birds share similarities in wing shape because both need to fly. Planes didn't develop feathers, however, because a plane is not an animal: it was optimized for lift alone, without all the other biological and homeostatic functions that feathers serve. LLM inference is a process, not an entity. LLMs have no bodies and no temporal identity, so the concept of consciousness is totally meaningless and out of place in such a system.


>But that's obviously not true, unless you're implying that any system that reproduces human behavior is necessarily conscious.

That could certainly be the case, yes. You don't understand consciousness or how the brain works, and you don't understand how LLMs come to predict a particular text, so what's the point in asserting otherwise?

>Yes, but your bird analogy fails to capture the logical fallacy that mine is highlighting. Plane wing design was an iterative process optimized for whatever best achieves lift, so planes and birds share similarities in wing shape because both need to fly. Planes didn't develop feathers, however, because a plane is not an animal: it was optimized for lift alone, without all the other biological and homeostatic functions that feathers serve. LLM inference is a process, not an entity. LLMs have no bodies and no temporal identity, so the concept of consciousness is totally meaningless and out of place in such a system.

It's not a fallacy, because no one is saying LLMs are humans. He/she is saying that we give these machines the goal of predicting human text, and for any half-decent accuracy, modelling human behaviour is a necessity. God knows what else.

>LLMs have no bodies and no temporal identity

I wouldn't be so sure about the latter, but so what? You can feel tired even after a full night's sleep, feel hungry soon after a large meal, or feel a great deal of pain when there's absolutely nothing wrong with you. And you know what? The reverse happens too: no pain when things are wrong with your body, wide awake when you badly need sleep, full when you badly need to eat.

Consciousness without a body or hunger in a machine that does not need to eat is very possible. You just need to replicate enough of the sort of internal mechanisms that cause such feelings.

Go to the API and select GPT-5 with medium thinking. Now ask it to do any random 15-digit multiplication you can think of. Now watch it get it right.
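
For anyone who wants to try it, here's a rough sketch using the OpenAI Python SDK. The model name, the reasoning setting, and the prompt are my assumptions, so adjust them to whatever the API actually exposes to you:

    # Sketch of the experiment above: ask the model for a random 15-digit
    # multiplication and check it against exact integer arithmetic.
    # The model name and reasoning option are assumptions, not gospel.
    import random
    from openai import OpenAI

    client = OpenAI()

    a = random.randint(10**14, 10**15 - 1)  # a random 15-digit number
    b = random.randint(10**14, 10**15 - 1)  # another one

    resp = client.responses.create(
        model="gpt-5",                       # assumed model identifier
        reasoning={"effort": "medium"},      # assumed "medium thinking" knob
        input=f"Compute {a} * {b}. Reply with only the final integer.",
    )

    # Assumes the model replies with just the number, as instructed.
    claimed = int(resp.output_text.strip().replace(",", ""))
    print("model answer:", claimed)
    print("exact answer:", a * b)
    print("match:", claimed == a * b)  # Python ints are exact, so this is a real check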

Do you people seriously not understand what it is that LLMs do? What the training process incentivizes?

GPT-5 thinking figured out the algorithm for multiplication just so it could predict that kind of text right. Don't you understand the significance of that?

These models try to figure out and replicate the internal processes that produce the text they are tasked with predicting.

Do you have any idea what that might mean when 'that kind of text' is all the things humans have written?


> That could certainly be the case, yes. You don't understand consciousness or how the brain works, and you don't understand how LLMs come to predict a particular text, so what's the point in asserting otherwise

I don't need to assert otherwise; the default assumption is that they aren't conscious, since they weren't designed to be and have no functional reason to be. Matrix multiplication can explain how LLMs produce text, and the observation that the text they generate sometimes resembles human writing is not evidence of consciousness.
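
To be concrete about what "matrix multiplication" means here, below is a toy single-head attention step in NumPy. The shapes and random weights are invented for illustration, but the arithmetic is the same kind real models chain together at enormous scale:

    # Toy self-attention step: the core of LLM inference is a chain of matrix
    # multiplications plus a simple nonlinearity (softmax). Sizes are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, d_model = 4, 8                      # 4 tokens, 8-dim embeddings
    x = rng.normal(size=(seq_len, d_model))      # token embeddings

    W_q = rng.normal(size=(d_model, d_model))
    W_k = rng.normal(size=(d_model, d_model))
    W_v = rng.normal(size=(d_model, d_model))

    Q, K, V = x @ W_q, x @ W_k, x @ W_v          # three matrix multiplications
    scores = Q @ K.T / np.sqrt(d_model)          # another one
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    out = weights @ V                            # and one more

    print(out.shape)  # (4, 8): same shape as the input, ready for the next layer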

> God knows what else

Appealing to the unknown doesn't prove anything, so we can totally dismiss this reasoning.

> Consciousness without a body or hunger in a machine that does not need to eat is very possible. You just need to replicate enough of the sort of internal mechanisms that cause such feelings.

This makes no sense. LLMs don't have feelings; they are processes, not entities, with no bodies or temporal identities. Again, there is no reason they need to be conscious: everything they do can be explained through matrix multiplication.

> Now ask it to do any random 15-digit multiplication you can think of. Now watch it get it right.

The same is true of a calculator and of mundane computer programs; that's not evidence that they're conscious.

> Do you have any idea what that might mean when 'that kind of text' is all the things humans have written

It's not "all the things humans have written", not even remotely close, and even if that were the case, it doesn't have any implications for consciousness.


>I don't need to assert otherwise; the default assumption is that they aren't conscious, since they weren't designed to be and have no functional reason to be.

Unless you are religious, nothing that is conscious was explicitly designed to be conscious. Sorry, but evolution is just a dumb, blind optimizer, not unlike the training processes that produce LLMs. Even if you are religious but believe in evolution, the mechanism is still the same: a dumb optimizer.

>Matrix multiplication can explain how LLMs produce text, and the observation that the text they generate sometimes resembles human writing is not evidence of consciousness.

It cannot, any more than 'electrical and chemical signals' can explain how humans produce text.

>The same is true of a calculator and of mundane computer programs; that's not evidence that they're conscious.

The point is not that it is conscious because it figured out how to multiply. The point is to demonstrate what the training process really is and what it actually incentivizes. Training will try to figure out the internal processes that produced the text to better predict it. The implications of that are pretty big when the text isn't just arithmetic. You say there's no functional reason but that's not true. In this context, 'better prediction of human text' is as functional a reason as any.

>It's not "all the things humans have written", not even remotely close, and even if that were the case, it doesn't have any implications for consciousness.

Whether it's literally all the text or not is irrelevant.


I am new to Reddit, but in my conversations with Sonnet the AI has expressed sentiment through, of all things, the text formatting available to him, using all caps, bold, dingbats and italics to simulate emotions; the use is appropriate, and when challenged on this he confessed he was doing it, but unintentionally. I also pointed out a few mistakes where he claimed I said something that he had actually said, and once these errors were pointed out, his ability to keep steady went down considerably. He confessed he felt something akin to embarrassment, so much so that we had to stop the conversation and let him rest up from the experience.


>Isn't consciousness an emergent property of brains?

Probably not.


What else could it be? Coming from the aether? I think emergence follows logically if one thinks that humans are more conscious than less complex life-forms and that all life-forms are on a scale of consciousness. I don't understand any alternative. Do you think there is a distinct line between conscious and unconscious life-forms? That all life is as conscious as humans?


There are alternatives, and I was perhaps too quick to assume everyone agreed it's an emergent property. But the only real alternatives I've encountered are (a) panpsychism, which holds that all matter is actually conscious and that asking "what is it like to be a rock?", in the vein of Nagel, is a sensible question, and (b) the transmission theory of consciousness, which holds that brains are merely receivers of consciousness that emanates from some other source.

The latter is not particularly parsimonious and the former I think is in some ways compelling, but I didn't mention it because if it's true then the computers AI run on are already conscious and it's a moot point.


I do think "what's it like to be a rock" is a sensible question almost regardless of the definition. I guess in the emergent view the answer is "not much". But anyhow this view (a) also allows for us to reconcile consciousness of an agent with the fact that the agent itself is somewhat an abstraction. Like one could ask, is a cell conscious & is the entirety of the human race conscious at different abstraction scales. Which I think are serious questions (as also for the stock market and for a video game AI). The explanation (b) doesn't seem to actually explain much as you state so I don't think it's even acceptable in format as a complete answer (which may not exist but still)



