
I suppose your argument is, consciousness on the system level does not require individual parts to be conscious.

So let me expand a bit on why I used the adder circuit as an example.

There's nothing about addition that says "this is addition" in an objective sense. We are dealing with a binary encoding of numbers, and there are infinitely many ways to encode information. The design of any circuit commits to a particular encoding, and this extends all the way up to the higher levels. There are infinitely many ways to encode the state of a go board. Any AI that plays go will deal with some specific encoding of the board state, and that will be both its input and its output.
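To make the adder point concrete, here is a minimal sketch (my own illustration, not from the comment): a 1-bit full adder written as a pure function on bits. Nothing in the function "knows" about numbers; calling it addition depends on the encoding we choose to read into its inputs and outputs.

```python
# A 1-bit full adder as a pure function on bits. The wires just map
# input bits to output bits; "sum" and "carry" are our interpretation.
def full_adder(a, b, cin):
    s = a ^ b ^ cin                    # "sum" bit under the standard encoding
    cout = (a & b) | (cin & (a ^ b))   # "carry" bit under the standard encoding
    return s, cout

# Under the usual binary encoding this is 1 + 1 + 0 = 2 (binary 10):
print(full_adder(1, 1, 0))  # -> (0, 1)

# Read the voltages with the opposite convention (0 as "true", 1 as
# "false") and the very same circuit computes a different Boolean
# function. The label "addition" lives in the encoding, not the circuit.
```

The point of the sketch is that the function's behavior is fixed, while "this computes addition" is a statement about how we chose to encode numbers into bits.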

The most popular "intuitive" model of how consciousness arises is basically hand waving about a system inspecting itself.

But you have to understand, a system inspecting its own state is no different from a system inspecting any other arbitrary state, because the state of the system will be encoded in some way that was chosen arbitrarily. There's nothing essential about any encoding of any state that could possibly give rise to a phenomenon like consciousness.

You can study the behavior of any arbitrary system, given certain inputs and outputs, and assign any meaning you want to its states. You can retro-fit any interpretation of state onto the system.

The thing about encoding information is that it can be completely arbitrary. I can interpret the state [01010101] as my username, and then interpret [1111000111] as my real-life name. This would allow me to say that a system that takes 01010101 as input and produces 1111000111 is actually a system that can derive my real-life name from my username.
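The retro-fitting move above can be sketched in a few lines (a toy illustration; the codebooks and their labels are made up): a trivial lookup table with fixed behavior, whose "meaning" changes entirely depending on which interpretation we bolt on afterwards.

```python
# A trivial "system": one fixed input-output mapping on bit-strings.
system = {"01010101": "1111000111"}

inp = "01010101"
out = system[inp]

# Interpretation A: read the states as names, and the system "derives
# my real name from my username".
codebook_a = {"01010101": "my username", "1111000111": "my real name"}
print(codebook_a[inp], "->", codebook_a[out])

# Interpretation B: a different codebook, same bits, and now the same
# system "computes" something else entirely. Its behavior never changed.
codebook_b = {"01010101": "a go board state", "1111000111": "a winning move"}
print(codebook_b[inp], "->", codebook_b[out])
```

The system's input-output behavior is identical in both cases; only the externally supplied codebook differs, which is the sense in which any interpretation can be retro-fitted.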



>You can study the behavior of any arbitrary system, given certain inputs and outputs, and assign any meaning you want to its states. You can retro-fit any interpretation of state onto the system.

All you are saying here is that no external interpretation of a system can experience that system's consciousness, if it is conscious. Yes, that is correct. No matter how thoroughly I scan and analyse your brain state, I can't use that to demonstrate your internal experience, so why would we expect that to be true of any other form of consciousness?

I'm going out of sequence with your comment here but:

>But you have to understand, a system inspecting its own state is no different from a system inspecting any other arbitrary state, because the state of the system will be encoded in some way that was chosen arbitrarily. There's nothing essential about any encoding of any state that could possibly give rise to a phenomenon like consciousness.

I don't see how you can possibly demonstrate that; you're essentially just saying materialism is wrong 'because'. From my perspective, in a sense, yes, you're right: it's all just information processing of a particular kind. There's no magic. If you're looking for a special genie in an AI, or a human brain, that makes it conscious, I don't think you'll find one. It's all just stuff. But then, I'm a materialist, so I don't see that as being a problem.


The inability you describe is a pragmatic one, and may cease to pertain as abilities improve.


"Anything" can be reinterpreted. If you strip audio from a movie, you can add dialogue or subtitles that change the story and plot. We can probably substitute the nouns and verbs around in a novel consistently to create another novel that still makes sense.

Most random reinterpretations of symbols will be gibberish.
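The "most reinterpretations are gibberish" claim can be illustrated with a quick sketch (my own toy example, not from the thread): pick a random re-reading of the alphabet and apply it to a sentence. Nearly every such mapping, out of the 26! possible ones, destroys the meaning.

```python
import random

# Pick one arbitrary reinterpretation of the 26 letters. With a fixed
# seed the shuffle is reproducible, but any seed gives "a" random one.
random.seed(0)
letters = "abcdefghijklmnopqrstuvwxyz"
shuffled = random.sample(letters, len(letters))
remap = dict(zip(letters, shuffled))   # a bijection: still a valid "encoding"

text = "most reinterpretations of symbols are gibberish"
garbled = "".join(remap.get(c, c) for c in text)  # non-letters pass through
print(garbled)  # unreadable under almost any such shuffle
```

The remapping is still a perfectly good bijective encoding, which is the point: validity of an encoding says nothing about whether the result means anything to us.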


This reminds me of counter-arguments to Gödel's incompleteness theorem: that the self-referential formula you find is tied to a particular choice of Gödel numbering, and that this is somehow suspicious.


How do you then account for the fact that, say, an industrial robot can construct a car?

Clearly the encoding is not entirely arbitrary, then, at least when we seek to actually embody the computer in the real world. Also, the behavior of such an embodied computer is independent of any observer. The industrial robot assembles the same metal object whether some human interprets it as a car or some alien sees it as a work of art.


The same way I account for the fact that people can use different languages to communicate and cooperate to build things, including cars.

Language is arbitrary in the same way, because it is basically just a way of encoding information. There's nothing essential about words that gives them their meanings.



