
> If you wanted to go about proving (even to yourself) that you are not, say, an extremely advanced ML algorithm running on a system that provided synthetic inputs in the form of your senses, how would you go about it?

Well, since an extremely advanced ML algorithm wouldn't want to go about proving to itself that it is not what it is, that would be prima facie evidence against, no? I mean, it's always possible that you are mistaken about what constitutes ML, etc., but assuming you have a reasonable, if flawed, correspondence between your education and reality, the deduction comes pretty readily...

> Further, how would you go about proving your subjective experience was real to someone who doubted it? Say, if they believed they were having a dream or hallucination, or they believed you were incapable of consciousness?

I mean, in practice we don't find this too hard right now if the other person is reasonable (a 15-minute conversation usually suffices), but I imagine from your prior question you're dreaming of, say, a future with robots that routinely pass the Turing test?

Well, the question is what science manages to do in the meantime, of course. If science figures out the correlates of consciousness and understands something about why they need to have the structure they in fact do have, then it becomes a question of “let's see whether you have the hardware that can do this whole conversation thing without consciousness, or the hardware that skips the algorithmic complexity by using consciousness.” But if this proves to be a much tougher nut to crack, then we're stuck with our present crude methods: “How much of my internal structure do you appear to have?”



> Well, since an extremely advanced ML algorithm wouldn't want to go about proving to itself that it is not what it is, that would be prima facie evidence against, no?

This seems like begging the question. Who says an extremely advanced ML algorithm can't 'want' to do this? What even is wanting?

> I mean, in practice we don't find this too hard right now if the other person is reasonable (a 15-minute conversation usually suffices), but I imagine from your prior question you're dreaming of, say, a future with robots that routinely pass the Turing test?

I'm not. These are absolutely situations that can happen now, with people. I am thinking more of cases involving mental and some physical impairments, so "a 15-minute conversation" assumes a lot about the capabilities and clarity of everyone involved.


> Who says an extremely advanced ML algorithm can't 'want' to do this? What even is wanting?

I believe this is the real question about consciousness. If a being were conscious but had no desires, no wishes, not even a will to keep itself alive... it wouldn't bother to do anything; i.e., it would behave exactly like a rock, or anything else non-conscious.

Having desires, wishes, and, dare I say, emotions... is absolutely required for what we think of as consciousness to materialize. But we know that emotions are chemical processes, which perhaps cannot occur outside a biological being. Maybe they can, but it's hard to think of a reasonable way this could work.


A loss function, perhaps?
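
For what it's worth, here is a toy sketch (my own illustration, in plain Python, not anything from the thread) of what that would mean mechanically: the system's only "drive" is to make a single number smaller, and gradient descent is the machinery that acts on that drive.

    # Toy sketch: "wanting" as loss minimization. The only thing this
    # system "wants" is for x to reach the minimum of the loss function.

    def loss(x):
        # The "desire": the system is driven toward x == 3.0, because
        # that is where this function bottoms out.
        return (x - 3.0) ** 2

    def grad_loss(x):
        # Hand-computed derivative of the loss (no framework needed).
        return 2.0 * (x - 3.0)

    x = 0.0
    for _ in range(100):
        x -= 0.1 * grad_loss(x)  # gradient descent: follow the "desire" downhill

    print(f"x = {x:.4f}, loss = {loss(x):.6f}")  # x converges to ~3.0

Whether minimizing a number deserves to be called "wanting" is, of course, exactly the question upthread.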



