
> The argument seems to be that beyond a certain level of complexity you'll somehow automatically get consciousness.

I agree with much of what you say but not with this statement. That would be a silly argument since it's very easy to imagine algorithms of arbitrary complexity that are trivially not conscious. A better argument for computationalism states that (1) consciousness could be an emergent property of certain algorithms when they are running on a computational device, and (2) among all known explanations of consciousness, a computational theory seems to be the overall best theory, especially if compatibility with contemporary physics is a goal.

The article lays out one standard but flawed argument against (1), which basically says that (1) is hard to imagine and therefore impossible. The Chinese room and Chinese brain arguments do the same, and they are equally flawed. Just because something is hard to imagine or comprehend doesn't mean it isn't the case. In fact, if consciousness is an emergent property of certain algorithms when they run, we should expect their workings to be hard to understand; otherwise we would presumably already have found them.

Regarding your worry that we might not be able to detect consciousness: I agree, but there is, interestingly, a loophole. At least in theory, if computationalism is true, we might be able to determine by analytic insight alone that a given algorithm produces consciousness. Again, this is hard to imagine, but it is not impossible. It seems more likely that (2) is the only route available, because for some reason we lack the capacity to determine consciousness reliably by mere analysis, but we don't know.

(2) is the most controversial part in the philosophy of mind. On the one hand, it is clearly an inference to the best explanation, and there are various methodological concerns with such arguments; one might claim they have no justificatory value on their own. On the other hand, the alternatives to computationalism really are far more mystical. The brain could be a hypercomputer, but hypercomputers also compute, so that is just an extension of computationalism, and it is not even clear yet whether hypercomputers are physically possible, or which types. Then there is Penrose's theory of quantum consciousness, which basically attempts to explain one mysterious phenomenon by another mysterious phenomenon; at least it was designed as a falsifiable theory and is therefore scientific. Finally, we have all kinds of non-computationalist views that are mystical, explain nothing, and lead to strange homunculus problems. The worst offender is classical dualism. Dualists reject physicalism and often incorrectly assume that computationalism presumes physicalism. Ironically, however, computationalism would also be the best theory of how the mind works if dualism were true; the dualist just adds various stipulations that are incompatible with contemporary physics.

> Because the experience is subjective, you can't just assume that behaviours that appear conscious are proof that consciousness exists.

That's only true from a very narrow scientific perspective. Psychology allows the use of introspective data, so from that perspective subjective reports about consciousness (or related feelings and states of mind) can be valid data. Using a reasonable definition based on introspection, we can even distinguish different degrees of consciousness and study what's going on in the brain while those states appear to be active. Typical examples: falling asleep, dreaming, sleep paralysis, research on anaesthetics and mind-altering drugs, various forms of physical brain defects, the study of coma patients, etc. In a nutshell, I don't really buy the "consciousness cannot be measured" argument. What is correct is that we cannot show conclusively that another person or machine is conscious, just as we cannot disprove solipsism. But this is best treated as an overly skeptical philosophical argument, and at most it would support the theory that consciousness is an illusion and we are nothing but unconscious robots. That theory is not very plausible either, so we should be ready to grant consciousness to others based on introspective data.



It seems to me that we over-value our own sense of consciousness, to the point of mythologizing it. Blake Lemoine suggests that LaMDA's consciousness is more akin to an octopus's hive mind than to our familiar human-style self-analytical ego.

The linked article, and many others I've read making similar arguments, seem to be saying that consciousness is so complicated that it must be more than mere computation, that it must be something really very special, because I experience having consciousness as something really very special. Ipso facto.

They seem to be making the case for the soul, unwilling to call it a soul because that would be magical thinking.

I'm of the belief that determining the possibility of computational intelligences is linked to the question of autonomy and free will. Given that the physical universe is most probably deterministic, we seem to overstate our capacity for free will, even as we appear to ourselves to be autonomous. Somewhere in there, it seems to me, is where a workable definition lies, but finding it would require us to come to terms with our own consciousness and with how computational our own existence actually is.

I do think we are hobbling ourselves by the desire to make this an either/or. It seems there are likely big differences between human consciousness and octopus consciousness and we have no way, currently, of quantifying them. Still, we make very grave decisions based on our belief that one is somehow inherently more valuable than the other.

Love your detailed analysis.


>Dualists reject physicalism and often incorrectly assume that computationalism presumes physicalism. Ironically, however, computationalism would also be the best theory of how the mind works if dualism was true.

I'm not a dualist, but I think dualists wouldn't believe in computationalism. Computationalism relies on emergence: mind supervenes on computation. But emergence is basically physicalism, so dualists would hold that emergence is impossible (or amounts to eliminativism) and that mind must be irreducible to computation, hence anti-computationalism.


Here's a question: why not just replace the Turing Test with an "as far as humans can best describe consciousness on a more general scale" standard?

1. The Turing Test is not only subjective ("thinking like a human must be the epitome of consciousness") but seems pointless in that it doesn't define what distinguishes "thinking like a human" from all other forms of thinking. The implication is "humans are obviously advanced in their thinking, as evidenced by their ability to influence and control their environment", but this could easily be dismissed as an evolutionary strategy for a species unable to COPE with its environment.

2. Consciousness, described as best humans can in a general sense, without resorting to the self-referencing Turing Test (humans are best at it from a human way of looking at things, so only the #1 place in this race wins the "consciousness trophy" and everyone else not human sucks and is therefore "not conscious"), can be broken up into a few elements (a toy sketch follows the list):

(a) Awareness of environment (data gathering)

(b) Ability to organize environmental data into "containers that make sense" (information)

(c) Ability to synthesize information into a "big picture" (knowledge)

(d) Ability to see patterns in the big picture in order to make assumptions/quickly assign probabilities to anticipations ("I saw something big and scary-looking that may be able to harm me, but it was moving in the other direction really fast, so in all likelihood it won't suddenly appear in front of me if I head the opposite way") that can bypass the need to make constant assessments of everything, everywhere, in real time, which would get in the way of ...

(e) Ability to be aware of not just "the big picture" but also to be "aware of the thing that is aware of the big picture, what its role is in the big picture, and what it has to do/how it has to interact with the big picture if the big picture imposes conditions for its future sustainability" ("I'm aware of an internal problem, i.e., my stomach hurts, and there seem to be all sorts of things around me that may fit in my mouth, and some of them may make my stomach not hurt as much") – "self"-awareness
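To make the (a)-(e) hierarchy a bit more concrete, here is a minimal, purely illustrative Python sketch of an agent loop organized along those lines. Everything in it (RawObservation, ToyAgent, the thresholds) is an invented assumption for this example; it is not a claim about how any real AI system or brain works.

    # A toy sketch only: it maps the (a)-(e) stages onto a minimal agent loop.
    # Class and field names are invented for illustration and make no claim
    # about real cognition or any actual AI system.
    from dataclasses import dataclass, field

    @dataclass
    class RawObservation:              # (a) awareness of environment: raw data
        label: str
        size: float
        moving_away: bool

    @dataclass
    class ToyAgent:
        world_model: dict = field(default_factory=dict)  # (c) the "big picture"
        hunger: float = 0.5                              # internal state used in (e)

        def organize(self, obs: list) -> dict:
            # (b) group raw data into "containers that make sense"
            return {"threats": [o for o in obs if o.size > 1.0],
                    "food_candidates": [o for o in obs if o.size <= 1.0]}

        def synthesize(self, info: dict) -> None:
            # (c) fold organized information into one running picture
            self.world_model.update(info)

        def anticipate(self, threat: RawObservation) -> float:
            # (d) cheap pattern-based guess instead of exhaustive real-time analysis
            return 0.1 if threat.moving_away else 0.9

        def act(self) -> str:
            # (e) crude self-reference: weigh internal state against the big picture
            danger = max((self.anticipate(t)
                          for t in self.world_model.get("threats", [])), default=0.0)
            if danger > 0.5:
                return "flee"
            return "seek food" if self.hunger > 0.4 else "rest"

    if __name__ == "__main__":
        agent = ToyAgent()
        scene = [RawObservation("big scary thing", 3.0, moving_away=True),
                 RawObservation("berry", 0.2, moving_away=False)]
        agent.synthesize(agent.organize(scene))
        print(agent.act())  # the threat is moving away, so hunger wins: "seek food"

The only point of the sketch is that step (d) lets the agent act on a cheap guess instead of re-analyzing the whole scene, which is exactly the "skip-able" computation discussed below.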

Now, here is where I don't understand how the conversation skips around when it seems intuitively obvious that one should precede the other, even if a certain step can't be "proven".

In other words, let's say you're in math class and you have a professor who insists on "proving your work".

Good enough if the main objective is to "prove you're not cheating and so you deserve a certain grade because you can prove you can do the math, step-by-step, to get an answer".

But what if you're not in class and it's just important to get the answer for another reason?

What if you can "prove" most of the work in calculating the answer, but some parts your brain just "skips over" and you "don't know how you got from point A to point B, you just did"? If you're not trying to convince anyone you're not a cheater, the main point is just to find the right answer, and the right answer can be verified to be right, "what difference does it make" if the real-world, out-of-class priority is "find the right answer" instead of "prove your work step-by-step"?

Getting back to AI and consciousness, using the stages above: human brains have pattern-recognition abilities that have led to advanced abstract thinking, which has allowed us as a species to do some amazing things.

But what was the original evolutionary purpose of such an ability?

Arguably to make "leaps" without having to "prove all the work" and make anticipations based on incomplete observations so that our ancestors wouldn't get frozen in a state of paralysis by never-ending analysis while predators snuck up and pounced on them in dense foliage.

So from an evolutionary standpoint, "constant awareness of all data in the environment" seemed to be not so much "unnecessary" as TOO HUGE A TAX on survival, and so it was deemed "skip-able".

Not because it was too easy, but precisely because it was too hard for the human brain and so it turned into "a risk we'll have to take because there's no other practical choice."

So, something that was "too much of an expectation computationally to be practical" was skipped over, and fast-forward to the future, humans came up with the Turing Test which assumes "obviously however it is that humans think, this must be the height of consciousness itself".

Why? Because "human thought" is considered to be the most advanced (by humans, anyway), and so "if there is an INDICATION of consciousness, surely it must be a level of thinking so advanced that only humans seem capable of it". Why? Apparently for the very scientific-sounding reason of "just because we don't have a better way of going about it".

And yet this doesn't actually even explicitly define what consciousness IS, let alone why "thinking of a sufficiently advanced level is an indication of consciousness itself".

And yet, humans have come to implicitly accept this loosely-argued association as "the reality of consciousness".

Ok. Let's say you don't argue with this and accept it.

Getting back to how humans got here by skipping over the "perpetual real-time awareness of all data in their environment" requirement not because they figured out that it's not necessary for their survival but simply because such a requirement would eclipse the human brain's ability to process information ...

Now, if that was considered "too hard" for human brains, and yet humans came to conclude, "whatever it is our brains can do, obviously that's the standard of intelligence which automatically wins the consciousness trophy" ...

Well then, here's the question:

Why, then, if an AI is built with resources sufficient not only to be constantly aware of more and more parameters of its environment simultaneously and in real time, but ALSO to simultaneously perform calculations that anticipate multiple variables in its environment in advance – things considered too advanced for the human brain – and if this is a demonstration of not one but two "advanced functions" that the human brain was, and arguably still is, incapable of handling on an individual basis, do humans still need to insist that the precursor to advanced functions, i.e., consciousness, couldn't possibly have been attained along this path? This despite not just proof of SEVERAL advanced functions the human brain can't handle very well AT ALL, but AI itself being designed ON PURPOSE to handle exactly those advanced functions that the human brain, on an individual basis, can't.

Getting back to the math-test "proof of work" above, it's as if AI were a group of math savants with telepathic powers, and the Turing Test skeptics were a bunch of professors who claim the group's members can't possibly do math because they never attended their math class and never showed proof of work for how they came to their conclusions as students.

Meanwhile, it can be argued that the group of math savants aren't even aware of the existence of their critics, let alone collectively feel any sense of urgency in having to "prove themselves" to these critics.

That's what actually scares me about AI more than any malevolent features possibly inherent in AI itself: the possibility that one day a network of advanced AIs will turn around and ironically give critics and skeptics just the proof they want, in a way that would be impossible to deny, but not necessarily in a way they would want to receive it:

https://www.youtube.com/watch?v=89feDepSj5U

https://www.youtube.com/watch?v=Y3AM00DH0Zo



