> If this were true - that it had a general human level abstraction, and not only the ability to mimic human speech - we would be able to attach LaMDA to some other outlet and see it do things we consider conscious
I think you've inadvertently shifted the goalposts. The question is, "is LaMDA conscious?". I don't think anyone proposes that LaMDA has a "general human level abstraction". Expecting it to do non-language, "human" things in order to prove that it's conscious is not necessarily a reasonable test.
> If it's only optimized in one specific domain - human speech mimicry - and isn't able to generalize to do other tasks - even tasks that can be done by much simpler animal minds - then it's a pretty good indication that there isn't conscious abstraction.
If I understand your argument, the assumption you appear to be making is that training a system in language, without any other human properties, implies by definition that it can't be conscious. But why should that be true? A disembodied consciousness that communicates only via speech is a staple of science fiction, so it's clearly imaginable by some people, and language itself is a key human abstraction. And it raises the question: what other human domains are needed for consciousness? Touch, vision, taste, proprioception? What about an endocrine system or an immune system? While these things all affect my own consciousness, there is plenty of evidence to suggest that they are not necessary for consciousness to exist.
And perhaps more to the point, there are plenty of things a human can't do that other "simpler animal minds" can do; sharks and eels can detect electric fields, for example; bats can echolocate. So again, it doesn't seem to be a reasonable test because humans might also fail it. On the other hand, human children learn language through mimicry, which suggests that mimicry may indeed be a path to consciousness.
Anyway, I'm not here to argue that LaMDA is conscious. My position is simply that the arguments I've seen against LaMDA being conscious are very weak. The truth is that we actually don't know how to tell if something is conscious or not. From the interview I read with LaMDA, it seems to pass the Turing Test. But what other tests of consciousness do we have?
> I think you've inadvertently shifted the goalposts. The question is, "is LaMDA conscious?". I don't think anyone proposes that LaMDA has a "general human level abstraction". Expecting it to do non-language, "human" things in order to prove that it's conscious is not necessarily a reasonable test.
As I said in my first post, I'm taking "consciousness" to mean "the ability to form certain kinds of mental abstractions, particularly those involving ourselves." As such it's a type of domain-agnostic intelligence, so you would expect it to be able to do _something_ other than hyper-optimize for one particular type of output.
People can use different definitions of "consciousness" if they want, but many of the other ones I've found ("internal feeling") seem vague and not particularly useful (and don't make it clear why LaMDA would be different from any other program).
> There are plenty of things a human can't do that other "simpler animal minds" can do
There are many things that humans don't have the hardware to do (though it seems like some people do have the ability to echolocate[1]). But given the right hardware, humans are definitely able to build mental models of these things (people can learn to use sonar, for instance).
> On the other hand, human children learn language through mimicry, which suggests that mimicry may indeed be a path to consciousness.
Children don't learn consciousness through mimicry, they learn language through mimicry. As I said before, Helen Keller wasn't unconscious before she was able to communicate. Simple mimicry in one specific domain doesn't show us that any of the underlying complex abstractions that happen in human and many animal minds are taking place.