> I think you've inadvertently shifted the goalposts. The question is, "is LaMDA conscious?". I don't think anyone proposes that LaMDA has a "general human level abstraction". Expecting it to do non-language, "human" things in order to prove that it's conscious is not necessarily a reasonable test.
As I said in my first post, I'm taking "consciousness" to mean "the ability to form certain kinds of mental abstractions, particularly those involving ourselves." As such, it's a type of domain-agnostic intelligence, so you would expect it to be able to do _something_ other than hyper-optimize for one particular type of output.
People can use different definitions of "consciousness" if they want, but many of the other ones I've found (e.g., "internal feeling") seem vague and not particularly useful (and don't make it clear why LaMDA would be different from any other program).
> There are plenty of things a human can't do that other "simpler animal minds" can do
There are many things that humans don't have the hardware to do (though it seems some people do have the ability to echolocate[1]). But given the hardware, humans are definitely able to form mental models of these things (people can learn to use sonar, for instance).
> On the other hand, human children learn language through mimicry, which suggests that mimicry may indeed be a path to consciousness.
Children don't learn consciousness through mimicry; they learn language through mimicry. As I said before, Helen Keller wasn't unconscious before she was able to communicate. Simple mimicry in one specific domain doesn't show us that any of the underlying complex abstractions that happen in human and many animal minds are taking place.
[1] https://en.wikipedia.org/wiki/Human_echolocation