LaMDA can emulate certain behaviour that we associate with consciousness based on our everyday experience, but it also fails to exhibit many other attributes we associate with consciousness. So, is consciousness a requirement for being able to generate what I'd claim is a very, very limited subset of the behaviours we associate with consciousness?
I say no. These language model systems perform very well if you approach them in a non-adversarial way and feed them input similar to their training inputs. As soon as you adopt a more adversarial approach and interrogate them more thoroughly, it all falls apart quickly and spectacularly. It's actually quite easy to steer conversations to the edges of, or beyond, the coverage of their training data and get them to babble helplessly. They're also incapable of performing many very trivial cognitive tasks.
So I can't prove it, any more than I can prove that I'm conscious, but they don't come close to convincing me that they are.