> There's no reason to expect an adder circuit to be aware that it's adding numbers. There's no reason to expect an implementation of AlphaGo to know that it's playing Go. This should be extendable without limitation.
What about an AI that creates models of what humans are thinking to predict their behaviour, and is able to turn this upon itself (i.e. theory of mind)? Is there a story somewhere following this idea as a justification for why humans might have consciousness?
Perhaps consciousness is just a byproduct of our ability to anticipate the outcomes of our decisions and reflect on the decisions we have made (that itself is a whole other ball of wax [1]).
Consciousness is a sort of neuroreflective Narcissus: we end up navel-gazing at our own existence as a mind in the universe.