I don't think they're confused; I think they're approaching it as general AI research because of the uncertainty about how the models might improve in the future.
They even call this out a couple times during the intro:
> This feature was developed primarily as part of our exploratory work on potential AI welfare
> We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future