This is a real issue. A robot that is fully AI-powered and operates entirely autonomously has a very different risk profile than a teleoperated one.
For example, you can be fairly certain that, given the current state of AI tech, an AI-powered robot has no innate desire to creep on your kids, while a teleoperated robot could very well be remotely operated by a pedophile who is watching your kids through its cameras, or trying to interact with them using the robot itself.
If you are allowing this robot to exist in your home, around your valuables, and around the people you care for, then whether it operates fully autonomously or whether a human operator is connecting through it is an extremely significant difference, one with very large safety consequences.
Think of everyone.[1] Unless your robots are weak as a kitten, they are a danger to any human allowed near them. Robots that are sold for real money to do real work don't walk around, and they are strong enough to crush a human like a bug. Or, if they're built for dexterity, their hands aren't human-like. Zero surgical robots have human-like hands, and for very good reasons.
Can all the extremely smart people developing humanoid robots be wrong? Wrong question: can all of the investors in those companies be wrong? Hell to the yes.
[1] https://en.wikipedia.org/wiki/Think_of_the_children