1. Users and, more importantly, makers of those tools can't consistently predict their behaviour.
2. Steering them requires elaborate procedures that don't guarantee success, and whose effects, and the magnitude of those effects, are poorly understood.
An LLM is a machine spirit through and through. Good thing we have copious amounts of literature from a canonically unreliable narrator to navigate this problem.
When you consider that machine spirits in 40k are a side effect of every computer being infected with a shard of AI, and that some of the best cases are actually complete loyalist AI systems from before the Imperium hiding in plain sight...
"In the sacred tongue of the omnissiah we chant..."
In that universe, though, they only got to this point after a big war against a robot uprising. So hopefully we can skip that part in the real world. :-)