big statement that doesn’t hold up under any technical scrutiny. “AI”, meaning neural networks, is used reliably in production all over the place: signal filtering/analysis, anomaly detection, background blurring, medical devices, and more
assuming you mean LLMs, this still doesn’t hold up: it depends on the system around the model. naively asking ChatGPT to construct a legal brief is a stupid use of the tool. constructing a system that reliably queries known databases and points you to the relevant records is not (rough sketch of what i mean below)
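to make that concrete, here’s a toy sketch of the pattern (assumptions: a sqlite table called `docs`, and `call_llm` as a hypothetical stand-in for whatever model API you’d actually use; the point is the shape of the system, not the specific calls):

```python
# Toy sketch of the "system around the model" pattern: the LLM never
# free-associates; it only summarizes rows retrieved from a known database,
# and every answer carries a pointer back to the source rows.
import sqlite3


def call_llm(prompt: str) -> str:
    # Hypothetical stub; swap in a real model call. What matters is what
    # goes INTO the prompt, not which vendor serves it.
    return f"[model summary of: {prompt[:60]}...]"


def grounded_answer(db: sqlite3.Connection, question: str, term: str) -> str:
    # Retrieval step: pull candidate rows from a database we trust.
    rows = db.execute(
        "SELECT id, title, body FROM docs WHERE body LIKE ?",
        (f"%{term}%",),
    ).fetchall()
    if not rows:
        # Refusing is a feature: no source, no answer.
        return "No matching records found."
    context = "\n".join(f"[doc {r[0]}] {r[1]}: {r[2]}" for r in rows)
    answer = call_llm(
        f"Answer using ONLY these records, citing [doc N]:\n{context}\n\nQ: {question}"
    )
    # Return the citations alongside the answer so a human can verify.
    return f"{answer}\nSources: {', '.join(f'doc {r[0]}' for r in rows)}"


if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
    db.execute(
        "INSERT INTO docs (title, body) VALUES ('Smith v. Jones', "
        "'Holding: a contract requires mutual assent.')"
    )
    print(grounded_answer(db, "What did Smith v. Jones hold?", "mutual assent"))
```

the design choice that matters: the model only ever sees rows pulled from a store you trust, and the caller gets citations it can check. swap the LIKE query for real full-text or vector search and the shape stays the same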
Another reason the public hates AI is that a cult has developed around it of people who deny its fallibility and insist, with unshakable faith, that it will make their socially destructive fantasies come true.
You know, AI can still be fallible and destructive.
Business leaders are almost always willing to compromise quality for cost reductions (offshore call centers with accent issues), or to take a relatively satisfying job and refactor it into a stressful one (e.g. cutting half the team and expecting the other half to take up the slack). They don't need AI to do it, but AI will let them go farther with those impulses.