AI is inherently unreliable and untrustworthy. In this case, distrust is not just some emotional reaction but pretty well grounded in fact.
Once lawyers learn this from experience, I expect they will move toward legally impressing it upon anyone slow or reluctant to admit as much.
Using technology that is widely known to be flawed for any sort of serious work is a textbook example of "negligence".
The example in the article is genuine negligence. I can't imagine that lawyer did good work even before using AI.
My approach is that you're responsible for anything you ship, and I don't care (within reason) how you generated it. Once it hits production, it's yours, and if it has any flaws, I don't want to hear "Well, the AI just missed it or hallucinated." I don't fucking care. You shipped it, it's _your_ mistake now.
I use Claude Code constantly. It's a great tool. But you have to review the output, make necessary adjustments, and be willing to put your name on it.
Yes, lawyers will take the same approach, as will the courts, I suspect.
Once this is fully developed and ingrained in the popular psyche, people and companies will start to question the value proposition of an unreliable tool. But by that time, tech billionaires will have been significantly enriched by it, and that is the most important part.
big statement that doesn't hold up under any technical scrutiny. "AI", meaning neural networks, is used reliably in production all over the place: signal filtering/analysis, anomaly detection, background blurring, medical devices, and more
assuming you mean LLMs, this still doesn't hold up. it depends on the system around it. naively asking ChatGPT to construct a legal brief is a stupid use of the tool. constructing a system that can reliably query known databases and point you to relevant data is not (sketch below)
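to make that concrete, here's a minimal sketch (Python) of what "the system around it" could look like: the model drafts the prose, but every citation is drawn from a known database, so a fabricated cite can never reach the output. everything here (CASES, find_relevant_cases, the sample records) is a hypothetical stand-in, not a real API.

    # minimal sketch: citations come only from a verified database,
    # never from model output. all names and records are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Case:
        citation: str  # e.g. "123 F.3d 456"
        summary: str

    # stand-in for a verified citation database (e.g. an export from a
    # commercial legal research service)
    CASES = [
        Case("123 F.3d 456", "Negligence standard for professional services."),
        Case("789 U.S. 101", "Duty of care owed by licensed practitioners."),
    ]

    def find_relevant_cases(query: str) -> list[Case]:
        # naive keyword retrieval; a real system would use vector search
        terms = query.lower().split()
        return [c for c in CASES if any(t in c.summary.lower() for t in terms)]

    def draft_with_verified_citations(query: str, llm_draft: str) -> str:
        # attach only citations that exist in the known database, so a
        # hallucinated cite can never appear in the final brief
        hits = find_relevant_cases(query)
        cites = "\n".join(f"- {c.citation}: {c.summary}" for c in hits)
        return f"{llm_draft}\n\nVerified authorities:\n{cites or '(none found)'}"

    print(draft_with_verified_citations(
        "duty of care negligence",
        "[LLM-drafted argument goes here]",
    ))

the point is that reliability comes from the system design around the model, not from trusting the model's raw output.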
Another reason the public hates AI is that a cult has developed around it: people who deny its fallibility and insist, with unshakable faith, that it will make their socially destructive fantasies come true.
You know, AI can still be fallible and destructive.
Business leaders are almost always willing to compromise quality for cost reductions (offshore call centers with accent issues), or to take a relatively satisfying job and refactor it into a stressful one (e.g. cutting half the team and expecting the other half to take up the slack). They don't need AI to do it, but AI will let them go farther with those impulses.
Lawyers use it to draft legal briefs.
Court sanctions lawyers for fake citations generated by AI.
https://natlawreview.com/article/court-sanctions-attorneys-s...