Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Couldn’t it easily also take over the CEO job? Pretty sure it’s easier than producing code that works and is maintainable.


Given how much these CEOs are hallucinating these days while hopelessly losing money on every venture they pursue, I think AI is leaps and bounds ahead of these idiots for decision making.


Good point. Maybe not for the CEO yet, but a manager without people to manage is a useless thing. So I think corporates will invent new Bullshit Jobs[1] for humans, to keep them in their organization chart.

[1] https://en.wikipedia.org/wiki/Bullshit_Jobs


I once asked a CEO, what a CEO's job involves.

Apparently it's lots of fiduciary duties.

As with driving cars, even if the AI is strictly better at doing these tasks than they are at writing code, mistakes aren't so easy to recover from and can destroy something unrecoverably in a 5-second attention lapse from a human overseer.


So can software, if not more. Like, your healthcare data leaking, bank account losing your money, some legal document getting lost/wrongly issued, .. and then we didn't even talk about actual safety critical applications (which are hopefully not vibe coded) like airplanes/medical device, etc.


AI is software, so in a sense everything that can go wrong with AI must be a subset of things that can go wrong with software.

Lots of software has a test environment. Even in live, e.g. bank account losing your money the transactions can be un-wound.

And that's the difference when it comes to replacing software devs with LLMs vs replacing CEOs with LLMs: it's possible to write the tests and then lock them. And to perform code review before changes are merged into the main branch. And to test against something other than production.

I know the Board can in principle remove a CEO, but is there even a standardised way for a CEO to have a vice-CEO that always checks everything they do, that always tests their commands against a simulation of the company and only lets them proceed if they agree with the outcome?

The point is that "AI as CEO" would be in the category of "business-critical" software, and also that current approaches to AI also lack sufficient guarantees of obligation compliance or sufficient defence against failures, which in the banking example would be things like the AI deciding to save money by deleting the system capable of unwinding incorrect banking transactions.

To the extent this kind of failure mode happens with vibe coding (in the original coining of the term: always accept without reading), it's like letting the LLM modify the unit tests to always succeed regardless of the code.


Well, the same goes for wrong code. One wrong line can cost millions or destroy everything completely, depending on the context. It is also not very easy to recover from.


The two contexts where that applies is "interacts with the outside world and you deployed without tests", and "even though it only affects your own data, you don't have backups and you deployed without tests".


I feel like management roles would be much more easy to automate than dev. It's hilarious how they're trying to sell these products. The only thing they couldn't do is go golf and drink half the time. They would be superior in that regard.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: