
The thing is that some imagined AI that can reliably produce reliable software will also likely be smart enough to come up with the requirements on its own. If vibe coding is that capable, then even vibe coding itself is redundant. In other words, vibe coding cannot possibly be "the future", because the moment vibe coding can do all that, vibe coding doesn't need to exist.

The converse is that if vibe coding is the future, that means we assume there are things the AI cannot do well (such as come up with requirements), at which point it's also likely it cannot actually vibe code that well.

The general problem is that once we start talking about imagined AI capabilities, both the capabilities and the constraints become arbitrary. If we imagine an AI that does X but not Y, we could just as easily imagine an AI that does both X and Y.





Following similar thinking, there's no world in which AI becomes exactly capable of replacing all software developers and then stops there, miraculously saving the jobs of everyone else next to and above them in the corporate hierarchy. There may be a human, C-suite-driven cost-cutting effort to pause progress there for some brief time, but if AI can do all dev work, there's no reason it can't do all office work to replace every human in front of a keyboard. Either we're all similarly affected, or else AI still isn't good enough, in which case fleets of programmers are still needed, and among those, the presumed "helpfulness" of AI will vary wildly. Not unlike what we see already.

> if AI can do all dev work, there's no reason it can't do all office work to replace every human in front of a keyboard

There are plenty of reasons.

Radiologists aren’t being replaced by AI because of liability. Same for e.g. civil engineers. Coders don’t have liability for shipping shit code. That makes switching to an AI that’s equally blameless easier.

Also, data: the web is first and foremost a lot of code. AI is getting good at coding first for good reason.

Finally, as OP says, the hard work in engineering is actually scoping requirements and then executing and iterating on that. Some of that is technical know-how. A lot is also political and social skills. Again, customers are okay with a vibe-coded website in a way most people aren't okay with even a support chatbot.


> Coders don’t have liability for shipping shit code

What if you're shipping code for a Therac-25?


My bet is that it will be good enough to devise the requirements.

They already can brainstorm new features and make roadmaps. If you give them more context about the business strategy/goals then they will make better guesses. If you give them more details about the user personas / feedback / etc they will prioritize better.

We're still just working our way up the ladder of systematizing that context, building better abstractions, workflows, etc.

If you were to start a new company with an AI assistant and feed it every piece of information (which it structures / summarizes / synthesizes etc. in a systematic way), even with finite context it's going to be damn good. I mean, just imagine a system that can continuously read and structure all the data from regular news, market reports, competitor press releases, public user forums, sales call transcripts, etc. etc. It's the dream of "big data".
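
For a sense of what such a system might look like in the small, here's a minimal sketch (hypothetical names throughout; summarize() stands in for whatever model call you'd actually use): pull documents from a few feeds, condense each into a structured record, and keep a rolling digest that fits within a finite context budget.

    from dataclasses import dataclass

    @dataclass
    class Record:
        source: str   # e.g. "news", "sales_calls", "user_forum"
        summary: str  # short structured digest of the raw document
        tokens: int   # rough size, so a finite context budget can be respected

    def summarize(source, text):
        # Hypothetical placeholder: in practice this would be a model call that
        # extracts only the facts relevant to the business strategy/goals.
        digest = text[:200]
        return Record(source=source, summary=digest, tokens=len(digest.split()))

    def build_context(feeds, budget=4000):
        records = [summarize(src, doc) for src, docs in feeds.items() for doc in docs]
        picked, used = [], 0
        for rec in records:
            if used + rec.tokens > budget:
                break
            picked.append(rec)
            used += rec.tokens
        return picked

    ctx = build_context({
        "news": ["Competitor X launched a budget tier today ..."],
        "sales_calls": ["Prospect asked twice about SSO and audit logs ..."],
    })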


If it gets to that point, why is the customer even talking to a software company? Just have the AI build whatever. And if an AI assistant can synthesize every piece of business information, why is there a need for a new company? The end user can just ask it to do whatever.

Maybe yes. It takes time for those structures to "compact" and for systems to realign.

I agree with the first part, which is basically that 'being able to do a software engineer's full job' is ASI/AGI-complete.

But I think it is certainly possible that we reach a point/plateau where everything is just 'English -> code' compilation, but that 'vibe coding' compilation step is really, really good.


It's possible, but I don't see any reason to assume that it's more likely that machines will be able to code as well as working programmers yet not be able to come up with requirements or even ideas as well as working PMs. In fact, why not the opposite? I think that currently LLMs are better at writing general prose, offering advice, etc., than they are at writing code. They are better at knowing what people generally want than they are at solving complex logic puzzles that require many deduction steps. Once we're reduced to imagining what AI can and cannot do, we can imagine pretty much any capability or restriction we like. We can imagine something is possible, and we can just as well choose to imagine it's not possible. We're now in the realm of, literally, science fiction.

> It's possible, but I don't see any reason to assume that it's more likely that machines will be able to code as well as working programmers yet not be able to come up with requirements or even ideas as well as working PMs.

Ideation at the working PM level, sure. I meant more hard technical ideation - i.e. what gets us from 'not working humanoid robot' to 'humanoid robot', or 'what do we need to do to get a detection of a Higgs boson', etc. etc. I think it is possible to imagine a world where 'English -> code' (for reasonably specific English) is solved but not that level of ideation. If that level of ideation is solved, then we have ASI.


There are a ton of extremely Hard problems there that we are not likely to solve.

One: English is terribly non-prescriptive. Explaining an algorithm in spoken language is incredibly laborious and leaves plenty of room for ambiguity and error. Try reading Euclid's Elements, or really any pre-algebra text, and reproducing its results.

Fortunately there’s a solution to that. Formal languages.
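
To make that concrete (a toy sketch with made-up names, nothing from a real spec): the English request "show the newest items first" says nothing about ties, missing dates, or how to compare them, while the formal version is forced to pin every one of those choices down.

    from datetime import datetime, timezone

    # "Show the newest items first" -- the code has to decide what "newest"
    # means for items with no timestamp, and whether ties keep their original
    # order (they do here: sorted() is stable).
    EPOCH = datetime.fromtimestamp(0, tz=timezone.utc)

    def newest_first(items):
        return sorted(items, key=lambda it: it.get("created_at") or EPOCH, reverse=True)

    items = [
        {"title": "a", "created_at": datetime(2024, 5, 1, tzinfo=timezone.utc)},
        {"title": "b"},  # no timestamp: sorts as oldest
        {"title": "c", "created_at": datetime(2024, 6, 1, tzinfo=timezone.utc)},
    ]
    print([it["title"] for it in newest_first(items)])  # ['c', 'a', 'b']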

Now LLMs can somewhat bridge that gap due to how frequently we write about code. But it’s a non-deterministic process and hallucinations are by design. There’s no escaping the fact that an LLM is making up the code it generates. There’s nothing inside the machine that is understanding what any of the data it’s manipulating means or how it affects the system it’s generating code for.

And it’s not even a tool.

Worse, we can’t actually ship the code that gets generated without a human appendage to the machine to take the fall for it if there are any mistakes in it.

If you’re trying to vibe code an operating system and have no idea what good OS design is or what good code for such a system looks like… you’re going to be a bad appendage for the clanker. If it could ship code on its own the corporate powers that be absolutely would fire all the vibe coders and you’d never work again.

Vibe coding is turning people into indentured corporate servants. The last mile delivery driver of code. Every input surveilled and scrutinized. Output is your responsibility and something you have little control over. You learn nothing when the LLM gives you the answer because you’ll forget it tomorrow. There’s no joy in it either because there is no challenge and no difficulty.

I think what pron is getting at is that there's no need to imagine what these machines could potentially do. I think we should be looking at what they actually do, who they're doing it to, and who benefits from it.


The only reason to imagine that plateau is because it’s painful to imagine a near future where humans have zero economic value.

It's not the only reason; technologies do plateau. We're not living in orbiting cities or flying fusion-powered vehicles around, even though we built rockets and nuclear power more than half a century ago.

Yes, but perhaps technology can't plateau at a point that's beyond vibe coding yet below "the machine does everything", not because technology doesn't plateau but because that point doesn't exist. Technology could plateau before both or after both.

All the current indicators are that AI will plateau far beyond any human capability.

Do you have evidence or empirical arguments to the contrary?


Why is this desirable?

It’s not, it’s horrifying.

But there doesn’t seem to be any off ramp, given the incentives of our current economic system.


What do you mean "come up with the requirements"? Like if self-driving cars got so good that they didn't just drive you somewhere but decided where you should go?

No, I mean that instead of vibe coding - i.e. guiding the AI through features - you'll just tell it what you want in broad strokes, e.g. "create a tax filing system that's convenient enough for the average person to use", or, "I like the following games ... Build a game involving spaceships that I'll enjoy", and it will figure out the rest.

This is the most coherent comment in this thread. People who believe in vibe coding but not in generalizing it to “engineering”... brother the LLMs speak English. They can even hold conversations with your uncle.



