
Ya, you have to shape your code base, and not just that, but get your AI to document your code base and come up with some sort of pipeline to have different AIs check things.
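As a sketch of that "different AIs check things" pipeline (assuming the OpenAI Python client; the model names and diff path are placeholders, not anything from this thread), one model reviews a diff and a second model critiques the review:

    # Hypothetical two-model cross-check; model names are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(model: str, prompt: str) -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    diff = open("change.diff").read()  # placeholder path
    review = ask("reviewer-model", "Review this diff for bugs:\n" + diff)
    critique = ask("checker-model",
                   "Point out anything this review missed or got wrong:\n"
                   + diff + "\n\nReview:\n" + review)
    print(critique)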

It’s fine to be skeptical, and I definitely hope I’m wrong, but it really is looking bad for SWEs who don’t start adopting at this point. It’s a bad bet in my opinion; at least have your F-U money built up within 5 years if you aren’t going all in on it.



Why would you go all in? It seems like there is no learning curve. What is there to learn about using AI to code?


The learning curve is actually huge. If you just vibe code with AI, the results are going to suck. You basically have to reify all of your software engineering artifacts and get the AI to iterate on them and your code as if it were an actual software engineer (one who forgets everything whenever you reboot it, which is why you have to make sure it can re-read those artifacts to get its context back up to speed). So: a lot more planning, design, and test documentation than you would do in a normal project. The nice thing is that the AI will maintain all of it as long as you set up the right structure.
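One way to picture the "reify your artifacts" setup (file names here are hypothetical, just to make the idea concrete): a standing instructions file the AI re-reads at the start of every session, pointing it at the documents that hold the context it forgot:

    # AGENTS.md (hypothetical) -- the entry point each new session
    Before writing any code:
    1. Read docs/DESIGN.md for the architecture decisions already made.
    2. Read docs/PLAN.md for current milestones and open tasks.
    3. Read docs/TESTING.md, then run the test suite.
    After any change, update whichever of these documents it affects.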

We are also still in the early days; I guess everyone has their own way of doing this ATM.


By this point you've burnt up any potential efficiency gains. You spent a lot of hours learning a new tool, which you then have to spend a lot of additional hours babysitting and correcting, so much so that you'll be very far from those claimed productivity gains. Plus the skills you need to verify and fix its output will atrophy. So that learning curve earns you nothing except the ability to put "AI" somewhere on your CV, which I expect will lose a lot of its lustre in 1-2 years' time, when everybody has had enough experience with vibe coders who don't, or no longer can, ensure the quality of their super-efficient output.


This is all bullshit btw.

Speaking as someone with a ton of experience here.

None of the things they do can go without immense effort in validation and verification by a human who knows what they're doing.

All of that extra engineering effort could have been spent making your own infrastructure and procedures far more resilient and valuable, to far more people on your team and to yourself, going forward.

You will burn more and more and more hours over time because of relying on LLMs for ANYTHING non-trivial. It becomes a technical debt factory.

That's the reality.

Please stop listening to these grifters. Listen to someone who actually knows what they're talking about, like Carl Brown.


Care to share some links?

Not this one, presumably: https://en.wikipedia.org/wiki/Carl_Robert_Brown


He's the YouTuber behind "The Internet of Bugs"


That’s interesting, but how much of this, if written down, documented, and made into video tutorials, could be learnt by just about any good engineer in 1-2 weeks?


I don’t see much yet; maybe everyone is just winging it until someone influential gives it a name. The vibe coding crowd have set us back a lot, and really so did the whole leetcode interview fad that we’re only just throwing off. It’s kind of obvious though: just tell the AI to do what a normal junior SWE does (like write tests), but write a lot more documentation, because it forgets things all the time (think of a junior engineer who makes more mistakes, so they need to test more, and remembers nothing).
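As a purely illustrative example (wording invented, not from the thread), the standing instruction might read something like:

    For every task: restate the requirement in docs/PLAN.md, write a
    failing test first, implement until the full suite passes, then
    update any design or test docs you touched. Assume you will
    remember none of this tomorrow, so write everything down.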


The trick is being a good engineer in the first place.


Related concepts in an LLM's latent space sit close to each other, and you find them by asking in the right way, so if you ask like an expert you find better stuff.

For it to work best you should be an expert in the subject matter, or something equivalent.

You need to know enough about what you're making not just to specify it, but to see where the LLM is deviating (perhaps because you needed to ask more specifically).

Garbage in, garbage out is as important as ever.
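An invented contrast to illustrate the point (not prompts from this thread):

    vague:  "My database is slow, make it faster."
    expert: "Postgres 16: this query does a sequential scan on a 40M-row
             table despite an index on (tenant_id, created_at); here is
             the EXPLAIN ANALYZE output. Suggest index or query changes."

The second prompt lands in a much more specific region of the model's latent space, so the answer comes back correspondingly specific.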


I hope you are joking and/or being sarcastic with this comment…


I don't think they really are.

There is, effectively, a "learning curve" required to make them useful right now, and a lot of churn on technique, because the tools remain profoundly immature and their results are delicate and inconsistent. To get anything out of them and trust what you get, you need to figure out how to hold them right for your task.

But presuming that there's something real here, and there does seem to be something, eventually all that will smooth out, and late adopters who decide they want to use the tools will be able to onboard themselves plenty fast. The whole vision of these tools is to make the work easier, more accessible, and more productive, after all. Having a big learning curve doesn't align with that vision.

Unless they happen to make you significantly more productive today on the tasks you want to pursue, which only seems to be true for select people, there's no particular reason to be an early adopter.


fantastic comment! I disagree on two fronts:

- we are far removed from “early adopter” stages at this point

- “eventually all that will smooth out…” assumes that this is eventually going to be some magic that just works. If that actually happens, both early and late adopters will be unemployed.

it is not magic, and it is unlikely to ever be magic. but from my personal perspective, and that of many others I read: if you spend the time (I am now just over 1,200 hours in, I bill it so I track it :) ) it will pay dividends (and will also occasionally feel like magic)


If you had spent those 1,200 hours not using it, you would have matured in your craft 3x more and figured out far better ways of doing things.


been hacking for 3 decades, so well north of 1,200 hours ... in my career, the one trait that always seems to differentiate great SWEs from decent/mediocre/awful ones is laziness.

the best SWEs will automate anything they have to do manually more than once. I have seen this over and over and over again. LLMs have taken automation to another level, and learning everything they can help automate in my work will be worth 12,000+ hours in the long run.


What is this fantasy about people being unemployed? The layoffs we’ve seen don’t seem to discriminate against or in favor of AI adoption; they appear to be moves to shift capital from human workers to capex for new datacenters.

Nothing of the sort appears to be happening, and the idea that a good employer with a solid technical team would start firing people for not “knowing AI” instead of giving them a 2-week intro course seems unrealistic to me.

The real nuts and bolts are still software engineering. Or is that going to change too?


I don't think there will be massive unemployment based on actual "AI has removed the need for SWEs of this level..." kind of talk, but I was specifically commenting on "eventually all that will smooth out, and late adopters who decide they want to use the tools will be able to onboard themselves plenty fast." If this actually did happen (it won't), then we'd all have to worry about being unemployed.




