
Working with people who use this stuff a lot has made my current job just so, so much harder in every way, it's astonishing. I used to solve problems with code; now I feel like a hermeneut or a dream analyzer: absent human intention, codebases quickly become these weird piles of different idioms, even before you consider the hallucinations (those have definitely cost me a few sleepless nights now either way).

But I am just venting. All of y'all have clearly won, I get it. I am just grateful I have lived a full life doing things other than computers, so this all isn't too sad, other than the prospect of being poor again.

I will always have my beautiful emacs and I will always be hacking. I will always have my Stallman, my Knuth, my Andy Wingo, my SICP. I feel it is accomplishment enough to have progressed in this career as far as I have, especially as a self-taught developer. But I kinda want to let y'all deal with the slop now; you really seem to like it!

Maybe I'll get another degree now, or just make some silly music and video games again. It's liberating just thinking about being free from this "new way" we work now.

Thanks for all the fish though!



Where I work, we have a directive to use AI as much as we can, wherever possible. I was handed a codebase that's been worked on by many, many, many different people over the past ~2 years and was written almost entirely with AI (starting with GPT-3).

The only way you can deal with the codebase is to fully embrace the AI. Whenever I want to make anything beyond a simple API change, I have to boot up Cursor and give it 5+ files for context and then write a short novel about what I want it to do. I sip some coffee while it chugs away and spits out some changes, which I then have to go and figure out how to test. I'm not fully convinced that the iteration time is any faster, and the codebase is a hot mess.

It also just feels very stifling and frustrating to have to write a ton of prose for something when I'm used to being able to write code to do it! I have to go home and work on other projects without AI just to scratch the itch of actually programming, which is what I fell in love with all those years ago.


It's hard to pick just one point in how messed up this seems, but the first thing that stands out is that the code volume this process produces has to be unsustainable.

Humans themselves tend to write new code instead of reusing old code (a common problem), but with sensible code structure and CI, a codebase will grow at a sustainable rate.

LLMs continuously barf out a stream of new code, never deleting anything. Then you need to feed the barf back in as context, and the cycle surely must continue until it falls apart. How has this not happened yet?


Yes - writing code myself, whenever I need to do something substantial the first question I ask is "is there already a function in this codebase somewhere that does this, or does something close enough to it that I can just tweak it?" and most of the time there is!

AI will just write a new function every time. You can also ask it to write tests, but I think that AI-written test coverage of AI-written code is just asking for trouble. When it breaks, you'll probably just ask an AI to fix it.
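To make the duplication concrete, here's a made-up Python sketch of the pattern I keep running into (the function names are invented, not from any real codebase):

    # Already in the codebase, used in a dozen places:
    def normalize_email(value: str) -> str:
        """Lowercase an email address and strip surrounding whitespace."""
        return value.strip().lower()

    # What the LLM happily adds three files away, because nobody told it
    # the first one exists:
    def clean_user_email(email: str) -> str:
        """Tidy up an email before saving it."""
        return email.strip().lower()

Same behaviour, two names, and now every future prompt has to carry both as context or the model will write a third.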


This is how I think the current bubble will pop. Yes, these are useful tools that we're only just now learning how to use. But Wall Street and the bean counters are going apeshit over the prospect of replacing the (expensive!) humans they currently pay for.

Once the codebases become an unmanageable mess I think the pendulum will swing back, hard.

That should buy some time for anyone not entering the industry right now.


I feel you.

I started programming because I enjoy understanding something deeply and building things with that understanding. I like to write code that works on the first try because that means my mental model is correct. If I end up writing a bug, I try to avoid using a debugger. Instead, I take a step back, analyze what the code is doing and where my model differs from reality and fix it, often while finding more issues in my original understanding.

There are programmers who take a different approach: they write code that takes a naive approach and works 90% of the time, then move on. When a bug manifests (it was there from the start, but as long as it didn't manifest it didn't bother them), they make a naive fix and move on, sometimes fixing the bug for real, sometimes causing new ones. Eventually the number of bugs reaches an equilibrium and they ship it.
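A toy Python example of the difference, invented purely for illustration:

    # The 90%-of-the-time version: works until someone passes an empty list.
    def average(xs):
        return sum(xs) / len(xs)

    # The naive fix once the bug finally manifests: paper over the symptom.
    def average_patched(xs):
        if len(xs) == 0:
            return 0  # silently wrong: 0 is not the average of no data
        return sum(xs) / len(xs)

    # Fixing the mental model instead: an average of no data is undefined,
    # so make the caller handle that case explicitly.
    def average_fixed(xs):
        if not xs:
            raise ValueError("average() of an empty sequence is undefined")
        return sum(xs) / len(xs)

The patched version makes the crash go away; the fixed version makes the misunderstanding go away.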

LLMs are the second approach on steroids. Since they have no understanding, only statistical correlations between tokens, they produce code of the second kind. I mean, they don't even run their code to check that it works. And they sure as fuck don't ask the programmer additional questions, let alone the user. But they do it extremely fast, so it's good from a business perspective. Better to have a shitty product now and beat the competition on advertising than to be second.

I used to like open source because it attracted people of the first kind and there was cooperation instead of competition. But lately every project seems to ask for donations and it increasingly attracts people of the second kind.


I have similar feelings, though maybe a slightly more optimistic take. Obviously the AI hype train hasn't taken us anywhere objectively better. Software has in no way become less buggy (if anything it feels worse in the past few years), and most if not all of the software I use predates the LLM era.

It feels like most developers, en masse, have taken on some masochistic pleasure in deskilling themselves while becoming prompt engineers beholden to OpenAI/MS/Google.

The upside is that those who take the time to learn and improve can write software that most devs have given up hope of ever being able to write. Write the next Magit or org-mode while everyone else is asking AI to generate Tailwind HTML React forms!


> It feels like most developers, en masse, have taken on some masochistic pleasure in deskilling themselves while becoming prompt engineers beholden to OpenAI/MS/Google.

It's a weird/delusional timeline, that's for sure.


I hear you. There was something special about the old days when programming was all about taking your time, thinking through every step, and truly understanding what you were building. I miss the days of punching cards — there was a certain simplicity to it. You’d write your code, feed it into the machine, and if it broke, it was your fault. There was no hiding behind tests or CI/CD pipelines, no auto-fixes or layers of abstraction. It was just you and the machine, and every bug was a lesson you had to learn the hard way. The feedback loop was slow, but it was real.

Now, everything feels automated, fast, and often a bit too dumb. Sure, it’s easier, but it’s lost that raw connection to the work. We’ve abstracted away so much that it’s hard to feel like we’re truly engineering something anymore — it’s more like patching together random components and hoping it holds. I think we lost something when we all started staring at screens all day and disconnected from the hands-on nature of building. There's a lot of slop now, and while some people thrive on that, it’s not for everyone.


Wait a bit, it won't last.


In what way? That those things/people doing that will fail? Or that the tooling will get better so this is no longer a problem? Completely different outcomes with very different consequences.


I think people will realize these AIs are not worth the cost (both money-wise and environment-wise). Right now money is raining down on everything AI, but at some point investors will want a return, most projects will shut down, and we can move on.


I'm inclined to agree with you. There are two directions of pressure. First, the tech just doesn't work as well as it needs to, and it's not clear if that will change in the next decade. Second, the companies are losing money. The investment bet is that the AI companies will make rapid advances in their models such that they'll be able to charge enough to turn a profit before the bubble pops. I'm not convinced.


It is part of the great plan:

- Buy GitHub and devalue all individual projects by soft-forcing most big projects to go there and lose their branding.

- Gamify and make "development" addictive.

- Use social cliques who hype up each other's useless code and PRs and ban critics.

- Make kLOC and plagiarism great again!

This all happened before "AI". "AI" is the last logical step that will finally destroy open source, as planned by Microsoft.


Don't know why this was downvoted. This is an interesting take that I, at least, haven't seen before.


A good conspiracy can be interesting even if it's false, so I don't criticise you for finding it interesting; but I downvoted because I didn't - HN is chock full of "Microsoft bad, Embrace Extend Extinguish" and it's fucking tiresome. I wrote 1700 words on how that comment makes no sense but I'll reduce it:

- Have you ever read a PR and thought "this code is useless" and the result was you deciding that "kLOC is great"? Any way I put those things (Microsoft, kLOC, AI, GitHub, social cliques, ...) together, I don't get anything sensible; Microsoft spent $7.5Bn on GitHub to make kLOC great to help them destroy open source? It's a crackpot word salad, not a reasoned position. At least, if it is a reasoned position, they should post the reasoning.

- GitHub has 200,000,000+ public repositories and makes $1Bn/year in revenue. How will putting AI into GitHub 'finally' destroy open source, and why would Microsoft want to screw up a revenue stream roughly the size of Squarespace's, bigger than DigitalOcean's or Netgear's, and getting on for as big as Cloudflare's?


I mean I didn't say I agreed with it! :D

That said, this conspiracy theory does line up with the broader corporate retreat from open source that's happened over the last few years (e.g. HashiCorp moving to the BUSL, Red Hat ending CentOS, VMware pulling back after Broadcom ate them, etc.).



