
I strongly agree with both the premise of the article and most of the specific arguments brought forth. That said, I've also been noticing some positive aspects of using LLMs in my day-to-day. For context, I've been in the software trade for about three decades now.

One thing working with AI-generated code forces you to do is read code -- development becomes more a series of code reviews than a first-principles creative journey. I think this can be beneficial for solo developers, since it mimics, and helps one learn, responsibilities that otherwise only exist in teams.

Another: it quickly becomes clear that working with an LLM requires the dev to have a clearly defined, well-structured, hierarchical understanding of the problem. Trying to one-shot something substantial usually leads to that something being your foot. Approaching the problem from the design side -- writing a detailed spec, then implementing sections of it -- helps to define boundaries and interfaces for the conceptual building blocks.

I have more observations, but attention is scarce, so -- to conclude. We can look at LLMs as a powerful accelerant, helping junior devs grow into senior roles. With some guidance, these tools make apparent the progression of lessons the more experienced of us took time to learn. I don't think it's all doom and gloom. AI won't replace developers, and while it's incredibly disruptive at the moment, I think it will settle into a place among other tools (perhaps on a shelf all of its own).



I appreciate your nuanced position. I believe that any developer who isn't reading more code than they are writing is doing it wrong. Reading code is central to growth as a software engineer. You can argue that you'll be reading blander code when reviewing code generated with the aid of an LLM, but I still think you are learning. I've read lots of LLM-generated code and I routinely learn new things: idioms I wasn't familiar with, or library calls I didn't know existed.

I also think that LLMs are an even more powerful accelerant for senior developers. We can prompt better because we know what exists and what not to bother trying.


I don't think it is becoming a series of code reviews; it's more like having something do the prototyping for you. It is great for fixing the blank-page problem, but not something you can review and commit as-is.


In my experience, code reviews involve a fair bit of back-and-forth, iterating with the would-be committer until the code 1) does what it's meant to and 2) does it in an acceptable manner. This parallels the common workflow of trying to get an LLM to produce something usable.


Problem is, to paraphrase Scotty Kilmer, corporations are dead from the neck up. Their conclusion was not that AI will help juniors; it's that they will not hire juniors and will instead ask seniors for the magic "10x" with the help of AI. Even some seniors are getting the boot, because AI.

Just look at the recent news: layoff after layoff from big tech, middle tech, and small tech.



