
> Current state-of-the-art models are, in my experience, very good at writing boilerplate code or very simple architecture especially in projects or frameworks where there are extremely well-known opinionated patterns (MVC especially).

Yes, kind of. What you downplay as "extremely well-known opinionated patterns" actually refers to standard design patterns that are well established and tried and true. You know, what competent engineers do.

There's even a basic technique that consists of prompting agents to refactor code to comply with best practices, because this helps agents reason about your project by lining it up with known patterns.
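
For illustration, here's a minimal sketch of that kind of refactoring prompt using the OpenAI Python SDK. The model name, file path, and prompt wording are all placeholders, not a recommendation of any particular setup:

    # Sketch: ask a model to refactor a module toward well-known patterns.
    # Assumes the OpenAI Python SDK (v1+) is installed and OPENAI_API_KEY is set.
    # "app/orders.py" is a placeholder path in your own project.
    from openai import OpenAI

    client = OpenAI()

    with open("app/orders.py") as f:
        source = f.read()

    prompt = (
        "Refactor the following module to follow standard MVC conventions and "
        "common best practices (consistent naming, small functions, clear "
        "layering). Return only the refactored code.\n\n" + source
    )

    resp = client.chat.completions.create(
        model="gpt-4.1",  # any capable model works; this name is just an example
        messages=[{"role": "user", "content": prompt}],
    )

    # Print the suggested refactor for review before applying anything.
    print(resp.choices[0].message.content)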

> What they are genuinely impressive at is parsing through large amounts of information to find something (eg: in a codebase, or in stack traces, or in logs).

Yes, they are. It helps if a project is well structured, clean, and follows best practices. Messy projects that are inconsistent and have evolved into big balls of mud can and do nudge LLMs into outputting garbage based on the garbage they were fed. Once, while working on a particularly bad project, I noticed GPT-4.1 wasn't even managing to keep variable names for domain models consistent.

> But this hype machine of 'agents creating entire codebases' is surely just smoke and mirrors - at least for now.

This really depends on what your expectations are. A glass-half-full perspective clearly points to the fact that yes, agents can and do create entire codebases. I know this to be a fact because I've already done it just for shits and giggles. A glass-half-empty perspective, however, will lead people to nitpick their way into asserting that agents are useless at creating code because they once prompted one to build a Twitter clone and it failed to set the right shade of blue. YMMV, and what you get out is proportional to the effort you put in.


