
> The agent forgets to free memory just like a human would and has to go back and fix it later.

I highly recommend people learn how to write their own agents. It's really not that hard. You can do it with any LLM, even ones that run locally.

I.e., you can automate things like checking for memory freeing.
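
A minimal sketch of that idea, assuming a local Ollama daemon at its default address (the model name and the prompt are placeholders, not anyone's actual setup):

    import json
    import sys
    import urllib.request

    def ask_local_llm(prompt, model="llama3"):
        # Assumes an Ollama daemon on its default port; any chat-completion
        # style endpoint works the same way. The model name is a placeholder.
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    def check_frees(c_source):
        # One "agent" step: ask the model to flag allocations that are never freed.
        prompt = (
            "List every malloc/calloc in the following C code that is not freed "
            "on every path. Reply with 'OK' if there are none.\n\n" + c_source
        )
        return ask_local_llm(prompt)

    if __name__ == "__main__":
        print(check_frees(open(sys.argv[1]).read()))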





Why would I want to have an extra thing to maintain, on top of having to manually review, debug, and write tests for a language I don't like that much?

You don't have to maintain it. LLMs are really good at following directions.

I have a custom agent that can take Python code, translate it to C, do a refactoring pass to add a mempool implementation (so memory is allocated once at the start of the program and chunks are grabbed from the pool instead of calling malloc), run cppcheck, upload the result to a container, and run it with valgrind.

Been using it since ChatGPT3; the only updates I've made were API changes to call different providers. It doesn't use any agent/MCP/tools machinery either, just pure chat.
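
Not that poster's actual script, but the shape of such a pure-chat pipeline is roughly this (the prompts and tool flags are illustrative, and the container/upload step is omitted):

    import pathlib
    import subprocess
    import tempfile

    def llm(prompt):
        # Stand-in for whatever chat-completion call you use (see the sketch above).
        raise NotImplementedError

    def python_to_checked_c(py_source):
        c_code = llm("Translate this Python program to C:\n\n" + py_source)
        c_code = llm(
            "Refactor this C so all memory is allocated once up front in a pool "
            "and chunks are taken from it instead of calling malloc:\n\n" + c_code
        )

        src = pathlib.Path(tempfile.mkdtemp()) / "out.c"
        src.write_text(c_code)

        # Static analysis; a fuller pipeline would feed failures back into another LLM round.
        subprocess.run(["cppcheck", "--error-exitcode=1", str(src)], check=True)

        # Build and run under valgrind (locally here; the original uploads to a container first).
        exe = src.with_suffix("")
        subprocess.run(["cc", "-g", "-o", str(exe), str(src)], check=True)
        subprocess.run(["valgrind", "--leak-check=full", str(exe)], check=True)
        return c_code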


There's always going to be some maintenance: at the very least the provider API changes you mentioned, and then there's still reviewing and testing the C.

A mempool seems very much like a DIY implementation of malloc; unless you have fixed-size allocations or something else that changes the picture, I'm not sure why I'd want that in the general case.
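
For the fixed-size case, the idea is roughly the toy sketch below (Python only for brevity, with made-up sizes; a real C mempool would carve chunks out of one big malloc'd region):

    class FixedPool:
        # Toy fixed-size pool: one up-front allocation, chunks handed out afterwards.
        # Block and pool sizes are arbitrary placeholders.
        def __init__(self, block_size=64, blocks=1024):
            self._buf = bytearray(block_size * blocks)  # the single allocation
            self._bs = block_size
            self._free = list(range(blocks))            # free-list of block indices

        def alloc(self):
            if not self._free:
                raise MemoryError("pool exhausted")
            i = self._free.pop()
            return i, memoryview(self._buf)[i * self._bs:(i + 1) * self._bs]

        def free(self, i):
            self._free.append(i)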

For "non hacker style" production code it just seems like a lot of extra steps.


> I.e., you can automate things like checking for memory freeing.

Or, if you don't need to use C (e.g. for FFI or platform compatibility reasons), you could use a language with a compiler that does it for you.


Right, a lot of the promise of AI can be (and has been) achieved with better tool design. If we get AI to start writing assembly or machine code, as some people want it to, we're going to have the same problems with AI writing in those languages as we did when humans had to use them raw. We invented new languages because we didn't find those old ones expressive enough, so I don't exactly understand the idea that LLMs will have a better time expressing themselves in those languages. The AI forgetting to free memory in C and having to go back and correct itself is a perfect example of this. We invented new tools so we wouldn't have to do that anymore, and they work. Now we are going backwards, building giant AI datacenters that suck up all the RAM in the world just to make up for lost ground? Weak.

> We invented new languages because we didn't find those old ones expressive enough

Not quite. It's not about being expressive enough to define algorithms; it's about simplification, organization, and avoiding repetition. We invented new languages to automate a lot of the work that programmers had to do in a lower-level language.

C abstracts away handling memory addresses and setting up stack frames like you would in assembly.

Rust makes handling memory more restrictive so you don't run into issues.

Java abstracts away memory management completely, freeing you up to design algorithms without worrying about memory leaks (although apparently you do have to worry about whether your log statements can execute arbitrary code).

JavaScript and Python abstract type definitions away through dynamic typing.

Likewise, OOP, type systems, functional programming, and other paradigms were introduced for better organization.

LLMs are right in line with this. There is no difference between you using a compiler to compile a program, vs a sufficiently advanced LLM writing said compiler and using it to compile your program, vs an LLM compiling the program directly with agentic loops for accuracy.

Once we get past the hype of big LLMs, the next chapter is gonna be much smaller, specialized LLMs with architectures that are more deterministic than probabilistic, and they're gonna replace a lot of tools. The future of programming will be you defining code in a high-level language like Python; the LLM will then infer a lot of the information just from the code (for example, figuring out how variables relate to each other is right in line with what transformers do) and do things like auto-infer types, write template code, and adapt it to the specific needs.

In fact, CPUs already do this to a certain extent - modern branch predictors are basically miniature neural networks.


I use Rust. The compiler is my agent.

Or to quote Rick and Morty, “that’s just rust with extra steps!”


On a related note, I've always regarded Python as the best IDE for writing C. :)

Replace memory with one of the dozen common issues the Rust compiler does nothing about, like deadlocks.

Well, the case would still stand, wouldn't it? Unless C is free of these dozen common issues.

Sure. Or you can let the language do that for you and spend your tokens on something else. Like, do you want your LLM to generate LLVM bitcode? It could, right? But why wouldn't you let the compiler do that?

Unless I'm writing something like code for a video game in a game engine that uses C++, most of the stuff I need C for is compartmentalized enough that it's much faster to have an LLM write it.

For example, the last C code I wrote was TCP over Ethernet, bypassing the IP layer, so I can be connected to the VPN while still being able to access local machines on my network.

If I'm writing it in Rust, I have to do a lot of research, think about code structure, and so on. With LLMs, it took me an hour to write, and that's with no memory leaks or any other safety issues.
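
For context, the layer-2 plumbing underneath that kind of thing (Linux-only AF_PACKET here, needs root; the interface name is a placeholder and the EtherType is just the "local experimental" value, not their actual protocol) looks roughly like:

    import socket
    import struct

    ETH_P_CUSTOM = 0x88B5  # IEEE "local experimental" EtherType, used as a stand-in

    # Raw layer-2 socket: frames go out below the IP stack (Linux only, requires root).
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_CUSTOM))
    s.bind(("eth0", 0))  # interface name is a placeholder

    dst = bytes.fromhex("ffffffffffff")  # broadcast, just for illustration
    src = s.getsockname()[4]             # this interface's MAC address
    frame = dst + src + struct.pack("!H", ETH_P_CUSTOM) + b"hello"
    s.send(frame)

    print(s.recv(1514)[:32])  # read one raw frame back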


Interesting. I find that Claude 4.5 has a ridiculous amount of knowledge and “I don’t know how to do that in Rust” is exactly what it’s good at. Also, have you tried just modifying your route table?

>Also, have you tried just modifying your route table?

The problem is I want to run VNC from my home computer to the server on my work Mac, so I can access everything from one screen and one mouse-and-keyboard combo without having to use a USB switch and a second monitor. With the VPN on, it basically just doesn't allow any inbound connections.

So I run a localhost tunnel: it's a generic Ethernet listener that basically takes data, initiates a connection to localhost from localhost, and proxies the data. On my desktop side, it's the same thing, just in reverse.
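
The relay half of that is basically a generic TCP forwarder; a bare-bones sketch (ports are placeholders, e.g. tunnel in and VNC out, and error handling is omitted):

    import socket
    import threading

    LISTEN_PORT, TARGET_PORT = 9000, 5900  # placeholder ports

    def pump(src, dst):
        # Copy bytes one direction until the sender closes its side.
        while data := src.recv(4096):
            dst.sendall(data)
        dst.shutdown(socket.SHUT_WR)  # propagate EOF to the other side

    def serve():
        srv = socket.create_server(("127.0.0.1", LISTEN_PORT))
        while True:
            inbound, _ = srv.accept()
            outbound = socket.create_connection(("127.0.0.1", TARGET_PORT))
            threading.Thread(target=pump, args=(inbound, outbound), daemon=True).start()
            threading.Thread(target=pump, args=(outbound, inbound), daemon=True).start()

    if __name__ == "__main__":
        serve()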


Do you have any good starting points? For example, if someone had an Ollama or LM Studio daemon running, where would they go from that point?


