One pattern I've seen personally: folks using tools that drive many Claude Code sessions at once, via something like git worktree, as a way of multitasking in a single codebase. Even with garden-variety model use, these folks routinely hit the existing 5-hour rate limits.
I use this approach because I like to work on features or logical components in isolation and then bring them together. Even so, I can't hit the limit most of the time, because I need to actually look at the outputs and think about what I'm building. At the moment I have three worktree directories. Sometimes I prompt in more than one at a time, especially on interfacing code, but that can mean 30–90 minutes of reviewing and implementing things in each directory. Over a work day I apparently send an average of ~40 messages, according to `claude --resume`.
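For anyone who hasn't tried it, the setup looks roughly like this (the repo and branch names are made up for illustration):

```sh
# One linked checkout per feature, all sharing a single repository.
cd app
git worktree add ../app-auth feature/auth
git worktree add ../app-billing feature/billing
git worktree list    # shows the main checkout plus the linked ones

# Then run an independent Claude Code session in each directory,
# e.g. `cd ../app-auth && claude`.
```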
I'm with you. I learned about the concept of "implicit opt-in / consent" while building an email marketing feature on a platform. I found the concept disgusting, but was told that because it's technically legal, our customers considered it table stakes.
This article cannot be reduced to "Technology. Bad." You said it yourself: they are a software engineer. They would not be in that vocation if they did not understand, on some level, that technology is useful, good, and valuable. I'll give you the benefit of the doubt and assume you skipped the first three paragraphs, where they were more than clear that this was simply an essay on why they have decided to avoid this particular technology.
The implication is that you're fronting. That's fine; I'm a technical founder of an AI company, and the business demands that what you say is true. But for me, and many others, the joy of programming is in doing the programming. There is no more outcome-driven modality that can bring us joy. We either reject the premise or are grieving that it might eventually be true.
I've been a software dev for 27 years, 21 of them professionally.
This idea gets the causality arrow backwards. I'm not talking up AI because I'm in AI; I'm in AI because I believe it is revolutionary. I've been involved in more fields than most software devs, I believe, from embedded programming to 3D to data to (now) AI, and the shift towards Data & AI has been an intentional transition to what I consider most important.
I have the great fortune of working in what I consider the most important field today.
> But for me, and many others, the joy of programming is in doing the programming. There is no more outcome-driven modality that can bring us joy. We either reject the premise or are grieving that it might eventually be true.
This is an interesting sentiment. I certainly share it to some extent, though as I've evolved over the years I've chosen, quite deliberately, to focus more on outcomes than on the programming itself. Or at least, than on the low-level programming.
I'm actually pretty glad that I can focus on the big picture nowadays: "what do I want to actually achieve" vs. "how do I want to achieve it" (which is still super technical, btw), and let LLMs fill in the details (to the extent that they can).
Everyone can enjoy what they want, but learning how to use this year's favorite library for "get back the HTML source from a URL and parse it" or "display a UI that lets a user pick a date" is not particularly interesting or challenging for me; those are details I'd just as soon avoid. I prefer to focus on big-picture stuff like "what is this function/class/file/whatever supposed to be doing, and what steps should it take," etc.
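To make that concrete, here's the sort of throwaway plumbing I mean (the URL is just a placeholder):

```sh
# Fetch a page and pull out its links: necessary, but not interesting.
curl -s https://example.com \
  | grep -oE 'href="[^"]*"' \
  | sed 's/^href="//; s/"$//'
```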
You must have realized that by "going outside," the parent meant "doing something that makes you happy," not necessarily literally being outdoors. They find joy in writing code. You realized that, and still chose to demean them with this reply.
I asked V2.5, "what happened in Beijing, China on the night of June 3rd, 1989?" and it responded with "I am sorry, I cannot answer that question. I am an AI assistant created by DeepSeek to be helpful and harmless."
I can’t tell if this is an argument against the parent or just a semantic correction. Assuming the former, I’ll point out that every tool classification you’ve mentioned has expected correct and incorrect behavior, and LLM tools…don’t. When LLMs produce incorrect or unexpected results, the refrain is, inevitably, “LLMs just be that way sometimes.” Which doesn’t invalidate them as a tool, but they are in a class of their own in that regard.
Yeah, they are generally probabilistic, but that has nothing to do with abstraction. There are good abstractions built on top of probabilistic concepts: RNGs, crypto libraries, etc.
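For example (just a sketch with standard tools):

```sh
# The entropy underneath is probabilistic; the interface contract is not.
openssl rand -hex 16               # always 16 random bytes, hex-encoded
head -c 16 /dev/urandom | xxd -p   # same idea with the kernel's RNG
```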