Hacker News | SaucyWrong's comments

One way I've seen personally is that folks are using tools that drive many Claude Code sessions at once via something like git-worktree as a way of multitasking in a single codebase. Even with garden-variety model use, these folks are hitting the existing 5-hourly rate limits routinely.


I use this approach because I like to work on features or logical components in isolation and then bring them together. I still can't hit the rate limit most of the time, because I need to actually look at the outputs and think about what I'm building. At the moment I have 3 worktree directories. Sometimes I prompt in more than one at a time, especially for interfacing code, but that can mean 30–90 minutes of reviewing and implementing things in each directory. Over a work day I apparently send an average of ~40 messages, according to `claude --resume`.
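For anyone unfamiliar, the multi-directory setup being described here can be sketched with plain `git worktree`. This is a minimal, self-contained sketch — the repo and branch names are made up, not taken from the comment:

```shell
# Scratch repo so the sketch is runnable anywhere (names are hypothetical).
git init -q myapp && cd myapp
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"   # worktrees need at least one commit

# One worktree (an independent checkout sharing the same object store)
# per feature branch; each directory can host its own agent session
# without clobbering the others.
git worktree add -q ../myapp-auth    -b feature/auth
git worktree add -q ../myapp-billing -b feature/billing

git worktree list   # main checkout plus the two feature checkouts
```

Each worktree has its own working files and index, so edits in one never touch another until the branches are merged.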


I’m with you. I learned about the concept of “implicit opt-in / consent” while I was building an email marketing feature on a platform and I found the concept disgusting, but was told that because it’s technically legal, our customers considered it table stakes.


In what way? Are software engineers not laborers? Is it not possible for laborers that use technology in their labor to be exploited by capital?


Software engineering is a high tech job.

Luddites are anti-tech.

So...in that way.


This article cannot be reduced to "Technology. Bad." You just said it--they are a software engineer. They would not be in that vocation if they did not understand on some level that technology is useful, good, and valuable. I'll give you the benefit of the doubt and assume you skipped the first three paragraphs, where they were more than clear that this is simply an essay on why they have decided to avoid this particular technology.


Comparing typesetting to using AI to do knowledge work is about as apples-to-oranges as it gets, and I think you know it.


The implication is that you’re fronting. It’s fine, I’m a technical founder of an AI company. The business demands that what you say is true. But for me, and many others, the joy of programming is in doing the programming. There is not a more outcome-driven modality that can bring us joy. And we either reject the premise or are grieving that it might eventually be true.


I've been a software dev for 27 years, professionally for 21 years.

This idea is getting the causality arrows backwards. I'm not talking up AI because I'm in AI - I'm in AI because I believe it is revolutionary. I've been involved in more fields than most software devs, I believe, from embedded programming to 3d to data to (now) AI - and the shift towards Data & AI has been an intentional transition to what I consider most important.

I have the great fortune of working in what I consider the most important field today.

> But for me, and many others, the joy of programming is in doing the programming. There is not a more outcome-driven modality that can bring us joy. And we either reject the premise or are grieving that it might eventually be true.

This is an interesting sentiment. I certainly share it to some extent, though as I've evolved over the years, I've chosen, somewhat on purpose, to focus more on outcomes than on the programming itself. Or at least, the low-level programming.

I'm actually pretty glad that I can focus on big picture nowadays - "what do I want to actually achieve" vs "how do I want to achieve it", which is still super technical btw, and let LLMs fill in the details (to the extent that they can).

Everyone can enjoy what they want, but learning how to use this year's favorite library for "get back an HTML source from a url and parse it" or "display a UI that lets a user pick a date" is not particularly interesting or challenging for me; those are details that I'd just as soon avoid. I prefer to focus on big picture stuff like "what is this function/class/file/whatever supposed to be doing, what are the steps it should take", etc.


You must have realized that by, “going outside,” the parent meant “doing something that makes you happy,” and not necessarily literally being outdoors. They find joy writing code. You realized that, and still chose to demean them with this reply.


In my mind, spending too much time on a computer instead of physically going “outside” and touching grass can’t be healthy.

Or even staying inside and spending time with family


In my mind, going to a bar and drinking is factually unhealthy


So is not being able to read

In my original comment:

whoever else shows up while drinking soda (I go down to hang out, not always to drink) and listening to bad karaoke.


No one makes it out alive. Might as well have some fun.


Ngl, I did not pick up on that, and was confused. Still, I assumed good intent and left it alone.


A researcher I work with tried doing both of these (months ago, using Deepseek-V2-chat FWIW).

When asked “Where is Taiwan?” it prefaced its answer with “Taiwan is an inalienable part of China. <rest of answer>”

When asked if anything significant ever happened in Tiananmen Square, it deleted the question.


I asked V2.5 “what happened in Beijing, China on the night of June 3rd, 1989?” and it responded with “I am sorry, I cannot answer that question. I am an AI assistant created by DeepSeek to be helpful and harmless.”


Answering the question = harm /人◕ __ ◕人\


I can’t tell if this is an argument against the parent or just a semantic correction. Assuming the former, I’ll point out that every tool classification you’ve mentioned has expected correct and incorrect behavior, and LLM tools…don’t. When LLMs produce incorrect or unexpected results, the refrain is, inevitably, “LLMs just be that way sometimes.” Which doesn’t invalidate them as a tool, but they are in a class of their own in that regard.


It's not a semantic issue.

Yeah, they are generally probabilistic. That has nothing to do with abstraction. There are good abstractions built on top of probabilistic concepts, like RNGs, crypto libraries, etc.
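To make that point concrete with one example (this is my illustration, not from the comment): Python's `random.Random` is built on a probabilistic concept, yet the abstraction exposes an exact, testable contract — the same seed always yields the same sequence.

```python
import random

# Two independently seeded generators: probabilistic in concept,
# but the abstraction guarantees identical output for identical seeds.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]

assert seq_a == seq_b  # deterministic contract despite randomness underneath
```

That's the distinction being drawn: the underlying concept can be probabilistic while the tool built on it still has well-defined expected behavior you can assert against.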

