
This is like the reverse centaur form of coding. The machine tells you what to make, and the human types the code to do the thing.

Well, when put like that it sounds pretty bad too.

I was thinking more that the human would tell the machine what to make. The machine would help flesh out the idea into actual requirements, and make any decisions the humans are too afraid or indecisive to make. Then the coding can start.


Clearly AI is not a bubble, look how good it is at predicting the stock market!

No, I think it would be far easier to pick 100 flies each from a single bowl of soup than to pick all 1000 flies out of a 50 gallon drum.

You don’t get to fix bugs in code by simply pouring it through a filter.


As far as I can tell, this is false. The CDC did not offer guidance which said that protests should be treated differently from other outdoor events. If you can demonstrate otherwise, please do so.


> How much of a longevity guarantee do you need from a distro that is used for gaming, of all things?

Games are something I do to relax, and I want as little friction as possible when playing them. For tech projects and work stuff, having to mess with the OS and move away from deprecated things isn't such a big deal; it's part of the work. But I want games to just work as much as possible, and I don't want to have to find a new distro, install it, and set everything up again on my gaming PC.

Despite Windows sucking in so many ways, it is the OS with the most assurance that a game will work without fuss. I am happy to see Linux closing this gap.


> Video is only limited by our capacity to produce enough of it at a decent quality, otherwise humanity is definitely not looking back fondly at BBSes and internet forums

Video is limited by playback speed. It is a time-dependent format. Efforts can be made to enable video to be viewable at a range of speeds, but they are always somewhat constrained. Controlling video playback to slow down and rewatch certain parts is just not as nice as dealing with the same thing in text (or static images), where it’s much easier to linger and closely inspect parts that you care more about or are struggling to understand. Likewise, it’s easier to skim text than video.

This is why many people prefer transcripts, or articles, or books over videos.

I seriously doubt that people would want to switch from text-based forums to video if only video were easier to make. People enjoy writing for the way it inspires a different kind of communication and thought. People like text so much that they write in journals that nobody will ever see, just because it helps them organize their thoughts.


You (and I) live in an entirely different world from that of regular people, who read at most one book per year and definitely do not write journals that nobody will ever see.

You're talking about 10-20% of the population, at most.


And AI skeptics are waiting to see the proof in the pudding. If we have a new tool that makes hundreds of thousands of devs vastly more productive, I expect to see the results of that in new, improved software. So far, I'm just seeing more churn and more bugs. It may well be the case that in a couple years we'll see the fruits of AI productivity gains, but talk is cheap.


The proof is in the feature velocity of devs/teams that use it, and in the layoffs due to efficiency gains.

I think it's very hard to convince AI skeptics, since for some reason they feel more threatened by it than the rest. It's counterproductive and will hinder them professionally, but then it's their choice.


Without rigorous, controlled studies, I'm not ready to accept claims of velocity, efficiency, etc. I'm a professional software engineer, and I have tried various AI tools in the workplace, both for code review and development. I found personally that they were more harmful than helpful. But I don't think my personal experience is really important data here, just like I don't think yours is. What matters is whether these tools actually do something, or whether instead they just make some users feel something.

The studies I've seen--and there are very few--seem to indicate the effect is more placebo than pharmacological.

Regardless, breathless claims that I'm somehow damaging my career by wondering whether these tools actually work are going to do nothing to persuade me. I'm quite secure in my career prospects, thank you kindly.

I do admit I don't much like being labeled an "AI skeptic" either. I've been following developments in machine learning for like 2 decades and I'm familiar with results in the field going back to the 1950s. You have the opportunity here to convince me, I want to believe there is some merit to this latest AI summer. But I am not seeing the evidence for it.


You say you've used AI tools for code review and development, but do you ever just use ChatGPT as a faster version of Google for things like understanding a language you aren't familiar with, finding bugs in existing code, or generating boilerplate?

Really I only use ChatGPT and sometimes Claude Code; I haven't used these special-cased AI tools.


> You have the opportunity here to convince me, I want to believe there is some merit to this latest AI summer. But I am not seeing the evidence for it.

As I said, the evidence is in companies not hiring anymore, since they don't need as many developers as before. If you want rigorous, controlled studies, you'll get them in due time. In the meantime, maybe just look into the workflows of how people are using these tools.

re AI skeptics: I started pushing AI in our company early this year, and one of the first questions I got was "are we doing this to reduce costs". I fully understand and sympathize with the fact that many engineers feel threatened and feel they are being replaced. So I clarified it's just to increase our feature velocity, which was my honest intention, since of course I'm not a monster.

I then asked this engineer to develop a feature using Bolt, and he partially managed to do it, but in the worst way possible. His approach was to spend no time on planning/architecture and to just ask the AI to do it in a few lines. When he hit bugs, he would ask the AI "to fix the bug" without even describing the bug. His reasoning was that if he had to do this prep work, then why would he use AI. Nonetheless he burned through an entire month's worth of credits in a single day.

I can't find the proper words, but there's a certain amount of dishonesty in this attitude that really turns me off. Like TurboTax sabotaging tax reforms so they can rent-seek.


> If you want rigorous controlled studies you'll get it in due time.

I hope so, because the alternative is grim. But to be quite honest, I don't expect it'll happen, based on what I've seen so far. Obviously your experience is different, and you probably don't agree, which is fine. That's the great thing about science: when done properly, it transcends personal experience, "common sense", faith, and other imprecise ways of thinking. It obviates the need to agree. You have a result, and if the methodology is sound, then in the famous words of Dr. Malcolm: "well, there it is." The reasons I think we won't get results showing AI tooling meaningfully impacts worker productivity are twofold:

(1) Early results indicate it doesn't. Experiences differ of course but overall it doesn't seem like the tools are measurably moving the needle one way or the other. That could change over time.

(2) It would be strongly in the interests of companies selling AI dev tools to show, clearly and inarguably, that the things they're selling actually do something. Quantifying this value would help them set prices. They must be analyzing this problem, but they're not publishing or otherwise communicating their findings. Why? I can only conclude it's because the findings are not favorable.

So given these two indications at this point in time, a placebo-like effect seems most likely. That would not inspire me to sign a purchase agreement. This makes me afraid for the economy.


A month's worth of credits? What does that mean?


I'm pretty sure it is the commercial demand for data from AI companies. It is certainly the popular conception among sysadmins that it is AI companies who are responsible for the wave of scrapers over the past few years, and I see no compelling alternative.


> and I see no compelling alternative.

Another potential cause: it's way easier for pretty much anyone connected to the internet to "create" their own automation software by using LLMs. I'd wager even the less capable LLMs could handle "Create a program that checks this website every second for any product updates on all pages" and give enough instructions for the average computer user to run it without much thought or consideration.

Multiply this by every person with access to an LLM who wants to "do X with website Y" and you get an order-of-magnitude increase in traffic across the internet. This has been possible since, what, sometime in 2023? Not sure if the patterns would line up, but it's just another guess at the cause(s).
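For a sense of what that looks like in practice, here is a minimal sketch (in Python, with a placeholder URL) of the kind of naive poller an LLM might hand a non-programmer: no caching, no conditional requests, no backoff.

    import time
    import hashlib
    import urllib.request

    URL = "https://example.com/products"  # placeholder target site

    def fetch_hash(url):
        # Download the entire page and hash it to detect any change.
        with urllib.request.urlopen(url) as resp:
            return hashlib.sha256(resp.read()).hexdigest()

    last = fetch_hash(URL)
    while True:
        time.sleep(1)  # one full page fetch per second, forever
        current = fetch_hash(URL)
        if current != last:
            print("Product page changed!")
            last = current
        # No ETag / If-Modified-Since, no robots.txt check, no backoff:
        # each copy of this script adds roughly 86,400 requests per day.

Every person who runs one of these is effectively a small, permanent scraper that the site operator never agreed to host.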


> This essentially turns the operation of a robot into a kind of video game, where inputs are only needed in a low-dimensional abstract form, such as "empty the dishwasher" or "repeat what I do" or "put your finger in the loop and pull the string"

I don't really understand: how is this like a video game? What about these inputs is "low-dimensional"? How does what you describe interact with "high-level control agents like SIMA 2"? Doesn't SIMA 2 translate inputs like "empty the dishwasher" into key presses or interaction with some other direct control interface?


Say you want to steer an android to walk forward. You need to provide angles or forces or voltages for all the actuators for every moment in time, so that's high-dimensional. If you already have certain control models, neural or not, you can instead just press forward on a joystick. So what I mean by low-dimensional input is when someone steers a robot using a controller. That's got like, idk, 10-20 dimensions max. And my understanding is that SIMA 2, when it plays No Man's Sky or whatever, basically provides such low-dimensional controls, like a video game. Companies like Figure and Tesla are training models that can do tasks like folding clothes or emptying the dishwasher given low-dimensional inputs like "move in this direction and tidy up". SIMA has the understanding to provide these inputs.
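To make the contrast concrete, here is a toy sketch in Python. The actuator count, control rate, and gait function are all made up, and a real controller would be a learned policy rather than a sinusoid:

    import numpy as np

    NUM_ACTUATORS = 28  # hypothetical humanoid joint count
    CONTROL_HZ = 500    # hypothetical low-level control loop rate

    def gait_policy(command, t):
        # Stand-in for a control model (learned or hand-built).
        # command: (forward, turn) in [-1, 1], the low-dimensional input.
        # returns: NUM_ACTUATORS joint targets, the high-dimensional output.
        phase = 2 * np.pi * t
        offsets = np.linspace(0, np.pi, NUM_ACTUATORS)
        # Toy expansion: phase-shifted sinusoids scaled by forward speed.
        return command[0] * np.sin(phase + offsets)

    joystick = (1.0, 0.0)  # "walk forward": just two numbers from the human
    for step in range(3):
        targets = gait_policy(joystick, step / CONTROL_HZ)
        print(targets.shape)  # (28,), emitted CONTROL_HZ times per second

The human (or a SIMA-like agent) supplies the two numbers; the control model fills in the other 28, hundreds of times a second.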


There are many discussions of what sets a high-trust society apart from a low-trust society, how a high-trust society enables greater cooperation and positive collective risk-taking, and how the United States is currently descending into a low-trust society.

"Random blog can do whatever they want and it's wrong of you to criticize them for anything because you didn't make a mutual commitment" is low-trust society behavior. I, and others, want there to be a social contract that it is frowned upon to violate. This social contract involves not being dishonest.

