
great. Spotify just removes things all the time (things I actively listen to and work on for my jazz practices one day just go "poof" because they didn't want to pay the record company anymore), and as a company they do not deserve the role of "keeper of all the world's music". They don't give a shit, and they'd vastly prefer we all listen to their AI-generated royalty-free crap and Joe Rogan.

oh....no, not really, no, the world needs GPS, so, yeah. this is not like scrooge mcduck telling you to be at work on time. scrooge still has a windup watch

Paging MacKenzie Scott....

from some of the engineers I've debated this with, I think some of them have just dug in their heels at this point and decided they're never going to use LLM tools, period, and are clinging to the original arguments without really examining the reality of the situation. In particular this "the LLM is going to hallucinate subtle bugs I can't catch" one. The idea that LLMs make subtle mistakes that are somehow more subtle, insidious and uncatchable compared to any random 25 pull requests you get from humans is simply ridiculous. The LLM makes mistakes that stick out to you like a sore thumb, because they're not your mistakes. The hardest mistakes to catch are your own, because your thinking patterns are what made them in the first place.

The biggest ongoing problem with LLMs for code is that they have no ability to express low confidence in solutions where they don't really have an answer; instead they just hallucinate things. Claude will write ten great bash lines for you, but then on the eleventh it will completely hallucinate an option on some linux utility you hardly have time to care about, where the correct answer is "these tools don't actually do that and I don't have an easy answer for how you could do that". At this point I am quick to notice when Claude gets itself into an endless loop of thought, which is a sign that I'm going about something the wrong way. Someone less experienced would have a very hard time recognizing the difference.


> The idea that LLMs make subtle mistakes that are somehow more subtle, insidious and uncatchable compared to any random 25 pull requests you get from humans is simply ridiculous.

This is plainly true, and you are just angry that you don't have a rebuttal


I didn't say the LLM does not make mistakes, I said the idea that a reviewer is going to miss them at some rate that is any different from mistakes a human would make is ridiculous.

Missing in these discussions is what kinds of code people are talking about. Clearly if we're talking about a dense, highly mathematical algorithm, I would not have an LLM anywhere near that. We are talking about day-to-day boilerplate / plumbing stuff, the vast majority of boring grunt work that is not intellectually stimulating. If your job is all Carnegie Mellon-level PhD algorithm work, then good for you.

edit: I get that it looks like you made this account four days ago to troll HN on AI stuff. I get it, I have a bit of a mission here to pointedly oppose the entrenched culture (namely the extreme right-wing elements of it). But your trolling is careless and repetitive enough that it looks like.....is this an LLM account instructed to troll HN users on LLM use? funny


> I certainly wouldn’t want my children getting exposed to books that normalise trans ideology, for example.

fortunately "trans ideology" is a nonexistent boogeyman made up by whatever vile youtube videos or FOX news you're watching, so there's no worry about such books existing


because that would suggest something very bad is happening in the US and the HN party line is "this is nothing unusual, typical woke [1] panic attack over nothing, now please get back to your HN job of trying to win VC money"

[1] https://paulgraham.com/woke.html


At least in my mind it's unfair because the books are not in any way banned. Anyone can get them. They're more available than perhaps any time in history. The school's decision not to stock them may merit criticism, but the books are hardly "banned" in the traditional sense of the word.

99.99% of all books ever are not going to be available at your local library. But we don't consider those to be "banned" either. Here, the difference is that these books were selected and stocked in the past, but were removed due to political pressure - or these books weren't available, but a ruling from up above blanket banned their libraries from being able to consider them in the first place. It's frustrating to see so many people in this comment section equate these two.

Just because you can find those books online or elsewhere doesn't mean that the rulings banning them from school libraries aren't about trying to restrict access to that information.


Yes, there was a selection; it reflected the sensibilities of the previous political power. Now the current power doesn't like those books that much, so they are not selected.

As far as I'm concerned, if we really wanted to do things right, any book in a school library should be no less than a hundred years old. This way, no current politics.


how long before they start skimming OSS projects that are public but nonetheless have GitHub Sponsors income? I mean, that's money right there for them, right

Wasn't that the key concern of Zig moving off GitHub?

dunno, but this is only Actions. You can use GitHub without being dependent on Actions.

SQLAlchemy has its own frozendict which we've had in use for many years; these days it's a pretty well-performing Cython implementation, and I use it ubiquitously. It would be a very welcome addition to the stdlib.
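Not SQLAlchemy's actual implementation, but a minimal pure-Python sketch of the general pattern (the class name and every identifier below are just illustrative) looks something like this:

    from typing import Any

    class FrozenDict(dict):
        """Illustrative immutable dict: construction works, mutation raises."""

        def _readonly(self, *args: Any, **kwargs: Any) -> None:
            raise TypeError(f"{type(self).__name__} object is immutable")

        # block every mutating method inherited from dict
        __setitem__ = __delitem__ = _readonly
        clear = pop = popitem = setdefault = update = _readonly

        def __hash__(self) -> int:
            # hashable as long as all the values are hashable, like a frozenset
            return hash(frozenset(self.items()))

    params = FrozenDict({"echo": True, "pool_size": 5})
    print(params["pool_size"])   # 5
    # params["echo"] = False     # raises TypeError

The hashability is the part that matters most in practice: an immutable mapping can participate in cache keys or live inside sets, which a plain dict can't.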

This proposal is important enough that I chimed in on the thread with a detailed example of how SQLAlchemy uses this pattern:

https://discuss.python.org/t/pep-814-add-frozendict-built-in...


> In the agricultural sector, labor shortages are increasing the need for automated harvesting using robots.

This is about Japan, but like the US, Japan has a restrictive immigration policy and an aging population that isn't replacing itself, which is at the core of this issue. Japan has recently been toying with expanding immigration in the area of health care workers [1], but like in the US, there really isn't a labor shortage issue if immigration policy is liberalized.

So this is, like so many other things, a complex and mediocre technological solution to what's actually a political issue.

[1] https://www.bpb.de/themen/migration-integration/regionalprof...


I agree about immigration, but the world has a large amount of very fertile land in places with very high costs of living. Bringing in large numbers of new immigrants at ultra low pay will have big consequences in most high-cost countries. It's worked well in the US, but that's because of our (former) identity as a nation of immigrants and the massive overlap between US and Latin American culture. In other nations, the outcome could very well be a racially/culturally incompatible underclass working the lowest paying and least consistent jobs, with little-to-no chance of fully integrating.

I can understand culturally incompatible, but what on earth is "racially incompatible"?

> working the lowest paying and least consistent jobs, with little-to-no chance of fully integrating.

It depends who the immigrants are. If your immigration laws favour highly skilled immigrants that is not going to happen.

In the UK people who live in ethnically mixed areas tend to integrate. In fact, I think most people integrate but the minority who do not are just more noticeable and used politically (not by just one side either).


> I can understand culturally incompatible, but what on earth is "racially incompatible"?

It's racists, being racist, and likely conflating skin color with cultural differences.


Some of this is showing in a lot of places already. Cultural adoption as part of migration is important and you can only bring in so many migrants while maintaining anything resembling a national identity. Not to mention secondary effects of bringing a literal under-class of migrant workers into a society already facing the aftermath of heavy inflation combined with wage stagnation.

but Japan is actually relaxing immigration rules due to the need for younger workers. I bet they can pay those workers pretty well for less than what the robots' R&D, maintenance, and lost productivity cost

Long term... not sure that I agree. The cost of creating robots is going to go down. It's already relatively cheap to produce most of the robot; it's more down to the software development itself at this point. Also, even if a robot costs 2x what you would pay a person in a year and works at half the speed of a person, that robot can effectively work 4x as many hours a week as a person can.
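As a rough back-of-envelope (every number below is an illustrative assumption, not data from the article):

    # rough back-of-envelope; every figure here is an illustrative assumption
    person_hours_per_week = 40
    robot_hours_per_week = 160          # roughly 24/7 minus downtime, ~4x
    robot_relative_speed = 0.5          # half a person's pace

    person_weekly_output = person_hours_per_week * 1.0
    robot_weekly_output = robot_hours_per_week * robot_relative_speed
    print(robot_weekly_output / person_weekly_output)   # 2.0x a person's output

    annual_wage = 40_000                # assumed
    robot_price = 2 * annual_wage       # "2x what you would pay a person in a year"
    # at double the output, the robot produces a year's worth of human labor value
    # in about six months, so the purchase price pays back in roughly a year
    # (ignoring maintenance and any downtime beyond the uptime assumption above)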

My bigger concern is that a conglomerate like ConAgra comes in for robot farming and only leases access to these machines instead of farms being able to buy/maintain/adapt them themselves, leading to one more point of pressure against smaller farms in favor of larger conglomerates squeezing every bit of value from the middle out.


Nope, it's a technological issue. Increasing immigration might address some of the symptoms of the issue, but it does nothing to address that a human being still needs to do this labor. Frankly even if you were to liberalize immigration laws, convincing people to upend their lives and move to a high cost of living country where cultural integration is difficult at best just to pick tomatoes is not exactly a trivial task. Even if you do get people to come for menial labor, as you say there are plenty of other areas like healthcare where labor is in high demand, so you're likely still going to be faced with labor shortages in less desirable fields. Immigration is a treatment, automation is a cure.

the main thing "rare" engineers would have in common is that they think for themselves and don't need preachy blog posts with ambiguously sexist AI graphics to tell them how.

I don't see anything "sexist", not by any stretch. You may want to consider that you're projecting.

I assume they thought the person on top of the mountain is a man instead of the same woman with her hair tied back.

because it's AI and badly drawn, it's difficult to tell. A real artist with intent to state it one way or the other would make sure it was unambiguous. which is my point

you may want to consider my use of the word "ambiguous" and consider what you're projecting by ignoring it
