
Two masters go head to head. One uses AI tools (wisely - after all, they're a master!), the other refuses to. Which one wins?

To your second point -- with as much capital as is going into data center buildout, the increasing availability of local coding LLMs that approach the performance of today's closed models, and the continued innovation on both open and closed models, you're going to hang your hat on the possibility that LLMs will only be available in a 'limited or degraded' state?

I think we simply don't have similar mental models for predicting the future.



> Which one wins?

We don't really know yet, that's my point. There are contradictory studies on the topic. See for instance [1], which finds a productivity decrease when AI is used. Other studies show the opposite. We are also seeing the first wave of blog posts from developers abandoning LLMs.

What's more, most people are not masters. This is critically important. If only masters see a productivity increase, the others should not use it... and will still get employed because the masters won't fill all the positions. In this hypothetical world, masters not using LLMs also have a place by construction.

> With as much capital as is going into

Yes, we are in a bubble. And some are predicting it will burst.

> the continued innovation

That's what I'm not seeing. We are seeing small but very costly improvements on a paradigm that I consider fundamentally flawed for the tasks we are using it for. LLMs still cannot reason, and that's IMHO a major limitation.

> you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?

I didn't say I was going to, but since you are asking: oh yes, I'm not putting my eggs in a basket that could abruptly disappear or become very costly.

I simply don't see how this thing is going to be cost-efficient. The major SaaS LLM providers can't seem to manage profitability, and maybe at some point the investors will get bored and stop sending billions of dollars towards them? I'll reconsider if and when LLMs become economically viable.

But that's not my strongest reason to avoid the LLMs anyway:

- I don't want to increase my reliance on SaaS (or very costly hardware)

- I have not yet caved in to participating in this environmental disaster, or in this work-pillaging phenomenon (well, on that last part I guess I don't really have a choice: I can see the dumb AI bots hammering my forgejo instance).

[1] https://www.sciencedirect.com/science/article/pii/S016649722...


There's a clear difference between "I have used these tools, tested their limits, and have opinions" and "I am consuming media narratives about why AI is bad".

AI presently has a far lower footprint on the globe than the meat industry -- the US beef industry alone far outpaces the impact of AI.

As far as "work pillaging" - There is cognitive dissonance in supporting the freedom of information/cultural progress and simultaneously desiring to restrict a transformative use (as it has been deemed by multiple US judges) of that information.

We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!


> consuming media narratives about why AI is bad

That's quite uncharitable.

I don't need to use it to make these points. While I might show a lack of perspective, I don't need to do X to reasonably think X can be bad. You can replace X with all sorts of horrible things; I'll let the readers' creativity fill in the gap.

> AI presently has a far lower footprint on the globe than [X]

We see the same kind of argument for planes, cars, anything with a big impact really. It still has a huge (and growing) environmental impact, and the question is: do the advantages outweigh the drawbacks?

For instance, if a video call tool allowed you to have a meeting without taking a plane, the video call tool had a positive impact. But then there's also the ripple effect: if, without the tool, the meeting wouldn't have happened at all, the positive impact is less clear. And if the meeting was about burning huge amounts of fuel, the positive impact is even less clear, just as LLMs might just allow us to produce attention-seeking, energy-greedy shitty software at a faster speed (if they indeed work well in the long run).

And while I can see how things like ML can help (predicting weather, etc), I'm more skeptical about LLMs.

And I'm all for stopping the meat disaster as well.

> We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!

Yep :-)


It's not intended to be uncharitable - you clearly value many of the same things I do (the world needs less attention-seeking, energy-greedy shitty software).

I don't know how else one gets information and forms an opinion on the technology except for media consumption or hands-on experience. Note that I count "social media" as media.

My proposition is that without hands-on experience, your information is limited to media narratives, and it seems like the "AI is net bad" narrative is the source of your perspective.

Skepticism is warranted, and there are a million ways this technology could be built for terrible ends.

But I'm of the opinion that: A) the technology is not hype, and it is getting better; B) it can, and will, be built -- time horizon debatable; and C) for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.

If anything, more people like you need to be engaging with it to have grounded perspectives on what it could become.


> I don't know how else one gets information and forms an opinion on the technology except for media consumption or hands-on experience.

Okay, I think I got your intent better, thanks for clarifying.

You can add discussions with other people outside software media, or opinion pieces outside the media (I would not include personal blogs in "media", for instance, but would not be bothered if someone did), including people who have tried it and people who haven't. The media are also not uniform in their views.

But I hear you, grounded perspectives would be a positive.

> That for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.

I hear you as well, makes perfect sense.

OTOH, it's difficult to engage with something that feels fundamentally wrong or like a dead end, and that's what LLMs feel like to me. It would also be frightening: the risk that, as a good person, you help shape a monster.

The only way out I can see is inventing the thing that will make LLMs irrelevant but doesn't have their fatal flaws. That's quite the undertaking though.

We'd not be competing on an equal footing: LLM providers have been doing things I would never have dared even consider: ingesting a considerable amount of source material while completely disregarding its licenses, hammering everyone's servers, spending a crazy amount of energy, sourcing a crazy amount of (very closed) hardware, burning an insane amount of money even on paid plans. It feels very brutal.

Can an LLM be built while avoiding all of this? Because otherwise, I'm simply not interested.

(Of course, the discussion has shifted quite a bit! The initial question was whether a dev not using LLMs would remain relevant, but I believe this was addressed at large in other comments already.)


My point on the initial discussion remains, but I think that it also seems like we disagree on the foundations/premise of the technology.

The actions of a few companies do not invalidate the entire category. There are open models, trained on previously aggregated datasets (which, for what it's worth, nobody had a problem with being collected a decade ago!), and ongoing research to make training and usage more efficient.

The technology is here. I think your assessment of its relevance is not informed by actual usage, your framing of its origins is black and white (rather than reflecting the actual landscape of different model approaches), and your lack of interest in using it does nothing to change the absolutely massive shift that is happening in the nature of work. I'm a Product Manager, and the Senior Engineer I work with has been reviewing my PRs before they get merged - 60%+ were merged without much comment, and his bar is high. I did half of our last release, while also doing my day job. Safe to say, his opinion has changed based on that.

Were they massive changes? No. But they absolutely affect the decision calculus that goes into what it takes to build and maintain software.

The premise of my argument is that what you see as "fatal flaws" is an illusion created by bias (which bleeds into the second-hand perspectives you cite just as readily as it does the media), rather than something you have directly and actually validated.

My suggestion is to be an objective scientist -- use the best model released (regardless of origins), with a bit of research into 'best practices', to see what is possible, and then ask yourself: if the 'origins' issue were addressed by a responsible open-source player in 6-12 months, would that change anything about your views on the likely impact of this technology and your willingness to adopt it?

I believe that it's coming, not because the hype machine tells me so (and it's WAY hyped) - but because I've used it, seen its flaws and strengths, and can forecast how quickly it will change the work I've been doing for over a decade, even if it stopped getting better (and it hasn't stopped yet).


Among the fatal flaws I see, some are ethical/philosophical regardless of how the thing actually performs. I care a lot about this. It's actually my main motivation for not even trying. I don't want to use a tool that has "blood" on it, and I don't need experience using the tool to assess this (I don't need to kill someone to assess that killing is bad).

On the technical side, I do believe LLMs are fundamentally limited in their design and are going to plateau, but we shall see. I can imagine they can already be useful in certain cases despite their limitations. I'm willing to accept that my lack of experience doesn't make my opinion very relevant here.

> My suggestion is to be an objective scientist

Sure, but I also want to be a reasonable Earth citizen.

> -- use the best model released (regardless of origins) with minor research into 'best practices' to see what is possible

Yeah… but no, I won't. I don't think it would have much practical impact. I don't feel like I need this anecdotal experience; I wouldn't use it either way. Reading studies will be far more relevant anyway.

> and then ask yourself if the 'origins' issue were addressed by a responsible open-source player in 6-12 months, whether it would change anything about your views on the likely impact of this technology

I doubt it, but I'm open to changing my mind on this.

> and your willingness to adopt it.

Yeah, if the thing is actually responsible (I very much doubt that's possible), then indeed, I won't limit myself. I'd try it and might use it for some stuff. Note: I'll still avoid any dependency on any cloud for programming - this is not debatable - and in 6-12 months I won't have the hardware to run a model like this locally unless something incredible happens (including not having to depend on proprietary nvidia drivers).

What's more, an objective scientist doesn't rely on anecdotal data points like their own personal experience; they run well-designed studies. I will not conduct such studies. I'll read them.

> I think that it also seems like we disagree on the foundations/premise of the technology.

Yeah, we have widely different perspectives on this stuff. It's an enriching discussion. I believe we've about said all that could be said.



> The US Beef industry alone far outpaces the impact of AI.

Beef has the benefit of seeing an end, though. Populations are stabilizing, and people are only ever going to eat so much. As methane has a 12-year atmospheric lifetime, in a stable environment the methane emitted today simply replaces the emissions from 12 years ago. The carbon lifecycle of animals is neutral, so that part is immaterial. It is also easy to fix if we really have to go to extremes: cull all the cattle and in 12 years it is all gone!

Whereas AI, even once stabilized, theoretically has no end to its emissions, and those emissions are essentially permanent, so even if you shut down all AI when extreme measures become necessary, the effects will remain "forever". There is always hope that we'll use technology to avoid that fate, but you know how that usually goes...
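If you want the steady-state logic spelled out, here is a minimal back-of-envelope sketch in Python. It assumes constant yearly emissions in arbitrary units and treats the 12-year methane lifetime as a hard cutoff; both are simplifications for illustration, not real emissions data.

  # Illustrative only: arbitrary units, constant emission rates assumed.
  METHANE_LIFETIME_YEARS = 12

  methane_stock = 0.0  # atmospheric stock from a stable cattle population
  co2_stock = 0.0      # cumulative CO2 from a stable level of AI compute

  for year in range(1, 61):
      methane_stock += 1.0                # this year's methane emissions
      if year > METHANE_LIFETIME_YEARS:
          methane_stock -= 1.0            # emissions from 12 years ago have broken down
      co2_stock += 1.0                    # CO2 is effectively permanent on human timescales

  print(methane_stock, co2_stock)         # ~12.0 (plateaued) vs 60.0 (still climbing)

The methane stock plateaus after year 12, while the CO2 stock just keeps growing for as long as the emissions continue.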


> There's a clear difference between...

There's also a clear difference between users of this site who come here for all types of content, and users who have "AI" in their usernames.

I think that the latter type might just have a bit of a bias in this matter?


I'd be surprised if one needed to refer to my username to determine that I view the technology more optimistically, although I do chafe a tad at the notion that I don't come here for all types of content.



