
"It" being ChatGPT, in that case. I guess most people know, but not all AI is the same as all other AI, the implementation in those cases matter more than what weights are behind it, or that it's actually ML rather than something else.

With that said, like most technology, it seems to come with a ton of drawbacks and some benefits. While most people focus on the benefits, we're surely about to find out all the drawbacks shortly. Better than social media or not, it's being deployed at a wide scale, so it's less about what each person believes and more about what we're ready to deal with and how we want to deal with it.





> the implementation in those cases matters more

There are currently no realistic ways to rein in these companies or enforce public safety. They have achieved full regulatory capture. Any call for public safety will be set aside, and if it's not, someone will pay off an executive to get an exception.


> There are currently no realistic ways to rein in these companies or enforce public safety

There are: general strikes usually do the trick when the government stops listening to the people. Of course, this doesn't apply to countries that spent decades handicapping unions, syndicates, and other movements, but in modern countries that still pride themselves on democracy, it is possible, given enough people care to do something about it.


I was talking specifically about the USA. Unless something dramatic changes, there will not be a general strike.

Even when unemployment rises to ~15%.


Yes, I'm well aware; I mentioned the US not by name but by its other properties in my earlier comments... I think once a country moves into authoritarianism, there isn't much left but violence to achieve anything. General strikes and unions won't matter much once the military gets deployed against civilians, and you guys are already beyond that point. So GLHF, I hope things don't get too messy, and you're welcome to re-join the modern world once you've cleaned house.

I mean, what you say is not really wrong, but it's also not really relevant to the post (or thread) you're replying to.

It doesn't matter what government is in control: LLMs cannot be made safe from the problems that plague them. Those problems are fundamental to their basic makeup.


It's more about whether we, the citizens, even want this deployed, and under what legal framework, so that it will fit our collective view of what society is.

The "if" is very much on the table at this stage of the political discussion. Companies are trying to railroad everybody past this decision stage by moving too fast. However, this is a momemt where we need to slow down instead and have a good long ponderous moment hinjing about whether we should allow it at all. And as the peoples of our respective countries, we can force that.


Yeah, that's not how technology deployments work, nor have they ever worked. Basically, there is a "cat is out of the bag" moment, and after that it's a free-for-all until things get organized enough for someone to eventually start pushing back on too much regulation. Since we're just past this "cat is out of the bag" moment and way too early for "over-regulation", companies of course ignore all of it and focus on what they always focus on: making as much money while spending as little as possible.

Besides general strikes, there isn't much one can do to stop, pause or otherwise hold back companies and individuals from deploying legal technology any way they see fit, for better or worse.


Well, you're very much wrong about that. The cat can be put back into the bag if we want to. It certainly happened before.

Right now, companies are working extremely hard to give the impression that AI technology is essential. But that is a purposefully manufactured illusion. It's a set of ideas planted in people's heads. Marketing in those megacompanies that introduce new technologies like LLMs and AR glasses to end users is very much focused on reshaping society around their product. They think BIG. We need more awareness that this is happening so that we can push back in a coordinated and meaningful way. And then we can support politicians that implement that agenda.


> Well, you're very much wrong about that. The cat can be put back into the bag if we want to. It certainly happened before.

Name a single technology that was invented, where people figured out the drawbacks were bigger than the benefits, and then humanity just stopped caring about it altogether. Not even the technology with the biggest drawback we've created so far (it could literally make the earth inhospitable if deployed at scale) has apparently been important enough to do that with, so I'm eager to hear which specific cats have been put back in which bags, if you'd entertain me.


> It certainly happened before.

With nuclear weapons, human cloning, chemical weapons, and ozone destruction.

All of these are highly centralized, controlled via big government-scale operations.

How do you propose doing this with GPT-style LLM tech that has been open-sourced (weights and all) and decentralized?


There are plenty of ways. For example, the technology would die completely the moment companies are barred from creating or running it. End users don't have the means to update those models, so they would age and become useless.

How do you reconcile this with the general consensus here that open[-source] models are only 6-12 months behind the leading commercial models?


