Does it really matter? Even if a billionaire’s kid gets hooked on Coca-Cola or social media, they still have vastly more resources (therapy, education, support) to overcome it. Meanwhile, kids in underprivileged communities don’t get that safety net. For CEOs like Zuckerberg or Coca-Cola’s leadership, that disparity is just a small price to pay for the profits their products generate.
It's part of parental responsibility to provide enough structure and instill enough discipline in your kids so that they grow into complete persons. Sure, it would be nice if social media were restricted like tobacco, and I'm sure one day it will be, but you can't relegate all responsibility for everything to the state. I don't want to live in a bubble-wrapped society for the sake of the children.
I'm with you. The industry has pivoted from building tools that help you code to selling the fantasy that you won't have to. They don't care about the reality of the review bottleneck; they care about shipping features that look like 'the future' to sell more seats.
It's all about the hardware and infrastructure. If you check OpenRouter, no provider offers a SOTA Chinese model matching the speed of Claude, GPT, or Gemini. The Chinese models may benchmark close on paper, but real-world deployment is different. So you either buy your own hardware to run a Chinese model at 150-200 tps, or give up and use one of the Big 3.
The US labs aren't just selling models, they're selling globally distributed, low-latency infrastructure at massive scale. That's what justifies the valuation gap.
Edit: It looks like Cerebras is offering a very fast GLM 4.6
It doesn't work like that. You need to actually use the model and then go to /activity to see the actual speed. I constantly get 150-200 tps from the Big 3, while other providers barely hit 50 tps even though they advertise much higher speeds. GLM 4.6 via Cerebras is the only one faster than the closed-source models, at over 600 tps.
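If you want to check this yourself rather than trust the advertised numbers, a rough sketch like this works against any OpenAI-compatible endpoint (OpenRouter's is shown here; the model slug and key are placeholders):

```python
# Rough throughput check: stream a completion and time the decode phase.
# The model slug and API key below are placeholders, not recommendations.
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your key here
)

stream = client.chat.completions.create(
    model="z-ai/glm-4.6",  # substitute whatever model you're testing
    messages=[{"role": "user", "content": "Write a 500-word essay about rivers."}],
    stream=True,
)

start, chars = None, 0
for chunk in stream:
    delta = chunk.choices[0].delta.content if chunk.choices else None
    if delta:
        if start is None:
            start = time.monotonic()  # skip time-to-first-token
        chars += len(delta)

if start is None:
    raise SystemExit("no tokens received")
elapsed = time.monotonic() - start
# ~4 characters per token is a common rough heuristic for English text
print(f"~{chars / 4 / elapsed:.0f} tokens/sec (decode only)")
```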
The network effects of using consistently behaving models and maintaining API coverage between updates are valuable, too. Presumably the big labs include their own domains of competence in the training, so Claude is likely to remain very good at coding and to behave in similar ways, informed and constrained by their prompt frameworks. Interactions will continue to work in predictable ways even after major new releases, and upgrades can be clean.
It'll probably be a few years before all that stuff becomes as smooth as people need, but OAI and Anthropic are already doing a good job on that front.
Each new Chinese model requires a lot of testing and bespoke conformance to every task you want to use it for. There's a lot of activity and shared prompt engineering, and some really competent people doing things out in the open, but it's generally going to take a lot more expert work getting the new Chinese models up to snuff than working with the big US labs. Their product and testing teams do a lot of valuable work.
Qwen 3 Coder Plus has been braindead this past weekend, but Codex 5.1 has also been acting up. It told me updating UI styling was too much work and I should do it myself. I also see people complaining about Claude every week. I think this is an unsolved problem, and you also have to separate perception from actual performance, which I think is an impossible task.
Assuming your hardware premise is right (and let's be honest, nobody really wants to send their data to Chinese providers), can't you use a provider like Cerebras or Groq?
Tariffs might keep Chinese EVs out of the US, but they don't stop US influence from fading everywhere else. South America is voting with their wallets, and 'buy American' doesn't work when the price is double and the tech is the same.
Unless the US intends to sanction every country that prioritizes value over US geopolitics, this battle is already lost.
In South America there's also no anxiety over China becoming a superpower, the kind of anxiety that in the US may be an argument against Chinese products.
In fact, China has pretty good relations with most South American countries. Likely better than the US. I wouldn't be surprised if many people view China more favorably.
The average person in the West isn't losing sleep over China either. That anxiety is mostly manufactured by the media pushing the narrative that they are an existential threat. Maybe they are, I don't know. But what I do know is that Western companies love it when they can sell you overpriced products made in China, but panic the moment Chinese companies sell the exact same product at a fair price.
Hmm, I wonder if that might have anything to do with the decades of state sponsored terrorism the US has inflicted on the entire region since the 70s? Maybe it wasn't the best idea to make that "we will coup whoever we want" crashout tweet in between begging for crumbs of latam market share?
How do you expect open source alternatives to exist when they cannot enforce how you use their IP? Open source licenses exist and are enforced under IP law. This is part of the reason AI companies have been pushing hard for IP reform: they want IP law decimated for thee but not for me.
Under copyright law, unless HN's T&Cs override it, anything I write and have written on HN is my IP. And the AI data hoarders used it to train their stuff.
Calling a HN comment “intellectual property” is like calling a table saw in your garage “capital”. There are specific regulatory contexts where it might be somewhat accurate, but it’s so different from the normal case that none of our normal intuitions about it apply.
For example, copyright makes it illegal to take an entire book and republish it with minor tweaks. But for something short like an HN comment this doesn’t apply; copyright always permits you to copy someone’s ideas, even when that requires using many of the same words.
I never advocated "stricter IP laws". I would however point out the contradiction between current IP laws being enforced against kids using BitTorrent while unenforced against billionaires and their AI ventures, despite them committing IP theft on a far grander scale.
> I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?
There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:
"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
"Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
> "Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
This is being screamed from the rooftops by nearly the entire creative community of artists, photographers, writers, and other people who do creative work as a job, or even for fun.
The difference between the 99% of individual creatives and the 1% is that the 1% has entire portfolios of IP - IP that they might not have even created themselves - as well as an army of lawyers to protect that IP.
> "Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
Artists are not primarily in the 1% though, it's not only patents that are IP theft.
Do the artists who are not in the 1% actually benefit from IP, or does it hinder them from building new art based on other art? It seems to me that IP only benefits the top players.
Can you give me an example of the situation you are picturing?
Simply because I can't see what you mean by artists being hindered by IP. Artists try to create original work, and derivative work from other IP is usually reinterpreted enough to fall under fair use. I can't picture a situation where artists would be hindered in their creations by IP owned by others.
Sampling is still done; I'm a hobbyist music producer and friends with many professionals. They have to clear the samples and pay royalties, and they get royalties from sampled tracks.
It's more cumbersome but fairer, and it hasn't stopped the practice at all. As a hobbyist I sample all the time, while my professional friends clear their samples before earning money on their tracks.
"Stop the laundering of responsibility/liability" - the risk that you can run someone over with a software controlled car and it's not a crime "because AI" whereas a human doing the same thing would be in jail. Image detection leading to false arrests, etc. It's harder to sue because the immediate party can say "it wasn't us, we bought this software product and it did the bad thing!"
I strongly feel that regulation needs to curb this, even if it leads to product managers going to jail for what their black box did.
> This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
They already do this[1]. Why should there be an exception carved out for AI type jobs?
------------------------------
[1] What do you think tariffs are? Show me a country without tariffs and I'll show you a broken economy with widespread starvation and misery.
"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
So politicians are supposed to create "non bullshit" jobs out of thin air?
The job you've done for decades is suddenly bullshit because some shit LLM is hallucinating nice sounding words?
They do create bullshit jobs in finance by propping up the system when it's about to collapse from the consequences of their own actions though.
Not that I believe they should allow the financial system to collapse without intervention, but the interventions during recent crises were made to save corporations that should have been extinguished, instead of the common people who were affected by the consequences.
Which I believe is what's lacking in the whole discussion. Politicians shouldn't be trying to maintain the labour status quo if/when AI changes the landscape, because that would be a distortion of reality. But there needs to be some off-ramp, and direct help for the people who will suffer from the change, without going through the bullshit of helping companies in the hope they eventually help people. As many on HN say, companies are not charities; if they can make an extra buck by fucking someone over, they will. The government is supposed to help people as a collective.
At this point if an LLM can do your job, it was already bullshit. But in the future when they can do non bullshit jobs, then you can go get another one just like every other person out of the billions who has had their job made obsolete by technology. It's not that hard.
If large swaths of people lose their jobs to AI, have no job prospects due to the presence of AI, and can't afford their next meal in the here and now, that is a recipe for civil unrest.
If... But most likely it will be like technology replacing all the many jobs it has replaced over the last 100 years and those people will move into other jobs. If it is different this time then it requires a different response, but that isn't needed until we know it actually is different.
In those past times of technological change, it was reasonably obvious where the puck was headed.
However, I feel like that has been changing over the past decade or two. I have met countless young people who have been willing and able to pick up a new skill to make a living. By and large, that has turned out to be either going into tech or going into gig work.
AI is threatening both of those. It is not obvious to me what comes after. Frankly, these days if someone younger comes to me asking for career advice, I honestly wouldn't know what to tell them.
You give way too much credit to the US electorate. Right now vast swaths of the country are worshipping a billionaire and support policies that are actively harming them because the politicians claim to hate the same people they hate and/or quote scripture.
Seeing the lives of people in red states who continue to struggle and still vote for politicians who pass policies that hurt them, I disagree.
How many farmers, suffering right now under the current tariff and immigration policies, are still professing support for Trump? The very people whom unions and higher minimum wages would help the most oppose them, because they support the very politicians who favor their employers getting rich over them.
If you take solace in “god will provide” as long as you give the church 10% of your income, you aren’t looking at things logically as long as the politicians can quote scripture.
> But in the future when they can do non bullshit jobs, then you can go get another one just like every other person out of the billions who has had their job made obsolete by technology. It's not that hard.
This was the argument made by the capitalists after they had jailed and murdered most of the people in the Luddite movement before there was employment regulation.
They ignored what the Luddites were protesting for and suggested it was about people who just didn't understand how the new industrial economy worked. Don't they know that they can get jobs elsewhere and we, as a society, can be more productive for it?
The problem is that this was tone deaf. There were no labor regulations yet and the Luddites were smashing looms as that form of violence was the only leverage they had to ask for: elimination of child labor, social support that wasn't just government workhouses (ie: indentured servitude), and labor laws that protected workers. These people weren't asking everyone to make cloth by hand forever because they liked making cloth by hand and thought it should stay that way.
In modern times, I think what many people are concerned about, with companies so eager to throw labor out into the streets the moment it stops being profitable, is that there is once again a lack of social supports in place to make sure those people's basic needs are met.
... and that's just one of the economic and social impacts of this technology.
It's even simpler than that, IMHO. Yes, there are always new jobs to replace the ones lost to automation. But no, those new jobs are not for you, and not for your children. Someone else will be doing them; you will be dealing with the fallout of having your life upended, suddenly facing deep poverty.
You can re-skill - but you'll be competing for starter positions and starter salary with people who're just entering the workforce, much younger than you, with no dependents or health issues.
The technology may benefit everyone in the long run, but in immediate terms, sudden shifts like these ruin people's lives and destroy their descendants' futures.
It's less about who is right and more about economic interests and lobbying power. There's a vocal minority that is just dead set against AI using all sorts of arguments related to religion, morality, fears about mass unemployment, all sorts of doom scenarios, etc. However, this is a minority with not a lot of lobbying power ultimately. And the louder they are and the less of this stuff actually materializes the easier it becomes to dismiss a lot of the arguments. Despite the loudness of the debate, the consensus is nowhere near as broad on this as it may seem to some.
And the quality of the debate remains very low as well. Most people barely understand the issues. And that includes many journalists that are still getting hung up on the whole "hallucinations can be funny" thing mostly. There are a lot of confused people spouting nonsense on this topic.
There are special interest groups with lobbying power. Media companies with intellectual properties, actors worried about being impersonated, etc. Those have some ability to lobby for changes. And then you have the wider public, which isn't that well informed and has sort of caught on to the notion that ChatGPT is now definitely a thing that is sometimes mildly useful.
And there are the AI companies that are definitely very well funded and have an enormous amount of lobbying power. They can move whole economies with their spending so they are getting relatively little push back from politicians. Political Washington and California run on obscene amounts of lobbying money. And the AI companies can provide a lot of that.
A vocal minority led to the French Revolution, the Bolshevik Revolution, the Nazi party and the modern climate change movement. Vocal minorities can be powerful.
>There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:
There are a ton of other points intersecting with regulation, either directly related to AI or made significantly more relevant by it.
Just off the top of my head:
- information processing: Is there private data AI should never be able to learn from? We restrict collection but it might be unclear whether model training counts as storage.
- related to the former: what kinds of dystopian practices should we ban? AI can probably create much deeper profiles, inferring information about users, than our already worrying tech, even without storing sensitive data. If it can use conversations to deduce I'm at risk of a shorter lifespan, can the owners communicate that data to insurance companies?
- healthcare/social damage: what are the long-term effects of people having an always-available yes-man, a substitute for social interaction, a cheating tool, etc.? Should some people be kept from access (minors, the mentally ill, whatever)? Should access, on the other hand, become a basic right if lacking it realistically makes a left-behind person unable to compete with those who have it?
- national security: is it a risk for a country's economy to be reliant on a service offered somewhere else? Worse even, is this reliance draining skills from the population that might not be easily recovered when needed?
- energy/resources impact: are we ready for an enormous increase in the usage of energy and/or certain goods? Should we limit usage until we can meet the demand without struggle?
- consumer protections: many companies just offer 'flat' usage, freely swapping the model behind the scenes for a worse one when needed, or even adapting user limits to their server load. Which of these are fair business practices?
- economy risks: how much risk can we accept of the economy becoming dependent on services that aren't yet profitable? Are there steps that need to be taken to protect us from the potential bust if the costs can't be kept up with?
- monopoly risks: we could end up with a single company being able to offer literally any intellectual work as a service. Whoever gets this tech might become the most powerful entity in the world. Should we address this impact through regulation before such an entity rises and becomes impossible to tame?
- enabling crime: can an army of AI hackers disrupt entire countries? How is this handled?
- impact on job access: if AIs can practically DDoS job application forms, how is this handled to keep access fair? The same goes for a million other places that are subject to AI spam.
Your point that "It's on politicians to help people adapt to a new economic reality" brings up a few more:
- Should we tax companies that use AI? If they produce the same output while employing fewer people, tax revenue suffers and the untaxed money does not make it back to the people. How do we compensate, and how do we remake taxation?
- How should we handle entire professions being put out to pasture at once? Lost employment is a general problem if it affects a large enough number of people.
- How should the push toward intellectual work be rethought if it becomes extremely cheap relative to manual work? Is the way we train our population in need of change?
You might have strong opinions on most of these issues, but there are clearly A LOT of important debates that aren't being addressed.
Your list of evidence-free vibe complaints perfectly exemplifies the reasons why regulations should be approached carefully with the advice of experts, or not at all.
Debates over public regulation should not have to start from evidence-backed conclusions; rather, the debates themselves are what push research and discussion in the first place.
Perhaps the conclusion on AI's impact on mental health will be "hey, multiple high-quality studies show that the impact is actually positive, let's allow it and in fact consider it as a potential treatment path". That's perfectly fine.
What is not fine is not considering the topic at all until it's too late for preventive action. We don't need to wait for a building burning before we consider whether we need fire extinguishers there.
My list is not made of complaints at all; it's just a few of the ways in which we suspect AI can be disruptive, which are then probably worth examining.
Healthcare/social damage: we already have peer-reviewed studies on the potentially negative impacts of LLMs on mental health: https://pmc.ncbi.nlm.nih.gov/articles/PMC10867692/ . We also have numerous stories of people committing suicide after "falling in love" with an LLM or being nudged toward it by one.
Energy/Resources: do I even have to provide evidence that LLMs waste enormous amounts of electricity, even leading to scarcity in some local markets, and even coal power plants being turned back on?
Those are just the ironclad ones, you can make very good data privacy and national security arguments quite easily as well.
> Energy/Resources: do I even have to provide evidence that LLMs waste enormous amounts of electricity, even leading to scarcity in some local markets, and even coal power plants being turned back on?
Yes, if you want to be taken seriously, then your claims about this should be based in evidence and contextualized amid the overall energy market.
Exactly. Energy/resources line is by far the most silly out of anti-AI arguments being regurgitated by people.
Electricity is fungible. Before decrying LLMs for using it, when they (probably) provide the world more net utility per watt than your own work output does (which segues into an actual problem of labor as a source of personal and social worth), contrast it with what we'd otherwise be doing with that same electricity: e.g. more sportsball streams in higher definitions, more cryptocurrency shams, more Juiceros and other borderline-fraudulent startups in physical space (cheap energy means cheaper manufacturing, which means materials become more like bits, and it's easier to pull the same crap in the real world that companies now pull in the virtual one).
Point being, if you want to judge use of electricity on AI, judge it in context of the whole human condition - of everything else we'd otherwise be using it on.
> "Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
This is a really good point. If a country tries to "protect" jobs by blocking AI, it only puts itself at a disadvantage. Other countries that don't pass those restrictions will produce goods and services more efficiently and at lower cost, and they’ll outcompete you anyway. So even with regulations the jobs aren't actually saved.
The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.
This presupposes the existence of said jobs, which is a whopper of an assumption that conveniently shifts blame onto the most vulnerable. Of course, that's probably the point.
This will work even worse than "if everyone goes to college, good jobs will appear for everyone."
The good (or bad) thing about humans is they always want more than what they have. AI seems like a nice tool that may solve some problems for people but, in the very near future, customers will demand more than what AI can do, and companies will need to hire people who can deliver more, until those jobs, eventually like all jobs, are automated away. We see this happen every 50 years or so in society. Just have a conversation with people your grandparents' age and you'll see they've gone through the same thing several times.
The last 50 years in the USA (and elsewhere) have been an absolute disaster for labor: the economy as a whole grew, the capital share grew even more, and the labor share shrank (unless you use a deflator rigged to show the opposite, but a rigged deflator can't hide the ratios). This contrasts to the 50 years prior, where we largely grew and fell together, proving that K shaped economies are a policy choice, not an inevitability.
A Roosevelt economy can still work for most people when the "job creators" stop creating jobs. A Reagan economy cannot.
> The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.
But why do I have to? Why should your life be dictated by the market and corporations that are pushing these changes? Why do I have to be afraid that my livelihood is at risk because I don't want to adapt to the ever faster changing market? The goal of automation and AI should be to reduce or even eliminate the need for us to work, and not the further reduction of people to their economic value.
Because the world, sadly, doesn't revolve around just one individual. We are a society where other individuals have different goals and needs, and when those are met by the development of a new product offering, it shifts how people act and where they spend their money. If enough people shift, it affects jobs.
Yes, but again, the goal of automation should be to reduce the need for people to have jobs to secure their livelihood and enable a dignified life. However, what we are seeing in the Western Hemisphere is that per capita productivity is rising while the middle class erodes and capital accumulates with a select few in obscene amounts. 'Upskilling' does not happen out of personal motivation, but rather to meet the demands of the market so that one does not live in poverty. The idea of 'upskilling' to serve the market is also absurd because, in times of ever-accelerating technological development, there is no guarantee that the skills you learn today will still be relevant tomorrow. Yesterday it was "learn to code", but now many people who followed this mantra find themselves in precarious situations because they cannot find a job or are forced into the gig economy. So what do you do with people who couldn't foresee the future, or who are simply too old for the market?
Because you enjoy eating? Whatever you think society should be, the fact is we live in one where you have to exchange labour for money. What ought to be and what is, are unrelated to each other.
It's interesting how we feel this way about white collar jobs, but when a coal mine closes nobody cares.
This may be true; however, upskilling should then not be treated as the way to solve economic issues, as this line of thinking brings us no closer to solving the is-ought problem. I think most people can accept that a future where we don't have to exchange labour for money is a desirable one, right?
> If a country tries to "protect" jobs by blocking AI, it only puts itself at a disadvantage
Regulating AI doesn't mean blocking it. The EU AI Act regulates AI without blocking it, just imposing restrictions on data usage and decision making (if it's making life or death decisions, you have to be able to reliably explain how and why it makes those decisions, and it needs to be deterministic - no UnitedHealthcare bullshit hiding behind an "algorithm" refusing healthcare)
I think one of the key issues is that most of these discussions are happening at too high of an abstraction level. Could you give some specific examples of AI regulations that you think would be good? If we actually start elevating and refining key talking points that define the direction in which we want things to go, they will actually have a chance to spread.
Speaking of IP, I'd like to see some major copyright reform. Maybe bring down the duration to the original 14 years, and expand fair use. When copyright lasts so long, one of the key components for cultural evolution and iteration is severely hampered and slowed down. The rate at which culture evolves is going to continue accelerating, and we need our laws to catch up and adapt.
> Could you give some specific examples of AI regulations that you think would be good?
AI companies need to be held liable for the outputs of their models. Giving bad medical advice, buggy code etc should be something they can be sued for.
90% of the time I'm pro anything that causes a problem for the big corporations, but buggy code? C'mon.
It's a pile of numbers. People need to take some responsibility for the extent to which they act on its outputs. Suing OpenAI for bugs in the code is like suing a palm reader for a wrong prediction. You knew what you were getting into when you initiated the relationship.
Commercial properties often have enough roof area to meet most of their daytime demand on-site. And industrial consumption in Western countries has been flat or declining for years, so "stable, high-voltage generation" may face less demand than assumed.
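Back-of-envelope, with numbers that are purely illustrative assumptions:

```python
# Rough check: can a commercial roof cover its building's daytime load?
# Every figure below is an illustrative assumption, not measured data.
roof_area_m2 = 10_000         # mid-size warehouse or big-box store
usable_fraction = 0.6         # minus HVAC units, skylights, setbacks
panel_efficiency = 0.20       # typical crystalline silicon
peak_irradiance_kw_m2 = 1.0   # clear-sky noon, standard test condition

peak_output_kw = roof_area_m2 * usable_fraction * panel_efficiency * peak_irradiance_kw_m2
daytime_load_kw = 800         # assumed lighting + HVAC + equipment load

print(f"peak PV output: ~{peak_output_kw:.0f} kW")   # ~1200 kW
print(f"covers daytime load: {peak_output_kw >= daytime_load_kw}")
```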
> Typical savings: 60-90% on most requests, since Gemini Flash is often free/cheapest, but you still get Claude or GPT-4 when needed.
This claim seems overstated. Accurately routing arbitrary prompts to the cheapest viable model is a hard problem. If it were reliably solvable, it would fundamentally disrupt the pricing models of OpenAI and Anthropic. In practice, you'd either sacrifice quality on edge cases or end up re-running failed requests on pricier models anyway, eating into those "savings".
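To make the problem concrete, here's roughly what such a router has to look like (a minimal sketch with placeholder model names and a deliberately naive quality check, not any vendor's actual routing logic):

```python
# Naive cost-aware router: try the cheapest model first, escalate on failure.
# Model names and the quality check below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible gateway in front of all tiers

TIERS = ["gemini-flash", "gpt-4o-mini", "claude-sonnet"]  # cheap -> expensive

def good_enough(answer: str) -> bool:
    # The crux: judging quality reliably is roughly as hard as the
    # original task. Any cheap heuristic here will miss edge cases.
    return len(answer) > 50

def route(prompt: str) -> str:
    answer = ""
    for model in TIERS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content or ""
        if good_enough(answer):
            return answer
        # Escalating means paying for the failed cheap call AND the
        # pricier retry, which is exactly what erodes the "savings".
    return answer  # best effort from the most expensive tier
```

Either the quality check is cheap and wrong on edge cases, or it's expensive enough that it eats the margin it was supposed to create.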
I genuinely wonder what the use cases are where the required accuracy is so low (or, I guess, the prompts are so strong) that you don't need to rigorously use evals to prevent regressions with the model that works best, let alone change models on the fly based on what's cheaper.
Yes, and in addition, for some reason that use case is also not a fit for a cheap open-source model like Qwen or Kimi, but must be run on the cheapest model of the Big 3.