Hacker News | jaennaet's comments

Reality would be much funnier if I didn't have to live in it

LLMs really can't be improved all that much beyond what we currently have, because they're fundamentally limited by their architecture, which is what ultimately leads to this sort of behaviour.

Unfortunately the AI bubble seems to be predicated on just improving LLMs and hoping really hard that they'll magically turn into even weakly general AIs (or full AGIs, as the worst Kool-Aid drinkers claim they will), so everybody is throwing absolutely bonkers amounts of money at incremental improvements to existing architectures, instead of doing the hard thing and trying to come up with better ones.

I doubt static networks like LLMs (or practically all other neural networks that are currently in use) will ever be candidates for general AI. All they can do is react to external input, they don't have any sort of an "inner life" outside of that, ie. the network isn't active except when you throw input at it. They literally can't even learn, and (re)training them takes ridiculous amounts of money and compute.

I'd wager that for producing an actual AGI, spiking neural networks or something similar would be what you'd want to lean into, maybe with some kind of neuroplasticity-like mechanism. Spiking networks already exist and they can do some pretty cool stuff, but nowhere near what LLMs can do right now (even if they do do it kinda badly). Currently they're harder to train than traditional static NNs because spikes aren't differentiable, so standard backpropagation doesn't apply directly (surrogate-gradient tricks exist, but they're workarounds), and they're still relatively new, so there are a lot of open questions about eg. the uses and benefits of different neuron models and such.
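To make the "discrete spikes" point concrete, here's a toy leaky integrate-and-fire (LIF) neuron, the kind of unit spiking networks are built from. This is a minimal illustrative sketch with made-up parameter values, not any particular SNN library's API; note how the output is a train of discrete spike events rather than a smooth activation, which is exactly what breaks vanilla backpropagation:

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron.

    The membrane potential decays toward v_rest and is driven by the
    input current; crossing v_thresh emits a spike and resets the
    potential. The spike is a discrete, non-differentiable event.
    Returns the membrane-potential trace and the spike times.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Euler step of: dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# A constant supra-threshold input produces a regular spike train.
trace, spikes = lif_neuron(np.full(200, 1.5))
print(f"{len(spikes)} spikes, first at t={spikes[0]}")
```

The hard-reset step function at the threshold is where surrogate-gradient methods substitute a smooth approximation during training.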


I think there is something to be said about the value of bad information. For example, pre-AI, how might you have come to the correct answer for something? You might dig into the underlying documentation or whatever "primary literature" exists for that thing and get the correct answer.

However, that was never very many people. Only the smart ones. Many preferred to shout into the void at reddit/stackoverflow/quora/yahoo answers/forums/irc/whatever, seeking an "easy" answer that was probably not as correct as what you'd have gotten by going straight to the source of truth.

There's a ton of money in controlling that pipeline and selling expensive monthly subscriptions to people to use it. Even better if you can shoehorn yourself into the workplace and get employers to pay for it at a premium per user. Get people to rely on it until they have no clue how to deal with anything without it.

It doesn't matter if it's any good. That isn't even the point. It just has to be the first thing people reach for and therefore available to every consumer and worker, a mandatory subscription most people now feel obliged to pay for.

This is why these companies are worth billions. Not for the utility, but for the money to be made off of people who don't know any better.


But the thing is that they aren't even making money; eg. OpenAI lost $11 billion in one quarter. Big LLMs are just so fantastically expensive to train and operate, and they ultimately aren't as useful to eg. businesses as they've been evangelised to be, so demand just hasn't picked up. On top of that, the subscription plans are priced so low that most if not all "LLM operators" (OpenAI, Anthropic, etc.) apparently lose money on even the most expensive ones. They'd lose all their customers if the plans actually cost as much as they should.

Apropos of that, I wonder if OpenAI et al. are losing money on API plans too, or if it's just the subscriptions.

Source for the OpenAI loss figure: https://www.theregister.com/2025/10/29/microsoft_earnings_q1...

Source for OpenAI losing money on their $200/mo sub: https://fortune.com/2025/01/07/sam-altman-openai-chatgpt-pro...


To lose 11 billion means you have successfully convinced some people to give you 11 billion to lose. And the money wasn't really lost, either; it was spent. It was used for things: making people richer and buying hardware, which also makes people richer.

Based on your ad hominem of a reply I suppose it's safe to assume you don't have the experience, then


What is your proposed alternative, though?


Silos. You can create your own and say anything you want (only constrained by the law). Everyone else can join it, or blacklist it, for themselves. Nobody gets to shut off someone else's silo, they can only ignore it for themselves. Nobody gets to decide what other people choose to read or write.

For the case of Reddit, a silo maps nicely onto a subreddit. Within any subreddit the moderator can have full control, they can moderate it exactly as they choose. If you don't like it, create your own where you will have free rein.


What about content that is illegal in the country that your "silo" is hosted in, like, say, CSAM (but you can really really substitute anything else illegal there, like eg. planning terrorist attacks)? If a "silo" is CSAM-friendly or its express purpose is posting it and its moderators don't want to remove illegal content, what then?


I hope there are no legal jurisdictions that are actually CSAM-friendly. But this isn't a unique problem; there are many situations in the world where legal jurisdictions are muddy, for example when over-the-air television signals can be received across country borders. Just let the law sort it out. Admittedly it's more difficult for companies that operate in multiple countries, but they're already managing to do it today. The main hope is that companies will not add any additional censorship themselves, and that an attitude of free exchange and tolerance would be the default position for more of us than it is today.


That already exists, it’s called a website.


That's a good point. But for all practical purposes, Facebook, Reddit, and other major social networks represent what the web means to an average person. Many of them never even open a browser. So those major social networks should be treated more like a public square, for the discoverability that provides, if nothing else. And in the context of sites being delisted and apps being banned (Google, Apple, etc), it would be nice for major social networks to be committed to free speech on their platforms.


If there's something I'd expect Google to use a strong consistency model for, it'd be a credit system like that.

Well, not that they don't do stupid things all the time, but having credits live on a system with a weak consistency model would be silly.
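To illustrate why, here's a toy strongly consistent credit store. This is a hypothetical sketch of the property at stake, not anything to do with Google's actual implementation: the check and the decrement happen as one atomic step, so concurrent spends can never overdraw the balance, whereas on a weakly consistent store two replicas could both pass the check and double-spend:

```python
import threading

class CreditLedger:
    """Toy strongly consistent credit store (hypothetical sketch).

    Every deduction is an atomic check-and-decrement, so concurrent
    spends can never push the balance below zero.
    """

    def __init__(self, balance):
        self._balance = balance
        self._lock = threading.Lock()

    def spend(self, amount):
        # The check and the write happen under one lock, i.e. one
        # "transaction". With a weakly consistent store, two replicas
        # could both see balance >= amount and double-spend.
        with self._lock:
            if self._balance >= amount:
                self._balance -= amount
                return True
            return False

ledger = CreditLedger(100)
results = []
threads = [threading.Thread(target=lambda: results.append(ledger.spend(60)))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
# Only one of the two 60-credit spends can succeed.
print(sorted(results))  # -> [False, True]
```

In a real distributed system the lock would be a transaction in a strongly consistent database, but the invariant being bought is the same.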


So yes, it would have an effect; even with your imaginary numbers that'd be a 3x drawdown


It might bring in the schedules, but since it probably wouldn't cause an actual hole, it's really more about long-term fab build plans than anything else.


> since it probably wouldn't cause an actual hole, it's really more about long-term fab build plans than anything else

Equities are forward looking. TSMC's valuation doesn't make sense if it doesn't have a backlog to grow into.


Exactly. A drop in the expected growth would absolutely cause a drop in valuation as investors reassess their holdings.


What are you doing on Hacker News? You should be working on "AGI".

This may come as a surprise to you, but we humans need entertainment.


Also speaks to a lack of understanding on the author's part; people who truly understand a subject are generally much more adept at explaining it in simpler terms, ie. without adding complexity beyond the subject's essential complexity.


How often did your IDE or editor refuse to do something it was generally capable of because it deemed the operation too frivolous in that context?


This'd be a valid analogy if all compiled / interpreted languages were like INTERCAL and eg. refused to compile / execute programs that were insufficiently polite, or if the runtime wouldn't print out strings that it "felt" were too silly.

Now there's an idea for an esoteric language.
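Purely for fun, a toy sketch of what such a politeness-enforcing interpreter could look like. Everything here is invented; the one-fifth and one-third thresholds loosely echo INTERCAL's real PLEASE rule, where a program with too few polite statements is rejected as rude and one with too many as grovelling:

```python
import re

def run_polite(source):
    """Toy INTERCAL-style politeness gate (invented sketch).

    Roughly 1/5 to 1/3 of statements must say PLEASE before any
    line is executed: fewer is rude, more is grovelling.
    """
    lines = [l.strip() for l in source.splitlines() if l.strip()]
    polite = sum(1 for l in lines if l.startswith("PLEASE"))
    ratio = polite / len(lines)
    if ratio < 1 / 5:
        raise SyntaxError("program is insufficiently polite")
    if ratio > 1 / 3:
        raise SyntaxError("program is excessively polite")
    # Only a single PRINT statement exists in this toy language.
    for line in lines:
        cmd = re.sub(r"^PLEASE\s+", "", line)
        if cmd.startswith("PRINT "):
            print(cmd[len("PRINT "):])

# One PLEASE out of three statements: exactly polite enough.
run_polite("""
PRINT hello
PLEASE PRINT world
PRINT goodbye
""")
```

A program with no PLEASE at all would be rejected at "compile time" with `SyntaxError: program is insufficiently polite`, before any line runs.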


It depends on which vantage point you look at it from. Imagine the person directing the company, say Bill Gates, instructing that the code should be bug-free, while being very opinionated about what counts as a bug at Microsoft.

