
> ...oh and the app still works, there's no new features, and just a few new bugs.

There are many apps out there whose developers religiously worship high quality and over-engineer a single app with fewer than 10 users, or, if they are lucky, a little over 1,000 users.

…and after all of that, not a single dollar was made. Might as well have donated it to Anthropic.


React patches one vulnerability and two more are revealed, just like a Hydra.

At this point you might as well deprecate RSC as it is clearly a contraption for someone trying to justify a promotion at Meta.

Maybe they are going to silently remove “Built RSC at Meta!” from their LinkedIn bios after this. So what other vulnerabilities are going to be revealed in React after this one?


Meta don’t use RSC: https://bsky.app/profile/en-js.bsky.social/post/3lmvwmr5rfs2...

> We are not using RSC at Meta yet, bc of limits of our packaging infra (it’s great at different things) and because Relay+GraphQL gives us many of the same benefits as RSCs. But we are fans and users of server driven UI and incrementally working toward RSC.

(as of April 2025)


Uses Electron.

> We use deterministic algorithms because we need control and predictability. A hallucination in a portfolio allocation is a non-starter for us.

At least with this project, there is a sense of reason in knowing when you need to use LLMs, instead of throwing them everywhere they are not needed.

> My question for this crowd: Are you fine with AI running your money if the UX is good? Or do you genuinely prefer a traditional, transparent algorithmic approach? What would actually get you to pull the trigger on a trade?

IANAL but I don't think you would want a liability risk if you are unable to explain why the AI malfunctioned when managing someone else's money. Given that LLMs really cannot be held to account, you likely need lots of disclaimers on your product so that the users know what they are interacting with.

On the other hand, if you can transparently explain why the AI is not functioning as expected when something goes wrong, that is more trustworthy than: 'We don't know where your money went because the AI hallucinated the result by misreading a single digit in the data'.


Looking forward to the post-mortem

I'd be really surprised if Apple were forthcoming. Apple famously holds its cards close to the chest, so I don't expect anything from them. Happy to be wrong though.

> Let's just say the AI bubble started in 2023. We still have about 3 years, more or less, until the AI bubble pops.

A minimum of 3 years, and a hard maximum of 6 years from now.

We'll see lots of so-called AI companies fold, and there will be a select few winners that stay on.

So I'd put my timeline at around 2029 to 2031 for a significant correction turning into a crash.


That is like telling people who experience natural disasters to wait until one happens and then ask themselves "How much damage will it bring?", only for someone else to tell them that it cost them everything.

Anyone who has lived through the dotcom bubble knows that this AI mania is an obvious bubble, and the whole point is that you have to prepare before it eventually pops, not after someone tells you it is too late once it has popped.


You don't prepare by making predictions about when it will pop; you prepare by hedging, etc.

Just as those who live in earthquake-prone areas build earthquake-resistant buildings.


> you prepare by hedging etc.

That has to be done before the eventual collapse of the bubble, which still proves my whole point:

>> the whole point is you have to prepare before it eventually pops.


Knowing whether it is or isn't a bubble isn't relevant to the decision to prepare. You prepare for both possibilities!

TLDR:

Yes.


Sorry to pop your bubble...

...it is a bubble and we all know it.

(I know you have RSUs / shares / golden handcuffs waiting to vest in the next 1-4 years, which is why you want the bubble to continue to get bigger.)

But one certainty is that the crash will be spectacular.


I presume, given your confidence in a crash, you have a massive short? Hedged? Something clever to capture the downside you're so confident will come?

I would love to see your portfolio, if you wouldn't mind showing the class. Let us see what your allocation reveals about what you really think...


All of the above, including the regular rotation of realized gains into early-stage AI startups.

Rinse and repeat.


Before someone replies with a fallacious comparison along the lines of: "But humans also do 'bullshitting', and humans also 'hallucinate' just like LLMs do".

Except that LLMs have no mechanism for transparent reasoning, have no idea what they don't know, and will go to great lengths to generate fake citations to convince you that they are correct.


> Except that LLMs have no mechanism for transparent reasoning

Humans have transparent reasoning?

> and also have no idea about what they don't know

So why can they respond saying they don't know things?


> So why can they respond saying they don't know things?

Because sometimes the tokens for "I don't know" are the most likely, given the prior context + the RLHF. LLMs can absolutely respond that they don't know something or that they were incorrect about something, but I've only seen that happen after first pointing out that they're wrong, which changes the context window to one where such an admission of fault becomes probable.
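
A minimal sketch of that point, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (the prompts and model choice are illustrative, not from this thread): the model assigns a probability to "I don't know" as a continuation just like any other text, and that probability shifts with the preceding context.

    # Sketch: score how likely a causal LM finds a given continuation after a
    # given context. Model and prompt choices here are illustrative assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def continuation_logprob(context: str, continuation: str) -> float:
        """Total log-probability the model assigns to `continuation` after `context`."""
        ctx_ids = tokenizer(context, return_tensors="pt").input_ids
        cont_ids = tokenizer(continuation, add_special_tokens=False,
                             return_tensors="pt").input_ids
        input_ids = torch.cat([ctx_ids, cont_ids], dim=1)
        with torch.no_grad():
            logits = model(input_ids).logits
        # Positions ctx_len-1 .. L-2 are the ones that predict the continuation tokens.
        log_probs = torch.log_softmax(logits[0, ctx_ids.shape[1] - 1 : -1], dim=-1)
        cont = cont_ids[0]
        return log_probs[torch.arange(cont.shape[0]), cont].sum().item()

    # The same "I don't know." continuation scores differently depending on
    # whether the prior context already challenges the model's answer.
    plain = "Q: What does the flag --zxq do in gcc?\nA:"
    challenged = ("Q: What does the flag --zxq do in gcc?\nA: It enables fast math.\n"
                  "Q: That flag does not exist. Are you sure?\nA:")
    print(continuation_logprob(plain, " I don't know."))
    print(continuation_logprob(challenged, " I don't know."))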


I've actually had ChatGPT admit it was wrong by simply asking a question ("how is X achieved with what you described for Y?"). It responded with "Oh, it's a great question which highlights how I was wrong: this is what really happens...". But still, it couldn't get there without me understanding the underlying truth (it was about key exchange in a particular protocol I knew little about, though I know about secure messaging in general), and it would easily confuse less experienced engineers with a fully confident-sounding explanation.

For things I don't understand deeply, I can only check whether it sounds plausible and realistic, but I can't have full trust.

The "language" it uses when it's wrong is still just an extension of the token-completion it does (because that's what text contains in many of the online discussions etc).

