
Can somebody explain this security problem to me, please?

How is there not an actual deterministic, traditionally programmed layer in between the LLM and whatever it wants to do? That layer would show you exactly what changes it is going to apply and ask you for confirmation.
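A minimal sketch of what such a layer could look like (all names and the action schema here are hypothetical, just to illustrate the idea): the model's output is treated as structured data, validated against a whitelist, shown to the user verbatim, and executed only on explicit approval.

```python
import json

# Hypothetical whitelist: the only actions the layer will ever consider.
ALLOWED_ACTIONS = {"write_file", "delete_file"}

def parse_proposal(llm_output: str) -> dict:
    """Treat the model's output as data, not instructions.

    Raises if the output isn't valid JSON or names an unknown action.
    """
    action = json.loads(llm_output)
    if action.get("name") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action.get('name')!r}")
    return action

def confirm_and_run(action: dict, ask, run) -> bool:
    """Show the exact proposed change, then execute only if approved.

    `ask` and `run` are injected so the gate itself stays deterministic
    and testable (e.g. `ask` could be a terminal prompt in practice).
    """
    summary = f"{action['name']}({action.get('args')})"
    if ask(f"Apply this change? {summary}"):
        run(action)
        return True
    return False
```

The key design choice is that nothing the model emits reaches `run` without passing through `parse_proposal` and an explicit yes from the user; free-form text simply fails to parse.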

What is the actual problem here?



As soon as you send text to a text completion API, local or remote, and it returns some text completion that some code parses, finds commands and runs them, all bets are off.

All the semantics around "stochastic (parrot)", "non-deterministic", etc. try to convey this. But of course some people will latch on to the semantics and triumphantly "win" the argument by misunderstanding the point entirely.

Automation trades off generality. General automation is an oxymoron. But yeah, by all means, plug a text generator into your hands-off workflow and pray. Why not? I wouldn't touch such a contraption with a 10-foot pole.


How are you going to present this information to users? I mean average users, not programmers.

LLM: I'm going to call the click event on: (spewing out a bunch of raw DOM).

Not like this, right?

If you can design an 'actual deterministic traditionally programmed layer' that presents what's actually happening at the lower level in a user-friendly way, and make it work for arbitrary websites, you'll get a Turing Award. Actually, a Turing Award is underselling your achievement. You'll be remembered as the person who invented (not even 'reinvented') the web.



