Hacker Newsnew | past | comments | ask | show | jobs | submit | Xelynega's commentslogin

That's not how namespacing works though, is it?

Getting UUID 'A' from app 'X' is easily distinguishable from UUID 'A' from app 'Y'.


The point of the first U in UUID, universal, is that you don't need to use namespacing.


Universal mean unique that uid wouldn't be used anyone else in any point in history or just universal available in one app????

because you just overreach at this point, if you can develop a better one. be my guest


You're glossing over the fact that they assumed youtube would want to assign a UUID to each pixel in a 4k@60fps video as the use case that this would fail for...


I don't think they're discounting that distrust can be legitimate, they're questioning whether it's useful to distrust somebody when it's not your job to micromanage them or they're providing adequate output.


Going a step further, I live in a reality where you can train most people against phishing attacks like that.

How accurate is the comparison if LLMs can't recover from phishing attacks like that and become more resilient?


I'm confused, you said "most".

If anything that to me strengthens the equivalence.

Do you think we will ever be able to stamp out phishing entirely, as long as humans can be tricked into following untrusted instructions by mistake? Is that not an eerily similar problem to the one we're discussing with LLMs?

Edit: rereading, I may have misinterpreted your point - are you agreeing and pointing out that actually LLMs may be worse than people in that regard?

I do think just as with humans we can keep trying to figure out how to train them better, and I also wouldn't be surprised if we end up with a similarly long tail


Are you not worried that anthropomorphizing them will lead to misinterpreting the failure modes by attributing them to human characteristics, when the failures might not be caused in the same way at all?

Why anthropomorphize if not to dismiss the actual reasons? If the reasons have explanations that can be tied to reality why do we need the fiction?


> Are you not worried that anthropomorphizing them will lead to misinterpreting the failure modes by attributing them to human characteristics, when the failures might not be caused in the same way at all?

On the other hand, maybe techniques we use to protect against phishing can indeed be helpful against prompt injection. Things like tagging untrusted sources and adding instructions accordingly (along the lines of, "this email is from an untrusted source, be careful"), limiting privileges (perhaps in response to said "instructions"), etc. Why should we treat an LLM differently from an employee in that way?

I remember an HN comment about project management, that software engineering is creating technical systems to solve problems with constraints, while project management is creating people systems to solve problems with constraints. I found it an insightful metaphor and feel like this situation is somewhat similar.

https://news.ycombinator.com/item?id=40002598


Because most people talking about LLMs don't understand how they work so can only function in analogy space. It adds a veneer of intellectualism to what is basically superstition.


We all routinely talk about things we don't fully understand. We have to. That's life.

Whatever flawed analogy you're using, it can be more or less wrong though. My claim is that, to a first approximation, LLMs behave more like people than like regular software, therefore anthropomorphising them gives you better high-level intuition than stubbornly refusing to.


> But I would be pretty irritated if the government stepped in and mandated they make my searches public and linkable to me.

Who is calling for this? Are you perhaps taking an absolutist view where "not destroying evidence" is the same as "mandated they make my searches public and linkable to me"? That's quite ridiculous.


Discovery routinely leaks. Handing over every chat from every user to opposing council has both human, technical, and incentive issues that make it far more likely that something I told ChatGPT with an understanding of its privacy limitations will appear in a torrent.


I don't understand your logic. Should security reports never be published that say "hash the password before storing it in the DB". Boring research is boring most of the time, that doesn't make it unimportant, no?


No, but it's not the database's fault if you don't hash your password. Same here, it's human error, not "MCP vulnerability". It's not that GitHub MCP needs fixing, but rather how you use it. That's the entire point of my reasoning for this "exploit."


One of the things I'm not looking forward to with people "just throwing it through an LLM for a final pass" is the loss of individual voice.

Everything is starting to sound the same, and it's becoming monotonous


Yea, this sounds like "Microsoft teams no longer supporting video on Linux and old versions of mac/windows" more than anything


Yep, joining Teams meetings from a browser on Linux is a flaky experience at best (despite Meet and Zoom working fine.) I'll happily send back a Google Meet invite to anyone that invites me to a Teams meeting.


Sounds like an good reason to turn down invites with an Teams link


How is compliance as written impossible?


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: