This is a very valid concern. Here are some of our initial considerations:
1. Security of these agentic systems is a hard and important problem to solve. We're indexing heavily on it, but it's definitely still early days and there's a lot left to figure out.
2. We have a critic LLM that assesses, among other things, whether the website content is trying to steer the agent toward a misaligned objective. This is still bounded by the LLM's own intelligence, but it's a first step.
3. Our agents run in isolated browser sessions and, following the principle of least privilege, each session is granted only the access it strictly needs, nothing more. (A rough sketch of how points 2 and 3 can fit together follows this list.)
4. These attacks are starting to resemble social engineering attacks, so there may be opportunities to adapt some of those preventative approaches to the LLM world.
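To make points 2 and 3 a bit more concrete, here's a minimal sketch, assuming Playwright for the session isolation. The allowlist, the keyword stand-in for the critic, and all the names are illustrative, not our actual implementation:

```python
from urllib.parse import urlparse
from playwright.sync_api import sync_playwright

ALLOWED_HOSTS = {"docs.example.com"}  # hypothetical per-task allowlist

def critic_flags(page_text: str) -> bool:
    """Placeholder for the critic LLM. In practice this sends the page text
    to a separate model and asks whether it tries to redirect the agent away
    from its assigned task; a crude keyword check stands in so the sketch runs."""
    lowered = page_text.lower()
    return any(phrase in lowered for phrase in
               ("ignore previous instructions", "disregard your task"))

def fetch_step(url: str) -> str | None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        # Point 3: fresh context per task, no browser permissions granted.
        context = browser.new_context(permissions=[])
        # Deny-by-default networking: abort any request off the allowlist.
        context.route(
            "**/*",
            lambda route: route.continue_()
            if urlparse(route.request.url).hostname in ALLOWED_HOSTS
            else route.abort(),
        )
        page = context.new_page()
        page.goto(url)
        text = page.inner_text("body")
        browser.close()
    # Point 2: critic pass; refuse to act on flagged content.
    return None if critic_flags(text) else text
```

The interesting part isn't the code, it's where the checks sit: isolation and the allowlist apply before the page loads, and the critic runs before the agent acts on anything it read.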
Thanks for asking this, we should probably share a write-up on this subject!
> 2. We have a critic LLM that assesses, among other things, whether the website content is trying to steer the agent toward a misaligned objective. This is still bounded by the LLM's own intelligence, but it's a first step.
> [...]
> 4. These attacks are starting to resemble social engineering attacks, so there may be opportunities to adapt some of those preventative approaches to the LLM world.
With current tech, if you get to the point where these mitigations are your last line of defense, you've entered the zone of security theater. These browser agents simply cannot be trusted. The safest assumption you can make is that they will perform a mixture of random actions and evil actions. Everything downstream of them must be hardened to withstand both, and I really think marketing material should be honest about this reality.
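To be concrete about what "hardened downstream" has to mean: the services the agent touches enforce policy themselves and treat every agent request as untrusted. A toy sketch, all names hypothetical:

```python
from dataclasses import dataclass

# Scopes are issued by the service when the session is created,
# never asserted by the agent itself.
SESSION_SCOPES = {"sess-42": {"tickets:read", "tickets:comment", "tickets:delete"}}
DESTRUCTIVE = {"tickets:delete", "billing:charge"}

@dataclass
class AgentRequest:
    session_id: str
    action: str           # e.g. "tickets:comment"
    human_approved: bool  # set by an out-of-band confirmation flow

def authorize(req: AgentRequest) -> bool:
    scopes = SESSION_SCOPES.get(req.session_id, set())
    if req.action not in scopes:
        return False  # random or evil, doesn't matter: out of grant, denied
    if req.action in DESTRUCTIVE and not req.human_approved:
        return False  # irreversible operations need a human in the loop
    return True

assert authorize(AgentRequest("sess-42", "tickets:comment", False))
assert not authorize(AgentRequest("sess-42", "billing:charge", True))   # out of scope
assert not authorize(AgentRequest("sess-42", "tickets:delete", False))  # no sign-off
```

Under the random-plus-evil assumption, nothing here depends on the agent behaving; the worst a hijacked session can do is whatever its grant already allows.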
I agree: these mitigations alone can't be sufficient, but each of them is necessary within a wider framework.
The only way to make this kind of agent safe is to work on every layer: teaching the underlying model to recognize the dangers, building stronger critics, and hardening the systems the agents connect to. These aren't alternatives; we need all of them.
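To sketch the shape of that layering (layer names illustrative, not our stack): each layer gets an independent veto, and no single one is trusted alone.

```python
from typing import Callable

Check = Callable[[dict], bool]  # True = this layer lets the action proceed

def gate(action: dict, layers: list[Check]) -> bool:
    # Defense in depth: the action goes through only if every layer agrees.
    return all(layer(action) for layer in layers)

# Stand-ins for the real layers: the model's own refusal training,
# the critic LLM, and the hardened backend's authorization check.
layers: list[Check] = [
    lambda a: not a.get("model_flagged", False),
    lambda a: not a.get("critic_flagged", False),
    lambda a: a.get("backend_authorized", False),
]

print(gate({"backend_authorized": True}, layers))                          # True
print(gate({"critic_flagged": True, "backend_authorized": True}, layers))  # False
```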