Hi, I'm the post writer. I have a habit of writing like a textbook author, but the things that jumped out at me while I was working on this with Oso were:
1. Least privilege can address a lot of these issues. We all know that, but in practice we don't really apply it because it can be a pain.
2. These applications are interesting because they can interpret meaning instead of rigidly following instructions, but that makes them prone to misunderstanding and manipulation. That breaks a lot of our assumptions about how software responds to input.
3. It's helpful to think of these applications in terms of impersonation. The user's rights should be the upper bound of the LLM's permissions when it acts on their behalf.
4. Ideally, we'd also constrain permissions according to the task being performed, but that's trickier. (Rough sketch of 3 and 4 right below.)
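To make 3 and 4 concrete, here's a rough sketch of the idea (toy names I made up for illustration, not the article's code or Oso's API): the agent's effective permissions are the intersection of what the user may do and what the task needs.

    # Rough sketch; "AgentContext" and the permission strings are
    # invented names, not from any particular library.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AgentContext:
        user_permissions: frozenset[str]  # the impersonation ceiling (point 3)
        task_permissions: frozenset[str]  # what this task should need (point 4)

        def effective_permissions(self) -> frozenset[str]:
            # The agent can never exceed the user's rights, and the
            # task scope narrows them further.
            return self.user_permissions & self.task_permissions

        def is_allowed(self, permission: str) -> bool:
            return permission in self.effective_permissions()

    # A billing question: even if a prompt injection asks the agent
    # to delete users, the check fails, since neither set grants it.
    ctx = AgentContext(
        user_permissions=frozenset({"read:orders", "read:invoices"}),
        task_permissions=frozenset({"read:invoices"}),
    )
    assert ctx.is_allowed("read:invoices")
    assert not ctx.is_allowed("delete:users")

The point of the intersection is that a manipulated prompt can't escalate past either set: the user's rights cap everything, and the task scope keeps a legitimate user's agent from wandering.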
The article goes into all that in exhaustive (some might say tedious) detail. It was a difficult write because this space moves so quickly and has so much hype, but it's been a good exercise to try to sift through that and think about it seriously.
(edited because I don't know how to make a legible list)
Just to make sure I'm following: that's ongoing discussion of the same issue, but not the same post, right?
pvg · 2h ago
Right, because otherwise you end up with a split discussion, people miss stuff, moderators end up having to merge them, etc. Immediate followups count as dupes (in HN's weird dupery algebra), so they're better off linked in the active thread.
meghan · 3h ago
Yes, similar discussion across two separate articles:
(1) article from General Analysis on "Supabase MCP can leak your entire SQL database"
(2) article from Oso on why authorization in AI is hard and what to do about it, which references the General Analysis article
gneray · 3h ago
Curious to hear from the community about this, esp in light of the article on Supabase