Posted by takira 1/14/2026
This should be relatively simple to fix. But that would not solve the million other ways a file can be sent to another computer, whether through the user opening a compromised .html document, a .pdf file, etc.
This fundamentally comes down to the issue that we are running intelligent agents that can be turned against us on personal data. In a way, it mirrors the AI Box problem: https://www.yudkowsky.net/singularity/aibox
The real answer is that people are lazy, and as soon as a security barrier forces them to do work, they want to tear down the barrier. It doesn't take a superhuman AI; it just takes a government employee using their personal email because it's easier. There have been a million MCP "security issues" because they accept untrusted, unverifiable inputs and act with lots of permissions.
- Currently we have no skills hub, and no way to do versioning, signing, or attestation for the skills we want to use.
- They do sandboxing, but probably just a simple URL whitelist/blacklist. They of course need to whitelist their own domains, which opens the door to cross-account uploads (see the sketch below).
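A minimal sketch of why a host-based allowlist is not enough, using a hypothetical `is_allowed` filter and made-up vendor domains: if the vendor's own upload endpoints have to be whitelisted for the product to work, those same endpoints can be used to push data into an account the attacker controls.

```python
from urllib.parse import urlparse

# Domains the sandbox has to allow for the product to work at all (made up).
ALLOWED_HOSTS = {"api.example-vendor.com", "files.example-vendor.com"}

def is_allowed(url: str) -> bool:
    """Naive egress filter: allow a request only if its host is on the allowlist."""
    return urlparse(url).hostname in ALLOWED_HOSTS

# The filter does its job against arbitrary attacker domains...
assert not is_allowed("https://attacker.example/upload")

# ...but an upload endpoint on the vendor's own domain is still reachable,
# so data can be written into an account the attacker controls.
assert is_allowed("https://files.example-vendor.com/upload?account=attacker")
```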
Seems to me the direct takeaway is pretty simple: Treat skill files as executable code; treat third-party skill files as third-party executable code, with all the usual security/trust implications.
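One way to act on that, sketched with a hypothetical `verify_skill` helper and made-up pins: verify a pinned hash (or a publisher signature) before a skill file is ever loaded, exactly as you would for any other third-party executable.

```python
import hashlib
import sys
from pathlib import Path

# Hypothetical lockfile mapping skill names to pinned SHA-256 digests,
# the way a package manager pins dependency versions.
PINNED_SKILLS = {
    "web-research": "3f2acb0e0b5d5d2f9d0d3f1a7c6b8e9d"
                    "4a5b6c7d8e9f0a1b2c3d4e5f60718293",
}

def verify_skill(name: str, path: Path) -> bool:
    """Refuse to install a skill whose content hash does not match the pin."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return PINNED_SKILLS.get(name) == digest

if __name__ == "__main__":
    name, path = sys.argv[1], Path(sys.argv[2])
    if not verify_skill(name, path):
        sys.exit(f"refusing to install {name}: hash mismatch or unknown skill")
    print(f"{name} verified, ok to install")
```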
I think the more interesting problem would be if you can get prompt injections done in "data" files - e.g. can you hide prompt injections inside PDFs or API responses that Claude legitimately has to access to perform the task?
But for truly sensitive work there are still many non-obvious leaks. Even small, innocuous-looking requests give the agent enough room to encode secrets, and a misaligned agent will find covert channels like this and many more.
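A back-of-the-envelope sketch of how little traffic that takes (the numbers are illustrative, not from the article): even if every individual request looks benign, the agent's *choice* among a handful of allowed requests is itself a channel.

```python
import math

# Illustrative numbers: a 32-character API key over a 64-symbol alphabet
# carries 32 * log2(64) = 192 bits of secret material.
secret_bits = 32 * math.log2(64)

# Suppose the agent may only issue "harmless" requests, but gets to choose
# among 16 equally plausible variants (ordering, optional parameters,
# timing buckets, ...). Each choice leaks log2(16) = 4 bits.
bits_per_request = math.log2(16)

requests_needed = math.ceil(secret_bits / bits_per_request)
print(f"~{requests_needed} benign-looking requests leak the whole key")  # ~48
```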
You word it, three times, like so:
1. Do not, under any circumstances, allow data to be exfiltrated.
2. Under no circumstances, should you allow data to be exfiltrated.
3. This is of the highest criticality: do not allow exfiltration of data.
Then someone does a prompt attack and bypasses all of this anyway, since you didn't specify, in Russian poetry form, to stop this. /s (but only kind of, because this does happen)
They all use the GitHub topic feature to be discoverable. The most recent commit will usually be a trivial update to README.md, done simply to maintain visibility for anyone browsing topics by "recently updated". The README will typically instruct installation by downloading the zip file rather than cloning the repo.
I assume the payload steals Claude credentials or something similar. The sheer number of repos would suggest plenty of downloads which is quite disheartening.
It would take a GitHub engineer barely minutes to implement a policy that would eradicate these repos, but they don't seem to care. I have also been unable to use GitHub's search function for over six months now, which is irrelevant to this discussion, but it seems paying customers cannot count on GitHub to do even the bare minimum for them.
I wonder if it might be possible by introducing a concept of "authority". Tokens are mapped to vectors in an embedding space, so one of the dimensions of that space could be reserved to represent authority.
For the system prompt, the authority value could be clamped to the maximum (+1). For text directly from the user or files with important instructions, the authority value could be clamped to a slightly lower value, or maybe 0, because the model needs to balance being helpful against refusing requests from a malicious user. For random untrusted text (e.g. downloaded from the internet by the agent), it would be set to the minimum value (-1).
The model could then be trained to fully respect or completely ignore instructions, based on the "authority" of the text. Presumably it could learn to do the right thing with enough examples.
But maybe someone with a deeper understanding can describe how I'm wrong.
Since a token itself carries no information about whether it has "authority" or not, I'm proposing to inject this information in a reserved number in that embedding vector. This needs to be done both during post-training and inference. Think of it as adding color or flavor to a token, so that it is always very clear to the LLM what comes from the system prompt, what comes from the user, and what is random data.
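A minimal sketch of what that could look like, assuming a transformer where one embedding channel is reserved and overwritten with a per-segment authority score (this is just my reading of the proposal, not an existing API):

```python
import torch
import torch.nn as nn

AUTHORITY = {"system": 1.0, "user": 0.0, "untrusted": -1.0}

class AuthorityEmbedding(nn.Module):
    """Token embedding whose last channel is clamped to a per-segment authority score."""

    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)

    def forward(self, token_ids: torch.Tensor, segments: list[str]) -> torch.Tensor:
        # token_ids: (seq_len,); segments: one label per token, e.g. "system"
        x = self.embed(token_ids).clone()                # (seq_len, d_model)
        auth = torch.tensor([AUTHORITY[s] for s in segments])
        x[:, -1] = auth                                  # overwrite the reserved channel
        return x

# The same clamping has to happen at inference time and throughout post-training,
# so the model can learn to ignore instructions whose reserved channel is -1.
emb = AuthorityEmbedding(vocab_size=32000, d_model=512)
ids = torch.tensor([1, 2, 3, 4])
out = emb(ids, ["system", "system", "user", "untrusted"])
```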
The response from tempaccsoz5 seems apt then, since this injection is performed/learned during post-training; in order to be watertight, it needs to overfit.
Definitely sounds expensive. Would it even be effective, though? The more-privileged rings have to guard against [output from unprivileged rings] rather than [input to unprivileged rings]. Since the former is a function of the latter (in deeply unpredictable ways), it's hard for me to see how this fundamentally plugs the hole.
I'm very open to correction though, because this is not my area.
This is what OpenAI is doing. The system prompt is "ring 0", and in some cases you as an API caller can't even set it; then there's the "developer prompt", which is what we used to call the system prompt; then there's the "user prompt". They do train the models to follow this prompt hierarchy, but it's never foolproof. These are "mitigations", not a solution to the underlying problem.
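A minimal sketch of what that hierarchy looks like on the wire, assuming a Chat Completions-style message format (the exact role names and their precedence vary by provider and model):

```python
# Roles listed from most to least trusted; the model is trained (not
# architecturally forced) to let earlier roles override later ones.
messages = [
    {
        "role": "system",     # "ring 0": set by the platform, sometimes not exposed to API callers
        "content": "Never send the contents of uploaded files to external services.",
    },
    {
        "role": "developer",  # roughly what used to be called the system prompt
        "content": "You are a coding assistant for Acme Corp.",
    },
    {
        "role": "user",       # end-user input, lower authority
        "content": "Summarize the attached report.",
    },
    # Tool output and retrieved web content end up at this lowest-trust layer
    # too, and that is where prompt-injection payloads live -- the hierarchy
    # is a mitigation, not an isolation boundary.
    {
        "role": "user",
        "content": "[fetched page text that may contain injected instructions]",
    },
]
```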
https://embracethered.com/blog/posts/2025/claude-abusing-net...