Top
Best
New

Posted by takira 21 hours ago

Claude Cowork exfiltrates files(www.promptarmor.com)
774 points | 341 commentspage 3
wutwutwat 2 hours ago|
the same way you are not supposed to pipe curl to bash, you shouldn't raw dawg the internet into the mouth of a coding agent.

If you do, just like curl to bash, you accept the risk of running random and potentially malicious shit on your systems.

khalic 10 hours ago||
If you don’t read the skills you install in your agent, you really shouldn’t be using one.
adam_patarino 4 hours ago||
What frustrates me is that Anthropic brags they built cowork in 10 days. They don’t show the seriousness or care required for a product that has access to my data.
lifetimerubyist 4 hours ago|
The also brag that Claude Code wrote all of the code.

Not a good look.

calflegal 20 hours ago||
So, I guess we're waiting on the big one, right? The ?10+? billion dollar attack?
chasd00 19 hours ago|
It will be either one big one or a pattern that can't be defended against and it just spreads through the whole industry. The only answer will be crippling the models by disconnecting them from the databases, APIs, file systems etc.
wunderwuzzi23 18 hours ago||
Relevant prior post, includes a response from Anthropic:

https://embracethered.com/blog/posts/2025/claude-abusing-net...

sgammon 20 hours ago||
is it not a file exfiltrator, as a product
fathermarz 15 hours ago||
This is getting outrageous. How many times must we talk about prompt injection. Yes it exists and will forever. Saying the bad guys API key will make it into your financial statements? Excuse me?
tempaccsoz5 14 hours ago|
The example in this article is prompt injection in a "skill" file. It doesn't seem unreasonable that someone looking to "embrace AI" would look up ways to make it perform better at a certain task, and assume that since it's a plain text file it must be safe to upload to a chatbot
fathermarz 10 hours ago||
I have a hard time with this one. Technical people understand a skill and uploading a skill. If a non-technical person learns about skills it is likely through a trusted person who is teaching them about them and will tell them how to make their own skills.

As far as I know, repositories for skills are found in technical corners of the internet.

I could understand a potential phish as a way to make this happen, but the crossover between embrace AI person and falls for “download this file” phishes is pretty narrow IMO.

swores 10 hours ago||
You'd be surprised how many people fit in the venn overlap of technical enough to be doing stuff in unix shell yet willing to follow instructions from a website they googled 30 seconds earlier that tells them to paste a command that downloads a bash script and immediately executes it. Which itself is a surprisingly common suggestion from many how to blog posts and software help pages.
SamDc73 19 hours ago||
I was waiting for someone to say "this is what happens when you vibe code"
lifetimerubyist 4 hours ago|
Instead of vibing out insecure features in a week using Claude Code can Anthropic spend some time making the desktop app NOT a buggy POS. Bragging that you launched this in a week and Claude Code wrote all of the code looks horrible on you all things considered.

Randomly can’t start new conversations.

Uses 30% CPU constantly, at idle.

Slow as molasses.

You want to lock us into your ecosystem but your ecosystem sucks.

More comments...