Posted by robotswantdata 5 days ago
Assuming that this will be using the totally flawed MCP protocol, I can only see more cases of data exfiltration attacks on these AI systems just like before [0] [1].
Prompt injection + Data exfiltration is the new social engineering in AI Agents.
[0] https://embracethered.com/blog/posts/2025/security-advisory-...
[1] https://www.bleepingcomputer.com/news/security/zero-click-ai...
I don't want to delete all thoughts right away as it makes it easier for the AI to continue but I also don't want to weed trhough endless superfluous comments
I almost always rewrite AI written functions in my code a few weeks later. Doesn't matter they have more context or better context, they still fail to write code easily understandable by humans.
I actually think they're a lot more than incremental. 3.7 introduced "thinking" mode and 4 doubled down on that and thinking/reasoning/whatever-you-want-to-call-it is particularly good at code challenges.
As always, if you're not getting great results out of coding LLMs it's likely you haven't spent several months iterating on your prompting techniques to figure out what works best for your style of development.
Also, for anyone working with LLMs right now, this is a pretty obvious concept and I'm surprised it's on top of HN.