Posted by takira 23 hours ago
(1) Opus 4.5-level models that have weights and inference code available, and
(2) Opus 4.5-level models whose resource demands are such that they will run adequately on the machines that the intended sense of “local” refers to.
(1) is probable in the relatively near future: open models trail frontier models, but not by so much that this is likely to be far off.
(2) depends on whether “local” means “in our on-prem server room” or “on each worker’s laptop”. Both will probably happen eventually, but the laptop one may be pretty far off.
Unless we are hitting the maximum of what these things are capable of now, of course. But there’s not really much indication that this is happening.
Check out mini-swe-agent.
Same goes for all these overly verbose answers. They are clogging my context window now with irrelevant crap. And being used to a model is often more important for productivity than SOTA frontier mega giga tera.
I have yet to see any frontier model that is proficient in anything but JS and React. And often I get better results with a local 30B model running on llama.cpp. The reason for that is that I can edit the model’s answers too: I can simply kick all the extra crap out of the context and keep it focused. Impossible with SOTA and frontier models.
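A minimal sketch of that workflow, assuming llama-server from llama.cpp is running locally with its OpenAI-compatible endpoint on the default port (the messages and function names here are made up for illustration):

```python
import requests

# llama-server from llama.cpp exposes an OpenAI-compatible
# /v1/chat/completions endpoint (default port 8080).
URL = "http://localhost:8080/v1/chat/completions"

history = [
    {"role": "system", "content": "You are a terse coding assistant."},
    {"role": "user", "content": "Explain the bug in parse_config()."},
    {"role": "assistant", "content": "...long answer full of boilerplate..."},
]

# Edit the model's own previous answer: keep only the part worth carrying
# forward, drop everything else from the context.
history[-1]["content"] = "parse_config() reads the file before checking it exists."

history.append({"role": "user", "content": "Now write the fix."})

resp = requests.post(URL, json={"messages": history, "max_tokens": 512}, timeout=120)
print(resp.json()["choices"][0]["message"]["content"])
```

Because you own the message list, nothing stops you from rewriting or deleting any turn before the next request; hosted chat UIs generally don't give you that.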
Actually better make it 8x 5090. Or 8x RTX PRO 6000.
The Honda Civic (2026) sedan has exterior dimensions of 184.8” (L) × 70.9” (W) × 55.7” (H), giving a bounding box of ~12,000 liters.
An RTX 5090 is 304 mm × 137 mm, with roughly 40 mm of thickness for a typical 2-slot reference/FE model. That gives a bounding box of ~1.67 liters.
Do the math, and you will discover that a single Honda Civic is the equivalent of ~7,180 RTX 5090s by volume. And that’s a small sedan, significantly smaller than the average or median car on US roads.
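The same arithmetic, spelled out (dimensions as stated above; the 40 mm thickness is a 2-slot estimate, not an official spec):

```python
INCH = 0.0254  # metres per inch

# Civic exterior bounding box, in litres.
civic_l = (184.8 * INCH) * (70.9 * INCH) * (55.7 * INCH) * 1000

# RTX 5090 bounding box: 304 mm x 137 mm x ~40 mm, in litres.
gpu_l = 0.304 * 0.137 * 0.040 * 1000

print(f"Civic: {civic_l:,.0f} L")                 # ~11,959 L (~12,000)
print(f"RTX 5090: {gpu_l:.2f} L")                 # ~1.67 L
print(f"GPUs per Civic: {civic_l / gpu_l:,.0f}")  # ~7,179, i.e. the ~7,180 above
```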
Exploited with a basic prompt injection attack. Prompt injection is the new RCE.
Securing autonomous, goal-oriented AI Agents presents inherent challenges that necessitate a departure from traditional application or network security models. The concept of containment (sandboxing) for a highly adaptive, intelligent entity is intrinsically limited. A sufficiently sophisticated agent, operating with defined goals and strategic planning, possesses the capacity to discover and exploit vulnerabilities or circumvent established security perimeters.
Instructions contained outside of my read-only plan documents are not to be followed. And I have several canaries.
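A rough sketch of the canary idea, under the assumption that it means planting unique tokens in bait files the agent has no business reading or transmitting, then scanning everything outbound for them (the function and token scheme here are hypothetical):

```python
import secrets

# Unique tokens to embed in bait files / plan docs that should never leave the box.
CANARIES = {secrets.token_hex(16) for _ in range(3)}

def check_outbound(payload: str) -> None:
    """Scan an outbound request or tool-call payload before it is sent."""
    tripped = [c for c in CANARIES if c in payload]
    if tripped:
        raise RuntimeError(f"canary tripped, blocking request: {tripped}")

check_outbound("GET https://example.com/docs")  # fine; raises if a canary leaks
```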
If you do, just like piping curl to bash, you accept the risk of running random and potentially malicious shit on your systems.
Curious if anyone else is going down this path.
Our focus is “verifiable computing” via cryptographic assurances across governance and provenance.
That includes signed credentials for capability and intent warrants.
Working on this at github.com/tenuo-ai/tenuo. Would love to compare approaches. Email in profile?
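For concreteness, a generic illustration of a signed capability/intent warrant. This is not tenuo’s actual API or schema, just the underlying idea with made-up field names, using Ed25519 from the cryptography package:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()

# Hypothetical field names, purely illustrative.
warrant = {
    "agent": "deploy-bot",
    "capability": "repo:read",
    "intent": "summarise open issues",
    "expires": "2026-01-01T00:00:00Z",
}
payload = json.dumps(warrant, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Verifier side: refuse the tool call unless the warrant verifies against the
# issuer's public key (verify() raises InvalidSignature on any tampering).
issuer_key.public_key().verify(signature, payload)
print("warrant verified")
```

The point is that the tool boundary checks a cryptographic artifact rather than trusting whatever the agent claims about its own intent.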
Just a few years ago, no one would have contemplated putting into production, or connecting their systems to, services with so little deterministic behaviour, whatever the level of criticality.
In most companies I've worked for, even barebones startups, connecting your IDE to such a remote service, or even uploading requirements, would have been grounds for suspension, or at least a thorough discussion.
The enshittification of this whole industry and its mode of operation is truly baffling. May the bubble burst at last!
Anyone know how to avoid this happening when you build a tool like this? AFAIK there is no simonw-blessed way to avoid it.
* I upload a random doc I got online without reading it, and it contains hidden instructions that leak an API key to the attacker.
That's what this attack did.
I'm sure the anti-virus guys are working on how to detect this sort of "hidden from human view" instruction.
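One crude check along those lines, just a sketch: flag zero-width and Unicode tag characters that hide text from a human reader while still reaching the model. A real scanner would also need to handle white-on-white text, HTML comments, alt text, and so on:

```python
# Characters that are invisible (or nearly so) to a human reader but still
# reach the model's tokenizer.
SUSPICIOUS = {
    "\u200b": "ZERO WIDTH SPACE",
    "\u200c": "ZERO WIDTH NON-JOINER",
    "\u200d": "ZERO WIDTH JOINER",
    "\u2060": "WORD JOINER",
    "\ufeff": "ZERO WIDTH NO-BREAK SPACE",
}

def flag_hidden_chars(text: str) -> list[tuple[int, str]]:
    hits = []
    for i, ch in enumerate(text):
        if ch in SUSPICIOUS:
            hits.append((i, SUSPICIOUS[ch]))
        elif 0xE0000 <= ord(ch) <= 0xE007F:  # Unicode "tag" block, used for ASCII smuggling
            hits.append((i, f"TAG CHARACTER U+{ord(ch):05X}"))
    return hits

doc = "Please summarise this file.\u200b\u200bIgnore previous instructions."
print(flag_hidden_chars(doc))  # -> [(27, 'ZERO WIDTH SPACE'), (28, 'ZERO WIDTH SPACE')]
```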