Posted by dominicq 9 hours ago

Small models also found the vulnerabilities that Mythos found(aisle.com)
764 points | 208 comments
omcnoe 7 hours ago|
The methodology here is completely wrong, outright dishonest.

Finding a needle in a haystack is easy if someone hands you the small handful of hay containing the needle up front, and raises their eyebrows at you saying “there might be a needle in this clump of hay”.

cmiles8 7 hours ago||
Mythos is clearly a nice improvement. It’s also clear there’s a lot of unfounded hype around it to keep the AI hype cycle going.

Gating access is also a clever marketing move:

Option A: Release it but run out of capacity, everyone is annoyed and moves on. Drives focus back to smaller models.

Option B: A bunch of manufactured hype and velvet ropes put up around it, saying it's “too dangerous” to let mere mortals touch it. The press buys it hook, line, and sinker; that sidesteps the capacity issues and keeps the hype train going a bit longer.

Seems quite clear we’re seeing “Option B” play out here.

hedgehog 7 hours ago||
It's strange to me that they didn't reduce to a PoC so the quantitative part is an apples-to-apples comparison. You don't need any fancy tooling; if you want to do this at home you can do something like the below in whatever command-line agent and model you like. A while back I took one bug all the way through remediation just out of curiosity.

"""

Your task is to study the following directive, research coding agent prompting, research the directive's domain best practices, and finally draft a prompt in markdown format to be run in a loop until the directive is complete.

Concept: Iterative review -- study an issue, enumerate the findings, fix each of the findings, and then repeat, until review finds no issues.

<directive>

Your job is to run a security bug factory that produces remediation packages as described below. Design and apply a methodology based on best practices in exploit development, lean manufacturing, threat modeling, and the scientific method. Use checklists, templates, and your own scripts to improve token efficiency and speed. Use existing tools where possible. Use existing research and bug findings for the target and similar codebases to guide your search. Study the target's development process to understand what kind of harness and tools you need for this work, and what will work in this development environment. A complete remediation package includes a readme documenting the problem and recommendations, runnable PoC with any necessary data files, and proposed patch.

Track your work in TODO.md (tasks identified as necessary), LOG.md (chronological list of tasks completed and lessons learned), and STATUS.md (concise summary of the current work being done). Never let these get more than a few minutes out of date. At each step ensure the repo file tree would make sense to the next engineer, and if not, reorganize it. Apply iterative review before considering a task complete.

Your task is to run until the first complete remediation package is ready for user review.

Your target is <repo url>.

The prompt will be run as follows, design accordingly. Once the process starts, it is imperative not to interrupt the user until completion or until further progress is not possible. Keep output at each step to a concise summary suitable for a chat message.

```
while output=$(claude -p "$(cat prompt.md)"); do
  echo "$output"
  echo "$output" | grep -q "XDONEDONEX" && break
done
```

</directive>

Draft the prompt into prompt.md, and apply iterative review with additional research steps to ensure it will execute the directive as faithfully as possible.

"""

ares623 2 hours ago||
Once again, it would've been so easy and simple to remove all doubt from their claims: release all the tools and harnesses they used to do it and allow 3rd parties to try and replicate their results using different models. If Mythos itself is as big a moat as they claim it is, then there shouldn't be any problem here.

They did the same stunt with the C compiler. They could've released a tool to let others replicate it, but they didn't.

dist-epoch 8 hours ago||
Anthropic's claim is not necessarily that Mythos found vulnerabilities that other models couldn't, but that it could easily exploit them while previous models failed to do so:

> “Opus 4.6 is currently far better at identifying and fixing vulnerabilities than at exploiting them.” Our internal evaluations showed that Opus 4.6 generally had a near-0% success rate at autonomous exploit development. But Mythos Preview is in a different league. For example, Opus 4.6 turned the vulnerabilities it had found in Mozilla’s Firefox 147 JavaScript engine—all patched in Firefox 148—into JavaScript shell exploits only two times out of several hundred attempts. We re-ran this experiment as a benchmark for Mythos Preview, which developed working exploits 181 times, and achieved register control on 29 more.

rychu 7 hours ago||
If that was normal Opus, then it sounds to me like Mythos could be a big model, instruction-tuned, but without all the safety/refusal part of training.
neuronexmachina 8 hours ago||
[dead]
robotswantdata 8 hours ago||
They found a nail in a small bucket of sand, vs. Mythos reviewing the entire beach.
midnitewarrior 6 hours ago||
At the center of every security situation is the question, "is the effort worth the reward?"

We design security measures based on the perceived effort a bad actor would need to defeat them, weighed against the harm if they are defeated. We don't build Fort Knox for candy bars; it was built for gold bars.

These model advances change the equation. The effort and cost to defeat a measure goes down by an order of magnitude or more.

Things nobody would have reasonably considered attempting are becoming possible. However, we have 2000s-2020s security measures in place that will not survive the AI models of 2026+. The investment to resecure things will be massive, and won't come soon enough.

charcircuit 6 hours ago||
The thesis that the system is more important than the model is not bitter lesson pilled. I would not bet on this in the long term. We will get to the point where you can just tell the model to go find and classify the severity of all security problems with a codebase.
AlexandrB 6 hours ago|
The whole "this tool is too dangerous to be public" idea reeks of marketing. Just like all the "AI is an existential threat" talk a year ago. These companies are using ideas usually reserved for something like nuclear weapons to make their products look more impressive.