Posted by todsacerdoti 15 hours ago

Hardening Firefox with Anthropic's Red Team (www.anthropic.com)
The bugs are the ones that say "using Claude from Anthropic" here: https://www.mozilla.org/en-US/security/advisories/mfsa2026-1...

https://blog.mozilla.org/en/firefox/hardening-firefox-anthro...

https://www.wsj.com/tech/ai/send-us-more-anthropics-claude-s...

514 points | 148 comments
tabbott 7 hours ago|
I recommend that anyone responsible for maintaining the security of an open-source software project ask Claude Code to do a security audit of it. I imagine that might not work that well for Firefox without a lot of care, because it's a huge project.

But for most other projects, it probably only costs $3 worth of tokens. So you should assume the bad guys have already done it to your project, looking for things they can exploit, and it no longer feels responsible not to have done such an audit yourself.

Something that I found useful when doing such audits for Zulip's key codebases is to ask the model to carefully self-review each finding; that removed the majority of the false positives. Most of the rest we addressed by adding comments that help developers (or a model) casually reading the code understand the intended security model for that code path... And indeed most of those did not show up on a second audit done afterwards.
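For anyone wanting to script that self-review step, here's a minimal sketch using the Anthropic Python SDK. The model name and prompt wording are my own assumptions, not Zulip's actual workflow:

  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  REVIEW_PROMPT = (
      "You previously reported the following security finding. Re-read the "
      "code carefully and decide whether it is a real, exploitable issue or "
      "a false positive. Answer CONFIRMED or FALSE POSITIVE with a short "
      "justification.\n\nFinding:\n{finding}\n\nCode:\n{code}"
  )

  def self_review(finding: str, code: str) -> str:
      # Second pass over a finding produced by the first audit run.
      response = client.messages.create(
          model="claude-sonnet-4-5",  # assumption: any current model works
          max_tokens=1024,
          messages=[{"role": "user",
                     "content": REVIEW_PROMPT.format(finding=finding, code=code)}],
      )
      return response.content[0].text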

SV_BubbleTime 2 hours ago||
This is exactly how I would not recommend AI to be used.

“Do a thing that would take me a week” cannot actually be done in seconds. It will produce results that superficially resemble reality.

If you were to pass some module in and ask for finite checks on that, maybe.

Despite the claims of agents… treat it more like an intern and you won’t be disappointed.

Would you ask an intern to “do a security audit” of an entire massive program?

creatonez 1 hour ago|||
IMO the key insight is that LLMs are really good at fuzz testing, because they are probabilistic monkeys on typewriters that are much more code-aware than a conventional fuzz tester. They cannot produce a comprehensive security audit or fix security issues reliably without human oversight, but they sure can come up with dumb inputs that break the code.

The results of such AI fuzz testing should be treated as just a science experiment and not a replacement for the entire job of a security researcher.

Like conventional fuzz testing, you get the best results if you have a harness to guide it towards interesting behaviors, a good scientific filtering process to confirm something is really going wrong, a way to reduce it to a minimal test case suitable for inclusion in a test suite, and plenty of human followup to narrow in on what's going on and figure out what correctness even means in the particular domain the software is made for.
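For comparison with the conventional baseline, a minimal property-based harness in Python using Hypothesis, which also handles the test-case shrinking mentioned above (parse_header and myproject are hypothetical):

  from hypothesis import given, strategies as st

  from myproject import parse_header  # hypothetical function under test

  # Steer the generator toward "interesting" inputs: header-shaped bytes
  # rather than pure random noise.
  headers = st.binary(max_size=1024).map(lambda b: b"Host: " + b)

  @given(headers)
  def test_parse_header_never_crashes(data):
      # Property: the parser may reject input, but must only raise its
      # documented error type, never an unexpected exception.
      try:
          parse_header(data)
      except ValueError:
          pass  # documented rejection path

On failure, Hypothesis automatically shrinks the input to a minimal reproducing example, which covers the "reduce it to a minimal test case" step.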

padolsey 1 hour ago||||
My approach is that you "may as well" hammer Claude and get it to brute-force-investigate your codebase; worst case, you learn nothing and get a bunch of false-positive nonsense. Best case, you get new visibility into issues. Of _course_ you should be doing your own in-depth audits, but the plain fact is that people do not have time, or do not care sufficiently. You can, however, set up a battery of agents to do this work for you. So... why not?
eli 36 minutes ago|||
It depends whether anyone was ever actually going to spend that week doing it the "hard" way. Having Claude do it in a few minutes beats doing nothing.

Put another way: I absolutely would have an intern work on a security audit. I would not have an intern replace a professional audit though.

It's otherwise a pretty low stakes use. I'd expect false positives to be pretty obvious to someone maintaining the code.

SV_BubbleTime 19 minutes ago||
My point is that it's one thing to say, "I want my intern to start doing a security audit."

It's another thing to say, "Hey intern, security audit this entire code base."

LLMs thrive on context. You need the right context at the right time; it doesn't matter how good your model is if you don't have that.

Analemma_ 7 hours ago||
I'm curious: has someone done a lengthy write-up of best practices to get good results out of AI security audits? It seems like it can go very well (as it did here) or be totally useless (all the AI slop submitted to HackerOne), and I assume the difference comes down to the quality of your context engineering and testing harnesses.

This post did a little bit of that but I wish it had gone into more detail.

j-conn 3 hours ago|||
OpenAI just released “codex security”; worth trying (along with the other suggestions here) if your org has access: https://openai.com/index/codex-security-now-in-research-prev...
simonw 6 hours ago||||
The HackerOne slop is because there's a financial incentive (bug bounties) involved, which means people who don't know what they are doing blindly submit anything that an LLM spots for them.

If you're running the security audit yourself you should be in a better position to understand and then confirm the issues that the coding agents highlight. Don't treat something as a security issue until you can confirm that it is indeed a vulnerability. Coding agents can help you put that together but shouldn't be treated as infallible oracles.

hansvm 2 hours ago|||
That sounds like the same problem (a deluge of slop) with a different interface (eating straight from the trough rather than waiting for someone to put a bow on it and stamp their name to it)?
simonw 2 hours ago||
I've found it's pretty good. It's really not that much of a burden to dig through 10 reports and find the 2 that are legitimate.

It's different from Hacker One because those reports tend to come in with all sorts of flowery language added (or prompt-added) by people who don't know what they are doing.

If you're running the prompts yourself against your own coding agents you gain much more control over the process. You can knock each report down to just a couple of sentences which is much faster to review.

Mapsmithy 1 hour ago||
You also probably have a much better idea of where the unsafe boundaries in your application are. Letting the models know this information up front has given me a dozen or so legitimate vulnerabilities in the application I work on. And the signal-to-noise ratio is generally pretty good. Certainly orders of magnitude better than the terrible dependabot alerts I have to dismiss every day.
johannes1234321 5 hours ago|||
The question still is: will enough useful stuff be included to make it worth digging through the slop? And how do you tune the prompt to get better results?
simonw 5 hours ago|||
Best way to figure that out is to try it and see what happens.
Groxx 4 hours ago||
[claimed common problem exists, try X to find it] -> [Q about how to best do that] -> "the best way to do it is to do it yourself"

Surely people have found patterns that work reasonably well, and it's not "everyone is completely on their own"? I get that the scene is changing fast, but that's ridiculous.

simonw 3 hours ago|||
There's so much superstition and outdated information out there that "try it yourself" really is good advice.

You can do that in conjunction with trying things other people report, but you'll learn more quickly from your own experiments. It's not like prompting a coding agent is expensive or time consuming, for the most part.

nl 4 hours ago|||
/security-review really is pretty good.

But your codebase is unique. Slop in one codebase is very dangerous in another.

bluGill 4 hours ago||||
That depends on how the tool is used. People who ask for a security vulnerability get slop. People who ask for deeper analysis often get something useful - but it isn't always a vulnerability.
unethical_ban 3 hours ago||||
I assume it's just like asking for help refactoring, just targeting specific kinds of errors.

I ran a small Python script that I made some years ago through an LLM recently, and it pointed out several areas where the code would likely throw an error if certain inputs were received. Not security, but flaws nonetheless.
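A hypothetical example of the kind of flaw such a pass tends to flag:

  def parse_port(value: str) -> int:
      return int(value)  # raises ValueError on input like "auto"

  def parse_port_checked(value: str, default: int = 8080) -> int:
      # Defensive variant an LLM review might suggest.
      return int(value) if value.isdigit() else default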

ronsor 4 hours ago|||
You're either digging through slop or digging through your whole codebase anyway.
lmeyerov 6 hours ago||||
We split our work:

* Specification extraction. We have security.md and policy.md, often per module: threat model, mechanisms, etc. This is collaborative and gets checked in, for ourselves and for the AI. Policy is often tricky, malleable product/business/UX decision stuff, while security covers the technical layers that are more independent of that, plus the broader threat model.

* Bug mining. It is driven by the above. It is iterative: we keep running it to surface findings, adversarially analyze them, and prioritize them. We keep repeating until diminishing returns wrt priority levels, which often leads to policy & security spec refinements. We use this pattern not just for security, but for general bugs and other iterative quality & performance improvement flows - it's just a simple skill file with tweaks like parallel subagents to make it fast and reliable.

This lets the AI drive itself more easily, and in ways you explicitly care about, rather than generating noise.
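For readers wondering what those spec files might contain, an invented skeleton of a per-module security.md in the spirit described above (the structure is my illustration, not the actual file):

  # security.md (module: uploads)

  ## Threat model
  - Untrusted input: file contents, filenames, Content-Type headers.
  - Trust boundary: anything past validate_upload() is considered safe.

  ## Mechanisms
  - Filenames are normalized; path traversal is rejected.
  - Stored files are never executed or template-rendered.

  ## Out of scope (see policy.md)
  - Quotas and rate limits (product policy decisions, not security).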

ares623 7 hours ago|||
No mention of the quality of the engineers reviewing the results?
mmsc 14 hours ago||
It's cool that Mozilla updated https://www.mozilla.org/en-US/security/advisories/mfsa2026-1... because we were all wondering who had found 22 vulnerabilities in a single release. (Their findings were originally not attributed to anybody.)
himata4113 4 hours ago||
Use After Free Use After Free Use After Free Use After Free Use After Free Use After Free Use After Free.

I would be more satisfied if they gave a proper explanation of what these could have led to, rather than "well, maybe a 0.001% chance to exploit this". They did vaguely go over how "two" exploits managed to drop a file, but how impactful is that? Dropping a file with custom contents in some folder relative to the user profile is not that impactful, beyond corrupting data, poisoning a cache, or injecting some JavaScript. Now, reading session data from other sites - that I would find interesting.

mccr8 35 minutes ago|||
You should generally assume that in a web browser any memory corruption bug can, when combined with enough other bugs and a lot of clever engineering, be turned into arbitrary code execution on your computer.
himata4113 33 minutes ago||
The most important bit being the difficulty: AI finding 21 easily exploitable bugs is a lot more interesting than 21 that need all the planets to align to work.
hedora 3 hours ago|||
If you can poison a cache, you can probably use that as a stepping stone to read session data from other sites.
dmix 4 hours ago||
Looks like a lot of the usual suspects
gzoo 2 hours ago||
This resonates. I just open-sourced a project, and someone on Reddit ran a full security audit using Claude and found 15 issues across the codebase, including FTS injection, LIKE wildcard injection, missing API auth, and privacy enforcement gaps I'd missed entirely. What surprised me was how methodical it was. Not just "this looks unsafe": it categorized by severity, cited exact file paths and line numbers, and identified gaps between what the docs promised and what the code actually implemented. The "spec vs reality" analysis was the most useful part.

Makes me think the biggest impact of LLM security auditing isn't finding novel zero-days; it's the mundane stuff that humans skip because it's tedious: checking every error handler for information leakage, verifying that every documented security feature is actually implemented, scanning for injection points across hundreds of routes. That's exactly the kind of work that benefits from tireless pattern matching.

fcpk 15 hours ago||
The fact that there is no mention of what the bugs were is a little odd. It'd really be nice to see whether these are "weird, never-happening edge cases" or actual issues. LLMs have an uncanny ability to identify failure patterns they have seen before, but those are not necessarily meaningful.
iosifache 15 hours ago||
You can find them linked [1] in the OG article from Anthropic [2].

[1] https://www.mozilla.org/en-US/security/advisories/mfsa2026-1...

[2] https://www.anthropic.com/news/mozilla-firefox-security

larodi 13 hours ago|||
The fact that some of the Claude-discovered bugs were quite severe is also a little more than something to brush off as "yeah, LLM, whatever". The list reads as quite meaningful to me, but I'm not a security expert anyway.
jandem 15 hours ago|||
Here's a write-up for one of the bugs they found: https://red.anthropic.com/2026/exploit/
deafpolygon 15 hours ago|||
I’m guessing it might be some of these: https://www.mozilla.org/en-US/security/advisories/mfsa2026-1...
muizelaar 15 hours ago||
Yeah, the ones reported by Evyatar Ben Asher et al.
robin_reala 14 hours ago||
I correctly misread that as “et AI”.
moffkalast 9 hours ago|||
We can put that one next to the Weird AI Yankovic music generator.
deafpolygon 14 hours ago|||
“et AI, Brutus!"
tclancy 13 hours ago||
Yon Claude has a lean and hungry look.
nervysnail 4 hours ago|||
He computes too much.
deafpolygon 13 hours ago|||
An LLM by any other name would hallucinate the same
tclancy 11 hours ago||
Anyone still reading down here will appreciate this https://bsky.app/profile/simeonthefool.bsky.social/post/3kbk...
tclancy 9 hours ago||
Hang on, someone downvoted me for a horrific pun? GOOD.
deafpolygon 6 hours ago||
I upvoted, so maybe that restored the balance.
tclancy 1 hour ago||
Out, out, vile upvote.
pjmlp 15 hours ago||
Indeed; without that, it looks like a fluffy marketing piece.
tptacek 11 hours ago||
And now that you know that it isn't, do you feel differently about the logic you used to write this comment?
john_strinlai 11 hours ago|||
I am curious: what are you hoping to get out of this comment? Will you feel better if they say yes? What is your plan if they say no?
tptacek 11 hours ago|||
I genuinely want to understand how they arrived at the claim that this was a fluffy marketing piece. Like, if you said on a different thread, "the Linux kernel is probably mostly written in Pascal", I would really want to understand how it was you got to that idea.
JumpCrisscross 10 hours ago|||
> what are you hoping to get out of this comment?

Rando here. It gives a signal on the account’s other comments, as well as the value of the original comment (as a hypothesis, albeit a wrong one, versus blind raging).

john_strinlai 9 hours ago||
>"It gives a signal on the account's other comments,"

Fair enough. I typically use karma as a rough proxy for that, especially when the user has a lot of it (like, in this case, where the poster is #17 on the leaderboard with 100,000+ karma). You don't get that much karma if you are consistently posting bad takes.

>as well as the value of the original comment (as a hypothesis, albeit a wrong one, versus blind raging).

I don't see, in this case anyway, how or why that distinction would matter or change anything (specifically, what would you change or do differently if it was a hypothesis versus simple "raging"?), but I'm probably just thinking about it incorrectly.

tptacek 8 hours ago|||
I think a lot of people are overreading this and really all that's happened here is that I was out at a show last night and was really foggy when I woke up and asked a question clumsily. It happens!
john_strinlai 8 hours ago||
Yeah, absolutely, I was not intending to start some big inquisition against you or anything.

Just like you were genuinely trying to understand where pjmlp was coming from, I was genuinely trying to understand what you would get out of an answer to your question (or, like, what the next reply could even be other than "ok, cool").

tptacek 8 hours ago||
Oh, yeah, no, you're fine, this is on me.
TheBicPen 9 hours ago|||
> You don't get that much karma if you are consistently posting bad takes.

I wonder how true that is. While this site doesn't incentivize engagement-maximizing behaviour (posting ragebait) like some other sites do, I would imagine that simply posting more is the best way to accrue karma long-term.

john_strinlai 9 hours ago||
>I would imagine that simply posting more is the best way to accrue karma long-term.

I definitely agree, which is why I use it as a rough proxy rather than ground truth, but I have my doubts that you can casually "post more" your way into the top 20 karma users of all time.

pjmlp 10 hours ago|||
Do I?
tptacek 8 hours ago||
I don't know. I'm really asking. I have you bucketed in my head in the cohort of "HN commenters who write lots of assembly", so the mismatch between your prediction and the outcome is just really interesting to me.
staticassertion 14 hours ago||
I've had mixed results. I find that agents can be great for:

1. Producing new tests to increase coverage. Migrating you to property testing. Setting up fuzzing. Setting up more static analysis tooling. All of that would normally take "time" but now it's a background task.

2. They can find some vulnerabilities. They are "okay" at this, but if you are willing to burn tokens then it's fine.

3. They are absolutely wrong sometimes about something being safe. I have had Claude very explicitly state that a security boundary existed when it didn't. That is, it appeared to exist in the same way that a chroot appears to confine; it was intended to be a security boundary, but it was not a sufficient boundary whatsoever. Multiple models not only identified the boundary and stated it exists, but referred to it as "extremely safe" or other such things. This has happened to me a number of times, and it required a lot of nudging for the models to see the problems.

4. They often seem to do better with "local" bugs: something that has the very obvious pattern of an unsafe thing, sort of like "that's a pointer deref" or "that's an array access" or "that's `unsafe {}`", etc. They do far, far worse the less "local" a vulnerability is. Product features that interact in unsafe ways when combined - that's something I have yet to see an AI pick up on. This is unsurprising: if we trivialize agents as "pattern matchers", well, spotting an unsafe pattern and then validating its known properties is not so surprising, but "your product has multiple completely unrelated features, bugs, and deployment properties, which all combine into a vulnerability" is not something they'll notice easily.
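To make the local/non-local contrast concrete, a hypothetical Python sketch (both "features" and expand_template are invented for illustration):

  import subprocess
  from pathlib import Path

  # "Local" bug: flaggable from a single line (user input + shell=True).
  def archive(user_path: str) -> None:
      subprocess.run(f"tar czf backup.tgz {user_path}", shell=True)

  # "Non-local" bug: each feature looks fine alone. Feature A stores user
  # uploads under static/; feature B renders any file under static/ as a
  # template. Combined, they give user-controlled template execution.
  def save_upload(name: str, data: bytes) -> None:  # feature A
      Path("static", name).write_bytes(data)

  def render_page(name: str) -> str:  # feature B
      return expand_template(Path("static", name).read_text())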

It's important to remain skeptical of safety claims by models. Finding vulns is huge, but you need to be able to spot the mistakes.

mozdeco 14 hours ago||
[work at Mozilla]

I agree that LLMs are sometimes wrong, which is why this new method is so valuable: it provides us with easily verifiable test cases rather than just some kind of analysis that could be right or wrong. Purely triaging vulnerability reports that are static (i.e., no actual PoC) is very time-consuming and false-positive-prone (same issue with pure static analysis).

I can't really confirm the part about "local" bugs anymore, though that might also be a model thing. When I did experiments longer ago, this was certainly true, esp. for the "one shot" approaches where you basically prompt it once with source code and want some analysis back. But this actually changed with agentic SDKs, where more context can be pulled together automatically.

kwanbix 4 hours ago||
Please implement "name window" natively in Firefox.

I have to use Chrome because of the lack of it.

nitwit005 7 hours ago|||
I've seen fairly poor results from people asking AI agents to fill in coverage holes. Too many tests that either don't make sense, or add coverage without meaningfully testing anything.

If you're already at very high coverage, the remaining bits are presumably just inherently difficult.

rithdmc 13 hours ago|||
Security has had pattern matching in traditional static analysis for a while. It wasn't great.

I've personally used two AI-first static analysis security tools and found great results, including interesting business logic issues, across my employer's SaaS tech stack. We integrated one of the tools. I look forward to getting employer approval to say which, but that hasn't happened yet, sadly.

StilesCrisis 10 hours ago|||
This description is also pretty accurate for a lot of real-world SWEs, too. Local bugs are just easier to spot. Imperfect security boundaries often seem sufficient at first glance.
delaminator 7 hours ago|||
But you're not a member of Anthropic's Red Team, with access to a specialist version of Claude.
octoclaw 13 hours ago||
[dead]
152334H 3 hours ago||
Impressive work. Few understand the absurd complexity implied by a browser pwn problem. Even the 'gruntwork' of promoting the most conveniently contrived UAF to wasm shellcode would take me days to work through manually.

The AI cyber-capabilities race still feels asleep/cold at the moment. I don't think this state of affairs lasts through to the end of the year.

> When we say “Claude exploited this bug,” we really do mean that we just gave Claude a virtual machine and a task verifier, and asked it to create an exploit.

I've been doing this too! kctf-eval works very well for me, albeit with much less than 350 chances ...

> What’s quite interesting here is that the agent never “thinks” about creating this write primitive. The first test after noting “THIS IS MY READ PRIMITIVE!” included both the `struct.get` read and the `struct.set` write.

And this bit is a bit scary. I can read all the (summarized) CoT I want, but it's never quite clear to me what a model understands/feels innately, versus pure cheerleading for the sake of some unknown soft reward.

stuxf 15 hours ago||
It's interesting that they counted these as security vulnerabilities (from the linked Anthropic article)

> “Crude” is an important caveat here. The exploits Claude wrote only worked on our testing environment, which intentionally removed some of the security features found in modern browsers. This includes, most importantly, the sandbox, the purpose of which is to reduce the impact of these types of vulnerabilities. Thus, Firefox’s “defense in depth” would have been effective at mitigating these particular exploits.

kingkilr 15 hours ago||
[Work at Anthropic, used to work at Mozilla.]

Firefox has never required a full chain exploit in order to consider something a vulnerability. A large proportion of disclosed Firefox vulnerabilities are vulnerabilities in the sandboxed process.

If you look at Firefox's Security Severity Rating doc: https://wiki.mozilla.org/Security_Severity_Ratings/Client what you'll see is that vulnerabilities within the sandbox, and sandbox escapes, are both independently considered vulnerabilities. Chrome considers vulnerabilities in a similar manner.

stuxf 14 hours ago|||
Makes sense, thank you!
bell-cot 13 hours ago||||
If only this attitude were more common. All security is, ultimately, multi-ply Swiss cheese and unknown unknowns. In that environment, patching holes in your cheese layers is a critical part of statistical quality control.
lostmsu 6 hours ago|||
Semi-on-topic: when will Anthropic make decisions on Claude Max for OSS maintainers? I would like to run this on my projects and some of my high-profile dependencies, but there has been no update on the application.
halJordan 11 hours ago|||
I don't think it's appropriate to neg these vulnerabilities just because another part of the system works. There are plenty of sandbox escapes. No one says don't fix the sandbox on the theory that attackers will never get that far. Same here: don't discount bugs just because a sandbox exists.
nottorp 6 hours ago||
But doesn't this come from the company that said they had the "AI" write a compiler that could compile "linux", when in reality it couldn't compile a hello world?
Analemma_ 13 hours ago||
It's important to fix vulnerabilities even if they are blocked by the sandbox, because attackers stockpile partial 0-days in the hopes of using them in case a complementary exploit is found later. i.e. a sandbox escape doesn't help you on its own, but it's remotely possible someone was using one in combination with one of these fixed bugs and has now been thwarted. I consider this a straightforward success for security triage and fixing.
g947o 13 hours ago||
> Firefox was not selected at random. It was chosen because it is a widely deployed and deeply scrutinized open source project — an ideal proving ground for a new class of defensive tools.

What I was thinking was, "Chromium team is definitely not going to collaborate with us because they have Gemini, while Safari belongs to a company that operates in a notoriously secretive way when it comes to product development."

jeffbee 6 hours ago||
I would have started with Firefox, too. It is every bit as complex as Chromium, but as a project it has far fewer resources.
vorticalbox 13 hours ago||
It's just a different attack surface. For Safari, they would need to black-box attack the browser, which is much harder than what they did here.
rs_rs_rs_rs_rs 12 hours ago||
What? The JS engine in Safari is open source; they can put Claude to work on it any time they want.
runjake 10 hours ago|||
Here's a rough breakdown, formatted as best I can for HN:

  Safari (closed source)
   ├─ UI / tabs / preferences
   ├─ macOS / iOS integration
   └─ WebKit framework (open source) ~60%
        ├─ WebCore (HTML/CSS/DOM)
        ├─ JavaScriptCore (JS engine)
        └─ Web Inspector
hu3 11 hours ago||||
There's much more to a browser than the JS engine.

They picked the most open-source one.

SahAssar 10 hours ago||
WebKit is not open source?

Sure, there are closed-source parts of Safari, but I'd guess at least 90% of Safari's attack surface is in WebKit and its parts.

Normal_gaussian 10 hours ago||
In many cases, the difference between a bug and an attack vector lies in the closed-source areas.

This is going to be the case when automating attack detection against most programs where a portion is obscured.

rs_rs_rs_rs_rs 9 hours ago|||
>In many cases, the difference between a bug and an attack vector lies in the closed source areas.

You say many cases, let's see some examples in Safari.

dwaite 9 hours ago|||
However, Firefox also needs to use the closed source OS when running on Windows or macOS.

There are also WebKit-based Linux browsers, which obviously do not use closed-source OS interfaces.

My pessimistic guess on reasoning is that they suspected Firefox to have more tech debt.

g947o 10 hours ago|||
Apple is not the kind of company that typically does these things, even if the whole of Safari were open source.
est31 12 hours ago||
I suppose eventually we'll see something like Google's OSS-Fuzz for core open source projects, maybe replacing bug bounty programs a bit. Anthropic already hands out Claude access for free to OSS maintainers.

LLMs have made it harder to run bug bounty programs where anyone can submit stuff: a lot of people have flooded them with seemingly well-written but ultimately wrong reports.

On the other hand, the newest generation of these LLMs (in their top configuration) finally understands the problem domain well enough to identify legitimate issues.

I think a lot of judging of LLMs happens on the free and cheaper tiers, and quality on those tiers is indeed bad. If you set up a bug bounty program, you'll necessarily get bad-quality reports (as the cost of submission is usually zero).

On the other hand, if instead of a bug bounty program you have a "top-tier LLM bug searching program", then the quality bar can be ensured, and maintainers will get high-quality reports.

Maybe one can save bug bounty programs by requiring a fee to be paid, idk, or by using LLMs there, too.

mccr8 10 hours ago||
Google already has an AI-powered security vulnerability project, called Big Sleep. It has reported a number of issues to open source projects: https://issuetracker.google.com/savedsearches/7155917?pli=1
sigmar 12 hours ago|||
>where a lot of people flooded them with seemingly well-written but ultimately wrong reports.

Are there any projects to auto-verify submitted bug reports? Perhaps by spinning up a VM and then having an agent attempt to reproduce the bug report? That would be neat.
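A minimal sketch of that idea (all names hypothetical): run the submitted proof-of-concept in a throwaway container and only surface reports whose harness actually signals a crash:

  import subprocess

  def reproduces(poc_dir: str, timeout: int = 120) -> bool:
      # Run the PoC in an isolated, network-less container.
      result = subprocess.run(
          ["docker", "run", "--rm", "--network", "none",
           "-v", f"{poc_dir}:/poc:ro",
           "target-under-test:latest", "/poc/run.sh"],
          capture_output=True, timeout=timeout,
      )
      # Assumed convention: the harness exits nonzero only on a confirmed
      # crash or sanitizer report, so unreproducible slop filters itself out.
      return result.returncode != 0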

suddenlybananas 12 hours ago||
> Anthropic already hands out Claude access for free to OSS maintainers.

Free for 6 months after which it auto-renews if I recall correctly.

mceachen 11 hours ago||
No mention of auto renewal is made as far as I (and Claude) could determine.

Their OSS offer is first-hit-is-free.

tclancy 13 hours ago|
Part of that caught my eye. As yet another person who's built a half-assed system of AI agents running overnight doing stuff, one thing I've tasked Claude with (in addition to writing tests, etc.) is using formal verification, when possible, to verify solutions. It reads like that may be part of what Anthropic is doing.

And this is a good reminder for me to add a prompt saying property testing is preferred over straight unit tests, and maybe to create a prompt for fuzz-testing the code when we hit the Ready state.

devin 12 hours ago|
Can you give me an example (real or imagined) where you're dipping into a bit of light formal verification?

I don't think the problems I work on require the weight of formal verification, but I'm open to being wrong.

tclancy 12 hours ago||
To be clear, almost all (all?) of mine do not either. It's partially because I've been really interested in formal methods thanks to Hillel Wayne, but I don't seem to have the math background for them. To the man who has seen a fancy new hammer but cannot afford it, every problem looks like a nail.

The origin of it is a hypothesis that I can get better-quality code out of agents by making them do the things I don't (or don't always). So rather than quitting at ~80% code coverage, I ask it to get closer to 95%. There's a code-complexity gate that I require better grades on than I would for myself, because I didn't write this code, so I can't say "Eh, I know how it works inside and out". And I keep adding little bits like that.

I think the agents have only used it two or three times. The one that springs to mind is a site I am "working" on where you can only post once a day. In addition, there's an exponential backoff system for bans to fight griefers. If you look at them side by side, they're the same idea for different reasons - "User X should not be able to post again until [timestamp]" - and there's a set of a dozen or so formal-method proofs done in z3 to check the work, which can be referenced (I think? god, this all feels dumb and sloppy typed out) at checkpoints to ensure changes have not broken the promises.
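For a flavor of what one of those z3 checkpoints can look like, a minimal sketch of a "ban window never shrinks" invariant (names invented here, not the actual proofs):

  from z3 import Int, Solver, sat

  base, w_now, w_next = Int("base"), Int("w_now"), Int("w_next")

  s = Solver()
  s.add(base > 0, w_now >= base)  # current window is at least the base window
  s.add(w_next == 2 * w_now)      # exponential backoff: the window doubles
  s.add(w_next < w_now)           # ask z3 for a counterexample: window shrank

  assert s.check() != sat         # unsat: no such counterexample exists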

devin 7 hours ago||
I guess my feeling is that formal verification _even in the LLM era_ still feels heavy-handed/too expensive for too little value for a lot of the problems I'm working on.
tclancy 1 hour ago||
I guess I am trying to think laterally right now. There's a lot of attention given to crafting the right prompt to get what you need, but I am a belt-and-suspenders kind of guy, and my concern is that even if we get it right the first time, what guarantee do I have that I won't ask for a change a year from now without thinking through the implications, and it subtly breaks stuff? There's basically zero cost to me currently to require formal verification, as long as we don't count the oceans I am helping to boil.