(I don’t see what this being reported during the Christmas holidays has to do with not revealing the disclosure and patch timeline; a “note that delays should be attributed to Christmas” would have sufficed.)
This specific issue is fixed here: https://github.com/FFmpeg/FFmpeg/commit/4bfac71ecd96488dd2dc...
> Seeing as this has made the orange site, let it be known this person is a model security researcher.
> The issue was not in any FFmpeg release, and a report was sent three days after the new code was added to FFmpeg Git.
> There was no big CVE ADVISORY "MUH SECURITEH" "you need to fix this now or you will be hacked and the world will end" associated with the report.
""" BREAKING: AI FOUND VULNERABILITY IN FFMPEG!
After decades of human struggle, humans no longer call the shots.
Pwno decided to take the leap. They did not just find a vulnerability---they found a BOMBSHELL! What took developers weeks to write, AI analyzed in SECONDS! """
You basically cannot commit in public to the main branch and then audit and test everything in the three months before a release, because any error can be picked up immediately, will be publicized, and will go into the official statistics.
There are no "official" statistics. None of this matters. If we judged projects by the number of security holes they've had, then no one would be using ffmpeg, which has had hundreds of serious vulns.
Vulnerability research is useful insofar as the bad guys are using the same techniques (e.g., the same fuzzing tools), so any bugs you squash make it harder for others to attack you. If your enemy is a nation state, they might still pack your laptop / phone / pager with explosives, but the bar for that is higher than popping your phone with a 0-day.
Vulnerability research is demonstrably not useful for improving the security of the ecosystem over the long haul. That's where sandboxing, hardening, and good engineering hygiene come into play. If you're writing a browser or a video decoder in C/C++, you're going to have exploitable bugs.
IMHO, vulnerability research is the stick that drives the ecosystem towards all those things. Reports of vulnerabilities in the codec for Rebel Assault videos (or whatever) lead one to disable every codec other than the ones actually needed. Reports of vulnerabilities in playlist support lead one to disable playlist support where it's unnecessary and to run transcodes in a chroot sandbox with no network access (see the sketch below). Reports of buffer overflows lead one to prefer implementations in memory-safe languages, where available with sufficient performance, and again to sandbox when possible.
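For illustration, here's a minimal sketch of the chroot-plus-no-network idea, assuming Linux and root (chroot(2) and unshare(CLONE_NEWNET) need privileges); the jail path, the unprivileged uid/gid, and the ffmpeg arguments are all made up:

    /* Sketch: run a transcode in a chroot jail with no network access.
       Assumes Linux and root; /var/jail/transcode is a hypothetical
       directory pre-populated with ffmpeg, its libraries, and the input. */
    #define _GNU_SOURCE
    #include <sched.h>   /* unshare, CLONE_NEWNET */
    #include <stdio.h>   /* perror */
    #include <unistd.h>  /* chroot, chdir, setgid, setuid, execl */

    int main(void) {
        /* New, empty network namespace: only a downed loopback. */
        if (unshare(CLONE_NEWNET) != 0) { perror("unshare"); return 1; }

        /* Confine the filesystem view to the prepared jail. */
        if (chroot("/var/jail/transcode") != 0 || chdir("/") != 0) {
            perror("chroot"); return 1;
        }

        /* Drop privileges before parsing untrusted input. */
        if (setgid(65534) != 0 || setuid(65534) != 0) {
            perror("drop privs"); return 1;
        }

        execl("/bin/ffmpeg", "ffmpeg", "-i", "/in.mkv", "/out.mp4",
              (char *)NULL);
        perror("execl");  /* only reached if exec failed */
        return 1;
    }

A seccomp filter or a container runtime gets you the same result with less ceremony; the point is just that a decoder never needs network access, or filesystem access beyond its input and output.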
We all know that LLMs were used to find these vulnerabilities, specifically on high-impact projects. That's fine.
However, my only question is who actually provided the patch: the maintainers of FFmpeg? The LLM that was being used? Or the security researchers themselves, after finding the issue?
It seems that these two statements about the issue are in conflict:
> We found and patched 6 memory vulnerabilities in FFmpeg in two days.
> Dec, 2025: avcodec/exif maintainer provided patch.
1. https://code.ffmpeg.org/FFmpeg/FFmpeg/pulls/21258
2. https://code.ffmpeg.org/FFmpeg/FFmpeg/commit/4bfac71ecd96488...
Is Forgejo using LLM-assisted translations? Or simply someone without any of the context needed to understand the word's meaning?
---- EDIT:
I went on a fun detour to inform myself better, and ended up finding [1], where GitLab had the same discussion. It seems some translations have tried to use "confirmation" as a translation for a git commit.
But really, this is one of those cases where no local word can appropriately describe such a unique concept or idea. I'd love to retroactively chime in and confirm (hah) that the English word "commit" has transcended any translation attempts, and absolutely nobody would know what you're talking about if you said "confirmation" in an attempt to use a Spanish term.
So, Forgejo authors, if you read this: it'd be better to do as GitLab did.
How do we know that? You seem quite certain.
Paying people to write fuzzers by hand would yield a lot more, and cost less, than data centers and burning money, but who wants to pay people in 2026?
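To put a number on "by hand": a libFuzzer-style harness is often this small. decode_frame() below is a hypothetical stand-in for whatever parser you'd actually target; build with clang -fsanitize=fuzzer,address:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical parser under test; replace with the real decoder.
       This stub just touches the buffer so the sketch compiles standalone. */
    static int decode_frame(const uint8_t *buf, size_t len) {
        return (len > 0 && buf[0] == 0xFF) ? 0 : -1;
    }

    /* libFuzzer calls this once per generated input; crashes and
       sanitizer reports surface as findings. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        decode_frame(data, size);
        return 0;  /* nonzero return values are reserved; always return 0 */
    }

The expensive, paid-human part is the build integration, the seed corpus, and triaging what falls out, not the harness itself.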
If Google gave open source projects $100,000 per year for a competent QA person, it would cost less than this "AI" money bonfire and produce better results. Maybe the QA person would also find the 5 "AI"-detected bugs.