Perhaps, or maybe they just got tired of students coming in and claiming that their program worked perfectly on such-and-such compiler.[1] It looks like tcc would run on most systems from the time of its introduction, and perhaps some that are a great deal older. When I took a few computer science courses, they were much more restrictive. All code had to be compiled with a particular compiler on their computers, and tested on their computers. They said it was to prevent cheating but, given how trivial it would have been to cheat with their setup, I suspect it had more to do with shutting down arguments with students who came in to argue over grades.
[1] I was a TA in the physical sciences for a few years. Some students would try to argue anything for a grade, and would persist if you let them.
When I taught programming (I started teaching 22 years ago), the course still had students either use GCC on their university shell accounts or, if they were Windows people, Borland C++, which we could provide under some kind of fair-use arrangement (IIANM) and which ran in a command shell on Windows.
I used it just the other day to do some tests. No dependencies, no fiddling around with libwhatever-1.0.dll or anything like that on Windows, and so on.
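For example, a throwaway test like this (hello.c is just a hypothetical name) runs with nothing but the tcc binary itself:

    /* hello.c -- compile and run in one step with: tcc -run hello.c */
    #include <stdio.h>

    int main(void)
    {
        printf("hello from tcc\n");
        return 0;
    }

The -run flag compiles in memory and executes immediately, so there's no separate link step and no runtime DLLs to hunt down.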
Sad but not surprised to see it's no longer maintained (the last release was 8 years ago!).
Even in the era of terabyte NVMe drives, my eyes water when I install MSVC (and that's usually just for the linker!)
Debian, Fedora, Arch, and others pull their packages from the mob repo, and they're pretty good at pulling in CVE fixes almost immediately.
Thomas Preud'homme is the new lead maintainer, though development still follows the mob approach.
https://lists.nongnu.org/archive/html/tinycc-devel/2026-02/t...
https://arstechnica.com/ai/2026/02/sixteen-claude-ai-agents-...
> The $20,000 experiment compiled a Linux kernel but needed deep human management.
We tasked Opus 4.6 using agent teams to build a C Compiler | Hacker News
Except it was written in a completely different language (Rust), which likely necessitated a completely different architecture, and nobody has established any relationship, algorithmic or otherwise, between that compiler and TCC. Additionally, Anthropic's compiler supports x86_64 (partially), ARM, and RISC-V, whereas TCC supports x86, x86_64, and ARM. And TCC is only known to be able to boot a modified version of the Linux 2.4 kernel[1], not an unmodified Linux 6.9.
Additionally, it is extremely unlikely that a model could regurgitate this many tokens of anything, let alone translated into another language, and especially without being prompted with the starting tokens to specifically direct that regurgitation.
So, whatever you want to say about the general idea that all model output is plagiarism of patterns it has already seen, it seems pretty clear to me that this does not fit the hyperbolic description put forward in the parent comments.