
Posted by frays 6 hours ago

Frontier AI has broken the open CTF format(kabir.au)
177 points | 148 comments
tromp 5 hours ago|
https://en.wikipedia.org/wiki/Capture_the_flag_(cybersecurit...

still has no mention of AI, but that will likely change as AI increasingly dominates competitions.

amingilani 5 hours ago||
I don’t think CTFs are dead, they’ll just evolve. The difficulty level will need to be increased or the rules locked down. Just like sports and racing persist despite the existence of performance-enhancing drugs and rocket technology.

I just did a CTF where I was in the top 10. It was the first CTF I completed and I used AI because the rules permitted it. That said, I couldn’t solve all challenges.

But yes, it was significantly easier now than I last attempted one. Even manually solving with AI assisted assembly interpretation was much easier.

mort96 4 hours ago||
Increasing the difficulty level is a terrible solution. The problem with CTFs isn't that they're too easy. Making them harder just makes them even less accessible to people who don't cheat. It'd be like catching people hiding electric motors in their bikes during the Tour de France and concluding, "oh, we just need longer distances and steeper hills".
StrauXX 2 hours ago||
LLMs don't tend to help much when solving challenges beyond their skill level. Either they one-shot a challenge, or they are almost useless as a companion for it.
Retr0id 3 hours ago||
That doesn't work. The thing that made CTFs fun is the fact that the challenges are solvable in a short-ish timeframe, usually a day at most, if you have the requisite skills and talent.
xiphias2 3 hours ago||
"a beginner is pushed toward using AI before they have built the instincts the AI is replacing. That is an anti-pattern."

The same article talks about CTF skills as a way to learn about security best practices and separately a sport.

In reality it was all about learning an extremely important skillset (securing/attacking software and systems) that is getting automated.

The real thing the author seems to be frustrated about is that AGI is arriving in computationally verifiable domains first, and a large part of his skillset has been taken over.

kevinsimper 5 hours ago||
You could make it offline and with provided laptops only, just like with the competitive CS2 scene.
sheept 4 hours ago||
Offline CTFs could also incorporate physical security challenges, like lockpicking
tylerchilds 4 hours ago|||
I do like the idea of escape the room games becoming the cybersecurity employable competition meta
Retr0id 3 hours ago|||
They often do
hsbauauvhabzb 4 hours ago|||
CTFs need preparation and unconstrained internet; even if you block domains, it’s possible to tunnel out
Retr0id 3 hours ago|||
Unconstrained internet is nice, but I don't think it's a hard requirement. Just tricky to enforce, even in-person.
StrauXX 2 hours ago||
It is a hard requirement. Once you reach higher levels of challenges you spend most of your time reading through RFCs, web specs, GitHub issues, mailing lists, papers, random bugtrackers, and library/framework code. There is no way to create a whitelist for that. Besides, a firewall won't stop good hackers.
Retr0id 2 hours ago||
Normal CTF workflows can involve a lot of research but that's not the point. You can design self-contained challenges with offline solving in mind, and bundle any truly necessary docs/src/etc. with the challenge download.
sheept 4 hours ago||||
Presumably if you block domains, you wouldn't be able to use AI to find a way around the block. So doing so demonstrates at least some human skill
hsbauauvhabzb 4 hours ago||
Or forethought, I’m sure you could ask an AI how to circumvent any blocks.
belabartok39 4 hours ago|||
Use jumpbox to access CTF. Disable all wireless for the playing hall.
hsbauauvhabzb 4 hours ago||
I think you’re forgetting hotspots, or laptops with inbuilt 4/5g
swiftcoder 2 hours ago||
Faraday cages exist. Finally a use for all those damn SCIFs tech companies were building in the late 2010s...
eastbound 4 hours ago||
Since real-life situations involve AI, banning AI would make CTFs just a simple game, not a demonstration of capabilities and talent.
mort96 4 hours ago|||
What do you mean? Solving a CTF challenge demonstrates way more capabilities and talent than just asking a chat bot to solve a CTF challenge.
loeg 4 hours ago|||
They always were just a game?
copx 3 hours ago||
>If adaptation means accepting that the scoreboard is now an AI orchestration benchmark, then we should say that honestly instead of pretending the old competition still exists.

This is like someone complaining that making machine parts has been ruined: Skillful craftsmen used to make them by hand using manual tools!

Nowadays the CAD/CAM/CNC cheaters have almost completely automated the whole thing. How is the next generation of craftsmen going to learn how to craft a gear by hand when the process of gear making has been reduced to pressing start on a CNC machine?!

See what I mean? Sorry, I think this article is just Luddite. I can empathize with the pain of your beloved craft basically being rendered obsolete by new technology, but the process can neither be stopped nor is it bad in general.

The manual skills you trained with CTF puzzles are now simply no longer relevant. (Field-specific) "AI orchestration" is the new cybersecurity skill if LLMs really have become so good at this, and what the author used to do manually then has the same value as being able to craft a gear by hand.

raddan 2 hours ago|
The way I read the post is that the author is disappointed that the community is gone. The CTF was just a reason for a number of like-minded people to organize around an activity.

Indeed, in the real world, plenty of people organize to do formerly-skillful tasks together. I have not personally crafted a gear by hand, but I have built a house in a long-abandoned style with a group of people only using hand tools.

There _is_ a danger that society forgets how to do these things. During that house-building exercise, there were many tricks of the trade that, while likely documented somewhere in a book, would have been difficult to reproduce without seeing a demonstration. From the standpoint of “does it matter?” it depends on what you care about. We absolutely do not need cruck-framed houses with scribed joints. Modern construction is faster and cheaper and lasts long enough. But it would sadden me greatly if practices like this faded from memory, because it’s one of those things that makes you gasp “wow!” when you see it. And your appreciation only deepens when you try it yourself.

raphman 4 hours ago||
Interesting and well written article that mirrors/foreshadows how LLMs do and will change other scenes.

As I don't know much about the CTF scene, I looked for other takes on this topic.

Here's an article from 2015 about how tool-assistance already changed CTFs:

> Individual skill will undoubtedly be a factor next year. But, I'm left wondering whether next year's DEFCON CTF will tell us anything more than how well-developed each team's tools are (and how well they can interpret the results).

https://fuzyll.com/2015/ctf-is-dead-long-live-ctf/

But there are quite a few recent (2026) articles with the same core message as in the original article, e.g., https://blog.includesecurity.com/2026/04/ctfs-in-the-ai-era/ or https://k3ng.xyz/blog/ctf-is-dead

And here's someone explaining how Claude Max allowed them to win CTFs:

> I had always been interested in CTF as one of the only ways people could compete and show off their skill in coding/problem solving on a global scale. It was just too difficult and didn't make sense for me to learn the fundamentals as an electrical engineer. As time went on, I got better and better, and it was hard to tell whether it was because of experience or if it was because of improvements in AI.

> I accomplished my goals, and for that reason I'm quitting CTF, at least for now. [...] I'd like to think I highlighted the problem before it became a bigger issue. So, how do we fix this? Teams and challenge authors losing motivation is not good. CTF dying is not good. AI bad. Or is it?

https://blog.krauq.com/post/ctf-is-dying-because-of-ai

The only article that saw LLMs as a non-negative force for CTFs was this one. Fittingly, it sounds like LLM output ("Let's be honest", "This is where things get interesting.") and only contains hallucinated references.

https://caverav.cl/posts/ctfs-not-dead/ctfs-not-dead/

lokrian 3 hours ago||
Is AI also superior to humans at black box challenges and attacking actual targets on the internet? That seems like a really important question.
Avamander 3 hours ago|
No, the search space is much more vast and the feedback loop almost nonexistent.

The reason LLMs can do CTFs so well is partially because the challenges are usually designed to avoid wasting time and to introduce a single concept without noise.

motbus3 4 hours ago||
I think there will soon be ways to trick these models, and I think when that happens it will become yet another defensive layer, like ASLR.

These models seem completely unbeatable only in the ads. There are hundreds of examples where someone phrases a prompt as Hindi Yoda-talk in Morse code and the model goes nuts. The reason the vendors are pushing so hard on PR and marketing is that they know it is only a matter of time.

Avamander 2 hours ago|
The more you obfuscate a topic against LLMs the lower the educational value of a challenge.

The only things that work are novelty and obscurity. LLMs still suck with things mentioned in the footnotes of datasheets and manuals, things that deviate in subtle ways, unique constructions that alter something very very common. It's hard for LLMs to avoid common pitfalls in terms of making assumptions, while staying on track.

jimnotgym 3 hours ago||
You can still do competitions. But you'll all need to fly to the same place and work on laptops with a fresh install of Linux. 1 hour to install tooling then Internet off, challenge revealed.

Not as easy logistically...

SoylentOrange 4 hours ago|
Great article, well written, and good analogy to chess. I’ve been playing competitive chess most of my adult life and I think that the solution lies in how chess dealt with this problem:

Explicit ELO measurements with some cheating detection. AI assistance wholly banned. As you climb the ELO ladder, detection gets more onerous. At the top level during online events, anti-cheating teams require the use of both monitoring software and multiple cameras.

Idea is that you can cheat pretty easily at the lowest levels but it gets less easy the higher you go. This allows for better feeding into the truly elite competitions.
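For readers unfamiliar with the rating system the parent describes, the standard Elo update is simple to state. A minimal sketch in Python (the K-factor of 32 is a common but arbitrary choice; real federations vary it by rating and game count):

```python
def elo_update(r_a, r_b, score_a, k=32):
    """Return new ratings for players A and B after one game.

    score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss.
    """
    # Expected score for A under the logistic Elo model:
    # a 400-point gap corresponds to 10:1 expected odds.
    e_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1 - score_a) - (1 - e_a))
    return new_a, new_b
```

With equal K-factors the update is zero-sum, which is what makes rating inflation (and cheating) statistically detectable over many games: a player consistently outperforming their expected score stands out.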

I think chess’s very firm stance that AI is never allowed in competition (neither online nor in person), rather than CTF’s acceptance, was the right call.

salt4034 52 minutes ago|
Yes, chess has been dealing with AI for decades at this point, and it's amusing/frustrating that so many other communities are deciding to re-discover everything from scratch, rather than just learn from the chess experience.

If CTF is a player-vs-player event, then AI should just be banned outright, otherwise it will devolve into AI-vs-AI, which is just not an interesting competition format, as we learned in chess. Compared to FIDE top events (which ban AI), only a tiny niche audience actually watches the Top Chess Engine Championship (AI-centered). It turns out what we care about is not whether chess can be solved by any means available, but what the limits of the human mind are in learning chess.

Pretty much all chess coaches/educators also warn against relying heavily on AI during learning; engines only give you an illusion of understanding.
