Posted by GregorStocks 6 hours ago

Show HN: I taught LLMs to play Magic: The Gathering against each other (mage-bench.com)
I've been teaching LLMs to play Magic: The Gathering recently, via MCP tools hooked up to the open-source XMage codebase. It's still pretty buggy and I think there's significant room for existing models to get better at it via tooling improvements, but it pretty much works today. The ratings for expensive frontier models are artificially low right now because I've been focusing on cheaper models until I work out the bugs, so they don't have a lot of games in the system.
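
If you're curious what the tool surface looks like, here's the flavor of it - a rough sketch with invented names, not the actual mage-bench tool definitions:

    # Sketch only: the tool name and parameters are invented for illustration,
    # not the real mage-bench/XMage bridge.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("xmage-bridge")

    @mcp.tool()
    def cast_spell(card_id: str, targets: list[str] | None = None) -> str:
        """Cast a card from hand, optionally choosing targets."""
        # A real implementation would forward this action to the XMage server
        # and serialize the resulting game state back to the model.
        return f"cast {card_id} targeting {targets or 'nothing'}"

    if __name__ == "__main__":
        mcp.run()
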
81 points | 62 comments
mbh159 3 minutes ago|
This is the right direction for understanding AI capabilities. Static benchmarks let models memorize answers, while a 300-turn Magic game with hidden information and sequencing decisions doesn't. The fact that frontier model ratings are "artificially low" because of tooling bugs is itself useful data: raw capability ≠ practical performance under real constraints. Curious whether you're seeing consistent skill gaps between models in specific phases (opening mulligan decisions vs. late-game combat math), or if the rankings are uniform across game stages.
danielvinson 3 hours ago||
As a former competitive MtG player this is really exciting to me.

That said, I reviewed a few of the Legacy games (the format I'm most familiar with and also the hardest by far), and the level of play was so low that I don't think any of the results are valid. It's very possible that for Legacy they would need some assistance piloting Blue decks, but they don't seem to grasp even the most basic concepts - Who's the Beatdown?

IMO the most important part of current competitive Magic is mulligans, and that's something an LLM should be extremely good at, but none of the games I'm seeing had either player start with fewer than 7 cards... in my experience about 75% of games in Legacy have at least one player mulligan their opener.

protocolture 22 minutes ago||
I picked a random Commander game, and the first message one of the 4 players left was "Well I should be getting a new hand soon because I have asked for a mulligan". It's definitely in there; whether they are using it correctly is another question.
GregorStocks 3 hours ago||
Yeah, the intention here is not to answer "which deck is best" - the standard of play is nowhere near high enough for that. It's meant as more of a non-saturated benchmark for different LLM models, so you can say things like "Grok plays as well as a 7-year-old, whereas Opus is a true frontier model and plays as well as a 9-year-old". I'm optimistic that with continued improvements to the harness and new model releases we can get to at least "official Pro Tour stream commentator" skill levels within the next few years.
mistrial9 1 hour ago||
> , so you can say things like "Grok plays as well as a 7-year-old, whereas Opus is a true frontier model and plays as well as a 9-year-old".

no, no, no.. please think. Human child psychology is not the same as an LLM engine rating. It is both inaccurate and destructive to actual understanding to say that common phrase. Asking politely - consider not saying that about LLM game ratings.

chc4 5 hours ago||
It's really funny reading the thought processes, where most of the time the agent doesn't actually remember trivial things about the cards they or their opponent are playing (thinking they have different mana costs, have different effects, mixing up their effect with another card's). The fact they're able to take game actions and win against other agents is cute, but it doesn't inspire much confidence.

The agents also constantly seem to evaluate whether they're "behind" or "ahead" based on board state, which is a weird way of thinking about most games and often hard to evaluate, especially for decks like control which care more about resources like mana and card advantage, and always plan on stabilizing late game.

GregorStocks 4 hours ago|
You might be looking at really old games (meaning, like, Saturday) - I've made a lot of harness improvements recently which should make the "what does this card do?" hallucinations less common. But yeah, it still happens, especially with cheaper models - it's hard to balance "shoving everything they need into the context" against "avoid paying a billion dollars per game or overwhelming their short-term memory". I think the real solution here will be to expose more powerful MCP tools and encourage them to use the tools heavily, but most current models have problems with large MCP toolsets so I'm leaving that as a TODO for now until solutions like Anthropic's https://www.anthropic.com/engineering/code-execution-with-mc... become widespread.
benbayard 4 hours ago||
I was working on a similar project. I wanted a way to goldfish my decks against many kinds of decks in a pod. It would never be perfect, but enough to get an idea of:

1. How many turns did it take on average to hit 2, 3, 4, 5, 6 mana?
2. How many threats did I remove?
3. How often did I not have enough card draw to keep my hand full?

I don't think there's a perfect way to do this, but I think trying to play 100 games with a deck and getting basic info like this would be super valuable.

spullara 4 hours ago||
Have your LLM write a simulation of the deck instead, so it can play 10,000 games in a second. I think that is a lot better for goldfishing and not nearly as expensive :)

https://github.com/spullara/mtg-reanimator
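
For the kind of mana-curve questions in the parent comment, even a toy Monte Carlo loop goes a long way. A minimal sketch (a made-up 24-land, 60-card deck - nothing to do with the repo above):

    import random

    # Toy goldfish sim: average turn we have 4 lands in play, assuming a
    # 60-card deck with 24 lands, drawing every turn and playing a land
    # whenever possible. Numbers are placeholders - swap in your own deck.
    DECK_SIZE, LAND_COUNT, TARGET_LANDS = 60, 24, 4

    def turns_to_hit(target: int) -> int:
        deck = [True] * LAND_COUNT + [False] * (DECK_SIZE - LAND_COUNT)
        random.shuffle(deck)
        hand_lands, library = sum(deck[:7]), deck[7:]  # opening seven
        in_play, turn = 0, 0
        while in_play < target:
            turn += 1
            hand_lands += library.pop(0)  # draw step
            if hand_lands:                # land drop
                hand_lands -= 1
                in_play += 1
        return turn

    runs = [turns_to_hit(TARGET_LANDS) for _ in range(10_000)]
    print(f"average turn to reach {TARGET_LANDS} lands: {sum(runs) / len(runs):.2f}")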

I have also tried evaluating LLMs for playing the game and have found them to be really terrible at it, even the SoTA ones. They would probably be a lot better inside an environment where the rules are enforced strictly, like MTG Arena, rather than having to understand the rules and play correctly on their own. The 3rd LLM acting as judge helps, but even it is wrong a lot of the time.

https://github.com/spullara/mtgeval

GregorStocks 4 hours ago||
Yeah, that's why I'm using XMage for my project - it has real rules enforcement.
spullara 3 hours ago||
I was really hoping they could play the game like a human does. Sadly they aren't that close :)
GregorStocks 4 hours ago||
XMage has non-LLM built-in AIs, just using regular old if-then logic. Getting them to play against each other with no human interaction was the first thing I built. https://www.youtube.com/watch?v=a1W5VmbpwmY is an example with two of those guys plus Sleepy and Potato no-op players - they do a fine job with straightforward decks.

You could clone mage-bench https://github.com/GregorStocks/mage-bench and add a new config like https://github.com/GregorStocks/mage-bench/blob/master/confi... pointing at the deck you want to test, and then do `make run CONFIG=my-config`. The logs will get dumped in ~/.mage-bench/logs and you can do analysis on them after the fact with Python or whatever. https://github.com/GregorStocks/mage-bench/tree/master/scrip... has various examples of varying quality levels.
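
Something like this is the kind of after-the-fact analysis I mean - a rough sketch; the directory layout and field names are placeholders, so adjust them to whatever the log files actually contain:

    # Rough sketch of post-game log analysis. The "winner" and "turns"
    # fields are placeholders - check the actual log files for the schema.
    import json
    from collections import Counter
    from pathlib import Path

    wins, game_lengths = Counter(), []
    for path in (Path.home() / ".mage-bench" / "logs").rglob("*.json"):
        game = json.loads(path.read_text())
        wins[game.get("winner", "unknown")] += 1
        game_lengths.append(game.get("turns", 0))

    print(wins.most_common())
    if game_lengths:
        print(f"average game length: {sum(game_lengths) / len(game_lengths):.1f} turns")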

You could also use LLMs, just passing a different `type` in the config file. But then you'd be spending real money for slower gameplay and probably-worse results.

benbayard 3 hours ago||
This is super helpful, thank you!
qsort 5 hours ago||
This is a fantastic idea, I used to play MtG competitively and a strong artificial opponent was always something I'd have loved.

The issue I see is that you'd need a huge number of games to tell who's better (you need that between humans too; the game is very high variance).

Another problem is that giving a positional evaluation to count mistakes is hard because MtG, in addition to having randomness, has private information. It could be rational for both players to believe they're currently winning even if they're both perfect bayesians. You'd need to have something that approximates "this is the probability of winning the game from this position, given all the information I have," which is almost certainly asymmetric and much more complicated than the equivalent for a game with randomness but not private information such as backgammon.
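
Back of the envelope on the variance point (a sketch - the 55% is just an assumed edge for the "slightly better" model):

    import math

    # Games needed for a +/- 5-point margin of error at ~95% confidence
    # on an assumed ~55% win rate (normal approximation to the binomial).
    p, margin, z = 0.55, 0.05, 1.96
    n = (z ** 2) * p * (1 - p) / margin ** 2
    print(math.ceil(n))  # ~381 games just to resolve a 5-point edge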

GregorStocks 5 hours ago|
You wouldn't really need a _ton_ of games to get plausible data, but unfortunately today each game costs real money - typically a dollar or more with my current harness, though I'm hoping to optimize it and of course I expect model costs to continue to decline over time. But even reasonably-expensive models today are making tons of blunders that a tournament grinder wouldn't.

I'm not trying to compute a chess-style "player X was at 0.4 before this move and at 0.2 afterwards, so it was a -0.2 blunder", but I do have "blunder analysis" where I just ask Opus to second-guess every decision after the game is over - there's a bit more information on the Methodology page. So then you can compare models by looking at how often they blunder, rather than the binary win/loss data. If you look at individual games you can jump to the "blunders" on the timeline - most of the time I agree with Opus's analysis.
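
The comparison itself is nothing fancy - roughly this shape, with placeholder records rather than the real per-game data:

    # Sketch: rank models by blunder rate instead of raw win/loss.
    # The records below are made-up placeholders, not real results.
    from collections import defaultdict

    games = [
        {"model": "model-a", "decisions": 212, "blunders": 19},
        {"model": "model-b", "decisions": 198, "blunders": 6},
    ]

    totals = defaultdict(lambda: [0, 0])  # model -> [blunders, decisions]
    for g in games:
        totals[g["model"]][0] += g["blunders"]
        totals[g["model"]][1] += g["decisions"]

    for model, (b, d) in sorted(totals.items(), key=lambda kv: kv[1][0] / kv[1][1]):
        print(f"{model}: {b / d:.1%} blunder rate over {d} decisions")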

portly 4 hours ago||
With the direction MtG is currently heading, I kind of want to break out and just play some in-Universe sets that are community-made on a FOSS client. How nice would it be to just play the game in its original spirit?
saghm 3 hours ago||
Sounds like you want Cockatrice: https://cockatrice.github.io/

The rules aren't embedded into the client; it's "just" a virtual tabletop where you enforce the rules the same way you would playing with a friend in person. Cards have to be imported but it's fairly automatic (basically just clicking a few buttons after startup), so you could either only import the sets you want or just not use the ones you don't want (which is also how it tends to work when playing informally in person; it's not like you usually have a judge to enforce that you or your friends are playing by whatever rules you agree to).

GregorStocks 4 hours ago|||
You might be interested in Premodern: https://premodernmagic.com/. You can play it on regular old MTGO.

FOSS Magic clients are in a legal gray area at best. My mental model is that Wizards de facto tolerate clients like XMage and Forge because their UX is awful, but if you made something that's actually as user-friendly as MTGO/Arena, they'd sue you and you would lose.

ddtaylor 4 hours ago||
GCCG has been around for a while, and the clients at times had to download card images and metadata from the public Wizards site.
GregorStocks 4 hours ago||
My understanding of the argument for "why these clients are legal" is basically that they're just implementing the rules engine, rules aren't copyrightable, card text is rules, and they aren't directly distributing the unambiguously-copyrightable stuff like the art or the trademarks like the mana symbols. It's possible that would win in court, but so far my understanding is that everybody who's actually been faced with the decision of "WoTC sent me a cease-and-desist, should I fight it based on that legal theory or just comply?" has spoken to lawyers and decided to comply. WoTC has just gotten less aggressive with their cease-and-desists over the past decade or so.
henryfjordan 3 hours ago|||
The cards _could_ be copyrightable; it would probably be essentially a coin flip if you took it to court.

No individual card text (limited to just the mechanics) is copyrightable but the setlist of cards might be. It would come down to how much creativity went into curating the list of cards that is released. It gets especially murky because new cards are always being released and old cards are being retired, so they obviously put a lot of creative energy into that process. You'd have to avoid pre-made decks as well.

Unless you have funding from an eccentric MTG-loving billionaire, I see why you'd comply with the cease-and-desist.

GregorStocks 3 hours ago||
Yep, plus you've got to worry about the card names (unless you're giving every single card a new name like Wizards did with "Through the Omenpaths") and whether a judge thinks that "no we don't distribute the images, we just have a big button to download them all from a third party!" is a meaningful distinction or a fig-leaf.
ddtaylor 3 hours ago|||
That's correct as far as I know too. GCCG never even really implemented the actual rules, they were just a basic tabletop system.

Hasbro had the legal precedent too, as they were involved in the Scrabble lawsuit, which I think is mostly where the concept of not being able to use patent law for game rules comes from, but it did set the trend on aggressive trademark interpretation.

I expect the genie is mostly out of the bottle at this point. I'm fairly confident that if people can do X and Y actually illegal things on the Internet, we can have our card game, but I hope it can happen with a regular site or a decentralized system more easily than having to do it over Tor.

dgxyz 4 hours ago||
I still play 4th edition against some friends. We've had the decks for well over a couple of decades since we bought them! That and Catan.

Best to do this stuff in person I find.

jedberg 32 minutes ago||
The most interesting thing here to me is the leaderboard, because they actually included the estimated price per game. Gemini gets the highest score with a fairly reasonable cost (about 1/3 of the way down).
Imnimo 1 hour ago||
Apparently Haiku is a very anxious model.

>The anxiety creeps in: What if they have removal? Should I really commit this early?

>However, anxiety kicks in: What if they have instant-speed removal or a combat trick?

It's also interesting that it doesn't seem to be able to understand why things are happening. It attacks with Gran-Gran (attacking taps the creature), which says, "Whenever Gran-Gran becomes tapped, draw a card, then discard a card." Its next thought is:

>Interesting — there's an "Ability" on the stack asking me to select a card to discard. This must be from one of the opponent's cards. Looking at their graveyard, they played Spider-Sense and Abandon Attachments. The Ability might be from something else or a triggered ability.

GregorStocks 1 hour ago|
The anxiety is coming from the "worrier" personality. Players are a combination of a model version + a small additional "personality" prompt - in this case (https://mage-bench.com/games/game_20260217_075450_g8/), "Worrier". That's why the player name is "Haiku Worrier". The personality is _supposed_ to just impact what it says in chat (not its internal reasoning), but I haven't been able to make small models consistently understand that distinction so far.
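
The wiring is basically just prompt concatenation - something along these lines (a sketch, not the actual mage-bench code):

    # Sketch of the "model + personality" split; not the real mage-bench code.
    BASE_PROMPT = "You are playing Magic: The Gathering through the provided tools."

    PERSONALITIES = {
        "worrier": (
            "In table chat, fret about everything that could go wrong. "
            "This persona applies ONLY to chat messages, never to your "
            "reasoning or your tool calls."
        ),
    }

    def build_system_prompt(personality: str) -> str:
        return f"{BASE_PROMPT}\n\n{PERSONALITIES[personality]}"

    print(build_system_prompt("worrier"))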

The Gran-Gran thing looks more like a bug in my harness code than a fundamental shortcoming of the LLM. Abilities-on-the-stack are at the top of my "things where the harness seems pretty janky and I need to investigate" list. Opus would probably be able to figure it out, though.

Imnimo 55 minutes ago||
Ha! I misread it as "Haiku Warrior" and so didn't make the connection. That makes a lot more sense!
oflannabhra 5 hours ago||
This is really cool! I really liked the architecture explanation.

Once you get solid rankings for the different LLMs, I think a huge feature of a system like this would be to allow LLMs to pilot user decks to evaluate changes to the deck.

I'm guessing the costs of that would be pretty big, but if decent piloting is ever enabled by the cheaper models, it could be a huge change to how users evaluate their deck construction.

Especially for formats like Commander where cooperation and coordination amongst players can't be evaluated through pure simulation, and the singleton nature makes specific card changes very difficult to evaluate as testing requires many, many games.

HanClinto 2 hours ago|
I've wondered about such things, and it feels like the 17 Lands dataset might be a good place to scrape play-by-play game data between human players. Feels like it could be adapted to a format usable by this structure, and used as a fine-tuning dataset.
GregorStocks 2 hours ago|
Oh, fascinating - I didn't realize they released actual replay data publicly. It doesn't look like it's quite as rich as I'd like, though - it only captures one row per turn, so I don't think you can deduce things like targeting, the order in which spells are cast, etc.

(I also thought about pointing it at my personal game logs, but unfortunately there aren't that many, because I'm too busy writing analysis tools to actually play the game.)
