Posted by datavorous_ 11 hours ago
I have been a great fan of the demoscene (the computer art subculture) since middle school, so this was a ritual I had to perform.
To estimate the Elo, I ran 240 automated games against Stockfish at Elo levels 1320 to 1600, at a fixed depth of 5 and under some constrained rules, with an equal color distribution.
I then converted the pooled win/draw/loss score to an Elo estimate with the standard logistic formula, with a binomial 95% confidence interval.
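For concreteness, here is a minimal sketch of that conversion in C. The anchor rating and the win/draw/loss split below are made-up illustrative numbers, not the actual results, and the 95% interval uses a plain normal approximation to the binomial on the pooled score before mapping through the logistic.

```c
/* Sketch: pooled score -> Elo via the logistic model.
 * All concrete numbers here are hypothetical, for illustration only. */
#include <math.h>
#include <stdio.h>

/* Elo difference implied by an expected score s:
 * E = 1 / (1 + 10^(-D/400))  =>  D = -400 * log10(1/s - 1) */
static double elo_diff(double s) {
    return -400.0 * log10(1.0 / s - 1.0);
}

int main(void) {
    double anchor = 1460.0;               /* hypothetical mean opponent Elo */
    int wins = 36, draws = 16, losses = 188;
    int n = wins + draws + losses;        /* 240 games */
    double s  = (wins + 0.5 * draws) / n; /* pooled score */
    double se = sqrt(s * (1.0 - s) / n);  /* binomial standard error */

    printf("Elo estimate: %.0f (95%% CI %.0f..%.0f)\n",
           anchor + elo_diff(s),
           anchor + elo_diff(s - 1.96 * se),
           anchor + elo_diff(s + 1.96 * se));
    return 0;
}
```

With these hypothetical numbers it prints an estimate around 1200 with a CI of roughly 1140 to 1250.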
(Partial answer: 2kB is a very small fraction of what we'd like to think counts as human.)
> 2kB is a very small fraction of what we'd like to think counts as human
This doesn't seem to mean anything. Why would 2kB have any relation to "counting as human"? It's the data of about ten comments.
P.S. I assume 1200 Elo on the chess.com scale (not lichess / FIDE Elo), and the bullet time control?
These things always make me think back to Westworld season 2, where the finale revealed that human minds are much simpler than they themselves believe and fit completely into an algorithm that could be printed in an average book.
So while 4K is still very impressive for the codebase, it comes with a significantly larger runtime footprint.
According to TCEC, the time control is 30 minutes plus a 3-second increment; that's a lot of compute!
> It's wild to think that 4096 bytes are sufficient to play chess on a level beyond anything humans ever achieved.
Now if you had a very good chess program running in very constrained (dynamic/RAM) memory, that would be somewhat more revealing. From a cursory search there's an 1800 Elo engine for the C64, which seems very impressive but is still very far from the best human players.
I'd be interested to see a curve of Elo vs. available RAM for the best chess engines (i.e. the best achievable up to a given RAM budget), and how that compares to other games and activities.
On RAM vs ROM (program size): at a high level, dynamic memory helps you keep track of search paths in a large tree search, saving you some computation. Program size tends to buy a more effective search heuristic, as well as precomputed knowledge, e.g. optimal opening and endgame moves (potentially saving arbitrarily much compute).

I like thinking about these things because the search paradigm is pretty informative about computation (and even intelligence) in general. Almost every problem is basically some kind of heuristic search in some kind of space, and you tend to get better at things by refining your heuristics (usually through some experimental training process or theoretical insight), considering more options, exploring deeper consequences, and so on.
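As a concrete illustration of the RAM-for-compute point, here is a rough C sketch of a transposition table inside an alpha-beta search. Everything engine-specific (hash_position, move generation, evaluation) is a hypothetical stub; only the caching pattern is the point, and real engines also record whether a stored score is exact or just a bound.

```c
/* Sketch: trading RAM for compute with a transposition table.
 * A bigger table (more dynamic memory) means fewer re-searches
 * of positions already seen. Engine specifics are stubbed out. */
#include <stdint.h>

#define TT_SIZE (1 << 16)   /* more RAM => larger table => more cache hits */

typedef struct { uint64_t key; int depth, score; } TTEntry;
static TTEntry tt[TT_SIZE];

/* Hypothetical placeholder stubs for the actual engine. */
static uint64_t hash_position(void) { return 0; }
static int      evaluate(void)      { return 0; }
static int      num_moves(void)     { return 0; }
static void     make_move(int i)    { (void)i; }
static void     unmake_move(int i)  { (void)i; }

static int search(int depth, int alpha, int beta) {
    uint64_t key = hash_position();
    TTEntry *e = &tt[key & (TT_SIZE - 1)];
    if (e->key == key && e->depth >= depth)
        return e->score;             /* cache hit: whole subtree skipped */

    if (depth == 0)
        return evaluate();

    int best = -30000;
    for (int i = 0; i < num_moves(); i++) {
        make_move(i);
        int score = -search(depth - 1, -beta, -alpha);
        unmake_move(i);
        if (score > best) best = score;
        if (best > alpha) alpha = best;
        if (alpha >= beta) break;    /* beta cutoff */
    }
    e->key = key; e->depth = depth; e->score = best;
    return best;
}
```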
I think what really defines humans isn't our ability to solve problems or play chess well (although that's extremely useful, and enjoyable most of the time); it's our emotions and inner world. We are not really Thinking Machines in essence; most significantly, we're Feeling Machines. The thinking part is a neat instrumental add-on :) We can delegate thinking to machines, but what we cannot extinguish is feeling, the human "soul", because that is the source of all meaning.
It is hideous now!
Your code is useless to anyone who wants to contribute, or to make something better by improving on the idea.
The PR:
The gap between your 1200 Elo in 2KB and the TCEC 4K engines at ~3000 Elo is interesting. That extra 2KB buys a lot when it goes to better evaluation and move ordering. Even a simple captures-first sort in alpha-beta pruning costs only a few bytes of code but can roughly double your effective search depth.
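For illustration, a captures-first ordering can be as small as a single partition pass over the move list before the search loop. The Move layout and is_capture check here are hypothetical, but the shape is what the "few bytes of code" claim refers to: trying captures first triggers beta cutoffs earlier, which is where the effective-depth gain comes from.

```c
/* Sketch: "captures first" move ordering for alpha-beta.
 * Move representation and is_capture() are hypothetical. */
typedef struct { int from, to, captured; } Move;

static int is_capture(const Move *m) { return m->captured != 0; }

/* One pass that swaps captures to the front of the list. */
static void captures_first(Move *moves, int n) {
    int front = 0;
    for (int i = 0; i < n; i++) {
        if (is_capture(&moves[i])) {
            Move tmp = moves[front];
            moves[front] = moves[i];
            moves[i] = tmp;
            front++;
        }
    }
}
```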