If you've played Wordle, you might have solved the game in a minute once before as well. And if you've played a bunch, you've perhaps also taken an entire day to solve one.
So why is it that today’s puzzle was so intuitive, but next month’s new puzzle shared here could be impossible? I'd like a more satisfying explanation than luck and the obvious “different things are different” (even though… yeah, different things are different).
Without a big jump, we're just going to boil the frog (ourselves).
I don't know if this is how we want to measure AGI.
In general, I believe we should probably stop this pursuit of human-equivalent intelligence that encourages people to think of these models as human replacements. LLMs are clearly good at a lot of things; let's focus on how we can augment and empower the existing workforce.
That is a nice sentiment, but it's not what the AI companies are out to do; they want your job.
Surprised at the comments here re: not figuring it out. Simple game. Super annoying though lmao.
Maybe the internet will briefly go back to a place mainly populated with outliers.