Posted by kqr 1 day ago
It's fine, of course, to be for/against/etc. and have whatever view you have. Just please engage with the specific article. It will make for a less repetitive and (therefore) more interesting thread.
edit: I'm also going back to my Bayesian theory days and would be super interested to see a deep dive into whether these markets are rationally updating their beliefs over time. My recollection is vague here, but I recall that non-transitive belief loops can lead to Dutch books (so, say, Johnny Punter thinks Trump would win an election against Biden, Biden would win against Ross Perot, and Ross Perot would win against Trump). I'd like to know whether these kinds of issues are showing up in these markets.
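A Dutch book is easy to illustrate in code. A minimal sketch, with entirely made-up prices: if contracts on a set of mutually exclusive, exhaustive outcomes can all be bought for less than $1 total, buying one share of each locks in a risk-free profit regardless of which outcome occurs.

```python
# Hypothetical check: contracts pay $1 if their outcome happens. If the
# prices of mutually exclusive, exhaustive outcomes sum to less than $1,
# buying one of each guarantees a profit (a Dutch book against the market).
def dutch_book_profit(prices):
    """prices: dollar cost of each $1-payout contract."""
    total = sum(prices)
    # Exactly one outcome pays out $1, so guaranteed profit is 1 - total cost.
    return round(1.0 - total, 6) if total < 1.0 else 0.0

# Illustrative (made-up) prices for a three-way election market:
print(dutch_book_profit([0.40, 0.35, 0.20]))  # 0.05 guaranteed per set
print(dutch_book_profit([0.50, 0.60]))        # 0.0 -- no arbitrage here
```

Whether incoherent pricing like this actually persists long enough to exploit on Kalshi or Polymarket is exactly the empirical question.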
The right test of this is to take the _same_ markets that run for 90+ days, and check accuracy 90 days out vs 30 days out. I've done this on other prediction market datasets, though not on Kalshi and Polymarket, and found that forecasts are in fact more accurate 30 days out.
I agree that if they weren't, that would be incredibly suspicious!
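The horizon test described above can be sketched with a Brier score, which penalizes the squared gap between a forecast probability and the 0/1 outcome. All the market data below is made up for illustration; the point is only the comparison of the same markets at two horizons.

```python
# Compare forecast accuracy 90 days out vs 30 days out for the SAME markets
# using the Brier score (lower is better). Data here is entirely fabricated.
def brier(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# (price_90d_out, price_30d_out, resolved_outcome) for hypothetical markets
data = [(0.60, 0.75, 1), (0.40, 0.20, 0), (0.55, 0.85, 1), (0.30, 0.10, 0)]
p90 = [d[0] for d in data]
p30 = [d[1] for d in data]
y = [d[2] for d in data]

print(brier(p90, y), brier(p30, y))  # expect the 30-day score to be lower
```

If the 30-day Brier score were *not* lower than the 90-day one on real data, that would indeed be incredibly suspicious.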
Studying prediction markets is one of my current research areas. In my latest paper (preprint at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6443103), we find that on Polymarket, markets are, on average, quite accurate and unbiased. We saw a similar non-relationship between trade volume and accuracy past a certain threshold.
The foundational idea of prediction markets is that this payment comes from the market itself. If you have a market full of suckers, experts with real knowledge are incentivized to participate in order to profit from the gap between the market's forecast and their own. This in turn drives the market's prediction to be more accurate by incorporating the expert knowledge directly in the form of their "bets". In effect, the market says "put up or shut up" to everyone who thinks they know better than the market.
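The incentive is just the gap between belief and price. A minimal sketch, with hypothetical numbers: a $1-payout contract bought at price p has expected value q - p to a trader who believes the true probability is q.

```python
# Expected profit per $1-payout contract for a trader who believes the true
# probability is `true_prob` while the market prices it at `market_price`.
# Ignores fees, slippage, and the trader's price impact.
def expected_profit_per_contract(true_prob, market_price):
    return round(true_prob - market_price, 6)

# Hypothetical: market says 30%, the expert believes 45%.
print(expected_profit_per_contract(0.45, 0.30))  # 0.15 expected per contract
```

Buying pushes the price up toward the expert's belief, which is the mechanism by which "put up or shut up" makes the market more accurate.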
Viewed like this, prediction markets aren't much different from stock markets that also work by the premise of (active) investors claiming to know better than the market. It all follows from the efficient-market hypothesis.
All this said, the trend of prediction markets being used to predict what are effectively random outcomes seems a bit strange to me. On the other hand, the mere existence of such markets does provide a financial incentive to come up with new, better ways to predict those outcomes. That would itself be very useful, at least as long as the subject is more akin to predicting the weather than the movement of a football.
Ultimately, I'm still an optimist when it comes to prediction markets.
Some summaries, like those on some prediction markets, have objective accuracy that is much better than chance.
Obviously they are based on current knowledge. Nobody has any actual crystal ball.
But the outcomes are with regard to future events. So the correct term is predictions.
And they don't "just summarize the current knowledge". The whole point is that they better reflect the knowledge of people who presumably know better because they are willing to put their money where their mouth is, and ignore the vast majority of nonsense. That's not summarization. That's judgment. That's the whole point.
Put another way there needs to be SOME signal buried in all the noise.
What counts as "little predictive ability"? Do weather forecasts count as "predictions", or are they "indicators" too? Sure, they might have a more consistent track record, but then again weather is less susceptible to human interference than whatever happens in geopolitics within the next year. Prognostications about future climate might be less reliable; do those have to be downgraded to "indicators" too? On the flip side, prediction markets have a very good track record when forecasting certain events, such as interest rate decisions. Does that mean whether it's a "prediction" or an "indicator" depends on what you're forecasting?
>I've thought hard about how to sell prediction markets to consumers. In 2020, I created Google's current internal prediction market. Since then, I've served as the CTO of Metaculus, a non-market-based crowd-forecasting website, and now run FutureSearch, a startup that provides AI forecasters and researchers.
I feel like openly saying you professionally try to make people believe in markets reduces the impact of any further claim.
>Still, there is a benefit to speed. On March 11, 2026, the Financial Times reported that, upon news of Iran War escalation, the Polymarket odds of inflation at or above 2.8% rose to above 90%. This illustrated an immediate domestic impact to US foreign policy, which could influence the public in a way that updates months later from professional economists might not.
I don't understand how this or similar predictions are of any value. "People strongly believe a war will worsen inflation" is information you could get anywhere, and it isn't necessarily based on any high-quality decision making.
https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_differen...
Getting better calibrated really is worthwhile; I just wish there were more of an appetite to do that without involving money.
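Calibration can be measured with no money on the line at all: bin your forecasts, then compare the mean forecast in each bin to the observed outcome frequency. A small sketch with made-up forecasts and outcomes:

```python
from collections import defaultdict

# Group forecasts into probability bins and compare the mean forecast in each
# bin to the observed frequency of the outcome. A well-calibrated forecaster
# matches the two in every bin. All data below is fabricated for illustration.
def calibration_table(forecasts, outcomes, n_bins=5):
    bins = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        b = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the top bin
        bins[b].append((p, o))
    table = {}
    for b, items in sorted(bins.items()):
        mean_p = sum(p for p, _ in items) / len(items)
        freq = sum(o for _, o in items) / len(items)
        table[b] = (round(mean_p, 3), round(freq, 3))
    return table

# Made-up forecasts and resolved outcomes:
fs = [0.1, 0.15, 0.7, 0.8, 0.9, 0.3, 0.35]
ys = [0, 0, 1, 1, 1, 0, 1]
print(calibration_table(fs, ys))
```

No market is needed for this loop; a spreadsheet of past predictions and a score kept honestly would do the same job.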
Even doing this, it's not apples-to-apples. For one thing, in this article I filter only to "interesting" markets, which controls for the share that are "easy" as you describe.
I thought this was the very thing we wanted to avoid by creating reputation- or money-based prediction platforms that reward statistical accuracy. We already have plenty of pundits speculating inaccurately about vague things they don't know much about.
We don't need AI to get more of that!
Fun times.