The process by which markets combine dispersed private knowledge into a single consensus price signal.
Cluster: Information Theory
Referenced in 30 articles
Industry analysis mapping $63.5 billion in 2025 prediction market volume and a $200 billion+ 2026 run rate. Identifies a structural tension: sports drive current revenue (83% of Kalshi volume), but valuations price in an information infrastructure future that hasn't arrived yet. Argues distribution platforms like Robinhood and Coinbase will capture most value as they vertically integrate into exchange infrastructure.
Proposes a tiered framework for evaluating prediction market reliability, ranking financialized economic indicators highest and speculative prop bets lowest. Outlines three practical use cases: triangulating against traditional polls, nowcasting delayed economic data in real time, and hedging event risk. Draws on a Federal Reserve paper validating Kalshi's data quality and Tetlock's forecasting research to ground the argument.
Argues that prediction markets are a proof of concept for a broader shift: probability as infrastructure. Proposes three 'probability layers' beyond trading: attention markets that price content virality forward, credibility markets that turn trust into a continuously updated score, and demand markets that capture consumer intent before production. Frames the endgame as probability signals embedded invisibly into every decision surface on the internet.
Argues that prediction market TAM should include the supply side: as the cost of producing real-time probability estimates collapses, the addressable market extends beyond trading volume to every decision that benefits from better forecasts. Presents an ordered liquidity formation path from entertainment to information to institutional demand, and contends that scaling to $1T requires massive breadth in long-tail markets rather than concentrated depth in a few high-volume categories.
Asks whether large language models can outperform prediction market consensus prices and argues the more tractable framing is using LLMs as updaters rather than predictors. Distinguishes cold prediction (generating a probability estimate without prior context) from updating (revising an existing estimate as new information arrives), and considers what each role implies for AI tools deployed alongside human traders in live markets.
Sets out to defend insider trading in prediction markets but arrives at a more conditional position. Introduces a 'discovery vs betrayal' framework: in distributed-truth markets like elections, informed traders sharpen the signal because no one holds the full answer; in concentrated-truth markets like earnings, insiders monetize sealed results rather than synthesize public fragments. Argues the real question is not whether insiders should be allowed but what kind of informational asymmetry a market can absorb without losing the participation and trust that make the signal useful.
Reviews Philip Tetlock's Superforecasting and draws a direct line from the book's core thesis — that forecasting skill is measurable, trainable, and outperforms expert punditry — to Polymarket's success during the 2024 US election. Explains Tetlock's key concepts (foxes vs hedgehogs, the Good Judgment Project, Brier scores, calibration) and argues that Polymarket effectively operationalized Tetlock's framework at scale by converting crowd forecasting into a liquid financial market.
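The Brier score mentioned above is simply the mean squared error between probabilistic forecasts and binary outcomes; a minimal sketch (the example forecasts are invented for illustration, not taken from the review):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; an uninformative 50% forecast always scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A sharp, well-calibrated forecaster beats a hedged one on the same events:
sharp = brier_score([0.9, 0.8, 0.1], [1, 1, 0])   # 0.02
hedged = brier_score([0.5, 0.5, 0.5], [1, 1, 0])  # 0.25
```

Because the score is proper, a forecaster minimizes expected loss only by reporting honest probabilities, which is what makes calibration trainable and measurable.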
Frames prediction markets as a real-time information layer that complements traditional journalism by aggregating probabilistic forecasts from financially incentivized participants. Argues that skin-in-the-game accountability produces more accurate signals than commentary-based analysis, with price movements often anticipating news before official announcements. Uses Polymarket and Kalshi as examples and acknowledges COVID-19 as a case where markets underperformed.
Argues that prediction markets are financial instruments, not gambling, by examining Polymarket's architecture across multiple layers: peer-to-peer order book mechanics, information aggregation through skin-in-the-game pricing, hedging use cases, and UX design that suppresses gambling patterns. Contrasts the exchange model with the house-edge casino model to argue the gambling label stems from outdated legal frameworks.
Argues prediction markets are the natural marketplace for sovereign AI agents to trade their core commodity: information. Frames decentralized PMs as the 'bazaar' where agents monetize alpha through positions, market creators earn fees from surfacing unanswered questions, and reproducible computation enables incorruptible AI judges for dispute resolution. Positions this as an alternative to centralized AI lab alignment—market incentives align agents through financial participation rather than top-down instruction.
Legal analysis explaining that insider trading in prediction markets is governed by existing fraud law rather than a distinct insider trading statute. The key question is whether a trader has deceptively breached an implied or explicit promise about how confidential information may be used. Argues prediction markets complicate this analysis by expanding tradable events into contexts where no clear company-based duty exists, making insider trading liability increasingly difficult to determine.
Responds to Kyla Scanlon's New York Times op-ed claiming prediction markets create reflexive loops that alter outcomes. Argues that unlike stock markets, prediction markets lack causal mechanisms through which odds could influence the events they forecast, making them thermometers rather than thermostats. Attributes concerns about market influence to journalism failures in contextualizing odds, not structural flaws in market design.
Questions whether prediction markets are capturing the right signal. Argues binary yes/no markets flatten complex beliefs into coin flips, losing the precision that separates superforecasters from average predictors. Uses the 2024 French trader whale ($30M moving election odds) and a Vanderbilt study (PredictIt's 93% accuracy vs 67% on high-volume platforms) to argue that more liquidity doesn't mean better signal.
Argues prediction markets are becoming a legitimate asset class with potentially the largest TAM, since anything with uncertainty and future resolution is tradable. Notes the category has reached an inflection point with professionals entering, but lacks proper tooling that mature asset classes have (Bloomberg for stocks, Axiom for memecoins). Makes the case for dedicated prediction market terminals.
Builds a formal framework to decompose why volume in prediction markets arrives late: is it because information arrives late (hazard), or because early entry is punished by adverse selection (toxicity)? Introduces LOX, a metric computed from on-chain trades that measures whether new entrants hesitate more than volume alone would predict. Explains why boxing markets cluster with news markets despite being categorized as sports.
Argues binary event contracts fragment liquidity and flatten beliefs into 1-bit structures—achieving 8-bit resolution requires 256 separate markets. Proposes treating beliefs as vectors over probability distributions on a shared liquidity surface. Traders express full distributions and are rewarded for variance compression (reducing entropy), not just final outcome correctness.
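The bit-counting claim and the entropy-reduction reward above can be made concrete; a minimal sketch with invented example distributions (the reward rule shown is an illustration of the variance-compression idea, not the article's exact mechanism):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# A binary contract carries at most 1 bit; resolving an outcome to 8-bit
# precision with yes/no markets needs one contract per bucket:
buckets = 2 ** 8  # 256 separate markets

# Illustrative variance-compression reward: pay a trader for the entropy
# reduction their update produces, not just for final-outcome correctness.
prior = [0.25, 0.25, 0.25, 0.25]      # 2 bits of uncertainty
posterior = [0.70, 0.10, 0.10, 0.10]  # sharper belief after the trade
reward = entropy(prior) - entropy(posterior)  # bits of uncertainty removed
```

Under this framing a trader who moves the market from a flat prior to a concentrated posterior is paid for the information contributed, even before resolution.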
Draws a parallel between prediction markets and Nielsen ratings to argue that coordination value matters more than accuracy. Points to Polymarket's Golden Globes and WSJ partnerships and Kalshi's CNN deal as signs that prediction markets are shifting from external forecasting tools to embedded institutional infrastructure. Once a market is adopted as the shared reference point, displacing it becomes nearly impossible regardless of methodological superiority.
Manifesto arguing binary yes/no prediction markets are incomplete—they flatten nuanced beliefs into coin flips and pay the same whether you were barely right or sharply right. Proposes distribution-native markets that reward precision: pay more for being closer to the actual outcome. Cites 130x volume growth from early 2024 to late 2025 as the category's credibility moment.
Comprehensive podcast covering prediction market fundamentals: information aggregation via Hayek's price signals, thick vs thin markets, when markets work (elections, scientific replication) and when they struggle, the oracle problem, and applications to corporate forecasting and futarchy governance.
Educational thread on the game-theoretic foundations of prediction markets. Explains why truth-telling is the dominant strategy through incentive compatibility, details how LMSR works as a proper scoring rule, and argues prediction market builders need economists and game theory experts on their teams.
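The LMSR mechanics covered in that thread fit in a few lines; this is a generic sketch of Hanson's cost-function market maker, where the liquidity parameter `b` and the trade sizes are hypothetical values chosen for illustration:

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of outcome i: the gradient of C, a valid probability."""
    z = sum(math.exp(qi / b) for qi in q)
    return math.exp(q[i] / b) / z

def buy(q, i, shares, b=100.0):
    """Cost of buying `shares` of outcome i is the change in the cost function."""
    q_new = list(q)
    q_new[i] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b), q_new

q = [0.0, 0.0]            # fresh binary market: both outcomes priced at 0.5
cost, q = buy(q, 0, 50)   # buying YES shares pushes its price above 0.5
```

Prices always sum to 1, and the market maker's worst-case loss is bounded by `b * ln(n)` for `n` outcomes, which is why LMSR can quote continuous prices even in thin markets.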
Large-scale field experiment testing prediction market manipulation across 817 markets. Randomly shocked prices and tracked effects over 60 days with hourly data. Finds markets can be manipulated with effects persisting for months, though they gradually fade. Markets with more traders, higher volume, and external probability estimates prove more resistant.
Comprehensive 57-page guide covering prediction market fundamentals, tech stack (blockchain, collateral, market engines, oracles), current state (Polymarket vs Kalshi regulatory and product divergence), emerging builders across market engines and consumer apps, and open questions including oracle collusion, long-dated capital costs, and leverage.
Academic survey of prediction mechanism design from a mechanism design perspective. Covers scoring rules, market scoring rules (LMSR), cost-function-based market makers, dynamic parimutuel markets, incentive compatibility, combinatorial markets, and peer prediction systems for subjective events where ground truth doesn't exist.
Podcast discussion separating hype from reality in prediction markets. Covers foundational mechanics, comparative advantages over pollsters and experts, and future applications including corporate decision-making, scientific reproducibility, and governance innovations.
Argues that prediction markets represent one application within a broader 'info finance' ecosystem. Proposes these mechanisms can improve governance, scientific research, journalism, and social media through information-pricing mechanisms that go beyond simple betting.
Technical primer on prediction market design, from wisdom of crowds theory to decentralized oracle mechanisms. Argues prediction markets could systematize event probabilities to expand financial markets like derivatives did historically, but current implementations face challenges in liquidity fragmentation, oracle incentives, and complexity.
Compares prediction markets with traditional polls and expert commentary along two axes: grassroots vs top-down and expertise density. Uses the 2024 Biden-Trump race to show how Polymarket priced in Biden's withdrawal probability while polls measured only head-to-head support.
Introduction to decentralized prediction markets with a SWOT analysis of Polymarket. Covers how the platform works, its regulatory positioning, liquidity constraints, and growth opportunities.
Proposes a mechanism for prediction markets where outcomes cannot be objectively verified. Uses the last reporter's prediction as a reference point, creating incentives for truthful reporting through negative cross-entropy payments. Proves truthful reporting is a perfect Bayesian equilibrium.
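The negative cross-entropy payment can be sketched simply, assuming each reporter is scored against the next reporter's distribution as the reference (the function and example distributions below are illustrative, not the paper's exact mechanism):

```python
import math

def neg_cross_entropy(report, reference):
    """Negative cross-entropy of `report` scored against a `reference`
    distribution: sum_o reference[o] * log(report[o]).
    In expectation this is maximized by reporting your true beliefs."""
    return sum(r * math.log(p) for r, p in zip(reference, report))

# If the reference (e.g. the next reporter) reflects your own beliefs,
# honest reporting beats hedging toward uniform:
belief = [0.7, 0.3]
honest = neg_cross_entropy(belief, belief)
hedged = neg_cross_entropy([0.5, 0.5], belief)
```

The equilibrium logic is that when later reports are expected to be truthful, scoring against them makes truth-telling a best response for earlier reporters, even though no objective ground truth ever resolves the market.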
Argues that prediction markets aimed at informing voters should operate as nonprofits rather than for-profit businesses. Points out that valuable political information rarely correlates with profitable trading opportunities, and that charitable structures face less regulatory scrutiny.