Prediction Market
Glossary

35 key terms defined — from implied probability to calibration, Brier score, and consensus aggregation.

Last updated March 2026  ·  35 terms  ·  Free to use
B
Contract Type
Binary Option
Also: Yes/No contract, binary contract
A contract that pays a fixed amount — typically $1.00 — if a specific event occurs, and $0 if it does not. The price of a binary contract, expressed as a decimal between 0 and 1, is directly interpretable as a probability. Most prediction market contracts are binary options.
Example: A "Will the Fed raise rates in March?" contract priced at $0.18 means the market assigns an 18% probability to a rate hike. If the Fed raises rates, the contract settles at $1.00. If not, it settles at $0.00.
Accuracy Metric
Brier Score
Named after Glenn W. Brier (1950)
A scoring rule that measures the accuracy of probabilistic forecasts. Calculated as the mean squared error between stated probabilities and actual binary outcomes: BS = (1/n) Σ (f_t − o_t)², where f_t is the forecast probability and o_t is the outcome (1 or 0). Scores range from 0 (perfect) to 1 (perfectly wrong). Lower is better. A naïve baseline of always predicting 0.5 scores 0.25.
Example: If a market predicted 72% probability of a team winning, and the team won, the Brier contribution for that event is (0.72 − 1)² = 0.0784. If the team lost, it would be (0.72 − 0)² = 0.5184. Prediction markets typically achieve Brier scores of 0.18–0.22 on political events, compared to simple baseline models scoring 0.25.
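The calculation can be sketched in a few lines of Python (`brier_score` is an illustrative name, not a function from any particular library):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and binary outcomes."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must be the same length")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# The two cases from the example: a 72% forecast when the team won, and when it lost.
print(round(brier_score([0.72], [1]), 4))  # 0.0784
print(round(brier_score([0.72], [0]), 4))  # 0.5184
```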
C
Accuracy Metric
Calibration
Also: probability calibration, reliability
A measure of how well a forecaster's stated probabilities match observed outcome frequencies over many predictions. A perfectly calibrated forecaster who assigns "70% probability" to a class of events is correct on exactly 70% of those events. Calibration can be assessed visually using a reliability diagram, or numerically. Prediction markets tend to be better calibrated than individual experts because financial incentives penalize overconfidence.
Example: If a forecaster says "80% likely" on 100 different events and 78 of them occur, they are well-calibrated. If only 60 occur, they are overconfident — they claimed 80% but delivered 60%.
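The check described above can be sketched by binning forecasts and comparing each bin's mean forecast with its observed hit rate. This is a minimal sketch; the bin count and rounding are arbitrary choices:

```python
from collections import defaultdict

def reliability_table(forecasts, outcomes, n_bins=10):
    """Group forecasts into probability bins and compare each bin's mean
    forecast with the observed frequency of positive outcomes."""
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        b = min(int(f * n_bins), n_bins - 1)  # forecasts of 1.0 go in the top bin
        bins[b].append((f, o))
    table = {}
    for b in sorted(bins):
        pairs = bins[b]
        mean_forecast = sum(f for f, _ in pairs) / len(pairs)
        hit_rate = sum(o for _, o in pairs) / len(pairs)
        table[round(b / n_bins, 2)] = (round(mean_forecast, 3), round(hit_rate, 3))
    return table
```

A well-calibrated forecaster shows mean forecast ≈ hit rate in every bin; systematic gaps indicate over- or under-confidence.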
Data Engineering
Canonical Key
Also: event normalization key, market key
A standardized identifier used to match the same event across different prediction market platforms, which often use different naming conventions, date formats, and team abbreviations. Without a canonical key, aggregating data from multiple sources produces duplicates and mismatches. A common format is sport:team1-team2:date.
Example: One platform might call a game "NYK vs BOS 2026-03-15" while another uses "boston-celtics_new-york-knicks_mar15". A canonical key normalizes both to nba:boston-celtics_new-york-knicks:2026-03-15 so consensus data can be aggregated correctly.
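A minimal normalization sketch, assuming a hand-maintained alias table (the `TEAM_ALIASES` mapping and key format below are illustrative):

```python
# Illustrative alias table -- a real pipeline would maintain a much larger one.
TEAM_ALIASES = {
    "nyk": "new-york-knicks",
    "new-york-knicks": "new-york-knicks",
    "bos": "boston-celtics",
    "boston-celtics": "boston-celtics",
}

def canonical_key(sport, team_a, team_b, date_iso):
    """Build a sport:team1_team2:date key. Teams are sorted so the key is
    identical regardless of which platform listed them first."""
    names = sorted(TEAM_ALIASES[t.lower()] for t in (team_a, team_b))
    return f"{sport}:{names[0]}_{names[1]}:{date_iso}"

print(canonical_key("nba", "NYK", "BOS", "2026-03-15"))
# → nba:boston-celtics_new-york-knicks:2026-03-15
```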
Signal Quality
Confidence Rating
Also: signal confidence, data quality score
A qualitative or quantitative label indicating how reliable a consensus probability estimate is, based on the number of contributing data sources and the degree of agreement between them. Common ratings: LOW (few sources or high spread), MEDIUM (moderate agreement), HIGH (multiple sources, tight spread). Confidence ratings help consumers of consensus data weight estimates appropriately.
Example: An event with data from 4 independent sources and a spread of only 2 percentage points (e.g., 68%–70%) would receive a HIGH confidence rating. An event with data from only one source would receive LOW regardless of the probability value.
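The rating logic might be sketched as follows; the thresholds shown are illustrative, not a published standard:

```python
def confidence_rating(probabilities):
    """Rate a consensus estimate from its source count and spread.
    Thresholds here are illustrative, not a standard."""
    n = len(probabilities)
    if n <= 1:
        return "LOW"  # single-source estimates are LOW regardless of value
    spread = max(probabilities) - min(probabilities)
    if n >= 3 and spread <= 0.03:
        return "HIGH"
    if spread <= 0.08:
        return "MEDIUM"
    return "LOW"

print(confidence_rating([0.68, 0.69, 0.70, 0.68]))  # HIGH: 4 sources, 2pp spread
print(confidence_rating([0.55]))                    # LOW: single source
```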
Core Concept
Consensus Probability
Also: aggregate probability, market consensus
An aggregated probability estimate derived by combining data from multiple independent prediction markets into a single figure per event. Unlike a raw price from one platform, a consensus probability pools information from multiple participant bases, reducing the impact of any one platform's idiosyncrasies, liquidity constraints, or user-base biases. Consensus probabilities are typically updated continuously as underlying market prices change.
Example: If Platform A prices a candidate's election win at 54%, Platform B at 58%, and Platform C at 56%, the consensus probability might be ~56% (weighted by liquidity or sample size). The consensus is more stable than any single reading.
D
Signal
Divergence
Also: market spread, platform disagreement
A condition where different prediction markets assign materially different probabilities to the same event. High divergence — typically 8+ percentage points between platforms — indicates that information is being incorporated unevenly across markets, or that liquidity differences are creating pricing gaps. Divergence often decreases rapidly as new information enters the market and prices converge.
Example: Platform A prices a team to win at 72%, while Platform B prices the same team at 58% — a 14-point divergence. This may reflect a major injury announcement that one platform's participants have already reacted to while the other has not.
E
Theory
Efficient Market Hypothesis (EMH)
Also: market efficiency, price efficiency
The theory that asset prices fully reflect all available information, making it impossible to consistently achieve above-average returns through analysis alone. Applied to prediction markets: if EMH holds, market prices are the best possible probability estimates at any given moment. In practice, prediction markets are believed to be more efficient than polls but less efficient than deep financial markets due to liquidity constraints and participant composition.
Example: After a breaking news event — say, a candidate withdrawing from a race — an efficient market would immediately update its prices to reflect the new reality. An inefficient market would take minutes or hours to fully adjust.
Contract Type
Event Contract
Also: event-based contract, outcome contract
A financial contract whose value is determined by the outcome of a specific real-world event, such as an election result, a sporting outcome, or an economic announcement. Event contracts are the primary instrument traded on regulated prediction market platforms. They differ from options in that payoffs are binary and predetermined.
Example: "Will the US unemployment rate be above 4.5% in Q3 2026?" is an event contract. It pays $1 if unemployment exceeds 4.5%, and $0 otherwise. The current price reflects the market's probability estimate.
F
Valuation
Fair Value
Also: theoretical value, model price
The price at which a contract should trade given the available information — a model-derived probability estimate independent of current market prices. Fair value calculations use external data sources (polling averages, performance statistics, economic models) to produce an expected probability. The difference between a contract's market price and its fair value is sometimes called the "edge."
Example: A statistical model using team performance metrics estimates a team's win probability at 61%. If the market is pricing the contract at 55%, the fair value gap is 6 percentage points. Whether this gap represents a real opportunity depends on the quality and timeliness of the underlying model.
I
Core Concept
Implied Probability
Also: market-implied probability, price probability
The probability of an outcome as directly encoded in the current price of a binary prediction market contract. Since binary contracts pay $1 on a win and $0 on a loss, the contract price equals the market's collective probability estimate. No conversion is needed: a price of $0.63 implies a 63% probability. This is simpler than converting from traditional pricing, which includes a platform margin.
Example: A contract asking "Will Party X win the Senate seat in State Y?" priced at $0.41 implies a 41% win probability. Compare this to a poll showing 44% support: the difference (if consistent) may reflect likely-voter adjustments, uncertainty about turnout, or late-breaking factors already priced into the market.
Theory
Information Aggregation
Also: dispersed information, Hayek's knowledge problem
The process by which markets combine dispersed private information held by many different participants into a single consensus price. Each participant knows something different — local conditions, domain expertise, early data signals. Market incentives cause them to reveal this information through their trades, with prices adjusting until all available information is incorporated. This is the primary mechanism explaining why prediction markets often show strong calibration relative to individual expert forecasts.
Example: A futures market on crop yields incorporates weather observations from thousands of farmers, satellite data from analysts, and supply-chain knowledge from distributors — far more information than any single analyst could synthesize. The resulting price reflects the aggregate of all this distributed knowledge.
L
Market Structure
Liquidity
Also: market depth, market volume
The volume of contracts that can be traded in a prediction market without significantly moving the price. High-liquidity markets can absorb large orders with minimal price impact, making their prices more reliable. Low-liquidity markets can be easily distorted by a single large order. Liquidity varies significantly across events: presidential elections typically have far more liquidity than local races or niche sporting events.
Example: A US presidential election market might have $50M+ in total volume, making the price highly resistant to manipulation. A minor league sports event might have only $5,000 in volume — a single $500 trade could move the price by several percentage points.
M
Market Structure
Market Maker
Also: liquidity provider, automated market maker (AMM)
An entity (person, firm, or automated system) that continuously quotes both buy and sell prices for a contract, profiting from the bid-ask spread while providing liquidity to other participants. In traditional prediction markets, market makers may be human or algorithmic. In decentralized prediction markets, automated market maker (AMM) mechanisms set prices algorithmically based on the relative volume of yes vs. no contracts.
Example: A market maker on a political contract might continuously offer to buy at $0.48 and sell at $0.52, earning the $0.04 spread on every round-trip. Their presence ensures participants can always enter or exit positions without waiting for a counterparty.
Data Output
Market Signal
Also: price signal, market indicator
A data point or pattern derived from prediction market prices that carries informational value about the likely outcome of a future event. Market signals include current probability levels, probability changes over time, divergence between platforms, and convergence patterns following news events. Market signals are used for informational purposes — to contextualize news, assess uncertainty, or identify events with high information value.
Example: A sudden 12-point drop in a team's win probability from 68% to 56% in 10 minutes is a market signal indicating that new negative information — perhaps an injury report — has been incorporated into prices by informed participants.
Price Data
Movement / Price Movement
Also: probability drift, price change
The change in a contract's implied probability over a specified time interval. Movement is typically expressed in percentage points (e.g., "+4pp") or as a percentage of the previous price. Tracking movement highlights events where market opinion is shifting — either due to new information, changing conditions, or convergence with other markets. Direction (up or down) indicates whether the event is becoming more or less likely according to the market.
Example: A Fed rate decision contract moving from 22% to 31% overnight (+9pp) signals that market participants have updated toward a rate hike being more likely — possibly in response to a strong inflation print or Fed speaker comments.
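Movement in both units can be computed directly from two price snapshots (a minimal sketch; the function name is illustrative):

```python
def movement(prev, curr):
    """Return (change in percentage points, change as % of the previous price)."""
    return round((curr - prev) * 100, 1), round((curr - prev) / prev * 100, 1)

print(movement(0.22, 0.31))  # (9.0, 40.9): +9pp, a 40.9% relative jump
```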
N
Data Quality
N Platforms
Also: source count, platform count
The number of independent prediction market platforms contributing data to a consensus probability estimate for a given event. Higher n-platform counts produce more reliable consensus estimates, since each additional independent source reduces the influence of any single platform's biases or liquidity constraints. Events with n=1 should be treated as single-source estimates, not true consensus.
Example: An NBA game with data from 4 independent platforms is a much stronger consensus estimate than a minor political race covered by only 1 platform. The Meridian Edge API returns n_platforms as a field in each event response so consumers can filter by data quality.
O
Market Structure
Order Book
Also: limit order book, depth of market
A real-time list of all outstanding buy and sell orders for a contract, organized by price level. The order book shows market depth — how much volume is available at each price — and determines the best available price for any given order size. Prediction markets with deep order books (many resting orders at multiple price levels) are more liquid and their prices are more reliable indicators of true consensus probability.
Example: An order book might show 500 "Yes" contracts available to buy at $0.62, 800 at $0.63, and 1,200 at $0.65. A buyer wanting to purchase 600 contracts would fill 500 at $0.62 and 100 at $0.63, with their average price slightly above the quoted best price.
P
Core Concept
Prediction Market
Also: event futures market, information market, idea futures
A market where participants hold contracts whose value depends on the outcome of future events. Prices emerge from the interaction of buyers and sellers — each expressing their probability beliefs through their willingness to pay — and aggregate into a consensus probability estimate. Research consistently shows prediction markets are better calibrated than polls, individual expert forecasts, and most structured forecasting processes. Regulated prediction markets in the US operate under CFTC oversight.
Example: "Will the US CPI be above 3.0% in June 2026?" is a prediction market question. Participants who believe CPI will exceed 3.0% buy "Yes" contracts; those who disagree sell them. The price at which supply and demand balance — say, $0.31 — represents the market's collective 31% probability estimate.
Market Dynamics
Price Convergence
Also: arbitrage convergence, spread compression
The process by which prices on different prediction market platforms move toward each other over time. When the same event trades at significantly different prices across platforms, arbitrageurs have an incentive to buy on the cheaper platform and sell on the more expensive one. This activity compresses the spread until prices are near-equal or the transaction costs make further arbitrage uneconomical. High-divergence events often exhibit rapid convergence following major news.
Example: If Platform A prices a contract at 45% and Platform B prices the same contract at 58%, arbitrageurs can theoretically buy on A and sell on B. Their activity pushes A's price up and B's price down until the gap narrows to within transaction-cost range — typically 1–3 points on liquid markets.
Statistics
Probability Distribution
Also: outcome distribution, forecast distribution
A mathematical function that assigns probabilities to all possible outcomes of an uncertain event. For binary prediction market contracts (yes/no), the distribution is a Bernoulli distribution parameterized by a single probability p. For multi-outcome markets (e.g., "Which party prevails in the election?"), the distribution spans all candidates and must sum to 100%. Comparing distributions across platforms reveals where informational differences are most pronounced.
Example: A three-candidate election market might show Candidate A at 54%, Candidate B at 31%, Candidate C at 15%. These three probabilities form the outcome distribution and must sum to 100%. If Platform B shows A at 48%, B at 37%, C at 15%, the divergence is concentrated in the A-B split, not in C's probability.
R
Regulatory
Regulated Prediction Market
Also: CFTC-regulated market, designated contract market
A prediction market operating under regulatory oversight, such as approval from the US CFTC (Commodity Futures Trading Commission) as a Designated Contract Market (DCM). Regulated platforms must meet standards for market integrity, participant protection, and contract design. Regulatory compliance provides important consumer protections and ensures that market prices reflect genuine participant beliefs rather than manipulation.
Example: Several US prediction market platforms operate as CFTC-regulated contract markets. Regulation typically imposes position limits, requires clearing arrangements, and mandates disclosure — factors that can affect liquidity and participation compared to unregulated alternatives.
Contract Design
Resolution Criteria
Also: settlement criteria, outcome criteria
The specific, predetermined rules that determine how a prediction market contract will be settled. Clear resolution criteria are essential for market integrity: participants need to know exactly what conditions trigger a "Yes" payout before they participate. Ambiguous resolution criteria can lead to disputes and erode trust. Well-designed criteria specify the data source, time window, and any edge-case handling.
Example: A contract asking "Will inflation be above 3% in June 2026?" needs to specify: which measure of inflation (CPI, PCE, headline, core?), which reporting release, the exact threshold, and what happens if the data is revised after settlement. A contract with ambiguous criteria is a source of risk for all participants.
S
Market Structure
Settlement
Also: resolution, contract expiry
The process by which a prediction market resolves — the real-world outcome is determined against the resolution criteria, and contracts are paid out accordingly. "Yes" holders receive $1 per contract; "No" holders receive $0 per contract, or vice versa depending on the outcome. Settlement may be immediate (for events with clear outcomes) or delayed (waiting for official data releases). Historical settlement data enables calibration analysis.
Example: A contract "Will Team X win Game 7?" settles immediately after the game ends. If Team X wins, all "Yes" contract holders are credited $1.00 per contract. If Team X loses, "No" holders receive $1.00 per contract. Settlement is final and not reversible.
Market Structure
Slippage
Also: market impact, price impact
The difference between the expected price of a trade and the actual execution price, caused by insufficient liquidity at the target price level. In thin prediction markets, placing a large order exhausts available contracts at the best price and requires buying at progressively worse prices. Slippage is a key consideration for high-volume participants — it effectively increases the cost of entering or exiting positions in illiquid markets.
Example: A participant wants to buy 1,000 contracts at $0.62. The order book shows only 200 available at $0.62, 300 at $0.63, and 500 at $0.65. The average execution price is $0.638 — 1.8 cents above the quoted price. This roughly 3% slippage is the cost of participating in a thin market.
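The fill arithmetic in examples like this can be reproduced with a short sketch that walks the book from best price to worst (illustrative, not a real matching engine):

```python
def fill_order(book, size):
    """Walk (price, available) levels from best to worst and return the
    quantity filled plus the volume-weighted average execution price."""
    filled, cost = 0, 0.0
    for price, available in book:
        take = min(available, size - filled)
        filled += take
        cost += take * price
        if filled == size:
            break
    return filled, round(cost / filled, 4)

book = [(0.62, 200), (0.63, 300), (0.65, 500)]
print(fill_order(book, 1000))  # (1000, 0.638)
```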
Data Visualization
Sparkline
Also: mini chart, inline trend chart
A small, high-density line chart — typically without axes or labels — that shows the trend of a data series over time in a compact space. In prediction market data, sparklines show the recent probability history for an event, enabling quick visual assessment of trend direction and volatility without requiring a full chart interface. Sparklines are commonly encoded as small arrays of historical values in APIs.
Example: The Meridian Edge API returns a sparkline field containing the last 6 consensus probability readings for each event. A sparkline of [0.45, 0.47, 0.51, 0.55, 0.58, 0.61] shows a steady upward trend — the event has been becoming more likely over recent intervals.
Core Concept
Spread
Also: platform spread, cross-market spread
The difference in implied probability between the highest and lowest estimates for the same event across different prediction market platforms. A spread of 5 percentage points or less generally indicates good consensus; spreads above 10 points suggest meaningful disagreement between markets, which may reflect information asymmetry, liquidity differences, or regulatory constraints affecting one platform's participants. Spread is a key input to confidence rating calculations.
Example: If the same NBA game shows win probabilities of 64%, 67%, and 71% across three platforms, the spread is 71% − 64% = 7 percentage points. A spread of 7pp would typically receive a MEDIUM confidence rating. A spread of 2pp across 4 platforms would receive HIGH.
T
Market Structure
Thin Market
Also: illiquid market, low-volume market
A prediction market with low market volume and limited order book depth, where even small trades can significantly affect the price. Thin markets are more susceptible to noise, manipulation, and stale prices — the last trade may have occurred hours ago and no longer reflects current information. Prices in thin markets are less reliable probability estimates than prices in deep, active markets.
Example: A market on a minor city council race might have fewer than 100 total contracts traded. A single informed participant buying 50 contracts could move the price from 30% to 55%. The resulting price may not reflect broad information aggregation — it may just reflect one person's opinion.
Data Engineering
Time-Series Data
Also: historical snapshots, probability history
A sequence of data points indexed by time, capturing how a value changes over regular intervals. In prediction markets, time-series data captures the evolution of implied probability for each event from market open to settlement. Time-series data enables trend analysis, volatility measurement, calibration studies, and backtesting of forecasting models. Each snapshot typically records the contract price, volume, and timestamp.
Example: Recording a contract's probability every 10 minutes from 7 days before an election to settlement produces ~1,008 data points per event. Aggregating this across hundreds of events creates a rich dataset for studying how markets update probability as election day approaches.
W
Theory
Wisdom of Crowds
Named after Francis Galton (1907); popularized by James Surowiecki (2004)
The empirical observation that the aggregate judgment of a large, diverse, and independent group is often more accurate than any individual expert opinion. Conditions for wisdom of crowds to hold: diversity of opinion, independence of participants, decentralization, and an aggregation mechanism. Prediction markets are specifically designed to capture and aggregate crowd wisdom via price discovery. The classic example is Galton's observation that the median crowd guess for an ox's weight (1,207 lbs) was within 1% of the actual weight (1,198 lbs).
Example: The average prediction market price for presidential elections has historically been within 2–3 percentage points of the final vote share — comparable to high-quality polls, but derived purely from aggregated market decisions rather than direct questioning. This accuracy emerges from information aggregation, not from any single participant being especially knowledgeable.
Methodology
Weighted Average
Also: weighted consensus, liquidity-weighted average
An average in which each data point is multiplied by a weight reflecting its relative importance before being summed and divided by the total weight. In prediction market aggregation, weights may be assigned based on each platform's liquidity, historical calibration accuracy, or n_participants. A liquidity-weighted average gives more influence to high-volume platforms, which are presumed to be more efficient. A naive (equal-weight) average treats all platforms identically regardless of volume.
Example: If Platform A has 10× the liquidity of Platform B on a given event, a liquidity-weighted average would give Platform A's price 10× the influence of Platform B's price. This prevents a thin, potentially noisy market from distorting the consensus estimate equally with a deep, liquid one.
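The computation is a one-liner; this sketch uses the 10:1 liquidity ratio from the example (the weights are illustrative):

```python
def weighted_consensus(prices, weights):
    """Average platform prices, weighting each by (say) its liquidity."""
    return round(sum(p * w for p, w in zip(prices, weights)) / sum(weights), 4)

# Platform A (54%) has 10x the liquidity of Platform B (58%).
print(weighted_consensus([0.54, 0.58], [10.0, 1.0]))  # 0.5436 -- pulled toward A
```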

Frequently Asked Questions

What is the difference between implied probability and traditional pricing?
Traditional pricing includes a platform margin (the "vig" or "juice"), meaning it doesn't directly represent true probabilities. To convert American-style odds notation to implied probability you must adjust for the overround. Prediction market prices are different: since binary contracts pay $1 on a win and $0 on a loss, the price is the probability — no conversion needed. A contract at $0.63 directly implies a 63% probability, without any platform margin embedded in the price.
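The overround adjustment mentioned above can be sketched as the standard American-odds conversion followed by normalization (the quoted odds here are made up for illustration):

```python
def american_to_raw_prob(odds):
    """Convert American-style odds to an unadjusted implied probability."""
    if odds > 0:
        return 100 / (odds + 100)
    return -odds / (-odds + 100)

def remove_overround(raw_probs):
    """Normalize raw probabilities to sum to 1, stripping the platform margin."""
    total = sum(raw_probs)
    return [round(p / total, 4) for p in raw_probs]

# A hypothetical two-sided quote of -120 / +100. The raw probabilities sum to
# ~1.045; the excess over 1.0 is the embedded margin.
raw = [american_to_raw_prob(-120), american_to_raw_prob(100)]
print(remove_overround(raw))  # [0.5217, 0.4783]
```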
How is a Brier score calculated?
Brier Score = (1/n) × Σ (forecast_t − outcome_t)², where forecast_t is the stated probability (0–1) and outcome_t is the actual result (1 if the event occurred, 0 if not). Lower scores are better. A perfect forecaster scores 0. Always predicting 0.5 scores 0.25. Prediction markets typically achieve Brier scores of 0.18–0.22 on political binary questions, comparing favorably to the 0.25 baseline and many individual expert forecasters.
Why do prediction markets sometimes disagree with polls?
Prediction markets and polls measure different things. Polls ask a representative sample of voters what they plan to do. Prediction markets aggregate the beliefs of financially incentivized participants about what will happen. Markets incorporate more information (they react to news, fundraising data, economic indicators) and their participants have stronger incentives to be accurate. Polls can be biased by sampling methods, question wording, and social desirability effects. Research by Rothschild (2009) and Wolfers & Zitzewitz (2004) finds that prediction markets have historically shown stronger calibration than polls.
What makes a prediction market well-calibrated?
In a well-calibrated prediction market, events resolve at approximately the rates its prices imply: events priced at 70% should resolve positively about 70% of the time. Key factors that improve calibration: high liquidity (prices are harder to distort), diverse participant base (reduces correlated biases), rapid information incorporation (prices update quickly on new data), and clear resolution criteria (no ambiguity at settlement). Calibration is best measured by plotting reliability diagrams across large sample sizes of settled events.
How does consensus aggregation improve probability estimates?
Each prediction market has its own participant base, liquidity profile, and potential biases. Aggregating across multiple independent platforms — like ensemble methods in machine learning — tends to cancel out platform-specific noise and produce more stable, accurate estimates. Research in forecasting shows that simple averages of independent forecasters tend to be more accurate than most individual forecasters. The same principle applies to prediction markets: a consensus built from 4+ platforms is more reliable than any single platform's price, especially for events where platforms show divergent probabilities.

Related reading: See our Prediction Markets vs Polls comparison for academic research on forecast accuracy, or the How Consensus Works guide for technical details on aggregation methodology.

See Consensus Probabilities Live

Live consensus data updated every 10 minutes.

Open Dashboard · API Docs