Aggregated consensus from regulated prediction markets. Real-time divergence detection. One API.
Today's Market Snapshot
Updated every 10 minutes from regulated prediction markets
Three steps from raw market data to actionable intelligence.
We ingest real-time prices from multiple regulated prediction markets every 30 seconds.
Our engine computes aggregated consensus and identifies divergences — events where markets disagree.
Access consensus data via dashboard, REST API, embeddable widget, or automated daily email report.
Every signal starts with infrastructure most platforms can't replicate.
Live Data
Aggregated from regulated prediction markets. Updated every 10 minutes.
Works with
Every morning. Top events and divergence alerts.
280+ pages of live consensus data, updated every 10 minutes.
Trusted by research teams, quant desks, and AI developers worldwide.
For individual analysts and developers
That's $0.029 per API call
14-day money-back guarantee
Why Starter
Get consensus data 10x faster than checking individual platforms
Set up in 10 seconds — your first API call in under a minute
Automate your daily prediction market research
For teams building models and applications
That's $0.0099 per API call
Most popular — chosen by 3 out of 4 teams
Why Pro
See which platforms agree and where they diverge
Backtest against 14 days of historical consensus shifts
Build models with fair value estimates most analysts can't access
For research desks and multi-analyst organizations
That's $0.0099 per API call
Volume discount available
Why Teams
One subscription covers your entire research desk
50,000 calls/day handles portfolio-wide monitoring
Priority support means answers in hours, not days
For production systems and institutional use
Flat rate, unlimited scale
Schedule a Call →
Custom SLA + onboarding included
Why Enterprise
Zero rate limits — built for production pipelines
Full historical archive back to 2025 for deep research
Dedicated account manager who understands your use case
TRUSTED BY TEAMS AT RESEARCH INSTITUTIONS, QUANTITATIVE FIRMS, AND AI COMPANIES
| Feature | Starter | Pro | Teams | Enterprise |
|---|---|---|---|---|
| API calls/day | 1,000 | 10,000 | 50,000 | Unlimited |
| Update frequency | 10 min | Real-time | Real-time | Real-time |
| Consensus data | ✓ | ✓ | ✓ | ✓ |
| Divergence alerts | ✓ | ✓ | ✓ | ✓ |
| Fair value estimates | — | ✓ | ✓ | ✓ |
| Platform breakdown | — | ✓ | ✓ | ✓ |
| Historical data | 7 days | 14 days | 30 days | Full archive |
| SSE real-time stream | — | ✓ | ✓ | ✓ |
| Team seats | 1 | 1 | 5 | Unlimited |
| Support | — | — | Priority | Dedicated |
| SLA | — | — | 99.5% | 99.9% |
| WebSocket stream | — | — | — | ✓ |
| Custom consensus | — | — | — | ✓ |
All plans include HTTPS encryption · 99.9% uptime · Cancel anytime
For informational purposes only. Not investment advice. Participation in prediction markets involves risk of loss.
Join analysts and research teams using aggregated prediction market intelligence.
When three different prediction markets price a Lakers game at 58%, 61%, and 55%, what's the real probability? That's the question we answer.
Individual prediction markets are noisy. A single platform might have thin order books, a cluster of uninformed participants, or a brief lag before incorporating breaking news. But when you aggregate across multiple independent, regulated sources, those idiosyncrasies cancel out. What remains is a cleaner signal — one that reflects the genuine collective estimate of thousands of participants.
That's what we call consensus probability. It's not a simple average (simple averages overweight low-liquidity markets). We use a volume-weighted methodology that accounts for depth, recency, and source reliability. Markets with deeper order books and faster settlement histories contribute more to the final number. The result is a single probability that's more stable, more informative, and — in our backtesting — better calibrated than any individual source alone.
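To make the idea concrete, here is a minimal sketch of a volume-weighted consensus in Python. It only weights by volume; the recency and source-reliability terms from the full methodology are deliberately omitted, and the volumes below are made-up illustration numbers.

```python
def consensus_probability(quotes):
    """quotes: list of (probability, volume) pairs from independent markets."""
    total_volume = sum(volume for _, volume in quotes)
    if total_volume == 0:
        raise ValueError("no liquidity across sources")
    # Deeper books pull the consensus toward their price.
    return sum(prob * volume for prob, volume in quotes) / total_volume

# The Lakers example: 58% on a deep book, 61% and 55% on thinner ones.
quotes = [(0.58, 120_000), (0.61, 40_000), (0.55, 25_000)]
print(round(consensus_probability(quotes), 3))  # → 0.582
```

Note how the thin 61% and 55% books barely move the number off the deep book's 58%; that damping is exactly why volume weighting beats a plain average.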
We update every ten minutes. Our team built the infrastructure to handle over 100 million data points per month because we believe consensus — not any single source — is the most reliable signal available. It's the same principle behind ensemble methods in machine learning: combining independent estimators almost always beats picking one.
1. We collect. Every ten minutes, our pipeline pulls pricing data from 25+ regulated prediction market sources. We normalize event names — different platforms use wildly different naming conventions — and match them to a shared canonical identifier. This matching step alone took us months to get right.
2. We compute. For each matched event, we calculate a volume-weighted consensus probability. Alongside it, we generate a confidence score (how much do the sources agree?) and a spread metric (the gap between the highest and lowest prices). High spread usually means someone knows something the other markets haven't priced in yet — that's where it gets interesting.
3. We deliver. You can access the data however fits your workflow. Our REST API returns structured JSON. The live dashboard shows everything at a glance. We've got embeddable widgets for publishers, a Python SDK on PyPI, and MCP integration for AI agents. Pick the format; we'll handle the plumbing.
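Step 2's spread and confidence metrics can be sketched like this. The definitions here (spread as the max-to-min gap, confidence as a simple decay with price dispersion) are illustrative assumptions, not the published formulas.

```python
from statistics import pstdev

def spread(probs):
    """Gap between the highest and lowest source prices."""
    return max(probs) - min(probs)

def confidence(probs):
    """Map price dispersion into (0, 1]: identical prices give 1.0,
    wider disagreement gives a lower score. The scaling factor 10
    is an arbitrary choice for this sketch."""
    return 1.0 / (1.0 + 10.0 * pstdev(probs))

probs = [0.58, 0.61, 0.55]  # the Lakers example again
print(round(spread(probs), 2))  # → 0.06
```

A six-point spread on a heavily traded game is the kind of gap the divergence alerts are built to surface.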
As of March 2026, we track over 500 active events across NBA, NFL, MLB, NHL, MLS, and select political and economic markets. Our data engineering team runs automated quality checks every cycle to make sure no single source dominates the consensus — and to flag any data anomalies before they reach your endpoint.
Research teams. Academic researchers studying forecasting accuracy need clean, normalized probability data — not raw scrapes from five different platforms with five different schemas. Our API gives them a single, consistent feed they can pipe directly into R or Python without spending a week on data cleaning. We've heard from teams at policy institutes and universities who previously maintained brittle custom scrapers. They don't miss that.
Quantitative analysts. Quant teams need reliable probability feeds with consistent uptime and predictable schemas. Our REST API and SDKs (pip install meridianedge or npm install meridian-edge) were designed with this in mind — structured JSON, typed fields, 10-minute update cycles, and rate limit headers in every response. If you're building models that consume probability data, this is the plumbing you'd eventually build yourself (we just did it first).
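As a taste of the plumbing, here is a standard-library sketch of constructing a consensus query. The endpoint path, the `sport` parameter, and the header scheme are assumptions for illustration, not the documented API; see the SDKs above for the real interface.

```python
import urllib.parse
import urllib.request

def build_consensus_request(api_key, sport="nba"):
    """Construct (but do not send) a consensus query.
    URL and parameter names are hypothetical."""
    query = urllib.parse.urlencode({"sport": sport})
    url = f"https://api.meridianedge.example/v1/consensus?{query}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_key}"}
    )

req = build_consensus_request("demo-key")
print(req.full_url)  # → https://api.meridianedge.example/v1/consensus?sport=nba
```

Sending the request with `urllib.request.urlopen(req)` (or `requests`) would return the structured JSON and rate-limit headers described above.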
AI agents and LLMs. A growing number of AI systems use prediction market data as a grounding signal — a way to anchor probabilistic reasoning in real market prices rather than training data. We support MCP (Model Context Protocol) for Claude and Cursor, and our A2A endpoint lets agent systems query consensus programmatically. If your LLM needs to know "what probability does the market assign to X," we're the structured data layer behind that answer.
If you're building something that needs reliable probability data, we'd genuinely like to hear about it. Drop us a line at [email protected].