If you want a clear answer to “what was the platform’s uptime over the last 12 months?” you need two things: the right sources and the right way to read the numbers. Vendors sometimes publish tidy percentages (for example “99.95% availability”), but that shorthand hides choices about measurement windows, exclusions and sampling frequency. This article walks through where to look, how uptime percentages are calculated, concrete examples of what the percentages mean in practice, and the traps that cause apparent discrepancies between different reports. If you use a trading platform, uptime matters for order execution and market access; remember that trading carries risk and outages are only one of many factors that affect performance. This is general information, not personalised advice.
Where the documented uptime numbers usually come from
Platforms generally publish availability information in three places: the SLA (Service Level Agreement), public status pages or monthly uptime reports, and independent monitoring dashboards. The SLA states the contractual uptime target and how credits are calculated if the provider misses it. Status pages and monthly reports show measured availability for recent reporting periods and often contain incident timelines. Independent monitoring services and community outage trackers give a separate view using synthetic checks from multiple locations.
To find a documented 12‑month uptime figure, start with the platform’s public status or transparency page and look for a “monthly uptime” archive or an annual summary. If that isn’t available, consult the SLA for measurement methodology and ask support for the provider’s published 12‑month SLI (Service Level Indicator) or availability report. If you need greater confidence, compare the provider’s numbers with third‑party monitoring records or your own synthetic checks.
How uptime percentage is calculated (simple explanation)
Uptime percentage answers a simple question: of the total time in the measurement window, what fraction was the service available? The basic formula is:
Availability % = (Total time in window − Total downtime in window) ÷ Total time in window × 100
For a 12‑month window you use the total minutes (or seconds) in that year and subtract minutes classified as downtime according to the provider’s measurement rules. Different reports can use different windows (rolling 30 days, calendar month, 12 months) and different definitions of “downtime” (complete outage versus service degradation), so always confirm the measurement method.
Concrete example: a non‑leap calendar year has 525,600 minutes. If a platform recorded 60 minutes of outage in that year, the availability is (525,600 − 60) ÷ 525,600 × 100 ≈ 99.989%.
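The formula and the worked example above can be checked in a few lines of Python (a minimal sketch; the 525,600‑minute window assumes a non‑leap year):

```python
def availability_pct(total_minutes: float, downtime_minutes: float) -> float:
    """Availability % = (total time - downtime) / total time * 100."""
    return (total_minutes - downtime_minutes) / total_minutes * 100

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

# 60 minutes of outage over a calendar year:
print(round(availability_pct(MINUTES_PER_YEAR, 60), 3))  # → 99.989
```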
What common availability percentages mean in practice
It’s helpful to translate percentages into allowable downtime so the numbers feel tangible. Using a 12‑month (annual) view:
- 99.9% uptime (three nines) allows about 8.76 hours of downtime per year.
- 99.95% uptime allows about 4.38 hours per year.
- 99.99% uptime allows about 52.6 minutes per year.
- 99.999% uptime allows about 5.26 minutes per year.
If a platform reports “99.95% availability over the last 12 months,” that means the measured outages added up to roughly 4.4 hours during that period under their measurement rules. If the provider reports a rolling 30‑day SLI of 99.95%, that is a short‑term snapshot and may not equal the 12‑month figure.
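The downtime budgets listed above follow directly from the availability formula; a quick sketch to convert any target into an annual allowance (again assuming a non‑leap year):

```python
def annual_downtime_allowance(availability: float) -> tuple[float, float]:
    """Return (hours, minutes) of permitted downtime per non-leap year."""
    minutes_per_year = 365 * 24 * 60  # 525,600
    down_minutes = minutes_per_year * (100 - availability) / 100
    return down_minutes / 60, down_minutes

for target in (99.9, 99.95, 99.99, 99.999):
    hours, minutes = annual_downtime_allowance(target)
    print(f"{target}% -> {hours:.2f} h ({minutes:.1f} min) per year")
```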
Step‑by‑step: how to verify a provider’s documented 12‑month uptime
Begin by locating the provider’s published data: find the SLA, the status or transparency page, and any monthly uptime reports. Confirm these three things in the documentation or by asking support: the measurement interval (how often checks are made), the definition of “downtime” (what counts and what’s excluded), and the reporting window (calendar month, rolling 30 days, rolling 12 months, etc.).
Next, read the incident log. Providers usually list incidents with start and end times; add those incident durations together (honouring the provider’s rules about planned maintenance and excluded events) and recompute availability using the formula above. If the provider already publishes a 12‑month availability figure, compare it to your own tally or to independent monitoring for the same interval.
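The tallying step can be sketched like this, assuming a hypothetical incident log with start/end timestamps and a flag marking SLA‑excluded events (planned maintenance and the like); real status pages vary in format:

```python
from datetime import datetime

# Hypothetical incident log; "excluded" marks events the SLA does not count.
incidents = [
    {"start": datetime(2024, 3, 10, 2, 0), "end": datetime(2024, 3, 10, 3, 30), "excluded": False},
    {"start": datetime(2024, 7, 2, 14, 0), "end": datetime(2024, 7, 2, 14, 45), "excluded": False},
    {"start": datetime(2024, 11, 5, 1, 0), "end": datetime(2024, 11, 5, 2, 0), "excluded": True},  # maintenance
]

def counted_downtime_minutes(log) -> float:
    """Sum the durations of incidents that count toward the SLA."""
    return sum(
        (i["end"] - i["start"]).total_seconds() / 60
        for i in log
        if not i["excluded"]
    )

downtime = counted_downtime_minutes(incidents)
availability = (525_600 - downtime) / 525_600 * 100
print(f"{downtime:.0f} min downtime -> {availability:.4f}% availability")
```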
Finally, compare the published figure to the platform’s SLA target. If the SLA guarantees 99.9% and the reported 12‑month availability is below that, the SLA will normally specify remedies such as service credits and the process to claim them. Note that SLAs also often exclude scheduled maintenance windows and force majeure events.
Interpreting differences between published numbers
You will often see small inconsistencies between a vendor’s SLA, the monthly numbers on a status page, and independent monitors. Those differences usually arise from three sources: exclusions (planned maintenance, force majeure), measurement granularity (checks every 60 seconds vs every 5 minutes), and regional or feature‑level differences (a single region or a specific API endpoint may have different availability than the global control plane).
For example, a vendor may report 99.99% availability for its global API but only 99.9% for a specific trading execution endpoint because that endpoint depends on a third‑party liquidity provider. Another common case: the status page shows “partial degradation” incidents that don’t meet the SLA’s definition of downtime and therefore are not counted in the SLA calculation.
Using independent measurement to validate vendor numbers
If uptime matters to you—particularly for trading platforms where access and execution latency matter—run your own synthetic monitoring. Simple HTTP checks from multiple geographic locations every 30–60 seconds will give you an independent record of availability and response time. Combine these checks with real‑world data: order confirmations, fills, and logs from your trading client, because availability at the API level doesn’t always reflect execution success.
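A minimal synthetic probe along these lines, using only the Python standard library (the endpoint URL and interval are placeholders, and a production setup would log timestamps and run from several regions):

```python
import time
import urllib.request
import urllib.error

def probe(url: str, timeout: float = 5.0) -> bool:
    """One synthetic check: True if the endpoint answers with HTTP 2xx/3xx."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def availability_from_checks(results: list[bool]) -> float:
    """Measured availability as the fraction of successful checks."""
    return 100 * sum(results) / len(results)

# Example loop: check every 30 seconds and keep a running record.
# results = []
# while True:
#     results.append(probe("https://api.example-platform.test/health"))
#     time.sleep(30)
```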
Third‑party outage trackers and community reports can spot broad outages quickly, but they are noisy and can overstate the impact during partial regional problems. Use them as signals, not definitive measures.
Example calculation (12‑month case)
Imagine a platform publishes an incident log for the past year listing three customer‑impacting incidents: 90 minutes, 45 minutes and 120 minutes. The SLA excludes scheduled maintenance, and 60 minutes of that logged incident time fell inside two excluded maintenance windows. If the platform’s measurement methodology matches your interpretation, the countable downtime would be 90 + 45 + 120 − 60 = 195 minutes. Annual availability = (525,600 − 195) ÷ 525,600 × 100 ≈ 99.9629%, or about 99.96%.
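The arithmetic in this example, checked in Python:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 for a non-leap year

incident_minutes = 90 + 45 + 120        # customer-impacting incidents
excluded_minutes = 60                   # overlap with SLA-excluded maintenance
downtime = incident_minutes - excluded_minutes  # 195 minutes

availability = (MINUTES_PER_YEAR - downtime) / MINUTES_PER_YEAR * 100
print(f"{availability:.4f}%")  # → 99.9629%
```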
If the provider reported 99.95% for the same period, your independent tally is consistent within rounding and classification choices. If the provider reported 99.999%, you would have good reason to ask for their exact calculation methodology because your incident totals don’t match that claim.
Risks and caveats when using documented uptime figures
Published uptime percentages are only as useful as the definitions behind them. Vendors use different measurement windows and may exclude scheduled maintenance, customer‑caused events, and third‑party failures. They also may report availability for different scopes: whole platform versus a specific service, region, or API. Sampling frequency matters: a monitoring probe every five minutes can miss short outages that a one‑minute probe would catch, and vice versa.
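The sampling‑frequency effect is easy to demonstrate with a toy simulation (the outage times and probe intervals below are invented, and the model assumes each failed check counts one full interval as down):

```python
def detected_downtime(outages, probe_interval_s, window_s):
    """Minutes of downtime a probe would attribute, assuming each failed
    check marks one full interval as down. outages = [(start_s, end_s)]."""
    down = 0
    for t in range(0, window_s, probe_interval_s):
        if any(start <= t < end for start, end in outages):
            down += probe_interval_s
    return down / 60

# A 3-minute outage starting 30 s into a one-hour window:
outages = [(30, 210)]
print(detected_downtime(outages, 60, 3600))   # → 3.0 (1-minute probes catch it)
print(detected_downtime(outages, 300, 3600))  # → 0.0 (5-minute probes miss it)
```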
Another caveat is partial degradation: slow responses, intermittent errors or failed trades can seriously affect trading even if a service is technically “up.” Uptime percentages do not measure latency, error rates or the quality of execution. Finally, marketing rounding and legal definitions in SLAs can make published percentages appear cleaner than the underlying reality; always ask for the raw SLI data and incident timelines if precise verification matters.
What to do if the published 12‑month uptime is unclear or missing
If you cannot find a clear 12‑month figure, request it from the provider’s support or account team and ask three pointed questions: what exact time window was used, what incidents were excluded and why, and what sampling/measurement methodology was used. If you receive an SLA target but no historical SLI data, ask for the monthly uptime archive or for a machine‑readable incident log. Where possible, run your own synthetic monitoring alongside the vendor’s data so you can see how availability and performance affect your real‑world trading activities.
Trading platforms: why uptime percentage is necessary but not sufficient
For traders, a single percentage number is a starting point, not the whole picture. Availability percentages tell you how often the platform was reachable, but they don’t tell you whether orders were routed correctly, fills were timely, or price feeds stayed accurate during incidents. Combine uptime data with latency, error‑rate metrics and execution quality measures. And always account for the fact that outages and degraded service increase operational risk; incorporate contingency plans such as failover methods, alternative liquidity sources, and clear procedures for manual intervention.
Remember: trading carries risk. Platform availability is one operational risk among many, and historical uptime does not guarantee future performance.
Key Takeaways
- Check the provider’s SLA, status/transparency pages and monthly reports to find documented 12‑month uptime, and confirm the measurement method and exclusions.
- Uptime % = (time available ÷ total time) × 100; translate percentages into concrete downtime (hours/minutes) to understand real impact.
- Differences between published figures often come from measurement windows, exclusions and scope (global service vs specific endpoint); independent monitoring helps validate claims.
- For traders, uptime is necessary but not sufficient—also evaluate latency, error rates and execution quality; trading carries risk and historical uptime doesn’t guarantee future reliability.