Transparency & rigour
IQ Benchmark Methodology
How we calculate the IQ Procurement Benchmark Index — what data we use, how we weight it, how we handle uncertainty, and how the methodology evolves as Bundle IQ's transaction database grows. Published in full because transparency is not optional in research.
Current edition
Baseline Edition — April 2026.
This is the first edition of the IQ Benchmark Index. At this stage, our proprietary transaction data component is limited (40+ events, Q1 2025–Q1 2026). We weight heavily toward published primary sources in this edition and are explicit about where our own data contributes. This document describes the methodology as it stands today and how it will evolve. We will update both the benchmarks and this methodology document quarterly.
1. Our four guiding principles
01
Transparency over authority
Every rate in the index cites its source. Where our transaction data contributes, we state the sample size. Where we rely on published data, we name the publication and date. A benchmark without a source is an opinion.
02
Conservative over confident
Where sources conflict or samples are small, we widen the range rather than narrow it. We would rather show a range of £18–28 and be honest about uncertainty than show £23 and imply false precision.
03
Honest about what we don't have
Bundle IQ is an early-stage platform. Our transaction dataset is growing, not grown. We do not pretend otherwise. The value of this edition is the methodology and the baseline — not the proprietary data volume.
04
Better over time, not just bigger
Each edition adds more data and more rigour. We track methodology changes between editions and document them. The direction of travel matters as much as the current position.
2. Data sources by tier
The IQ Benchmark uses a tiered source hierarchy. Where Tier 1 sources exist and are current, they take precedence. Tier 2 sources supplement or corroborate. Tier 3 — our own transaction data — is weighted proportionally to sample size and clearly flagged.
| Tier | Source type | Examples | Current weight |
|---|---|---|---|
| Tier 1 | Government & regulatory primary data | Ofgem Commercial Energy Price Index · ONS Annual Business Survey · SRA Price Transparency Register · ABI SME Insurance Market Statistics · Companies House published data | Primary — full weight |
| Tier 2 | Professional body & major industry reports | CIPS/Hays Salary Guide · Law Society Annual Statistics · REC JobsOutlook · Hays sector salary guides · IPA Agency Census · BIFM Market Report · Logistics UK · Cornwall Insight · LMA Rate Monitor | High — corroborative |
| Tier 3 | Bundle IQ transaction data (proprietary) | Competitive event outcomes, Q1 2025–Q1 2026. Categorised by spend category, region, organisation size band, contract age, and route (individual vs pool). Anonymised. | Proportional to n — stated per rate |
| Tier 4 | MCIPS practitioner analysis | Rate range validation, overpay signal calibration, and contextual insights by Bundle IQ Research Team. Applied where Tier 1–3 data is insufficient to establish a range with confidence. | Supplementary — flagged where used |
3. How we calculate rate ranges
For each service type within each category, we establish a rate range using the following process:
Step 1 — Identify the comparable unit
We define the service precisely enough that rates are comparable. "IT support" is not a unit; "IT helpdesk support, Mon–Fri 9–5, 4-hour response SLA, including remote monitoring and patch management, per user per month" is. Imprecise service definitions are the most common reason published benchmarks are misleading.
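To make the idea of a comparable unit concrete, here is a minimal sketch of how such a definition might be captured as a structured record. The field names and the `ComparableUnit` type are hypothetical illustrations, not a published schema; the example values come from the IT helpdesk definition above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ComparableUnit:
    """Hypothetical record pinning a service down tightly enough that
    rates quoted against it are genuinely comparable."""
    category: str             # broad spend category, e.g. "IT support"
    service: str              # specific service, e.g. "IT helpdesk support"
    coverage: str             # hours of cover, e.g. "Mon-Fri 9-5"
    response_sla_hours: int   # contracted response SLA
    inclusions: tuple         # what the rate includes
    pricing_basis: str        # the unit the rate is quoted against

# The worked example from the text, expressed as a record:
unit = ComparableUnit(
    category="IT support",
    service="IT helpdesk support",
    coverage="Mon-Fri 9-5",
    response_sla_hours=4,
    inclusions=("remote monitoring", "patch management"),
    pricing_basis="per user per month",
)
```

The point of the structure is that two quotes are only benchmarked against each other when every field matches, which is exactly why "IT support" alone is not a unit.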
Step 2 — Gather Tier 1 and 2 sources
We pull the most recent edition of each relevant source. Where sources use different service definitions, we adjust for comparability. Where sources conflict materially (more than 30% variance between them), we widen the range rather than average them — conflicting sources are a signal of genuine market variation, not measurement error.
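The widen-rather-than-average rule can be sketched as follows. This is an illustrative reading of the 30% variance test (here, the spread between source midpoints relative to the lowest midpoint), not the exact published formula; the function name and the averaging fallback are assumptions.

```python
def combine_sources(ranges):
    """Combine (low, high) rate ranges from independent sources.

    If source midpoints disagree by more than 30%, return the full
    envelope across all sources (widen); otherwise average the lows
    and highs (broad agreement).
    """
    mids = [(lo + hi) / 2 for lo, hi in ranges]
    spread = (max(mids) - min(mids)) / min(mids)
    if spread > 0.30:
        # Material conflict: treat it as genuine market variation.
        return min(lo for lo, _ in ranges), max(hi for _, hi in ranges)
    n = len(ranges)
    return (sum(lo for lo, _ in ranges) / n,
            sum(hi for _, hi in ranges) / n)
```

For example, sources quoting £18–24 and £30–38 conflict materially (midpoints 21 and 34), so the combined range becomes the envelope £18–38 rather than a falsely precise average.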
Step 3 — Incorporate transaction data
Where Bundle IQ has transaction data for the relevant service type (sample size n≥3), we incorporate the distribution of outcomes. With small samples (n=3–10), we weight this at 20–30% of the final range. With larger samples (n>20), transaction data may become the primary source. Sample size is always stated.
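A sketch of the proportional weighting follows. The endpoints match the text (zero weight below n=3, 20–30% for n=3–10, transaction data leading above n=20); the linear ramp between n=10 and n=20 and the 0.7 cap are my assumptions for illustration, not the published schedule.

```python
def transaction_weight(n):
    """Illustrative weight given to Bundle IQ transaction data by sample size.

    n < 3   -> 0.0 (too small to use)
    3..10   -> 0.2 to 0.3, scaled linearly (per the text)
    11..20  -> ramps toward 0.5 (assumed interpolation)
    > 20    -> transaction data may lead; capped at 0.7 here (assumed)
    """
    if n < 3:
        return 0.0
    if n <= 10:
        return 0.2 + 0.1 * (n - 3) / 7
    if n <= 20:
        return 0.3 + 0.2 * (n - 10) / 10
    return 0.7

def blend(published_range, txn_range, n):
    """Weighted blend of a published (low, high) range with the
    transaction-data range, using the sample-size weight above."""
    w = transaction_weight(n)
    lo = (1 - w) * published_range[0] + w * txn_range[0]
    hi = (1 - w) * published_range[1] + w * txn_range[1]
    return round(lo, 2), round(hi, 2)
```

With n=3 the transaction data shifts the published range by only 20%, which is why small samples inform the range without dominating it.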
Step 4 — Set the overpay signal
The overpay signal is the rate above which we believe a competitive process is very likely to deliver a saving. It is set at approximately the 75th percentile of rates observed across our sources — the point where three-quarters of comparable markets are pricing below this level. It is not the maximum we have ever seen; it is the threshold above which paying more requires explanation.
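The 75th-percentile calculation can be sketched directly. The text says "approximately the 75th percentile"; the linear-interpolation method used here is one common convention and an assumption on my part, not a statement of the exact published method.

```python
def overpay_signal(rates):
    """75th percentile of observed rates, using linear interpolation
    between order statistics (assumed method)."""
    xs = sorted(rates)
    pos = 0.75 * (len(xs) - 1)   # fractional position of the percentile
    i = int(pos)
    frac = pos - i
    if i + 1 < len(xs):
        return xs[i] + frac * (xs[i + 1] - xs[i])
    return xs[i]
```

Given observed rates of £18, £20, £22, £23, £25, £26, £28 and £32, the signal lands between the sixth and seventh values: three-quarters of the market prices below it, and anything above it warrants explanation.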
4. What the benchmarks don't capture
Every benchmark has limits. Ours specifically do not capture:
- Regional variation beyond a London/non-London split. We note where London adds a significant premium but do not publish granular regional benchmarks at this stage.
- Organisation size effects below 10 employees. Our benchmark range is calibrated for businesses of approximately 10–250 employees. Rates for very small businesses may differ.
- Specification complexity effects. A bespoke IT support contract with unusual SLA requirements may legitimately sit above our benchmark. The benchmark is for standard-specification procurement.
- Rapidly moving markets. Energy prices can move significantly between quarterly publications. We note categories with high price volatility and recommend more frequent self-assessment in those categories.
- Relationship value. Some above-benchmark pricing reflects genuine relationship value — priority service, embedded knowledge, trust built over time. The benchmark identifies where that premium is being paid; it does not determine whether it is worth paying. That is a judgement only the buyer can make.
5. How the methodology evolves
This methodology document is versioned. Changes between editions are documented below and in the benchmark itself. The planned evolution is:
Edition 1 — April 2026
Baseline: public sources primary
Rate ranges derived primarily from Tier 1 and Tier 2 sources. Bundle IQ transaction data (n=40+) contributes as Tier 3 with proportional weighting. Methodology established and published. This document is version 1.0.
Edition 2 — July 2026
Q2 transaction data incorporated
First quarterly update. Bundle IQ transaction data refreshed with Q2 2026 events. Expected sample growth to n=80–120+. Where transaction data sample exceeds n=20 for a service type, it becomes co-primary with Tier 1/2 sources. Methodology note updated to reflect any rate movements.
Edition 3 — October 2026
First year-on-year comparison
One year of Bundle IQ transaction data enables first year-on-year rate movement analysis. Categories where rates have moved materially from baseline are highlighted. This is when the Index begins to generate its own primary trend data.
Edition 4+ — January 2027 onwards
Proprietary data primary in most categories
With a full year of transactions across all categories, Bundle IQ transaction data becomes the primary source in categories where we have sufficient sample size (target n≥50 per service type). Published sources remain as corroboration and for categories with limited Bundle IQ volume. The Index becomes genuinely differentiated from anything else available to UK SMEs.
6. How to contribute
The IQ Benchmark improves as more organisations contribute their anonymised rate data. If you are willing to share your current supplier rates — anonymously and in aggregate — it makes the benchmarks more accurate for everyone, including you.
We ask for: the service type, your current rate, the contract start date, and your approximate organisation size band. We never ask for your name, your supplier's name, or any information that could identify either party. Contributed data is incorporated into the next quarterly edition.