In a busy multi-site lab, OTA uncertainty is rarely the thing that breaks first—until you try to correlate two chambers, across two shifts, with three fixtures, and the same device suddenly “moves” by 1–2 dB. At that point the debate isn’t about radiated performance; it’s about whether your measurement system is telling the truth consistently enough to make engineering decisions.
This post lays out a practical way to build an OTA measurement uncertainty budget that actually survives day-to-day lab life: multiple sites, rotating staff, frequent changeovers, and a steady stream of FR1/FR2 devices, phased arrays, and satellite terminals.
Why budgeting uncertainty matters more in multi-site OTA labs
Single-site labs can sometimes get away with “golden runs” and tribal knowledge. Multi-site labs can’t. The moment you need to compare results between chambers (or between a development lab and a certification lab), your uncertainty budget becomes the contract that explains what differences are expected, what differences are suspicious, and what differences require corrective action.
Three trends are pushing this to the forefront:
- mmWave OTA is now the default path for FR2 verification in many device programmes, not a specialist corner case. Industry guidance on 5G NR OTA testing increasingly emphasises uncertainty and stability as gating factors for test time and pass/fail confidence.
- Standards bodies are tightening OTA methodologies and uncertainty expectations. CTIA’s Measurement Uncertainty document (CTIA 01.70, v6.0.3, © 2024) formalises contributors such as mmWave measurement grids and temperature influences—exactly the items that get messy at scale.
- 3GPP is expanding OTA requirements into new domains. Release 19 (specification version 19.0.0, March 2025) introduced a new OTA methodology for spatial emission requirements, with uncertainty evaluation discussed in TR 38.908. That’s a strong signal: uncertainty isn’t an afterthought; it’s part of the compliance story.
Define the measurand first (or your OTA uncertainty will be meaningless)
Before you list contributors, lock down what you are measuring and under what operational mode. Typical OTA measurands include:
- TRP / EIRP (including beamformed EIRP for FR2)
- TIS / EIS (and sensitivity under specific noise/interference conditions)
- Radiation pattern metrics (e.g., sidelobe levels, null depth, beam peak direction)
- Spurious and spatial emission metrics (increasingly relevant with newer 3GPP requirements)
Then freeze the test conditions that change the answer: channel bandwidth, power control state, thermal state of the DUT, beam management settings, and any cable/IF routing. In multi-site labs, 80% of “uncertainty” arguments are actually measurand definition arguments.
Build the OTA uncertainty budget from the signal chain outwards
A robust approach is to treat the OTA system like an RF link budget—except every term is an uncertainty contribution rather than a deterministic loss/gain. Group contributors so you can assign ownership and mitigation:
1) Instrumentation and traceability (the boring bits that bite hardest)
These are the contributors auditors love and engineers forget until correlation fails:
- Power sensor / VNA / spectrum analyser calibration uncertainty (traceable cal intervals, connector repeatability)
- Frequency reference uncertainty (especially relevant for narrowband spurs or high-Q measurements)
- Receiver linearity and noise figure uncertainty (dominates TIS/EIS at the low end)
Mitigation: keep a single metrology “truth source” across sites where possible—common calibration providers, harmonised intervals, and cross-check artefacts (attenuators, noise sources, power splitters) that physically travel between labs.
2) RF path stability: cables, waveguide, switches, converters
In FR2 OTA, the RF path is often a patchwork: converters near the chamber, short mmWave waveguide runs, and control cabling that gets disturbed during maintenance. The uncertainty contributors to treat explicitly:
- Cable/waveguide loss uncertainty from flexing, re-torquing, and ageing
- Switch repeatability and temperature drift
- Up/down-converter gain/phase drift over time and ambient temperature
Mitigation: characterise “touch events”. If a cable gets moved, assume it’s a new state and measure it. Where possible, place sensitive conversion stages close to the antenna/feed to reduce loss and improve SNR—industry solutions for FR2 CATR systems explicitly pursue lower loss and lower uncertainty by architectural design, not by paperwork.
3) Chamber and positioning: geometry is an uncertainty term
Positioning and geometry dominate once instrumentation is under control:
- Positioner accuracy/repeatability (azimuth/elevation, tilt, centring)
- Quiet-zone quality (CATR reflector alignment, probe weighting, taper errors)
- Multipath residuals from imperfect absorber performance, leakage, door seams, cable penetrations
Mitigation: treat alignment as a calibration activity, not an installation milestone. A simple rule: if a mechanical adjustment can change the measured peak beam direction or peak gain, it belongs in the uncertainty budget and in a periodic verification checklist.
4) Sampling and grids: the “we didn’t measure there” problem
For spherical scanning or multi-probe systems, your measurement grid is a form of numerical approximation. CTIA 01.70 explicitly calls out mmWave TRP grid influences: fewer points and wider angular steps increase uncertainty because you are integrating an incomplete sampling of the radiated field.
Mitigation: decide what you’re optimising—test time or uncertainty—and set a site-wide grid policy. If Site A uses a dense grid and Site B uses a sparse grid, your correlation exercise is already compromised.
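To see why the grid matters, recall that TRP is the sin(θ)-weighted average of EIRP over the sphere, so a discrete equal-angle grid is a numerical quadrature and coarser steps mean more integration error. A minimal sketch with a synthetic, rotationally symmetric pattern (the pattern and step sizes below are illustrative, not from any standard test case):

```python
import math

def trp_equal_angle(eirp, n_theta, n_phi):
    """TRP from an equal-angle spherical grid of EIRP samples (linear units),
    using the standard sin(theta)-weighted discrete sum:
    TRP ~= pi/(2*N*M) * sum_ij EIRP(theta_i, phi_j) * sin(theta_i)."""
    total = 0.0
    for i in range(1, n_theta):              # poles contribute 0 via sin(theta)
        theta = math.pi * i / n_theta
        for j in range(n_phi):
            phi = 2 * math.pi * j / n_phi
            total += eirp(theta, phi) * math.sin(theta)
    return (math.pi / (2 * n_theta * n_phi)) * total

# Hypothetical directive pattern; its exact TRP is 4/3 in linear units
pattern = lambda th, ph: (1 + math.cos(th)) ** 2

dense  = trp_equal_angle(pattern, n_theta=180, n_phi=360)  # 1-degree steps
sparse = trp_equal_angle(pattern, n_theta=12,  n_phi=24)   # 15-degree steps
print(dense, sparse, abs(dense - sparse))  # sparse grid carries visible error
```

Run against your own reference patterns, this kind of comparison is a cheap way to justify a site-wide grid policy with numbers rather than preference.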
5) DUT behaviour: the hardest contributor to admit
DUT variability is real, and it gets mislabelled as “system uncertainty”. Typical culprits:
- Thermal drift (PA compression, array efficiency changes, beam weight quantisation vs temperature)
- Closed-loop power control differences between runs
- Mode-dependent antenna behaviour (handset grip emulation, enclosure proximity, radome effects)
Mitigation: separate “measurement system uncertainty” from “DUT repeatability” in your budget. Run a Type A study (repeat measurements) using stable reference devices to estimate repeatability by operator, by shift, and by site. CTIA-style Type A thinking is valuable even outside formal certification workflows.
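A Type A estimate is simply the experimental standard deviation of repeat readings on a stable reference device. A minimal sketch (the EIRP values below are hypothetical):

```python
import math
import statistics

def type_a_uncertainty(readings):
    """Type A evaluation: sample standard deviation of repeat readings,
    plus the standard uncertainty of the mean (s / sqrt(n)) if you report
    the average of the n repeats."""
    s = statistics.stdev(readings)  # n-1 denominator
    return s, s / math.sqrt(len(readings))

# Hypothetical EIRP repeats (dBm): one reference DUT, one operator, one site
repeats = [23.41, 23.52, 23.47, 23.38, 23.55, 23.44]
s, u_mean = type_a_uncertainty(repeats)
print(f"repeatability s = {s:.3f} dB, u of the mean = {u_mean:.3f} dB")
```

Repeating this per operator, per shift, and per site, then comparing the spreads, is what separates genuine system uncertainty from process and DUT variation.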
Turning contributors into a number: a practical combination method
A workable lab method is:
- List contributors and classify them as Type A (measured statistically) or Type B (from calibration sheets, specs, prior knowledge).
- Convert everything to standard uncertainty (1σ). For rectangular distributions (common for specs), use u = a/√3, where a is the half-width of the specified interval.
- Combine independent terms by root-sum-of-squares (RSS): u_c = √(Σ u_i²).
- Choose a coverage factor (often k = 2 for ~95% confidence) and report the expanded uncertainty: U = k·u_c.
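The recipe above can be sketched in a few lines; the contributor names and magnitudes below are purely illustrative, not from any real budget:

```python
import math

# Type B contributors: (name, half-width a in dB), assumed rectangular
type_b = [
    ("power sensor calibration", 0.20),
    ("cable loss drift",         0.15),
    ("quiet-zone ripple",        0.30),
]

# Type A contributors: (name, measured standard deviation in dB)
type_a = [("repeatability", 0.10)]

# Rectangular Type B half-widths -> standard uncertainty: u = a / sqrt(3)
u_terms = [a / math.sqrt(3) for _, a in type_b]
u_terms += [s for _, s in type_a]  # Type A values are already 1-sigma

# RSS of independent terms, then expanded uncertainty at k = 2 (~95%)
u_c = math.sqrt(sum(u * u for u in u_terms))
U = 2 * u_c
print(f"combined u_c = {u_c:.3f} dB, expanded U (k=2) = {U:.3f} dB")
```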
Two engineering cautions:
- Don’t RSS things that are clearly correlated. Example: temperature-driven drift affecting both converter gain and cable loss. In multi-site labs, correlation assumptions should be written down, not implied.
- Budget uncertainty by test objective. A fast regression test may accept higher uncertainty than a cross-site correlation run or a customer acceptance test.
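To make the first caution concrete: for two contributors with correlation coefficient r, the combined variance gains a cross term, u_c² = u₁² + u₂² + 2·r·u₁·u₂. A quick sketch with hypothetical temperature-driven terms shows how an independence assumption understates the total:

```python
import math

def combine_pair(u1: float, u2: float, r: float = 0.0) -> float:
    """Combined standard uncertainty of two contributors with
    correlation coefficient r (r = 0 recovers plain RSS)."""
    return math.sqrt(u1 ** 2 + u2 ** 2 + 2 * r * u1 * u2)

# Hypothetical temperature-driven terms (dB):
# converter gain drift and cable loss drift, sharing the same root cause
u_conv, u_cable = 0.15, 0.10

print(combine_pair(u_conv, u_cable, r=0.0))  # naive independence assumption
print(combine_pair(u_conv, u_cable, r=0.8))  # correlated: noticeably larger
```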
Reducing OTA uncertainty where it matters (without gold-plating the lab)
Uncertainty reduction is about leverage. The best improvements come from the few terms that dominate your combined budget:
- Stabilise temperature (room and equipment). CTIA 01.70 highlights ambient temperature influence on test equipment; in practice it’s also a proxy for drift in converters, switches, and even mechanical structures.
- Control mechanical repeatability: torque wrenches, connector care, fixed cable dressing, and “no-touch” zones inside chambers.
- Use reference radiators / reference DUTs and run them weekly. Trend charts catch slow failures (absorber degradation, positioner wear, converter drift) before they become correlation crises.
- Harmonise software and post-processing across sites: near-field to far-field transforms, probe correction tables, integration methods, and peak-search behaviour can change results just as surely as hardware changes.
Novocomms Space use-cases: why satellite terminals make uncertainty budgeting non-negotiable
Satellite terminals—especially compact LEO/GEO user terminals and electronically steered arrays—are unforgiving in OTA. A couple of dB of avoidable uncertainty can masquerade as link margin, or worse, hide a real performance regression until field trials.
Novocomms Space engineering teams work with compact, high-efficiency terminal antenna solutions (including advanced phased-array miniaturisation and integrated RF controls). In that context, an OTA uncertainty budget becomes a design tool:
- Design validation: proving that a beamforming update genuinely improved EIRP, rather than simply shifting measurement bias.
- Manufacturing screens: setting pass/fail limits that reflect true product spread plus measurement uncertainty—so you don’t scrap good units or ship marginal ones.
- Interference mitigation verification: confirming that filtering/mitigation changes have measurable, repeatable impact on radiated behaviour in crowded RF environments.
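One way to implement the manufacturing-screen point is simple guard-banding, in the spirit of conformance decision rules such as ISO 14253-1: tighten the spec limit by the expanded uncertainty before declaring a pass. The limits and U value below are hypothetical:

```python
def passes_screen(measured_eirp_dbm: float, spec_min_dbm: float, U_db: float) -> bool:
    """Guard-banded acceptance: the measurement must clear the spec limit
    by the expanded uncertainty U, so a 'pass' carries ~95% confidence
    instead of being a coin flip right at the limit."""
    return measured_eirp_dbm >= spec_min_dbm + U_db

# Hypothetical numbers: 30 dBm minimum-EIRP spec, U = 0.5 dB
print(passes_screen(30.7, 30.0, 0.5))  # clears the guard band
print(passes_screen(30.3, 30.0, 0.5))  # inside the uncertainty band: reject or retest
```

The same logic inverted (spec_max − U) applies to upper limits; shrinking U through the mitigations above directly widens your usable acceptance window.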
In short: when the antenna is the product, you can’t outsource confidence to hope. You have to budget it.
Conclusion: treat OTA uncertainty like a system requirement
The strongest multi-site OTA labs don’t chase perfect numbers; they build a repeatable measurement system with a transparent uncertainty budget. That budget is what allows you to move faster—because it tells you when a 0.7 dB change is real, and when it’s just your chamber having a bad day.
If you’re building or scaling an OTA facility—especially for FR2 devices, phased arrays, or satellite terminals—Novocomms can help you translate standards expectations into a practical uncertainty plan, and then engineer the hardware and processes that keep it stable across sites.
Contact Novocomms to discuss your OTA lab architecture, correlation strategy, or terminal/antenna verification needs: https://novocomms.com/contact-us/.