On a crowded battlefield, the radio that fails is rarely “broken” — it’s often being degraded. SDR desense is the quiet killer in tactical VHF/UHF: a nearby transmitter, a co-sited datalink, a vehicle inverter, or even another friendly net raises the effective noise floor until weak signals simply disappear. If you’re an EW test engineer, you’ll recognise the symptom: sensitivity that looks fine in isolation, then collapses the moment you introduce real-world RF density.
This post sets out a practical, repeatable approach to measuring blocking and desense in tactical SDRs, and—crucially—how to interpret the results so design teams can act on them.
Blocking vs SDR desense: what you’re actually measuring
Engineers sometimes use “blocking” and “desense” interchangeably. On the bench, it pays to separate them because the mechanisms (and fixes) differ.
Blocking is large-signal overload. A strong unwanted signal drives the receiver front end (LNA, mixer, preselector, ADC, or DDC chain) into compression or clipping. The result is a reduction in gain for the wanted channel, a burst of intermodulation, or AGC action that effectively turns your receiver down.
Desensitisation is a performance loss: the wanted signal must be increased to maintain a target quality (SINAD, BER, PER, EVM) in the presence of an interferer. Desense can be caused by blocking, but also by reciprocal mixing (LO phase noise), spurious responses, internal harmonics, and broadband noise folding in wideband SDR architectures.
A useful way to frame modern measurements is the industry push towards receiver “resilience” metrics. Recent work highlighted in the NTIA Receiver Interference Immunity discussions points to methodologies aligned with ECC Report 356 (approved May 2024) that combine noise floor, acceptable desense, and interferer leakage into a comparable resilience figure. You don’t need to adopt the full framework to benefit from the mindset: measure the receiver as a system under interference, not just a sensitivity number on a datasheet.
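To make that mindset concrete, here is a minimal sketch (an illustration of the noise-rise view of desense, not the ECC Report 356 methodology itself) showing how in-channel interference power-sums with the receiver noise floor. The function name is my own; only the arithmetic is standard.

    import math

    def desense_from_noise_rise(noise_floor_dbm: float, interference_dbm: float) -> float:
        """Desense (dB) from noise-like interference landing in-channel.

        Power-sums the receiver noise floor with the in-channel interference
        and returns the rise in the effective noise floor.
        """
        effective_floor = 10 * math.log10(
            10 ** (noise_floor_dbm / 10) + 10 ** (interference_dbm / 10)
        )
        return effective_floor - noise_floor_dbm

    # An interferer equal to the noise floor costs ~3 dB of sensitivity;
    # one 6 dB below it still costs ~1 dB.
    print(desense_from_noise_rise(-120.0, -120.0))  # ~3.0
    print(desense_from_noise_rise(-120.0, -126.0))  # ~1.0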
Why tactical SDRs are more vulnerable than legacy superhets
SDRs win on agility and waveform flexibility, but tactical SDR implementations often trade analogue selectivity for bandwidth and SWaP. Three trends are making SDR desense more prominent in fielded platforms:
1) Wider RF front ends and pooled apertures. Open-architecture programmes (CMOSS/SOSA-aligned approaches) encourage shared RF resources and modular cards. That improves upgradeability, but co-site RF becomes a first-order design constraint: multiple emitters, shared apertures, and dense cabling raise the probability of front-end overload.
2) Direct-sampling and wideband DDC chains. When the ADC sees too much out-of-band energy, it doesn’t matter that your digital filters are excellent — you’ve already paid the price in quantisation noise, clipping, or intermod products. “Digital selectivity” cannot rescue analogue overload.
3) Phase-noise-limited scenarios. Strong off-channel signals mix with LO phase noise and appear as in-channel noise (reciprocal mixing). This is a classic desense pathway that can look like “mystery noise” unless you measure it explicitly; the sketch below shows the quick sanity check.
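A quick way to check whether a close-in blocker will be phase-noise limited is to convert the LO phase noise at the blocker offset into an equivalent in-channel noise power. A minimal sketch, assuming the phase noise is roughly flat across the channel bandwidth at that offset:

    import math

    def reciprocal_mixing_noise_dbm(blocker_dbm: float,
                                    phase_noise_dbc_hz: float,
                                    channel_bw_hz: float) -> float:
        """In-channel noise (dBm) from a blocker mixing with LO phase noise.

        Assumes the LO phase noise is approximately flat across the channel
        bandwidth at the blocker's frequency offset.
        """
        return blocker_dbm + phase_noise_dbc_hz + 10 * math.log10(channel_bw_hz)

    # A -20 dBm blocker against -130 dBc/Hz phase noise in a 25 kHz channel
    # lands roughly -106 dBm of noise in-channel: enough to bury a weak
    # wanted signal with nothing in the front end actually "overloaded".
    print(reciprocal_mixing_noise_dbm(-20.0, -130.0, 25e3))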
SDR desense test basics: define the metric before you touch the generator
A blocking/desense campaign goes wrong most often at the requirements level. Before setting up, agree these items with the product or systems team:
• Wanted-signal performance criterion: SINAD (analogue FM), BER/PER (digital voice and data), EVM (wideband waveforms), or a waveform-specific decode threshold.
• Reference sensitivity point: e.g., wanted signal at MDS or “12 dB SINAD” level, then measure the additional wanted signal required to maintain the criterion under interference. (This becomes your desense in dB; the sketch after this list works a point through.)
• Interferer type: CW, modulated carrier (often more realistic), or noise-like emissions representing adjacent friendly nets.
• Offset plan: On-channel, adjacent-channel, and far-out offsets. Legacy standards remain useful: ETSI EN 300 086, for example, defines blocking/desensitisation as the ratio of unwanted to wanted signal at a specified degradation point, and calls out a blocking ratio not less than 84 dB across stated ranges (except at spurious response points). Even if you’re not building to ETSI, the methodology is sound.
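Once those items are agreed, the bookkeeping is simple. A sketch of one measurement point, with the desense and an ETSI-style blocking ratio derived from the agreed reference levels (the record layout is illustrative):

    from dataclasses import dataclass

    @dataclass
    class DesensePoint:
        """One point from a two-signal desense test (all levels in dBm)."""
        offset_hz: float                  # interferer offset from the wanted channel
        interferer_dbm: float             # interferer level at the defined degradation
        reference_sensitivity_dbm: float  # wanted level meeting the criterion, no interferer
        wanted_dbm: float                 # wanted level meeting the criterion, with interferer

        @property
        def desense_db(self) -> float:
            # Extra wanted signal needed to hold the criterion under interference.
            return self.wanted_dbm - self.reference_sensitivity_dbm

        @property
        def blocking_ratio_db(self) -> float:
            # ETSI-style ratio of unwanted to wanted signal at the degradation point.
            return self.interferer_dbm - self.wanted_dbm

    point = DesensePoint(offset_hz=1e6, interferer_dbm=-23.0,
                         reference_sensitivity_dbm=-117.0, wanted_dbm=-114.0)
    print(point.desense_db, point.blocking_ratio_db)  # 3.0 dB desense, 91.0 dB ratio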
Test setups for SDR desense and blocking (bench methods that hold up in design reviews)
At minimum you need two independent RF sources (wanted + interferer), a combiner with known isolation, and a way to measure the receiver’s demodulated performance. The details matter.
1) Two-signal conducted test (the workhorse)
Setup: Two RF generators → calibrated pads/attenuators → high-isolation combiner → DUT RF input. Monitor DUT audio/data output; log SINAD/BER/PER vs levels.
Method: Set the wanted signal to the reference point (e.g., just meeting BER threshold). Sweep interferer level upward at a given frequency offset until the performance criterion degrades by a defined amount (e.g., 1% PER increase or 3 dB SINAD drop). Record the interferer level and compute desense.
What this exposes: LNA/mixer compression, AGC-induced degradation, ADC headroom limits, and reciprocal mixing if your offset plan includes close-in blockers.
Practical notes: Use enough external attenuation that the generators and combiner behave linearly; otherwise you’ll measure your test set’s intermod rather than the radio’s. Verify combiner isolation and include it in your uncertainty budget.
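The sweep itself automates naturally. A minimal sketch of the level-stepping logic, assuming hypothetical set_level_dbm and measure_per wrappers around your generator and DUT interfaces (SCPI over pyvisa, a vendor API, whatever your bench provides); the wanted signal is assumed to be parked at its reference level first:

    def find_blocking_level(interferer, dut, start_dbm=-80.0, stop_dbm=-10.0,
                            step_db=1.0, per_limit=0.01):
        """Raise the interferer until the DUT's PER crosses the agreed limit.

        `interferer` and `dut` are hypothetical bench wrappers; swap in
        whatever your own setup exposes.
        """
        level = start_dbm
        while level <= stop_dbm:
            interferer.set_level_dbm(level)
            per = dut.measure_per(packets=1000)
            if per > per_limit:
                # Back off one step: the last level that still met the criterion.
                return level - step_db, per
            level += step_db
        return None, None  # criterion never violated within the sweep range

    # Run this once per offset in the offset plan, and fold the combiner and
    # pad calibration into the logged interferer level.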
2) Adjacent-channel and “near-far” realism
If your tactical SDR carries both voice and data, you need both narrowband and wideband adjacent-channel cases. Adjacent-channel selectivity can look acceptable on paper but collapse in SDRs when the interferer is a wideband OFDM-like emission or when front-end filtering is shallow.
Run adjacent tests with modulated interferers representative of fielded waveforms (not just CW), and capture the receiver state: AGC value, front-end gain mode, ADC level flags, and any internal “overload” telemetry. Those internal indicators are gold during root-cause analysis.
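Capturing that state is cheap to automate. A sketch, assuming the radio exposes AGC and overload telemetry through some query interface; the field names here are illustrative, not a real API:

    import csv
    from dataclasses import dataclass, asdict

    @dataclass
    class ReceiverState:
        """Per-point DUT telemetry; map these fields onto what your radio exposes."""
        offset_hz: float
        interferer_dbm: float
        agc_db: float             # AGC gain/attenuation setting during the dwell
        frontend_gain_mode: str   # e.g. "high_gain" vs "bypass"
        adc_overload: bool        # any clip/over-range flag asserted
        per: float

    def log_states(points: list[ReceiverState], path: str) -> None:
        # One CSV row per test point: this is the evidence you want in the
        # root-cause meeting, not just the final desense figure.
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(asdict(points[0]).keys()))
            writer.writeheader()
            writer.writerows(asdict(p) for p in points)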
3) Spurious response and harmonic mixing checks
Don’t stop at a single interferer frequency. Sweep the interferer across a wide range to identify discrete spurious response points. The NTIA interference-immunity discussion notes how strong spurious signals and internal harmonics can create in-band products via mixing with LO harmonics; in practice, you’ll often find a handful of “death frequencies” that dominate operational complaints.
Log these points and feed them directly into frequency planning guidance and filter design updates.
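A coarse sweep followed by fine re-tests around each hit is usually enough. A sketch, reusing the same hypothetical interferer and dut wrappers as the level sweep above:

    def find_spurious_responses(interferer, dut, f_start_hz, f_stop_hz,
                                step_hz=100e3, level_dbm=-30.0, per_limit=0.01):
        """Sweep a fixed-level interferer in frequency; flag 'death frequencies'.

        Returns the interferer frequencies at which the DUT failed its criterion.
        """
        interferer.set_level_dbm(level_dbm)
        hits = []
        f = f_start_hz
        while f <= f_stop_hz:
            interferer.set_frequency_hz(f)
            if dut.measure_per(packets=500) > per_limit:
                hits.append(f)  # candidate spurious response / harmonic-mix point
            f += step_hz
        return hits

    # Re-test each hit at finer frequency steps to locate the true centre of
    # the response before it goes into the frequency-planning guidance.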
4) Conducted susceptibility and platform coupling (MIL-STD thinking)
Many “RF desense” problems arrive through wiring, not the antenna connector. Vehicle power, Ethernet, handset leads, and intercom harnesses can inject broadband rubbish into the receiver reference planes. While your programme may not certify to it, MIL-STD-461 conducted susceptibility methods (CS114/CS115/CS116 families) are a useful template: inject controlled disturbances onto cables and quantify receiver degradation. That gives you an evidence-backed split between front-end RF overload and platform EMC coupling.
Interpreting results: diagnosing the desense mechanism
Once you have curves (wanted level required vs interferer level and offset), the shape tells you where to look; a rough numerical triage is sketched after this list:
• Sharp knee, then rapid collapse: classic compression or ADC clipping. Check front-end gain distribution, LNA P1dB, and any preselector bypass modes.
• Gradual degradation close-in: reciprocal mixing / phase-noise limited behaviour. This is often solved by cleaner LO generation, different synthesiser architecture, or tighter analogue filtering ahead of the first conversion.
• Discrete “holes” at specific interferer frequencies: spurious responses or harmonic mixing. Investigate LO harmonics, mixer products, and shielding/grounding around high-level digital clocks.
• Desense only when certain I/O is connected: cable-borne coupling. Revisit filtering on power and data ports, bonding, and enclosure partitioning.
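For a first pass over those shapes, a crude slope check on the desense-vs-interferer-level curve goes a long way. A heuristic sketch, not a substitute for engineering judgement; the thresholds are illustrative:

    def classify_desense_curve(interferer_dbm: list[float],
                               desense_db: list[float]) -> str:
        """Rough triage of a desense curve taken at one frequency offset.

        Slopes settling near 1 dB/dB suggest a noise-limited mechanism such
        as reciprocal mixing; an abrupt jump to steep slopes suggests
        compression or ADC clipping.
        """
        slopes = [
            (desense_db[i + 1] - desense_db[i]) /
            (interferer_dbm[i + 1] - interferer_dbm[i])
            for i in range(len(desense_db) - 1)
        ]
        if not slopes:
            return "not enough points"
        if max(slopes) > 3.0:
            return "sharp knee: suspect compression or ADC clipping"
        if 0.7 < slopes[-1] < 1.3:
            return "~1 dB/dB: suspect reciprocal mixing / noise-limited"
        return "inconclusive: widen the sweep and check spurious responses"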
Design levers that actually move the needle (and where Novocomms Space & Defence fits)
The fixes for SDR desense are rarely glamorous, but they’re effective when applied with discipline:
Front-end selectivity and linearity: Better preselection (switched filters, cavity/SAW/BAW where appropriate), higher-linearity LNAs/mixers, and careful gain distribution to protect the ADC. In rugged tactical hardware, the mechanical implementation—screening cans, gasket strategy, and thermal stability—matters as much as the schematic.
Co-site isolation: Antenna placement, polarisation choices, and passive isolation can buy you tens of dB before you touch the receiver design. This is where antenna and RF system engineering must be done as a single problem, not two separate subcontract packages.
Platform EMC hardening: Filters on power and I/O, correct bonding, and partitioned enclosure design reduce conducted and radiated coupling that masquerades as receiver weakness.
At Novocomms Space & Defence, we’re typically brought in when teams need rugged RF and antenna subsystems that survive dense RF environments: front-end filtering strategies, co-site antenna solutions, and integrated RF design support for mission-critical tactical networks. The point isn’t just to “meet a spec” — it’s to keep the link up when the spectrum is hostile and the platform is electrically noisy.
Conclusion: measure like the battlefield, not like a datasheet
Blocking and SDR desense are not edge cases anymore; they’re predictable outcomes of wideband SDRs, dense co-site operation, and open-architecture integration. A solid measurement plan starts with a clear degradation metric, uses two-signal testing with realistic interferers, and expands into spurious and platform-coupling checks. The payoff is practical: you can pinpoint whether you need more analogue selectivity, more linearity, cleaner synthesis, or simply better co-site isolation.
If you’re tackling desense in a tactical VHF/UHF SDR programme — from early architecture through qualification and field issues — speak to Novocomms Space & Defence. Contact us here: https://novocomms.space/contact-us/.