Abstract: Tactical radios are increasingly wideband, multi-channel and co-sited with other emitters, making SDR desense and blocking a day-to-day reality rather than an edge case. This post sets out practical, repeatable lab methods to measure desensitisation and blocking in VHF/UHF SDRs, explains the dominant mechanisms (ADC overload, reciprocal mixing, front-end compression), and highlights mitigation options—from test discipline to filtering and antenna/RF distribution strategies.

Introduction: Why This Suddenly Hurts More

If you’re testing a modern tactical VHF/UHF SDR, you’ll have seen it: sensitivity that looks excellent on a quiet bench, then collapses when a nearby transmitter keys up, a wideband payload powers on, or a second channel is enabled. That’s SDR desense in the real world—receiver performance degrading in the presence of strong off-channel energy—and it’s often accompanied by classic receiver blocking.

The industry context isn’t helping. NATO’s recent work on contested electromagnetic environments calls out the reality of congested spectrum and the need for radios that can adapt at pace, with resilient links and agile spectrum use. Meanwhile platform integration density is rising (open architectures, tighter packaging, more digital payloads), and with it the EMC/coexistence burden: emissions measured at module level rarely match the full product once enclosures, cabling, apertures and resonances get involved. The net result is simple: you can’t treat blocking/desense as a spec-sheet footnote. You have to measure it the way the radio will be used.

Blocking vs Desensitisation: Pick the Right Yardstick

In day-to-day lab language, “blocking” and “desense” get mixed together. For test engineers, it pays to be precise because the setup and pass/fail criteria differ.

Desensitisation is the increase in required wanted-signal level to maintain a defined performance point (e.g., BER/FER, audio SINAD, packet error rate) in the presence of an interferer. It’s usually reported in dB as:

Desense (dB) = Wanted level with interferer − Wanted level without interferer

Blocking is a specific desense condition driven by strong off-channel signals (often far enough away that selectivity isn’t the primary limiter). Standards and vendor practices vary, but you’ll often see a blocking point defined at 1 dB desense, 3 dB desense, or a specified BER/FER threshold.
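In code form the bookkeeping is trivial, but automating it means every build gets scored the same way. A minimal sketch, assuming a 3 dB blocking point (one common choice, not a universal standard):

```python
def desense_db(wanted_with_dbm, wanted_without_dbm):
    """Desense (dB) = wanted level with interferer - wanted level without."""
    return wanted_with_dbm - wanted_without_dbm

def is_blocked(desense, blocking_point_db=3.0):
    """Flag the blocking condition at a chosen desense threshold
    (1 dB and 3 dB are both in common use)."""
    return desense >= blocking_point_db

# Baseline sensitivity -118 dBm; with the blocker on, -112 dBm is needed:
# 6 dB of desense, which trips a 3 dB blocking point.
d = desense_db(-112.0, -118.0)
```

Keeping the blocking point as an explicit parameter makes it easy to re-score the same raw data against different pass/fail definitions later.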

Adjacent-channel desense is frequently tied to Adjacent Channel Rejection (ACR), with well-established methods in land-mobile specifications (e.g., TIA procedures). The key takeaway from those methods is still current: the interferer source matters. A “clean” signal generator gives one answer; a real transmitter with a higher noise skirt can give a worse—and more operationally relevant—answer.

Where SDR Desense Really Comes From (and why VHF/UHF is unforgiving)

On tactical SDRs the failure mode is often not the narrow IF filter anymore; it’s dynamic range management across a wide RF/IF/ADC chain. Typical culprits:

1) Front-end compression (LNA/mixer)
A strong off-channel signal drives gain stages into compression. Even before obvious compression, you can see gain “softening” and noise figure worsening. In VHF/UHF co-site environments, this is common when high-power transmitters are physically close or share antenna/RF distribution.

2) ADC overload / digital front-end headroom
Wideband SDRs often digitise a large chunk of spectrum. A single strong interferer can steal converter headroom, raising quantisation noise and reducing effective bits for the wanted channel. You may also see spur generation if digital scaling/AGC is not robust.

3) Reciprocal mixing (LO phase noise)
A strong blocker mixes with LO phase noise and “spreads” into your wanted channel, elevating the noise floor. This is particularly nasty because it can look like an inexplicable sensitivity loss with no obvious compression.

4) In-product EMC and self-interference
As highlighted in recent EMC guidance, enclosure resonances, apertures, cable routing and grounding can shift emissions and susceptibility when you integrate the full product. In practice, a platform’s own digital subsystems, DC/DC converters, displays, Ethernet/USB, or nearby RF modules can create in-band or near-band noise that behaves like a continuous interferer.
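The first three mechanisms each have a first-order budget formula worth keeping at hand. A minimal sketch in Python; all the numbers in the example are illustrative, not vendor data:

```python
import math

def noise_floor_dbm(nf_db, bw_hz):
    """Thermal floor at the demodulator: -174 dBm/Hz + NF + 10*log10(BW).
    A blocker that pushes front-end NF from 6 dB to 9 dB costs 3 dB of
    sensitivity directly (mechanism 1)."""
    return -174.0 + nf_db + 10 * math.log10(bw_hz)

def adc_wanted_snr_db(bits, blocker_backoff_db, wanted_below_blocker_db):
    """Ideal-ADC SNR left for the wanted channel when AGC keeps the blocker
    'blocker_backoff_db' below full scale and the wanted signal sits
    'wanted_below_blocker_db' below the blocker (mechanism 2)."""
    return 6.02 * bits + 1.76 - blocker_backoff_db - wanted_below_blocker_db

def reciprocal_mixing_floor_dbm(blocker_dbm, lo_pn_dbc_hz, bw_hz):
    """Noise the blocker deposits in-channel via LO phase noise:
    P_blocker + L(offset) + 10*log10(BW) (mechanism 3)."""
    return blocker_dbm + lo_pn_dbc_hz + 10 * math.log10(bw_hz)

# Example: a -20 dBm blocker against -130 dBc/Hz LO phase noise in a
# 25 kHz channel raises the in-channel floor to about -106 dBm -- far
# above the roughly -124 dBm thermal floor of a 6 dB NF receiver in
# the same bandwidth.
```

Running the reciprocal-mixing number first often explains the "inexplicable" sensitivity loss: the blocker never compresses anything, yet the floor has moved by nearly 20 dB.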

Test Planning: Define “Wanted”, “Interferer”, and “Failure” Before Touching a Generator

Good desense data starts with three definitions:

Wanted signal: frequency, modulation, bandwidth, and performance metric. For tactical SDRs, BER/FER at a specified waveform setting is usually more meaningful than SINAD, but both have their place depending on mode.

Interferer: frequency offset(s), modulation/noise characteristics, and sweep strategy. Don’t only test a single tone at a single offset. VHF/UHF environments include narrowband FM, wideband noise-like emissions, and transmitters with realistic phase noise skirts.

Failure criterion: e.g., “BER > 10⁻³”, “audio SINAD drops below 12 dB”, “packet error rate > 1%”, or “link margin degrades by 3 dB”. If you change the failure definition mid-test, your data won’t compare across builds.

Finally, decide whether you’re validating a requirement, diagnosing a mechanism, or building a coexistence budget. Each goal implies different offsets, power ranges and repeat counts.

Core Lab Setups to Measure SDR Desense (repeatably)

1) Conducted two-signal desense test (the workhorse)

Objective: quantify desense vs interferer level at a given offset.

Setup: two RF sources into a high-isolation combiner, then into the receiver port (or via the radio’s RF distribution path if that’s what you want to validate). Use good pads/isolators to control mismatch and reduce source pulling. Calibrate power at the radio connector, not at the generator front panel.

Method:

1) Measure baseline sensitivity: with the interferer off, reduce the wanted level until you hit the failure criterion. Record this wanted level (Psens).

2) Turn on interferer at a defined offset (e.g., ±25 kHz, ±100 kHz, ±1 MHz, ±10 MHz) and increase interferer power in steps.

3) At each interferer step, re-find the wanted level required to meet the same performance point. Record desense in dB.
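Automated, the three steps above reduce to a search loop. The sketch below substitutes a toy pass/fail model (`meets_criterion`) for the real BER/FER measurement; in a real rig that function would wrap instrument-driver calls, and the model's parameters are illustrative only:

```python
def meets_criterion(wanted_dbm, interferer_dbm,
                    sens_floor_dbm=-118.0,
                    desense_per_10db=2.0,
                    interferer_threshold_dbm=-40.0):
    """Toy radio model: effective sensitivity degrades 2 dB for every
    10 dB of interferer power above a -40 dBm threshold. Replace with
    a real BER/FER measurement in practice."""
    excess = max(0.0, interferer_dbm - interferer_threshold_dbm)
    return wanted_dbm >= sens_floor_dbm + desense_per_10db * excess / 10.0

def required_wanted_dbm(interferer_dbm, start_dbm=-80.0, step_db=0.5):
    """Step the wanted level down until the failure criterion is about
    to trip; the returned level is sensitivity under this interferer."""
    level = start_dbm
    while meets_criterion(level - step_db, interferer_dbm):
        level -= step_db
    return level

baseline = required_wanted_dbm(interferer_dbm=-200.0)  # interferer effectively off
for p_int in (-40.0, -30.0, -20.0, -10.0):
    desense = required_wanted_dbm(p_int) - baseline
    print(f"interferer {p_int:6.1f} dBm -> desense {desense:.1f} dB")
```

Note the step size sets your desense resolution: a 0.5 dB wanted-level step means you cannot report desense finer than 0.5 dB, so choose it against your pass/fail granularity.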

What to watch: generator phase noise and noise floor. A “worse” interferer (higher noise skirt) can create more reciprocal-mixing desense than a pristine lab generator at the same carrier power. If you only ever test with a very clean source, you can under-call field issues.

2) Blocking curve vs frequency offset

Objective: map the radio’s vulnerability across offsets and identify whether the limiter is selectivity, linearity, or LO phase noise.

Method: fix wanted at a modest margin above baseline sensitivity (e.g., +3 to +10 dB) and sweep interferer frequency across offsets while stepping interferer power until the failure criterion is met. Plot required interferer power vs offset. A “valley” close-in often points to reciprocal mixing; a “shoulder” further out can indicate front-end compression/ADC headroom limits; sharp improvements at certain offsets can indicate preselector action.
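The sweep itself is a nested loop: for each offset, raise the interferer until the failure criterion trips, and record that power. A sketch with a toy susceptibility model standing in for the real link measurement (the model's slope and ceiling are invented for illustration):

```python
def link_ok(interferer_dbm, offset_khz):
    """Toy susceptibility: tolerable interferer power improves with
    offset (selectivity) but saturates at a compression-limited
    ceiling further out. Replace with a real pass/fail measurement."""
    tolerable_dbm = min(-10.0, -60.0 + 0.5 * abs(offset_khz) / 10.0)
    return interferer_dbm < tolerable_dbm

def blocking_level_dbm(offset_khz, start_dbm=-90.0, step_db=1.0, max_dbm=0.0):
    """Highest interferer power the link survives at this offset."""
    level = start_dbm
    while level < max_dbm and link_ok(level + step_db, offset_khz):
        level += step_db
    return level

# Required blocker power vs offset: the close-in "valley" and the
# far-out plateau are exactly the features to read off a real plot.
curve = {off: blocking_level_dbm(off) for off in (25, 100, 1000, 10000)}
```

In the toy model the 1 MHz and 10 MHz points sit at the same level, mimicking the compression/ADC-headroom "shoulder" described above; on real hardware, the shape of this curve is the diagnostic.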

3) Multi-signal / co-site realism (recommended for tactical radios)

Objective: represent the platform: multiple emitters, multiple receivers, and RF distribution components.

Introduce two or three interferers (e.g., a strong nearby VHF transmitter, a UHF datalink, plus a broadband noise-like source). Add realistic RF distribution (multicouplers, diplexers, splitters) and include antenna-port protection devices as fitted. This is where issues like intermodulation in external components, poor port isolation, and unintended coupling show up.
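Before building the multi-signal rig, it pays to predict where third-order intermodulation products from your chosen interferer pair will land. A minimal sketch; the frequencies are illustrative:

```python
def im3_products_mhz(f1_mhz, f2_mhz):
    """Third-order intermod products 2*f1 - f2 and 2*f2 - f1 -- the
    pair that falls closest to the originating carriers and most
    often lands in-band."""
    return (2 * f1_mhz - f2_mhz, 2 * f2_mhz - f1_mhz)

# Two emitters at 52 MHz and 56 MHz put IM3 at 48 MHz and 60 MHz --
# both inside the 30-88 MHz VHF tactical band.
products = im3_products_mhz(52.0, 56.0)
```

If either product falls on a guarded receive channel, any nonlinearity in the distribution chain (not just the receiver) becomes a suspect.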

This is also where companies like Novocomms Space & Defence tend to get pulled in: not to “make the receiver better” in isolation, but to engineer the system—antenna placement, RF distribution, filtering and isolation—so the radio can achieve its theoretical performance on a real platform.

Common Measurement Traps (the ones that waste weeks)

Not controlling the interferer’s spectral purity: If the interferer’s noise floor is high, you may be measuring the generator, not the receiver. Conversely, if it’s too clean, you may miss reciprocal-mixing vulnerabilities that a real transmitter would expose.

Ignoring combiner isolation and reverse coupling: Poor isolation lets one generator modulate the other or creates unexpected IM products. Use isolators/pads and verify with a spectrum analyser at the combiner output.

Forgetting the radio’s own AGC modes: SDRs may behave very differently depending on AGC settings, gain states, preselector modes, and whether other channels are active. Lock configurations for comparability.

Testing the radio, not the integration: Recent EMC guidance is blunt: module-level behaviour changes in the final product. If your end users see desense in-vehicle or on an aircraft, you need to replicate the cabling, enclosure, grounding and nearby emitters—or you’ll chase ghosts.

Mitigations that Actually Move the Needle

Front-end filtering where it matters: A well-chosen preselector or notch can prevent LNA/mixer/ADC overload. In co-site builds, the best filter is often not “sharper”; it’s “placed correctly” (ahead of vulnerable gain, with the right power handling and minimal insertion loss in the wanted band).

RF distribution done as an RF design, not an afterthought: Multicouplers, diplexers and splitters must be specified for isolation, linearity, and IM performance. Cheap distribution can create IM products that land right in-band.

LO quality and gain planning: If reciprocal mixing dominates, you won’t fix it with a marginally better cavity filter. You fix it with oscillator phase noise improvements, gain distribution, and ensuring blockers don’t sit at levels where LO noise becomes your in-band floor.

Platform-level antenna strategy: Separation, polarisation choices, and pattern control can deliver “free” isolation. In many tactical platforms, a few dB of isolation is worth more than an expensive receiver tweak.
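A first-order estimate of what separation alone buys you comes from the Friis free-space relation, 20·log10(4πd/λ). A sketch; real platform isolation can be better or worse once patterns, polarisation, and structure are involved, so treat this as a starting floor, not a guarantee:

```python
import math

def free_space_isolation_db(distance_m, freq_mhz):
    """First-order antenna-to-antenna isolation from free-space
    path loss: 20*log10(4*pi*d/lambda)."""
    wavelength_m = 299.792458 / freq_mhz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength_m)

# 3 m of separation at 300 MHz yields roughly 31-32 dB of isolation --
# often the cheapest dB available on the platform.
iso = free_space_isolation_db(3.0, 300.0)
```

Comparing this number against the blocker levels from your conducted tests tells you quickly whether placement alone can solve a co-site problem or whether filtering has to carry the rest.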

Novocomms Space & Defence supports these mitigations with ruggedised antenna systems and RF engineering that’s built for harsh, co-sited environments—where reliability, environmental tolerance and predictable RF behaviour matter as much as headline gain figures.

Conclusion: Treat SDR Desense as a Systems Test, Not a Receiver Test

Measuring blocking and desense in tactical SDRs is not difficult, but it is unforgiving: you need disciplined definitions, calibrated setups, and interferers that represent the real electromagnetic environment. With spectrum congestion and integration density continuing to rise—as highlighted by NATO’s focus on contested electromagnetic environments—your test plan must evolve from single-tone bench checks to multi-signal coexistence characterisation.

If you’re seeing SDR desense in the lab or on-platform and need help turning symptoms into engineering actions—front-end filtering, RF distribution, antenna strategy, or rugged integration—Novocomms Space & Defence can support from concept through test and qualification.

Contact Novocomms Space & Defence: https://novocomms.space/contact-us/