INDIGO AirGuard

Multi-Modal Drone Detection for European Cities

INDIGO Team — EUDIS Space & Defence Hackathon 2026 • March 2026

Abstract

INDIGO AirGuard is a multi-modal drone detection system that transforms civilian smartphones into a distributed acoustic sensor network, augmented by fixed GPS-PPS localization nodes and passive bistatic radar. The system addresses the critical gap in European low-altitude air defence against small UAS threats such as the Shahed-136. Three technical innovations — Acoustic Channel Impulse Response (ACIR) anomaly detection, a distributed CFAR cascade achieving 99.97% collective detection from individually unreliable sensors, and passive bistatic radar using DVB-T illuminators — enable city-scale coverage at orders-of-magnitude lower cost than traditional radar. Output is NATO-standard Cursor-on-Target (MIL-STD-6090) for military C2 integration, plus civilian push alerts. Privacy is architectural: on-device inference ensures zero raw audio transmission. A working prototype demonstrates the full pipeline from acoustic detection through tracking to ATAK alert in under 2 seconds.

Keywords: drone detection, counter-UAS, acoustic sensing, passive bistatic radar, sensor fusion, CFAR, ATAK, CoT

1. The Threat Landscape

On any given night in 2026, between 50 and 100 Shahed-136 one-way attack drones strike Ukrainian cities. Odesa, a primary target, is 400 km from Bucharest. The drone threat to European cities is not hypothetical — it is a nightly operational reality separated from EU territory by a single border crossing.

The threat is already here. In February 2026, Romanian F-16s were scrambled to intercept unidentified drones near NATO airspace. Throughout 2024–2025, Estonian and Latvian airspace was violated by unidentified UAVs. In January 2025, the NATO Secretary General called drone defence “the most immediate gap in Alliance capability.”

Why traditional radar cannot solve this. The Shahed-136 has a radar cross-section of approximately 0.1 m² — a fiberglass-bodied delta wing with a 2.5-metre wingspan. At low altitude in ground clutter, conventional radar cannot reliably distinguish it from birds, weather returns, or terrain. Deploying radar coverage sufficient to protect a single European city costs EUR 50–100M and takes 18–36 months. The physics are unfavourable: radar was designed for aircraft and cruise missiles, not sub-100 kg drones flying at 100–300 metres altitude.

The gap is total. The number of European cities with civilian drone early warning is zero. The number of NATO eastern flank nations with a low-altitude urban detection network is zero. Acoustic detection exploits an entirely different physical modality — the engine noise that radar cannot suppress — and does so at a fraction of the cost.

Figure 1: Detection range degrades with urban noise — but node density compensates

The fundamental insight is that detection range per sensor is not what matters. What matters is detection probability across a network. A system of individually weak sensors, distributed at sufficient density, can achieve near-certain collective detection. This is the principle on which AirGuard is built.

2. System Architecture

Two-Tier Sensing Principle

AirGuard separates detection from localization across two hardware tiers, each performing the task its physics allows:

  • Tier 1 — Smartphones (DETECT): Civilian smartphones run on-device YAMNet TFLite inference on 960 ms audio buffers. The output is a binary detection event — drone-present or drone-absent. Zero raw audio leaves the device. Privacy is enforced by architecture, not by policy.
  • Tier 2 — Fixed GPS-PPS Nodes (LOCALIZE): Raspberry Pi 4 units with MEMS microphone arrays and GPS-PPS time synchronization achieve sub-microsecond timing accuracy, enabling TDoA triangulation to metre-level precision.
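
The 960 ms buffer in Tier 1 corresponds to YAMNet's expected input of 0.96 s of 16 kHz mono audio. A minimal sketch of the framing step (the non-overlapping hop and the function name are our illustrative choices, not the app's actual code):

```python
# Sketch of the Tier-1 capture step: slice a continuous 16 kHz mono
# stream into the 960 ms windows YAMNet expects (15,360 samples).
# The non-overlapping hop and function name are illustrative choices.
SAMPLE_RATE = 16_000                 # Hz, YAMNet's native rate
FRAME_SECONDS = 0.96                 # YAMNet input window
FRAME_SAMPLES = round(SAMPLE_RATE * FRAME_SECONDS)   # 15,360

def frames(stream, hop=FRAME_SAMPLES):
    """Yield successive FRAME_SAMPLES-long windows from a sample buffer."""
    for start in range(0, len(stream) - FRAME_SAMPLES + 1, hop):
        yield stream[start:start + FRAME_SAMPLES]

# Five seconds of audio yields five non-overlapping 960 ms frames.
windows = list(frames([0.0] * (SAMPLE_RATE * 5)))
```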

This separation is not a compromise — it is the optimal design. Smartphone GPS is accurate to roughly ±3 m, and NTP time synchronization jitters by ±10–50 ms; at the speed of sound, tens of milliseconds of clock error translate into metres of range-difference error, ruling out precise smartphone-based TDoA localization. Fixed nodes with GPS-PPS provide the sub-microsecond timing that smartphones cannot.
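
The timing budget is simple arithmetic: synchronization error multiplied by the speed of sound gives the range-difference error each sensor pair contributes to a TDoA fix. A back-of-envelope sketch:

```python
# Range-difference error caused by clock error in an acoustic TDoA
# system: delta_range = c_sound * delta_t. Values are illustrative.
C_SOUND = 343.0  # m/s, speed of sound at ~20 deg C

def range_error_m(timing_error_s):
    return C_SOUND * timing_error_s

ntp_low  = range_error_m(10e-3)  # 10 ms NTP jitter  -> ~3.4 m
ntp_high = range_error_m(50e-3)  # 50 ms NTP jitter  -> ~17 m
gps_pps  = range_error_m(1e-6)   # 1 us GPS-PPS sync -> ~0.3 mm
```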

Figure 2: Complete system architecture

Message Infrastructure

The system uses NATS JetStream as its message backbone, with four persistent streams:

Stream     | Topics                                                 | Retention      | Purpose
DETECTIONS | detection.acoustic.*, detection.rf.*, detection.acir.* | 1 hour, memory | Raw sensor events
TRACKS     | track.confirmed.*, track.tentative.*                   | 24 hours, file | Fused track state
ALERTS     | alert.priority.*                                       | 72 hours, file | Operator-facing alerts
HEALTH     | health.edge.*                                          | 1 hour, memory | Node heartbeats

Edge-to-Alert Pipeline

The end-to-end pipeline from acoustic event to operator alert executes in under 2 seconds:

  1. Edge node captures 960 ms audio buffer and runs YAMNet inference on-device (~50 ms)
  2. Detection event published to NATS (detection.acoustic.{node_id}) — no raw audio
  3. Fusion engine correlates events within a 2-second temporal window, checks spatial consistency
  4. Track manager creates or updates tracks via Extended Kalman Filter (state vector: position + velocity in 3D)
  5. CoT gateway emits MIL-STD-6090 Cursor-on-Target XML for ATAK consumption
  6. Dashboard renders real-time track on Leaflet map via NATS WebSocket
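
Step 5 can be illustrated with a minimal CoT event builder. This is a Python sketch, not the Go gateway; the UID scheme, the `a-h-A-M-F-Q` type code (hostile air track, drone), the `how` code, and the error figures are all illustrative assumptions:

```python
# Minimal Cursor-on-Target event builder (illustrative sketch only;
# the real gateway is written in Go). Field choices such as the type
# code "a-h-A-M-F-Q" and how="m-f" are assumptions, not the deployed values.
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

def cot_event(uid, lat, lon, hae, stale_s=60):
    now = datetime.now(timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    ev = ET.Element("event", {
        "version": "2.0",
        "uid": uid,
        "type": "a-h-A-M-F-Q",       # hostile air track, drone (assumed)
        "how": "m-f",                # machine-generated, fused (assumed)
        "time": now.strftime(fmt),
        "start": now.strftime(fmt),
        "stale": (now + timedelta(seconds=stale_s)).strftime(fmt),
    })
    ET.SubElement(ev, "point", {
        "lat": f"{lat:.6f}", "lon": f"{lon:.6f}", "hae": f"{hae:.1f}",
        "ce": "50.0", "le": "25.0",  # circular/linear error in metres (assumed)
    })
    return ET.tostring(ev, encoding="unicode")

xml = cot_event("airguard.track.0001", 44.4268, 26.1025, 250.0)
```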

Privacy is not a feature bolted onto this pipeline — it is a structural property. On-device inference means raw audio physically cannot leave the smartphone. The system is GDPR Article 25 compliant by design, not by policy enforcement.

3. Detection Theory: From Individually Useless to Collectively Certain

The Fundamental Problem

A single smartphone microphone in an urban environment achieves a signal-to-noise ratio of approximately 0 dB against a Shahed-136 at 300–500 metres. At 0 dB SNR with a false-alarm probability of Pfa = 10⁻³, the detection probability is approximately 1.8% per observation. This is individually useless.

AirGuard does not rely on a single phone.

Four-Level Cascade

Level | Unit    | Size               | Aggregation Rule            | Purpose
L0    | Phone   | Single device      | CA-CFAR, adaptive threshold | Local noise adaptation
L1    | Cell    | 500 m × 500 m      | K-of-N voting (K = 3)       | Spatial coherence
L2    | Cluster | 2–5 km             | SPRT accumulation           | Temporal confirmation
L3    | City    | Full coverage area | Cross-cluster correlation   | City-wide awareness
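
The L1 voting rule can be quantified with a binomial tail. The per-phone probabilities in the sketch below are illustrative, but they show the asymmetry that makes K-of-N work: a triple coincidence of independent false alarms is vanishingly rare, while a real target that raises many phones at once survives the vote:

```python
# Binomial tail for K-of-N voting: probability that at least k of n
# independent sensors fire, given per-sensor firing probability p.
# The per-phone numbers below are illustrative, not measured values.
from math import comb

def at_least_k(n, k, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

cell_pfa = at_least_k(20, 3, 1e-3)  # 3 coincident false alarms: ~1e-6
cell_pd  = at_least_k(20, 3, 0.5)   # real target at per-phone Pd 0.5
```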

Collective Detection Performance

Combining N independent observations yields an effective SNR of up to SNR_eff = N × SNR_single, a gain of 10·log₁₀(N) dB in the ideal case. For 20 phones within detection range, each at SNR = 0 dB: SNR_eff = 13 dB, yielding combined Pd > 82%. At 100 phones (realistic for a city block at 0.5% participation), collective SNR reaches 20 dB, yielding Pd > 99.97% with Pfa < 10⁻⁶.
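
The headline numbers can be reproduced with a simplified Gaussian detection model, Pd = Q(Q⁻¹(Pfa) − √(N·SNR)) with Q the standard normal tail. The model is our assumption for this sketch, not the system's full analysis:

```python
# Simplified Gaussian detection model: Pd = Q(Q^-1(Pfa) - sqrt(N*SNR)),
# where Q is the standard normal tail probability. An illustrative
# model only; it reproduces the paper's headline figures.
from math import erfc, sqrt

def q(x):
    """Standard normal tail probability."""
    return 0.5 * erfc(x / sqrt(2.0))

def q_inv(p):
    """Invert q by bisection (q is monotonically decreasing)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if q(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

SNR = 1.0                                       # 0 dB per phone
pd_single = q(q_inv(1e-3) - sqrt(1 * SNR))      # ~1.8% for one phone
pd_city   = q(q_inv(1e-6) - sqrt(100 * SNR))    # > 99.97% for 100 phones
```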

Figure 3: ROC curves showing dramatic improvement with sensor count
Figure 4: Collective SNR improvement with node count
Figure 5: The four-level CFAR cascade architecture

4. Three Innovations

4.1 ACIR — Acoustic Channel Impulse Response

Conventional acoustic detection asks: “Can I hear the drone?” ACIR asks a different question: “Has anything changed in the acoustic environment?”

The concept transfers a well-established technique from RF sensing to the acoustic domain. In WiFi Channel State Information (CSI) based intrusion detection, a moving body perturbs the wireless propagation channel between a transmitter and receiver. Over 1,000 papers have been published on WiFi CSI sensing since 2015. The underlying physics is identical for acoustics.

Using the Farina log-swept-sine method (2000), the system continuously measures the acoustic impulse response between ambient sound sources and microphone receivers. A drone transiting the propagation path changes the channel — it shadows, diffracts, and scatters the ambient sound field. ACIR detects these perturbations using Wiener deconvolution and anomaly detection on the time-varying impulse response.
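
A toy version of the deconvolution step, assuming a known excitation and a short circular channel; a pure-Python DFT keeps the sketch self-contained (the production module is Rust):

```python
# Toy Wiener deconvolution: recover a short impulse response h from a
# recording y = x circularly convolved with h, given the excitation x.
# Pure-Python DFT for self-containedness; signals are illustrative.
import cmath

def dft(v):
    n = len(v)
    return [sum(v[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(v):
    n = len(v)
    return [sum(v[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def circ_conv(x, h):
    n = len(x)
    return [sum(x[(t - m) % n] * h[m] for m in range(n)) for t in range(n)]

def wiener_deconv(y, x, lam=1e-6):
    """H = Y X* / (|X|^2 + lam): regularised spectral division."""
    X, Y = dft(x), dft(y)
    H = [Yk * Xk.conjugate() / (abs(Xk) ** 2 + lam) for Xk, Yk in zip(X, Y)]
    return idft(H)

x = [1.0, -0.4, 0.9, 0.2, -0.7, 0.5, 0.1, -0.3]   # known excitation
h = [1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]      # "channel": direct + echo
h_est = wiener_deconv(circ_conv(x, h), x)
```

A drone on the propagation path shows up as a change in the estimated impulse response between successive measurements, which is what the anomaly detector watches.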

Figure 6: ACIR concept — detecting the acoustic shadow, not the acoustic source

Novelty assessment: No published prior art combines acoustic impulse response measurement with airborne target detection. The cross-domain transfer from WiFi CSI to acoustics for drone detection appears to be genuinely novel.

4.2 Distributed CFAR Cascade

The cascade is built from established components — CA-CFAR (Finn and Johnson, 1968), K-of-N voting (Varshney, 1997), and SPRT (Wald, 1945) — assembled into a novel four-level hierarchical architecture.
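
The L0 component in one dimension: the threshold at each cell is a multiple of the average power in surrounding training cells, so the false-alarm rate stays constant as the local noise floor shifts. A minimal sketch with illustrative window sizes and scale factor:

```python
# Minimal 1-D cell-averaging CFAR: threshold each cell against the mean
# power of its training cells, skipping guard cells around the cell
# under test. Window sizes and the scale factor are illustrative.
def ca_cfar(power, train=8, guard=2, scale=4.0):
    hits = []
    n = len(power)
    for i in range(train + guard, n - train - guard):
        left  = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + 1 + train]
        noise = sum(left + right) / (2 * train)
        if power[i] > scale * noise:
            hits.append(i)
    return hits

# Flat noise floor of 1.0 with a single strong return at index 20.
signal = [1.0] * 40
signal[20] = 10.0
```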

The original contribution is the avalanche sensitization protocol: a detection event at one phone simultaneously functions as a detection and as a command to lower thresholds at neighbouring phones within a confidence-dependent radius. This creates an information cascade that amplifies true detections and self-limits false alarms.
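
Schematically (every name, radius, and factor below is hypothetical, sketching the protocol rather than the deployed message format):

```python
# Sketch of avalanche sensitization: a detection doubles as a command
# to lower thresholds at neighbours within a confidence-dependent
# radius. Radii, factors, and floors are hypothetical values.
def sensitization_radius_m(confidence, base_m=300.0, max_m=1500.0):
    """Higher-confidence detections sensitize a wider neighbourhood."""
    return base_m + (max_m - base_m) * max(0.0, min(1.0, confidence))

def lowered_threshold(base_threshold, confidence, floor_ratio=0.5):
    """Lower a neighbour's threshold, clamped at a floor that
    self-limits false-alarm cascades."""
    factor = 1.0 - 0.5 * confidence          # up to 50% reduction
    return max(base_threshold * factor, base_threshold * floor_ratio)

r = sensitization_radius_m(0.8)      # 1260 m for a confident detection
t = lowered_threshold(4.0, 0.8)      # 2.4, clamped at >= 2.0
```

The floor is the self-limiting element: false alarms can lower neighbouring thresholds only so far, so a spurious cascade decays instead of amplifying.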

4.3 Passive Bistatic Radar

Using DVB-T television signals as illuminators of opportunity, AirGuard's RF subsystem processes the cross-ambiguity function between the direct-path reference signal and target-reflected echoes to extract bistatic range and Doppler velocity. No transmitter is required — the system is completely passive and covert.
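
In miniature: correlate the surveillance channel against delayed, Doppler-shifted copies of the reference and pick the peak of the resulting delay-Doppler surface. A toy discrete version with illustrative signal lengths (the production code is the Rust FFT module):

```python
# Toy cross-ambiguity function over a small delay-Doppler grid,
# using a circular signal model. Lengths and bins are illustrative.
import cmath, random

def caf(ref, surv, max_delay, dopplers):
    """|chi(tau, f)| for each candidate delay tau and Doppler bin f."""
    n = len(ref)
    out = {}
    for tau in range(max_delay + 1):
        for f in dopplers:
            acc = sum(surv[t] * ref[(t - tau) % n].conjugate()
                      * cmath.exp(-2j * cmath.pi * f * t / n)
                      for t in range(n))
            out[(tau, f)] = abs(acc)
    return out

random.seed(1)
n = 64
ref = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
true_delay, true_dopp = 5, 3
# Surveillance channel: delayed, Doppler-shifted copy of the reference.
surv = [ref[(t - true_delay) % n] * cmath.exp(2j * cmath.pi * true_dopp * t / n)
        for t in range(n)]
surface = caf(ref, surv, max_delay=8, dopplers=range(6))
peak = max(surface, key=surface.get)   # recovers (5, 3)
```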

Critical note on Shahed-136 and RF: The Shahed-136 navigates by INS/GLONASS. It has no continuous RF command link. We do not claim RF detection of Shahed control channels. Our passive bistatic radar detects radar cross-section returns — the physical reflection of the drone body — not control signals.

5. Signal Processing Core

Module        | Language   | Function
EKF           | Rust       | Extended Kalman Filter (Joseph stabilised form), 6-state vector
TDoA          | Rust       | Time-Difference-of-Arrival solver: Gauss-Newton + Levenberg-Marquardt
CFAR          | Rust       | CA / OS / GO-CFAR, adaptive threshold per noise environment
FFT           | Rust       | Cross-ambiguity function for PBR
ACIR          | Rust       | Farina swept-sine, Wiener deconvolution
Edge daemon   | Go         | Sensor node management, NATS publishing
Fusion server | Go         | Temporal correlation, multi-modal fusion, EKF tracking
CoT gateway   | Go         | MIL-STD-6090 Cursor-on-Target XML generation
YAMNet        | Python     | TFLite inference, acoustic classification
Dashboard     | TypeScript | React + Leaflet + NATS WebSocket

The complete codebase comprises ~15,000 lines of production code with 50 Rust DSP unit tests passing across all five modules.
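
As one example of the DSP core, the TDoA module's Gauss-Newton iteration can be sketched in 2-D with noise-free measurements and hypothetical node positions (the production solver is the 3-D Rust implementation with Levenberg-Marquardt damping):

```python
# Toy 2-D TDoA solver: Gauss-Newton on range differences relative to
# node 0. Node layout and source position are hypothetical.
from math import hypot

nodes = [(0.0, 0.0), (500.0, 0.0), (0.0, 500.0), (500.0, 500.0)]
source = (180.0, 320.0)                      # ground truth (hidden)

def rdiff(p, i):
    """Range difference of p to node i versus node 0."""
    return (hypot(p[0] - nodes[i][0], p[1] - nodes[i][1])
            - hypot(p[0] - nodes[0][0], p[1] - nodes[0][1]))

meas = [rdiff(source, i) for i in range(1, len(nodes))]  # noise-free TDoA * c

def solve(p0, iters=25):
    x, y = p0
    for _ in range(iters):
        J, r = [], []
        d0 = hypot(x - nodes[0][0], y - nodes[0][1])
        u0 = ((x - nodes[0][0]) / d0, (y - nodes[0][1]) / d0)
        for i in range(1, len(nodes)):
            di = hypot(x - nodes[i][0], y - nodes[i][1])
            ui = ((x - nodes[i][0]) / di, (y - nodes[i][1]) / di)
            J.append((ui[0] - u0[0], ui[1] - u0[1]))   # Jacobian row
            r.append((di - d0) - meas[i - 1])          # residual
        # Normal equations (J^T J) delta = -J^T r, solved in closed form (2x2).
        a = sum(j[0] * j[0] for j in J); b = sum(j[0] * j[1] for j in J)
        c = sum(j[1] * j[1] for j in J)
        g0 = sum(j[0] * ri for j, ri in zip(J, r))
        g1 = sum(j[1] * ri for j, ri in zip(J, r))
        det = a * c - b * b
        x += (-c * g0 + b * g1) / det
        y += ( b * g0 - a * g1) / det
    return x, y

est = solve((250.0, 250.0))   # start from the array centroid
```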

Figure 7: SNR waterfall analysis across detection stages

6. Competitive Landscape

System                 | Modalities             | Scale                | Cost / Site    | NATO C2            | GDPR
Sky Fortress (Ukraine) | Acoustic               | National (14K nodes) | Custom HW/node | Ukrainian military | No (wartime)
DroneShield            | Acoustic + RF + Radar  | Site                 | EUR 200K–500K  | CoT capable        | Partial
Dedrone (Axon)         | RF + Radar + Camera    | Facility             | EUR 300K–1M    | Yes                | US data
Robin Radar            | Dedicated radar        | Airport / site       | EUR 500K–2M    | Yes                | Yes
R&S ARDRONIS           | RF + Acoustic          | Site                 | EUR 200K+      | Yes                | Yes
INDIGO AirGuard        | Acoustic + PBR + ACIR  | City-scale           | EUR 50–100K    | Native CoT         | By design
Figure 8: Competitive positioning across 5 dimensions

We do not claim to be better than Sky Fortress. They have 14,000 deployed nodes, 99.6% detection rate against Shahed-136, and combat validation. What AirGuard brings is a system designed for European regulatory, privacy, interoperability, and multi-modal requirements that Sky Fortress was never built to address.

7. EU & NATO Strategic Alignment

Programme             | Budget / Status           | AirGuard Relevance
European Defence Fund | Counter-UAS priority area | Collaborative detection work programme
PESCO CUAS            | Active project            | Counter-UAS project; AirGuard provides detection input
ReArm Europe          | EUR 150 billion           | Includes C-UAS and critical infrastructure protection
Article 42 TEU        | Mutual defence clause     | No detection system exists to confirm a drone border crossing

AirGuard is designed for European deployment under European law: 100% on-device processing, GDPR Article 25 privacy by design, EU data residency, and open architecture with no vendor lock-in to non-EU defence contractors.

8. Prototype Demonstration

The hackathon demonstration executes the full detection pipeline in real time:

  1. Shahed-136 audio plays from a speaker (recorded acoustic signature at representative SPL)
  2. Smartphone detects drone acoustic signature via on-device YAMNet TFLite inference
  3. Dashboard displays detection event with estimated bearing and track initiation on Leaflet map
  4. ATAK CoT alert fires — the same XML message a NATO military operator would receive

What you see takes under 2 seconds from sound to alert. That is the difference between shelter-in-place and no warning at all.

Figure 9: Time to city-wide coverage comparison
Figure 10: Multi-spectral detection timeline

9. Known Limitations & Roadmap

We believe acknowledging gaps builds more credibility than hiding them.

“AirGuard is a credible early-stage concept with genuine operational potential as a gap-filler in the low-altitude, urban detection regime. The architecture is sound. The theory is grounded. The team’s honesty about limitations is the strongest signal that this project could eventually matter.”
— Independent evaluation, senior air defence operator with 2 years combat experience against Shahed-136/238
Rating: 5/10 current deployability, 8/10 architectural potential

Limitation                        | Severity | Plan to Resolve
No field-validated detection data | Critical | Currently TRL 3–4; the pilot exists to obtain field data
Shahed-238 jet variant            | High     | Separate classifier needed; jet engines are louder and spectrally distinct
Lancet-3 (electric motor)         | High     | 15–20 dB quieter; PBR is more promising for this target class
Smartphone range (100–300 m)      | Medium   | Density: 9,000 nodes at 0.5% participation in Bucharest
False positives (motorcycles)     | Medium   | Multi-node TDoA correlation discriminates airborne vs ground
Indoor attenuation                | Medium   | Only outdoor / near-window phones contribute
Civilian app adoption             | Medium   | Government partnership; integration with RO-ALERT

Roadmap

Phase         | Timeline          | Objective                 | Exit Criteria
Hackathon     | March 2026        | End-to-end pipeline demo  | Sound to alert in under 2 seconds
Pilot         | Q3 2026           | 10 nodes in Bucharest     | 72-hour continuous operation
Field trial   | Q4 2026 – Q1 2027 | Real drone targets        | False alarm rate < 2/hour
Fine-tuning   | Q1–Q2 2027        | YAMNet on real recordings | Field-validated classifier
Certification | 2027+             | MIL-STD-6090 testing      | Accredited compliance

10. The Ask

Support to deploy a 10-node hardware pilot in Bucharest, Q3 2026, in partnership with the Romanian Ministry of Defence.

We are not asking for millions. The software is built.

  • 10 fixed nodes: RPi4 + MEMS microphone array + GPS-PPS (~EUR 200 each)
  • Deployment at strategic locations in Bucharest, coordinated with Romanian MoD
  • 3-month operational evaluation with real acoustic and RF data collection
  • Validation dataset for Shahed-136 and commercial drone acoustic signatures
  • Total pilot cost: EUR 50–100K — not millions

11. Team

Name  | Role              | Contribution
Nico  | Lead Engineer     | Full-stack architect (Rust, Go, TypeScript); EKF, NATS, CoT implementation
Marc  | ML & Audio        | YAMNet fine-tuning, acoustic classification pipeline, TFLite optimisation
Liviu | AI Infrastructure | Multi-model verification systems, distributed compute, quality assurance
Geo   | Consultant        | Deployment logistics, Romanian MoD relationship, business development

Combined: Production-grade systems engineering + ML expertise + AI infrastructure + defence industry access.

Based in Bucharest — on NATO's eastern flank, where this system is needed first.


Contact: [email protected]
TRL: 3–4 (validated in controlled environment, pilot-ready)

INDIGO AirGuard — EUDIS Space & Defence Hackathon 2026, Bucharest