GOMA Rugged Solutions
Strada Antica di Collegno 225
10146 Torino (TO), Italy
+39 011 7725024gomarugged@goma.it
© 2026 GOMA Elettronica SpA. All rights reserved.

Edge AI in Military Platforms: Selecting the Right GPU Module
Technical Guide · 7 min read


NVIDIA Jetson Orin vs discrete GPU cards vs FPGA accelerators — a decision framework for EO/IR, ISR and sensor fusion applications requiring onboard AI inference.

Edge AI · NVIDIA · Jetson Orin

Artificial intelligence inference at the tactical edge — onboard a vehicle, aircraft or unmanned platform — imposes constraints that datacentre AI does not face. Power budgets are measured in tens of watts, not kilowatts. Thermal envelopes are fixed. Environmental qualification is mandatory. Yet the compute demand for real-time EO/IR target recognition, sensor fusion and ISR exploitation continues to increase. This guide helps defence engineers navigate the platform selection decision.

The Three AI Accelerator Families

  • NVIDIA Jetson SoMs (Orin NX, Orin AGX): Integrated CPU + GPU + DLA (Deep Learning Accelerator) on a single module. SWaP-optimised — as low as 10–60W. Ideal for UAVs, vetronics and small airborne platforms. GOMA AIX Series integrates Jetson Orin in a MIL-qualified conduction-cooled chassis.
  • Discrete GPU cards (NVIDIA RTX / L-series / A-series): Higher throughput — hundreds of TOPS. PCIe form factor. Require higher power and active cooling. Suited to rackmount radar/EW servers and shelter computing (GOMA XRS Series).
  • FPGA accelerators (Xilinx/AMD Versal, Intel Agilex): Deterministic, low-latency inference. Reconfigurable. Lower peak throughput than GPUs but ideal for hard real-time applications like radar signal processing. Typically on VPX or PCIe cards.
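A first-order way to compare the SoM and discrete-GPU families is peak throughput per watt. The sketch below uses the Jetson Orin AGX figures quoted in this guide; the discrete-GPU numbers are illustrative placeholders, not datasheet values, and real efficiency depends heavily on workload and precision.

```python
# Back-of-envelope SWaP efficiency comparison. Orin AGX figures are the
# ones quoted in this guide (275 TOPS, ~60 W); the discrete GPU entry is
# a hypothetical placeholder in the "hundreds of TOPS, 150-230 W" class.
accelerators = {
    "Jetson Orin AGX (SoM)": (275, 60),
    "Discrete GPU card (assumed)": (400, 230),
}

def tops_per_watt(tops: float, watts: float) -> float:
    """Peak throughput per watt: a crude but useful SWaP screening metric."""
    return tops / watts

for name, (tops, watts) in accelerators.items():
    print(f"{name}: {tops_per_watt(tops, watts):.1f} TOPS/W")
```

The SoM comes out well ahead on this metric, which is why it dominates payload-constrained platforms even though the discrete card wins on absolute throughput.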

Key Decision Criteria

  • Power budget: If total platform power is below 100W, Jetson Orin is the practical choice. Above 150W, discrete GPU cards become viable.
  • Thermal: Fanless conduction-cooled installations require integrated SoM solutions. Forced-air environments can accommodate discrete GPUs.
  • Framework support: PyTorch, TensorFlow and ONNX are well-supported on NVIDIA (CUDA ecosystem). FPGA deployment requires model conversion and vendor-specific toolchains.
  • Latency vs throughput: Hard real-time (<1 ms) inference favours FPGAs. Throughput-optimised batch inference favours GPUs.
  • Qualification maturity: NVIDIA Jetson Orin-based platforms have a growing track record in MIL-qualified form factors. Discrete GPU cards in ruggedised chassis are more established for server applications.
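The criteria above can be encoded as a simple screening function. This is an illustrative sketch of the guide's rules of thumb (100W/150W power thresholds, the sub-millisecond FPGA cut-off), not a substitute for a proper trade study.

```python
# Illustrative decision helper encoding the guide's selection criteria.
# Thresholds are the rule-of-thumb figures from the list above.
def select_accelerator(power_budget_w, fanless, hard_realtime_ms=None):
    """Return the accelerator family suggested by the guide's criteria."""
    # Hard real-time (sub-millisecond) inference favours FPGAs.
    if hard_realtime_ms is not None and hard_realtime_ms < 1.0:
        return "FPGA accelerator"
    # Fanless conduction-cooled installs need an integrated SoM,
    # as do platforms with under 100 W of total power.
    if fanless or power_budget_w < 100:
        return "Jetson Orin SoM"
    # Above ~150 W with forced air, discrete GPU cards become viable.
    if power_budget_w >= 150:
        return "Discrete GPU card"
    # 100-150 W grey zone: the SoM remains the conservative default.
    return "Jetson Orin SoM"

print(select_accelerator(60, fanless=True))        # -> Jetson Orin SoM
print(select_accelerator(200, fanless=False))      # -> Discrete GPU card
```

In practice several criteria pull in different directions; the point of the sketch is that the power and cooling constraints usually dominate.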

EO/IR and Target Recognition

For electro-optical and infrared target recognition on UAVs and helicopters, NVIDIA Jetson Orin AGX (275 TOPS) provides sufficient throughput for real-time inference with common object detection networks (YOLOv8, RT-DETR) at video frame rates. Power consumption of 25–60W fits within UAV payload budgets. GOMA AIX Series provides a MIL-STD-810 and DO-160G qualified chassis for Jetson Orin integration with MIL circular connector I/O.
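"Real-time at video frame rates" translates into a fixed per-frame time budget that the whole capture-to-detection pipeline must fit inside. The latency figures below are assumed placeholders for illustration, not benchmarks of any particular network or module.

```python
# Frame-rate budget check: at video frame rates, every frame must be
# captured, pre-processed, inferred and post-processed within one
# frame period. Stage latencies below are hypothetical placeholders.
def frame_budget_ms(fps: float) -> float:
    """Time available per frame at a given frame rate."""
    return 1000.0 / fps

budget = frame_budget_ms(30)            # 33.3 ms at 30 fps
pipeline_ms = 5.0 + 18.0 + 2.0          # assumed capture + inference + post
print(f"budget {budget:.1f} ms, pipeline {pipeline_ms:.1f} ms, "
      f"headroom {budget - pipeline_ms:.1f} ms")
```

If the pipeline exceeds the budget, the options are a lighter network, quantisation (see below) or dropping to a lower processed frame rate.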

ISR and Sensor Fusion

Intelligence, Surveillance and Reconnaissance (ISR) exploitation — fusing radar, EO/IR, SIGINT and geospatial data — typically requires higher throughput than a single Jetson SoM can deliver. Rackmount systems with NVIDIA RTX A4000 or A5000-class GPUs (16–24 GB VRAM, 150–230W) provide the necessary compute for multi-stream inference and large model inference. These platforms require forced-air cooling and are suited to shelter or vehicle-rack installations.
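A quick feasibility check for multi-stream exploitation is whether the total inference time demanded per second fits within one second of GPU time. The sketch below ignores batching gains and pipeline overlap, and the per-frame latency is an assumed figure, so treat it as a conservative first pass.

```python
# Multi-stream ISR sizing sketch: fraction of one GPU-second consumed
# by N concurrent video streams, assuming serial per-frame inference.
def gpu_utilisation(streams: int, fps: float, infer_ms: float) -> float:
    """Fraction of one GPU-second consumed by all streams."""
    return streams * fps * infer_ms / 1000.0

# Assumed: six 25 fps streams, 5 ms per frame on a discrete GPU.
u = gpu_utilisation(streams=6, fps=25, infer_ms=5.0)
print(f"utilisation: {u:.0%}")    # 75% -> feasible, with some headroom
```

Anything approaching 100% leaves no margin for decode, pre-processing or load spikes, which is when a second card or a larger GPU class becomes necessary.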

Model Optimisation for the Edge

Regardless of accelerator choice, model optimisation is critical for edge deployment. Quantisation (INT8, FP16) typically reduces model size by 2–4× with minimal accuracy loss and doubles throughput on Tensor Core-equipped GPUs. NVIDIA TensorRT converts trained models to optimised inference engines for Jetson and discrete GPU targets. FPGA deployment requires conversion via Xilinx Vitis AI or Intel OpenVINO. Budget time for model optimisation and validation as part of your programme schedule — it is not a trivial step.
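The mechanics behind the 4× storage reduction from INT8 quantisation can be shown in a few lines. This is a minimal sketch of symmetric per-tensor post-training quantisation; production toolchains such as TensorRT and Vitis AI calibrate per-channel against representative data, which this example does not do.

```python
import numpy as np

# Minimal sketch of symmetric INT8 post-training quantisation: FP32
# weights are mapped to int8 with a single per-tensor scale, cutting
# storage by 4x at the cost of a bounded rounding error.
def quantise_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    scale = np.abs(w).max() / 127.0                       # per-tensor scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, s = quantise_int8(w)
err = np.abs(dequantise(q, s) - w).max()                  # <= scale / 2
print(f"size: {w.nbytes // 1024} KiB -> {q.nbytes // 1024} KiB, "
      f"max abs error {err:.4f}")
```

The worst-case error per weight is half the quantisation step; whether that is "minimal accuracy loss" for a given network is exactly what the validation step in the programme schedule must confirm.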

Key Takeaways

  • NVIDIA Jetson Orin is the practical choice for SWaP-constrained UAV, vehicular and airborne AI inference (10–60W).
  • Discrete GPU cards (RTX/A-series) are suited to shelter and rack-mount ISR exploitation requiring higher throughput.
  • FPGAs offer hard real-time, deterministic inference — appropriate for radar signal processing, not general neural network inference.
  • Model optimisation (TensorRT, INT8 quantisation) is as important as hardware selection — plan for it in your programme schedule.
  • Engage your rugged platform supplier early to understand thermal constraints and connector I/O before selecting your AI accelerator.
Recommended Solutions

Explore Our Platforms.

AIX Series
MISSION COMPUTERS

Edge AI Mission Computers

NVIDIA Jetson Orin NX platform with up to 157 TOPS. Fanless, IP65, 1.75 kg. Purpose-built for autonomous navigation, EO/IR processing and real-time inference at the edge.

AWS Series
MISSION COMPUTERS

SWaP Mission Computers

Intel Xeon W on COM Express. Compact fanless design with dual removable SSDs, multiple video outputs, and MIL connectors. No external airflow required.

MAG Series
MISSION COMPUTERS

Modular VPX Mission Computers

3U OpenVPX architecture with NVIDIA Ampere / Pascal GPU. 3G-SDI video capture. Available in air-cooled and conduction-cooled variants for airborne and defence platforms.
