- High capacity GPU server for mixed reality and AI/inferencing
- 3x full-height expansion slots
- Flexible I/O design for GPUs and add-in cards
- Semi-rugged platform withstands temperature extremes and vibration
- 2U short-depth 20.5″ rackmount server with dual front-access USB ports; weighs 31 lbs.
- Functions as a standalone or embedded AI inferencing device and can connect multiple camera streams for analytics
- Built for big performance in a small footprint
- 10+ Year lifecycle starting in 2023
- Optional VESA mount allows the server to be installed within larger devices
- x86 based motherboard, supports Linux and Windows operating systems
- 750W SFX Power Supply
- Single-socket Intel LGA 1700 13th/12th Gen (Raptor Lake-S/Alder Lake-S) Core™ processors, up to 65W
- Dual-channel SO-DIMM ECC/non-ECC DDR4-3200, up to 64 GB (32 GB per DIMM)
- 2x 2.5GbE Base-T LAN ports (Intel I225-LM controller)
- 1x PCIe x16 slot that accommodates a single-wide graphics card such as NVIDIA® RTX® and Ampere series cards, capture cards, network adapters, and more
- 2x 2.5″ drive bays
- Touchscreen support
- Rack-mount-inspired frame allows a choice of computing workstations
- Adaptable with a handle, articulating arm, mountings for touch monitors, HMI and LED displays, and other hardware
- Available with PoE switching, UPS, intercom, and video configurations
- 5-port unmanaged PoE switch
- USB speakerphone with over 20’ range
- RJ45 rear plug
- Programmable RGB LED lights with developer's kit
- Electrostatic discharge chain
- Functions as a standalone or embedded imaging device, diagnostic device or workstation
- Ultra-small case with a bezel that functions as a carrying handle
- Weighs 16 lbs., measures 13.2″ x 5.55″ x 8.49″, and runs whisper-quiet
- Optional VESA mount allows the server to be installed within larger devices
- 1x PCIe x16 slot which can accommodate a single- or double-wide graphics card like the NVIDIA® Quadro® and GeForce series
- 1x 3.5″ internal or 2x 2.5″ hot-swappable SATA/SSD drive bays
- AI in a small footprint
- High performance while using less power than a full server
- NVIDIA JetPack SDK
- Supports cloud native applications
- Inferencing on device
- Compact fanless design
- NVIDIA® Jetson™ Xavier NX embedded
- Linux OS with board support package (BSP)
- Supports deep learning trained models
- Wide operating temperature range
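The on-device inferencing workflow the bullets above describe (camera frames in, deep-learning predictions out, no cloud round-trip) can be sketched in plain Python. The frame source, weights, and labels below are illustrative stand-ins; a real Jetson deployment would pull frames from a camera and run a model optimized through the JetPack SDK's TensorRT runtime.

```python
import random

# Hypothetical labels for illustration; a trained model defines the real set.
LABELS = ["person", "vehicle", "background"]

def capture_frame(width=8, height=8, seed=None):
    """Simulate grabbing one grayscale frame as a flat list of pixel values."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(width * height)]

def infer(frame, weights):
    """Toy linear classifier: score each label against the frame, pick the best."""
    scores = [sum(w * p for w, p in zip(row, frame)) for row in weights]
    return LABELS[max(range(len(scores)), key=scores.__getitem__)]

# One weight row per label (a trained model would supply these values).
rng = random.Random(0)
weights = [[rng.uniform(-1, 1) for _ in range(64)] for _ in LABELS]

frame = capture_frame(seed=42)
print(infer(frame, weights))  # one label per frame, computed entirely on device
```

The point of the sketch is the loop shape: capture, classify, act, all local to the device, which is what keeps latency and power low on the Jetson module.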
- AI in a small footprint
- High performance while using less power than a full server
- NVIDIA JetPack SDK
- Supports cloud native applications
- Inferencing on device
- NVIDIA Jetson™ Xavier AGX embedded
- Fanless compact design
- Bundled with Ubuntu 18.04 Linux
- Low power consumption
- Support for a PCIe Add-on Card
- Multi-GPU AI inference
- High performance for multiple AI inputs or cameras
- Analytics / AI Server
- Up to 2x 3rd Generation Intel Xeon Scalable processors, up to 40 cores per processor
- Supports up to 32 DDR4 ECC DIMMs (RDIMM up to 2 TB or LRDIMM up to 4 TB), speeds up to 3200 MT/s
- Up to 8x 2.5″ SAS/SATA/NVMe HDD/SSD
- Supports up to 4x high-performance NVIDIA A40 GPUs
- Supports NVIDIA Clara Holoscan
- Rackmount configuration
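A multi-GPU server handling multiple camera inputs implies some form of stream-to-GPU scheduling. A minimal round-robin sketch in Python (the stream and device names are hypothetical, and real deployments would use a framework-level scheduler):

```python
from itertools import cycle

def assign_streams(streams, gpus):
    """Round-robin camera streams across the available GPUs."""
    mapping = {}
    gpu_cycle = cycle(gpus)
    for stream in streams:
        mapping[stream] = next(gpu_cycle)
    return mapping

# Hypothetical IDs: eight camera feeds spread across four GPU slots.
streams = [f"cam{i}" for i in range(8)]
gpus = [f"gpu{i}" for i in range(4)]
print(assign_streams(streams, gpus))
# cam0→gpu0, cam1→gpu1, cam2→gpu2, cam3→gpu3, then cam4→gpu0 again
```

Round-robin keeps per-GPU load roughly even when streams are similar; uneven workloads would call for load-aware assignment instead.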

Looking for the right hardware solution? We're here to help.