Atlas 800I A2-G
The Atlas 800I A2-G inference server adopts an efficient 8-module inference architecture and delivers strong AI inference capability. It offers advantages in compute power, memory bandwidth, and interconnect capability, and is widely applicable to generative large-model inference, including content-generation scenarios such as intelligent customer service, copywriting, and knowledge consolidation.
Technical Specification

Mode: 4U AI server

NPU: 8 × Ascend AI modules

CPU: 4 × Kunpeng 920

Memory: 32 DDR4 DIMM slots at up to 3200 MT/s; each module supports 16/32/64 GB

Local storage: 8 × 2.5-inch SATA + 2 × 2.5-inch NVMe, or 4 × 2.5-inch SATA + 6 × 2.5-inch NVMe

RAID: RAID 0/1/10/5/50/6/60 supported

Network: 8 × 200GE QSFP ports routed directly out of the modules, supporting the RoCE protocol

PCIe Ports: Up to three PCIe 4.0 expansion slots

Power Supply: Four hot-swappable 2.6 kW power modules in 2+2 redundancy

Power Input: 200–240 V AC; 240 V DC

Heat Dissipation Mode: Air cooling

Fan Module: Eight hot-swappable fan modules in N+1 redundancy

Working Temperature: 5°C to 35°C (41°F to 95°F)

Structure Size: 175 mm (H) × 447 mm (W) × 790 mm (D)
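The headline capacities implied by the table above can be checked with simple arithmetic. The sketch below is illustrative only; it assumes that "2+2 redundancy" means two of the four power modules carry the load while two are spares, which is the usual reading but is not spelled out in the spec.

```python
# Sanity-check the capacities implied by the specification table.
# Assumption (not stated explicitly): in 2+2 redundancy, two of the
# four 2.6 kW modules supply power and two are backups.

DIMM_SLOTS = 32
MAX_DIMM_GB = 64  # largest supported module size (16/32/64 GB options)

# Maximum installable memory with 64 GB modules in every slot.
max_memory_gb = DIMM_SLOTS * MAX_DIMM_GB
print(max_memory_gb)  # → 2048

PSU_COUNT = 4
PSU_KW = 2.6
ACTIVE_PSUS = PSU_COUNT - 2  # two modules are redundant spares

# Power available while full 2+2 redundancy is maintained.
usable_power_kw = ACTIVE_PSUS * PSU_KW
print(usable_power_kw)  # → 5.2
```

So a fully populated system tops out at 2 TB of DDR4, with 5.2 kW of supply capacity available even under the assumed redundancy scheme.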

* The descriptions and information displayed in the product promotional materials are for reference only. The actual delivered product shall prevail. The final interpretation right belongs to GLORY.