Inspur Information Announces Full Support for the NVIDIA AI Platform for Inference Processing

Exceptional AI server performance is delivered with NVIDIA A100, A30, and A2 Tensor Core GPUs.

SAN JOSE, Calif.–(BUSINESS WIRE)–#AI–Inspur Information, a leading IT infrastructure solutions provider, announced at NVIDIA GTC that its entire portfolio of AI and edge inference servers will support the NVIDIA A100, A30, and newly announced A2 Tensor Core GPUs.

As the demand for AI inference continues to grow and diversify, Inspur Information has launched a comprehensive inference product line of NVIDIA-Certified Systems built for applications from data centers to edge computing, providing high performance for users across various application scenarios. Inspur’s NVIDIA-Certified Systems are ideal for running the NVIDIA AI Enterprise software suite, which deploys and manages AI workloads on VMware vSphere.

For data centers, the NF5468M6 is an elastic-architecture AI server in a 4U chassis with 8x NVIDIA A100 or A30 GPUs and 2x 3rd Gen Intel Xeon Scalable processors. It can automatically switch among three topologies (balance, common, and cascade) to flexibly meet the needs of various AI applications, including deep learning training, language processing, AI inference, massive video streaming, and more.

The NF5468A5 is an integrated, efficient AI server in a 4U chassis with 8x NVIDIA A100 or A30 GPUs and 2x AMD Rome/Milan CPUs. Its high-performance architecture uses a non-blocking CPU-to-GPU design that delivers superior communication efficiency and much lower P2P communication latency. It is also optimized for conversational AI, intelligent search, and high-frequency trading scenarios.

The NF5280M6 is a reliable and flexible AI server in a 2U chassis with 4x NVIDIA A100 or A30 GPUs, or 8x NVIDIA A2 GPUs, and 2x 3rd Gen Intel Xeon Scalable processors. It operates stably across a variety of AI application scenarios, from small and medium-scale AI training to high-density edge inference.

For edge computing, the NE5260M5 is an edge server built to open computing standards, supporting NVIDIA A100, A30, and now A2 Tensor Core GPUs alongside two Intel CPUs. Its 430mm-deep chassis fits into unusual spaces and tolerates harsh working environments, including high temperatures and humidity. In the recent MLPerf Inference v1.1 results, the NE5260M5 ranked first in four tasks in the Edge category of the Closed Division. The NE5260M5 has been deployed in a variety of edge AI inference scenarios, such as smart campuses, smart shopping malls, smart communities, and smart substations, providing diverse computing power for different edge AI applications.

About Inspur Information

Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions, and one of the world's top two server manufacturers. Through engineering and innovation, Inspur Information delivers cutting-edge computing hardware design and extensive product offerings to address important technology segments such as open computing, cloud data centers, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle real-world challenges and custom workloads. To learn more, please visit https://www.inspursystems.com/.

Contacts

Fiona Liu

Inspur Information

liuxuan01@inspur.com