Inspur Unveils Cloud SmartNIC Solution Based on NVIDIA DPU at GTC 2020
The Inspur Cloud SmartNIC solution deeply integrates Inspur servers with the NVIDIA DPU, combining embedded processing, SmartNIC networking, and a high-performance PCIe 4.0 host interface to offload functions such as traffic management, storage virtualization, and security isolation, significantly freeing CPU computing resources.
SAN JOSE, Calif.–(BUSINESS WIRE)–#DPU–Inspur, a leading data center and AI full-stack solutions provider, unveiled its Cloud SmartNIC solution based on the NVIDIA BlueField®-2 data processing unit (DPU) at GTC 2020. The solution deeply integrates Inspur servers with the NVIDIA DPU, combining embedded processing, SmartNIC networking, and a high-performance PCIe 4.0 host interface to deliver network acceleration at speeds of up to 200Gb/s. The Cloud SmartNIC offloads functions such as traffic management, storage virtualization, and security isolation, significantly freeing CPU computing resources and delivering efficient software-defined, hardware-accelerated services for AI, big data analytics, cloud, virtualization, microsegmentation, and next-generation firewalls.
The DPU, a key element of network computing, is a new class of accelerated computing engine: an integrated system on a chip that combines a programmable multi-core Arm CPU, state-of-the-art SmartNIC networking, a high-performance PCIe interface, and a powerful set of networking, storage, and security features. It offloads functions such as software-defined networking (SDN), software-defined storage (SDS), and encryption and security processing from the host CPU. In the traditional model, legacy appliances or the host CPU had to run data center services. In a hardware-accelerated, NVIDIA BlueField DPU-enabled server, these services are offloaded to the DPU, freeing the CPU to run applications while the DPU delivers data center services that are safe, reliable, convenient, and powerful.
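On a Linux host, the division of labor described above is visible from the operating system's side. As an illustrative sketch (the interface name `ens1f0` is a placeholder, not from the source), standard tools such as `ethtool` report which offloads the NIC is handling in hardware rather than on the host CPU:

```shell
# List the offload features the NIC exposes to the host.
# Interface name ens1f0 is hypothetical; substitute your own device.
ethtool -k ens1f0 | grep -E 'offload|checksum'

# On a SmartNIC-class device, entries such as
# tcp-segmentation-offload or generic-receive-offload
# reported as "on" indicate work moved off the host CPU.
```

This only inspects per-device stateless offloads; the fuller SDN/SDS/security offloads the paragraph describes are configured through the DPU's own software stack.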
Accelerating the software-defined data center
Today’s data centers must run a combination of modern, accelerated applications, such as AI and high-performance data analytics, alongside existing legacy applications. Traditional data center networking, storage, and security technologies can effectively handle north-south traffic coming in and out of the data center. But they are inadequate for distributed, cloud-native, accelerated workloads based on dynamic microservices. These services move around the data center as workloads scale out, and most of the traffic is east-west, that is, between nodes within the data center. Moreover, fixed-function security appliances lack the flexibility to scale and support cloud-native applications, and thus expose a large attack surface through unprotected east-west communications and virtual machine (VM)-to-VM traffic.
The software-defined data center implements networking, storage, and security functions as software running on powerful servers, and is more flexible and scalable than architectures based on fixed appliances. It also achieves application compatibility and resource scalability by pushing data center functions into software running on VMs or containers. However, this flexibility and scalability comes at the expense of additional CPU loading as a result of software-defined services and resource virtualization. The challenge is to conserve precious CPU resources while efficiently integrating and accelerating cloud, data access, and AI capabilities with the scalability of software-defined data centers and cloud-native applications.
Through a dedicated chip that accelerates data center services in hardware, the Cloud SmartNIC solution enables the advanced networking functions of the software-defined data center, such as virtual switching and routing, load balancing, and virtual machine and container networking services. For storage, the NVMe controller is accelerated by the DPU, allowing high-performance flash to be spread across all the nodes in the data center and offering elastic block storage capabilities to applications. As the foundation of a secure platform, the DPU offers a hardware root of trust, secure firmware authentication and updates, and encryption accelerators. Additional advanced security accelerators offload connection tracking, deep packet inspection, and regular expression matching to accelerate next-generation firewalls and intrusion detection and prevention systems.
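For the virtual switching offload mentioned above, a common deployment pattern with DPU/SmartNIC hardware is to let Open vSwitch push its datapath flows into the NIC. The commands below are a hedged sketch of that pattern (it assumes OVS is already installed and managing the uplink; the service name varies by distribution):

```shell
# Enable hardware offload of OVS datapath flows to the SmartNIC.
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

# Restart OVS so the setting takes effect
# (service may be named openvswitch-switch on some distributions).
systemctl restart openvswitch

# Inspect which flows are currently offloaded into hardware.
ovs-appctl dpctl/dump-flows type=offloaded
```

With offload enabled, packet forwarding for matched flows happens on the NIC, which is the mechanism by which the host CPU cycles described in this section are freed.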
Liu Jun, GM of AI and HPC at Inspur, noted, “Inspur is innovating four key processes in the data center: producing, scheduling, aggregating, and releasing AI computing power. The Cloud SmartNIC solution can efficiently empower computing power aggregation, deliver maximum computing power, and effectively tackle major challenges in big data analytics, data processing, and hyperscale AI model training.”
“Solutions that build on NVIDIA BlueField-2 DPUs deliver more efficient accelerated networking functions for users of enterprise applications. Optimized to offload critical networking, storage and security tasks from CPUs, BlueField-2 DPUs enable organizations to transform their IT infrastructure into state-of-the-art data centers that are accelerated, fully programmable and armed with ‘zero-trust’ security features to prevent data breaches and cyberattacks,” said Erik Pounds, head of product marketing, enterprise computing, at NVIDIA.
Inspur Electronic Information Industry Co., Ltd. is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world’s top three server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas such as open computing, cloud data center, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to www.inspursystems.com.