List of NVIDIA NetQ Integrations
This is a list of platforms and tools that integrate with NVIDIA NetQ. This list is updated as of April 2026.
1. SONiC (NVIDIA Networking)

Empower your network with independent, flexible open-source solutions. NVIDIA offers pure SONiC, an open-source, community-driven, Linux-based network operating system hardened in the data centers of major cloud service providers. By adopting pure SONiC, organizations can escape distribution lock-in and fully realize the benefits of open networking, backed by NVIDIA's expertise, training, documentation, professional services, and ongoing deployment support. NVIDIA also provides extensive support for Free Range Routing (FRR), SONiC, the Switch Abstraction Interface (SAI), systems, and application-specific integrated circuits (ASICs), all on a single platform. Unlike conventional distributions, SONiC does not tie organizations to a single vendor for updates, bug fixes, or security patches. That independence simplifies management and lets businesses reuse their existing management tools across data center operations, improving operational efficiency and making SONiC a strong choice for teams that want effective network oversight on their own terms.
2. NVIDIA Magnum IO (NVIDIA)

Revolutionizing data I/O for high-performance computing efficiency. NVIDIA Magnum IO is a framework for optimizing I/O in parallel data center environments. By improving storage, networking, and communication across nodes and GPUs, it supports workloads such as large language models, recommender systems, imaging, simulation, and scientific research. Through storage I/O, network I/O, in-network compute, and I/O management, Magnum IO accelerates and simplifies data movement, access, and management in complex multi-GPU, multi-node settings. Its integration with NVIDIA CUDA-X libraries delivers peak performance across a range of NVIDIA GPU and networking hardware, maximizing throughput while minimizing latency.

In multi-GPU, multi-node architectures, routing data through a CPU with limited single-thread performance obstructs efficient access to local and remote storage. Storage I/O acceleration addresses this by letting GPUs bypass the CPU and system memory and access remote storage directly through 8x 200 Gb/s NICs, achieving 1.6 Tb/s of raw storage bandwidth. This substantially improves the efficiency of data-intensive applications, enabling faster and more responsive data-driven solutions as data workloads continue to grow.
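The headline bandwidth figure above is straightforward NIC arithmetic; a minimal sketch of the conversion (variable names are illustrative, not from any NVIDIA API):

```python
# Aggregate storage bandwidth from the Magnum IO entry:
# 8 NICs x 200 Gb/s each = 1.6 Tb/s, i.e. 200 GB/s.
nics = 8
gbps_per_nic = 200                 # gigabits per second per NIC

total_gbps = nics * gbps_per_nic   # 1600 Gb/s aggregate
total_tbps = total_gbps / 1000     # terabits per second
total_gBps = total_gbps / 8        # gigabytes per second (8 bits per byte)

print(total_tbps, total_gBps)      # → 1.6 200.0
```

Note the units: 1.6 Tb/s (terabits) corresponds to 200 GB/s (gigabytes) of raw storage bandwidth.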