Cisco, Nvidia Find Deeper Integration Delivers for Deep Learning Applications


Network giant Cisco Systems and chipmaker Nvidia are among the companies partnering to ease the implementation of deep learning for IT infrastructures as enterprise customers find that deep learning requires deeper integration.

The Silicon Valley vendors are teaming up to smooth the adoption of the potentially disruptive technology, launching both hardware and software offerings.

Last month, Cisco debuted a dedicated server for deep learning, harnessing techniques that train machines to learn complex tasks by processing large data sets. The C480 ML server features Nvidia Tesla V100 graphics processing units (GPUs) and is the latest addition to Cisco's Unified Computing System (UCS) product line.

Nvidia released its RAPIDS code libraries to the open-source community last week to help developers accelerate application development on its GPUs. The release follows the September launch of a platform that accelerates inference processing for the voice, imaging, video and recommendation services generated by the deep-learning applications at the core of artificial intelligence.

Partnerships Accelerate Analytics and Processing

By bringing together enhanced processing and storage capacity, the Cisco server lets users manage more efficiently the changes in traffic flows that arise from implementing deep learning and similar advanced technologies, and eases the constraints those workloads place on the latency of data analysis and machine execution.

Working together with a range of partners, Cisco is actively developing validated solutions that encompass an array of technologies and formats for faster implementation. These include Hadoop data lakes and containerized environments for machine learning applications.

Nvidia is also pursuing efficiencies to meet the demand for accurate, low-latency data processing. The TensorRT Hyperscale Inference Platform uses software developed specifically for end-to-end applications running on Tesla GPUs, along with interfaces for applications housed in Docker and Kubernetes containers.

Meanwhile, developers and data scientists can use the RAPIDS suite of software libraries to simplify common data-preparation tasks and integrate them with machine-learning algorithms written in Python and other machine-learning languages.
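RAPIDS's GPU data frame library is designed to mirror the familiar pandas DataFrame API, so typical data-preparation code carries over with little change. The sketch below illustrates that pattern using pandas itself (on a GPU machine, swapping the import for the RAPIDS equivalent would be the main change); the column names and figures are purely illustrative, not drawn from any Cisco or Nvidia material.

```python
# Illustrative data-preparation pattern of the kind RAPIDS accelerates.
# Written against pandas; RAPIDS's GPU data frame library mirrors this
# API. All data and column names here are hypothetical.
import pandas as pd

# Hypothetical transaction records
df = pd.DataFrame({
    "account": ["a1", "a2", "a1", "a3"],
    "amount": [120.0, 35.5, 600.0, 42.0],
})

# Common prep steps before handing data to a machine-learning algorithm:
# standardize a numeric feature, then aggregate per account.
df["amount_norm"] = (df["amount"] - df["amount"].mean()) / df["amount"].std()
totals = df.groupby("account", as_index=False)["amount"].sum()
print(totals.sort_values("account").to_string(index=False))
```

Because the API surface is shared, the payoff is that moving such a pipeline onto GPUs is largely a deployment decision rather than a rewrite.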

Support for multiple GPUs also dramatically cuts the time previously spent training and processing analytic models on large datasets, such as those used in neural networks for deep learning.

Are Applications Limitless?

With eight Nvidia GPUs, the C480 ML server aims to leave no stone unturned. Coupled with a pair of Intel Xeon Scalable processors and 30 flash storage drives, it provides an infrastructure upgrade that can be integrated into UCS data centers with relative ease.

Cisco says use cases include financial services, where early adopters are working out the parameters for algorithmic trading, while others are using the accelerated applications to detect credit card and other types of fraud.

In digital medicine, diagnostics and drug development are taking center stage, with image classification and research and development benefitting in part from the server’s large memory footprint.

Nvidia officials are touting customer testimonials for their GPUs and platform accelerators, including from OEMs like Cisco that are in the process of incorporating the technologies into their servers and chipsets. The company is also pitching applications around enhanced analytics of customer buying behavior and benefits in retail inventory management.

While the Cisco servers are being marketed primarily for on-premises deployment, they are designed to work with distributed datasets and in multi-cloud environments. Compatibility with the UCS product line’s management software stack and core-to-edge fabric speeds development of neural networks for deep learning applications.

Spelling the End of Custom-Made

Reliant as it is on Nvidia's CUDA toolkit for GPU optimization, the RAPIDS open-source release carries a similar level of lock-in, at least for the community it aims to engage. However, the initial libraries on offer include Python and C++ GPU data frames, as well as a dedicated RAPIDS library for machine learning.
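The dedicated RAPIDS machine-learning library follows the estimator interface popularized by scikit-learn, so existing fit/predict code typically ports with little more than an import change. Below is a minimal sketch of that shared pattern, written here against scikit-learn on synthetic data; the dataset and coefficients are illustrative assumptions, and on a GPU machine the RAPIDS equivalent of the estimator would be imported instead.

```python
# Fit/predict pattern shared by scikit-learn and the RAPIDS
# machine-learning library, which mirrors this estimator API.
# The data below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# True (hypothetical) coefficients plus a little noise
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)

model = LinearRegression().fit(X, y)
preds = model.predict(X)
print("learned coefficients:", np.round(model.coef_, 2))
```

The lock-in the article describes is at the hardware layer, not the code layer: the estimator calls stay the same, while CUDA-capable GPUs are required to realize the acceleration.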

Available on GitHub, the Microsoft-owned open-source development platform, the libraries are released under an Apache license. Nvidia is working to integrate RAPIDS with Apache Spark, a processing engine used for developing, testing and deploying machine-learning projects in the data center.

With specifications dictating the cost of custom builds, which can run to $500,000 per server, the average price of a C480 ML should come in at around a third of that. Cisco says the servers will be ready to ship in the fourth quarter of 2018.