Intel, Microsoft Team Up to Push Data Science to the Edge


Intel and Microsoft are building a data center foundation for data science delivered in the cloud, creating virtual machines that get the most out of silicon cores and ironing out the data sets that they process.

In separate partnerships announced last week, Intel is putting virtual machine software out for open-source development on Microsoft’s Azure marketplace. Microsoft also is working with a Boston-based start-up, Linker Networks, to smooth datasets for machine-learning applications that run in its Azure cloud.

The Intel Optimized Data Science Virtual Machine, or DSVM, is an extension that lets users of the Python programming language get the most out of machine-learning frameworks by running them on Intel’s Xeon family of silicon chips. In 2017, the chip maker based in Santa Clara, California, introduced the Xeon chips built on its 14nm Skylake micro-architecture as the processing engines for application-specific server and desktop hardware.

Written for the Ubuntu distribution of the Linux desktop-to-cloud operating system, the optimized DSVMs cut the time needed to train machine-learning applications for inference by more than seven-fold, the companies say. The Python environments support the MXNet and TensorFlow frameworks and can draw on Intel’s Math Kernel Library to construct neural networks for machine-learning compute.
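In practice, much of this tuning comes down to the standard OpenMP/KMP environment variables that Intel’s MKL-backed frameworks read at startup. The sketch below is illustrative only: the helper function and the specific values are assumptions for a hypothetical eight-core Xeon, not Intel’s published defaults.

```python
import os

def configure_mkl_threads(physical_cores: int) -> dict:
    """Hypothetical helper: set the OpenMP/KMP variables that
    MKL-backed frameworks (TensorFlow, MXNet builds with MKL)
    consult for CPU threading. Values are tuning assumptions."""
    settings = {
        # one OpenMP worker thread per physical core
        "OMP_NUM_THREADS": str(physical_cores),
        # milliseconds a thread spin-waits for new work before sleeping
        "KMP_BLOCKTIME": "1",
        # pin threads to cores to avoid cross-socket migration
        "KMP_AFFINITY": "granularity=fine,compact,1,0",
    }
    os.environ.update(settings)
    return settings

cfg = configure_mkl_threads(physical_cores=8)
print(cfg["OMP_NUM_THREADS"])  # -> 8
```

These variables must be set before the framework initializes its thread pool, which is why such tuning is typically baked into the VM image rather than left to application code.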

The Linker-Microsoft partnership, announced at the MWC trade show in Barcelona, will provide the Azure cloud with a scalable auto-labelling capability that lets users pre-process data sets for machine-learning applications in a range of industries. Linker’s technology uses artificial intelligence to identify and label data sets, allowing them to be re-used in changing object-recognition scenarios.
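The general pattern behind auto-labelling can be sketched in a few lines: a model pre-labels each sample, high-confidence predictions are accepted as labels, and the rest are routed to human review. This is a generic illustration of the technique, not Linker’s proprietary pipeline; the function and the stand-in model below are hypothetical.

```python
def auto_label(samples, predict, threshold=0.9):
    """Split samples into machine-labelled pairs and a human-review
    queue, based on the model's confidence. `predict` returns
    (label, confidence) for one sample."""
    labelled, review_queue = [], []
    for sample in samples:
        label, confidence = predict(sample)
        if confidence >= threshold:
            labelled.append((sample, label))
        else:
            review_queue.append(sample)
    return labelled, review_queue

# Toy stand-in model, purely for illustration.
def fake_predict(sample):
    return ("car", 0.95) if "car" in sample else ("unknown", 0.4)

labelled, review = auto_label(["car_01.jpg", "blur_02.jpg"], fake_predict)
# labelled -> [("car_01.jpg", "car")]; review -> ["blur_02.jpg"]
```

The economics of the approach come from the threshold: the higher it is set, the less human labour is saved but the cleaner the resulting training set.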

According to Linker, these tasks otherwise require a high degree of human input and oversight, given the customized requirements of optical applications in such fields as autonomous vehicles, medical imaging and manufacturing. Customers can import their own models to take advantage of the faster compute, using continuous learning with AI models that support quality assurance and service delivery via the Azure platform.

Founded in 2011, Linker has grown its managed AI service to embrace Smart Cities applications, as well as those for real-time inference delivered via the 5G telecommunications standard being rolled out in global markets. Its Azure-delivered AI models will run in parallel with devices in edge deployments when the service launches in June, the companies said.

Optimization of ML compute also is geared to edge deployments of real-time AI. Intel began working with Microsoft on deep-learning development when the Redmond, Washington-based multinational technology company selected its Stratix field-programmable gate array technology to process data with ultra-low latency in the Azure cloud.

Dubbed Project Brainwave, the Azure-delivered platform runs deep neural networks on distributed systems architectures with direct-to-metal algorithms that can automatically configure gate array structures in AI applications. The absence of a middle layer of software increases throughput, boosting processing speeds for ML modelling tenfold compared with CPU- and GPU-based computing.

In November, Intel debuted an updated version of a neural compute stick that lets users test, tune and prototype their network configurations. Based on technology gained in the 2016 acquisition of specialist start-up Movidius, the vision processing unit, or VPU, attaches to desktop and laptop computers through a USB port and provides an engine for accelerating and customizing ML workloads.

Microsoft began supporting Intel’s VPU technology with the Windows ML extension to its flagship operating system, which distributes AI workloads across hardware running Windows. Like Intel’s Data Science Virtual Machine, the Microsoft tech spares developers from altering code before deploying it on their networked servers and desktops.

High-speed ML processing delivered at the intelligent edge reduces the resource drain on user hardware, Intel says. The advances offer precision customization of core OS functions, including for digital personal assistants and for voice, facial and image recognition applications that enhance biometric security.