Long known as a pioneer of the graphics processing units (GPUs) that power computer gaming, Silicon Valley chip maker Nvidia gave the working world a boost earlier this month with the release of a chipset designed specifically for robotic applications.
Unveiled at the Computex 2018 trade show, Nvidia’s Isaac platform augments its trademark GPU with visual, image and video processors and a central processing unit (CPU) that features a pair of deep-learning accelerators. The platform also offers a toolkit for developers of robotic software, a library of robotic algorithms and a simulator for robot training.
The company claims that the chipset, dubbed the Jetson Xavier, can perform 30 trillion operations per second. This is the speed necessary for robots to take stock of their environment and to execute functions autonomously based on decisions the chipsets make after processing input information. And it can do so with very low power consumption, around one-third the energy cost of a single light bulb, the company says.
The six processors that make up the Jetson Xavier feature 9 billion transistors, enabling the chipset to run multiple algorithms simultaneously. The company’s Volta Tensor Core GPU is paired with an ARM64-architecture CPU built on technology licensed from UK chip designer Arm, a partner with Nvidia in machine learning.
Early reviews are scant, given the newness of the release. However, at least one evaluator sees the advance as a decided head start on would-be competitors. What’s more, given that the company derives a portion of its $9.7 billion in annual revenue from licensing the 7,000 patents it owns, players in the sector are as likely as not building on Nvidia technologies.
The Isaac platform’s allied components let robots learn by watching humans perform tasks. These include systems that develop neural networks for perception, program generation and execution: networks that recognize objects and their relationships in space and perform the calculations necessary to reproduce what the robot has just observed.
This sensory perception means the robot can self-correct based on task performance and changes in the surrounding environment. The technology also allows employees to correct and refine operations thanks to Isaac’s generation of human-readable programs.
The platform’s simulator, first employed by the company for self-driving vehicles, enables developers to test the code they write with Isaac’s toolkit before implementing those operations in real-world settings.
Nvidia aims to capture a share of the 21 million autonomous vehicles it predicts will be in use worldwide by 2035, and its push into developing the “brain” for those machines paved the way for the Isaac platform’s algorithmic functionality.
In addition to those for perception and recognition, the platform library contains algorithms for mapping, path-planning and movement, as well as those that enable robots to anticipate the motion of nearby objects, including humans, and safely navigate around them – technologies critical to the operation of self-driving cars and trucks.
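Nvidia does not publish the internals of these planners, but the core idea behind grid-based path-planning can be sketched with a breadth-first search that finds a collision-free route through an occupancy map. The function name, grid layout and cell encoding below are illustrative assumptions, not part of the Isaac toolkit:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    grid:  list of rows, where 0 marks a free cell and 1 an obstacle
    start, goal:  (row, col) tuples
    Returns the shortest list of cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # each visited cell maps to its parent

    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk parent links back to the start, then reverse.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

# A 3x3 map where a wall forces a detour around the right side.
grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
path = plan_path(grid, (0, 0), (2, 0))
# path threads around the obstacles: (0,0) -> (0,1) -> (0,2) -> (1,2) -> (2,2) -> (2,1) -> (2,0)
```

Production planners replace the uniform-cost search with heuristics (A*) and continuous-space methods, but the structure – expand reachable states, avoid occupied cells, recover the route from parent links – is the same.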
The chipset, whose genesis is in gaming, is the engine that weaves together those process threads. Nvidia’s more than 27,000 Volta GPUs also are at work at the US Department of Energy’s Oak Ridge National Laboratory, home of the Summit supercomputer.
That machine, designed for research including artificial intelligence of the sort applied in robotics, can perform three billion billion calculations a second. Earlier this month, Summit assumed the mantle of the world’s fastest supercomputer.
The company envisions robots built with the Jetson Xavier and programmed on the Isaac platform working side by side with humans in a variety of settings, including manufacturing, construction, agriculture, warehousing and logistics. They may also be deployed in the home, in applications ranging from cleaning and cooking to care of the elderly and infirm.