Silicon Valley Chip Makers Add New Twist to Moore’s Law


With the demands of artificial intelligence outpacing Moore’s Law, a pair of Silicon Valley chip designers are rethinking the architectural approaches for machine-learning applications.

In a sector where smaller has been beautiful for decades, can bigger really be better for lowering a chip’s workload latency?

Designers at the companies, Cerebras Systems and Xilinx, used the Hot Chips symposium in Palo Alto, California, last week as the backdrop to their biggest product releases yet, each of which aims to pack more processing power for compute-intensive technologies.

To do it, the companies are bucking the trend named for Gordon Moore, the Intel co-founder who observed five decades ago that transistor densities on a microchip double about every two years. While the prediction has guided R&D teams in the intervening years, the limits of physical space on a single chip are now leading designers to explore novel solutions.
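For a sense of what that doubling implies, here is a back-of-the-envelope sketch in Python. It assumes a clean two-year doubling from Intel's 4004, which shipped in 1971 with roughly 2,300 transistors; the projection is purely illustrative, not a model of any real product line.

```python
# Illustrative only: project transistor counts under a strict
# Moore's Law doubling (every two years) from a 1971 baseline.
def moores_law(base_transistors: int, base_year: int, target_year: int) -> float:
    """Transistor count implied by doubling every two years."""
    return base_transistors * 2 ** ((target_year - base_year) / 2)

# Intel's 4004 (1971) held roughly 2,300 transistors.
for year in (1971, 1991, 2011, 2021):
    print(year, f"{moores_law(2_300, 1971, year):,.0f}")
# The 2021 figure comes out near 77 billion under this idealized curve.
```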

Plates and Gates

Cerebras Systems, headquartered in Los Altos, California, says its Wafer Scale Engine is the largest microprocessor ever etched from a single silicon wafer. The size of a dinner plate and fabricated on a 16-nanometer process, the chip packs 1.2 trillion transistors, giving it 400,000 processing cores with which to execute machine-learning algorithms.

Xilinx, based in San Jose, is touting its new Virtex UltraScale+ field-programmable gate array, or FPGA. It packs nine million logic cells into a system-on-chip platform that delivers 1.5 terabits per second of memory bandwidth. Built on a 16-nanometer process, the chip carries 35 billion transistors and 1.6 times the logic density of its predecessor, the 20nm Virtex UltraScale.

To make their designs work, Cerebras and Xilinx both stepped back from the sector's race to shrink the distance between transistors. From 10 micrometers in 1971, designers have driven those feature sizes down to a few nanometers to fit more integrated circuits onto their chips.

Following Moore's Law, Korea's Samsung and Taiwan's TSMC began producing 5nm chips earlier this year, and both companies are working on 3nm designs that could hit the market in 2021. Shrinking the distance between circuits shaves processing time, but it also raises quality-control issues in manufacturing and cooling challenges in operation.
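Because density scales with area, a linear shrink pays off quadratically, which is what makes these node races so consequential. Here is a rough sketch, with the caveat that modern node names like "5nm" are marketing labels rather than literal transistor dimensions:

```python
# Idealized scaling: halving feature size quadruples how many
# transistors fit in a fixed area. Node names are treated here as
# literal dimensions, which modern "nm" labels are not.
def density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

print(density_gain(10_000, 5))  # 1971's 10 um vs. a 5nm node: 4,000,000x
print(density_gain(20, 16))     # Xilinx's 20nm -> 16nm step: ~1.56x,
                                # close to the 1.6x logic density it cites
```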

Going the Distance

TSMC signed on to fabricate the Cerebras Systems behemoth, which its designers say is optimized for deep learning – the training of layered neural networks that typically runs on data center servers. The Wafer Scale Engine is designed to be liquid-cooled.

The chip's huge size is aimed mostly at cloud service providers like Amazon Web Services, Microsoft and Google that rent processing power and storage to corporations and government agencies. While Cerebras says select customers are using its chip, it has yet to say when the Wafer Scale Engine will be available on the open market.

Xilinx scaled its latest FPGA down from the 20nm node of its predecessor. Doing so allowed it to forge more than 2,000 user input-output connections and achieve 4.5 terabits per second of transceiver bandwidth. With it, users can implement advanced SoC architectures or prototype their own, the company says.

To improve time to market, Xilinx offers co-validation that lets users integrate and customize hardware and software designs before physical parts become available. The feature is part of a development platform for the FPGA that includes debugging and visibility tools. The company plans to bring the Virtex UltraScale+ to market next year.

Go Big or Go Home

Given the lead times, both companies are playing catch-up in a market that has become adept at custom-tailoring silicon for specific workloads, driven by machine-learning applications that are advancing at a blinding pace.

According to OpenAI, a research organization that works to guide the development of artificial intelligence, the computational resources used to train the most advanced machine-learning models grew by a factor of 300,000 between 2012 and 2018.
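Those figures imply a growth curve far steeper than Moore's Law, as a quick calculation using the article's own numbers shows:

```python
import math

# How fast must compute double to grow 300,000x over the six
# years from 2012 to 2018 (per OpenAI's estimate)?
growth = 300_000
months = 6 * 12
doublings = math.log2(growth)                             # ~18.2 doublings
print(f"doubling every {months / doublings:.1f} months")  # ~4.0 months

# Moore's Law, by contrast, doubles roughly every 24 months.
```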

The rate means chip makers must redouble their efforts, both traditional and unconventional, to keep pace with demand.