As Intel’s competitors unveil smaller processors, the world’s largest chipmaker is claiming the size of its chips doesn’t really matter, even as it struggles to fit more circuits on its silicon.
Can stacking its chips in silicon help Intel keep up with its rivals’ innovations? The Silicon Valley giant recently introduced upgrades to a toolkit that aims to do exactly that.
Intel is packaging blocks of integrated circuits, called chiplets, into systems on chips (SoCs) that can be customized to run specific tasks.
It’s an approach that Intel says will handle data center workloads more efficiently than the traditional method of increasing processor power by simply adding more transistors to a chip.
Intel released details about its upgraded toolkit at last week’s Semicon West trade show in San Francisco, about the same time AMD shipped its new 7-nanometer chips to desktop computer makers.
Intel, which began shipping its new Ice Lake family of 10nm chips in May, says chipmakers must adopt new modular architectures to lower costs and improve computing performance, rather than relying on Moore’s Law. The 54-year-old proposition holds that chipmakers must double transistor counts on a circuit every 18 to 24 months to remain competitive. Many forecasters believe the law is approaching its limits and will evolve over the next few years.
Meanwhile, GPU maker Nvidia reported an increase in the number of data centers using its DGX hardware and software technology for data-hungry artificial intelligence applications. Intel is targeting the same type of AI applications with its modular designs.
The number of those data centers has reached 22, the company reported last week, and they’re located in regions outside North America, where the graphics designer initially pitched its so-called co-location program for the specialist servers.
Each of Nvidia’s DGX boxes contains an estimated two petaflops of processing power (a petaflop is the ability of a computer to perform 1 quadrillion floating-point operations per second), and they bested a string of machine-learning benchmarks in results released last week by a consortium of companies and universities.
But delivering that performance comes at a cost in both power and cooling, so not every data center can accommodate them, Nvidia says.
Intel has increasingly touted performance over real estate in disputing claims made by both AMD and Nvidia. At the same time, Intel is developing tools that improve the efficiency of modular chipsets, another step toward its goal of customizing chips for individual tasks.
Intel’s modular chipsets are built on three-dimensional packaging, called Foveros, that the chipmaker introduced at the beginning of the year. By stacking memory, CPUs, GPUs and caches, Foveros creates systems on chips from a variety of component architectures.
To boost bandwidth and lower power consumption, the multi-die interconnect bridge embedded in a chip package is expanded to link multiple chips built on the Foveros platform. The expansion allows the systems to function much like the single piece of silicon they replace. Called Co-EMIB, the technology forges high-density connections with the analog, memory and chiplet elements on the SoCs.
Those ties are made vertical with an Omni-Directional Interconnect (ODI), which uses channels running from the base layer to deliver power to component chiplets. Like the embedded bridges, ODIs also let SoCs act in concert to process workloads.
Featuring a library of circuit block designs, Intel’s new MDIO interface lets users manage data inputs and outputs. The die-to-die construction of the bus doubles pin speed and bandwidth density, Intel says.
Along with timing the toolkit updates to AMD’s July shipments of its Ryzen 7nm chips, Intel also challenged its rival when it announced the release date. At the E3 expo in Los Angeles, executives said the Ryzen 3000 series chips that AMD is targeting at the so-called creator class of content designers failed to outperform larger Intel chips in head-to-head gaming competition.
A similar public argument over benchmarking broke out in May, when Intel took shots at Nvidia over claims about neural networks facilitated by connecting its Tesla GPUs. Nvidia is aiming its DGX servers at data center operators, who can then sell that higher-capacity processing to their major corporate and research customers.
What’s more, both rivals are hard at work on modular designs. At an annual circuits-tech symposium in Kyoto, Japan, last month, Nvidia displayed a research chip made from dozens of chiplets.
Meanwhile, AMD’s modular approach underpins the Zen 2 architecture on which its Ryzen desktop chips are built. Zen 2 is also built into the Epyc chips the company aims at data center applications.
Such is the fleeting nature of competition: a partnership Intel formed two years ago with AMD to produce a GPU chipset challenging Nvidia’s graphics cards led to the bridging technology that Intel is now baking into its SoC designs.