IBM Telum: A New Chapter in Real-Time Fraud Detection


IBM Telum is Big Blue’s attempt at delivering AI-driven processing right where the data is located. The new processor enables AI inference to generate insights for real-time commercial and security applications. 

IBM on Monday unveiled a new processor that leverages artificial intelligence to detect possible fraud while a transaction is still in progress. Dubbed the IBM Telum Processor, it is designed specifically to thwart fraud across several industries in real time.

IBM’s latest innovation comes just months after the company introduced the world’s first 2 nm silicon chip design. That 2 nm breakthrough, however, is years away from broad-scale production, and by extension from adoption. Even IBM’s 7 nm chip, announced years earlier, has yet to reach the market.

The nanometer-scale nomenclature, i.e., labeling a process node with an ‘nm’ figure, ran its course back in 1997; the number no longer has any material or technical basis. For the sake of familiarity, though, let us continue using it for a while. IBM Telum is also built on a 7 nm process, and the chip has been in development for three years.

“Three years in development, the breakthrough of this new on-chip hardware acceleration is designed to help customers achieve business insights at scale across banking, finance, trading, insurance applications and customer interactions,” IBM stated.

IBM Telum accelerates AI inferencing in systems where it is installed. The new chip is expected to be useful in AI implementations, particularly where AI processing is needed when the data resides. The security of corporate systems is one of the areas that can benefit from IBM Telum’s AI inference capabilities.

Christian Jacobi, IBM distinguished engineer and chief architect of IBM Telum, told Toolbox:

“The IBM Telum Processor is the company’s first IBM Z processor that contains on-chip acceleration for AI inferencing. This innovation will allow inferencing to occur where the data resides meaning it enables real time fraud detection – not just after a transaction has taken place. By featuring a centralized design to handle AI specific workloads, it will allow businesses to efficiently run AI at scale increasing the value and impact it will bring to our clients.”

What Is AI Inference?

AI inference is the process of feeding live data points to a trained machine learning or deep learning model to generate a desired output. Put simply, it is an AI system in production, as opposed to one in training. Inference is the second phase of the AI lifecycle, the first being the training phase.

In the training phase, the ML/DL model, which is essentially software built on mathematical algorithms, is trained on relevant subsets of historical data. Inference is what comes after: the trained model generates live outputs from a feed of real-time data.

So we could say that inference without training is impossible.
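
To make the two phases concrete, here is a minimal, purely illustrative sketch in Python with scikit-learn. The synthetic features, toy fraud label, and choice of logistic regression are assumptions for demonstration only and say nothing about the models Telum will actually run.

```python
# Minimal sketch of the two AI lifecycle phases on synthetic transaction data.
# Features, labels and model are illustrative assumptions, not IBM's.
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Phase 1: training ---------------------------------------------------
# Historical, labeled transactions: [amount, hour_of_day, distance_from_home_km]
rng = np.random.default_rng(42)
X_train = rng.random((1000, 3)) * [5000, 24, 1000]
y_train = (X_train[:, 0] > 4000).astype(int)        # toy "fraud" label

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# --- Phase 2: inference --------------------------------------------------
# A live transaction arrives; the trained model scores it immediately.
live_transaction = np.array([[4750.0, 3, 820.0]])
fraud_probability = model.predict_proba(live_transaction)[0, 1]
print(f"Fraud probability: {fraud_probability:.2f}")
```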


One of the biggest challenges in AI inference is low throughput and high latency when the model is deployed away from the system where the data resides. This is a major reason why organizations are often too late in detecting security issues and fraud. “Due to latency requirements, complex fraud detection often cannot be completed in real-time – meaning a bad actor could have already successfully purchased goods with a stolen credit card before the retailer is aware fraud has taken place,” IBM explained.
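
A rough sketch of that latency gap, with all numbers invented for illustration: if scoring requires a round trip to a separate system, the answer can arrive only after the transaction has already been authorized.

```python
# Conceptual sketch of the latency gap described above. The budget and the
# simulated round-trip time are invented for illustration, not IBM figures.
import time

LATENCY_BUDGET_MS = 10          # assumed window for blocking a transaction in-line
SIMULATED_ROUND_TRIP_MS = 40    # assumed cost of calling an off-platform scoring service

def score_remotely(transaction: dict) -> float:
    """Stand-in for a call to a fraud-scoring service on another system."""
    time.sleep(SIMULATED_ROUND_TRIP_MS / 1000)   # network + queueing delay
    return 0.87                                  # dummy fraud score

start = time.perf_counter()
score = score_remotely({"amount": 4750.0, "merchant": "example"})
elapsed_ms = (time.perf_counter() - start) * 1000

if elapsed_ms > LATENCY_BUDGET_MS:
    # By now the payment has already been authorized, so the fraud can only
    # be flagged after the fact -- exactly the gap IBM describes.
    print(f"Score {score:.2f} arrived in {elapsed_ms:.0f} ms: too late to block")
```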

IBM Telum

IBM Telum is designed to reduce inference latency in ML deployments. It accelerates AI inference on the chip itself, processing real-time input data locally on the machine where the data is hosted. The chip is also optimized for the demands of heterogeneous, enterprise-class workloads.
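
As a loose analogy for what “inference where the data resides” buys you, the sketch below scores a transaction with a model held in the same process as the data, so no network hop is involved. Plain Python with scikit-learn stands in for the on-chip accelerator; this is not IBM’s API, and the model, features, and threshold are invented.

```python
# Analogy for on-chip, in-transaction inference: the model lives on the same
# system as the data, so scoring is a local call with no network round trip.
import time
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy model trained next to the data (same setup as the earlier training sketch).
rng = np.random.default_rng(0)
X = rng.random((1000, 3)) * [5000, 24, 1000]
model = LogisticRegression(max_iter=1000).fit(X, (X[:, 0] > 4000).astype(int))

def score_in_transaction(features: np.ndarray) -> float:
    """Score with the data already in local memory."""
    return float(model.predict_proba(features)[0, 1])

start = time.perf_counter()
probability = score_in_transaction(np.array([[4750.0, 3, 820.0]]))
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"Fraud probability {probability:.2f} scored locally in {elapsed_ms:.3f} ms")
if probability > 0.9:                            # assumed decision threshold
    print("Decline before the transaction completes")
```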

“From an AI point of view, I have been listening to our clients for several years and they are telling me that they can’t run their AI deep learning inferences in their transactions the way they want to. They really want to bring AI into every transaction,” Ross Mauri, general manager of IBM Z, told reporters.

“And the types of clients I am talking to are running 1,000, 10,000, 50,000 transactions per second. We are talking high volume, high velocity transactions that are complex, with multiple database reads and writes and full recovery for transactions in banking, finance, retail, insurance and more.”

IBM Telum has a dual-chip module design containing 22 billion transistors and 19 miles of wire on 17 metal layers. Some technical specifications of IBM Telum:

  • 8 processor cores with a deep superscalar, out-of-order instruction pipeline
  • Clocked at over 5 GHz
  • Features a redesigned cache and chip-interconnection infrastructure
  • 32 MB of cache per core
  • Scalable up to 32 Telum chips
  • Built on a 7 nm process node

The new chip is manufactured on Samsung’s 7 nm process using extreme ultraviolet (EUV) lithography, a highly advanced and cost-intensive technique. Besides Samsung, only TSMC currently uses EUV in chip manufacturing; both companies rely on it for their respective 5 nm chips. Intel will also use EUV to manufacture Intel 4 and Intel 3, its first- and second-generation 7 nm-class nodes.

Closing Thoughts

On-chip AI acceleration could prove to be a game-changer for a majority of corporate systems. It has the potential not only to detect fraud but also to prevent it in real time. IBM Telum’s capabilities also make it well suited to use cases where on-hand analytics are a critical, or at least important, business function: banking and finance, trading, insurance transactions, customer support, and other commercial dealings, all in real time.

Since IBM Telum is based on a 7 nm process, it is expected to reach the market well before IBM’s 2 nm chip. IBM said it plans to release a Telum-based system in the first half of 2022.

Let us know if you enjoyed reading this story on LinkedIn, Twitter, or Facebook. We would love to hear from you!