Intel reveals its first AI chips
New processors will handle AI workloads in data center environments
Europost
Intel this week unveiled two new processors designed for large computing centers, the chipmaker's first built specifically for artificial intelligence (AI) workloads. The two chips are the company's first offerings from its Nervana Neural Network Processor (NNP) line: one is used to train AI systems, while the other handles inference.
The first chip is the Nervana NNP-T, codenamed Spring Crest. Intended for training, it comes with 24 Tensor processing clusters designed specifically to power neural networks. Training is where most of the heavy number crunching and vector math takes place, as the software pores over batches of information to learn patterns in the data. Spring Crest achieves this with Intel's new system on a chip (SoC), which is claimed to provide everything users need to train an AI system on dedicated hardware. Intel claims the component can hit 119 trillion operations per second (TOPS) using a mixture of BFloat16 arithmetic with FP32 accumulation, and do so within a given power budget. The chip is also designed with flexibility in mind, offering a balance between compute, communication, and memory, and is optimized for batched workloads.
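The BFloat16-with-FP32-accumulation pattern mentioned above can be illustrated in software. The sketch below (a simplified illustration, not Intel's implementation) simulates bfloat16 by truncating float32 values to their top 16 bits, then performs a dot product whose partial sums accumulate at full float32 precision:

```python
import numpy as np

def to_bfloat16(x):
    """Simulate bfloat16 by zeroing the low 16 mantissa bits of float32.
    bfloat16 keeps float32's 8 exponent bits but only 7 mantissa bits."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & 0xFFFF0000).view(np.float32)

def bf16_dot_fp32_accum(a, b):
    """Dot product with bfloat16 inputs and float32 accumulation:
    inputs are low-precision, but the running sum stays full precision,
    limiting rounding-error growth over long reductions."""
    a16 = to_bfloat16(a)
    b16 = to_bfloat16(b)
    acc = np.float32(0.0)
    for x, y in zip(a16, b16):
        acc += np.float32(x) * np.float32(y)  # accumulate in fp32
    return acc
```

The design tradeoff this models is that bfloat16 halves memory and bandwidth per value while retaining float32's dynamic range, which is why it suits neural-network training.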
The Nervana NNP-I, codenamed Spring Hill, is the company's inference SoC. It is built on Intel's 10-nanometer manufacturing process and features Ice Lake cores, allowing it to handle heavy workloads while using minimal amounts of energy.
"The Intel Nervana NNP-I offers a high degree of programmability without compromising performance or power efficiency. As AI becomes pervasive across every workload, having a dedicated inference accelerator that is easy to program, has short latencies, has fast code porting and includes support for all major deep learning frameworks allows companies to harness the full potential of their data as actionable insights," Intel explains.
Both the Nervana NNP-T and NNP-I were designed to compete with Google's Tensor Processing Unit, Nvidia's NVDLA-based tech and Amazon's AWS Inferentia chips.
Naveen Rao, Vice President and General Manager of Intel's Artificial Intelligence Products Group, explained how the company's new processors will help facilitate a future where AI is everywhere, saying:
“To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources," he said, adding that data centers and the cloud need to have access to performant and scalable general purpose computing and specialised acceleration for complex AI applications.
"In this future vision of AI everywhere, a holistic approach is thus needed - from hardware to software to applications,” Rao added.