
Competition in AI platform market to heat up in 2017

03 April 2017


Intel, Qualcomm, Nvidia, AMD vying for customers in AI-fueled markets


Major component manufacturers in the artificial intelligence (AI) market have all stepped up their efforts to develop more powerful processors for AI-fueled markets in 2017, including autonomous vehicles, enterprise drones, medical care, smart factories, image recognition, and general neural network research and development.

Intel’s Nervana platform is a $400 million investment in AI

Back in November, Intel announced what it claims is a comprehensive AI platform for data center and compute applications called Nervana, aimed squarely at Nvidia’s GPU solutions for enterprise users. The platform is the result of the chipmaker’s $400 million acquisition in August of Nervana Systems, a 48-person startup led by former Qualcomm researcher Naveen Rao. Built using FPGA technology and designed for highly optimized AI solutions, Nervana, Intel claims, will deliver up to a 100-fold reduction in the time it takes to train a deep learning model within the next three years.

The company intends to integrate Nervana technology into its Xeon and Xeon Phi processor lineups. During Q1 it will test the Nervana Engine chip, codenamed ‘Lake Crest,’ and make it available to key customers later in the year. The chip is specifically optimized for neural networks to deliver the highest performance for deep learning, with unprecedented compute density and a high-bandwidth interconnect.

For its Xeon Phi processor lineup, the company says that its next generation series codenamed “Knights Mill” is expected to deliver up to four times better performance for deep learning. Intel has also announced a preliminary version of Skylake-based Intel Xeon processors with support for AVX-512 instructions to significantly boost performance of inference tasks in machine learning workloads.

“Intel is committed to AI and is making major investments in technology and developer resources to advance AI for business and society,” said Intel CEO Brian Krzanich.

Nvidia partners with Microsoft on AI cloud computing platform

Early last month, Nvidia showed no signs of slowing its AI cloud computing efforts, announcing a partnership with Microsoft on a hyperscale GPU accelerator called HGX-1. The partnership includes integration with Microsoft’s Project Olympus, an open, modular hyperscale cloud hardware platform that includes a universal motherboard design (1U/2U chassis), high-density storage expansion, and a broad ecosystem of compliant hardware developed by the OCP community.

Nvidia claims that HGX-1 establishes an industry standard for cloud-based AI workloads, similar to what the ATX form factor did for PC motherboards more than two decades ago. The HGX-1 is powered by eight Tesla P100 accelerators connected through NVLink and the PCI-E standard. Nvidia’s hyperscale GPU accelerator will, it claims, allow cloud service providers to easily adopt Tesla and Quadro accelerator cards to meet the surging demand for AI computing. The company plans to host another GPU Technology Conference in May 2017, where it is expected to unveil more updates on its AI plans.

On the consumer front, Nvidia’s Shield platform integrates with Amazon Echo, Nest and Ring to provide customers with a "connected home experience", while Spot, its direct answer to Amazon Echo, brings ambient AI assistance into the living room. For drivers, the company’s latest AI car supercomputer, Xavier, is powered by an eight-core custom ARM64 CPU and a 512-core Volta-based GPU. The unit is designed to meet the ASIL D safety rating, the most stringent automotive safety integrity level, and can deliver 30 trillion deep learning operations per second in a 30W design.

Qualcomm’s acquisition of NXP signals investment in AI market

Back in October, San Diego-based Qualcomm agreed to buy NXP – the leader in high-performance, mixed-signal semiconductor electronics and a leading solutions supplier to the automotive industry – for $47 billion. Once combined, the two companies will represent a strong contender in the automotive, IoT, security and networking industries. With several automotive safety sensor IPs gained in the acquisition – including radar microcontroller units (MCUs), anti-lock braking systems, MCU-enabled airbags, and real-time tire pressure monitoring – Qualcomm is now positioned to become a "one-stop solution" for many automotive customers.

With its Snapdragon and Adreno graphics capabilities, the company is well positioned to compete with Nvidia in the automotive market and stands a much better chance of developing its self-driving car platform with the help of NXP and Freescale IP in its product portfolios.

AMD targets AI learning workloads with Radeon Instinct accelerators

Back in December, AMD also announced its strategy to push aggressively into the AI-related server computing business with the launch of new Radeon Instinct accelerator cards and MIOpen, a free, comprehensive open-source library for developing deep learning and machine intelligence applications.

The company’s Radeon Instinct series is expected to be a general-purpose solution for developing AI-based applications using deep learning frameworks, spanning autonomous vehicles, HPC, nano-robots, personal assistants, autopilot drones, telemedicine, smart homes, financial services and energy, among other sectors. Some analysts note that AMD is uniquely positioned with both x86 and GPU processor technologies, allowing it to meet a variety of data center needs on demand. The company has been developing what it claims is an efficient connection between the two processor types to meet AI’s growing technological demands.

The MIOpen deep learning library was expected to be released in Q1, but may have been delayed by a couple of weeks. Meanwhile, AMD’s Radeon Open Compute (ROCm) platform lets programmers focus on training neural networks through Caffe, Torch 7, and TensorFlow rather than spending time on mundane, low-level performance tuning tasks.
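To illustrate the point about frameworks, here is a minimal sketch in plain Python of the kind of training logic that libraries such as Caffe, Torch and TensorFlow let developers write directly, while the platform underneath (ROCm in AMD’s case, CUDA in Nvidia’s) handles the low-level performance tuning. The function name and data are purely illustrative, not from any vendor’s API.

```python
# Illustrative sketch: the essence of "training a network" that deep learning
# frameworks expose to programmers, stripped down to a single-weight model
# fitted by gradient descent. Real frameworks dispatch the heavy math to
# tuned GPU kernels so this loop stays this simple for the developer.

def train_linear(samples, lr=0.1, epochs=100):
    """Fit y = w * x to (x, y) pairs by minimizing squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x
            grad = 2 * (pred - y) * x   # derivative of (w*x - y)^2 w.r.t. w
            w -= lr * grad              # gradient descent step
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # samples of y = 2x
w = train_linear(data)
print(round(w, 3))  # converges toward 2.0
```

The division of labor shown here is exactly what AMD is pitching: the developer writes the model and training loop; MIOpen and ROCm supply the optimized primitives underneath.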

Last modified on 03 April 2017