Published in Cloud

AMD launches AMD Instinct MI100 accelerator

17 November 2020


HPC computing on Azure

AMD launched the new AMD Instinct MI100 accelerator with ROCm 4.0 open ecosystem support and has been showing off its list of AMD EPYC CPU and AMD Instinct accelerator-based deployments at this year's SC20 virtual tradeshow.

It is all part of AMD’s cunning plan to take a bigger slice of Intel’s HPC market, and so it has been mostly talking about its partnership with Microsoft on Azure for HPC in the cloud.

The outfit said that it is on track to begin volume shipments of the 3rd Gen EPYC processors with the “Zen 3” core to select HPC and cloud customers this quarter, in advance of the expected public launch in Q1 2021, aligned with OEM availability.

According to AMD, the new AMD Instinct MI100 accelerator is the world’s fastest HPC GPU accelerator for scientific workloads and the first to surpass the 10 teraflops (FP64) performance barrier.
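That 10 TFLOPS figure can be sanity-checked with a back-of-the-envelope calculation. The sketch below assumes the MI100's publicly listed specifications (7,680 stream processors, a boost clock of roughly 1.502 GHz, and FP64 running at half the FP32 rate), which are not stated in the article itself:

```python
# Rough peak-throughput estimate for the MI100.
# Assumed specs (from public spec sheets, not from this article):
stream_processors = 7680     # 120 compute units x 64 lanes
boost_clock_hz = 1.502e9     # ~1,502 MHz boost clock
flops_per_cycle = 2          # a fused multiply-add counts as two FLOPs
fp64_rate = 0.5              # FP64 assumed to run at half the FP32 rate

peak_fp32_tflops = stream_processors * boost_clock_hz * flops_per_cycle / 1e12
peak_fp64_tflops = peak_fp32_tflops * fp64_rate

print(f"Peak FP32: {peak_fp32_tflops:.1f} TFLOPS")  # ~23.1 TFLOPS
print(f"Peak FP64: {peak_fp64_tflops:.1f} TFLOPS")  # ~11.5 TFLOPS, over the 10 TFLOPS mark
```

Under these assumptions the FP64 peak lands at roughly 11.5 TFLOPS, comfortably past the 10 TFLOPS barrier the article mentions.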

AMD said that, built on its new AMD CDNA architecture, the AMD Instinct MI100 GPU enables a new class of accelerated systems for HPC and AI when paired with 2nd Gen AMD EPYC processors. Supported by new accelerated compute platforms from Dell, HPE, Gigabyte and Supermicro, the MI100, combined with AMD EPYC CPUs and ROCm 4.0 software, is designed to propel new discoveries ahead of the exascale era.

AMD Data Centre and Embedded Solutions Business Group senior vice president Forrest Norrod said: “No two customers are the same in HPC, and AMD is providing a path to today’s most advanced technologies and capabilities that are critical to support their HPC work, from small clusters on premise, to virtual machines in the cloud, all the way to exascale supercomputers.

“Combining AMD EPYC processors and Instinct accelerators with critical application software and development tools enables AMD to deliver leadership performance for HPC workloads.”

Azure is using 2nd Gen AMD EPYC processors to power its HBv2 virtual machines (VMs) for HPC workloads. These VMs offer up to 2x the performance of first-generation HB-series virtual machines, can support up to 80,000 cores for MPI jobs, and take advantage of 2nd Gen AMD EPYC processors’ up to 45 per cent more memory bandwidth than comparable x86 alternatives.

HBv2 VMs are used by numerous customers, including the University of Illinois at Urbana-Champaign’s Beckman Institute for Advanced Science & Technology, which used 86,400 cores to model a plant virus that previously required a leadership-class supercomputer, and the U.S. Navy, which rapidly deploys and scales enhanced weather and ocean pattern predictions on demand. HBv2, powered by 2nd Gen AMD EPYC processors, also provides the bulk of the CPU compute power for the OpenAI environment Microsoft announced earlier this year.

AMD said that its EPYC processors have helped HBv2 reach new cloud HPC milestones, such as a new record for cloud MPI scaling results with NAMD, Top 20 results on the Graph500, and the first 1 terabyte/sec cloud HPC parallel filesystem. Across these and other application benchmarks, HBv2 delivers 12x higher scaling than is found elsewhere on the public cloud.

Adding to its existing HBv2 HPC virtual machines powered by 2nd Gen AMD EPYC processors, Azure announced it will use next-generation AMD EPYC processors, codenamed “Milan”, for future HB-series VM products for HPC.

Last modified on 17 November 2020