The job, handled by Ansys Fluent for Baker Hughes, involved crunching a 2.2-billion-cell axial turbine simulation. On a traditional setup with 3,700 CPU cores, it took 38.5 hours. Plugged into Frontier with 1,024 MI250X accelerators and matching EPYC CPUs, the time dropped to 1.5 hours—fast enough to turn design iteration into something approaching real-time.
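As a back-of-the-envelope check, those figures work out to roughly a 25.7x wall-clock speedup (a quick sketch using only the numbers above, not anything from the Fluent benchmark itself):

```python
# Runtime figures reported for the 2.2-billion-cell turbine case.
cpu_hours = 38.5   # traditional setup: 3,700 CPU cores
gpu_hours = 1.5    # Frontier: 1,024 MI250X accelerators

speedup = cpu_hours / gpu_hours
print(f"Wall-clock speedup: {speedup:.1f}x")  # prints "Wall-clock speedup: 25.7x"
```

Note this compares wall-clock time only; it says nothing about energy use or cost per run, which depend on hardware not described here.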
Frontier, once the fastest box on the planet before being nudged aside by El Capitan, runs on 9,408 EPYC processors and 37,632 MI250X GPUs. El Capitan, meanwhile, flexes 44,544 of AMD's newer MI300A accelerators. Neither machine dents Nvidia's AI GPU dominance, but in raw compute AMD is not to be underestimated.
This CFD run didn't even use the full power of Frontier: it ran on just 1,024 of the machine's 37,632 MI250X GPUs, under 3 per cent of the total. There's still a load of horsepower left idling, meaning these gains are only the beginning. And with rivals like Nvidia hogging AI mindshare, the run shows AMD's kit can handle bleeding-edge workloads at scale.
AMD data centre VP Brad McCredie said: “By scaling high-fidelity CFD simulation software to unprecedented levels with the power of AMD Instinct GPUs, this collaboration demonstrates how cutting-edge supercomputing can solve some of the toughest engineering challenges.”
AMD’s Achilles’ heel remains its software stack. Data centres still tend to reach for Nvidia because its driver support is tighter and the dev ecosystem is more mature. Tiny Corp’s TinyBox system struggled with Radeon RX 7900 XTX cards until Lisa Su stepped in. And even then, TinyBox shipped an Nvidia version — and recommended it.
If AMD can sort out the software side, there’s no reason it shouldn’t be chewing up more of the AI and HPC market.