Real-time AI gets closer with Project Brainwave

Microsoft already offers the Azure ML platform for developing machine learning applications in the cloud, and Windows ML lets you bring your models to desktop PCs and edge systems using the ONNX standard. Now it’s bringing machine learning to a new platform: Azure’s high-performance FPGA (field-programmable gate array) systems, through a public beta of its Project Brainwave service, originally announced a year ago.
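To give a sense of what the ONNX step looks like in practice, here is a minimal sketch of exporting a trained model from PyTorch into the ONNX format that ONNX-aware runtimes such as Windows ML can load. This is an illustration, not part of Microsoft’s announcement; the model, input shape, and file name are hypothetical placeholders.

```python
# Minimal sketch: export a PyTorch model to ONNX for use by
# ONNX-compatible runtimes. The model below is a stand-in; in
# practice you would export your own trained network.
import torch
import torch.nn as nn

# Placeholder model (hypothetical architecture).
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Dummy input with the shape the model expects; used to trace the graph.
dummy_input = torch.randn(1, 128)

# Serialize the traced graph to an .onnx file.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["logits"],
)
```

The resulting model.onnx file can then be loaded by any runtime that supports the ONNX standard.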

FPGAs: Hardware for machine learning

General-purpose CPUs like those in our PCs, our datacenters, and the public cloud aren’t the fastest way to process data. They’re designed to be adaptable, running many different workloads. That gives them an economic advantage, as manufacturers can make many millions of them with no need to know how they’re going to be used. But if you go back to the early days of computing, many of the fastest systems were single-purpose, using dedicated silicon to solve specific problems.

That approach isn’t really feasible today outside of scientific research or the military, where there’s the budget to build single-purpose machines. So how can the cloud deliver supercomputer-like performance on commodity hardware for workloads such as machine learning?

Luckily, there’s a middle road: programmable silicon. FPGAs are reconfigurable arrays of logic gates, arranged as blocks that can be wired together to implement specific functions. Unlike traditional logic circuits, they often also contain memory elements, increasing the complexity of the functions that can be implemented. But FPGAs aren’t for everyone, because they can’t be programmed in familiar languages or with everyday development tools. To get the most out of an FPGA, you need to use silicon design tools and languages to define the functions you want it to deliver.
