Move Over, Moore’s Law: Make Way for Huang’s Law

But Huang did make sure nobody attending GTC missed the memo.

Just how fast does GPU technology advance? In his keynote address, Huang pointed out that GPUs today are 25 times faster than they were five years ago. If they had been advancing according to Moore’s Law, he said, their speed would have increased only by a factor of 10.
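That factor of 10 squares with classic Moore’s-Law scaling, assuming a doubling every 18 months; Huang didn’t specify the doubling period he used, so treat this as a back-of-the-envelope check:

```python
# Back-of-the-envelope check of the Moore's Law comparison.
# Assumption: performance doubles every 18 months (Huang did not state a period).
years = 5
doubling_period_years = 1.5
moores_law_factor = 2 ** (years / doubling_period_years)
print(f"Moore's Law projection over {years} years: {moores_law_factor:.1f}x")  # ~10.1x
print("Claimed GPU speedup over the same period: 25x")
```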

Huang later framed the increasing power of GPUs in terms of another benchmark: the time to train AlexNet, a neural network trained on 15 million images. Five years ago, he said, the training run took six days on two of Nvidia’s GTX 580s; on the company’s latest hardware, the DGX-2, it takes 18 minutes, a factor of roughly 500.
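Those numbers roughly check out: six days is 8,640 minutes, and 8,640 divided by 18 is 480, close to the factor of 500 Huang cited. A quick sketch of the arithmetic:

```python
# Rough check of the AlexNet training-speedup figure.
minutes_on_gtx580s = 6 * 24 * 60  # six days on two GTX 580s = 8,640 minutes
minutes_on_dgx2 = 18              # reported DGX-2 training time
print(f"Speedup: {minutes_on_gtx580s / minutes_on_dgx2:.0f}x")  # 480x
```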

So Huang was throwing a variety of numbers out there; it seems he’s still working out the exact multiple he’s talking about. But he was clear about the reason that GPUs need a law of their own—they benefit from simultaneous advances on multiple fronts: architecture, interconnects, memory technology, algorithms, and more.

“The innovation isn’t just about chips,” he said. “It’s about the entire stack.”

GPUs are also advancing more quickly than CPUs because they rely upon a parallel architecture, Jesse Clayton, an Nvidia senior manager, pointed out in another session.
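As a purely illustrative sketch (not from Clayton’s session), here is that difference in miniature: the same elementwise multiply written as a serial loop and as a single data-parallel array operation, the style of workload a GPU spreads across thousands of cores at once.

```python
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Serial style: one element at a time, the way a single CPU core works.
out_serial = np.empty_like(a)
for i in range(a.size):
    out_serial[i] = a[i] * b[i]

# Data-parallel style: one operation over the whole array, the kind of
# workload a GPU can execute across thousands of cores simultaneously.
out_parallel = a * b

assert np.allclose(out_serial, out_parallel)
```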

A version of this post appears in the May 2018 print magazine.
