
Article [engadget] An algorithm could make CPUs a cheap way to train AI

Hitman928

Platinum Member
Apr 15, 2012
2,773
2,122
136

Researchers say a new algorithm could make CPUs competitive again for neural network training.
 
  • Like
Reactions: maddie and john3850

eek2121

Senior member
Aug 2, 2005
653
584
136

Researchers say a new algorithm could make CPUs competitive again for neural network training.
I hear Python does a good job at this as well. ;)
 
  • Like
Reactions: Atari2600

moinmoin

Golden Member
Jun 1, 2017
1,779
1,785
106
Researchers say CPUs can be used universally and algorithms can be programmed to do anything you want, news at eleven!
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,440
375
126
Top-of-the-line GPU deep learning platforms cost $100 grand because they can sell them for that. Just like with the Xeon Platinum CPUs that recently took a price haircut, if they could sell those for $50K apiece they would. If this somehow "takes off" and CPUs become relevant for deep learning again, expect capable CPUs to start increasing in price along with it.
 

Hitman928

Platinum Member
Apr 15, 2012
2,773
2,122
136
Researchers say CPUs can be used universally and algorithms can be programmed to do anything you want, news at eleven!
The point isn't that CPUs can be used to train NNs; they've been used since the beginning. The point is that currently, GPUs and specialized hardware dominate the deep learning scene because they can do it so much faster than a CPU. This research suggests that the new algorithm would make CPUs competitive with (if not faster than) GPUs for at least certain types of NN training. This would be a huge deal not only for big-time AI companies and hardware vendors, but would also make training NNs much more accessible to everyone.
 
  • Like
Reactions: maddie

maddie

Diamond Member
Jul 18, 2010
3,293
2,080
136
The point isn't that CPUs can be used to train NNs; they've been used since the beginning. The point is that currently, GPUs and specialized hardware dominate the deep learning scene because they can do it so much faster than a CPU. This research suggests that the new algorithm would make CPUs competitive with (if not faster than) GPUs for at least certain types of NN training. This would be a huge deal not only for big-time AI companies and hardware vendors, but would also make training NNs much more accessible to everyone.
Exactly. I'm going to look more into this as it could be useful for some stuff I'm exploring.
 

soresu

Golden Member
Dec 19, 2014
1,413
589
136
Suddenly those telling me bfloat16 and matmul on CPU are pointless seem somewhat shortsighted.
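The bfloat16 point is concrete: bfloat16 keeps float32's full exponent range but only 8 bits of mantissa, so converting a float32 amounts to keeping the top 16 bits of its bit pattern. A minimal numpy sketch of that truncation (round-toward-zero variant; real hardware typically rounds to nearest, and the function name here is just illustrative):

```python
import numpy as np

def to_bfloat16(x):
    """Truncate float32 values to bfloat16 precision by zeroing the
    low 16 bits of each 32-bit pattern (round-toward-zero variant)."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

a = np.float32(3.1415927)
print(to_bfloat16(a))  # 3.140625: same exponent range, coarser mantissa
```

The appeal for training is that gradients tolerate the coarse mantissa, while the wide exponent range avoids the overflow/underflow headaches of fp16.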
 

Hitman928

Platinum Member
Apr 15, 2012
2,773
2,122
136
Suddenly those telling me bfloat16 and matmul on CPU are pointless seem somewhat shortsighted.
If this research pans out, then yes, expect more DL extensions to be added to CPUs. Maybe they'll have special SKUs for DL workloads as well; we'll see.
 

moinmoin

Golden Member
Jun 1, 2017
1,779
1,785
106
This research suggests that the new algorithm would make CPUs competitive with (if not faster than) GPUs for at least certain types of NN training.
Until those algorithms are adapted to FPGAs, which blow both CPUs and GPUs out of the water again.

Honestly, this area of research has insanely fast cycles, and this is a neat snapshot. But short of including specialized hardware, there's simply no way CPUs will stay competitive beyond the next refreshes of GPUs and FPGAs, even for certain types of NN training.
 

soresu

Golden Member
Dec 19, 2014
1,413
589
136
Until those algorithms are adapted to FPGAs, which blow both CPUs and GPUs out of the water again.

Honestly, this area of research has insanely fast cycles, and this is a neat snapshot. But short of including specialized hardware, there's simply no way CPUs will stay competitive beyond the next refreshes of GPUs and FPGAs, even for certain types of NN training.
Again, with none of those options being as efficient as custom-designed NN ASICs. Google are even using ML to optimise logic/wire placement in chip design now; I imagine gen 4 of their TPU will be a nice efficiency improvement when combined with whatever high-level changes they have planned.

Link to the article about ML logic/wire placement here.
 
  • Like
Reactions: moinmoin

soresu

Golden Member
Dec 19, 2014
1,413
589
136
An interesting implication of the article linked in my post above is that not only can ML placement be better than a human's, it can also be done in hours or days instead of weeks, meaning it could well be responsible for drastically streamlining chip development in an increasingly expensive era of leading-edge node shrinks.
 

ThatBuzzkiller

Senior member
Nov 14, 2014
977
127
106

This is the paper the article was talking about. A big limitation of the SLIDE algorithm so far is that the current implementation doesn't support convolutional layers, which means it can't be used to train CNNs (convolutional neural nets) or any other type of neural network with convolutional layers ...
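For context, the core trick in SLIDE is to use locality-sensitive hashing to select, per input, only the few neurons likely to have large activations, and skip the full matrix multiply over the rest. A toy Python sketch of that idea using SimHash-style bucketing (illustrative only; the sizes, names, and single hash table here are made up and far simpler than the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: 1000 hidden neurons over a 64-dim input.
n_neurons, dim, n_bits = 1000, 64, 8
W = rng.standard_normal((n_neurons, dim))

# SimHash: a vector's signature is the sign pattern of its
# projections onto a fixed set of random hyperplanes.
planes = rng.standard_normal((n_bits, dim))

def signature(v):
    bits = (planes @ v) > 0
    return int(np.packbits(bits)[0])  # n_bits <= 8 fits in one byte

# Pre-bucket every neuron by the signature of its weight vector.
buckets = {}
for i in range(n_neurons):
    buckets.setdefault(signature(W[i]), []).append(i)

def sparse_forward(x):
    """Compute activations only for neurons whose bucket matches the
    input's signature, instead of multiplying x by all of W."""
    active = buckets.get(signature(x), [])
    return active, W[active] @ x

x = rng.standard_normal(dim)
active, acts = sparse_forward(x)
print(f"computed {len(active)}/{n_neurons} neurons")
```

Because SimHash buckets vectors by angular similarity, the retrieved neurons are biased toward large dot products with the input, which is why a CPU doing this sparse lookup can compete with a GPU doing the dense multiply.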
 
  • Like
Reactions: maddie
