Next Step: 1000 core processor

hackmole

Senior member
Dec 17, 2000
250
3
81
The nice thing about this processor is not just the incredible speed bump you get from using all the cores together, but the fact that its power usage is so low it can run on a single AA battery. Such a processor may come to market faster than anyone thinks. Imagine playing Xbox games on your computer while rendering a two-hour 60 FPS 4K video and a 3D model of a full-size human, all at the same time, in 3 seconds. Yeah, that's what's going to happen, and probably within 2 years.
-----------------------------------------------------------------
check out the story at:

http://motherboard.vice.com/read/behold-the-worlds-first-1000-processor-chip
 
Last edited:

videogames101

Diamond Member
Aug 24, 2005
6,777
19
81
The nice thing about this processor is not just the incredible speed bump you get from using all the cores together, but the fact that its power usage is so low it can run on a single AA battery. Such a processor may come to market faster than anyone thinks. Imagine playing Xbox games on your computer while rendering a two-hour 60 FPS 4K video and a 3D model of a full-size human, all at the same time, in 3 seconds. Yeah, that's what's going to happen, and probably within 2 years.
-----------------------------------------------------------------
check out the story at:

http://motherboard.vice.com/read/behold-the-worlds-first-1000-processor-chip

That's not going to happen in 2 years, and not even in 10 years.
 

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
Technically speaking, GPUs consist of numerous cores, commonly past the 1,000 mark since the Radeon HD 5870.

If it's general purpose we're talking about, one could stick 1K ARM11 cores on a chip and call it a day. While it won't be fast, it still technically counts as a thousand-core processor. :p
 

HeXen

Diamond Member
Dec 13, 2009
7,831
37
91
Fine if it can be utilized for actual performance gains. Otherwise, to hell with cores.
 
Feb 25, 2011
16,790
1,472
126
I suspect that, with 48- to 72-core CPUs still extraordinarily rare, maybe the "Next Step" might "only" be 128-core CPUs.
 

Keljian

Member
Jun 16, 2004
85
16
71
x000 core processors already exist in the form of GPUs.

It is theoretically possible that you could have a computer based entirely around a GPU.
 

KTE

Senior member
May 26, 2016
478
130
76
Lol.

We're talking Larrabee and IBM Cell here. Don't awaken the giant...

Sent from HTC 10
(Opinions are own)
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,686
1,221
136
Just going to throw it out here: SuperCISC could be used with KiloCore. SuperCISC: think an FPGA with ASIC units.

5 cores = 1 FMAC
1000 cores = 200 FMAC units

Branching and threading would only be handled by a few cores.
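Taking that ratio at face value (my own reading of the numbers above, nothing official):

# Assumed grouping: five simple KiloCore cores ganged into one fused multiply-accumulate unit.
total_cores = 1000
cores_per_fmac = 5
print(total_cores // cores_per_fmac)  # 200 FMAC units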
 
Last edited:

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Lol.

We're talking Larrabee and IBM Cell here. Don't awaken the giant...

Sent from HTC 10
(Opinions are own)

Larrabee did make it to market and is used in supercomputers. They call it Xeon Phi. Though it looks like 61 cores (4 threads per core) is all they have atm.
 
Last edited:

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Guys, it's called a GPU; AMD has a 4096-"core" processor already.

From the article:
Unfortunately, a 1,000 core chip isn't something that could just be plugged into the next line of MacBook Pros. It wouldn't even really suffice as a graphics processor, where massively parallel computation is the norm. In fact, many GPUs exceed the 1,000 cores of the UC Davis chip, but with the caveat that the individual cores are directed according to a central controller. The KiloCore, by contrast, is built from completely independent cores capable of running completely independent computer programs.

They're talking about something a bit different from a GPU.
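To make that difference concrete (my own rough illustration, not anything from the article): on a GPU, the thousands of "cores" are really SIMD lanes marching in lockstep under a central scheduler, whereas KiloCore's cores can each run their own independent program. A toy Python sketch, using worker processes to stand in for cores:

# Toy illustration only - not real GPU or KiloCore code.
from multiprocessing import Pool

# GPU/SIMD style: every "lane" executes the same function, only the data differs.
def square(x):
    return x * x

# KiloCore/MIMD style: each "core" can run a completely different program.
def count_primes(n):
    return sum(all(i % d for d in range(2, i)) for i in range(2, n))

def reverse_text(s):
    return s[::-1]

if __name__ == "__main__":
    with Pool(4) as pool:
        # Same program on every lane, different data (SIMD-like)
        print(pool.map(square, [1, 2, 3, 4]))           # [1, 4, 9, 16]
        # Different programs on different cores (MIMD-like)
        jobs = [pool.apply_async(count_primes, (100,)),
                pool.apply_async(reverse_text, ("kilocore",))]
        print([j.get() for j in jobs])                  # [25, 'erocolik']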
 

Doom2pro

Senior member
Apr 2, 2016
587
619
106
I wish LTspice were capable of running on GPUs... Now there is a program that would gladly gobble up as many cores as it could get.
 

wingman04

Senior member
May 12, 2016
393
12
51
GPUs run in parallel; CPUs have to do both serial and parallel computing.
 
Last edited:

jhu

Lifer
Oct 10, 1999
11,918
9
81
I thought the cores in a GPU are all capable of general-purpose computing, but they suck at it because branch misses (and other things) are awful on them.
 
Last edited:

wingman04

Senior member
May 12, 2016
393
12
51
GPUs are programmable through their drivers and do highly parallel processing. CPUs are x86; they can work on many different program threads one after another.
 

NTMBK

Lifer
Nov 14, 2011
10,237
5,020
136
A GPU "core" is more like a CPU SIMD lane. In GPU terms, a 4770k is a 64 core processor.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
I thought the cores in a GPU are all capable of general-purpose computing, but they suck at it because branch misses (and other things) are awful on them.

Not really; what Nvidia calls cores on its GPUs are just SIMD lanes. It's like saying that a 4-core Skylake is really 64 cores because of the 2x256-bit SIMD units on each core.

GP100 could be called something like 2 cores per SM, or up to 120 cores.
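Back-of-the-envelope math for that comparison (the widths and core counts here are my own assumed figures, so treat them as illustrative):

# Counting "cores" the way Nvidia does, i.e. one per SIMD lane (assumed widths).
skylake_cores = 4
fma_units_per_core = 2                    # two 256-bit FMA units per core (assumed)
fp32_lanes_per_unit = 256 // 32           # 8 single-precision lanes per 256-bit unit
print(skylake_cores * fma_units_per_core * fp32_lanes_per_unit)   # 64 "cores"

gp100_cuda_cores = 3840                   # marketed FP32 "core" count (assumed)
simd_width = 32                           # lanes per SIMD group (assumed)
print(gp100_cuda_cores // simd_width)     # 120 -> roughly 2 per SM across 60 SMs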
 

ehume

Golden Member
Nov 6, 2009
1,511
73
91
Anybody pay attention to who the primary customer is? I wonder what is the Department of Defense's interest in this?
 
Feb 25, 2011
16,790
1,472
126
Anybody pay attention to who the primary customer is? I wonder what is the Department of Defense's interest in this?
Could be decrypting ISIS transmissions, could be spying on law-abiding American citizens, could be hosting a secure VDI for offsite employees.

Probably all three.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Anybody pay attention to who the primary customer is? I wonder what is the Department of Defense's interest in this?

DARPA is (I believe) formally part of the DoD, and it's specifically tasked with funding all sorts of far-out ideas to make sure America doesn't get surprised. This sort of thing would fit very well.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,846
3,189
126
That's not going to happen in 2 years, and not even in 10 years.

+1, because we don't even have I/O that's capable of that kind of bandwidth.

SSDs won't cut it, and in fact even PCIe 3.0 won't do it.

We're gonna need straight-up Asgard control crystals from Stargate SG-1 to support that kind of bandwidth.

The writer of that article is a complete moron; I bet he even still thinks Moore's law is in effect, when the creator himself has said it's now dead.
 
Last edited: