
Next Step: 1000 core processor

hackmole

Senior member
The nice thing about this processor is not just the incredible speed bump it will possess through combined usage, but the fact that its power usage is so low it can run on an AA battery. Such a processor may come to market faster than anyone thinks. Imagine playing Xbox games on your computer while rendering a two-hour 60 FPS 4K video and rendering a 3D model of a full-size human, all at the same time, in 3 seconds. Yeah, that's what's going to happen, and probably within 2 years.
-----------------------------------------------------------------
check out the story at:

http://motherboard.vice.com/read/behold-the-worlds-first-1000-processor-chip
 
The nice thing about this processor is not just the incredible speed bump it will possess through combined usage, but the fact that its power usage is so low it can run on an AA battery. Such a processor may come to market faster than anyone thinks. Imagine playing Xbox games on your computer while rendering a two-hour 60 FPS 4K video and rendering a 3D model of a full-size human, all at the same time, in 3 seconds. Yeah, that's what's going to happen, and probably within 2 years.
-----------------------------------------------------------------
check out the story at:

http://motherboard.vice.com/read/behold-the-worlds-first-1000-processor-chip

That's not going to happen in 2 years, and not even in 10 years.
 
Technically speaking, GPUs consist of numerous cores, commonly past the 1,000 mark since the Radeon HD 5870.

If it's general purpose we're talking about, one could stick 1K ARM11 cores on a chip and call it a day. While it won't be fast, it still technically counts as a thousand-core processor. 😛
 
x000 core processors already exist in the form of GPUs.

It is theoretically possible that you could have a computer based entirely around a GPU.
 
Lol.

This is talking Larrabee and IBM Cell here. Don't awaken the giant...

Sent from HTC 10
(Opinions are own)
 
Just going to throw it out here: SuperCISC could be used with KiloCore. SuperCISC: think FPGA with ASIC units.

5 Cores = 1 FMAC
1000 Cores = 200 FMAC ops

Branching and threading would only be handled by a few cores.
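The ratio above works out as quick arithmetic. A minimal sketch, taking the post's "5 cores = 1 FMAC" grouping as the stated assumption:

```python
# Quick arithmetic check of the core-to-FMAC ratio stated above.
# Assumption (from the post, not verified): grouping 5 simple cores
# yields 1 fused multiply-accumulate (FMAC) unit.
cores_per_fmac = 5
total_cores = 1000

fmac_units = total_cores // cores_per_fmac
print(fmac_units)  # 200 FMAC ops per cycle under this grouping
```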
 
Lol.

This is talking Larrabee and IBM Cell here. Don't awaken the giant...

Sent from HTC 10
(Opinions are own)

Larrabee did make it to market and is used in supercomputers. They call it Xeon Phi, though it looks like 61 cores (4 threads per core) is all they have atm.
 
Guys, it's called a GPU; AMD has a 4096-"core" processor already.

From the article:
Unfortunately, a 1,000 core chip isn't something that could just be plugged into the next line of MacBook Pros. It wouldn't even really suffice as a graphics processor, where massively parallel computation is the norm. In fact, many GPUs exceed the 1,000 cores of the UC Davis chip, but with the caveat that the individual cores are directed according to a central controller. The KiloCore, by contrast, is built from completely independent cores capable of running completely independent computer programs.

They are talking about something a bit different than a GPU.
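The article's distinction is essentially MIMD versus SIMD: fully independent cores each running their own program, versus many lanes marching to one controller. A toy Python sketch of the difference, with OS processes standing in for independent cores (illustrative only, not real hardware):

```python
# Sketch: "independent cores running independent programs" (MIMD, like
# KiloCore) versus one controller issuing the SAME operation to many
# lanes (SIMD, like a GPU). Pure-Python illustration.
from multiprocessing import Pool

def count_primes(limit):
    # One "program": count primes below `limit` by trial division.
    return sum(all(n % d for d in range(2, n)) for n in range(2, limit))

def sum_squares(limit):
    # A completely different "program" running at the same time.
    return sum(n * n for n in range(limit))

def simd_style(xs):
    # SIMD: every lane applies the same operation to its own element.
    return [x * 2 for x in xs]

if __name__ == "__main__":
    with Pool(2) as pool:
        # MIMD: two workers run two different programs concurrently.
        r1 = pool.apply_async(count_primes, (100,))
        r2 = pool.apply_async(sum_squares, (100,))
        print(r1.get(), r2.get())  # 25 328350
    print(simd_style([1, 2, 3, 4]))  # [2, 4, 6, 8]
```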
 
I wish LTspice were capable of running on GPUs... Now there is a program that would gladly gobble up as many cores as it wanted.
 
I thought the cores in a GPU were all capable of general-purpose computing, but that they suck at it because branch misses (and other things) are awful on them.
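The branch pain comes from lock-step SIMD execution: when lanes in a group disagree on a branch, the hardware typically runs both paths and masks off the inactive lanes. A toy Python model of that cost (illustrative only, not any real GPU's scheduler):

```python
# Toy model of SIMD branch divergence: a "warp" of lanes executes in
# lock-step. If all lanes agree on a branch, the warp pays for one path;
# if they diverge, both paths run (inactive lanes masked off).
def warp_steps(conditions, then_cost=1, else_cost=1):
    took_then = any(conditions)                # some lane takes the if
    took_else = any(not c for c in conditions)  # some lane takes the else
    return then_cost * took_then + else_cost * took_else

print(warp_steps([True] * 32))                 # 1: all lanes converged
print(warp_steps([True] * 16 + [False] * 16))  # 2: divergent, both paths run
```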
 
GPUs are programmable through their drivers; they do highly parallel processing. CPUs are x86: they can work on many different program threads, one after the other.
 
I thought the cores in a GPU were all capable of general-purpose computing, but that they suck at it because branch misses (and other things) are awful on them.

Not really; what nVidia calls cores on its GPUs are just SIMD lanes. It's like saying that a 4 core Skylake is really 64 cores because of the 2x256-bit SIMD units on the cores.

GP100 could be called something like 2 cores per SM, or up to 120 cores.
 
Anybody pay attention to who the primary customer is? I wonder what is the Department of Defense's interest in this?
 
Anybody pay attention to who the primary customer is? I wonder what is the Department of Defense's interest in this?
Could be decrypting ISIS transmissions, could be spying on law-abiding American citizens, could be hosting a secure VDI for offsite employees.

Probably all three.
 
Anybody pay attention to who the primary customer is? I wonder what is the Department of Defense's interest in this?

DARPA is (I believe) formally part of the DoD, and it's specifically tasked with funding all sorts of far-out ideas to make sure America doesn't get surprised. This sort of thing would fit very well.
 
That's not going to happen in 2 years, and not even in 10 years.

+1, because we don't even have I/O that's capable of that type of bandwidth.

SSDs won't cut it; in fact, even PCIe 3.0 won't do it.

We're gonna need straight-up Asgard control crystals from Stargate SG-1 to support that kind of bandwidth.

The writer of that article is a complete moron; I bet he even still thinks Moore's law is in effect, when the creator himself said it's dead now.
 