
GPUs Will Have 20 Teraflops of Performance by 2015 - Nvidia Chief Scientist.


Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Kakkoii
Originally posted by: Idontcare
Originally posted by: scooterlibby
Guys guys, let's not fight. MIMD relies on instruction sets that are massively parallel, yet infinitely unitary. Think of a MIMD as a random walk with drift and time trend model, the command function kaleidoscopes DIMM and SATA units for in-order reprieves, demanding superscalar multithreading on GPGPU + SOI processes. It's similar to the Hessian matrix, except you take the first third derivative to maximize core output. Now, I am an industry insider, and you may be thinking "This is all BS, what do SATA and DIMM have to do with anything ever?" Well, I assure you that my inside knowledge may render the explanation useless, but take comfort in its indisputable truisms.

Based on insider info, I know this to be true firsthand. The value of the jacobian transformation cannot be overstated when it comes to leveraging MIMD to optimal effect.

Think you could explain MIMD in a simplified/dumbed down way for the rest of us? :p

Sorry Kakkoii, we were just being sarcastic. Scooterlibby's post and mine use a bunch of real terms (Hessian and Jacobian are actual mathematical entities, not fictional), but it's all pieced together into a steaming pile of nonsense for humor purposes.

Nothing in those two posts has anything to do with the reality of MIMD, or GPUs, or insider info. It's all tongue in cheek.

As for actually explaining MIMD...yeah if I ever figure it out myself then I'd be happy to explain it. At this time though anything I would have to say on the matter would be misinformation at best, or just more first third derivatives of the adjunct to the jacobian matrix at worst.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Well I don't think that is too ridiculous, especially with new architectures and improvements in efficiency - and FLOPS figures don't mean much for real performance anyway, given how easily they can be misrepresented; otherwise the PS3 would still be more powerful than almost any PC CPU + GPU combination. R700 provides 2.4 TFLOPS on 55nm technology, and could just as well provide the same performance at 65nm. By 2015, 16nm technology should be readily available, and I don't think an 8x perf. boost going from 65nm -> 16nm is too crazy. Of course, saying that R700 provides 2.4 TFLOPS of performance is pretty deceptive; in the real world, performance likely does not reach that number. But anyway, just look at recent years: in 2007 you have R600 @ 477 GFLOPS, a year later you have RV770 @ 1.2 TFLOPS and R700 @ 2.4 TFLOPS.
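
Rough numbers behind that (my own back-of-envelope, assuming ideal areal scaling, not anything from the article):

$$\left(\frac{65\,\mathrm{nm}}{16\,\mathrm{nm}}\right)^2 \approx 16.5\times \ \text{(ideal density gain)}, \qquad 2.4\ \mathrm{TFLOPS} \times 8 \approx 19.2\ \mathrm{TFLOPS} \approx 20\ \mathrm{TFLOPS},$$

so an 8x claim is actually conservative relative to ideal density scaling.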

 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Fox5
My guess is that MIMD applies multiple instructions simultaneously to multiple data packets. Like add, multiply, and move all at once.

SIMD is a single stream of instructions, operating on multiple streams of data.
So with SIMD you get 'pure' parallelism in the sense that you are performing the same operation on a wide range of data. For example, if you're running pixel shader programs, you want to run the same program on millions of pixels. So effectively you just have one program, one stream of instructions, and you have millions of pixels of data to run it on.

MIMD means that you have multiple independent streams of instructions, which operate on multiple independent streams of data.
Given the above example, in theory this could mean that every single pixel could be running a different program. The programs can be independent, and have their own state.
In a way this would be what a regular multicore CPU or multiprocessor system does: each core can run its own thread/program independently of the other cores in the system.

However, as I already said, in practice I don't believe that nVidia goes down to 'single data' level (the example of running a different program for EVERY pixel), because firstly that requires a LOT of extra control logic (I think it's virtually impossible to build a GPU this way with current technology), and secondly, it isn't that interesting.
So what I think will happen is that you keep your SIMD processors more or less like they are today (grouped in 'multiprocessors', each having a 32-wide SIMD unit), but they are making them decoupled, so you can have a different program per multiprocessor. This way you can have a handful of different programs running at the same time, being a good compromise between the simplicity and efficiency of SIMD, and the flexibility of a multicore CPU.
In practice this is probably far more interesting for GPGPU than it is for graphics... because as I said, you generally have millions of pixels to shade with the same program. So in graphics, the MIMD system will mostly run the same program on all its processors, effectively acting as a SIMD system again.
It may be able to balance load between different kinds of shaders better though. That may be especially interesting when running GPGPU and graphics loads together, such as with accelerated physics.
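
To make the "different program per multiprocessor" idea concrete, here's a toy CUDA sketch (entirely my own illustration, not anything nVidia has described - the kernel and names are made up). Each thread block picks its own "program", while the threads inside a block still march through the same instruction stream, SIMD-style:

```cuda
// Toy sketch only: illustrates MIMD-at-the-cluster-level, not any real GPU's scheduler.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void perBlockProgram(float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // All threads within a block take the same branch (SIMD-like inside the block),
    // but different blocks can follow entirely different "programs" (MIMD/MPMD-like across blocks).
    if (blockIdx.x % 2 == 0)
        out[i] = float(i) * 2.0f;      // "program A": scale
    else
        out[i] = float(i) + 100.0f;    // "program B": offset
}

int main()
{
    const int n = 1 << 20;             // ~1M elements; think "a million pixels"
    float* d_out = nullptr;
    cudaMalloc(&d_out, n * sizeof(float));

    perBlockProgram<<<(n + 255) / 256, 256>>>(d_out, n);
    cudaDeviceSynchronize();

    float a, b;
    cudaMemcpy(&a, d_out,       sizeof(float), cudaMemcpyDeviceToHost);  // block 0, thread 0
    cudaMemcpy(&b, d_out + 256, sizeof(float), cudaMemcpyDeviceToHost);  // block 1, thread 0
    printf("block 0 ran program A: %f\n", a);   // 0 * 2   = 0
    printf("block 1 ran program B: %f\n", b);   // 256+100 = 356

    cudaFree(d_out);
    return 0;
}
```

On today's hardware all the blocks still come from one kernel; the interesting part of a MIMD/MPMD-style design would be letting different clusters run genuinely different kernels at the same time, which is where the load-balancing benefit I mention above would come from.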
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
Originally posted by: Scali
Originally posted by: Fox5
My guess is that MIMD applies multiple instructions simultaneously to multiple data packets. Like add, multiply, and move all at once.

SIMD is a single stream of instructions, operating on multiple streams of data.
So with SIMD you get 'pure' parallelism in the sense that you are performing the same operation on a wide range of data. For example, if you're running pixel shader programs, you want to run the same program on millions of pixels. So effectively you just have one program, one stream of instructions, and you have millions of pixels of data to run it on.

MIMD means that you have multiple independent streams of instructions, which operate on multiple independent streams of data.
Given the above example, in theory this could mean that every single pixel could be running a different program. The programs can be independent, and have their own state.
In a way this would be what a regular multicore CPU or multiprocessor system does: each core can run its own thread/program independently of the other cores in the system.

However, as I already said, in practice I don't believe that nVidia goes down to 'single data' level (the example of running a different program for EVERY pixel), because firstly that requires a LOT of extra control logic (I think it's virtually impossible to build a GPU this way with current technology), and secondly, it isn't that interesting.
So what I think will happen is that you keep your SIMD processors more or less like they are today (grouped in 'multiprocessors', each having a 32-wide SIMD unit), but they are making them decoupled, so you can have a different program per multiprocessor. This way you can have a handful of different programs running at the same time, being a good compromise between the simplicity and efficiency of SIMD, and the flexibility of a multicore CPU.
In practice this is probably far more interesting for GPGPU than it is for graphics... because as I said, you generally have millions of pixels to shade with the same program. So in graphics, the MIMD system will mostly run the same program on all its processors, effectively acting as a SIMD system again.
It may be able to balance load between different kinds of shaders better though. That may be especially interesting when running GPGPU and graphics loads together, such as with accelerated physics.

Yeah, here's how BSN explained how it's being implemented on the GT300:

Even though it shares the same first two letters with GT200 architecture [GeForce Tesla], GT300 is the first truly new architecture since SIMD [Single-Instruction Multiple Data] units first appeared in graphical processors.

GT300 architecture groups processing cores in sets of 32 - up from 24 in GT200 architecture. But the difference between the two is that GT300 parts ways with the SIMD architecture that dominates the GPU architecture of today. GT300 cores rely on MIMD-similar functions [Multiple-Instruction Multiple Data] - all the units work in MPMD mode, executing simple and complex shader and computing operations on the go. We're not exactly sure whether we should continue to use the word "shader processor" or "shader core", as these units are now almost on equal terms with the FPUs inside the latest AMD and Intel CPUs.

http://brightsideofnews.com/ne...-revealed---its-a-cgpu!.aspx

Seems to be about the same as what you're proposing: that it would be the sets of cores, not each core itself.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Astrallite
Considering it took nVidia about 3 years to go from 500 GFLOPS to 900 GFLOPS, assuming a hyperbolic curve, we'd be EXTREMELY generous to say 8 TFLOPS by 2015. In fact, if we stay on a linear path, 4 TFLOPS for a single GPU core is more likely.

If the metric here, Teraflops, is taken as simply "peak" FLOPS for some token computation, then I suppose it is possible.

The process technology cadence will provide the linear path to 4 TFLOPS with today's architecture: engineers take the areal density improvements and just pack in more SPs and clockspeed bumps.

But they don't have to do so little; they can also add more performance per mm^2 by improving the architecture (MIMD vs SIMD is just an example).

Maybe that (ISA and architecture improvements) accounts for the extra 5x peak-TFLOPS increase Dally foresees coming down the pipe, and explains the gap between the linear extrapolation to 4 TFLOPS and his expectation of 20 TFLOPS?
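
Spelling out that gap (just restating the numbers above):

$$\underbrace{4\ \mathrm{TFLOPS}}_{\text{process-only extrapolation}} \times \underbrace{5}_{\text{architecture/ISA gains}} = 20\ \mathrm{TFLOPS}\ \text{(Dally's figure)}.$$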
 

poohbear

Platinum Member
Mar 11, 2003
2,284
5
81
Originally posted by: dguy6789
Originally posted by: poohbear
I love threads that speculate about the future. I hope my vid card cooks for me and gives me free massages by 2015, but I won't bet on it.

In all honesty, if development continues at the current pace, I doubt that'll be true. We've had refreshes of the 8800GT for the past 2 years, and performance has not doubled at the 8800GT price point (when they were released they were $250, and now a $250 graphics card isn't gonna give you double the performance). As the article states, they used to double in performance every year.

Is MIMD a fancy way of saying GPUs will be multithreaded like current dual/quad core CPUs?

http://www.newegg.com/Product/...x?Item=N82E16814102809

Ah yes, but that's 2 cards in one, and some games don't benefit from Crossfire.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Is MIMD a fancy way of saying GPUs will be multithreaded like current dual/quad core CPUs?

Only if we go backwards about 13 years in progress. GPUs are FAR more multithreaded than any CPU, to a staggering degree.

On the general increase in computational power: the 8800 Ultra is 514 GFLOPS, the GTX 280 is 933 GFLOPS. Right now computational power in GPUs is growing faster than transistor count, because the percentage of the die dedicated to traditional rasterization is going down at a fairly good rate. While GT2x0 to GT3x0 may be a 50% increase in xtors, it could be closer to a 100% increase in xtors dedicated to pure FP throughput. This doesn't just go for nV; ATi is in the same situation. I think 20 TFLOPS by 2015 is very reasonable.
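
As a rough sanity check on the growth rate that would require (my own arithmetic, using the peak numbers above and GTX 280's 2008 launch):

$$\frac{20{,}000\ \mathrm{GFLOPS}}{933\ \mathrm{GFLOPS}} \approx 21.4\times, \qquad \log_2 21.4 \approx 4.4\ \text{doublings in}\ {\sim}7\ \text{years} \approx \text{one doubling every}\ 19\ \text{months},$$

which is in line with a Moore's-law-like cadence even before counting the shift of die area toward FP units.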
 

Obsoleet

Platinum Member
Oct 2, 2007
2,181
1
0
It's not crazy to wonder if Nvidia will still be around in 2015 (post Larrabee). Both AMD and Intel can push for that scenario. If NV still has a market, it will be in portable devices, which probably wouldn't be 20 teraflops. I think it's a little premature to announce where they're going to be in 2015. I'm hoping for their survival now that they picked a fight with Intel. Intel has a use for AMD for antitrust; Nvidia is expendable.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Obsoleet
It's not crazy to wonder if Nvidia will still be around in 2015 (post Larrabee). Both AMD and Intel can push for that scenario.

If there is going to be any pushing, I think AMD will be the first to go. nVidia has a considerably larger market share than AMD in the GPU market, has better developer relations, and is an all-around stronger brand.
nVidia is also better equipped to fight Intel, with its mature GPGPU solutions, such as Cuda and PhysX, and now nVidia is the first to support OpenCL as well.
AMD is fighting on both fronts, CPU and GPU, and not really winning on either. For years AMD hasn't had the fastest products in either market, so they only competed on price. This is taking its toll on AMD's financial situation.
 

Nathelion

Senior member
Jan 30, 2006
697
1
0
Originally posted by: Idontcare
Originally posted by: scooterlibby
Guys guys, let's not fight. MIMD relies on instruction sets that are massively parallel, yet infinitely unitary. Think of a MIMD as a random walk with drift and time trend model, the command function kaleidoscopes DIMM and SATA units for in-order reprieves, demanding superscalar multithreading on GPGPU + SOI processes. It's similar to the Hessian matrix, except you take the first third derivative to maximize core output. Now, I am an industry insider, and you may be thinking "This is all BS, what do SATA and DIMM have to do with anything ever?" Well, I assure you that my inside knowledge may render the explanation useless, but take comfort in its indisputable truisms.

Based on insider info, I know this to be true firsthand. The value of the jacobian transformation cannot be overstated when it comes to leveraging MIMD to optimal effect.

rofl
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
It's not crazy to wonder if Nvidia will still be around in 2015 (post Larrabee). Both AMD and Intel can push for that scenario.

If AMD stock keeps trending down nV will be able to buy them outright with straight cash. You clearly don't understand much about the business world if you don't understand how weak AMD is in their market compared to nV atm.

I'm hoping for their survival now that they picked a fight with Intel. Intel has a use for AMD for antitrust; Nvidia is expendable.

Have you considered the possibility that nV may be waiting for Larrabee so they can buy out AMD in its entirety when they no longer have antitrust issues? That would lock up pretty much the entire console market for them, give them what will end up being well over 90% of the desktop GPU market, and an x86 license they can use. Even by Intel's own optimistic claims, they are hoping they can compete with mid-range parts from nV when Larrabee hits.

Intel and nVidia aren't really going to be fighting for the GPU market; Intel doesn't stand a chance there. They are fighting for the HPC market, where Intel stands to lose a ton of money and margins (and nV stands to lose, oh yeah, nothing ;) ).
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: BenSkywalker
Have you considered the possibility that nV may be waiting for Larrabee so they can buy out AMD in its entirety when they no longer have antitrust issues? That would lock up pretty much the entire console market for them, give them what will end up being well over 90% of the desktop GPU market, and an x86 license they can use.

Unless some favorable negotiations with Intel are undertaken to restructure the existing x86 license, the license is non-transferable even under M&A activities so NV would not gain a license by buying AMD.

They would gain all the AMD IP that Intel needs to lawfully produce and sell their own CPUs, so favorable negotiations might very well be plausible. But it's not a foregone conclusion by any means. I haven't seen the IP licenses to know for sure, but it would not be beyond Intel's practice to have structured the licenses such that they explicitly perpetuate beyond M&A, so as to take continuity risk out of the equation.

This is the same fatal flaw with the recurring rumors of NV buying Via for their x86 license; the license would not transfer there either.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
How close are we to having these GPUs act as general-purpose processors anyway? If they get there, I could see Nvidia working with Microsoft to write a version of Windows that runs entirely on the GPU. No need for an x86 license.
 

geoffry

Senior member
Sep 3, 2007
599
0
76
Originally posted by: BenSkywalker
It's not crazy to wonder if Nvidia will still be around in 2015 (post Larrabee). Both AMD and Intel can push for that scenario.

If AMD stock keeps trending down nV will be able to buy them outright with straight cash. You clearly don't understand much about the business world if you don't understand how weak AMD is in their market compared to nV atm.

I'm hoping for their survival now that they picked a fight with Intel. Intel has a use for AMD for antitrust; Nvidia is expendable.

Have you considered the possibility that nV may be waiting for Larrabee so they can buy out AMD in its entirety when they no longer have antitrust issues? That would lock up pretty much the entire console market for them, give them what will end up being well over 90% of the desktop GPU market, and an x86 license they can use. Even by Intel's own optimistic claims, they are hoping they can compete with mid-range parts from nV when Larrabee hits.

Intel and nVidia aren't really going to be fighting for the GPU market; Intel doesn't stand a chance there. They are fighting for the HPC market, where Intel stands to lose a ton of money and margins (and nV stands to lose, oh yeah, nothing ;) ).

I can't see either US or Euro regulators letting the #1 and #2 players in GPUs merge... even if NVDA could buy them for cash.

INTC would need to be seen as a solid competitor in GPUs before antitrust regulators let AMD/NVDA merge.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Unless some favorable negotiations with Intel are undertaken to restructure the existing x86 license, the license is non-transferable even under M&A activities so NV would not gain a license by buying AMD.

If AMD holds onto their license under the current suit by Intel, nVidia will be able to get around that one, which is why I think Intel is pushing hard for them to lose their license on the grounds that the company has changed.

I can't see either US or Euro regulators letting the #1 and #2 players in GPUs merge... even if NVDA could buy them for cash.

INTC would need to be seen as a solid competitor in GPUs before antitrust regulators let AMD/NVDA merge.

Intel owns ~50% of the GPU market worldwide. Are you going to try to explain to a lawyer the difference between discrete and integrated? Right now, nV has ~30% and ATi ~15%; their merger would be seen as increasing competitive pressure on Intel (we would know better, but the average consumer and some idiot regulator wouldn't).
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Genx87
How close are we to having these GPUs act as general-purpose processors anyway? If they get there, I could see Nvidia working with Microsoft to write a version of Windows that runs entirely on the GPU. No need for an x86 license.

That isn't going to work anyway. You need x86 for more than just the OS alone; none of the applications would work either.
That's aside from the fact that you'll probably always want to have a CPU core, even in a system on a chip such as nVidia's Tegra, which combines an ARM core with an nVidia chipset and GPU for mobile devices.
Since GPUs and CPUs are designed for completely different tasks, I don't see either one replacing the other anytime soon.
The Cell in the PS3 has the same problem... It can run PowerPC code, so you can run linux on it... but it's way too slow for most tasks. Cell is mainly interesting for lots of vector processing, such as in 3d games with graphics, physics, AI and such. As a regular CPU running a conventional OS and applications, it's nowhere near as good.
 

geoffry

Senior member
Sep 3, 2007
599
0
76
Originally posted by: BenSkywalker
Unless some favorable negotiations with Intel are undertaken to restructure the existing x86 license, the license is non-transferable even under M&A activities so NV would not gain a license by buying AMD.

If AMD holds onto their license under the current suit by Intel, nVidia will be able to get around that one, which is why I think Intel is pushing hard for them to lose their license on the grounds that the company has changed.

I can't see either US or Euro regulators letting the #1 and #2 players in GPUs merge... even if NVDA could buy them for cash.

INTC would need to be seen as a solid competitor in GPUs before antitrust regulators let AMD/NVDA merge.

Intel owns ~50% of the GPU market worldwide. Are you going to try to explain to a lawyer the difference between discrete and integrated? Right now, nV has ~30% and ATi ~15%; their merger would be seen as increasing competitive pressure on Intel (we would know better, but the average consumer and some idiot regulator wouldn't).

I would hope logical people are in charge of those decisions... not often the case, but I would hope they understand.

My thinking was that once (if?) Larrabee gains significant traction in the market, an NVDA/AMD merger would be allowed.
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
Originally posted by: Scali
Originally posted by: Genx87
How close are we to having these GPUs act as general-purpose processors anyway? If they get there, I could see Nvidia working with Microsoft to write a version of Windows that runs entirely on the GPU. No need for an x86 license.

That isn't going to work anyway. You need x86 for more than just the OS alone; none of the applications would work either.
That's aside from the fact that you'll probably always want to have a CPU core, even in a system on a chip such as nVidia's Tegra, which combines an ARM core with an nVidia chipset and GPU for mobile devices.
Since GPUs and CPUs are designed for completely different tasks, I don't see either one replacing the other anytime soon.
The Cell in the PS3 has the same problem... It can run PowerPC code, so you can run linux on it... but it's way too slow for most tasks. Cell is mainly interesting for lots of vector processing, such as in 3d games with graphics, physics, AI and such. As a regular CPU running a conventional OS and applications, it's nowhere near as good.

Nvidia has already worked with Microsoft to GPGPU accelerate many things in Windows 7, using DirectX11's native GPU computing.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Originally posted by: Kakkoii
Nvidia has already worked with Microsoft to GPGPU accelerate many things in Windows 7, using DirectX11's native GPU computing.

Yes, but that's something different from what Genx87 said.
He wanted to eliminate the CPU altogether, and run the entire OS on a GPU alone.
 

dunno99

Member
Jul 15, 2005
145
0
0
Originally posted by: Scali
Originally posted by: Kakkoii
Nvidia has already worked with Microsoft to GPGPU accelerate many things in Windows 7, using DirectX11's native GPU computing.

Yes, but that's something different from what Genx87 said.
He wanted to eliminate the CPU altogether, and run the entire OS on a GPU alone.

Not arguing against anyone's points, but x86 isn't going to die anytime soon. However, saying that Intel can stop nVidia from making x86 CPUs is a bit misleading. The fact is that the patents on the 32-bit x86 design have long expired (it dates to 1985, so even with 20-year patent coverage they lapsed by around 2005), and nVidia can go ahead and make a 32-bit x86 CPU. However, SSE, MMX, and whatnot are what's preventing nVidia from making a good x86 CPU.

Furthermore, if nVidia bought AMD and, with Via more or less out of the picture, Intel then tried to stop (the new) nVidia from making x86 CPUs, nVidia could certainly sue Intel for anti-competitive practices.