Poll/discussion: do you think multi-GPU/multi-die is the future?

Brunnis

Senior member
Nov 15, 2004
I don't think multi-chip GPUs or SLI/Crossfire are the future. Although this approach does alleviate the yield problem, another showstopper remains: power consumption. Power consumption depends largely on the process technology and the transistor count, so splitting a large chip into two smaller ones does nothing to reduce it. As a result, it still won't be practical to build a multi-chip solution with considerably more computational resources than the largest single-chip solutions, which means the potential performance gain from this approach is rather small.

In the end, we are very much limited by the progress of transistor manufacturing technology, and multi-chip/multi-GPU designs don't change that in any way. Granted, they could possibly allow for lower prices (if the manufacturers decide to pass the savings on to us), but that's pretty much it.
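To put rough numbers on that point (a back-of-the-envelope sketch only; every constant below is invented for illustration): dynamic power scales roughly with switched capacitance × voltage² × frequency, and switched capacitance tracks transistor count, so two half-size dies on the same process draw about the same total power as one big die.

```
// Back-of-the-envelope sketch: dynamic power ~ alpha * C * V^2 * f, with
// switched capacitance C roughly proportional to transistor count.
// Every constant here is made up purely for illustration.
#include <cstdio>

int main() {
    const double cap_per_transistor = 1.0e-15; // farads, invented figure
    const double alpha = 0.15;                 // activity factor, invented
    const double voltage = 1.1;                // volts
    const double freq = 600e6;                 // 600 MHz core clock

    auto dynamic_power = [&](double transistors) {
        return alpha * cap_per_transistor * transistors * voltage * voltage * freq;
    };

    // One big die vs. two half-size dies on the same process, same clocks:
    double one_big   = dynamic_power(700e6);
    double two_small = 2.0 * dynamic_power(350e6);

    printf("one 700M-transistor die  : %.1f W\n", one_big);
    printf("two 350M-transistor dies : %.1f W\n", two_small);
    return 0;
}
```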
 

Janooo

Golden Member
Aug 22, 2005
Originally posted by: Extelleron
Originally posted by: Aberforth
No, the processor splits the task whether the application is multi-threaded or not. A single-core processor can also manage multiple threads; this is done by a process called thread scheduling.

And this is why we see 0% performance improvement from multi-core processors in single-threaded apps.....

It's not completely 0%. There is a small improvement. A single core would have to handle all the running processes; with multiple cores, the single-threaded application gets more CPU time, more cache, and so on. You could say the single-threaded application gets a core to itself and doesn't have to share it with the rest of the world.
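You can see both effects with a toy test like the following (a rough sketch; absolute timings depend entirely on your system). One serial compute loop is timed while a few "background" threads burn CPU: on a multi-core chip the OS pushes the background load onto the other cores and the timed loop barely notices, whereas on a single core everything would be time-sliced onto the same core.

```
// Rough sketch: one single-threaded workload is timed while "background"
// threads burn CPU. On a multi-core CPU the OS schedules the background
// load onto the other cores, so the timed loop is barely affected.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::atomic<bool> stop{false};

    // Stand-ins for background processes (indexer, AV scan, ...).
    std::vector<std::thread> background;
    for (int i = 0; i < 3; ++i)
        background.emplace_back([&stop] { while (!stop) { /* spin */ } });

    // The "single-threaded application": one serial compute loop.
    auto t0 = std::chrono::steady_clock::now();
    double sum = 0.0;
    for (long i = 1; i <= 200000000L; ++i) sum += 1.0 / i;
    auto t1 = std::chrono::steady_clock::now();

    stop = true;
    for (auto& t : background) t.join();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
    printf("serial loop took %lld ms (sum = %f)\n", (long long)ms, sum);
    return 0;
}
```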
 

taltamir

Lifer
Mar 21, 2004
Originally posted by: Extelleron
Originally posted by: Aberforth
Originally posted by: Extelleron
Originally posted by: BFG10K
Intel doesn't need any software to synchronize the two dies of Kentsfield/Yorkfield; they work together exactly as if they were a single die containing four cores. Why can't GPUs be the same? I don't see any reason why not.
Yes, but in order to take advantage of multiple cores, the software that runs on them has to be multi-threaded and not have too many inter-thread dependencies. If it isn't, it'll run no faster on a quad core than it does on a single core. I think this is the point you're missing.

Now if Intel shipped a single core four times faster than previous single cores, all CPU-limited software would automatically run four times faster regardless of whether it was threaded properly or not. That is the point I'm making when I state single core is more robust than multi-core.

With multi-GPUs the same applies, but instead of the games it's the driver that does most of the heavy lifting to enable multiple GPUs to scale to higher levels of performance than a single core. Without proper scaling you've basically got a working single core with the rest of the cores acting as paperweights, so you gain absolutely nothing from having them there.

That's different, though; you are talking about multi-core CPUs, which do need software written for them.

AMD's quad-core Phenom has all four cores on one die, and it needs software that supports multi-core. Intel has four cores spread across two connected dies, and those need multi-core software as well. It has nothing to do with the fact that there are two dies; that is just the nature of multi-core processing.

I think you are not quite clear on how threading works. Any multi-core CPU acts as a set of independent processors, and the scheduler splits the task into a set of parallel threads. In fact, when you are on Windows, it does make use of multiple cores whether the application is optimized for multi-threading or not. Windows is a virtual layer for interacting with hardware: its memory addressing is virtual, so the multi-threading model is also virtual (not directly tied to the hardware layer). So in Windows, an application can create as many threads as it wants, and the OS decides how those threads are scheduled over the cores. For example, a quad core can run 4 threads in hardware while the application uses something like 100 threads; that whole set is scheduled over the 4 cores. You can override this default behaviour by creating a dedicated thread, which is very useful for special applications that do compression, video/audio decoding, physics in games, etc.
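The 100-threads-on-4-cores point looks like this from the application side (a minimal sketch with standard C++ threads; the OS, not the program, decides which core each thread lands on):

```
// Sketch of oversubscription: the application creates far more threads
// than the machine has hardware threads, and the OS scheduler time-slices
// them across whatever cores exist. Nothing here pins work to a core.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    unsigned hw = std::thread::hardware_concurrency(); // e.g. 4 on a quad core
    printf("hardware threads: %u\n", hw);

    std::atomic<long> total{0};
    std::vector<std::thread> workers;

    // 100 application threads, scheduled by the OS over the available cores.
    for (int i = 0; i < 100; ++i) {
        workers.emplace_back([&total] {
            long local = 0;
            for (int j = 0; j < 1000000; ++j) local += j % 7;
            total += local;
        });
    }
    for (auto& t : workers) t.join();

    printf("done, total = %ld\n", (long)total);
    return 0;
}
```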

---

Also, with GPUs: Vista introduces a new driver model (which NV has had a hard time learning). This model introduces video memory paging, so when GPU memory runs low, data can be paged out to system RAM or the hard drive. To counter the slowdown it uses thread scheduling between shader programs, although DX9-class apps cannot take advantage of it. Resources are also shared across many processes, so DX10-class GPUs are heavily dependent on multi-core. These things are not done by the GPU drivers.
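The "spill to system RAM when video memory runs low" idea can be sketched by hand with the CUDA runtime, which also shows why paged-out resources are slow. This is only an illustration of the concept, not how the WDDM driver model is actually implemented, and the thresholds are arbitrary:

```
// Sketch of "spill to system RAM when VRAM is low", done by hand with the
// CUDA runtime. Not how the Vista/WDDM driver model is implemented -- just
// an illustration of why paged-out resources cost performance.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaSetDeviceFlags(cudaDeviceMapHost);       // allow mapped host memory

    size_t free_bytes = 0, total_bytes = 0;
    cudaMemGetInfo(&free_bytes, &total_bytes);
    printf("VRAM: %zu MB free of %zu MB\n", free_bytes >> 20, total_bytes >> 20);

    const size_t need = 256u << 20;              // want a 256 MB buffer (arbitrary)
    void* dev_ptr = nullptr;

    if (free_bytes > need + (64u << 20)) {
        // Enough VRAM: allocate on the card, full-speed access.
        cudaMalloc(&dev_ptr, need);
        printf("allocated in VRAM\n");
    } else {
        // Low on VRAM: fall back to mapped (zero-copy) system memory.
        // The GPU then reads it over PCIe, which is far slower than VRAM.
        void* host_ptr = nullptr;
        cudaHostAlloc(&host_ptr, need, cudaHostAllocMapped);
        cudaHostGetDevicePointer(&dev_ptr, host_ptr, 0);
        printf("spilled to system RAM (zero-copy)\n");
    }

    cudaDeviceReset();
    return 0;
}
```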

And this changes what I said how? Applications cannot take advantage of multi-core processors if they are not coded to spawn multiple threads. An application not coded for multi-core will not utilize multi-core. Other apps, like Cinebench, will spawn as many threads as your CPU supports: dual-core CPU, 2 threads; quad-core, 4 threads; quad-core with 2-way SMT, 8 threads.
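In code, the "as many threads as the CPU supports" pattern is roughly this (a sketch; Cinebench obviously renders real tiles, here each worker just grinds through a dummy slice of the work):

```
// Sketch of the Cinebench-style pattern: ask how many hardware threads the
// CPU exposes (2 on dual core, 4 on quad, 8 on quad with 2-way SMT) and
// spawn exactly that many workers, each taking a slice of the job.
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;                       // fallback if the count is unknown
    const long total_work = 80000000L;       // dummy workload
    std::vector<double> partial(n, 0.0);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < n; ++t) {
        workers.emplace_back([t, n, total_work, &partial] {
            // Each worker handles an interleaved slice of the iterations.
            for (long i = t; i < total_work; i += n)
                partial[t] += 1.0 / (i + 1);
        });
    }
    for (auto& w : workers) w.join();

    double sum = 0.0;
    for (double p : partial) sum += p;
    printf("spawned %u worker threads, sum = %f\n", n, sum);
    return 0;
}
```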

Not exactly sure what you are trying to say here.

The key here is that on a CPU the extra cores are simply available, and they can be utilized at anywhere from 0% (single-threaded apps, although technically the other cores will be running all the background processes, so there will be a slight benefit) to 100% (distributed computing... which scales perfectly across any number of cores).
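The usual way to put numbers on that 0%-to-100% range is Amdahl's law, speedup = 1 / ((1 - p) + p/n) for a parallel fraction p on n cores. A quick table:

```
// Quick numbers for that 0%..100% range using Amdahl's law:
// speedup = 1 / ((1 - p) + p / n), where p is the parallel fraction of the
// work and n is the number of cores.
#include <cstdio>

int main() {
    const double fractions[] = {0.0, 0.5, 0.9, 1.0}; // single-threaded .. perfectly parallel
    const int cores[] = {2, 4, 8};

    for (double p : fractions) {
        for (int n : cores) {
            double speedup = 1.0 / ((1.0 - p) + p / n);
            printf("parallel fraction %.1f on %d cores -> %.2fx\n", p, n, speedup);
        }
    }
    return 0;
}
```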

With GPUs the same thing happens: the various GPUs are available for software to take advantage of, anywhere from 0% to 100%.
CUDA-type scientific software often reaches 100% by running parallel tasks that are not related and require no inter-device communication. With such a workload you can actually run 4 NVIDIA video cards on a 790FX board with 4 PCIe slots; because there is no need for an SLI or CF interconnect you are platform agnostic, and the cards each operate independently.
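A rough sketch of what that kind of CUDA setup does: each card gets its own buffer and its own kernel launch, with no device-to-device communication and no SLI/CF bridge involved (the kernel here is a trivial stand-in for real scientific code):

```
// Sketch of independent multi-GPU CUDA work: each device gets its own
// buffer and its own kernel launch, with no device-to-device communication
// and no SLI/CF bridge involved. The kernel is a trivial stand-in.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void dummy_work(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

int main() {
    int device_count = 0;
    cudaGetDeviceCount(&device_count);           // e.g. 4 cards in 4 PCIe slots
    const int n = 1 << 20;
    std::vector<float*> buffers(device_count, nullptr);

    // Launch one independent job per card; launches are asynchronous, so
    // all cards end up crunching at the same time.
    for (int dev = 0; dev < device_count; ++dev) {
        cudaSetDevice(dev);                      // subsequent calls target this card
        cudaMalloc(&buffers[dev], n * sizeof(float));
        cudaMemset(buffers[dev], 0, n * sizeof(float));
        dummy_work<<<(n + 255) / 256, 256>>>(buffers[dev], n);
    }

    // Wait for each card to finish its own job, then clean up.
    for (int dev = 0; dev < device_count; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
        cudaFree(buffers[dev]);
    }
    printf("ran independent kernels on %d device(s)\n", device_count);
    return 0;
}
```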

With games and video rendering, the game sends commands to DirectX or OpenGL, which sends commands to the driver, which is the software running on the multiple GPUs, AND it requires them to communicate with each other via special proprietary methods (you need the right motherboard).

Having the GPUs appear to the OS as individual processors, the way multi-core CPUs do, is the absolute WORST thing you could want for a video game. The game CAN, however, work WITH the driver to optimize usage in a multi-GPU scenario, but in the end it is the driver that is the "software" running on the multiple processors.
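To make the "the driver is the software" point concrete, here is a toy alternate-frame-rendering loop written by hand. This is not how the NVIDIA/AMD drivers are actually structured; it just shows the shape of the idea: even frames go to one card, odd frames to the other, while the real drivers additionally have to keep shared resources in sync between the cards, which is where the proprietary bridge and the scaling headaches come in.

```
// Toy alternate-frame-rendering (AFR) loop: frame i is "rendered" on
// device i % 2. Only a sketch of the idea the SLI/CF drivers implement
// behind Direct3D/OpenGL; real drivers also replicate shared resources
// between the cards and pipeline frames instead of synchronizing each one.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void render_frame(unsigned char* pixels, int n, int frame) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pixels[i] = (unsigned char)((i + frame) & 0xFF);  // fake shading
}

int main() {
    int device_count = 0;
    cudaGetDeviceCount(&device_count);
    if (device_count < 2) { printf("need two GPUs for this sketch\n"); return 0; }

    const int n = 1024 * 768;                     // fake framebuffer
    unsigned char* fb[2] = {nullptr, nullptr};
    for (int d = 0; d < 2; ++d) {
        cudaSetDevice(d);
        cudaMalloc(&fb[d], n);
    }

    for (int frame = 0; frame < 8; ++frame) {
        int d = frame % 2;                        // alternate cards per frame
        cudaSetDevice(d);
        render_frame<<<(n + 255) / 256, 256>>>(fb[d], n, frame);
        cudaDeviceSynchronize();
        printf("frame %d rendered on GPU %d\n", frame, d);
    }

    for (int d = 0; d < 2; ++d) { cudaSetDevice(d); cudaFree(fb[d]); }
    return 0;
}
```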
 

AzN

Banned
Nov 26, 2001
I think that as the fab process becomes the bottleneck, multi-GPU designs will take over in the long haul.
 

Wreckage

Banned
Jul 1, 2005
Multi-GPU has yet to make it past the enthusiast market, and even there it has only a small share. In fact, a single GPU is far more than most games need outside the niche market of people with 30-inch monitors. A single GPU will outperform any console, and that is the benchmark by which most people judge games.

I think focusing on a single powerful GPU is still the best plan for either company, while developing multi-GPU for the niche market and for proof-of-concept purposes.

CF/SLI using two cards is an even better idea than two GPUs on one card, since it at least gives a viable upgrade path.

GPUs are already essentially "multi-cored", so putting two GPUs on one chip is not really practical or even very useful.
 

BurnItDwn

Lifer
Oct 10, 1999
Currently a single GPU chip can have hundreds of processors/cores within it.
I don't know if the future means multiple chips will become the norm, but multiple cores have been the norm for a while now in the graphics world.