AMD, Intel, and nVidia in the next 10 years

WelshBloke

Lifer
Jan 12, 2005
33,327
11,477
136
Given that Pixar films still require 5 to 6 hours to render a single frame on large supercomputer clusters, the answer is no, graphics have not reached the point of diminishing returns yet.

Wow, Pixar films look good, but I'm shocked it takes that sort of computational power to make them.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Given that Pixar films still require 5 to 6 hours to render a single frame on large supercomputer clusters, the answer is no, graphics have not reached the point of diminishing returns yet.

I'm assuming this is a translated article, because that statement even contradicts itself.

We are well into diminishing returns: the article maps this out on several different fronts (budgetary constraints, computational power required, development time, asset development). We haven't come close to hitting a graphics peak yet, but we have been in diminishing-returns territory for visuals for many years now.
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
Very nice read, thanks. What will happen by 2020? A GPU on the CPU chip. Yes, a GPU and CPU on the same chip, Intel style, on a 12nm fabrication process. I've read about it and it's going to happen, but not anytime soon. So IMO nothing has changed over the last couple of years, and nothing will over the next few years ahead.


So when you run the 3DMark CPU test it won't be 1 fps like it is now, it will be 60 fps, since the CPU will have a GPU and can render in real time.

The new tech will be CPU and GPU on a single chip. I've read about Intel doing it, but this is way, way in the future.
Then bye-bye NVIDIA and ATI? Maybe not?
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
tweakboy said:
Very nice read, thanks. What will happen by 2020? A GPU on the CPU chip. Yes, a GPU and CPU on the same chip, Intel style, on a 12nm fabrication process. I've read about it and it's going to happen, but not anytime soon. So IMO nothing has changed over the last couple of years, and nothing will over the next few years ahead.


So when you run the 3DMark CPU test it won't be 1 fps like it is now, it will be 60 fps, since the CPU will have a GPU and can render in real time.

The new tech will be CPU and GPU on a single chip. I've read about Intel doing it, but this is way, way in the future.
Then bye-bye NVIDIA and ATI? Maybe not?

AMD is doing this next year with Llano. There is already a whole bunch of information about it out there for you to peruse. There are even multiple threads about it on these forums.
 

davidrees

Senior member
Mar 28, 2002
431
0
76
Given that Pixar films still require 5 to 6 hours to render a single frame on large supercomputer clusters, the answer is no, graphics have not reached the point of diminishing returns yet.

BenSkywalker said:
I'm assuming this is a translated article, because that statement even contradicts itself.

We are well into diminishing returns: the article maps this out on several different fronts (budgetary constraints, computational power required, development time, asset development). We haven't come close to hitting a graphics peak yet, but we have been in diminishing-returns territory for visuals for many years now.

That first statement is pretty ridiculous for several reasons.

First, the rendering of a movie is usually done with ray tracing, and second, "large supercomputer clusters" could be a couple dozen dual- or quad-core PCs running cluster software, so it's a completely worthless "fact".

It also ignores the fact that a CPU is not designed for efficient rendering; an array of proper GPUs is far more efficient.
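
As a side note on the "cluster software" point: offline rendering parallelizes almost perfectly because every frame (or tile) is independent, so a render farm is mostly a job queue handing frames to whatever machines are available. Here is a toy sketch of that idea, using Python's multiprocessing pool as a stand-in for a couple dozen networked PCs (render_frame and the frame count are made up for illustration):

Code:
# Toy sketch: frames are independent, so a "render farm" is mostly a job
# queue handing frames out to workers. A process pool stands in here for
# a couple dozen networked PCs running cluster software.
from multiprocessing import Pool

def render_frame(frame_number):
    # Placeholder for the hours of ray tracing a real frame would take;
    # here we just return the name of the finished frame.
    return f"frame_{frame_number:04d}.exr"

if __name__ == "__main__":
    frames = range(240)                    # ten seconds of film at 24 fps
    with Pool(processes=24) as farm:       # "a couple dozen" workers
        finished = farm.map(render_frame, frames)
    print(f"rendered {len(finished)} frames")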

All that said, let's remember that 10 years ago we were running Voodoo 3s and GeForce 2 cards.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Martimus said:
AMD is doing this next year with Llano. There is already a whole bunch of information about it out there for you to peruse. There are even multiple threads about it on these forums.

some people are just born to be ignored, eh?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I thought the comments about Eyefinity made a lot of sense.

Page 2 of the linked article said:
On the other extreme, the AMD Radeon HD 5970 is so fast that even with a 30" monitor and everything set to ultra-high quality, you're still seeing greater than 100 FPS in many games. Pause for a moment and think about what AMD’s Eyefinity technology signifies: AMD's performance is so high that it has to come up with creative (and almost frivolous) ways to utilize that power by rendering two or three times that number of pixels.
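
For a rough sense of the pixel counts involved (the panel resolutions below are assumptions: a 2560x1600 30" monitor versus Eyefinity groups of three 1920x1200 or three 2560x1600 panels, which were common setups at the time):

Code:
# Back-of-envelope pixel counts; panel resolutions assumed, not from the article.
single_30in = 2560 * 1600             # ~4.1 million pixels
eyefinity_3x_1920 = 3 * 1920 * 1200   # ~6.9 million pixels
eyefinity_3x_2560 = 3 * 2560 * 1600   # ~12.3 million pixels

for label, px in [("30in single", single_30in),
                  ("3 x 1920x1200", eyefinity_3x_1920),
                  ("3 x 2560x1600", eyefinity_3x_2560)]:
    print(f"{label:>14}: {px / 1e6:5.1f} MP ({px / single_30in:.1f}x the single panel)")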
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
DR here = "diminishing returns," which implies that further spending on computational hardware, algorithms, and artistic ability will not contribute much to making a real-time render any more realistic than what we can do now with current hardware and techniques.

It may be true, but personally I think it's bullshit as an argument against development. The diminishing-returns argument is older than HDR, older than Shader Model 3 (let alone 4, let alone 5), and older than GPGPU physics: all critical technologies that are necessary for meeting the modern standard of how we approximate reality, and all hilariously quaint and bygone compared to how it will be approximated 5 years from now. A big piece that's missing would be a hemispherical video device and the numerous corresponding technologies required to make that happen. Eyefinity and high-res panels do not even begin to step in the right direction here (a hemisphere would ideally have an inner surface of hundreds of megapixels if the resolution is to be convincing).

Ben's point is not that we have hit a wall, but that we are in the region where returns are indeed diminishing. For instance, you can run Crysis on a 7800 GTX or a 5870 and it will look the same; even though the render speed will vary by orders of magnitude, nothing architecturally prohibits the 7950 GT from rendering a similar or even the exact same scene as the 5870. You're still talking about 5 years of GPU evolution and all the financial transactions and engineering mojo required to make it happen, but they both can run Crysis. However, going from the SNES to the N64 (also a 5-year interval) is comparatively a much larger leap than going from DX9 hardware to DX11. There is still a return, but along similar paradigms you can see that it gradually diminishes. I view "graphics" development as a form of stepwise evolution, like punctuated equilibrium. Returns diminish over the life expectancy of a paradigm, then we move into new paradigm territory, and things change for a while until the next "step." There is small evolutionary behavior within each step, when returns diminish, and dramatic evolutionary behavior between the steps, when returns do not.

Diminishing returns can also be defined not by how large or small the return is, but by how costly it is compared to past returns of equal size. For instance, a 3-teraflop GPU is not 3 times faster than a 1-teraflop GPU when it comes to rendering complex scenes, and why is that? If there is no linear relationship between "cost" and "benefit," then you could argue that humankind has been in a state of DR ever since we learned how to bury and water seeds and harvest wheat. DR, therefore, is our only reason to work harder; it is not a reason to stop. Indeed, there IS much, much more out there for us to learn and do, and each step we take inevitably, repeatedly, and necessarily requires more people, more money, and more work than the last step. Better nut up.
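
To make that cost-versus-benefit point concrete with purely hypothetical numbers (neither GPU nor frame rate below refers to a real product):

Code:
# Hypothetical figures, purely to illustrate "diminishing returns":
# raw throughput scales 3x, but the delivered frame rate scales much less.
gpu_a = {"tflops": 1.0, "fps": 30.0}   # made-up baseline part
gpu_b = {"tflops": 3.0, "fps": 55.0}   # made-up faster part

raw_scaling = gpu_b["tflops"] / gpu_a["tflops"]   # 3.0x on paper
real_scaling = gpu_b["fps"] / gpu_a["fps"]        # ~1.8x on screen
efficiency = real_scaling / raw_scaling           # share of the paper gain realized

print(f"paper speedup: {raw_scaling:.1f}x, delivered: {real_scaling:.1f}x, "
      f"efficiency: {efficiency:.0%}")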
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
alyarb said:
A big piece that's missing would be a hemispherical video device and the numerous corresponding technologies required to make that happen. Eyefinity and high-res panels do not even begin to step in the right direction here (a hemisphere would ideally have an inner surface of hundreds of megapixels if the resolution is to be convincing).

I think everyone wants better display technology, but what you are talking about is probably very difficult to program for.

What is to prevent theft of the programmers' work?

As a consumer, I can't imagine these types of technologies coming together unless there are strong anti-piracy measures in place.

In order of importance:

1. Anti-piracy
2. Software (advanced graphics APIs, etc.)
3. Hardware (if the software is written the hardware builders will follow)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
alyarb said:
Ben's point is not that we have hit a wall, but that we are in the region where returns are indeed diminishing. For instance, you can run Crysis on a 7800 GTX or a 5870 and it will look the same; even though the render speed will vary by orders of magnitude, nothing architecturally prohibits the 7950 GT from rendering a similar or even the exact same scene as the 5870. You're still talking about 5 years of GPU evolution and all the financial transactions and engineering mojo required to make it happen, but they both can run Crysis.

I might even argue that less hardware is needed in certain scenarios.

At the same time, as LCD pixel size has decreased, the need for "edge anti-aliasing" has decreased.
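
A rough back-of-envelope for that claim, using assumed panel densities and a 60 cm viewing distance (the roughly 1 arcminute figure is the usual estimate of normal visual acuity):

Code:
import math

# Angular size of one pixel at a given pixel density and viewing distance.
# The eye resolves roughly 1 arcminute, so pixels near or below that limit
# make stair-stepped edges much harder to see, reducing the need for edge AA.
def pixel_arcminutes(ppi, viewing_distance_cm):
    pitch_cm = 2.54 / ppi                         # pixel pitch in cm
    radians = math.atan(pitch_cm / viewing_distance_cm)
    return math.degrees(radians) * 60             # convert to arcminutes

for ppi in (85, 100, 130):                        # assumed panel densities
    print(f"{ppi} ppi at 60 cm: {pixel_arcminutes(ppi, 60):.2f} arcmin per pixel")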
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
The dome display is inevitable. The milestones to overcome in building something like this are primarily hardware, not software. Worrying about who will steal whose programs is counterproductive, and it is not what is stopping us from producing such a display. Companies sue each other over IP every day, and this is no different.

It is the cost of the thin-film displays (like AMOLED) that such a technology would require, the need for a technique for segmenting the display output over numerous AMOLED partitions (which need not be continuous), a video device which can neatly (a relative term) output the video signal(s) to those partitions, and a video processor capable of rendering at near-gigapixel resolutions quickly. These are the main hardware limitations we are facing, and the limiting reactant is money. Getting software to run beautifully on such a display is a comparatively small accomplishment, but do not think that this is some "holy grail" technology that everyone is trying to steal because nobody has it. Companies will continue to protect their IP no matter how good or bad it is, and beyond these key points, development of a dome display is so far away that the particulars are not worth speculating about.
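
To put a rough number on "near-gigapixel" (the radius and pixel density below are assumptions for illustration, not anyone's spec):

Code:
import math

# Back-of-envelope pixel budget for a hemispherical display.
# Radius and pixel density are assumptions for illustration only.
radius_m = 1.0                          # ~1 m viewing dome
ppi = 300                               # print-like pixel density
pixels_per_m = ppi / 0.0254             # pixels per inch -> pixels per metre

inner_area_m2 = 2 * math.pi * radius_m ** 2        # hemisphere surface area
total_pixels = inner_area_m2 * pixels_per_m ** 2   # pixels over that surface

print(f"surface area: {inner_area_m2:.1f} m^2, "
      f"pixel count: {total_pixels / 1e6:.0f} megapixels")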

cbn said:
I might even argue that less hardware is needed in certain scenarios.

If you mean to compare an 8800 GTX to a Radeon 4850, you should realize the transistor densities are dramatically different, and though the 4850 contains MUCH more hardware, it is the slower card. It's interesting to note that the 8800 GTX was the first unified-shader architecture, and ever since then we have been getting less performance per shader. That is what a diminishing return is.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
alyarb said:
If you mean to compare an 8800 GTX to a Radeon 4850, you should realize the transistor densities are dramatically different, and though the 4850 contains MUCH more hardware, it is the slower card.

The 4850 contains fewer transistors than the 8800 GTX. Also, I'm not sure the 8800 GTX is faster, and even if it is, it was a massive chip (90nm production versus 55nm) and thus has a much bigger memory bus and more memory bandwidth.

Just checked, actually: the 4850 is faster than the 8800 GTX, and even the 8800 Ultra. The 8800 GTX does have significantly more memory bandwidth, though.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
RV770 has over 250 million *more* transistors than G80, and I'm sorry, but with high resolution and filters, the 4850 is slower than the GTX (except in Doom 3). The differing manufacturing processes are exactly the point I'm driving at. Smaller transistors mean you can fit more of them in a given area, and any way you cut it, RV770 has more hardware than G80.

Even the 2900 XT had more transistors than G80, and more bandwidth, and it was a slower card.
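
For reference, the commonly cited transistor counts line up with this (roughly 681 million for G80, 956 million for RV770, and about 700 million for R600):

Code:
# Commonly cited transistor counts, in millions, for these GPUs.
transistors = {"G80 (8800 GTX)": 681, "R600 (2900 XT)": 700, "RV770 (4850/4870)": 956}

g80 = transistors["G80 (8800 GTX)"]
for chip, count in transistors.items():
    print(f"{chip:>18}: {count} M ({count - g80:+d} M vs. G80)")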
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
alyarb said:
The dome display is inevitable. The milestones to overcome in building something like this are primarily hardware, not software. Worrying about who will steal whose programs is counterproductive, and it is not what is stopping us from producing such a display. Companies sue each other over IP every day, and this is no different.

Does this display have uses besides playing games? What other types of software could be used with it?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
alyarb said:
It is the cost of the thin-film displays (like AMOLED) that such a technology would require, the need for a technique for segmenting the display output over numerous AMOLED partitions (which need not be continuous), a video device which can neatly (a relative term) output the video signal(s) to those partitions, and a video processor capable of rendering at near-gigapixel resolutions quickly. These are the main hardware limitations we are facing, and the limiting reactant is money. Getting software to run beautifully on such a display is a comparatively small accomplishment, but do not think that this is some "holy grail" technology that everyone is trying to steal because nobody has it. Companies will continue to protect their IP no matter how good or bad it is, and beyond these key points, development of a dome display is so far away that the particulars are not worth speculating about.

I'm not talking about hardware IP.

I am talking about a lack of game development (for this dome display) because of piracy. If game development is discouraged, then such a display needs to have other uses to warrant its existence. Like you said, the limiting reactant is money. The more uses such a device has, the less of an issue money would be.

I am not in the IT industry. However, I can understand that a certain amount of profit needs to exist in order for game developers to be interested. Console ports, on the other hand, I am guessing are a much easier job (so even if piracy exists, a profit can still be made). Ultimately the bottom line is probably effort vs. risk.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Such a display would be ubiquitous (think of the "feelies" from Brave New World). You just aren't imagining the interfaces of decades from now. Something like this specifically for gaming, even if purposed as a "console" display, is absurd and would never happen. But if you go arbitrarily far into the future, you should be able to imagine most surfaces possessing the ability to display at least a projection of any image of arbitrary fidelity that you want. A dome is kind of just an archetype, but the "surround display" capability is eventually trivial.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
alyarb said:
Such a display would be ubiquitous (think of the "feelies" from Brave New World). You just aren't imagining the interfaces of decades from now. Something like this specifically for gaming, even if purposed as a "console" display, is absurd and would never happen. But if you go arbitrarily far into the future, you should be able to imagine most surfaces possessing the ability to display at least a projection of any image of arbitrary fidelity that you want. A dome is kind of just an archetype, but the "surround display" capability is eventually trivial.

So we are talking about a form of display where pre-recorded, real-life images can be displayed?

Originally I was thinking of something like a holographic projector where a GPU would render programmed images in 3D.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
And I'm talking about a display with an input that can show either, depending on the kind of machine it's getting a signal from.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
alyarb said:
RV770 has over 250 million *more* transistors than G80, and I'm sorry, but with high resolution and filters, the 4850 is slower than the GTX (except in Doom 3). The differing manufacturing processes are exactly the point I'm driving at. Smaller transistors mean you can fit more of them in a given area, and any way you cut it, RV770 has more hardware than G80.

Even the 2900 XT had more transistors than G80, and more bandwidth, and it was a slower card.

You're right that RV770 had more transistors (I found an article earlier that put RV770's count far too low), but the 9800 GTX was generally faster than G80, the 9800 GTX+ is even faster, and yet the 4850 is faster still. I guess you could make the 4850 bandwidth-limited enough that G80 surpasses it, but frame rates will dive at those settings anyway, and the 4870 is the same chip with way more bandwidth.

R600 had 700 million transistors, so it just barely had more transistors than G80, but performed much worse.

Well, I don't know if this demonstrates diminishing returns, since R600 and G80 have almost equal transistor counts (and equal bandwidth at the high end), yet G80 performs much better. Still, I can't find any evidence of a card that performs linearly better with transistor count over even the lower-end G80 variants with crippled memory buses, let alone the fastest ones. The GTX 285 will about double the performance of the 8800 GT at roughly double the transistor count, but that's the highest-end SKU versus a low-end, cut-down SKU of G80.
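
Taking that last comparison at face value (the transistor counts below are the commonly cited figures for G92 and GT200b; the 2x performance number is the estimate from the paragraph above, not a benchmark):

Code:
# Commonly cited transistor counts, with the ~2x performance figure taken
# from the post above rather than from any benchmark run here.
transistors_8800gt = 754_000_000      # G92-based 8800 GT
transistors_gtx285 = 1_400_000_000    # GT200b-based GTX 285
relative_performance = 2.0            # "about double", per the post

transistor_ratio = transistors_gtx285 / transistors_8800gt   # ~1.86x
perf_per_transistor = relative_performance / transistor_ratio

print(f"transistor ratio: {transistor_ratio:.2f}x, "
      f"performance per transistor: {perf_per_transistor:.2f}x the 8800 GT's")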