[chiphell] rumor control: hd 8900 & 8800 series spec's leak, the can of whooptitan@$$

My point was less about power, and more about something completely new.

When we first started seeing DX9 games, things were very impressive. Realistic-looking water, bump mapping and all that stuff was genuinely impressive. Now we are getting really close to realism, to the point where it would take a lot to impress us like that again.

The Oculus Rift is something radically different, though it may still have similar issues to 3D, in that a lot of people can't handle it. I suppose a staggering amount of power might push realism high enough to impress us, but I doubt it would happen fast enough for us to be in awe when it does. We'll just see small improvements, which won't wow us like in the past, when we saw major advancements in visual IQ.

I'm guessing the next phase is going to be higher and higher resolutions, which means we won't see much visual improvement, other than sharper images, for a while, because we'll be trying to catch up to the additional power requirements of higher resolutions.
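To put rough numbers on that, here's a quick back-of-envelope sketch of how the pixel load grows with resolution; the resolutions and the 60 fps target are just illustrative picks, not anything from the post:

```python
# Rough pixel-throughput scaling with resolution (illustrative numbers only).
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

target_fps = 60                 # assumed frame-rate target, purely for comparison
base_pixels = 1920 * 1080       # 1080p as the baseline

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.1f} MP, "
          f"{pixels * target_fps / 1e6:.0f} Mpixels/s at {target_fps} fps, "
          f"{pixels / base_pixels:.2f}x the 1080p pixel load")
```

4K alone is 4x the pixels of 1080p before any per-pixel image-quality improvements, which is roughly the point being made above.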

I think what most people were disappointed about with the Xbox 360 and PS3 "next gen" was that those two consoles single-handedly stifled significant increases in real geometry counts and real texture quality/resolution.

The reason we're wasting huge amounts of computing power right now on what doesn't look revolutionarily better is the exponentially increasing expense, in real dollar terms, of hiring artists to fully exploit higher-resolution textures and higher polygon counts.

Even something like Crysis 3 doesn't look revolutionarily better when you turn off the blur shaders and look at what's actually rendered before the Photoshop-style filters.

A lot of this has to do with the fact that increasing things in the manner I describe requires exponentially more memory bandwidth, as well as exponentially more pixel fill rate, texture fill rate, and geometry throughput.

The entire reason for deferred rendering is to abstract the simulation one level higher, so as to slow down the exponential growth in the hardware required to actually render those higher-quality things.

If you watched the videos the developers made about Battlefield 3's deferred renderer, you'll have seen that the optimization, and the abstraction layer, that make it run on today's hardware amount to applying many effects at lower than screen resolution and then using a normal map or some other trick to make the result look less bad.
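Purely to illustrate the pattern being described (this is not BF3's actual renderer, and every number and array here is made up), a toy sketch of deferred shading: rasterise scene attributes into a G-buffer once, do the lighting in screen space at a lower resolution, then upsample to the final frame:

```python
import numpy as np

# Toy deferred-shading sketch: write per-pixel attributes into a G-buffer once,
# then compute lighting in screen space at half resolution and upsample.
W, H = 8, 8                                  # tiny "screen" so the arrays stay small
LIGHT_DIR = np.array([0.0, 0.0, 1.0])        # single directional light (made up)

# --- Geometry pass: rasterise the scene once, storing per-pixel attributes ---
gbuffer = {
    "albedo": np.random.rand(H, W, 3),       # stand-in surface colours
    "normal": np.tile(LIGHT_DIR, (H, W, 1)), # stand-in surface normals
}

# --- Lighting pass: shade from the G-buffer, here at half resolution ---
def shade(albedo, normal):
    ndotl = np.clip((normal * LIGHT_DIR).sum(axis=-1, keepdims=True), 0.0, 1.0)
    return albedo * ndotl

half_res = shade(gbuffer["albedo"][::2, ::2], gbuffer["normal"][::2, ::2])

# --- Upsample the half-resolution lighting back to full screen resolution ---
frame = half_res.repeat(2, axis=0).repeat(2, axis=1)
print(frame.shape)                           # (8, 8, 3): full-res frame from half-res lighting
```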
 
I don't think any of that would impress us today the way the advancements in DX9 and before did. It just didn't take nearly as much to impress us back then, since we went from sprites to realistic-looking water in 3D.

The more we have, the harder it is to impress.
 

😛 Maybe I'm just different then.

If the average user understood what was going on and had more acute eyesight then it would be more obvious.

BTW, cool WebGL thingy Imouto found.

http://www.acko.net/files/fullfrontal/fullfrontal/webglmath/online.html

Early 3D efforts really didn't impress me; Age of Empires II was the best-looking game, IMO, until at least 2007.
 
@F2F 🙂

That photo of the J.S. a while back was great/funny! I even used it myself in another thread regarding a similar question.
 
If a breakthrough in fabbing technology allowed a new node to have massively fewer defects per wafer, we might see Nvidia cranking out a 750mm^2 chip with AMD doing a 450mm^2 chip.

I highly doubt something like this would happen though.
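For a feel of why defect density is the whole story at those die sizes, here's a rough sketch using a simple Poisson yield model; the defect densities are made-up illustrative values, not anything TSMC or anyone else has published:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dies with zero defects under a simple Poisson yield model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

for d0 in (0.5, 0.25, 0.1):          # assumed defects per cm^2, purely illustrative
    for area_mm2 in (450, 750):      # the die sizes from the post
        print(f"D0 = {d0:.2f}/cm^2, {area_mm2} mm^2 die: "
              f"~{poisson_yield(area_mm2, d0) * 100:.0f}% yield")
```

Under these assumed numbers, a 750 mm^2 die only starts to look sane if defect density drops by several times, which is exactly the "breakthrough" scenario above.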

I think a 750 mm^2 chip would be beyond the reticle limit of current technology. Unless the optics, masks and/or steppers/scanners have changed significantly, we are limited to sub-30 x 30 mm chips, i.e. under roughly 900 mm^2. The problem is that yields start falling off rapidly as optical aberrations begin to distort the image, IIRC.
 

Looks like we will either see 300-350W+ TDP chips become the norm or be stuck with middling performance increases for the foreseeable future....
 
@Raghu

Not sure if you've seen this; the numbers below, FWIW.

At such a pace, TSMC should start 20nm system-on-chip production by the end of Q2 2013 at a rate of 5,000 12-inch wafers per month. TSMC will ramp production to over 10,000 wafers in the third quarter. The internal goal for 20nm chips is 30,000 to 40,000 wafer starts per month by the end of Q1 2014. Around the same time last year, TSMC announced it will start mass producing 20nm chips at Phase 6 of the Fab 12 facility in Hsinchu this year, and add another fabrication facility in early 2014.

http://www.phonearena.com/news/TSMC-20nm-chip-manufacturing-goes-ahead-of-schedule_id41451
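For scale, a rough sketch of what wafer starts like that could mean in finished chips; the die size, the edge-loss factor and the yield are all assumptions for illustration, not figures from the article:

```python
import math

wafer_diameter_mm = 300             # 12-inch wafers, per the article above
wafer_starts_per_month = 30_000     # low end of the quoted Q1 2014 goal

die_area_mm2 = 100                  # assumed small SoC-class die, for illustration
assumed_yield = 0.7                 # assumed yield, purely for illustration

wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
gross_dies = int(wafer_area_mm2 / die_area_mm2 * 0.9)   # ~10% lost to edge/scribe (rough)
good_dies = int(gross_dies * assumed_yield)

print(f"~{gross_dies} gross dies/wafer, ~{good_dies} good dies/wafer")
print(f"~{good_dies * wafer_starts_per_month / 1e6:.1f}M good dies/month "
      f"at {wafer_starts_per_month:,} wafer starts")
```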
 
Qualcomm doesn't expect its small 20nm SoCs before Q2 2014 - that tells you something about the state of TSMC's 20nm. GPUs are much larger than these SoCs.
 

Yep, not to mention Qualcomm is willing to pay much more than AMD and nVidia.

20nm GPUs look like end of 2014 or sometime in 2015.
 
I think we just might (IMO) see an AMD 20nm part by Q4. I guess I'll have to wait and see for myself. 🙂


FWIW:
Last year, a report revealed that AMD’s Volcanic Islands series would be the first to make use of a 20nm process and would arrive sometime in 2014. However, in a recent interview with AMD’s Jim Keller and Chekib Akrout, Rage3d has revealed that the Volcanic Islands GPUs would launch in 2013.

Just think of the GTX 7x series talk in 2012: rumor, not going to happen, Maxwell is the next GTX, etc. Today we have the GTX 7x series, and yes, I know it isn't a new 20nm chip.
 
If they want to get something out on 20nm this year, I would think low-end/high volume parts would be more likely than a flagship GPU.
 
OK, this is getting too confusing, so this will be my last post in this thread and on this subject. Thank god, right? 🙂

So, researching some more, I came across the info below (not confirmed yet by AMD), which (if correct) now discourages my thinking on a Volcanic Islands 20nm Q4 launch to a degree.

Volcanic Islands could be coming in the last quarter and feature the Hawaii core that replaces Tahiti. The Volcanic Islands architecture, although not confirmed, could stick to 28nm. Two additional possible codenames have been hinted at: Reychavik and Honolulu.
 
If they want to get something out on 20nm this year, I would think low-end/high volume parts would be more likely than a flagship GPU.

If AMD is introducing a new GPU this year, it will not be on 20nm. In that case, it will be a redesigned part based on 28nm to better compete with NV.

Apple has supposedly (and likely) bought out the first three months of wafer starts at 20nm, since Apple was willing to pay more per wafer than QCOMM. Then it will be QCOMM's turn (since both are able to pay more for wafers than AMD/NV). Last I read, it's still looking like GPUs will go to 20nm sometime in 2H 2014, unless there are problems bringing up 20nm on larger dice, in which case it could take a bit longer.
 
Shader scaling drops off pretty hard when going from Pitcairn to Tahiti. While it might be an inherent issue with GCN, it's much more likely due to architectural tweaks and a ROP bottleneck. Can't really gauge the GCN 1.1 improvements from the 7790, because it seems pretty memory- and ROP-bound, but it is quite a bit more efficient than the 7770 despite having memory clocked 50% higher. 12.5% more shaders and 50% more ROPs, with the architectural improvements from Pitcairn and GCN 1.1 on a mature node and an aggressive turbo, and you might have something rivaling Titan. Although Nvidia could launch a fully enabled GK110 gaming SKU, so we might have fun times in GPU land ahead of us even before 20nm.
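Just to put numbers on those percentages, assuming they're meant relative to Tahiti's 2048 shaders and 32 ROPs (my reading, not something stated above):

```python
# Back-of-envelope: apply the post's "+12.5% shaders, +50% ROPs" to Tahiti (HD 7970).
tahiti_shaders, tahiti_rops = 2048, 32

hypothetical_shaders = int(tahiti_shaders * 1.125)   # -> 2304
hypothetical_rops = int(tahiti_rops * 1.5)           # -> 48

print(f"Hypothetical part: {hypothetical_shaders} shaders, {hypothetical_rops} ROPs")
```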
 
Qualcomm doesn't expect its small 20nm SoCs before Q2 2014 - that tells you something about the state of TSMC's 20nm. GPUs are much larger than these SoCs.

Yep, not to mention Qualcomm is willing to pay much more than AMD and nVidia.

20nm GPUs look like end of 2014 or sometime in 2015.

That's a stupid argument. You could have said the exact same thing about 28nm, but ATI/AMD produced a 28nm product long before Qualcomm did, and that's a monolithic GPU vs. a tiny SoC. ATI has consistently led the pack in bringing out the first volume shipments from TSMC on each node, and absent an argument for why this should stop being the case, the smart money is on ATI leading on this node too.

Is it possible that 20nm will slip into 2H 2014? Sure it is, but certainly not by your reasoning.
 