I don't really understand how it works, so I did the following calculation on the last 3 node jumps: I took the largest single-die card, divided its process node (in nm) by the node of the next shrunk big card, squared that ratio, and multiplied by the transistor count. Transistor counts are in millions. I thought it was quite obvious: instead of looking at die size, look at transistor count. Guess which one affects the performance of a GPU? Or do we not care about performance anymore?
card       nm   transistors   next       nm   (old/new)²   expected   actual
8800       90   681           280        65   1.917        1305       1400
285        55   1400          480        40   1.891        2647       3000
580        40   3000          Titan      28   2.041        6123       7080
Titan X    28   8000          Titan XP   16   3.063        24500      12000
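To make the calculation explicit, here it is as a short Python sketch (my own naming; transistor counts in millions, taken from the Wikipedia list linked below):

```python
# Expected transistor count after a shrink: scale the old count by the
# square of the linear process-node ratio. Counts are in millions.
def expected_transistors(old_nm, new_nm, old_count):
    return old_count * (old_nm / new_nm) ** 2

# (card, old nm, transistors, successor, new nm, successor's actual count)
jumps = [
    ("8800",    90, 681,  "280",      65, 1400),
    ("285",     55, 1400, "480",      40, 3000),
    ("580",     40, 3000, "Titan",    28, 7080),
    ("Titan X", 28, 8000, "Titan XP", 16, 12000),
]

for card, old_nm, count, nxt, new_nm, actual in jumps:
    exp = expected_transistors(old_nm, new_nm, count)
    print(f"{card} -> {nxt}: expected {exp:.0f}M, actual {actual}M")
```

Running this reproduces the "expected" column above (up to rounding).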
Info is from here: https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units
EDIT: taking the 65nm 280 instead of the 55nm 285 makes the 480 look a bit worse, but not by much; the expected number is a bit higher, at about 3700. I added the 8800 just because I also checked it.
EDIT: well, when editing, the above looks like a table, but the formatting is ruining it for me :\
EDIT2: tried to improve it
However, I'm not sure I did it correctly, so please correct me if I'm wrong. If you want to compare it in other ways: the 260/285 -> 480 node shrink more than doubled the transistor count, and 580 -> Big Kepler more than doubled it again, but the shrink from Titan X to Titan XP yielded only 50% more transistors. So you're right that the number of transistors has increased, which is not very surprising given that it happens every generation. However, it actually increased less than in previous generations, so it does not refute my point at all. Note that the 580 -> Titan transition is roughly in line with the 285 -> 580 transition.
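A quick sketch of that generational comparison, using the same transistor counts (in millions) as the table above:

```python
# Transistor growth ratio between each card and its die-shrunk successor.
pairs = [
    ("285 -> 480",          1400, 3000),
    ("580 -> Titan",        3000, 7080),
    ("Titan X -> Titan XP", 8000, 12000),
]
for name, old, new in pairs:
    print(f"{name}: {new / old:.2f}x growth")
```

The first two jumps come out above 2x, while the last is only 1.5x, which is the "only 50% more transistors" point made above.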
Transistor count has always increased, see above; that argument is a straw man. Your statement about die sizes is also not quite true: obviously die size is very important, otherwise we wouldn't have had 500-600 mm² high-end cards generation after generation. But as 96Firebird stated, transistor count has obviously increased. What this tells me is that die size isn't as important for determining performance as you think. Performance per watt has increased TREMENDOUSLY since Fermi, while die size obviously hasn't.
This goes to show where Nvidia is focusing the brunt of their R&D effort: increasing performance per watt and performance per mm². Compared to AMD, they have a massive lead in those two areas.
Otherwise, everything you've said is true. Nvidia's engineering has been exceptional, and they're essentially wiping the floor with AMD. (What does this say about AMD? That they're essentially only providing low-end cards, IMO. However, since we've been stuck at 1080p for quite some time, it doesn't really show.) It's not relevant to the discussion, however.
The only reason anyone considers the 1080 and Titan XP high-end and ultra-high-end, rather than mid-range and high-end, is mainly the lack of competition from AMD, combined with Nvidia using its great marketing machine, brand recognition, and loyal fanboys to jump on the opportunity and push its lower-end cards up the stack. (Let's face it, people would react very negatively to a $1000 x80; it's much easier to introduce a "new" lucrative super-high-end "Titan" lineup, which is actually just the old x80 in rebranded form.)
Anyway, I think this is enough... I've already made my point clear several times, including posting actual hard facts.