The one thing I've learned from watching ATI/Nvidia over the years is that when either company has a genuinely great product in development that is running along smoothly they take every opportunity to show it off. They don't keep it hidden from the press.
The fact that Nvidia has been so hush-hush about this product likely means they are having major issues.
The same thing happened with the R600 and the NV30.
That's the problem: neither Nvidia nor TSMC was ready to pump out such a huge chip on 40nm... they should have launched at 55nm and then moved to 40nm later.
I agree with you. I think that even if the die reached 800mm2 in size, it would be more feasible to make, cheaper, and would have better yields than it currently gets on the buggy 40nm process. They didn't do their homework to counteract the problems that TSMC's 40nm brought, like the transistor variability.
Fermi may or may not be revolutionary in architecture. The main thing they've done to distinguish themselves on DX11 is to push tessellation, and the way they've implemented it is to give each shader block its own tessellation module. This is a fine way to boost performance, but not if it sacrifices shader compute power that would normally go to something else. If it only impacts blocks that wouldn't be doing anything else at the time, then yes, it's a nice boost. If it occupies blocks that would be doing postproc/AA/AF/displacement, then it's parasitic.
We won't know until a true DX11-only game comes out, and that probably won't be for years.
I agree with you. I heard that shader power would be sacrificed when tessellation was used, since the shader block processing it wouldn't be able to do anything else during the operation. I think they did it to save die space.
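The "parasitic vs. free boost" argument above can be sketched as a toy occupancy model. All the block counts here are made-up numbers purely for illustration, not real Fermi figures:

```python
# Toy model: effective shader throughput when tessellation shares
# shader blocks with other work. All numbers are hypothetical.

def effective_throughput(total_blocks, tess_blocks, busy_blocks):
    """Shading work that still gets done when tess_blocks are claimed
    by tessellation while busy_blocks would otherwise be doing
    postproc/AA/AF. Tessellation only becomes 'parasitic' once it
    has to steal from blocks that had useful work."""
    idle_blocks = total_blocks - busy_blocks
    stolen = max(0, tess_blocks - idle_blocks)  # taken from useful work
    return busy_blocks - stolen

# Scene with plenty of idle blocks: tessellation is a free boost.
print(effective_throughput(16, 4, 8))   # 8 -> no shading work lost

# Fully loaded scene: tessellation steals 4 blocks' worth of work.
print(effective_throughput(16, 4, 16))  # 12 -> parasitic
```

The point is just that the cost of per-block tessellation depends entirely on whether those blocks had anything else to do, which is why it's hard to judge before real DX11 workloads exist.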
If AMD's earlier GPUs had never had a tessellator, only supported DX10, used DDR3, and AMD had never built a 40nm part before, then the 58xx parts might seem more revolutionary as well. But AMD did its legwork with earlier designs, so the 58xx parts seem evolutionary relative to their predecessors. That's just because AMD took those steps as it went along; Nvidia has to take a lot of these steps at once.
Like the tessellator, DX10.1, the 40nm experiment (aka the HD 4770), GDDR5: everything in steps.
They make a good profit from the HPC market (the margins are MUCH better), but overall it's still a niche; I think I've read it's something like 2-4% of sales.
It might be a measly 4%, but every card sells for three times its manufacturing cost or more.
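To put rough numbers on that: only the ~4% unit share and the ~3x markup come from the posts above; the consumer markup is a figure I've assumed just to make the comparison concrete:

```python
# Back-of-the-envelope: HPC's contribution to gross profit.
# Unit share (~4%) and HPC markup (~3x cost) are from the thread;
# the consumer markup is an assumed illustrative figure.

consumer_units = 0.96      # ~96% of cards sold
hpc_units = 0.04           # ~4% niche
consumer_markup = 1.3      # assumed modest consumer margin
hpc_markup = 3.0           # "three times or higher" manufacturing cost

# Gross profit per dollar of manufacturing cost, weighted by unit share
consumer_profit = consumer_units * (consumer_markup - 1)
hpc_profit = hpc_units * (hpc_markup - 1)

hpc_share_of_profit = hpc_profit / (consumer_profit + hpc_profit)
print(round(hpc_share_of_profit, 3))  # ~0.217
```

Under these assumptions the niche ends up contributing roughly a fifth of gross profit on 4% of units, which is the sense in which the margins matter more than the volume.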
DX11, 40nm, etc. were the obvious next steps. I'm actually referring more to the GPGPU part of it. NV's push to make the GPU usable for more than video, and its work with developers to use it that way, is a break from the past (although it should have happened a long time ago). Of course, given that AMD/Intel are going for integrated CPU/GPU, I guess they had no choice.
Still, my point is that even if Fermi is a dud, NV will come back and they will come back strong. NV and ATI both have smart, dedicated people.
ATi would probably have done better at GPGPU than it has if it hadn't concentrated on the small-die strategy. But since AMD has a CPU division, its concept is the GPU working together with the CPU, because some tasks are better done on the CPU and others on the GPU. Nvidia lacks a CPU division, so it will try harder to diminish the importance of the CPU and work harder to boost GPGPU performance, especially given its rivalry with Intel.