AMD is adopting a smarter compute approach. Graphics Core Next is a true MIMD (Multiple Instruction, Multiple Data) architecture. With the new design, the company opted for "fat and rich" processing cores that occupy more die space but can handle more data. So comparing the 2048 SPs of the GCN architecture to VLIW-4 is probably meaningless.
In general, comparing cards on paper can work if you are comparing within the same architecture/generation, but it's almost always meaningless across two entirely different architectures imo. What if GCN is 20-30% more efficient than VLIW-4? What about vastly improved geometry/tessellation performance? NV cleans up in Lost Planet 2, HAWX 2 and Crysis 2, partly due to superior tessellation performance. Also, what if AMD introduces a multi-threaded DX11 driver for the HD 7900 series, allowing it to surpass NV in games like Civ 5?
All I am saying is that these specs appear to be "only" about 50% faster than the HD 6970 (outside of the ROPs), but the performance improvement might be far greater, since the specs don't tell us anything about how much better (or worse) GCN is vs. VLIW-4.
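For what it's worth, the "about 50%" paper figure is easy to sanity-check. A rough sketch in Python: the HD 6970 numbers are the shipping card's real specs, while the 7970 row takes only the 2048 SPs and 384-bit bus from the rumors in this thread; its core clock and memory speed are pure assumptions on my part.

```python
# Back-of-envelope "paper spec" comparison. HD 6970 values are the
# shipping card; the 7970 row uses the rumored 2048 SPs / 384-bit bus,
# with clock and memory speed ASSUMED (nothing official yet).
hd6970 = {"sps": 1536, "clock_mhz": 880, "bus_bits": 256, "mem_gbps": 5.5}
hd7970 = {"sps": 2048, "clock_mhz": 925, "bus_bits": 384, "mem_gbps": 5.5}

def shader_tflops(card):
    # 2 FLOPs per SP per clock (one fused multiply-add)
    return 2 * card["sps"] * card["clock_mhz"] * 1e6 / 1e12

def bandwidth_gbs(card):
    # bus width in bytes times effective memory data rate
    return card["bus_bits"] / 8 * card["mem_gbps"]

ratio_shader = shader_tflops(hd7970) / shader_tflops(hd6970)
ratio_bw = bandwidth_gbs(hd7970) / bandwidth_gbs(hd6970)
print(f"shader throughput: {ratio_shader:.0%} of HD 6970")  # ~140%
print(f"memory bandwidth:  {ratio_bw:.0%} of HD 6970")      # 150%
```

So on paper it works out to roughly +40% shader throughput and +50% bandwidth, which is exactly why the real question is how much GCN gains (or loses) per unit versus VLIW-4.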
How do you get 64 and 60 ROPs with a 384-bit bus?
Yes, we all know exactly how much better it is, but we decided not to let you know... Ugh... Anyone know how much better the 7950 is over the 6950, how much more it'll cost, or when it'll release? I have a month to return this 6950 2GB if necessary.
I think we have to wait and see. I personally don't expect too much from GCN because it's a new architecture; it could be good or bad, and it really is a mystery until it gets reviewed. Frankly, I am more concerned about it underperforming, as GCN is AMD's answer to Fermi on the compute side. For pure graphics, it can be argued that VLIW was the right way. And when you build something that has to be good at two things, it typically ends up less good at either than a chip designed for just one kind of operation. I really have a feeling the new GCN chips are going to lose the kind of power-efficiency edge the older VLIW chips had over Nvidia. It's a lot more hardware to do basically the same job.
Looks enticing. I wouldn't mind AMD shipping me a 7950. Wonder what the 7800's will be like.
But can it run Crysis?
2x 6970 runs BF3 at 1080p across 3 monitors fine if you turn off MSAA, fine as in ~50 fps.
If a single 7970 is ~180% of a 6970, a single 7970 is already overkill for pretty much everything unless you game on 3 monitors; then you'd want CF 7970 3GB and could enjoy games at over 100 fps on 3 monitors. That's some serious power.
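Taken at face value, that triple-monitor claim is just linear scaling from the CF 6970 number quoted above. A quick sketch, assuming (and it is only an assumption) that the rumored ~180% per-card figure carries straight over to CrossFire:

```python
# Both input numbers come from the posts above; the scaling factor is
# the rumored ~180% per card, assumed to apply unchanged in CrossFire.
cf_6970_fps = 50         # CF 6970, 3x1080p BF3, MSAA off
per_card_scaling = 1.8   # 7970 = ~180% of a 6970 (rumor)

cf_7970_fps = cf_6970_fps * per_card_scaling
print(f"projected CF 7970: ~{cf_7970_fps:.0f} fps")
```

Purely linear scaling lands around 90 fps; "over 100" would need the cards to scale better than linearly, or an overclock.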
I just hope they don't make them longer than 10.5 inches, cos it ain't going to fit my case. lol
Depends what you mean by "run."
Minimum fps in Crysis is heavily CPU dependent, whereas average fps is GPU dependent. Even a GTX 580 can hit over 35 fps average now.
Minimum fps is pretty much linear with clock speed, and the game only effectively uses 2 cores, so until Intel comes out with a CPU that's clock-for-clock 4 times faster than an i7-9xx, then no, nothing will ever "run" Crysis if you want a minimum fps above 30.
Not sure how you arrived at this conclusion. When I swapped out my GTX 470 @ 760 MHz for an HD 6950 2GB @ 6970 speeds, my min fps in Crysis improved tremendously. The game is almost entirely GPU limited on a modern i3/i5/i7 CPU. Lowering my Core i7 860 from 3.9 GHz to 2.8 GHz had no effect on performance.
You can see below that Crysis responds extremely well to a faster GPU in regard to min frames:

[min-fps benchmark chart, image no longer available]
FWIW, that's at Gamer quality; not that Ultra requires more CPU, though.
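One way to frame the CPU-vs-GPU argument in the last few posts: each frame takes roughly as long as whichever side is slower, so which upgrade lifts the minimum depends entirely on which side is the bottleneck in the worst-case scenes. A toy sketch (all the millisecond figures here are illustrative, not measurements):

```python
# Toy bottleneck model: per-frame time is whichever of the CPU or GPU
# is slower, so fps = 1000 ms / max(cpu_ms, gpu_ms).
def fps(cpu_ms, gpu_ms):
    return 1000 / max(cpu_ms, gpu_ms)

# Heavy scene where the GPU needs 40 ms and the CPU 25 ms: a 30%
# faster GPU lifts the minimum, while a 30% faster CPU changes nothing.
print(fps(25, 40))        # GPU-bound baseline: 25 fps
print(fps(25, 40 / 1.3))  # faster GPU: ~32.5 fps
print(fps(25 / 1.3, 40))  # faster CPU: still 25 fps
```

It cuts both ways, of course: in a scene where the CPU side is the slower of the two, only the CPU upgrade would move the minimum, which is why both camps above can have benchmarks that "prove" their point.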
I just hope they don't make them longer than 10.5 inches, cos it ain't going to fit my case. lol
The chip might just be much larger, as in 430-480 mm^2. Through architectural improvements to the ROPs, TMUs and SPs, efficiency might be just as good as VLIW-4. But I fully expect the 7970 to be a 190-200W TDP card. That's fine with most of us, since we are talking about a high-end card. Honestly, if they need to go to a 250W TDP to deliver 80% more performance than a 6970, that's cool too; high-end cards should prioritize performance over efficiency. A 28nm shrink of the 6970 in the form of a 7870 should address efficiency concerns for those who want a 120W or so card. Also, Fermi didn't have a lot of problems providing excellent compute and graphics performance, so I think AMD's cards will do just fine in this regard. I just hope the performance is at least 30% faster than a GTX 580. If Kepler is delayed until late 2012, AMD can release a refresh of the HD 7900 series and add another 15% performance to combat Kepler.
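The TDP trade-off above is easy to put in numbers. A quick performance-per-watt sketch: the HD 6970's ~250 W maximum board power is the shipping spec, while both 7970 rows are the speculative scenarios from the post (50% faster at 200 W, or 80% faster at 250 W):

```python
# Relative performance-per-watt, with the HD 6970 normalized to 1.0
# performance at its ~250 W maximum board power. The 7970 scenarios
# are speculation from the post above, not known specs.
def perf_per_watt(rel_perf, tdp_w, base_perf=1.0, base_tdp=250):
    return (rel_perf / tdp_w) / (base_perf / base_tdp)

print(perf_per_watt(1.5, 200))  # ~50% faster at 200 W -> ~1.88x perf/W
print(perf_per_watt(1.8, 250))  # ~80% faster at 250 W -> 1.8x perf/W
```

Interestingly, under these assumptions the two scenarios end up with similar efficiency gains, so the real choice is just how much absolute performance AMD wants to chase at the top.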
This. When I was watching a video review of my case before I bought it, they gave an example of how much space was needed for the AMD cards. I was shocked; the reviewer had to take out a fan so the card would fit. I hadn't seen one in person, so I was a little surprised. And I know a lot of cases don't have as much room as mine, but then again I suppose many enthusiast cases do have that room.
I think we should all wait and see.
I am of the personal, unsubstantiated opinion that the GCN chips are going to be similar to whatever Nvidia puts out this gen, and it's likely that Nvidia is keeping pace with AMD on the new 28 nm chips.
I say this because of the rumored Apple switch back to Nvidia. If Nvidia had slower chips, or if they were having issues with delivery of their 28 nm parts, I doubt one of the most discerning computer makers would pick them over AMD.
There might be something to the CPU improving minimums, per [H]'s CPU scaling comparison in tri-SLI.
My motherboard is finally arriving tomorrow, so I should have my results by the end of the weekend, comparing scaling between my i7 920 and my SB-E.
I think we should all wait and see.
I am of the personal, unsubstantiated opinion that the GCN chips are going to be similar to whatever Nvidia puts out this gen, and it's likely that Nvidia is keeping pace with AMD on the new 28 nm chips.
I say this because of the rumored Apple switch back to Nvidia. If Nvidia had slower chips, or if they were having issues with delivery of their 28 nm parts, I doubt one of the most discerning computer makers would pick them over AMD.
It appears, writes Feeney, that there has been not a technical slip but a cooling of the relationship between Apple and AMD:
"Contacts confirm that AMD GPU performance is not the issue this is some sort of a customer-supplier tiff and Apple is known to be very difficult with its suppliers."
