Originally posted by: Jax Omen
I still stand by my claim that anyone who can afford high-end GPUs shouldn't be affected by the power costs associated with said GPUs. It's pocket change by comparison. And if they are? Turn off your damn AC/heat! Those consume more power than everything else combined in the average home. The next-most power-hungry appliance is the fridge/freezer. PCs are pretty far down the list.
Originally posted by: Jax Omen
Eh, I'm just not enough of a hippy to care about the power consumption of my computer, I guess.
Originally posted by: chizow
After seeing firmer specs on the RV770, I don't think there's too much good news here. The only good news for ATI is that they'll have the fastest single-GPU card for a month or two until GT200 releases, at which point they'll get lapped again in terms of performance. Then an X2 version might put them in a competitive position again, at which point NV will respond with a die-shrink, an SLI-on-a-card solution of their own, or both, all while maintaining a comfortable lead at the high end with a $2000 GT200 Tri-SLI setup.
As for the 4870, I don't think it'll be much faster than 9800GTX/8800GTX/Ultra in terms of performance. Maybe 15-25% faster, max. The jump from 16 to 32 TMUs seems to be the biggest gain here, and texturing was specifically mentioned as a major bottleneck for ATI's R600 parts. Still, that only puts ATI's texture fill-rate on par with a 9600GT, not counting any advantages from different vendor designs. The rest of the specs seem rather unspectacular with questionable gains, although shaders may also scale well, as that seemed to be another weak point of R600. Going from 64 to 96 real shaders (320 to 480 superscalar) along with unlinked shader clocks should help close any gaps in shader performance in unoptimized games where NV previously held a lead.
This part would've been a great answer to G80/G92 six months ago when RV670 released, or even a year ago when R600 released. But at this point I think it'll be obvious that it's too little, too late: mostly competing with G80/G92, and made obsolete again when NV fires back with GT200 later this quarter.
Originally posted by: Rusin
If we use simple mathematics:
G92b: should be around 230mm^2, not counting architectural updates
RV770: should be around 250mm^2, not counting architectural updates
They're saying GT200 would have 1000-1100 million transistors. If they did it on 55nm, there'd be an even chance that GT200 would be a smaller chip than G92 [305-335mm^2 (G92: 324mm^2)]. On 65nm it should be around 430-470mm^2. And of course, there could be architectural updates that make it smaller.
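Rusin's ranges can be reproduced with simple density scaling. A sketch, assuming G92 is ~754M transistors in 324mm^2 at 65nm (the density figure that makes the quoted 305-335 and 430-470mm^2 ranges line up; the thread elsewhere cites ~686M) and an ideal (65/55)^2 optical shrink — both assumptions on my part:

```python
# Rough die-size estimate for a rumored 1000-1100M transistor GT200.
# Assumptions (mine, not from the thread): G92 = 324mm^2 / ~754M transistors
# at 65nm, and a perfect optical shrink scales area by (55/65)^2.

G92_AREA_65NM = 324.0      # mm^2
G92_TRANSISTORS = 754.0    # millions (the thread also cites ~686M)

density_65nm = G92_TRANSISTORS / G92_AREA_65NM    # Mtransistors per mm^2
density_55nm = density_65nm * (65.0 / 55.0) ** 2  # ideal optical shrink

for trans in (1000, 1100):                        # rumored GT200 range
    print(f"{trans}M transistors: {trans / density_55nm:.0f}mm^2 @55nm, "
          f"{trans / density_65nm:.0f}mm^2 @65nm")
```

This lands at roughly 308-338mm^2 on 55nm and 430-473mm^2 on 65nm, matching the ranges quoted above.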
-------
If these rumours are true, then AMD would be taking a step in performance-per-watt that we've never seen before, and all this using the same 55nm process and without even implementing a new GPU architecture? The rumours would also indicate that Nvidia is almost going backwards: GT200 would basically be a 9800 GX2 on a single chip, but GT200's TDP would be over 50W higher?
The HD4870 X2 would be more than twice as fast as the HD3870 X2 with no increase in power consumption?
Looking at the numbers to back up what I said:
In terms of shader throughput (GFLOPS):
HD 4870 (480 * 2 * 1.050 GHz) = 1008
HD 3870 (320 * 2 * 0.775 GHz) = 496
4870 = 2.03X 3870
In terms of texture fill-rate (Gtexels/s):
HD 4870 (32 * 0.850 GHz) = 27.2
HD 3870 (16 * 0.775 GHz) = 12.4
4870 = 2.19X 3870
In terms of memory bandwidth (256-bit bus, 32 bytes per effective clock):
HD 4870 (3880 * 0.032) = 124.2 GB/s
HD 3870 (2250 * 0.032) = 72.0 GB/s
4870 = 1.72X 3870
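The ratios above can be reproduced in a few lines. A minimal sketch of the arithmetic (the x2 ops/clock for a MADD and the 256-bit bus, i.e. 32 bytes per effective clock, are my reading of the numbers being used, not stated outright in the thread):

```python
# Reproduce the HD 4870 vs HD 3870 throughput ratios from the rumored specs.

def shader(alus, ghz):                 # GFLOPS: ALUs * 2 ops/clock (MADD) * GHz
    return alus * 2 * ghz

def texture(tmus, ghz):                # Gtexels/s: TMUs * GHz
    return tmus * ghz

def bandwidth(mhz_effective):          # GB/s on a 256-bit (32-byte) bus
    return mhz_effective * 0.032

hd4870 = (shader(480, 1.050), texture(32, 0.850), bandwidth(3880))
hd3870 = (shader(320, 0.775), texture(16, 0.775), bandwidth(2250))

for name, a, b in zip(("shader", "texture", "bandwidth"), hd4870, hd3870):
    print(f"{name}: {a:.1f} vs {b:.1f} -> {a / b:.2f}x")
```

This prints ratios of about 2.03x, 2.19x, and 1.72x, matching the figures above.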
Originally posted by: bryanW1995
@extelleron: more and more rumors point to 850 core and 1050 shader, so...
(480 * 2 * 0.85) = 816
816/496 = 1.65X. Still a good improvement, but not nearly as impressive.
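The same shader arithmetic with the more conservative 850MHz figure swapped in (assuming, as above, 2 ops per clock per ALU):

```python
# Shader throughput ratio if RV770's shaders run at 850MHz instead of 1050MHz.
hd4870_at_850 = 480 * 2 * 0.850   # = 816 GFLOPS
hd3870        = 320 * 2 * 0.775   # = 496 GFLOPS
print(f"{hd4870_at_850 / hd3870:.2f}x")   # ~1.65x
```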
Originally posted by: biostud
The RV770 is supposed to have 800M+ transistors and the G92 has ~686M, but the G92b should be larger than the RV770. Why is that?
Originally posted by: thilan29
Originally posted by: biostud
The RV770 is supposed to have 800M+ transistors and the G92 has ~686M, but the G92b should be larger than the RV770. Why is that?
Is G92b on the 55nm process or the 65nm process? That would explain the difference. Or maybe they've included some added functionality? (e.g. the NVIO chip that was separate on G80 but, I think, part of the die on G92.)
Originally posted by: biostud
The RV770 is supposed to have 800M+ transistors and the G92 has ~686M, but the G92b should be larger than the RV770. Why is that?
Not all structures are the same. Some things (e.g. cache) are bigger than other things (e.g. ALUs) when made out of the same number of transistors.
Originally posted by: Ketherx
These are going to have CrossFire, right? If these cards are really good, I'll make another AMD build instead of Intel, since CrossFire's a bit cheaper on an AMD system (that I've found, anyway).
Originally posted by: Bakku
Unless they revamp their current inefficient stream shaders, I don't see how the 4K series would be groundbreaking in performance. As long as the shader architecture stays the same, it doesn't matter whether it's using GDDR3 or GDDR5. Just my 2c.
Originally posted by: Extelleron
If you look at the specifications vs. the performance of the 3870, then you are dead wrong.
Looking at pure numbers, the HD 4870 is a solid ~2X improvement in just about every area of the GPU over the HD 3870. The only area where performance hasn't been improved much is the ROPs; the ROPs are not much of a bottleneck, and with a faster core speed, ATI already has a significant advantage in that area over nVidia.
Actually, I did run the numbers, and while I don't think 2x performance is realistic given the released specs of RV770, I do think a 50% increase over RV670 is achievable, which puts me at my 15-25% increase estimate over existing G80/G92 parts. Considering a 3870 in CF or X2 form often fails to beat the 8800/9800 GTX/Ultra in games that don't scale particularly well, I'm not sure why you're so confident RV770 will approach doubling RV670's performance. Personally, I think the improvements aside from the TMU additions are unnecessary, and that ignoring the ROPs is a mistake.
Originally posted by: munky
A few points I'd like to add:
1. The individual TMUs in the R6xx series are beefier than the ones found in the G80 and G9x cards. Each one works on FP16 data at full speed, while Nvidia's are based on INT8 data formats.
2. ATI would have a huge die-size advantage if these specs are true, and could roll out an X2 card way before the competition can respond, because Nvidia would likely have to wait until the refresh cycle to make a dual-GPU card viable using their much bigger GPU.
Point 1 is true, but doubling the TMUs only brings their FP16 capabilities in line with NV's G80 and G92. As for point 2, GT200 would be the monster on the older process, similar to G80. NV has more closely followed Intel's tick-tock approach as of late, and I think they've seen great success with it. Considering AMD and NV both use the same fabs for their chips, I don't see how you consider this an advantage for one over the other; lately they've just used alternate optical shrinks (G80@90nm/G92@65nm and R600@80nm/RV670@55nm). Sure, AMD might have a few months' advantage on an X2 part, but if it doesn't convincingly outperform a single-GPU part and still gets destroyed at the ultra high end by multiple GT200s, it'll still be lost in the overall GPU landscape.