ch33kym0use
Senior member
- Jul 17, 2005
Spec comparison: https://docs.google.com/spreadsheet...LTJKX0h5RGVCdXc&single=true&gid=0&output=html
Performance: http://www.techpowerup.com/reviews/Powercolor/HD_6850_SCS3_Passive/27.html
Assuming the specs are accurate, we can pretty much compare the 7870 to the 5850 using the current 6970 as a bridge. The 7870 should be approximately 5-10% faster than the 6970, and the 6970 is currently approximately 40% faster than the 5850. So it should end up about 50% faster than the 5850.
Does the maths even work like that? Percentages multiply, they aren't additive.
IF the 7870 is 10% faster than a 6970:
So if the 6970 gets 100 fps, the 7870 gets 110 fps.
The 6970 is 50% faster than a 5850 in modern dx11 games. No point including older dx9 games.
So the 5850 gets ~66 fps compared to the 6970's 100 fps.
66 fps vs 110 fps = ??
Confusing.
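For what it's worth, a minimal sketch of how those percentages compound (the 100 fps baseline is purely illustrative, taken from the post above, not a benchmark result):

```python
# Percentage gains compose multiplicatively, not additively.
baseline_6970 = 100.0                 # illustrative fps for the 6970
fps_7870 = baseline_6970 * 1.10       # 7870 assumed 10% faster -> 110 fps
fps_5850 = baseline_6970 / 1.50       # 6970 assumed 50% faster than the 5850 -> ~66.7 fps

gain = fps_7870 / fps_5850 - 1.0      # 1.10 * 1.50 - 1 = 0.65
print(f"7870 vs 5850: {gain:.0%} faster")   # 65%, not 10% + 50% = 60%
```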
Overclocking always changes the math. If those specs are true, even a 5850 --> 7870 may not be worth the upgrade. The 7870 has the same specs as a 6970, which performs more like a slightly overclocked 5870 in non-tessellated benchmarks. People with reference 5850s easily clock beyond 5870 speeds.
Guys,
Let me be blunt here: THERE IS NO XDR2 IN SI/HD7000/GCN. Trust me on this, the spec list floating around is complete bull, and you can tell by who is re-posting it and who is not. Some people know, and they are being VERY quiet on the subject.
This is not meant to knock XDR2 and/or Rambus, it is just a statement about what is in and what is not in the next GPU.
-Charlie
Depends on price, but this is pretty much what I've been waiting for. The people who need >580GTX speed are also looking at astronomically expensive display setups, huge power bills, etc... These people are dumping loads of cash into their computers.
People at 1920x1200 or 1080p don't need anything faster than current cards for very high quality settings; they just need cheaper and lower power... at least until software starts catching up.
I never really expected my 5770 to be usable for as long as it has been. At the rate things have slowed to, a 5850 was a pretty good midrange buy for a good 2 years, and if purchased at launch, before AMD pumped up the price, it's really not far from being price competitive with current offerings at similar performance.
It's possible a 7850 will have me happy for 3 years?!? That's just wack given how fast I was swapping cards in and out in the 9700 pro --> 7900gt timeframe.
XDR2 is costly next gen stuff. AMD will stick to tried and proven 256bit GDDR5 which has worked fantastically well for them the past 4 years.
8000MHz and above XDR2 has been out at least 5 years, so surely costs have become reasonable by now. And where else can they go with GDDR5 on a 256-bit bus? The fastest available is 7000MHz, so even if they ran it at full speed that would only mean a 27% increase in bandwidth. And realistically they would run it at least 200-300MHz below that, resulting in less than a 25% bandwidth increase for a next gen product.
I have been thinking along that line for some time, that memory bandwidth would be a bottleneck. But things are not as simple as that. The 5870 has only 30% more memory bandwidth than the 4870 but is almost 2X as fast. Tests have already shown that the card is not really bandwidth starved as many would have believed. If they use Samsung's 7Gb/s chips for the next generation, that will be an increase of 40% memory bandwidth even if they don't clock it to the max.
What? 40%? They are using 6000MHz memory now and running it at 5500MHz. Even 7000MHz memory running at full speed would only be 27% faster than 5500MHz.
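As a quick back-of-the-envelope check of those numbers (assuming a plain 256-bit GDDR5 bus and only the effective data rates mentioned above):

```python
# Peak GDDR5 bandwidth on a 256-bit bus for the effective data rates in the posts above.
BUS_BITS = 256

def bandwidth_gbs(effective_mtps):
    """Peak bandwidth in GB/s for a given effective data rate in MT/s."""
    return BUS_BITS / 8 * effective_mtps / 1000

current = bandwidth_gbs(5500)   # 6000MHz-rated chips currently run at 5500MHz effective
fastest = bandwidth_gbs(7000)   # fastest GDDR5 mentioned in the thread

print(f"5500MHz: {current:.0f} GB/s, 7000MHz: {fastest:.0f} GB/s")
print(f"increase: {fastest / current - 1:.0%}")   # ~27%, matching the post above
```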
The 6970 is 50% faster than a 5850 in modern dx11 games. No point including older dx9 games.
Of course, the HD6970 hardly overclocks beyond 880MHz on the reference design, while the HD5850 has 25-30% overclocking headroom. It only takes 850MHz on the HD5850 to match an HD5870. And beyond that, you are only closing the gap on the HD6970.
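To put rough numbers on that claim, a small sketch. The 725MHz and 850MHz reference core clocks aren't quoted anywhere in this thread; they're just the commonly listed stock speeds for the 5850 and 5870:

```python
# Rough overclocking-headroom arithmetic for the 5850-vs-5870 claim above.
stock_5850 = 725   # MHz, commonly listed HD 5850 reference core clock (not from this thread)
stock_5870 = 850   # MHz, commonly listed HD 5870 reference core clock (not from this thread)

oc_to_match_5870 = stock_5870 / stock_5850 - 1
print(f"OC needed for a 5850 to reach 5870 clocks: {oc_to_match_5870:.0%}")   # ~17%
print(f"25-30% headroom would put a 5850 at {stock_5850 * 1.25:.0f}-{stock_5850 * 1.30:.0f} MHz")
```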
Just out of curiosity, where do you get this? I bought a 6970 a week after its release and it overclocked to 950/1450 without a blink of an eye, default voltage. I've run it at that clock nearly 24/7 since I got it with no issues in games. Have a couple buds with similar lack of problems. Only thing is I have to manually set the fan, since CCC's fan throttling is garbage.
I've seen stories at overclock.net of people getting to 1000MHz on the GPU without any problems, and I'm pretty sure mine could push further if I tried. Are others having issues? (Just curious.) Maybe it's a problem with a specific brand? Hmm.
Not even close!!! I can't even believe this was taken seriously in this thread.
A more recent review, from July 2011:
http://www.guru3d.com/article/his-radeon-6970-iceq-mix-review/18
Crysis 2 and Metro.
Around 50%.
Bottom line is, if HD7870 is a refreshed 6970 with 70mhz higher GPU clock speeds, it won't make any game more playable by a lot more than 1 filter stage (i.e., 2AA --> 4AA or 4AA --> 8AA). Sorry, but that's simply not worth the $200 price for most people with an overclocked HD5850.
I'm not discussing the merits of it; the point was to work out how the maths should be done properly with percentages.
Your point is very true. It doesn't make sense to release a 28nm mid-range part that only performs near 6970 speeds, since the 6xxx series was itself a compromise. Thus, the true 28nm mid-range should be compared to what the 6970 would have been if it had been made on 32nm.