bryanW1995
Lifer
- May 22, 2007
Thread is long and well off-topic.
Can anyone tell me whether or not the numbers in this BSN article are considered "accurate"?
http://www.brightsideofnews.com/new...lands-vs-nvidia-fermi-die-sizes-compared.aspx
If Cayman doubles Barts (shaders, texture units) but keeps the same memory controllers and ROPs, it could be around 400mm2 (rough math in the sketch below the specs).
Cayman ~ 400-420mm2
2240 Shaders with 96 Tex Units with 256-bit memory controllers and 32 ROPs
Barts = 255mm2
1120 Shaders with 48 Tex Units with 256-bit memory controllers and 32 ROPs
If Cayman has ECC, it could be going up against Fermi in HPC in a big way. Also explains why the memory controller remains 256-bit at the larger die size.
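A quick back-of-envelope on that doubling claim, as a sketch: the 60% figure for how much of Barts' die the shaders and texture units occupy is an assumption for illustration, not a measured number.

```python
# Toy die-area estimate: double only the shader/TMU blocks,
# keep the memory controllers and ROPs fixed.
barts_area = 255.0      # mm^2, rumored Barts die size
shader_tmu_frac = 0.60  # ASSUMED share of Barts' die used by shaders + TMUs

# Doubling the shader/TMU portion adds that same area once more:
cayman_est = barts_area + shader_tmu_frac * barts_area
print(f"Estimated Cayman die: {cayman_est:.0f} mm^2")  # -> 408 mm^2
```

Any assumed shader/TMU share between roughly 57% and 65% lands the estimate inside the 400-420mm2 window above.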
Doesn't Cypress have a memory controller with ECC?
Fudzilla, the other most accurate rumor mill, is saying it's 255mm2. There are also plenty of reports saying it's ~240mm2. Since most of the rumors/leaks are in the same ballpark, it's likely right in this neighborhood, and we'll all probably know for sure on Friday.
I have to admit that I never would have thought I would hear the words "Fudzilla" and "most accurate" in the same sentence. Even though you are obviously being facetious, it is still very strange seeing that sentence written.
Or:
Cayman ~ 400-420mm2
2240 Shaders with 96 Tex Units with 256-bit memory controllers and 64 ROPs
How do we know that is correct? They don't even have the correct bandwidth listed for the 5850.
Barts is 255mm2.
Why wouldn't Cayman be an extension of the Barts philosophy ---> decide what performance is needed to beat a known target and squeeze it into the smallest die size possible.
That 'known target' is the key.
HD6000 GPU design was likely close to final before GTX480 even saw the light of day in March 2010. GPU designs take years.
No, it has a memory controller that detects errors and resends the data, but it's not ECC. http://www.anandtech.com/show/2841/12
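The distinction is worth spelling out: detect-and-resend (what Cypress's GDDR5 link does) only needs to flag a bad transfer, while true ECC must locate the flipped bit so it can fix it in place. A toy sketch of the difference, using a parity bit for detection and a Hamming(7,4) code for correction; neither is the actual scheme GDDR5 or Fermi uses.

```python
def parity(bits):
    # Detection only: one parity bit flags an odd number of flipped bits,
    # but can't say WHICH bit flipped -- the only fix is to resend.
    p = 0
    for b in bits:
        p ^= b
    return p

def hamming74_encode(d1, d2, d3, d4):
    # Correction: three parity bits let the receiver locate a single
    # flipped bit, so it can be repaired without a retransmit.
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_correct(codeword):
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3            # syndrome = 1-based error position
    if pos:
        c[pos - 1] ^= 1                   # flip the bad bit back in place
    return [c[2], c[4], c[5], c[6]]       # recovered data bits

# EDC-style: the mismatch is detected, but the data must be resent.
sent = [1, 0, 1, 1]
garbled = sent[:]
garbled[2] ^= 1
assert parity(garbled) != parity(sent)

# ECC-style: the error is located and corrected in place.
code = hamming74_encode(1, 0, 1, 1)
code[4] ^= 1                              # single-bit error in transit
assert hamming74_correct(code) == [1, 0, 1, 1]
```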
I'm not sure what graphics settings (beyond those shown) this leak uses, but here is a huge graph of all current cards running Unigine at a slightly higher resolution on "high" settings: http://ixbtlabs.com/articles3/i3dspeed/0710/itogi-video-h2-wxp-1920-pcie.html I searched for a few minutes and could not find equal comparisons with several cards in the mix.
At that slightly higher resolution, the hd5870 is still faster than this leak shows the hd6870 to be at the lower resolution indicated. If this holds true across other DX11 benchmarks, Silverforce's statements about the hd6870 being faster in DX11 than the hd5870 will turn out to be false. Improved performance? Yes. But not faster than Cypress.
If xbit labs uses an i7-920 @ 4.0GHz, then no doubt it wouldn't be a fair comparison to this benchmark using a Phenom II.
Also, hopefully games won't use the absurd overkill approach to tessellation, which doesn't really improve graphics but causes a huge drain on performance.
http://ixbtlabs.com/articles3/i3dspeed/0910/itogi-video-h2-wxp-aaa-1920-pcie.html
Testbed configuration:
- Intel Core i7-975 3340 MHz CPU
- ASUS P6T Deluxe motherboard on the Intel X58 chipset
- 6GB of 1600MHz DDR3 SDRAM from Corsair
- WD Caviar SE WD1600JD 160GB SATA HDD
- Tagan TG900-BZ 900W PSU
- Windows 7 Ultimate 64-bit, DirectX 11
- 30" Dell 3007WFP monitor
- NVIDIA Drivers 260.63 beta
- ATI CATALYST 10.9
According to TV's link, the 5870 got 34.5 versus the leak's 34.3.
Things to factor in:
- TV's link has no AA; the leak has 4xAA.
- TV's link has no anisotropic filtering; the leak has trilinear.
I can't find the processor, but chances are it isn't a Phenom II X4; probably a faster Core i7 with an OC.