
Nvidia castrates Fermi to 448SPs


Kakkoii

Senior member
Jun 5, 2009
379
0
0
LOL, you live quite far from me. I live in Surrey. That makes two people from BC on AnandTech already. :)
About a 3-hour drive at a good speed :p. It's a beautiful drive from here to Vancouver. (Except when you pass Chilliwack & Abbotsford... all that cow manure from the copious farms is a real hit to the nose, haha.)
 

Soleron

Senior member
May 10, 2009
337
0
71
Where did you hear that? Besides, this is a new arch. We really don't know what the performance-per-shader relationship is yet. It could be that even a 384-shader Fermi could outperform the 480-SP GTX 295. We don't know yet.
If this was a revolution in shader architecture, they would have hyped that up at the conferences. My impression was that the big changes are on the memory and cache side: ECC, L2, support for more languages, and other GPGPU changes. Even RV770 didn't get much % increase in performance per shader.

I think it's safe to assume no more than 5-10% increase per shader at the outside.
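The shader-count argument above is easy to make concrete. A quick back-of-envelope sketch (my own arithmetic, not from the thread, assuming equal clocks and perfect scaling with shader count, which real GPUs don't achieve):

```python
# If per-shader performance were unchanged, how much per-shader gain
# would a 448-SP Fermi need just to match a 480-SP GTX 295?

def required_per_shader_gain(old_shaders: int, new_shaders: int) -> float:
    """Fractional per-shader speedup needed for the new part to match
    the old part's aggregate shader throughput, all else being equal."""
    return old_shaders / new_shaders - 1.0

gain = required_per_shader_gain(480, 448)
print(f"~{gain:.1%} per-shader gain needed to break even")  # ~7.1%
```

So a 5-10% per-shader improvement would put a 448-SP part only roughly at parity with a 480-SP GTX 295 on raw shader throughput, which is in line with the cautious estimate above.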
 

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
If this was a revolution in shader architecture, they would have hyped that up at the conferences. My impression was that the big changes are on the memory and cache side: ECC, L2, support for more languages, and other GPGPU changes. Even RV770 didn't get much % increase in performance per shader.

I think it's safe to assume no more than 5-10% increase per shader at the outside.
I wish they could explain the whole ECC thing. If it's implemented like system memory, these things will be bad at overclocking; I hope it's some new type of system.
 

Lonyo

Lifer
Aug 10, 2002
21,939
6
81
I wish they could explain the whole ECC thing. If it's implemented like system memory, these things will be bad at overclocking; I hope it's some new type of system.
From what I've seen, ECC works by reserving a portion of the on-card RAM for ECC purposes.
Rather than adding an extra chip the way desktop ECC RAM does (nine chips instead of eight, with the ninth for ECC), they just reduce the available capacity, meaning the 6GB card has something like 5.25GB usable and the 3GB card more like 2.6GB.

http://www.nvidia.com/object/io_1258360868914.html
Footnote i:
i With ECC enabled, user available memory will be 2.625GB for a C2050 and 5.25GB for a C2070
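Both figures in that footnote work out to exactly 7/8 of the installed memory, i.e. a flat 1/8 ECC reservation. A quick sanity check (my own arithmetic, inferred from the C2050/C2070 numbers, not stated by NVIDIA):

```python
# Fraction of on-card RAM apparently set aside for ECC bookkeeping,
# inferred from 2.625/3 = 5.25/6 = 0.875.
ECC_RESERVED_FRACTION = 1 / 8

def usable_gb(installed_gb: float) -> float:
    """User-visible memory with ECC enabled, assuming a flat 1/8 reservation."""
    return installed_gb * (1 - ECC_RESERVED_FRACTION)

print(usable_gb(3.0))  # 2.625, matches the C2050 footnote
print(usable_gb(6.0))  # 5.25, matches the C2070 footnote
```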
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
From what I've seen, ECC works by reserving a portion of the on-card RAM for ECC purposes.
Rather than adding an extra chip the way desktop ECC RAM does (nine chips instead of eight, with the ninth for ECC), they just reduce the available capacity, meaning the 6GB card has something like 5.25GB usable and the 3GB card more like 2.6GB.

http://www.nvidia.com/object/io_1258360868914.html
Footnote i:
i With ECC enabled, user available memory will be 2.625GB for a C2050 and 5.25GB for a C2070
There is an extra chip, AFAIK, with desktop memory. Sort of like a RAID 5 setup: use three 200GB drives and you still only have 400GB of usable storage, with the third drive's worth of capacity used for parity bits, which are usually spread out across all three drives. For every 3 bits of space, there are 2 bits of actual data and one parity bit.

I don't really know the exact implementation on Fermi though. Is it in fact only 5.25GB of usable memory? Or is there an extra parity-bit chip? Dunno. But I guess we'll know fairly soon.
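The RAID 5 analogy can be sketched in miniature. This is illustrative only, not how Fermi's ECC actually works: XOR parity over equal-sized blocks lets you rebuild any one lost block from the survivors.

```python
# Minimal XOR-parity demo, in the spirit of the RAID 5 analogy above.

def parity(a: bytes, b: bytes) -> bytes:
    """Parity block computed over two equal-sized data blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

block_a = b"data onee"
block_b = b"data twoo"
p = parity(block_a, block_b)   # stored alongside the data

# Pretend block_a is lost; recover it from the parity and block_b.
recovered = parity(p, block_b)
assert recovered == block_a
```

The capacity cost is the same either way you account for it: one block of parity per stripe, whether it lives on a dedicated chip or is carved out of the general pool.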
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
If this was a revolution in shader architecture, they would have hyped that up at the conferences. My impression was that the big changes are on the memory and cache side: ECC, L2, support for more languages, and other GPGPU changes. Even RV770 didn't get much % increase in performance per shader.

I think it's safe to assume no more than 5-10% increase per shader at the outside.
Posts like these are going to be fun to go back and read later. I've seen so many posts of varying degrees of certainty, and each one was written matter-of-factly, as if "it can't be any other way." It could be 5 to 10%, of course.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
998
126
Personally, I don't care if the GPU I buy runs at 1MHz and has one shader, so long as the performance is there. If Fermi delivers on the performance front, I don't think it should really matter what its specs are. I think all this shows (if true) is that Nvidia may have overshot what is feasible on current technology (at prices people are willing to pay). But if it still clobbers the 5870 and is priced right, who cares what the specs are?

With that being said, I think Nvidia slipped up this round with a very late part compared to the competition. Of course we haven't seen a single bench yet, so we'll have to wait and see.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,078
1,217
126
At this point, what does it matter? There is nothing the 5870 can't handle at 1920x1200, and a lot it also handles at 2560x1600.

There is nothing 5970 or 5870 crossfire cannot handle at 2560x1600.

NV's card is still 3 months away, and once it releases, what perceptible performance improvement is it going to offer any gamer who already has a 58XX or 59XX series card?

As well, it has to contend with ATI releasing their 6XXX series in September. NV dropped the ball this time round.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Personally, I don't care if the GPU I buy runs at 1MHz and has one shader, so long as the performance is there. If Fermi delivers on the performance front, I don't think it should really matter what its specs are. I think all this shows (if true) is that Nvidia may have overshot what is feasible on current technology (at prices people are willing to pay). But if it still clobbers the 5870 and is priced right, who cares what the specs are?

With that being said, I think Nvidia slipped up this round with a very late part compared to the competition. Of course we haven't seen a single bench yet, so we'll have to wait and see.
IINM, this is the first time Nvidia will release a new architecture on a new manufacturing process. Trying to think back, but it's all blurry now. LOL. Egg nog and stuff.
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
If this was a revolution in shader architecture, they would have hyped that up at the conferences. My impression was that the big changes are on the memory and cache side: ECC, L2, support for more languages, and other GPGPU changes. Even RV770 didn't get much % increase in performance per shader.

I think it's safe to assume no more than 5-10% increase per shader at the outside.
How is comparing it to the RV770 relevant? RV7XX isn't a new architecture, so obviously you're not going to see much of a % increase in performance per shader. It's just a new configuration, with Eyefinity added.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
998
126
IINM, this is the first time Nvidia will release a new architecture on a new manufacturing process. Trying to think back, but it's all blurry now. LOL. Egg nog and stuff.
Haha... happy holidays Keys.

I'm sure Nvidia will bring a good part out, but with it being so many months late compared to their competitor, I don't know how relevant it'll be in the gaming world. I have my eyes set on a 5850; it'll chew up my 1920x1200 res and give me DX11 capability. Nvidia's part will probably match the competition, or maybe even surpass the 5870. But with the AMD parts so readily available right now, and Nvidia's new cards nowhere in sight (we haven't even seen a benchmark), I just don't know how much it'll matter. This seems a lot like the FX 5800 launch to me.

This is just my gut feeling, but I think Nvidia is more focused on the HPC market than the gaming market. I guess we'll just have to wait and see...

Oh yea, sorry for any ramblings or poor sentences... I have a few slipped discs in my spine and took some pain meds and muscle relaxers about 20 minutes ago...
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
With the above statements I agree. But man, this thing with Intel/NV and the FTC is going to get ugly. Read the above patent.
That patent is for a hardware emulator of portions of a rasterizer, an idea so stupid I don't think Intel will ever have to worry about defending it :)
 

tviceman

Diamond Member
Mar 25, 2008
6,733
513
126
www.facebook.com
http://www.patentstorm.us/patents/6989838/claims.html

With the above statements I agree. But man, this thing with Intel/NV and the FTC is going to get ugly. Read the above patent. You think maybe NV has infringed on this patent? As I understand it, that is the case. I wish NV would close that can of whoop-ass, as it's going to stall the industry.
I think Intel's piss-poor excuse for an IGP in the majority of PCs has stalled the industry more than anything the FTC or Nvidia can do at this point.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Nemesis, why bring the patent discussion into here when there is a very comprehensive thread going on about it? Your first post was already moved to the relevant thread.
 

StrangerGuy

Diamond Member
May 9, 2004
8,443
124
106
I think Intel's piss-poor excuse for an IGP in the majority of PCs has stalled the industry more than anything the FTC or Nvidia can do at this point.
Because businesses, by far the largest segment of the PC market, buy cheap, reliable machines to play Crysis in their offices, amirite.

And while we're on the subject of IGPs, both Nvidia's and ATI's parts are trash compared to their own budget standalone cards.
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
Because businesses, by far the largest segment of the PC market, buy cheap, reliable machines to play Crysis in their offices, amirite.

And while we're on the subject of IGPs, both Nvidia's and ATI's parts are trash compared to their own budget standalone cards.
They're pretty good actually, considering the power envelope they have to fit in. It's also that OEMs tend to go with the shitty versions. Alienware is one company that uses Nvidia's best chips, like the 260M/280M. The 280M has 128 cores and can pump out some pretty good graphics for a laptop.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Nemesis, why bring the patent discussion into here when there is a very comprehensive thread going on about it? Your first post was already moved to the relevant thread.
Ya, I knew you moved that. Why isn't it relevant here? Isn't NV doing something with the memory on Fermi? I think this is relevant. I am sure they did it on other models also. But now that war has broken out, we may find that NV was using Intel IP, or parts thereof. Do you know differently? In a post you said you knew nothing of Fermi. After all, this is a 2003 rastershell that Intel can't have because Ben says all rasterizing belongs to NV.

ECC memory was talked about above, so this belongs here as well as in the other thread. Do you really think anyone knows more about caches and memory allocation than Intel? Maybe IBM.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
That patent is for a hardware emulator of portions of a rasterizer, an idea so stupid I don't think Intel will ever have to worry about defending it :)
Is that what you got out of that read? Here's a hint for ya: in that link is another link that gives full details. Take a couple of hours and read it. You're a college kid, right? It shows.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I think intel's piss poor excuse of an IGP in a majority amount of PC's has stalled the industry more than anything the FTC or Nvidia can do at this point.
Hey, why are you confused? IGPs are not for gaming; they are for cheap graphics and low power. You have a bit of a shock coming real soon. You guys are confusing as all hell. Ya talk about netbooks like they should be gaming PCs, LOL. I think mainly NV has a lot of cheerleaders where there is NO game, but puff it up. But all of us have a right to an opinion, and I have a strong hunch that NV is using Intel IP in Fermi. I also believe NV had the right to use it, but I think maybe that will change shortly, once the courts clear the Nehalem bus claims. Because if Intel wins that part, then NV's countersuit works in Intel's favor and Intel can legally remove from NV all of its IP, just as NV is claiming about Intel. The winner here is going to be a winner, nothing more; the loser, though, may get smashed, be it Intel or NV. Intel will not settle this out of court if their claims are in fact in the contract. If Intel wins the Nehalem on-die memory thing, they may very well sue on the same grounds that NV is using now to counter Intel's move. NV may have sold the farm.
 

Soleron

Senior member
May 10, 2009
337
0
71
How is comparing it to the RV770 relevant? RV7XX isn't a new architecture, so obviously your not going to see much of a % increase in the performance per shader. It's just a new configuration and with Eyefinity added.
I meant that GT200 -> Fermi is the same kind of transition as R6xx -> R7xx. They didn't change the shader architecture, so the best-case performance gain is going to be something like R7xx was over R6xx. I'm using R7xx as the best case because I can't think of a transition that got a bigger % out of each shader. The only ones that did were where they changed what a shader meant, with G80 or R600.

Eyefinity wasn't until Evergreen, not R7xx, but that doesn't affect your point.
 

tviceman

Diamond Member
Mar 25, 2008
6,733
513
126
www.facebook.com
Hey, why are you confused? IGPs are not for gaming; they are for cheap graphics and low power. You have a bit of a shock coming real soon. You guys are confusing as all hell. Ya talk about netbooks like they should be gaming PCs, LOL.
I didn't read past what I just quoted because I could not get over the poor punctuation and grammatical mistakes. Perhaps English isn't your native language; if so, I understand.

Anyways, I never said anything about Nvidia in the post you are quoting, nor did I say anything about netbooks. BUT ---- last time I checked, the ION could do gaming and was still extremely affordable.

If someone goes out and spends $300-400 on a netbook or laptop, in my opinion it should still be able to run (if only in a bare-essentials mode) the software that is on the aisle next to it. It should still be able to play high-definition streaming video without severe slowdowns or lockups, and it should still have support from the majority of software developers currently in business.

Intel's IGP fails at all the things I just mentioned. Netbooks powered by Intel IGPs would be more truthfully named "mobile word processors with internet and picture storage."
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
61
They're pretty good actually, considering the power envelope they have to fit in. It's also that OEMs tend to go with the shitty versions. Alienware is one company that uses Nvidia's best chips, like the 260M/280M. The 280M has 128 cores and can pump out some pretty good graphics for a laptop.
Unfortunately, they're based on the G92 chip, and there are RV770-based mobility chips like the HD 4870 1GB, which can smoke it with no issues. You can select it on the Alienware M17x customization page.

Anyways, I never said anything about Nvidia in the post you are quoting, nor did I say anything about netbooks. BUT ---- last time I checked, the ION could do gaming and was still extremely affordable.
Gaming performance of the ION is comparable to a GeForce 8200/8400GS, which is crappy as hell; it's enough to play most games only at ultra-low settings, considering the horrible CPU bottleneck that the Atom is.
 
