Nvidia castrates Fermi to 448SPs


Genx87

Lifer
Apr 8, 2002
41,091
513
126
Tesla is supposed to be the most expensive SKU, built from the best cherry-picked GPUs, so if it only has 448 SPs, the consumer card will have the same SP count. I doubt that nVidia will use the best cores for consumer cards when the Tesla cards can be more profitable and mission critical. After all, nVidia can scale down and disable SPs for cheaper SKUs and still make a profit selling them to regular consumers, pretty much the same business tactics as before.

Unless the Tesla units need to be within a 225 watt power envelope. Don't those boxes come in 4 and 8 GPU setups? A single 225W+ part in a consumer machine is doable. Eight of them exceeding their power envelope can increase thermals greatly.
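Rough math on that (the 250 W per-card figure below is just a hypothetical overage, not anything from a spec):

# Back-of-the-envelope GPU power for the rack boxes mentioned above.
# 225 W is the Tesla envelope discussed; 250 W is a hypothetical overage.
def gpu_power(num_gpus, watts_per_gpu):
    return num_gpus * watts_per_gpu

for gpus in (1, 4, 8):
    print(f"{gpus} GPU(s): {gpu_power(gpus, 225)} W in-spec, "
          f"{gpu_power(gpus, 250)} W if each card creeps up to 250 W")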
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
You sound quite sure of this. Mind if I quote you and save it for later? :)

Jen-Hsun himself has said (in a press release) that 2/3 of Nvidia's profits come from the HPC market... but 2/3 of the gross revenue comes from gamers.

So it does make sense to me that if yields were really poor, the HPC market would get first priority for cores.
 

nemesismk2

Diamond Member
Sep 29, 2001
4,810
5
76
www.ultimatehardware.net
Nvidia managed to survive the GeForce 5800 Ultra, which would have killed most companies, so even if Fermi isn't too great Nvidia will just release a better version. Sort of like what ATI did with the 2900, turning it into the 3870.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Depending on how powerful the new SPs are compared to the old ones, having 448 of them could still offer very competitive performance. It’s all about the work per clock cycle.

:thumbsup: At the end of the day it's not about the # of cylinders under the hood, but the total package. If Fermi gets us a good price/performance ratio and great performance, I don't really care if it has 448 SPs or 512, etc.

At this time Fermi is nowhere near launch, so it's too early to discuss specifics. Until the next generation-defining game comes out (as was the case with Doom 3, Far Cry and Crysis), there is not a lot to be excited about. What we want is to see this beast used in a game that shatters the current graphics standards once more.
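To put some toy numbers behind the "work per clock cycle" point (every figure below is made up for illustration, not a Fermi or GT200 spec):

# Peak-throughput sketch: SP count is only one factor.
def peak_gflops(sps, shader_clock_ghz, flops_per_sp_per_clock):
    return sps * shader_clock_ghz * flops_per_sp_per_clock

# Hypothetical parts: fewer SPs can still win on paper if each SP
# does more work per clock or runs at a higher clock.
print(peak_gflops(512, 1.30, 2))  # 1331.2 GFLOPS
print(peak_gflops(448, 1.40, 3))  # 1881.6 GFLOPS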
 
Last edited:

Soleron

Senior member
May 10, 2009
337
0
71
Also in the PDF: clock speeds. 1.25-1.4GHz for shaders; 1.8-2.0GHz for memory [5870 = 2.4GHz]. The shader clocks are a little lower than the existing Tesla shader clocks (1.5GHz on the C1070).
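For what it's worth, here's what those memory clocks work out to in bandwidth, assuming the rumoured 384-bit bus for Fermi (the bus width is my assumption, it isn't in the PDF) and the 5870's known 256-bit bus:

# GDDR5 bandwidth sketch: the quoted clock is treated as the I/O clock,
# with data moving on both edges (so effective data rate = 2x the clock).
def bandwidth_gb_s(mem_clock_ghz, bus_width_bits):
    return mem_clock_ghz * 2 * bus_width_bits / 8

print(bandwidth_gb_s(2.4, 256))  # HD 5870: 153.6 GB/s
print(bandwidth_gb_s(1.8, 384))  # Fermi, low end of the PDF range: 172.8 GB/s
print(bandwidth_gb_s(2.0, 384))  # Fermi, high end: 192.0 GB/s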
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Also in the PDF: clock speeds. 1.25-1.4GHz for shaders; 1.8-2.0GHz for memory [5870 = 2.4GHz]. The shader clocks are a little lower than the existing Tesla shader clocks (1.5GHz on the C1070).

Gone are the days of seeing shaders clocked above 1.80GHz; a higher number of shaders + more complexity = lower shader clocks.
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
So how come nobody seems very shocked at the core clock here?

1.25-1.4GHz. That's roughly double the clock speed of existing Nvidia cards.
 
Last edited:

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
So how come nobody seems very shocked at the core clock here?

1.25-1.4GHz. That's roughly double the clock speed of existing Nvidia cards.

If anything, those clocks (if accurate) are probably for the shader speed. Not the core.
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Edit: I really need to refresh before replying. Doh.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Well, the good news is that in a thread at the XS forums it seems the yield problems for ATI GPUs have been resolved and they're ramping up. Good news for ALL.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I thought you would be happy. I thought the problem was the fab; if ATI is ramping on 40nm, then so should NV with Fermi. So now we can't point fingers at the fab and say it's all their fault. This is much better, for you I would think. Clearly this means NV is ready to go now that the 40nm issue is resolved. No dead horse to kick around. Is this not a good thing? Time will show all, now that the fab problems are resolved.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
It specifically says in the PDF:
"Processor core clock: 1.25 GHz to 1.40 GHz"
"Memory clock: 1.8 GHz to 2.0 GHz"
It's the processor cores, aka shaders. If you actually looked at the PDF closely instead of just picking something out, you would have noticed that. ;)

Number of processor cores: 448
Processor core clock: 1.25 GHz to 1.40 GHz
 
Last edited:

Kakkoii

Senior member
Jun 5, 2009
379
0
0
It's the processor cores, aka shaders. If you actually looked at the PDF closely instead of just picking something out, you would have noticed that. ;)

Number of processor cores: 448
Processor core clock: 1.25 GHz to 1.40 GHz

Oh yes, because I'm so blind, stupid and lazy that I didn't read the line right above it in the PDF <_<.

Obviously I don't know the difference between the actual GPU clock and the individual core clock. So please enlighten me.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Oh yes, because I'm so blind, stupid and lazy that I didn't read the line right above it in the PDF <_<.

Obviously I don't know the difference between the actual GPU clock and the individual core clock. So please enlighten me.
What? Are you not even aware that Nvidia has different clock speeds for the processor cores (shaders)? It's been like that for years, and considering you are on a tech forum where you have 300 posts, I figured you knew that. Anyway, the 8000 series cards are the ones that started having separate clocks.

This is how my GTX 260 would be listed:

Cores 192
Graphics Clock 576 MHz
Processor Clock 1242 MHz
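
And this is why the processor (shader) clock is the one that matters for compute throughput; the peak-FLOPS math on those GTX 260 numbers only uses it (3 FLOPs per core per clock is the usual MAD+MUL dual-issue rating for that generation):

# Peak single-precision throughput for the GTX 260 listed above.
# Note the 576 MHz graphics clock never enters the calculation.
cores = 192
processor_clock_ghz = 1.242
flops_per_core_per_clock = 3  # MAD + MUL dual issue

peak_gflops = cores * processor_clock_ghz * flops_per_core_per_clock
print(f"{peak_gflops:.0f} GFLOPS")  # ~715 GFLOPS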
 
Last edited:

Kakkoii

Senior member
Jun 5, 2009
379
0
0
What? Are you not even aware that Nvidia has different clock speeds for the processor cores (shaders)? It's been like that for years, and considering you are on a tech forum where you have 300 posts, I figured you knew that. Anyway, the 8000 series cards are the ones that started having separate clocks.

This is how my GTX 260 would be listed:

Cores 192
Graphics Clock 576 MHz
Processor Clock 1242 MHz

I've probably read about it before. But I have to deal with a very bad memory. I read lots of articles on various subjects, but I retain very little of the information. It's sad =/

So what is the point of having them separate?
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
I've probably read about it before. But I have to deal with a very bad memory. I read lots of articles on various subjects, but I retain very little of the information. It's sad =/

So what is the point of having them separate?

If the performance is bound by shader cores rather than other parts of the GPU like the ROP/TMU, then it makes sense to clock the shader cores high and keep the rest similar. It would also be more power efficient than increasing clock overall.
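
A toy model of that tradeoff (the power split between the shader domain and the rest of the chip is invented purely for illustration, and voltage changes are ignored):

# Toy dynamic-power model: each clock domain's power scales with its clock.
shader_power_w = 90.0  # hypothetical baseline for the shader domain
rest_power_w = 60.0    # hypothetical baseline for everything else

def total_power(shader_clock_scale, rest_clock_scale):
    return shader_power_w * shader_clock_scale + rest_power_w * rest_clock_scale

# If the workload is shader-bound, +30% on the shader clock buys ~+30% performance:
print(total_power(1.3, 1.0))  # 177 W: only the shader domain pays for it
print(total_power(1.3, 1.3))  # 195 W: raising every clock costs more for the same gain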
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
If the performance is bound by shader cores rather than other parts of the GPU like the ROP/TMU, then it makes sense to clock the shader cores high and keep the rest similar. It would also be more power efficient than increasing clock overall.

So wouldn't stating Nvidia's shader clocks be more relevant to the actual performance? Especially when comparing to ATI. Or are ATI's like that also? I don't see ATI's shader clocks stated, only core clocks.

http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units

http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units


(Also, I see you live in BC. What city if ya don't mind me asking? I live in Kelowna.)
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
So wouldn't stating Nvidia's shader clocks be more relevant to the actual performance? Especially when comparing to ATI. Or are ATI's like that also? I don't see ATI's shader clocks stated, only core clocks.

http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units

http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units


(Also, I see you live in BC. What city if ya don't mind me asking? I live in Kelowna.)
AMD's shader units are clocked at the same speed as the rest of the core. This is because AMD and NV use very different designs; the AMD design is slow & wide, while the NV design is fast & narrow. The closest comparison would be something like the P4 compared to the Core 2 Duo.
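
To put numbers on "slow & wide" vs "fast & narrow" using the then-current flagships (these are paper peak figures, which is exactly why they don't map directly onto game performance):

# Peak single-precision throughput, AMD vs NVIDIA style.
def peak_tflops(alus, clock_ghz, flops_per_alu_per_clock):
    return alus * clock_ghz * flops_per_alu_per_clock / 1000.0

print(peak_tflops(1600, 0.850, 2))  # HD 5870: ~2.72 TFLOPS, shaders at the core clock
print(peak_tflops(240, 1.476, 3))   # GTX 285: ~1.06 TFLOPS, shaders on their own faster clock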
 

Kakkoii

Senior member
Jun 5, 2009
379
0
0
AMD's shader units are clocked at the same speed as the rest of the core. This is because AMD and NV use very different designs; the AMD design is slow & wide, while the NV design is fast & narrow. The closest comparison would be something like the P4 compared to the Core 2 Duo.

Yeah, I've read Anandtech's in-depth comparison of their current architectures.

I wonder why Nvidia and other manufacturers don't use the shader clock in the title instead of the core clock. Would be a lot more attractive to the average consumer haha.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
I thought you would be happy. I thought the problem was the fab; if ATI is ramping on 40nm, then so should NV with Fermi. So now we can't point fingers at the fab and say it's all their fault. This is much better, for you I would think. Clearly this means NV is ready to go now that the 40nm issue is resolved. No dead horse to kick around. Is this not a good thing? Time will show all, now that the fab problems are resolved.

I meant TSMC. It remains to be seen if the 40nm fab problems are TRULY resolved.
 
Last edited:

StrangerGuy

Diamond Member
May 9, 2004
8,443
124
106
Isn't an increase in the number of SPs subject to diminishing returns in real-world performance?
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Isn't an increase in the number of SPs subject to diminishing returns in real-world performance?

Where did you hear that? Besides, this is a new arch; we really don't know what the performance-per-shader relationship is yet. It could be that even a 384-shader Fermi could outperform a 480 SP GTX 295. We don't know yet.
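
Purely as an illustration of how an SP increase can fall short of linear scaling when the rest of the chip (setup, ROPs, bandwidth) stays put; the fractions below are made up and say nothing about Fermi specifically:

# Amdahl-style toy: only the shader-limited fraction of frame time
# speeds up when the SP count doubles.
def speedup(sp_ratio, shader_bound_fraction):
    p = shader_bound_fraction
    return 1.0 / ((1.0 - p) + p / sp_ratio)

for p in (0.9, 0.7, 0.5):
    print(f"shader-bound fraction {p}: doubling SPs gives {speedup(2.0, p):.2f}x")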