
Oct 19th AMD (ATi) and Nvidia news


AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
To answer those questions you have to realize that Nvidia does very likely have an answer and an upgrade path with Fermi on 40 nm. Their *usual* bump and refresh at the very least. Will it be enough? Well, we shall all know very soon.
You say "very likely". Do you have anything to back that up besides opinion and your obvious affection for Nvidia?
 

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
Short answer: "No"
"But that won't stop me extolling the awesomeness of NVDA".
"I'll even do the rounds of popular tech forums other than the one I work at"
"Only way to be sure the true message gets out"


Personal insults and character attacks are not acceptable.

Moderator Idontcare
 
Last edited by a moderator:

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
You say "very likely". Do you have anything to back that up besides opinion and your obvious affection for Nvidia?
Yes, of course i do. You need to reread what i wrote, for starters.

And i have very obvious affection for my site's *other* media Partner, AMD

Again .. it depends on your definition of "mid life kicker" .. we saw AMD do it on the same process and JHH was talking about it again very recently.
-- and of course, it depends on "need" .. Nvidia may decide that the 6000 series needs no answer except on pricing. As i said, 'it's AMD's move'
 
Last edited:

CitanUzuki

Senior member
Jan 8, 2009
464
0
0

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
each time the GeForce gets further ahead of the Radeon
-so, you're right .. it's time for a new comparison ;)

Hahah, what a general statement to make... You know the fastest card is made by AMD; it's been that way for a year now, and it looks like 'Radeon' will continue that lead for the next year or two.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
A more mature process has better yield and less leakage. Small improvements are made to the silicon in increments. As the process matures, the GPUs tend to overclock better, which is something anyone can notice.

We have seen this happen with every CPU and GPU so far. Cypress certainly did it; all of a sudden, we saw a lot of overclocked HD 5870s attempting to catch the GTX 480.
... why shouldn't Fermi's process mature, and why shouldn't Nvidia allow their partners the same latitude?
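The yield side of this argument can be sketched with the classic Poisson defect-yield model; the defect densities and die area below are illustrative numbers, not foundry data:

```python
import math

# Poisson yield model: Y = exp(-D * A), where D is defect density
# (defects per mm^2) and A is die area (mm^2). The D values are made
# up to illustrate an immature vs. a matured process.
def die_yield(defects_per_mm2: float, area_mm2: float) -> float:
    return math.exp(-defects_per_mm2 * area_mm2)

area = 529.0                     # roughly GF100-class die area (approximate)
early = die_yield(0.004, area)   # immature 40nm
mature = die_yield(0.001, area)  # matured 40nm

print(f"early: {early:.0%}, mature: {mature:.0%}")
```

The same drop in defect density helps a big die far more than a small one, which is also why process maturity matters more for Fermi-sized chips than for Cypress.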

Hahah, what a general statement to make... You know the fastest card is made by AMD; it's been that way for a year now, and it looks like 'Radeon' will continue that lead for the next year or two.

Haha yourself :p
- you are taking my statements out of context to change my meaning in a very poor attempt at a strawman argument.

Mine was a very specific statement referring ONLY to the GTX 480 vs. the HD 5870 .. the performance gap widens between those two flagship (single-GPU) cards, hence the *need* for AMD Graphics to respond with the HD 6000 series.

ANYWAY, it was nice talking with you all again. i don't care to convince anyone. You are practice for me; thanks. i have to get back to my benching .. and then i have to leave tomorrow on a small road trip and i need to make preparations for it now.

i'll check back to see who turns out to be right. You could be. Perhaps Nvidia has decided to play dead this year.
:D
 
Last edited:

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
What are you, his dad? :rolleyes:


Personal attacks are not acceptable.

Moderator Idontcare
 
Last edited by a moderator:

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
If anyone is unbiased it would be apoppin. It's quite silly for people to be calling him out.

Mark isn't blatantly biased, and his reviews aren't necessarily biased either. But he has not demonstrated beyond doubt a willingness to go against reviewer guidelines, as evidenced by his site using Far Cry 2 as one of its benchmarked games even though it's old and nobody plays it anymore... it's also ENORMOUSLY skewed towards NV, which is why NV put it on their reviewer's guide recommendation list. A lot of sites are like that, including Hardwarecanucks and even TPU (though in that case I think it's out of sheer inertia, since W1zzard is still benching UT3, for crissake).

AFAIK, there are only three major English-language sites that completely rejected Far Cry 2 benching when reviewing the GTS 450: Anandtech, [H]ardOCP, and Bit-Tech. Honorable mention to TechReport since they didn't bench FC2 per se. That demonstrates their independence and vaults them into the first-tier review sites, in my opinion.

To be clear, I am not saying that if a site benched FC2 during a GTS 450 review that the site is necessarily biased. I'm also NOT saying that Mark or his site is biased. What I am saying is that if a site did NOT bench FC2 during a GTS 450 review, then that is evidence to me of reviewer integrity.
 

Ovven

Member
Feb 13, 2005
75
0
66
It's funny reading that Nvidia's high end will be better than AMD's due to a more mature 40nm process. That works both ways, though, and AMD will certainly use the same fact to its advantage with the 6000 series.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Considering NVIDIA's recent track record of smoke and mirrors and following through with less than impressive products, the $500 I have set aside here will be getting a Cayman/6970 whenever it's available in November (presumably). If NVIDIA can muster something concrete and impressive before then, I'll consider it, but I'm not holding my breath.
 

amenx

Diamond Member
Dec 17, 2004
4,528
2,864
136
Cheerleading about totally unannounced, non-existent hardware just to give the impression that NVDA has a competitive answer to the new AMD 6 series is lame.
Probably no lamer than an ANNOUNCED top-end card that performed no better than the competition's mid-range, despite arriving on the scene several months late (ATI 2900XT vs. 8800GTS). At least Fermi did better than that. Just remember where ATI was 2 or 3 years ago. No side is immune to dramatic reversals of fortune.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Not sure if I really want to get involved in this thread, but anyway, I'd like to comment on two things:
The 'disabled functional units' that Anandtech refers to, aren't some kind of 'magical' or 'hidden' features of the Fermi architecture. It's just a reference to the disabled Cuda cores, which everybody is probably well aware of. Namely, the GF100 design has 512 cores, but only 480 enabled on current cards at most. And the GF104 has 384 cores, but only 336 enabled.
This amounts to 6.7% or 14.3% extra processing power respectively, in theory, assuming perfectly linear scalability.
A while ago, a 512-core GF100 card was tested, and the practical speed improvement was more along the lines of 5%.
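Those percentages follow directly from the core counts; a quick sketch under the same linear-scaling assumption stated above:

```python
# Theoretical headroom from enabling the disabled CUDA cores,
# assuming performance scales perfectly linearly with core count.
def extra_headroom(full_cores: int, enabled_cores: int) -> float:
    """Percentage gain from enabling all cores."""
    return (full_cores / enabled_cores - 1) * 100

gf100 = extra_headroom(512, 480)  # GF100: 512 total, 480 enabled
gf104 = extra_headroom(384, 336)  # GF104: 384 total, 336 enabled

print(f"GF100: +{gf100:.1f}%")  # +6.7%
print(f"GF104: +{gf104:.1f}%")  # +14.3%
```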

So, yes, there is room for improvement with Fermi, and yes, as the manufacturing process matures, they will be able to enable more cores (and perhaps even bump the clockspeed a bit)... but no, it's not going to be a dramatic improvement.

As for the other point:
The 'b' versions of nVidia's cores, G92->G92b and GT200->GT200b, signify a die-shrink (in this case 65nm -> 55nm).
Since TSMC is nowhere near introducing 28nm, we will not see a die shrink from AMD or nVidia anytime soon (the 6000-series was originally planned to be 28nm, but instead it was cut down to be more of a tweaked 5000-series on 40nm). So we won't be seeing any Fermi 'b' GPUs soon, at least not in this sense.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Not sure if I really want to get involved in this thread, but anyway, I'd like to comment on two things:
The 'disabled functional units' that Anandtech refers to, aren't some kind of 'magical' or 'hidden' features of the Fermi architecture. It's just a reference to the disabled Cuda cores, which everybody is probably well aware of. Namely, the GF100 design has 512 cores, but only 480 enabled on current cards at most. And the GF104 has 384 cores, but only 336 enabled.
This amounts to 6.7% or 14.3% extra processing power respectively, in theory, assuming perfectly linear scalability.
A while ago, a 512-core GF100 card was tested, and the practical speed improvement was more along the lines of 5%.

So, yes, there is room for improvement with Fermi, and yes, as the manufacturing process matures, they will be able to enable more cores (and perhaps even bump the clockspeed a bit)... but no, it's not going to be a dramatic improvement.

As for the other point:
The 'b' versions of nVidia's cores, G92->G92b and GT200->GT200b, signify a die-shrink (in this case 65nm -> 55nm).
Since TSMC is nowhere near introducing 28nm, we will not see a die shrink from AMD or nVidia anytime soon (the 6000-series was originally planned to be 28nm, but instead it was cut down to be more of a tweaked 5000-series on 40nm). So we won't be seeing any Fermi 'b' GPUs soon, at least not in this sense.

I agree with Scali for the most part, with the minor exception of how NI was intended to be 28nm... I think the next generation (used to be SI, now NI) was supposed to be 32nm until TSMC cancelled it.

A 512-core GTX 480 with modest clockspeed bump on a more mature process might be able to get close to a Cayman XT, but the large difference in die sizes makes it unlikely that NV will actually try to make many such chips. Maybe a few golden chips for a GTX 480 "PE" or something. ;)

A fully-enabled GTX 460 is more likely. With all cores and a modest clock bump, it should be able to take on Barts XT. Again, though, it'd be a larger, more expensive chip than Barts, so I'm not sure NV would want to enter into a price war. NV may simply want to take the full-GF104 parts it has on hand and make a limited edition "GTX460 core 384" and call it a day.

More likely in my book is a dual-GF104 part; call it the GTX 495. It simply makes sense: anyone who wants Surround but doesn't have an SLI-capable mobo is forced into the AMD camp right now. A GTX 495 would address that problem, as well as give Cayman XT a run for its money.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
I will have to agree with apoppin,

A full-shader GF108 with 240 shaders and 192-bit memory will probably be at 90-95% of GTX460 (768) performance. At 240mm2 it will be on par with BARTS at 230mm2. Of course they will have to raise the clocks too.

A full-shader (or CUDA core, if you like) GF104 with 384 shaders and raised clocks will be able to reach GTX470 performance at 360mm2. That gives NV a new $220-250 market in which AMD will have no counterpart, because the 6000 will be at the 400mm2 mark. CAYMAN 6950 (5850) will have GTX470 performance at 400mm2.

About the full GF100 and GTX480: in that article they benched the full 512-shader card, and it took a 5% rise in performance at the same 700MHz clocks as the 480-shader GTX480. If you raise the clocks to 800MHz, the performance difference is 15-20%. IF, and I say again IF, NV releases a full 512-shader GF100, it will raise the clocks too.
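That 15-20% figure can be sanity-checked with simple ratio arithmetic; a sketch assuming performance scales linearly with both shader count and clock (an optimistic upper bound):

```python
# Upper-bound estimate for a full 512-shader GF100 at 800MHz versus a
# stock 480-shader, 700MHz GTX 480, assuming linear scaling in both.
core_scale = 512 / 480   # ~1.067x from the extra shaders
clock_scale = 800 / 700  # ~1.143x from the clock bump

theoretical = core_scale * clock_scale  # ~1.22, i.e. +22% at best
# Substituting the ~5% actually measured from the extra cores alone:
measured = 1.05 * clock_scale           # ~1.20, i.e. +20%

print(f"theoretical: +{(theoretical - 1) * 100:.0f}%")
print(f"with measured core scaling: +{(measured - 1) * 100:.0f}%")
```

Both estimates land at the top of the quoted 15-20% range, which is what you would expect from an upper bound.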
 

nemesismk2

Diamond Member
Sep 29, 2001
4,810
5
76
www.ultimatehardware.net
Exciting news, can't wait to see what AMD and Nvidia have planned. I used to be a big Nvidia fan but have stuck with ATI for the past few years. I recently decided to upgrade my Radeon 4770 and picked the Palit GeForce GTS 450 Sonic. I have found it to be a great upgrade: very quiet, very cool, and great performance. In my main rig the Powercolor Radeon 4870 is showing its age, so I am looking to upgrade it. Will I buy AMD or Nvidia this time? Interested in seeing what they have to offer. :)
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Yeah, nVidia has just been playing possum. They can activate those castrated shaders of the GF100/104, and the super secret stuff nobody knows about, anytime they want to. Just trying to lull AMD into a false sense of security (while blocking all their punches with their face :rolleyes:). Any day now nVidia is going to decide they've bled enough and come back swinging and KO AMD. Just not today. Maybe tomorrow, because they could do it anytime they wanted to. </sarc>

Like others have said, Fermi is maxed out. Some driver maturing is not a mid-life bump. These "SOC" cards draw more power than the original 480, which was already horrible. The souped-up 5970s we see, the Toxic and the XFX 4GB Black Edition, draw less power than the reference 5970 while offering a substantial performance boost. When are we going to see that type of "refinement" from Fermi? I'm not trying to justify their prices, though. They are pitifully expensive. The price premium for the fastest cards in the world, I guess.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Yeah, nVidia has just been playing possum. They can activate those castrated shaders of the GF100/104, and the super secret stuff nobody knows about, anytime they want to. Just trying to lull AMD into a false sense of security. Any day now nVidia is going to decide they've bled enough and come back swinging and KO AMD. Just not today. Maybe tomorrow, because they could do it anytime they wanted to.

Agree with this part of your post.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Agree with this part of your post.

Do you really think Nvidia has a KO planned for AMD or at least the AMD 6xxx cards coming soon-ish?

I guess we're at best taking stabs in the dark here... talking about the rumored, but unconfirmed, performance of upcoming Radeon 6xxx parts versus fully enabled and higher-clocked Fermi parts that exist only in fanboy dreams and message boards.

But what can Nvidia do with what they have right now? An A4 or B spin, higher clocked with all shaders enabled? Or even what they have now simply being more mature and able to enable all parts/clock higher? Let's say they get an 850MHz, 512SP Fermi; what can we guesstimate the performance would be? Another 25% on top of the GTX480, roughly?

Isn't Cayman *rumored* to be something like 20% more performance than the GTX480? I guess it could be close. It'll be interesting to see how things shake out. I just have a feeling AMD has the upper hand in all of this, but maybe not. I wouldn't be too surprised if Nvidia was waiting for AMD's parts to be released so they know where they have to clock their next part to beat it.
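The 25% guess is consistent with the same ratio arithmetic; a sketch, where the 80% scaling efficiency is a made-up assumption:

```python
# Hypothetical 850MHz, 512-shader Fermi vs. a stock 700MHz GTX 480.
ideal = (512 / 480) * (850 / 700)  # ~1.30 if scaling were perfectly linear
realistic = 1 + (ideal - 1) * 0.8  # assume ~80% scaling efficiency

print(f"ideal: +{(ideal - 1) * 100:.0f}%")
print(f"realistic: +{(realistic - 1) * 100:.0f}%")
```

That gives roughly +24% to +30%, bracketing the "another 25%" guesstimate; if Cayman really lands ~20% over the GTX 480, the two would indeed be close.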

Hopefully this go around we have a bit more of a price war.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Like others have said, Fermi is maxed out. Some driver maturing is not a mid-life bump. These "SOC" cards draw more power than the original 480, which was already horrible.

You are wrong here. Look at the reviews of the Galaxy GTX480 SOC and GTX470 SOC. They are both clocked higher and draw less power than the original vanilla versions.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,812
1,550
136
I will have to agree with apoppin,

A full-shader GF108 with 240 shaders and 192-bit memory will probably be at 90-95% of GTX460 (768) performance. At 240mm2 it will be on par with BARTS at 230mm2. Of course they will have to raise the clocks too.

No. The full GF106 has 192 shaders, which is exactly half of a fully enabled GF104. It does have a disabled 64-bit memory controller and the associated ROPs, though. Enabling those parts and upping the clocks should give GF106 enough power to beat the HD5770 (and probably tie the HD6770, which is looking to be a higher-clocked Juniper itself), but the GTX 460 and Barts are way beyond GF106's ability to compete with.

A full-shader (or CUDA core, if you like) GF104 with 384 shaders and raised clocks will be able to reach GTX470 performance at 360mm2. That gives NV a new $220-250 market in which AMD will have no counterpart, because the 6000 will be at the 400mm2 mark. CAYMAN 6950 (5850) will have GTX470 performance at 400mm2.

Assuming Cayman is 400mm2 and delivers similar performance/transistor as Barts is supposed to, one would be foolish to think that both Cayman SKUs deliver anything other than above-GTX 480 performance. At 400mm2 these parts would definitely not be competing at the GTX470 level. Since Barts is supposed to beat the GTX 460, it makes sense that a fully enabled GF104 would compete with Barts, and I wouldn't be surprised if the fully enabled GF104 won this one. With a ~50-60% larger die it had better, or Nvidia is in some deep shit!

About the full GF100 and GTX480: in that article they benched the full 512-shader card, and it took a 5% rise in performance at the same 700MHz clocks as the 480-shader GTX480. If you raise the clocks to 800MHz, the performance difference is 15-20%. IF, and I say again IF, NV releases a full 512-shader GF100, it will raise the clocks too.

I can only see a fully enabled and overclocked GF100 beating Cayman if they decide to skimp on the die size, e.g. if Cayman is another ~330mm2 part like Cypress and not the 400mm2 everybody seems to be expecting. Even then, it is highly questionable how many of these parts Nvidia would be able to produce.
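The "~50-60% larger die" claim checks out against the figures quoted in this thread (all rumored, approximate numbers):

```python
# Die sizes as quoted in-thread (rumored/approximate, not confirmed specs).
gf104_mm2 = 360  # full GF104
barts_mm2 = 230  # Barts
larger = gf104_mm2 / barts_mm2 - 1

print(f"GF104 is {larger:.0%} larger than Barts")  # ~57% larger
```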
 

HurleyBird

Platinum Member
Apr 22, 2003
2,812
1,550
136
You are wrong here. Look at the reviews of the Galaxy GTX480 SOC and GTX470 SOC. They are both clocked higher and draw less power than the original vanilla versions.

I think the answer to this is three-fold.

1. Yes, 40nm is likely improving all the time. But that's a bit of a wash, since the same is true for ATI. NV benefits a little more because of their larger dies.

2. These are most definitely cherry-picked chips that are going into the 'super special' SKUs.

3. Better cooling leads to lower operating temperatures, which leads to less leakage, which leads to lower power consumption.
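The leakage point can be illustrated with a toy model: subthreshold leakage grows roughly exponentially with temperature. The constants below are invented for illustration, not measured GPU data:

```python
def leakage_power(temp_c: float, p0: float = 20.0, t0: float = 90.0,
                  doubling_deg: float = 25.0) -> float:
    """Leakage power in watts: p0 watts at t0 degrees C, doubling
    every doubling_deg degrees (toy model, made-up constants)."""
    return p0 * 2 ** ((temp_c - t0) / doubling_deg)

hot = leakage_power(95.0)   # stock-cooler territory
cool = leakage_power(70.0)  # a good aftermarket cooler

print(f"{hot - cool:.1f} W saved from cooling alone")  # 11.5 W
```

Even with a fixed chip and clocks, a better cooler alone can shave real watts off the total board power, which is part of how a higher-clocked SOC card can still draw less than the vanilla version.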