KitGuru: 8970/50 in JUNE???

Page 7 - AnandTech Forums

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Charlie's articles almost always have some truth to them. In this case, he makes two claims. A release date and performance expectations. At least one of those claims is probably true...
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
It could be that AMD is focusing entirely on perf/mm^2 with the refresh, à la Barts, so it's possible they could come back with the 8970 die being smaller and 15% faster than the HD 7970 GE. If that is the case, they would be able to improve their gross margins while simultaneously maintaining price pressure on Nvidia. 15% faster than the HD 7970 GE at <=$400 would be an attractive product.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
I think AMD is on record that they are going to expand on the compute capabilities with Sea Islands. Whether that means the top tier parts or other ASICs I am not sure.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
TLDR - he says Sea Islands is coming in March and performance improvements aren't currently reaching what AMD had previously told AIBs.

That's as close as we can get to concrete proof that AMD is going to be early and performance will be quite a bit higher than what everyone thought.

Every time AMD's graphics division flopped, we got rapped by NV. 8800GTX was $599 and 8800GTS 640mb was $449 at launch. NV happily waited 1 year to launch 8800GT. Why? Because they could.

In the very same paragraph that you try and say nV gouges when they can, you bring up the part they released - without competition - that *decimated* AMD and did so for significantly less money than anything remotely close?

Luckily for us all, Charlie has given us great news on the AMD front: early and much faster than expected :)
 

Spjut

Senior member
Apr 9, 2011
933
163
106
Meh, waiting for Maxwell/Volcanic Islands anyway... Hopefully they'll give us DX12 and address the complaints about DX11

With the next-gen consoles likely coming in late 2013, it doesn't feel that wise getting a GPU before that
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
He has no clue about anything. Saying that GK104 would be power and size limited is hilarious. GK114 should be 15% faster? How is that even possible? Going from 4 to 5 GPCs gives GK104 25% more compute and texture performance. Add 1 memory controller and we have 25% more bandwidth and pixel performance, too.
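The unit-scaling arithmetic here can be sketched in a few lines. It's a back-of-envelope check: GK104's 4 GPCs and 256-bit bus (4 x 64-bit memory controllers) match Nvidia's published specs, while the 5-GPC/5-controller layout is purely the rumored GK114 configuration.

```python
# Back-of-envelope unit scaling for a rumored GK114 vs. the shipping GK104.
# GK104: 4 GPCs (2 SMX each) and a 256-bit bus (4 x 64-bit memory controllers).
# The GK114 numbers are the rumored +1 GPC / +1 memory controller.
gk104_gpcs, gk104_mcs = 4, 4
gk114_gpcs, gk114_mcs = 5, 5

compute_gain = gk114_gpcs / gk104_gpcs - 1    # SMX/TMU throughput scales with GPCs
bandwidth_gain = gk114_mcs / gk104_mcs - 1    # bus width scales with controllers

print(f"compute/texture: +{compute_gain:.0%}")    # +25%
print(f"bandwidth/pixel: +{bandwidth_gain:.0%}")  # +25%
```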

His articles are worthless since last year.
 

Feb 19, 2009
10,457
10
76
He has no clue about anything. Saying that GK104 would be power and size limited is hilarious. GK114 should be 15% faster? How is that even possible? Going from 4 to 5 GPCs gives GK104 25% more compute and texture performance. Add 1 memory controller and we have 25% more bandwidth and pixel performance, too.

His articles are worthless since last year.

And you KNOW for a fact that NV is targeting a particular TDP? Or for a FACT what's even in GK114?

Charlie may not always be accurate, but at least he has inside sources.

Everything is speculation until it's launched.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
In the very same paragraph that you try and say nV gouges when they can, you bring up the part they released - without competition - that *decimated* AMD and did so for significantly less money than anything remotely close?

I am not sure you followed what I said then. With less competition, it doesn't mean NV will stop releasing faster GPUs or stop improving price/performance but the pace of that innovation will slow down. How was NV able to sell 8800GTS 320/640 for such high prices then? Because AMD was uncompetitive that generation. This is not what any of us wants to see with HD8000 vs. GTX700 series. We have also seen AMD ride the market with high pricing of HD7000 series until NV launched their cards. Competition is necessary as it drives both companies to lower prices and release faster GPUs at a quicker pace than if only 1 of them had a competitive line.

BTW, HD3850/3870 cost less than 8800GT so not sure how NV decimated AMD for significantly less $ that generation. Now you are just altering history.

"Here's what's really interesting, on average the Radeon HD 3870 offers around 85% of the performance of the 8800 GT, and if we assume that you can purchase an 8800 GT 512MB at $250, the 3870 manages to do so at 87% of the price of the 8800 GT." ~ AnandTech


You seem to have forgotten that GTX280 launched at $649 and 9 months later HD4890 delivered that performance for $259. It's this type of close competition that allows gamers to buy faster GPUs for lower prices time and time again. You also seem to have conveniently ignored how AMD's HD4870 forced NV to launch a GTX260 216 and lower prices. Why did this happen? Competition.

Speaking of GTX780, weren't you one of the members here that jumped on the bandwagon that GTX780 should be at least 50% faster and 75-100% is "not unrealistic" based on some GTX580 vs. GTX460/560 comparisons when I was talking about 30-40% faster as being more realistic?

Looks like NV will not even drop GK110 and will use GK114 and maybe also go for only 15-25% more performance.

"Like AMD, NV is putting out a minor update to GK104,..., NV is also power and size bound."

Didn't I bring these 2 limitations up over and over in the GTX780 threads, telling you that NV is not immune to the same laws of physics that AMD has had to deal with on 28nm with its 365mm2 die size? And yet you continued to imply that NV would have no problem dropping a 2880 SP, 500-600mm^2 die if they wanted to.

NV is a business too and they need to make $. If the HD8970 is only 15% faster, NV will be more than happy with a 25% faster 780 on GK114/GK114-GX and manage to earn a healthy profit without needing a 7 billion transistor chip with a watercooling kit. These claims of an 8800GTX vs. 2900XT style scenario never sounded believable to me since, unlike the 2900XT, the HD7970 is not a slow GPU; it's actually faster at high resolutions than the 680 is.

I think GTX780 will beat 8970 but neither will deliver the performance increase people are hoping for.

Saying that GK104 would be power and size limited is hilarious.

Kepler will be just as power limited and size constrained if it grows to 400mm^2. GTX680 already peaks well into the 185W range. They could deliver 25-30% faster, but a 365mm^2 Kepler at 1.05ghz with GPU boost would use more power than a GTX680 and approach 210-220W, I bet. That's not magic, but physics. The difference is NV has more room to allocate transistors for TMUs, CUDA cores, etc., but AMD already added the 384-bit bus and NV has yet to do that. A larger bus will penalize the die size a little bit, but not as much as the compute fat the 8970 will carry, which will penalize AMD more.

We know, looking at HD7870 vs. GTX660 (GK106), that on a per-mm^2 basis, in pure gaming form, GCN is about 7-12% faster than Kepler, but GK106 is more power efficient per mm^2. It looks like Tahiti has 60-70mm^2 of die being spent on double precision, the dynamic scheduler, compute fat, etc. that GK114 won't have to deal with. The same theme continues as the HD7970 GE is about 5-12% faster than the GTX680. Since Pitcairn XT (7870) is slightly faster than the GTX660 on the same die size, GCN in pure gaming form is at least as efficient as Kepler in its pure gaming form (GK104/106/107). The reason Tahiti XT is 365mm^2 is compute, the 384-bit bus and double precision. That gives NV an extra 60-70mm^2 of die space to use for TMUs, CUDA cores and a larger bus - a huge advantage that the HD8970 won't have. In other words, the HD8970 may need to be 60-70mm^2 larger to match the performance of a GK114, since it's unlikely that AMD will drop compute functions, as that's their strategy for HSA.

On the positive side for AMD, their chips have higher transistor density: 2.8B in 212mm^2 for the 7870 vs. 2.54B in 214mm^2 for GK106. I still think the 780 will end up faster than the 8970 because NV can add 60-70mm^2 of wider bus and functional units and, even at 365mm^2, end up way faster than the HD7970 GE. It could be enough to squeeze out up to 20% over the 8970, since GK104 is also more memory-bandwidth starved than Tahiti XT, which would aid the GTX700 a lot more for the next round.
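The density comparison works out like this; a quick sketch using only the transistor counts and die areas quoted above:

```python
# Transistor density from the figures quoted above: count / die area,
# expressed in millions of transistors per mm^2.
chips = {
    "HD 7870 (Pitcairn)": (2.80e9, 212),  # transistors, die area in mm^2
    "GTX 660 (GK106)":    (2.54e9, 214),
}
for name, (transistors, area) in chips.items():
    density = transistors / area / 1e6  # millions of transistors per mm^2
    print(f"{name}: {density:.1f}M transistors/mm^2")
# Pitcairn: ~13.2M/mm^2, GK106: ~11.9M/mm^2 -- roughly 11% denser for AMD
```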

However, the expectations for a 2880 SP 50-100% faster GTX780 sound like wishful thinking the more we hear about GTX700 series.
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Since we're on the subject of Charlie, here's what he says about "GK114":
http://semiaccurate.com/2012/10/12/what-is-going-on-with-nvidias-gk114/

Honestly, I don't see how Nvidia can squeeze even 15% more performance out of GK104 without increasing the memory bandwidth by at least 25%. And that doesn't add up with 7GHz VRAM on the same 256-bit bus. So whatever Nvidia has going on, if the GK114 is spec'd the same as GK104 (insofar as bus width and core count), then I hope they aren't relying on that being their next flagship GeForce card. If they add another memory controller and two more SMX units (along with 1 more GPC), then we'd be talking a 25-35% performance improvement at the same clock speeds.
 
Last edited:

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
And you KNOW for a fact that NV is targeting a particular TDP? Or for a FACT what's even in GK114?

Charlie may not always be accurate, but at least he has inside sources.

Everything is speculation until it's launched.

Charlie may have inside sources, but they may not always be accurate.

All depends on the order of wording. And do you know what I would call a source that isn't accurate? I'd call it no longer a source.
 

moonbogg

Lifer
Jan 8, 2011
10,734
3,454
136
It makes no sense to put something 50-75% faster out only a year after the 680. That's the kind of jump that happens with a whole new generation, not with a refresh, and not that soon. Especially when there is no reason to do so. Besides, the current gen will treat us well for a good while. I can't see myself forking over tons of cash anytime soon, but that's just me, I guess.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I think GTX780 will beat 8970 but neither will deliver the performance increase people are hoping for.

If the gtx780 is based only on a tweaked/refined GK104 - as in no major architectural additions - then I do not see it beating the hd8970. There has to be something in between GK104 and GK110 (OBR's rumor?) if Nvidia wants the performance crown outright without having to use GK110 in a Geforce product. I'm still not ruling out a $599-649 Geforce based GK110 at some point next year, though.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
GK110 is reportedly huge and probably doesn't get as good of yields as smaller GPUs, and NV was still 28nm-capacity-constrained recently, so I suspect they are capacity constrained on GK110s.

Meanwhile, margins are much thicker on professional (HPC, graphics) cards than on consumer GeForce cards.

I just don't see NV using precious GK110 GPUs on low-margin GeForces unless they are forced into doing so, and based on rumors so far, they will not be forced to do so due to HD8xxx not being much of an improvement over HD7xxx. I do think NV may release a tweaked version of GK104 or maybe build a version in-between a GK104 and GK110 and release that as the GTX 7xx series. It should be enough to recapture the single-GPU crown but not hurt their GK110 supplies.
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Charlie may have inside sources, but they may not always be accurate.

All depends on the order of wording. And do you know what I would call a source that isn't accurate? I'd call it no longer a source.

BUT GK104 has a dedicated PhysX block! And it loses to Pitcairn sometimes!

EDIT: Seriously though, it's like Charlie gets valid information from his sources then purposely tries to exaggerate or litter it with garbage.
 
Last edited:

Crap Daddy

Senior member
May 6, 2011
610
0
0
OBR said something about the 780 being not GK110 but a different chip; Charlie says it's gonna be GK114 or something. They have one thing in common: GK110 will not be in GTX form. Willingly or forced, Nvidia made a great move with Kepler in the face of this whole PC weakness and downfall stuff. They managed to separate the gaming cards from the professional GPUs. They also managed to sell more laptop discrete chips than ever, so they have some escape routes. Right as we type, AMD is going down hard. I'm afraid to think of the future of video cards.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Kepler will be just as power limited and size constrained if it grows to 400mm^2.

Sure it will. But GK104 is not limited by power or die size. :rolleyes:
BTW: K10, with two GK104s and 4.5 TFLOPs, is using 225 Watt. That's 50% more compute performance for 21% more power. But yeah, it will be very hard for nVidia to deliver more than 15% more performance with the next high-end Geforce product.
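The ratios quoted here can be reproduced against a GTX 680 baseline. The 680 figures below are my assumptions, not stated in the post: 1536 CUDA cores x 2 FLOPs/clock x 1006 MHz base clock, and the ~186 W board power that the "21% more power" claim implies.

```python
# Checking the quoted Tesla K10 ratios against an assumed GTX 680 baseline.
gtx680_tflops = 1536 * 2 * 1.006e9 / 1e12  # ~3.09 TFLOPs single precision
gtx680_watts = 186                          # implied by the +21% power figure

k10_tflops, k10_watts = 4.5, 225            # Tesla K10: two GK104s, one board

print(f"compute: +{k10_tflops / gtx680_tflops - 1:.0%}")  # about +46%
print(f"power:   +{k10_watts / gtx680_watts - 1:.0%}")    # about +21%
```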
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
BTW, HD3850/3870 cost less than 8800GT so not sure how NV decimated AMD for significantly less $ that generation. Now you are just altering history.

I am altering history to a place called "reality".

When the 8800GT launched the only place you could buy a 3870 was Neverland, and from what I understand the Leprechaun strike combined with the Unicorn and Fairy distribution SNAFU made it hard to come by even there.
 
Feb 19, 2009
10,457
10
76
Sure it will. But GK104 is not limited by power or die size.

There is no arbitrary power limit; it doesn't exist for enthusiast cards, as seen from prior generations.

Would it make a difference if a top card was 350W, or 400W? Not at all, if it's got the performance to match.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
There is no arbitrary power limit; it doesn't exist for enthusiast cards, as seen from prior generations.

Would it make a difference if a top card was 350W, or 400W? Not at all, if it's got the performance to match.

And we know that nVidia will use 250 Watt (>300 Watt in Furmark) for the high end. So - how is GK104 power limited at 170/190 Watt? :confused:
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Meh, waiting for Maxwell/Volcanic Islands anyway... Hopefully they'll give us DX12 and address the complaints about DX11

With the next-gen consoles likely coming in late 2013, it doesn't feel that wise getting a GPU before that

The next gen consoles are completely irrelevant. All the rumors are pointing to them using existing technology that's already considered old by PC gaming standards.
 

cplusplus

Member
Apr 28, 2005
91
0
0
The next gen consoles are completely irrelevant. All the rumors are pointing to them using existing technology that's already considered old by PC gaming standards.

But, in theory, the next consoles will set a new baseline, which should mean more demanding games and games that use the newer DX features. While the baseline right now is DX9, if the next consoles use parts available right now, then the baseline probably becomes a mixture of DX10 and DX11, leaning more towards the latter than the former. It's kind of like the improvement of IGPs meaning PC games can start using more features because it's more likely that everyone has graphics parts capable of running them.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
But, in theory, the next consoles will set a new baseline, which should mean more demanding games and games that use the newer DX features. While the baseline right now is DX9, if the next consoles use parts available right now, then the baseline probably becomes a mixture of DX10 and DX11, leaning more towards the latter than the former. It's kind of like the improvement of IGPs meaning PC games can start using more features because it's more likely that everyone has graphics parts capable of running them.

It may set a new baseline for games, but won't make a difference for video cards, so there's really no reason to wait for next gen consoles before buying a video card.