[Semiaccurate] GK104/Kepler/GTX680 Next Week?

Status
Not open for further replies.

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
There have been so many rumors about Kepler and GK104 that many of them end up contradicting each other. Where AMD's pre-launch was relatively quiet and unassuming, nVidia's Kepler is absolutely filled with rumors and crap. Frankly, until we have solid clues from reputable reviewers I'm not holding my breath. There hasn't even been an engineering sample, and all we've gotten is a shot of the PCB, yet people are claiming they've got performance estimates... ffs, they can't even nail down a concrete release date yet.

Market analysis isn't that hard to do. In fact, the graphics card market is very predictable when it comes to performance.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
Market analysis isn't that hard to do. In fact, the graphics card market is very predictable when it comes to performance.

I'm staring at my 5870 through the window of my case. Care to reiterate your point? lol :)

Alright, I'll concede the 5870 was one of those "magic" cards that offered some pretty outstanding performance relative to its competition, but the fact is there are curveballs. A 20-40% increase with each generation? GCN didn't jump by just 20%, and with some light and easy OCing the 7970 performs as well as a crossfired 6970 setup. That's not predictable, that's insane.

And if we're going by market trends then expect Kepler to come in hot, require a thermonuclear reactor to power and show up about a year from now.

My point is we don't know anything. nVidia has been relatively tight-lipped, and with so many rumors flying around we can't pin down anything at all. The notion that we can estimate its performance from only a PCB shot is even sillier. The fact is Kepler doesn't even have a solid release date, and yet we have people here talking performance estimates. If we're to deduce anything at all, it should be that it can't get here soon enough; outside of that, neither you nor I can state anything factual, and to claim otherwise is an outright lie.

Just sit and wait, people; let nVidia take their time and get it right this time. I don't think anyone wants to see another Fermi release :/
 

mkmitch

Member
Nov 25, 2011
146
2
81
Sad that folks get this berserk about a GPU being available or not.

Nah, it's funny as can be: the same folks spewing the same stuff in every similar thread. I don't think they can help themselves. :p You can predict almost every comment they'll throw out there.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
I'm staring at my 5870 through the window of my case. Care to reiterate your point? lol :)

Alright, I'll concede the 5870 was one of those "magic" cards that offered some pretty outstanding performance relative to its competition, but the fact is there are curveballs. A 20-40% increase with each generation? GCN didn't jump by just 20%, and with some light and easy OCing the 7970 performs as well as a crossfired 6970 setup. That's not predictable, that's insane.

And if we're going by market trends then expect Kepler to come in hot, require a thermonuclear reactor to power and show up about a year from now.

My point is we don't know anything. nVidia has been relatively tight-lipped, and with so many rumors flying around we can't pin down anything at all. The notion that we can estimate its performance from only a PCB shot is even sillier. The fact is Kepler doesn't even have a solid release date, and yet we have people here talking performance estimates. If we're to deduce anything at all, it should be that it can't get here soon enough; outside of that, neither you nor I can state anything factual, and to claim otherwise is an outright lie.

Just sit and wait, people; let nVidia take their time and get it right this time. I don't think anyone wants to see another Fermi release :/

Sounds to me like you need to follow or read about the history of AMD vs NVIDIA from the HD 3000 series to now. It's getting so predictable it's not even funny.

Again, for each new process node, engineering at NVIDIA has a set goal of delivering 50-65% more performance from one GPU to its replacement. In this case, the target at NVIDIA is 50-65% higher performance for GK104 compared to its predecessor, GF114.

Seems relatively simple to understand to me.

But hey, I want to hear more naysayers say GK104 will deliver a beating to Tahiti, even though when you look at the engineering behind it, that won't be possible. The GTX 680 will not be 20% faster than the HD 7970. And obviously GK110 is too powerful compared to Tahiti, hence Tenerife.

Everything is very predictable. Based on market positioning and common sense, I'd been saying for about a month that the 7870 and 7850 would both be slightly faster than the 6970 and 6950, respectively, and that the 7870 would be clocked at or near 1GHz. Look what happened.

Study the market a bit and you'll see most things are easily predictable now.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
That dude from OBR is a huge nvidia fanboy and is completely biased in 99% of his articles. He has a hard on for nvidia.

Charlie seems to be the AMD fanboy.

Charlie vs OBR, war of the fanboys?

Charlie isn't an AMD fanboy. He simply hates nVidia. It sometimes looks like the same thing.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
Everything is very predictable. Based on market positioning and common sense, I'd been saying for about a month that the 7870 and 7850 would both be slightly faster than the 6970 and 6950, respectively, and that the 7870 would be clocked at or near 1GHz. Look what happened.

Study the market a bit and you'll see most things are easily predictable now.

That's a bit easier when we have the clock speeds, # of transistors, # of ROPs, and the perf-per-ROP from the 7970 and 7950, isn't it? Or their OC limitations? Or just how much performance you'd gain from OCing? That can all be derived from the slides we saw before the cards were released. If you want a pat on the back, you're not getting one.

What do we have from Kepler? Right. Absolutely nothing other than a PCB shot. It didn't take a genius to figure out what the 7870/7850 would deliver, since we've had the hard info for nearly a month now. If you can give me the price and performance on Kepler, then you're either taking a shot in the dark or you have sources at nVidia.

This notion that if I studied the market I'd be able to predict it is also a farce. Like I said, if we were going by trends, we'd say that Kepler will be a year late, that its power source would be a nuclear plant, and that it would double as a space heater. I don't think anyone in their right mind would have predicted the 7970 would outperform the 6970 by 80%+ unless we had some concrete info. In the case of Kepler, the only thing written in stone is that we have absolutely no idea how it performs or even when it'll be released. Quit patting yourself on the back for taking blind shots with the lights turned off; it's getting annoying.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
That's a bit easier when we have the clock speeds, # of transistors, # of ROPs, and the perf-per-ROP from the 7970 and 7950, isn't it? Or their OC limitations? Or just how much performance you'd gain from OCing? That can all be derived from the slides we saw before the cards were released. If you want a pat on the back, you're not getting one.

What do we have from Kepler? Right. Absolutely nothing other than a PCB shot. It didn't take a genius to figure out what the 7870/7850 would deliver, since we've had the hard info for nearly a month now. If you can give me the price and performance on Kepler, then you're either taking a shot in the dark or you have sources at nVidia.

This notion that if I studied the market I'd be able to predict it is also a farce. Like I said, if we were going by trends, we'd say that Kepler will be a year late, that its power source would be a nuclear plant, and that it would double as a space heater. I don't think anyone in their right mind would have predicted the 7970 would outperform the 6970 by 80%+ unless we had some concrete info. In the case of Kepler, the only thing written in stone is that we have absolutely no idea how it performs or even when it'll be released. Quit patting yourself on the back for taking blind shots with the lights turned off; it's getting annoying.

Read, study the market, and stop naysaying. You'll be making another excuse when it turns out I was right next month.

I've never said anything about power consumption, die size, or any other factor that's not performance. Insinuating that I have is disingenuous.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
Read, study the market, and stop naysaying. You'll be making another excuse when it turns out I was right next month.

I've never said anything about power consumption, die size, or any other factor that's not performance. Insinuating that I have is disingenuous.

Sorry, then correct me. You're estimating performance based on previous generation-to-generation numbers while knowing nothing else at all about the graphics cards, correct?

In that case, keep estimating while I and everyone else with a bit of sense lump it in with the remaining baseless rumors revolving around Kepler.

:thumbsup:
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
[Image: "Shut up and take my money" meme]
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I don't think anyone in their right mind would have predicted the 7970 would outperform the 6970 by 80%+ unless we had some concrete info.

HD7970 is nowhere near 80% faster than HD6970. It's roughly 42-45%, depending on the source; to be 80% faster it needs a ~30% overclock. And actually, in the past almost all major AMD generations brought 60-100% performance increases (once you include the modern games of the time) at stock speeds, no overclocking required. The same goes for nVidia.
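If anyone wants to check that arithmetic, performance ratios just multiply. A rough sketch (the 1.43 and 1.30 figures are ballpark numbers from this post, not exact benchmark results):

```python
# Rough sanity check: a ~43% stock lead times a ~30% overclock.
# Both figures are approximations, not measured averages.
stock_lead = 1.43   # HD7970 vs HD6970 at stock (~42-45%)
overclock = 1.30    # ~30% core overclock

effective = stock_lead * overclock
print(f"OC'd HD7970 vs stock HD6970: {effective:.2f}x")  # ~1.86x
```

Which is why the 80% figure only shows up once a hefty overclock is in play.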

You can use the Voodoo Power Ratings list, which nicely summarizes the performance increases. It's quite accurate despite looking like an amateur's work. You can look up launch dates on GPUreview.com.

Based on historical cadence of NV, Kepler's high-end should be about 60-70% faster than GTX480 and 40-50% faster than GTX580. We don't know if that happens, but it is easily in-line with the past of NV's own GPU generational leaps.

GPU evolution (last couple generations).

GTX580 > GTX280 by 73% (which means about 60% faster than GTX285)
GTX480 > GTX280 by 51%
GTX280 > 8800GTX by 63%

If you take more modern games, the difference is even larger.
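The "about 60% faster than GTX285" line falls out of dividing through the common GTX280 baseline. A quick sketch (the ~8% GTX285-over-GTX280 gap is my assumption for the refresh, not a measured number):

```python
# Relative performance chains through a shared baseline (GTX280 here).
gtx580_vs_280 = 1.73   # from the list above
gtx285_vs_280 = 1.08   # assumed ~8% lead for the GTX285 refresh

gtx580_vs_285 = gtx580_vs_280 / gtx285_vs_280
print(f"GTX580 vs GTX285: {gtx580_vs_285:.2f}x")  # ~1.60x, i.e. about 60% faster
```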

There's been a 20-40% increase with each generation? GCN didn't jump by 20, but with some light and easy OCing the 7970 performs as well as a crossfired 6970 setup. That's not predictable, that's insane.

Actually, that's not true. AMD has never increased performance from one major generation to the next by only 20-40%, aside from HD5800 --> HD6900 (but the 6000 series shouldn't have been labeled a 6 in the first place).

Your very own HD5870 spanked HD4870, often by 2x, at stock.

Here is a review.

From that review, I quote:

"When compared to the Radeon HD 4870 the results are staggering, as we saw an 83% performance gain on average, with the Radeon HD 5870 delivering more than twice the average frame rate at 2560x1600 in games such as Far Cry 2, Company of Heroes and Wolfenstein. Just 2 of the games tested dipped below a 60% performance gain, and those games were World in Conflict and Supreme Commander.

Commonly the Radeon HD 5870 appeared to be between 65% and 85% faster than the Radeon HD 4870, which was truly impressive."


That's at stock speeds. :p

Here is another review:

"...the single-GPU Radeon HD 5870 actually managed to outperform the Radeon HD 4870 X2 in 6 out of the 15 games tested...The Radeon HD 5870 also beat the GeForce GTX 295 in 5 of the games tested"

In how many games does an HD7970 outperform HD6990 or GTX590? It needs a 30% overclock just to match them.

Finally, before that:

8500 --> 9700Pro = > 100%
9800XT --> X800XT = 75-100% increase
X800XT --> X1800XT = 75-100%
X1800XT --> HD2900XT/3870 = 75-100% (* this one is tricky since if you count X1900XT as a half-generation then it alone brought 30-40% in shader intensive games over the X1800XT, but that diminishes the gain of the HD2900XT to follow)
HD3870 --> HD4870 = 50-75% (at least)
HD4870 --> HD5870 = at least 60-80% faster on average and more or less 2x faster in modern games.

In fairness to HD7970, with overclocking, it does get to 70-100% faster performance over HD5870/6970 in the most demanding games today (Crysis 2, BF3, etc.)
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Everything is very predictable. Based on market positioning and common sense, I'd been saying for about a month that the 7870 and 7850 would both be slightly faster than the 6970 and 6950, respectively, and that the 7870 would be clocked at or near 1GHz. Look what happened.

Even blind squirrels can still find nuts. Waka waka! ;)
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
...In fairness to HD7970, with overclocking, it does get to 70-100% faster performance over HD5870/6970.

^This. There's way more performance left on the table this gen than in at least the previous four. The 7900s, and now the 7800s, perform way above the previous gens; they've just been clocked well below their potential. Hell, they've even been clocked below their perf/W sweet spot.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
I think the thing people forget is that as GPUs become more and more advanced, it's harder to improve them. For example, everyone remembers how awesome the 8800 GTX was at release, and I still see it referenced now, but what people seem to forget is that it was the first unified shader architecture. That removed a tremendous bottleneck from previous architectures and drastically increased GPU efficiency. With that bottleneck gone, it's also one less way to make next year's model that much better. Each generation the returns will get smaller and smaller until something radical changes the current state of things.
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
I think the thing people forget is that as GPUs become more and more advanced, it's harder to improve them. For example, everyone remembers how awesome the 8800 GTX was at release, and I still see it referenced now, but what people seem to forget is that it was the first unified shader architecture. That removed a tremendous bottleneck from previous architectures and drastically increased GPU efficiency. With that bottleneck gone, it's also one less way to make next year's model that much better. Each generation the returns will get smaller and smaller until something radical changes the current state of things.

The 8800 Ultra was my first true love, card-wise. :wub:

Had an 8800GTS 512MB and decided I was gonna buy an 8800 Ultra come hell or high water. Then I walked into the Fry's in Oxnard, CA and found the rumored GTX280 on the store shelf for $650; being cheaper and, by the rumors, a beast, I picked it up. Going home I found out the card wasn't for sale on Newegg yet. I guess someone at Fry's made a boo-boo selling it before the NDA lift, so I waited nearly 3 days for eVGA to release drivers. :colbert:
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I think the thing people forget is that as GPUs become more and more advanced, it's harder to improve them. For example, everyone remembers how awesome the 8800 GTX was at release, and I still see it referenced now, but what people seem to forget is that it was the first unified shader architecture. That removed a tremendous bottleneck from previous architectures and drastically increased GPU efficiency. With that bottleneck gone, it's also one less way to make next year's model that much better. Each generation the returns will get smaller and smaller until something radical changes the current state of things.

Good points. 8800GTX vs. 7900GTX shouldn't even count as the norm since 7900GTX tanked in modern games while 8800GTX excelled at them. The gap is easily 2-3x in speed. I still think they have a lot of tricks to keep improving performance (i.e., virtual memory space between CPU + GPU, improving tessellation, etc.). Will we get next generation games though? 2012 looks dry.

I think the diminishing returns will come very quickly because next-generation consoles are still far away.

1) Wii U doesn't appear to support DX11. :thumbsdown: It will also probably take 1-2 years after it launches before developers start to take advantage of its capabilities (and thus we get more graphics intensive ports).

2) The next Xbox and PS4 are not due until 2013 and 2014, respectively. That means unless someone steps up real soon on the PC, 2012 will go by without a single demanding new game (maybe GTAV by year's end).

We are already seeing massive diminishing returns to upgrading, ironically with the most anticipated PC games of 2012. Look at Mass Effect 3:

[Image: Mass Effect 3 MSAA benchmark chart]


I am sure your HD7970 @ 1340mhz can hit 80fps+ at 2560x1600 4x MSAA.

The next GPU wave will allow us to hit 200 fps in console ports and 100 fps in BF3 and Crysis 2. 2012-2013 might be very painful years simply because the gap between console hardware and GPU hardware will be at an all-time high.
 
Last edited:

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Good points. 8800GTX vs. 7900GTX shouldn't even count as the norm since 7900GTX tanked in modern games while 8800GTX excelled at them. The gap is easily 2-3x in speed. I still think they have a lot of tricks to keep improving performance (i.e., virtual memory space between CPU + GPU, improving tessellation, etc.). Will we get next generation games though? 2012 looks dry.

I think the diminishing returns will come very quickly because next-generation consoles are still far away.

1) Wii U doesn't appear to support DX11. :thumbsdown: It will also probably take 1-2 years after it launches before developers start to take advantage of its capabilities (and thus we get more graphics intensive ports).

2) The next Xbox and PS4 are not due until 2013 and 2014, respectively. That means unless someone steps up real soon on the PC, 2012 will go by without a single demanding new game (maybe GTAV by year's end).

We are already seeing massive diminishing returns to upgrading, ironically with the most anticipated PC games of 2012. Look at Mass Effect 3:

[Image: Mass Effect 3 MSAA benchmark chart]


I am sure your HD7970 @ 1340mhz can hit 80fps+ at 2560x1600 4x MSAA.

The next GPU wave will allow us to hit 200 fps in console ports and 100 fps in BF3 and Crysis 2. 2012-2013 might be very painful years simply because the gap between console hardware and GPU hardware will be at an all-time high.


I used to have too much power, then I took high res to the knee.
 

BoFox

Senior member
May 10, 2008
689
0
0
You can use the Voodoo Power Ratings list, which nicely summarizes the performance increases. It's quite accurate despite looking like an amateur's work. You can look up launch dates on GPUreview.com.

Thank you!! I also plan on doing the same ratings for notebook GPUs, on a different list. I really appreciated your appreciation! :)
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Nobody even paid attention to my direct-from-Nvidia source saying Kepler was due out in 2 weeks, heh. Well, we'll see, I guess, but I wouldn't bet against it happening, even if it turns out to be a paper launch. I'd rather know the specs and stuff beforehand, anyway.

Good points. 8800GTX vs. 7900GTX shouldn't even count as the norm since 7900GTX tanked in modern games while 8800GTX excelled at them. The gap is easily 2-3x in speed. I still think they have a lot of tricks to keep improving performance (i.e., virtual memory space between CPU + GPU, improving tessellation, etc.). Will we get next generation games though? 2012 looks dry.

I think the diminishing returns will come very quickly because next-generation consoles are still far away.

1) Wii U doesn't appear to support DX11. :thumbsdown: It will also probably take 1-2 years after it launches before developers start to take advantage of its capabilities (and thus we get more graphics intensive ports).

2) The next Xbox and PS4 are not due until 2013 and 2014, respectively. That means unless someone steps up real soon on the PC, 2012 will go by without a single demanding new game (maybe GTAV by year's end).

We are already seeing massive diminishing returns to upgrading, ironically with the most anticipated PC games of 2012. Look at Mass Effect 3:

[Image: Mass Effect 3 MSAA benchmark chart]


I am sure your HD7970 @ 1340mhz can hit 80fps+ at 2560x1600 4x MSAA.

The next GPU wave will allow us to hit 200 fps in console ports and 100 fps in BF3 and Crysis 2. 2012-2013 might be very painful years simply because the gap between console hardware and GPU hardware will be at an all-time high.

A bit off topic, but what do you think about: http://www.zdnet.com/blog/computers...ionize-the-pc-gaming-industry-and-beyond/7627 Enough to resuscitate PC gaming? Roadkill?
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
Nobody even paid attention to my direct-from-Nvidia source saying Kepler was due out in 2 weeks, heh. Well, we'll see, I guess, but I wouldn't bet against it happening, even if it turns out to be a paper launch. I'd rather know the specs and stuff beforehand, anyway.



A bit off topic, but what do you think about: http://www.zdnet.com/blog/computers...ionize-the-pc-gaming-industry-and-beyond/7627 Enough to resuscitate PC gaming? Roadkill?

Because we need solid facts for your statement to be taken seriously by the members in here.



Nah, I think what PC gaming needs is an alternative OS besides Windows. I mean, MS has already abandoned us for consoles, and they don't even have a studio developing PC-exclusive games like we used to have.

So if Valve is kind enough to us, they should port several game titles to Linux and make contracts with several publishers to release them there.

I'm sick of MS. I've hated them ever since they dropped any sequel to Flight Simulator and replaced it with that MS Flight crap.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
In that case, virtually all of the rumors and speculation--which are not grounded in "solid fact"--should be disregarded. Yet y'all are bickering over un-facts anyway. Heh. So far we have some potentially real (or fake) photos and die shots, guesstimation of specs based on potentially real (or false) leaks, plus the estimates of an unabashed nVidia-hater (Charlie) and some other guy (OBR) who disagrees with him.

Given the mess above, I'm going to go with my Nvidia contact's estimate. My prediction: GK104 launch by Mid-March, with real availability by April.

Because we need solid facts for your statement to be taken seriously by the members in here.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
In that case, virtually all of the rumors and speculation--which are not grounded in "solid fact"--should be disregarded. Yet y'all are bickering over un-facts anyway. Heh. So far we have some potentially real (or fake) photos and die shots, guesstimation of specs based on potentially real (or false) leaks, plus the estimates of an unabashed nVidia-hater (Charlie) and some other guy (OBR) who disagrees with him.

Given the mess above, I'm going to go with my Nvidia contact's estimate. My prediction: GK104 launch by Mid-March, with real availability by April.

Gotta name for that source? :\

Na, I'm having you on. It doesn't sound much different than what Charlie is saying, either. I'd say your source matches Charlie's. Makes it that much more likely, IMO.

Here's hoping nVidia is gonna be gentle to us. I think they'll likely just reuse the grease AMD used on us, but one can hope.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Na, I'm having you on. It doesn't sound much different than what Charlie is saying, either. I'd say your source matches Charlie's. Makes it that much more likely, IMO.

Here's hoping nVidia is gonna be gentle to us. I think they'll likely just reuse the grease AMD used on us, but one can hope.

Charlie uses industry sources; my source was a friend of a friend I tried to pry some information from at my friend's party. :)

Yeah I really hope NVDA helps with price/perf at 28nm, which so far has underwhelmed me.
 

Riek

Senior member
Dec 16, 2008
409
14
76
HD7970 is nowhere near 80% faster than HD6970. It's roughly 42-45%, depending on the source; to be 80% faster it needs a ~30% overclock. And actually, in the past almost all major AMD generations brought 60-100% performance increases (once you include the modern games of the time) at stock speeds, no overclocking required. The same goes for nVidia.

snip.snip

This discussion has been going on for multiple months now... and it has been pointed out multiple times that the performance increases of the past came with similar increases in power consumption.
 