Big Kepler news


BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I don't believe the kind of installations that use GPGPU are going to want to put their installations on hold until the Fall. The people in charge of them will be trying very hard to get an inkling of when the next Nvidia compute product will show up. If it's a ways out then AMD's GCN arch will be mighty tempting for at least some of them.

I bet behind closed doors Nvidia is promising the 28nm compute card will be glorious and worth waiting for.

This is not really a topic for this discussion, but imo Nvidia is far more concerned about Intel (MIC) than whatever AMD does.

AMD is expecting other people to do the legwork for them; Nvidia is on the ground, beating feet: making software, getting their programming tools into universities. It's just another level of product support. The people you think may jump ship for GCN would have to depend mostly on open-source programs they don't actually have yet, because the ones they're using are written for CUDA, which GCN obviously can't run.

It's not even about performance; it's about support. Intel drives their products far better than AMD does. Nvidia created the GPGPU market in HPC as we currently know it. AMD is putting out hardware; Nvidia has hardware and software; Intel is a giant monster of money, and while its hardware isn't quite there yet, they're working on it. Which is a concern for Nvidia.

IMO, of course :)

All of that is neither here nor there as it pertains to this discussion, though. What most of us really want to know is what kind of performance the other parts are going to bring, and once we know whether they're based on GF100 or GF104, we'll start getting a clearer picture of what to expect.
 
Last edited:

aaksheytalwar

Diamond Member
Feb 17, 2012
3,389
0
76
Really? How can something be so simple when the whole premise is based on conjecture and raw speculation?

The whole of literature, philosophy, and theoretical physics (not mechanical physics) is. But of course, not everybody can understand everything, lol.
 

AdamK47

Lifer
Oct 9, 1999
15,782
3,606
136
GK110 is the one I'm waiting for. I just hope there will be enough supply when released. Plan on getting three of them.
 

aaksheytalwar

Diamond Member
Feb 17, 2012
3,389
0
76
And what I wrote wasn't based on speculation.

AMD 7970 was the king for 3 months before 680. FACT

AMD 7970 and GTX 680 practically trade blows and are a side grade to each other. FACT

GTX 685 > 7970 (now this is speculation :)), but it will release about 8-9 months after it and will be the king for about 3-4 months. Fact.

What is the speculation here?

If you can't differentiate speculation from facts, then yes, please stay off the above mentioned subjects and stick to ABC and 123 :p
 

PrincessFrosty

Platinum Member
Feb 13, 2008
2,300
68
91
www.frostyhacks.blogspot.com
People serious about compute use Quadro or FirePro; they don't use a gaming card. It doesn't matter when you start the race, it only matters when you finish. Nowadays, when a 560 Ti or 6950 is probably enough for most of the console ports, these kinds of cards are getting more irrelevant by the day. Also, where I live a 6990 or 590 is a better buy than a 7970.

First really sensible thing I've read so far in this thread.

I don't see people who are doing 100% compute going for high-end gaming cards. In all likelihood that was just said to give AMD at least one win, no matter how small, but I'm guessing practical use of this is pretty minor. Even people serious about something like Bitcoin mining go for mid-range cards for their better price-to-performance ratio and lower power usage anyway; that will always scale better than going high end.

But more importantly, in the realm of gaming, which accounts for a large percentage of the justification for this sort of hardware, you really just don't need this kind of power anyway. Even if you're running a whopping 2560x1600 resolution (and a very tiny number of gamers are), you literally have no reason to upgrade.

The simple fact is that nothing out there right now is too demanding for the average configuration of something like a single 1920x1080 monitor. There's only a handful of games you can't max out at 2560x1600, which is twice the number of pixels. As someone with loads of disposable income, I still can't justify the increase in power past a GTX 580.

We need some actual, meaningful, and significant improvement in games to justify these cards. Currently that doesn't exist, nor does it appear likely to until the next wave of consoles hits in a few years and the stagnation ends, or until we have some other reason to need that extra power.
 

aaksheytalwar

Diamond Member
Feb 17, 2012
3,389
0
76
GK110 is the one I'm waiting for. I just hope there will be enough supply when released. Plan on getting three of them.

I read somewhere, on a pretty reliable website, that GK110 isn't the big Kepler. It's the successor to the big Kepler, and it will be far more powerful: the one people actually wanted. But it probably won't release for another 1-2 years, I guess, because it's presumably the next generation.
 

aaksheytalwar

Diamond Member
Feb 17, 2012
3,389
0
76
So the rest of us illiterates can understand, in layman's terms, it's BS.

Nope, you got it wrong. You can't understand. In fact, you can barely understand any of the stuff I mentioned. Swear on your heart that you are an expert in all those subjects :) All of that is beyond comprehension for you. And yes, you are an average person :)

And you are making it off topic.

My point is that for Nvidia to regain the compute crown, it would be 8-10 months later than AMD. So AMD will have won in compute for nearly 10 months, Nvidia will win for about 4 months, and then lose to the 8970 after that.

So AMD won for more months in the year. And that is what counts when the release isn't simultaneous.

If you can't understand this, then I feel sorry for you :(

What you need to understand is that we expect honest and polite discourse here. Attacking the intelligence of other posters in this manner will get you in a large amount of trouble in a short amount of time.
-ViRGE
 
Last edited by a moderator:

blckgrffn

Diamond Member
May 1, 2003
9,686
4,345
136
www.teamjuchems.com
First really sensible thing I've read so far in this thread.

I don't see people who are doing 100% compute going for high-end gaming cards. In all likelihood that was just said to give AMD at least one win, no matter how small, but I'm guessing practical use of this is pretty minor. Even people serious about something like Bitcoin mining go for mid-range cards for their better price-to-performance ratio and lower power usage anyway; that will always scale better than going high end.

But more importantly, in the realm of gaming, which accounts for a large percentage of the justification for this sort of hardware, you really just don't need this kind of power anyway. Even if you're running a whopping 2560x1600 resolution (and a very tiny number of gamers are), you literally have no reason to upgrade.

The simple fact is that nothing out there right now is too demanding for the average configuration of something like a single 1920x1080 monitor. There's only a handful of games you can't max out at 2560x1600, which is twice the number of pixels. As someone with loads of disposable income, I still can't justify the increase in power past a GTX 580.

We need some actual, meaningful, and significant improvement in games to justify these cards. Currently that doesn't exist, nor does it appear likely to until the next wave of consoles hits in a few years and the stagnation ends, or until we have some other reason to need that extra power.

Except that, in my estimation, that's wrong.

So many of these cards get used for compute...

The legions in the [H] and Extreme OC Folding teams would likely dispute this as well, as would anyone serious about participating in Seti@Home or many of the BOINC projects.
 

PrincessFrosty

Platinum Member
Feb 13, 2008
2,300
68
91
www.frostyhacks.blogspot.com
Except that, in my estimation, that's wrong.

So many of these cards get used for compute...

The legions in the [H] and Extreme OC Folding teams would likely dispute this as well, as would anyone serious about participating in Seti@Home or many of the BOINC projects.

The "legions" there are a fraction of the people using these cards for gaming, though. And as I've already said, the actual high-end cards don't always make a lot of sense for these kinds of projects: if you're serious about this, price-to-performance and performance-per-watt matter far more, so people who want more crunching power tend to go mid-range and scale up appropriately.

Look at anyone Bitcoin mining to try to make a profit: they're not dropping money on high-end cards. It doesn't really make any sense.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Nope, you got it wrong. You can't understand. In fact, you can barely understand any of the stuff I mentioned. Swear on your heart that you are an expert in all those subjects :) All of that is beyond comprehension for you. And yes, you are an average person :)

And you are making it off topic.

My point is that for Nvidia to regain the compute crown, it would be 8-10 months later than AMD. So AMD will have won in compute for nearly 10 months, Nvidia will win for about 4 months, and then lose to the 8970 after that.

So AMD won for more months in the year. And that is what counts when the release isn't simultaneous.

If you can't understand this, then I feel sorry for you :(

Define "win", because I'm pretty sure Quadro cards own about 80-90% of the professional market, and Tesla is in a market by itself, with no AMD counterpart worth mentioning.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Compute doesn't mean only seti@home or folding@home.
People running these are mostly amateurs. I mean no disrespect by that, of course. I've used Quadro for a medical imaging application where you just can't use a 580 or 680, or a 7970 for that matter. I said that for about 99.999% of gamers a 680 or 7970 is pretty pointless anyway, but you may of course belong to the other bracket.
 

blckgrffn

Diamond Member
May 1, 2003
9,686
4,345
136
www.teamjuchems.com
The "legions" there are a fraction of the people using these cards for gaming, though. And as I've already said, the actual high-end cards don't always make a lot of sense for these kinds of projects: if you're serious about this, price-to-performance and performance-per-watt matter far more, so people who want more crunching power tend to go mid-range and scale up appropriately.

Look at anyone Bitcoin mining to try to make a profit: they're not dropping money on high-end cards. It doesn't really make any sense.

GTX 590? 580? 560 Ti 448? Are these not high-end cards? As far as Nvidia compute is concerned they are, and there are still plenty of people using them.

Performance per watt is important, yes, but so is system density - and its overall effect on perf/watt.

I don't see how it's different from people who buy video cards to play games... most buy low to mid-range, and the idea of how many people are buying these halo cards for teh uber gaming is skewed in forums like this one.

It appears that the typical person buying these GPUs for compute is not just buying one, either, but sometimes in quantities that are simply unjustifiable for the gaming enthusiast - meaning one compute customer might count for multiple gaming customers.

I think you meant scale out - and you're right about that! The number of folks who have scaled out with GTX 460s is pretty impressive. Again, I think that is probably consistent with folks who buy cards for gaming, though.

Compute doesn't mean only seti@home or folding@home.

Surely not.

It's not where the vast amounts of money come from for GPU makers, either, with regard to compute.
 
Last edited:

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
The whole of literature, philosophy, and theoretical physics (not mechanical physics) is. But of course, not everybody can understand everything, lol.

Man, you are confused. Theoretical physics is anything but raw speculation or conjecture, and the same is true for literature and philosophy as well. But let's stay on topic.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
LOL at the people complaining about GK110 "only" being rumored to be 25% faster than GK104. GK104 was made for gaming, and gaming only. It has almost no compute performance to speak of. GK110 focuses mainly on higher compute performance, so it stands to reason that the performance boost in gaming won't be as big as the GTX 580 vs GTX 560 Ti.

512-bit bus. If that's true, it's gonna be a nightmare when it comes to yields. The die would have a good chance of being even bigger than GF110.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
(Almost) double the die size, ROPs, CUDA cores, and bandwidth, and only 25% more gaming power? That would be... underwhelming.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
And what I wrote wasn't based on speculation.

AMD 7970 was the king for 3 months before 680. FACT

AMD 7970 and GTX 680 practically trade blows and are a side grade to each other. FACT

GTX 685 > 7970 (now this is speculation :)), but it will release about 8-9 months after it and will be the king for about 3-4 months. Fact.

What is the speculation here?

If you can't differentiate speculation from facts, then yes, please stay off the above mentioned subjects and stick to ABC and 123 :p

I don't agree with your facts or speculation.

The HD 6990 or GTX 590 are the performance kings for a card.

They don't trade blows overall; the GTX 680 clearly offers more performance overall, with differentiation as well, at a cheaper price.

You don't know what the name will be, how it may perform, or the timeline.
 

BoFox

Senior member
May 10, 2008
689
0
0
And what I wrote wasn't based on speculation.

AMD 7970 was the king for 3 months before 680. FACT

AMD 7970 and GTX 680 practically trade blows and are a side grade to each other. FACT

GTX 685 > 7970 (now this is speculation :)), but it will release about 8-9 months after it and will be the king for about 3-4 months. Fact.

What is the speculation here?

If you can't differentiate speculation from facts, then yes, please stay off the above mentioned subjects and stick to ABC and 123 :p

FACT: Not 3 months - much more like 2 months, given that the HD 7970's official availability was January 9. AT did an article on this:
http://www.anandtech.com/show/5312/amd-radeon-hd-7970-now-for-sale

FACT: The majority considered the HD 7970 to be a side-grade to the GTX 580, sometimes trading blows with it.

My speculation about Big Kepler is that OBR was just talking about the FLOPs of the GPU itself being 25% higher than the GTX 680's. Perhaps that was the "twist" to his blog article. At least he seems certain about it being released in the Sept/October time frame, so let's see...

25% more TFLOPs, plus another ~20% of performance from the doubling of bandwidth (as long as the GDDR5 is 5500MHz or higher), should come out to at least 45% more than the GTX 680.

The GTX 560 Ti was still bigger than the new GTX 680, yet the GTX 580 performed about 40% faster than the GTX 560 Ti (while having only 25% greater GFLOPs of processing power).
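For what it's worth, the compounding in that estimate is easy to check. The two factors below are the rumored/speculated numbers from this thread, not confirmed GK110 specs:

```python
# Back-of-the-envelope check of the rumored gains over the GTX 680.
# Both factors are speculation from this thread, not confirmed specs.
flops_gain = 1.25      # rumored ~25% more TFLOPs than GTX 680
bandwidth_gain = 1.20  # rough extra gain if memory bandwidth roughly doubles

combined = flops_gain * bandwidth_gain
print(f"Combined estimate vs GTX 680: {combined:.2f}x")  # 1.50x, i.e. ~50% faster
```

So under those assumptions, "at least 45%" is, if anything, slightly conservative.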
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
(Almost) double the die size, ROPs, CUDA cores, and bandwidth, and only 25% more gaming power? That would be... underwhelming.

50% more of each of those, not double. And the memory will probably be clocked lower than the GTX 680's if the bus is 512-bit, to save on power consumption. I'd put memory bandwidth at around 75% higher than the GTX 680's.

And most things you mentioned, namely the CUDA cores and memory bandwidth, are there to improve compute performance.

The GTX 680 is around 23% faster than the GTX 580. If the GTX 780/GTX 685 is 25% faster than that, it would mean an improvement of over 50%, which isn't bad.

As technology progresses we get to higher points of diminishing returns.
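As a sanity check on that ~75% figure: peak GDDR5 bandwidth is just the bus width (in bytes) times the effective transfer rate. The GTX 680 numbers below are its published specs; the 512-bit bus and the 5.25 GHz effective memory clock are purely speculative illustrations:

```python
def gddr5_bandwidth_gbs(bus_width_bits: int, effective_mhz: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer times transfers per second."""
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

gtx680 = gddr5_bandwidth_gbs(256, 6008)   # published spec: ~192 GB/s
rumored = gddr5_bandwidth_gbs(512, 5250)  # speculative: 512-bit bus, downclocked memory
print(f"GTX 680: {gtx680:.1f} GB/s")
print(f"Rumored: {rumored:.1f} GB/s ({rumored / gtx680 - 1:.0%} higher)")
```

With those assumed numbers, a 512-bit bus lands at 336 GB/s, about 75% above the GTX 680 even with the memory clocked noticeably lower.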
 
Last edited:

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
I think you are wrong on this point. It's disappointing to see the so-far lackluster GPGPU results from the GTX 680, but it would be shocking if a larger 28nm chip arrives from Nvidia and lacks GPGPU heft. Nvidia has spent lots of time, money, and marketing trying to sell people on GPGPU; abandoning that investment would be a pretty radical move.

What's the feasibility that a big Kepler would consist of all, or almost all, of those double-precision workhorse units? How would that affect expected gaming performance? Will it arrive before this fall?

All of that is neither here nor there as it pertains to this discussion, though. What most of us really want to know is what kind of performance the other parts are going to bring, and once we know whether they're based on GF100 or GF104, we'll start getting a clearer picture of what to expect.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
This is not really a topic for this discussion, but imo Nvidia is far more concerned about Intel (MIC) than whatever AMD does.

AMD is expecting other people to do the legwork for them; Nvidia is on the ground, beating feet: making software, getting their programming tools into universities. It's just another level of product support. The people you think may jump ship for GCN would have to depend mostly on open-source programs they don't actually have yet, because the ones they're using are written for CUDA, which GCN obviously can't run.

But that argument falls apart when you consider that Intel doesn't, and more importantly can't, use CUDA either.

CUDA's been slowing down lately and open-source alternatives are picking up, namely OpenCL. Unlike CUDA, which can only be used on Nvidia hardware (which makes it utterly useless for the mobile platform), the open-source alternatives can be bridged to nearly all hardware regardless of who makes it.

Don't mistake CUDA's popularity in HPC for CUDA's overall popularity. CUDA was essentially the first to truly support GPGPU in a large fashion, and the HPC community ate it up. You're right that AMD doesn't push their chips' proprietary anything, but that's why other companies, along with AMD, push the open-source stuff forward. Hell, the reason OpenCL was started and is succeeding is Apple, and, in a very un-Apple-like fashion, it's available to everyone.

CUDA's going to fall apart. There's no reason for another x86-like monopoly when you lock yourself away from any competition and charge inflated prices (Tesla). Nvidia got hammered by GCN, make no mistake. Nvidia locked themselves in a room with their shiny toy and asked everyone to pay a fee to get in, but consumers have been realizing they can find much the same toys in other rooms that don't charge anything at all. Even Nvidia realized this wasn't the brightest move and that their strategy needed fixing.
 
Last edited:

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
CUDA is free. What are you talking about?

Yes, but I'm talking about the hardware, which isn't. You also need to consider that CUDA is mainly for HPC use, with some workstation thrown in, whereas OpenCL stretches from top to bottom and is more lucrative from a profit standpoint. If somebody wants to make money, they'd opt for OpenCL to target a bigger audience, even if they might favor CUDA for being easier.