
GTX780 will not be based on GK110? (OBR Rumor)

It could be based on G80 for all I care; if it's consistently 50% faster than a 7970/GTX 680 and has decent overclocking headroom, I'm getting one.
 
It seems that with nVidia they need custom cores for either compute or gaming. For whatever reason AMD seems able to do both no problem. I wouldn't be surprised at all if Big Kepler is not a good gaming GPU whereas a different variation is. The GTX 680 sucks at compute so this could give credence to the rumor.

Except Tahiti is a much larger GPU with a good deal more transistors and a wider memory bus; I don't see how that is doing both "no problem." Basically, AMD and nVidia just swapped traditional roles. AMD didn't go all out ballooning the size of the chip like nVidia had been doing, and so it can't pull away from GK104 in gaming the way nVidia's previous monster flagships could with all that extra brute force.

Maybe AMD is more efficient at doing both than nVidia has traditionally been, but GK104 is also a good deal more efficient in gaming performance per transistor than nVidia's past chips. Again, this all just points to a reversal of roles.
 
Interesting, to say the least. Looks like nVIDIA is sticking with its dedicated gaming chip approach as opposed to an all-in-one kind of chip. I guess they could just add a GPC or two to the current design, ending up with a 2304 SP/384-bit card, which would do pretty well against the supposed specs of the 8800 series from AMD.

Also, if they aren't going to release a consumer SKU based on GK110, then why does that chip have 5 GPCs + 240 TMUs? Maybe they could release it as a GTX 790 later down the road, since some of the hardware (rather large chunks of it) within GK110 is useless for HPC-oriented tasks.
 
Except Tahiti is a much larger GPU with a good deal more transistors and a wider memory bus; I don't see how that is doing both "no problem." Basically, AMD and nVidia just swapped traditional roles. AMD didn't go all out ballooning the size of the chip like nVidia had been doing, and so it can't pull away from GK104 in gaming the way nVidia's previous monster flagships could with all that extra brute force.

Maybe AMD is more efficient at doing both than nVidia has traditionally been, but GK104 is also a good deal more efficient in gaming performance per transistor than nVidia's past chips. Again, this all just points to a reversal of roles.
Is it larger with more transistors by a factor of 6? Because that's how much better Tahiti is compared to Kepler in some compute tasks.
 
It seems that with nVidia they need custom cores for either compute or gaming. For whatever reason AMD seems able to do both no problem. I wouldn't be surprised at all if Big Kepler is not a good gaming GPU whereas a different variation is. The GTX 680 sucks at compute so this could give credence to the rumor.

That said, this rumor could be AMD viral marketing designed to stop people from getting excited about Big Kepler. *shrug*

Sickbeast, the cores are likely going to be functionally the same or very, very similar. It's all the extra compute features built around the GK110 cores that are the "customization" you might be referring to. Anyway, GPGPU performance in games is a highly questionable feature at this time. There is only one game to date that *MIGHT* use GPGPU functionality (Dirt Showdown), so it's not exactly riding a wave of the future in game development like T&L did when it came out. And mind you, GPGPU functionality has been around since the 8800 GTX.

Is it larger with more transistors by a factor of 6? Because that's how much better Tahiti is compared to Kepler in some compute tasks.
How many games have these compute tasks and how much faster is Tahiti in those games?
 
Sickbeast, the cores are likely going to be functionally the same or very, very similar. It's all the extra compute features built around the GK110 cores that are the "customization" you might be referring to. Anyway, GPGPU performance in games is a highly questionable feature at this time. There is only one game to date that *MIGHT* use GPGPU functionality (Dirt Showdown), so it's not exactly riding a wave of the future in game development like T&L did when it came out. And mind you, GPGPU functionality has been around since the 8800 GTX.
Yeah, I agree, GPGPU really has failed to develop anything useful in terms of software tools outside of the scientific community.

That said, it is nice being able to earn bitcoins with AMD hardware.
 
I believe the Tesla K20 has a 300W TDP and will hit a 1.5 TFLOPS DP theoretical peak at a 1/3 rate (or maybe it was 1.2 TFLOPS). Working backwards from 2880 SPs, that gets us a ~780 MHz GPU clock speed. K10 is already confirmed to run at just 745 MHz. A 2304 SP Kepler at 1000 MHz would still be a beast. We can't just look at shaders/CUDA cores and ignore clocks. You can have a very fast Kepler chip with even 2304 SPs.
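For anyone who wants to check that back-of-the-envelope math, here is a quick sketch of the calculation (the 2880 SP count, 1/3 DP rate, and 1.5 TFLOPS figure are rumored numbers from this thread, not confirmed specs):

```python
# Back-of-the-envelope: solve for the core clock that hits a DP FLOPS target.
# All inputs below are rumored figures from this thread, not confirmed specs.
sp_count = 2880          # rumored full GK110 shader count
dp_rate = 1.0 / 3.0      # rumored DP throughput relative to single precision
flops_per_sp_clock = 2   # one FMA counts as 2 FLOPs per SP per clock
dp_target_gflops = 1500  # "1.5 TFLOPS DP theoretical peak"

# DP GFLOPS = SPs * FLOPs/clock * DP rate * clock in GHz
clock_ghz = dp_target_gflops / (sp_count * flops_per_sp_clock * dp_rate)
print(f"required core clock: {clock_ghz * 1000:.0f} MHz")  # ~781 MHz
```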


Again, why are we trying to infer anything from the K20 clock speeds?

The GF110-based Tesla card runs at 650 MHz/1300 MHz. It was released after the GTX 580, and it ran at a slower frequency.

http://www.nvidia.com/object/tesla-servers.html
 
Bitcoin mining has little to do with "Compute" (Double Precision, OpenCL, DirectCompute shaders of GCN) though. That's mainly a function of how fast ALUs in Radeons are. Even HD5770 whoops GTX580/680 in SHA-256 hashing.

"...another difference favoring Bitcoin mining on AMD GPUs instead of Nvidia's is that the mining algorithm is based on SHA-256, which makes heavy use of the 32-bit integer right rotate operation. This operation can be implemented as a single hardware instruction on AMD GPUs, but requires three separate hardware instructions to be emulated on Nvidia GPUs (2 shifts + 1 add)."

I am not really aware of any games in the next 6 months that are going to use DirectCompute for HDAO/post-processed AA or global lighting model unless AMD specifically works with developers on some of those AMD Gaming Evolved titles. I doubt they are actually going to go out of their way and use GCN's DirectCompute functionality otherwise.

Honestly though, it has to be about the games, and I think with 90% of games being graphically undemanding console ports, the HD 8970 and 780 are more suitable for 2560x1440/1600 gaming at this point. If the HD 8970 and 780 are 30-40% faster, that's mostly a waste for 1080p now unless you need it for a 120Hz monitor.

I am hoping Metro Last Light and Crysis 3 will make next generation cards worth waiting for 🙂
 
Again, why are we trying to infer anything from the K20 clock speeds?

The GF110-based Tesla card runs at 650 MHz/1300 MHz. It was released after the GTX 580, and it ran at a slower frequency.

http://www.nvidia.com/object/tesla-servers.html

How is this going over your heads every time I talk about it? K20's TDP is between 250-300W based on everything I've read published on it (most sources quote 8+6 pin = 300W). I just backed into the clock speed and it's < 800 MHz. I don't know what the TDP of the 650 MHz Tesla card you are referring to was, but did it have a much higher TDP than the GTX 580? The point is that an 800 MHz 2880 SP card is not necessarily better than a 1050 MHz + GPU Boost 2304 SP card in terms of performance. The advantage of the second approach is that you end up making a smaller gaming chip and save a lot of die space by not needing a dynamic scheduler, double precision and so forth. The larger the chip gets, the harder and more expensive it is to manufacture. That's another reason a leaner gaming GK11x chip may be preferable.

You guys keep talking about this mythical 1 GHz 2880 SP GK110 chip, and it's almost certain not to happen; TDP is a huge reason why. In fact, I haven't even seen confirmation that K20 will be a fully unlocked 2880 SP part as opposed to a 2496 or 2688 part.

This is the simplest way to look at it: the 365mm^2 Tahiti XT @ 1050 MHz reference card peaks at 238W at TPU. How in the world can NV make a 520-600mm^2 chip on the same 28nm node and not go way above this? It's not like NV is free from the laws of physics.
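As a very rough illustration of why (a naive sketch that assumes power scales roughly with die area at the same node, clocks and voltage, which ignores binning, voltage tuning and everything else NV could do):

```python
# Crude sanity check: scale Tahiti XT's measured peak power by die area.
# Assumes the same 28nm node, similar clocks/voltage, and power roughly
# proportional to active silicon -- real chips do not scale this cleanly.
tahiti_area_mm2 = 365
tahiti_peak_w = 238            # TPU's measured peak for the reference card

for big_die_mm2 in (520, 600):
    est_w = tahiti_peak_w * big_die_mm2 / tahiti_area_mm2
    print(f"{big_die_mm2} mm^2 at similar clocks -> roughly {est_w:.0f} W")
# -> ~339 W and ~391 W, well past what a single-GPU card normally ships with
```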

This is the whole point I keep telling you guys: clock speed is a critical part of this full GK110 discussion, and it keeps getting ignored in every thread like it's not an obstacle. You can't just grow a chip to 520-600mm^2 on 28nm, keep 1 GHz clocks, and not end up with a GTX 480 scenario. NV moved far away from the 480 approach, so why would they want to repeat that scenario?

Generally speaking, in a forecast you want to look at the low, mid, and high possibilities and assign probabilities to each of them coming to fruition. Using the mid-point is often the most reasonable assumption. A 1 GHz 2880 SP card is nowhere near the mid-point but a top-5% outlier, and yet it's discussed as almost the most likely possibility. The most likely outcome is a chip somewhere between 2048 and 2880 SPs, which makes a 2304 SP part a fairly reasonable assumption since it's not on the extreme edge of being improbable. And as the rumors and wishful thinking before the GTX 480/580/680 launches played out, the most optimistic spec assumptions were all wrong. A 1 GHz 2880 SP part on 28nm assumes no compromises whatsoever. Such an extreme forecast tends to be wrong 99% of the time, since companies rarely create a chip that foregoes all other factors (strategy, die size, manufacturing costs, yields, TDP, noise levels, heat, etc.) in favor of performance.

When was the last time NV, ATI or AMD managed to increase performance by 75-100% in 12 months on the same manufacturing node?
 
The GK110 is already in production. This new chip could just be a refresh of that. If the 110 was supposed to be the 680, its refresh would have been and still may be the 780.
 
You guys keep talking about this mythical 1 GHz 2880 SP GK110 chip, and it's almost certain not to happen; TDP is a huge reason why. In fact, I haven't even seen confirmation that K20 will be a fully unlocked 2880 SP part as opposed to a 2496 or 2688 part.

Russian, no one said GK110 was going to be 1000 MHz. You kept referring to this when we had our discussion about its possible clock speeds a few weeks ago, and I never said at that time that GK110 would be 1000 MHz. You seem to be the only person who is saying GK110 will or will not run at 1000 MHz.

But that said, second-generation 28nm GPUs should be able to run at higher clock speeds and have better perf/watt within the same TDP, all other things being equal. Had GK110 come out this past January, it would have had lower clocks and worse perf/watt, because it would not have A) come out on a proven, mature process or B) been baked as long as it has been now.

Again, no one (or very, very few people) is saying a 2880 core GK110 is going to run at 1000 MHz. YOU are saying other people are saying that.
 
I am not really aware of any games in the next 6 months that are going to use DirectCompute for HDAO/post-processed AA or global lighting model unless AMD specifically works with developers on some of those AMD Gaming Evolved titles. I doubt they are actually going to go out of their way and use GCN's DirectCompute functionality otherwise.

The next Far Cry sequel is apparently slated to be an AMD Gaming Evolved title... they'll probably use DirectCompute.
 
Again, no one (or very, very few people) is saying a 2880 core GK110 is going to run at 1000 MHz. YOU are saying other people are saying that.

My response was to Hypertag about why I mentioned clock speeds in the first place. The reason I worked backwards from those clocks was to get an idea of what TDP would look like for a 1 GHz 2880 SP chip. Since that's very unlikely, it's not as simple as just launching an 800 MHz 2880 SP chip. 1050 MHz is a 31% higher clock speed; that means you need roughly 24% fewer functional units to achieve a similar level of performance, and that's a huge chunk fewer transistors. That makes the chip less expensive to manufacture as well. A GK110 with neutered clocks is not necessarily the slam dunk everyone makes it out to be. Why not go for a 460-470mm^2, 1.05 GHz, 2304-2496 SP gaming chip instead?
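The trade-off behind that is easy to check with a naive SP-count-times-clock proxy (a sketch only; the counts and clocks are the hypothetical figures being debated here, and this ignores memory bandwidth, GPU Boost behavior and scaling efficiency):

```python
# Naive shader throughput proxy: SP count x core clock.
# Ignores memory bandwidth, GPU Boost and scaling efficiency.
def relative_throughput(sp_count: int, clock_mhz: float) -> float:
    return sp_count * clock_mhz

big_slow = relative_throughput(2880, 800)     # hypothetical full GK110
small_fast = relative_throughput(2304, 1050)  # hypothetical leaner gaming chip
print(f"2304 SP @ 1050 MHz vs 2880 SP @ 800 MHz: {small_fast / big_slow:.2f}x")
# ~1.05x - the smaller, higher-clocked part already matches the big one

# How many SPs would actually be needed at 1050 MHz to equal 2880 @ 800 MHz?
needed = 2880 * 800 / 1050
print(f"SPs needed at 1050 MHz: {needed:.0f} (~{(1 - needed / 2880) * 100:.0f}% fewer)")
# ~2194 SPs, i.e. roughly 24% fewer units
```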
 
My response was to Hypertag about why I mentioned clock speeds in the first place. The reason I worked backwards from those clocks was to get an idea of what TDP would look like for a 1 GHz 2880 SP chip. Since that's very unlikely, it's not as simple as just launching an 800 MHz 2880 SP chip. 1050 MHz is a 31% higher clock speed; that means you need roughly 24% fewer functional units to achieve a similar level of performance, and that's a huge chunk fewer transistors. That makes the chip less expensive to manufacture as well. A GK110 with neutered clocks is not necessarily the slam dunk everyone makes it out to be. Why not go for a 460-470mm^2, 1.05 GHz, 2304-2496 SP gaming chip instead?

Ok I get what you are saying.

A 12 SMX chip (2304 total cores) with a 384-bit bus should still come in at or around 375mm^2.
 
Is it larger with more transistors by a factor of 6? Because that's how much better Tahiti is compared to Kepler in some compute tasks.

And? I mean why should anybody care? I look at DX11 games and see that the GTX680 is winning 17 out of 24 against the 7970.

What can you do with all these "compute tasks"? And what actually doesn't run on Kepler? Because for something like LuxMark, all I need is more time for it to finish.
 
And? I mean why should anybody care? I look at DX11 games and see that the GTX680 is winning 17 out of 24 against the 7970.

What can you do with all these "compute tasks"? And what actually doesn't run on Kepler? Because for something like LuxMark, all I need is more time for it to finish.
Well right now Bitcoin mining has paid for my 7970 and netted me another $800 profit and counting. I'd say compute performance is pretty important.
 
And? I mean why should anybody care? I look at DX11 games and see that the GTX680 is winning 17 out of 24 against the 7970.

You deliberately ignore the 7970 GHz Edition even though it's widely available and still significantly cheaper than the 680. We have disproven this rubbish of yours several times already; stop spreading FUD.
 
Well right now Bitcoin mining has paid for my 7970 and netted me another $800 profit and counting. I'd say compute performance is pretty important.

Bitcoin mining uses integer math in a special format. AMD has much better integer performance, which is why it wins there. But that is not the "compute" we are talking about.

You deliberately ignore the 7970 GHz Edition even though it's widely available and still significantly cheaper than the 680. We have disproven this rubbish of yours several times already; stop spreading FUD.

No problem: the GTX 690 wins 20 of 24 DX11 games.
 
I'm not a bitcoin expert, so help me out here a little. I'm reading that overclocked HD 7970s get around 650-700 MHash/s - is that accurate? I'm using this calculator here: http://bitcoinx.com/profit/index.php and putting in $0.1247 as the USD/kWh electricity rate (that is how much it is where I live, which according to my searches is relatively low). The calculator is telling me that if I had bought an HD 7970 at release ($550), I would have needed to run it overclocked 24/7 for 343 days straight (8,232 hours) before breaking even.

Is this accurate?
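For what it's worth, the arithmetic those online calculators use is straightforward; here is a sketch of it (the difficulty, BTC price and card power draw below are placeholder assumptions, not figures from the thread, and all of them were moving targets at the time):

```python
# Rough bitcoin-mining break-even estimate, same arithmetic as the online
# calculators. Difficulty, BTC price and power draw are placeholder values.
hashrate_mhs = 675            # overclocked HD 7970, ~650-700 MHash/s per the post
difficulty = 1_700_000        # network difficulty (placeholder, changes constantly)
block_reward_btc = 50         # block reward at the time
btc_price_usd = 5.0           # placeholder market price
card_cost_usd = 550           # HD 7970 launch price
card_power_w = 250            # rough wall draw while mining (assumption)
electricity_usd_per_kwh = 0.1247

hashes_per_day = hashrate_mhs * 1e6 * 86_400
btc_per_day = hashes_per_day * block_reward_btc / (difficulty * 2**32)
revenue_per_day = btc_per_day * btc_price_usd
power_cost_per_day = card_power_w / 1000 * 24 * electricity_usd_per_kwh
net_per_day = revenue_per_day - power_cost_per_day

print(f"net per day: ${net_per_day:.2f}")
if net_per_day > 0:
    print(f"break-even on the card: {card_cost_usd / net_per_day:.0f} days")
```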
 
I'm not a bitcoin expert, so help me out here a little. I'm reading that overclocked HD 7970s get around 650-700 MHash/s - is that accurate? I'm using this calculator here: http://bitcoinx.com/profit/index.php and putting in $0.1247 as the USD/kWh electricity rate (that is how much it is where I live, which according to my searches is relatively low). The calculator is telling me that if I had bought an HD 7970 at release ($550), I would have needed to run it overclocked 24/7 for 343 days straight (8,232 hours) before breaking even.

Is this accurate?

That's just to break even? And what of the 800 bucks he said he made on top of that? Is that even possible?
 
That's just to break even? And what of the 800 bucks he said he made on top of that? Is that even possible?

It depends on your initial investment. My HD 7970 was completely free to me because of bitcoin mining. I think I posted my little story somewhere else, but to recap: I mined for about 2 months on a dual HD 5830 system. I had all the parts for free (well, I traded a GTX 460 SE for one of the HD 5830s, but I already had that card regardless). Anyway, this was last summer, coins were going for $23 a pop, I made about twenty, and I sold the parts for about $400 (because back then HD 5830s had ridiculous resale value, even going new for $160ish).

This didn't affect my gaming since I gamed on the HD 5870 (though at times I debated putting that card to work too). If I had kept my HD 5870 and mined on both it and the 7970 after I bought it, I'd be making at least $200-250 a month in bitcoins. After 6 months or so, minus electricity, I'd be sitting on about $900+ in profit.

I don't get why people put bitcoin mining down; there are lots of people who have made money. I keep kicking myself for stopping, to be honest.

Also remember, when you mine, you don't run your cards at full juice. You should be undervolting the RAM and clocking it as low as possible, undervolting your CPU and clocking it as low as possible, and using an SSD makes it even more efficient. At the end of the day you are running the GPU at max OC using fewer volts than if you were gaming. If you have free electricity, you've made out like a bandit.



EDIT: Okay, values fixed, haha:

Hardware break even: 241 days
Net profit first time frame: 37.96 USD

With today's price, dual CFX 7970s (or even 7950s) would turn out a decent profit after a year or so. If you already have the equipment, it's all profits 😀 Maybe I will get back into mining.
 
That's just to break even? And what of the 800 bucks he said he made on top of that? Is that even possible?

Most young folks these days are living in apartments where utilities may or may not be included in the rent they pay anyway. You can't compare against the kWh rate, as it does not apply to many of the people who are mining (and in fact to those who get the most out of it).

Using the same calculator and $0 electricity cost, you could pay for a $350 card in about 130 days. Mark never said he made the $800 on his 7970 only. I read it as lifetime, which would be quite easy to do if electricity is included in rent regardless of consumption.

Sigh... I wish I had inclusive utilities again...
 
Using the same calculator and $0 electricity cost, you could pay for a $350 card in about 130 days. Mark never said he made the $800 on his 7970 only. I read it as lifetime, which would be quite easy to do if electricity is included in rent regardless of consumption.

Sigh... I wish I had inclusive utilities again...

For one month I had my miner set up at work (actually, I started mining at work because I was scheduled for a weekend and bored, so I did the research on Saturday and set it up on Sunday). Not paying for electricity is the cat's meow, but after the thing got too hot (and loud) my boss told me I had to stop doing what I was doing (which I said was folding 😀), and thus I brought it home.

Made what I wanted, cashed out (due to fear of people saying the market was going to collapse, which it did) and never looked back - until now.
 
Well right now Bitcoin mining has paid for my 7970 and netted me another $800 profit and counting. I'd say compute performance is pretty important.
That's not possible. The most you would have made is $65/month after electricity costs, and that's being generous, plus it's based on $12 bitcoins which were worth $5 for much of the time you owned your card.

$65 x 9 months = $585

It doesn't even pay for your card. I don't know where you're getting this $800 figure from. Actually you're saying it's $1400 + whatever you paid for the water cooling. It's absurd.
 