[Hardcorp] GALAXY GTX 660 Ti GC OC vs. OC GTX 670 & HD 7950


ocre

Golden Member
Dec 26, 2008
1,594
7
81
Curious what you have to say about this --> a $2,000 GTX690 SLI setup is only 5 fps faster on average and has lower minimum framerates than an $860 HD7970 GE CF setup using triple monitors. The guys who got 7970s and bitcoin mined got them for free too. So it's like $2k vs. $0. :cool:

Could they have been CPU limited? I could say soooo much, let me see...

I would have to say that there are a few games that bring down the average big time; take those out and you could have a very different outcome. What I am saying is that it varies from game to game. The 2x 690 wins most of the time, but there are a couple of major losses that bring down the average. I also don't like the way they average the total fps instead of averaging each individual game's win/loss percentage and tallying those up for a final average.

Then I would like to bring up the bandwidth limitations of the GK104. If there was ever a time to really bring out those limitations, it would be quad SLI at triple-monitor resolutions. I also would like to say that anyone who buys 2x 690s is pretty crazy. But these kind of people are not buying the way most value-oriented people do. They do not care about performance per dollar at all. To me, the 2x 7970 GHz system would save you a lot of cash and give close to 2x 690 performance at triple-monitor resolutions. But I care about the dollars I am spending.

It's not the same story when the resolution goes down. You could say, "someone who buys $2,000 worth of GPUs would surely buy three monitors," but then I would say, "I am not sure rationality is a factor in these people's decisions."

Anyway, I could go on rambling for days. A lot can be said about those results. I think most of the people here aren't planning 2x 690 quad SLI anyway.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Could they have been CPU limited? I could say soooo much, let me see... I also would like to say that anyone who buys 2x 690s is pretty crazy. But these kind of people are not buying the way most value-oriented people do. They do not care about performance per dollar at all. To me, the 2x 7970 GHz system would save you a lot of cash and give close to 2x 690 performance at triple-monitor resolutions. But I care about the dollars I am spending.

Ya, that's my point. For single monitors with 2x GTX670/680, it's a great setup. Go beyond 2x GTX670/680 SLI or add more monitors and things turn sour fast for NV. This guy is running 5-monitor Eyefinity on 4x HD7970 MSI Lightning cards and they are blasting through Guild Wars 2 with 80% CF scaling and smooth gameplay. When talking about really top-of-the-line systems, GTX690 SLI would get creamed by 3-4x HD7970s in multi-monitor gaming, since really high loads expose all of those GK104 memory bandwidth limitations and the 2GB VRAM limit. GK104 is great for now, but NV knows it needs to bolster the memory bandwidth.

You are being extremely pessimistic with your GTX 780 performance estimates. At first you were saying just a "50%" increase from the GTX 680. Now you are saying it will be "40-50%". Look, you could be correct with that estimate if Nvidia is literally incapable of making functional GK110 dies after waiting an entire year for the 28nm process to improve. This is where I get the "idea" that you are biased toward AMD. I think we will see at least a 55-60% increase from GK110 over GK114. If they can get all 15 SMX units, then I expect at least 60-70% more.

VR-Zone - No NVIDIA GeForce GTX 780 before March 2013, Maxwell Only in 2014?

- Kepler will remain a three-GPU line-up: GK104 for performance, GK107 for mobile and entry-level desktop, and GK110 for high-end computational and visualization parts. Ultimately, GK104 served as the GTX 660/670/680/690.

- GK110 will make a public appearance only in December as the Tesla K20

- Given that the Maxwell (GM1xx) parts won't be available until the first half of 2014, the GeForce and Quadro parts should continue to rely on refreshed/renamed GK104/107/110 parts. The GK110 is planned to expand and become available as the Quadro K6000 6GB.

- Maxwell is a 20nm part, capable of being manufactured at GlobalFoundries, IBM, Samsung and TSMC - 20nm Gate-Last HKMG

- Availability of that part, however, will not come sooner than the first half of next year. As far as the GeForce GTX 700 series is concerned, the parts should repeat the same cadence as this year's lineup. However, the newer revised parts should offer further clock improvements, to the level where they can offer 25-30% higher performance and power efficiency.

- Maxwell will be the first top-to-bottom GPU architecture, powering everything from Tegra to Tesla. Furthermore, Maxwell should be the first GPU part to integrate the 64-bit ARM core which carries the codename "Project Denver".

- All in all, 2013 will see AMD's Sea Islands fight first against Nvidia's Kepler refresh, and only then against Maxwell. The real battle will come only in 2014.

Looks like I am not the only one who expects HD8970/GTX780 to be more like 30-40% faster than current high-end parts, not 55-70% faster.
 
Last edited:

Hypertag

Member
Oct 12, 2011
148
0
0
So? Either their source is correct and GK114 can sustain 1300MHz on sane voltage before running into a severe brick wall due to memory bandwidth, or their source is implying that GK110 will be released as some kind of castrated 12 SMX / 320-bit top-bin part. Or maybe GK114 will be 320-bit / 12 SMX (which makes absolutely no sense at all)?

I am not the only person to look at the GK110's specifications and get optimistic about the level of performance it can deliver. You basically said the same thing I said: this chip is capable of approaching GTX 690 performance. About a week after that post, you started dialing back expectations.




Nvidia cares more about yields now because TSMC is charging per wafer instead of per good die. The basic theory behind "2013 will suck for graphics cards" is that 2013 must suck in order for there to be another boost in performance in 2014. Maybe Nvidia is actually doing that, and will make no attempt to show what a full GK110 can do in 2013 because they need another boost in 2014, and small chips on 20nm can't accomplish that goal. As for multiple 20nm production sources, how much does that help? I think it's safe to assume GlobalFoundries will not have a remote chance of being competitive. I think Samsung could be capable of providing at least minimal pricing pressure on TSMC. However, if 20nm is good, then why hedge on 2013 performance? If 20nm is good, then release GM100 with 13 billion transistors in September 2014.
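A rough sketch of why per-wafer billing shifts the yield risk onto Nvidia (every number below is a made-up placeholder for illustration, not actual TSMC terms):

Code:
# Under per-good-die pricing the foundry eats yield losses; under per-wafer
# pricing the customer does. All figures here are hypothetical placeholders.
wafer_price = 5000.0         # assumed price paid per 28nm wafer
dies_per_wafer = 90          # assumed big-die candidates per 300mm wafer
price_per_good_die = 60.0    # assumed fixed price under a per-good-die contract

for yield_rate in (0.8, 0.5, 0.3):
    per_wafer_cost = wafer_price / (dies_per_wafer * yield_rate)
    print(f"yield {yield_rate:.0%}: ~${per_wafer_cost:.0f}/good die on a per-wafer deal, "
          f"${price_per_good_die:.0f}/good die on a per-die deal")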


Edit: Yes, the GTX 690 is one well-designed graphics card. However, it will perform like garbage if you try to use more than 2GB of VRAM. Obviously you can get performance identical to a GTX 690, with no VRAM wall, for less money: http://www.newegg.com/Product/Produc...82E16814130785

Edit: I really, really, really wish Nvidia had made the GTX 690 an "8GB" video card. They could have priced it at $1200 or $1300 with the extra VRAM. It's honestly just not that useful with 2GB per GPU.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I am not the only person to look at the GK110's specifications and get optimistic about the level of performance it can deliver. You basically said the same thing I said: this chip is capable of approaching GTX 690 performance. About a week after that post, you started dialing back expectations.

Did you read my post? What did I say in it?

"With those specs it might have 80% of GTX690's performance."

How fast is a GTX690 vs. GTX680?

52% at 1080P and 60% at 2560x1600.

80% of that performance level = 122-128% of a GTX680, i.e. only 22-28% faster.
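Worked through quickly (the scaling factors are the review figures quoted above, nothing new):

Code:
# "80% of a GTX 690" expressed relative to a single GTX 680.
gtx690_vs_680 = {"1080p": 1.52, "2560x1600": 1.60}   # GTX 690 relative to GTX 680

for res, ratio in gtx690_vs_680.items():
    estimate = 0.80 * ratio
    print(f"{res}: {estimate:.2f}x a GTX 680 (~{(estimate - 1) * 100:.0f}% faster)")
# -> roughly 1.22x to 1.28x, i.e. about 22-28% faster than a GTX 680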

My expectation stayed consistent (30%, 40-50% tops!). You are the one who keeps saying 55-70%. I never stated that or dialed it back. In May I said that if they ship a 3072 SP GK110/780, at least 50% would be reasonable. However, given the trouble NV has had launching their entire GTX600 series, I don't expect a 1200MHz GTX780 with 3072 SPs at this pace. I am being more realistic now. If NV delivers 50%, great, but I am not counting on it. In your case, you are using 55-70% as almost a given, not even the best-case scenario.

Based on your prediction of 55-70%, current high-end GPUs would become completely low-end. If that happens, what's NV going to use for their $250-300 products? GTX680-level performance would need to be $249 if the top-end GTX780 GPU is 70% faster than it by March 2013. That's not happening. You can't possibly believe that in 12 months NV will sell you GTX680-level performance for $250.

Then there is the obvious pricing problem. If the GTX780 is 55-70% faster than a GTX680, NV will mop the floor with the HD8970, and in that case it'll be priced at $800+ like the 8800GTX was. Based on what you are saying, you pretty much believe the GTX680/7970 are low-end products this generation. You actually, realistically expect a 2880-3072 SP GK110 by Spring 2013?
 
Last edited:

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
The 8800 GTX had a $599 MSRP; it was the 8800 Ultra that had a stratospheric MSRP of around $799-829. Considering there may be strong competition from AMD with their next family, that kind of MSRP may not be offered! Time will tell.
 

Hypertag

Member
Oct 12, 2011
148
0
0
Did you read my post? What did I say in it?

"With those specs it might have 80% of GTX690's performance."

How fast is a GTX690 vs. GTX680?

52% at 1080P and 60% at 2560x1600.

80% of that performance level = 122-128% of a GTX680, i.e. only 22-28% faster.

My expectation stayed consistent (30%, 40-50% tops!). You are the one who keeps saying 55-70%. I never stated that or dialed it back. In May I said that if they ship a 3072 SP GK110/780, at least 50% would be reasonable. However, given the trouble NV has had launching their entire GTX600 series, I don't expect a 1200MHz GTX780 with 3072 SPs at this pace. I am being more realistic now. If NV delivers 50%, great, but I am not counting on it. In your case, you are using 55-70% as almost a given, not even the best-case scenario.

Based on your prediction of 55-70%, current high-end GPUs would become completely low-end. If that happens, what's NV going to use for their $250-300 products? GTX680-level performance would need to be $249 if the top-end GTX780 GPU is 70% faster than it by March 2013. That's not happening. You can't possibly believe that in 12 months NV will sell you GTX680-level performance for $250.

Then there is the obvious pricing problem. If the GTX780 is 55-70% faster than a GTX680, NV will mop the floor with the HD8970, and in that case it'll be priced at $800+ like the 8800GTX was. Based on what you are saying, you pretty much believe the GTX680/7970 are low-end products this generation. You actually, realistically expect a 2880-3072 SP GK110 by Spring 2013?

Nvidia will price based upon performance. I consider GK104 to be a mid-range part, because that is what it is. If Nvidia has so little competition from AMD that it can get away with this, then more power to them. Nvidia will do what it wants. If it wants to release a neutered GK110 that barely gets 30% more performance while using 60% more power, then it can do that. I don't see one thing that would be compelling about that theoretical card, but whatever.

Nvidia doesn't have any sort of maximum price it can launch a card at. It can price a GK110 card based upon the exact performance-per-dollar metric that the GTX 680 offers, and price the first salvage part based on the GTX 670's performance per dollar. If a full GK110 is able to deliver performance that warrants an $800 price tag, then it is up to AMD to actually compete. If you want me to blind-guess GK110 pricing, I would say the first card would be $699-749, the first salvage part would be $549-699, and a second salvage part could be $499. That would easily enable the 680 to drop to ~$400 with the 670 at $350.
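For what it's worth, here is what holding the GTX 680's performance per dollar constant looks like off a $500 baseline (the uplift figures are illustrative assumptions, not leaked specs):

Code:
# Pricing a hypothetical GK110 card at the GTX 680's performance per dollar.
gtx680_price = 500.0                   # assumed GTX 680 price as the baseline
for uplift in (1.40, 1.50, 1.60):      # hypothetical performance vs. a GTX 680
    print(f"{uplift:.2f}x a GTX 680 -> ${gtx680_price * uplift:.0f} at equal perf/$")
# -> $700 / $750 / $800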

Do I expect a full 15 SMX chip? That does seem optimistic. Do I expect 16 SMX? No, because the whitepaper says GK110 only has 15 units. If you want to "force" me to guess what the top-bin chip would have, I would guess 14 out of 15 SMX units. After all, Nvidia still got 15 of 16 SMs out of the horrid GF100 die.

As far as SLI scaling goes, I didn't expect it to be that poor, since basically everyone concedes two-way SLI is better than two-way CrossFire (yes, I added "two-way" there on purpose). In any event, I can do better than 60%: this review shows a 69.4% improvement (the inverse of 0.59). Frankly, I think the poor SLI scaling makes the case much, much stronger that GK110 can deliver this level of performance. A full GK110 has 87.5% of the shading capability of a GTX 690 with no SLI penalty.
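For reference, the raw shader-count arithmetic (a sketch assuming 192 SPs per SMX as on GK104, matched clocks, and the 14-of-15 SMX top bin guessed above; none of these bins are confirmed specs):

Code:
# Shader counts relative to a GTX 690 (two GK104 GPUs, 8 SMX each).
sps_per_smx = 192
gtx690_sps = 2 * 8 * sps_per_smx        # 3072 SPs across two GPUs
gk110_full_sps = 15 * sps_per_smx       # 2880 SPs for a full 15 SMX GK110
gk110_14smx_sps = 14 * sps_per_smx      # 2688 SPs for the guessed 14 SMX bin

print(gk110_full_sps / gtx690_sps)      # 0.9375
print(gk110_14smx_sps / gtx690_sps)     # 0.875 -> the 87.5% figure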


 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
The 8800 GTX had a $599 MSRP; it was the 8800 Ultra that had a stratospheric MSRP of around $799-829. Considering there may be strong competition from AMD with their next family, that kind of MSRP may not be offered! Time will tell.

There is no ceiling, and NV could theoretically charge $1k each. Then there is inflation to consider, anyway.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
A full GK110 has 87.5% of the shading capability of a GTX 690 with no SLI penalty.

Based on current projections, the HD8970 is expected to be 25-40% faster than the HD7970. What you predict would be the greatest performance lead for NV in the last 10 years of GPU competition. NOT happening. There is absolutely no way the Kepler refresh "GTX780" or w/e it's called will be 70-80% faster than a GTX680 on average on the 28nm node. You are living in a dream world right now.

Not only that but it makes no sense looking at generational leaps:

GTX480 (100%) --> GTX580 refresh (115-118%) --> GTX680 (150%) --> GTX780 (280%, using your 87%-faster-than-GTX680 projection).

Compared to the GTX580, your hypothetical GTX780 is 2.37x faster. When was the last time NV or AMD increased GPU speed by 2.37x in barely more than 2 years?
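That arithmetic, using the relative-performance figures from this post (GTX 480 normalized to 100%; the 87% uplift is the projection being disputed, not a spec):

Code:
# Generational-leap arithmetic with the GTX 480 as the 1.00 baseline.
gtx580 = 1.18                        # refresh, ~15-18% over the GTX 480 (upper bound)
gtx680 = 1.50                        # ~50% over the GTX 480
gtx780_projected = gtx680 * 1.87     # if a GTX 780 really were 87% faster than a GTX 680

print(gtx780_projected)              # ~2.8x a GTX 480
print(gtx780_projected / gtx580)     # ~2.37x a GTX 580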

That would mean NV will have increased GPU speed 2.8x in 3 years and 2.37x in slightly more than 2 years. We haven't seen such a pace of GPU performance increase in the 6 years since the 8800GTX replaced the 7900GTX (and that was only because the 7900GTX was a non-unified vertex/pixel shader architecture that got rocked in shader-intensive DX9 games). Your 70-80% projection isn't happening: it's getting harder and harder to extract more performance due to power envelope limitations, and the fact that it's harder and harder to shrink transistors to smaller nodes makes manufacturing very expensive.

That K20 chip NV sells is probably running at very low clock speeds to stay under 300W TDP. This is what's likely to happen: all those failed GK110 chips that couldn't be sold for $3-5k as Tesla K20 / Quadro K6000 cards will be rebadged as the GTX780. Alternatively, NV could release a GK1xx version that's some cut-down GK110 part. I could see that happening. Also, NV doesn't want another Fermi GTX480 situation, so it's unlikely they'll release a 1050MHz+ 600mm^2 chip. That would pretty much be taking all the lessons they learned with the GTX480 and throwing them out the window. A 28nm 600mm^2 1050MHz 780 would be pushing a ton of power and heat. The GTX560 Ti ran at much faster clocks than the GTX580 did, with some versions reaching 950MHz. If GK104 = mid-range Kepler, you are not accounting for the clock speed penalty we would need to assume for a chip that's nearly double the size on the same 28nm node. Regardless, this is going wayyyyyy off topic from the original thread.
 
Last edited:

toyota

Lifer
Apr 15, 2001
12,957
1
0
Compared to the GTX580, your hypothetical GTX780 is 2.37x faster.

The GTX680 is only about 30% faster than the GTX580. With AA and games that need lots of memory bandwidth it's only around 20% faster, especially at 2560. Heck, I believe there are a couple of cases where it's barely 10-15% faster, such as AvP in some reviews. If a GTX780 is 70% faster than a GTX680 then, yeah, it would be about 100% faster overall than the GTX580, but in some cases it still would only be about 80% faster than the GTX580.

I am not saying the GTX780 will be that fast, but it's really not that big of a deal since it will have been 2.5 years after the GTX580's release if the GTX780 comes out in March 2013 or so.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Compared to the GTX580, your hypothetical GTX780 is 2.37x faster.

The GTX680 is only about 30% faster than the GTX580. With AA and games that need lots of memory bandwidth it's only around 15-20%, especially at 2560.

Did you forget that the GTX580 has the same 192GB/sec memory bandwidth as the 680? So why would you assume that the performance delta between them shrinks to just 15-20%? It doesn't.

GTX680 is 34% faster at 1080P, 33% faster at 1600P @ Computerbase
GTX680 is 32% faster at 1080P, 31% faster at 1600P @ TechPowerUp

That's exactly what I said: the GTX680 is 30-35% faster than a GTX580, or about 50% faster than a GTX480.

I can't believe you guys seriously believe that the GTX780 (or w/e the name is) will be 70-80% faster than a GTX680 by March 2013... toyota, you can't possibly believe that when the GTX670 is clocked at 980MHz, NV will just stamp out a 600mm^2 chip with 1000MHz clocks like it's not a problem. You realize a GTX480 ran at just 700MHz and it was ridiculously power-hungry. You can't increase GPU clock speeds by 42% on a 600mm^2 28nm chip and not end up in the exact same spot as the GTX480. The 294mm^2, 1058MHz GTX680 is already using 186W of power at load. Please explain how a 1000MHz 600mm^2 Kepler chip can operate at 250W or less.

The K10 dual-GPU Tesla card uses 2x GK104 chips. The single-precision throughput is 4.58 TFLOPS, or 2.29 TFLOPS per GK104 GPU. Reverse-engineering the clock speed: 2.29 TFLOPS / 1536 SPs / 2 ops per clock = 745MHz GPU clock.

There is no way that K20 chip is running at 1GHz. Assuming K20 runs at the same 745MHz clock as its K10 sister, we get:

(2880 SPs x 745MHz) / (1536 SPs x 1058MHz GTX680) = 1.32x, or 32% faster.
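The same derivation in a few lines (the TFLOPS and clock figures are the ones quoted above, and the ratio is raw theoretical throughput, ignoring bandwidth, boost and architectural differences):

Code:
# Reverse-engineer the Kepler clock from published single-precision TFLOPS
# (FLOPS = shaders * 2 ops/clock * clock), then compare raw throughput.
tesla_k10_tflops = 4.58                 # dual-GK104 board figure quoted above
gk104_sps = 1536
k10_clock_mhz = (tesla_k10_tflops / 2) * 1e12 / (gk104_sps * 2) / 1e6
print(round(k10_clock_mhz))             # ~745 MHz per GK104

gk110_sps, gk110_clock = 2880, 745      # hypothetical K20-like GK110
gtx680_sps, gtx680_clock = 1536, 1058
print((gk110_sps * gk110_clock) / (gtx680_sps * gtx680_clock))   # ~1.32x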
 
Last edited:

toyota

Lifer
Apr 15, 2001
12,957
1
0
Did you forget that the GTX580 has the same 192GB/sec memory bandwidth as the 680? So why would you assume that the performance delta between them shrinks to just 15-20%? It doesn't.

GTX680 is 34% faster at 1080P, 33% faster at 1600P @ Computerbase
GTX680 is 32% faster at 1080P, 31% faster at 1600P @ TechPowerUp

That's exactly what I said: the GTX680 is 30-35% faster than a GTX580, or about 50% faster than a GTX480.

I can't believe you guys seriously believe that the GTX780 (or w/e the name is) will be 70-80% faster than a GTX680 by March 2013...
Test MORE games, like BFG did, and it will be in the 25% range overall. That's because some games are indeed hardly any faster on the GTX680.

Here the GTX680 is NO faster than the GTX580 in AvP at 1920 or 2560: http://www.techspot.com/review/565-nvidia-geforce-gtx-660-ti/page3.html

If the 680 was not clocked so high it would be a turd of an "upgrade" from a GTX580. I really think the whole boost thing was a last-minute move just to make the mid-range GK104 beat the GTX580 and match the 7970.
 
Last edited:

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
It depends. The Tesla K20 looks like a beast, and the general idea is to keep the beast loaded 100% all the time. Games don't work that way; the load varies quite dramatically. So if we really see a K20 desktop port (unlikely, for various reasons) it can be way faster than the 680 depending on the situation.
Did you forget that the GTX580 has the same 192GB/sec memory bandwidth as the 680? So why would you assume that the performance delta between them shrinks to just 15-20%? It doesn't.

GTX680 is 34% faster at 1080P, 33% faster at 1600P @ Computerbase
GTX680 is 32% faster at 1080P, 31% faster at 1600P @ TechPowerUp

That's exactly what I said: the GTX680 is 30-35% faster than a GTX580, or about 50% faster than a GTX480.

I can't believe you guys seriously believe that the GTX780 (or w/e the name is) will be 70-80% faster by March 2013...
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You are using AvP, which has poor scaling for both NV and AMD GPUs, to show me that the GTX680 is now only 25% faster overall than the GTX580? I don't think it's reasonable to cherry-pick games to start bringing down the 680's performance vs. the 580.

[BF3 1920x1200 benchmark chart]


If anything, BFG has over the years shown a strong bias towards FPS titles, which is fair since he loves that genre. I am not going to tell him how to do his own GPU testing. Having said that, he rarely tests third-person games, racing games, strategy games, flight games, RPGs, or action-adventure games. So really, if you want to claim 25% faster in FPS games, half of which I bet are ancient titles since he loves old-school shooters, that's fair. What's in that test? Cryostasis, Far Cry 1, Far Cry 2, Call of Duty games, FEAR games, Return to Castle Wolfenstein, Quake, Doom games? Ya, hardly a representative list of modern titles. 99.99% of people don't care that a GTX680 is only 25% faster in those games. For BFG it's a reasonable test since that's what he plays, and I don't criticize that testing, but it is hardly relevant to what most people play.

It depends. The Tesla K20 looks like a beast, and the general idea is to keep the beast loaded 100% all the time. Games don't work that way; the load varies quite dramatically. So if we really see a K20 desktop port (unlikely, for various reasons) it can be way faster than the 680 depending on the situation.

I never said the 780 won't be way faster, but it's the definition of "way faster" that's being discussed here. I am saying 35-40% faster, and what's being said counter to this opinion is 70-80% faster. Some games will peg the GPU to 99%, like Metro 2033. If that happens, what do you think the peak power consumption will be for a 1GHz 2880 SP 600mm^2 GTX780? It'll be like Fermi 2.0. Even if we ignore power consumption, none of you guys is assigning a clock speed penalty attributable to larger chips. You can't just keep adding complexity to a semiconductor chip and continue to expect no clock speed penalty as the die size grows. Further, none of you is considering that the HD7970 GE has full compute functionality and, at just 365mm^2, is already pushing above 200W. It's just logical deduction that a 520-600mm^2 28nm chip with full GPGPU / double-precision compute running at 1GHz is not possible. What, do you guys think NV has some magical 28nm node process that AMD doesn't have? :p
 
Last edited:

toyota

Lifer
Apr 15, 2001
12,957
1
0
I was only giving that as an example of where the 680 did nothing more than what the 580 could do. And perhaps you need to look at that chart again, because AMD does just fine with the 7970, and it scales nicely on the GHz Edition too.
 
Last edited:

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
My guess is it will be around 45-50% faster in demanding titles. When the 780 comes out the process will be more mature, but even then it would be hard to retain the 680's clock speeds, I agree.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I was only giving that as an example of where the 680 did nothing more than what the 580 could do. And perhaps you need to look at that chart again, because AMD does just fine with the 7970, and it scales nicely on the GHz Edition too.

Performance-wise, sure, but in terms of power consumption the performance/watt of a 1050MHz 7970 is worse than a stock 7970. The 7970 is just a 365mm^2 chip. How can NV release a 520-600mm^2 28nm Kepler and not blow past 250W? It's not even just taking a GTX680 and enlarging it to 500mm^2; you have the more power-hungry 384-bit bus, more VRAM, and all that GPGPU compute / dynamic scheduler complexity. Think about it: the GTX580 peaked at 229W of power and it was clocked at only 772MHz. A node shrink can only take you so far. An 850MHz 2880 SP GK110 is a lot more believable.
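To put very rough numbers on that (a naive sketch that scales dynamic power linearly with shader count and clock from the GTX 680 figures used in this thread; real silicon doesn't scale this cleanly, and voltage, leakage and the wider memory bus are ignored):

Code:
# Naive load-power scaling from a GTX 680 baseline (1536 SPs, 1058 MHz, ~186 W).
base_power_w, base_sps, base_mhz = 186.0, 1536, 1058

def scaled_power(sps, mhz):
    # assumes power grows linearly with unit count and clock at fixed voltage
    return base_power_w * (sps / base_sps) * (mhz / base_mhz)

print(round(scaled_power(2880, 1000)))   # ~330 W for a 1 GHz, 2880 SP chip
print(round(scaled_power(2880, 850)))    # ~280 W at 850 MHz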
 
Last edited:

toyota

Lifer
Apr 15, 2001
12,957
1
0
Performance-wise, sure, but in terms of power consumption the performance/watt of a 1050MHz 7970 is worse than a stock 7970. The 7970 is just a 365mm^2 chip. How can NV release a 520-600mm^2 28nm Kepler and not blow past 250W? It's not even just taking a GTX680 and enlarging it to 500mm^2; you have the more power-hungry 384-bit bus, more VRAM, and all that GPGPU compute / dynamic scheduler complexity. Think about it: the GTX580 peaked at 229W of power and it was clocked at only 772MHz. A node shrink can only take you so far.
And the GK110 will be an even more power-hungry card. That's sort of my point: it will take the big GK110, aka GTX780, to be the proper overall upgrade from the GTX580.

Really, Nvidia should have made more of a compromise. GK110 looks stupidly big and power-hungry, and it's as if Nvidia learned nothing from the GTX480 fiasco. A 2304 SP / 192 TMU / 48 ROP card with a fixed clock around 950-1000MHz would have made much more sense than the crazy huge 2880 SP GK110.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
There is no confirmation that GK110 will actually be used as the foundation for the GTX780. It could forever be relegated to server and workstation parts, and NV may use some adapted form of this chip or something else entirely. We aren't 100% sure that the 15 SMX GK110 chip will actually be a GTX780. Based on VR-Zone's 25-30% claim, NV may even continue using a revamped GK104 for the entire Kepler consumer generation.

"...the fact of the matter is that Kepler will remain a three GPU line-up: GK104 for performance, GK107 for mobile and entry-level desktop and the GK110 for high-end computational and visualization parts."
 
Last edited:

toyota

Lifer
Apr 15, 2001
12,957
1
0
There is no confirmation that GK110 will actually be used as the foundation for the GTX780. It could forever be relegated to server and workstation parts, and NV may use some adapted form of this chip or something else entirely. We aren't 100% sure that the 15 SMX GK110 chip will actually be a GTX780. Based on VR-Zone's 25-30% claim, NV may even continue using a revamped GK104 for the entire Kepler consumer generation.

"...the fact of the matter is that Kepler will remain a three GPU line-up: GK104 for performance, GK107 for mobile and entry-level desktop and the GK110 for high-end computational and visualization parts."
Well, surely they have to be working on something. GK114 is not going to stand a chance against anything faster from AMD, and I don't see how they can revamp GK104 into one either. It's using the fastest memory available, and nothing on the GPU is being held back now. Higher clocks alone will not get the job done.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Curious what you have to say about this --> a $2,000 GTX690 SLI setup is only 5 fps faster on average and has lower minimum framerates than an $860 HD7970 GE CF setup using triple monitors. The guys who got 7970s and bitcoin mined got them for free too. So it's like $2k vs. $0. :cool:

Well, you just wait until nVidia's real top-of-the-line chip comes out. You'll be sorry then. /sarc
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
NV had no problem selling the mid-range GK104 680 for $500.



What incentive does NV have to launch a $500-600 chip that's twice the size with 70-80% more performance? They can roll out a 30-40% faster chip on a smaller die and make more $ since it would cost them less from a manufacturing point of view and they'd still sell every single one at $500+. If you are selling $3k+ GK110s to professionals/server markets and you are capacity constrained, why would you start selling those chips for $500-600 instead and make way less $?

Well, you just wait until nVidia's real top-of-the-line chip comes out. You'll be sorry then. /sarc

:biggrin: Ya, but then in 2014 Maxwell comes out. Why would I buy NV's mid-life Kepler tech in 2013 when Maxwell is the real next-generation flagship that will pwn a GTX780 by 50%? :p
 
Last edited:

Grooveriding

Diamond Member
Dec 25, 2008
9,108
1,260
126
Seeing all this talk of GK110 and looking at GK104, I think we are going to see GK104 for a looooong time. It will wind up being the next G92, imo. When we get the GTX 780/770, I bet the 760 will be GK104, or GK114. They're going to milk this chip for years to come. :p

I'm trying to remember if GF104 had disabled areas that were enabled in GF114? The only thing with GK104 is that there is nothing more to unlock in the chip, and there is no faster GDDR5 available to give it more bandwidth. So currently I don't see anything better from Nvidia until they release a consumer GK110.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Apparently the GTX780 will be 60-80% faster than your 680, so you should start preparing to dump those 680s before they drop to $200 on the second-hand market ;)
 

Grooveriding

Diamond Member
Dec 25, 2008
9,108
1,260
126
Apparently the GTX780 will be 60-80% faster than your 680, so you should start preparing to dump those 680s before they drop to $200 on the second-hand market ;)

I think the 780 will be 50% faster; 80% is way too much to expect. I think 80% faster than a 580 will make sense for the 780 ;) What the 680 should have been.

I am probably passing on the 780. There are just no games right now that I can't run where I want them to be. 2x 780s is going to be overkill for 2560x1600. :D Sad times.
 

Hypertag

Member
Oct 12, 2011
148
0
0
Based on current projections, the HD8970 is expected to be 25-40% faster than the HD7970. What you predict would be the greatest performance lead for NV in the last 10 years of GPU competition. NOT happening. There is absolutely no way the Kepler refresh "GTX780" or w/e it's called will be 70-80% faster than a GTX680 on average on the 28nm node. You are living in a dream world right now.

Where did I state it would be 80% faster? Where? I stated a definitive fact. A full GK110 has 87.5% of the shading capability of the GTX 690. I don't have the tiniest clue how you are going to disprove this fact, since GK110 has 15 SMX units and GK104 has 8 SMX units. It isn't rocket science.