
GeForce Titan coming end of February

They sell the vast majority of GK110 as Tesla cards, for a much higher price. Since Tesla provides the volume, the only reason to sell them as gaming GPUs at all is to collect money from the people who are willing to pay more for a GPU.

Actually, that's not the only reason. They have always sold the top chip to both the consumer and pro markets. The reason for charging $900 is supply and demand, should the rumors prove to be true, of course.
 
Nope, they actually scale perfectly - as long as the bandwidth is sufficient. The 680 is a doubled 650 Ti in every regard and clocked about 10% higher. And it has 220-230% of the 650 Ti's performance. Go figure 😉

This. As long as you can keep them fed, they do. Graphics are in the "embarrassingly parallel" category.
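For what it's worth, here is the napkin math behind that 680-vs-650 Ti claim, assuming perfectly linear scaling with execution units and clock (which is exactly the point being argued):

```python
# Napkin math only: assumes the GTX 680 is exactly a doubled GTX 650 Ti
# clocked ~10% higher, and that performance scales linearly with both.
cores_650ti, cores_680 = 768, 1536
units_ratio = cores_680 / cores_650ti      # 2.0x the execution units
clock_ratio = 1.10                         # ~10% higher clocks, as claimed above

print(f"Expected scaling: {units_ratio * clock_ratio:.2f}x")  # ~2.20x, in line with the quoted 220-230%
```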
 
Yeah, I guess so. I'm so sleep deprived (my wife and I just had our first kid; he is six weeks old today) that I don't feel like making any effort for anything.

I have a feeling this price tag won't stick for all that long. If the 8970 is 25% faster than the 7970 GE, then it will only be 15-20% slower than the GeForce Titan. AMD wouldn't price the 8970 at $699, would they?

LOL, welcome to the club...hehe. Mine are 5 & 3, and we still wake up in the night to one of them...
 
Nope, they actually scale perfectly - as long as the bandwidth is sufficient. The 680 is a doubled 650 Ti in every regard and clocked about 10% higher. And it has 220-230% of the 650 Ti's performance. Go figure 😉

The GTX 650 Ti has 768 CUDA cores across 4 SMX, 2 GPC, a 128-bit memory bus, and 16 ROPs. The GTX 680 has 1536 CUDA cores across 8 SMX, 4 GPC, a 256-bit bus, and 32 ROPs. Almost every aspect is doubled, and in fact clocks on the GTX 680 are higher. The GTX 780 has 2688 stream processors, 14 SMX, 5 GPC, a 384-bit bus, and 48 ROPs. Clocks are expected in the 850-900 MHz range on the GTX 780, compared to 1058 MHz on the GTX 680.

With 75% more CUDA cores and SMX units but only 25% more GPCs, the GPC:SMX ratio drops from 1:2 to 1:3 on 4 of the 5 GPCs; only the last GPC, with one SMX disabled, keeps the 1:2 ratio. And with only a 50% increase in bandwidth and ROPs, the back end isn't doubled either. With so many changes, perfectly linear scaling is not going to be possible even at the same clocks.

Anyway, the launch is only a month away, so we'll soon get a clue as to how efficient the GK110 design is and how well its performance scales relative to the GTX 680.
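Putting those rumored numbers into a quick sketch (every spec here comes from the leak quoted above, none of it is confirmed):

```python
# Rough estimate from the rumored specs quoted above (nothing here is confirmed).
cores_680, clock_680 = 1536, 1058          # GTX 680: CUDA cores, clock in MHz
cores_780, clock_780 = 2688, 875           # rumored GTX 780/Titan, midpoint of the 850-900 MHz range

shader_ratio = (cores_780 * clock_780) / (cores_680 * clock_680)
rop_bw_ratio = 48 / 32                     # ROPs (and the 384- vs 256-bit bus) only grow 1.5x

print(f"Raw shader throughput vs GTX 680: {shader_ratio:.2f}x")   # ~1.45x
print(f"ROPs / memory bus vs GTX 680:     {rop_bw_ratio:.2f}x")   # 1.50x
```

So even on paper the shader side only lands around 1.45x a GTX 680 at those clocks, and the back end grows less than that, which is roughly consistent with the ~1.5x estimates floating around this thread.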
 
The GTX 690 is slightly slower than GTX 680 SLI. Assuming 1.8x GTX 680 performance, the GTX 780 is 0.85 x 1.8 = 1.53x.

85% of 1.53x GTX 680 is 0.85 x 1.53 = 1.3x GTX 680 performance.

Why are you multiplying by 0.85 twice?
The GTX 780 is 0.85 x 1.8 = 1.53x, i.e. 53% faster than the 680.
 
Why are you multiplying by 0.85 twice?
The GTX 780 is 0.85 x 1.8 = 1.53x, i.e. 53% faster than the 680.

Because he wants 0.85 of GK110 performance.

Anyway, desire has got nothing to do with it 😀
If AMD doesn't have a mean and lean ~400mm² chip by now, we can multiply all we want; once launched, GK110 will remain in a class of its own, price- and performance-wise.
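Just to lay out the two chains of multiplication being argued about a few posts up (the 1.8x and both 0.85 factors are assumptions from this thread, not official numbers):

```python
gtx690_vs_680 = 1.80          # assumed: GTX 690 ~= 1.8x a single GTX 680
titan_vs_690 = 0.85           # rumored: Titan at ~85% of a GTX 690

titan_vs_680 = titan_vs_690 * gtx690_vs_680     # 1.53x GTX 680
cut_down_vs_680 = 0.85 * titan_vs_680           # hypothetical part at 85% of full GK110, ~1.30x

print(f"Titan vs GTX 680:              {titan_vs_680:.2f}x")
print(f"85%-of-Titan part vs GTX 680:  {cut_down_vs_680:.2f}x")
```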
 
According to SweClockers' sources, the launch of the GeForce Titan will resemble that of the GeForce GTX 690. Partner manufacturers must follow Nvidia's reference design to the letter and cannot even put their own stickers on the graphics cards. The performance is estimated at about 85 percent of a GeForce GTX 690.

Seems a bit excessive, doesn't it? They won't even allow branding?
 
This card is surely going to be a beast with an appetite. I have a feeling we will have a high-power Radeon ready to go right about the same time. This should make for a very nice battle for my buck!!!!
 
So I just checked my calendar. Chinese New Year runs February 10th through the 25th. Assuming any of this is at all accurate, the launch date is the least likely part. Either it will launch before everyone goes on vacation, or it won't launch until a few weeks after. Someone has to package and test cards to make a launch happen.
 
This card is surely going to be a beast with an appetite. I have a feeling we will have a high-power Radeon ready to go right about the same time. This should make for a very nice battle for my buck!!!!

Assuming your "feeling" is right, I'll lay odds you buy them both. 😀
 
Welcome to the new Nvidia... that's why I'm so happy that AMD's cracking down on stutter. With how ridiculous Nvidia is being, AMD's my only option.

Soso.
AMD is your only choice? Yeah, if we ignore that we can thank AMD for the prices of Nvidia cards this generation. :hmm:
 
Just so we are clear here... this WILL be NVDA's top-performing part, won't it?
None of this "OMG it's only a midrange chip" BS when it gets beaten... right?
 
Just so we are clear here... this WILL be NVDA's top-performing part, won't it?
None of this "OMG it's only a midrange chip" BS when it gets beaten... right?

Everyone, including you, knows it will be Nvidia's top-end chip for 28nm. And what magical single GPU do you think will beat it?
 
Soso.
AMD is your only choice? Yeah, if we ignore that we can thank AMD for the prices of Nvidia cards this generation. :hmm:

This isn't AMD's fault. This is Nvidia redefining / creating a new halo segment. There is honestly no way this card should be $900, unless it's going to be an extremely limited production run. And if it isn't a limited run, then either we all have the MSRP wrong, it won't stay that high for a significant amount of time, or it won't sell.

If it were $799 for the 6GB version and $699 for the 3GB version, I would still be calling it overpriced, but it would certainly be much more palatable. However, since the information at hand is basically a copy and paste of K20X, we don't know for sure what clock speeds and how much VRAM it will end up with. So, while the model name and approximate release date may be correct, I'm going to hold out hope (albeit faint hope) that Nvidia isn't actually stupid enough to price the card that high.
 