
[SemiAccurate] Tesla K20 specs: 13 SMX, GeForce probably 12-13 SMX

Until GF100, nVidia always released the full chip. There is no reason to believe that they will change their behavior.

That is not correct. G92 was first released as the 8800GT, which had some fused-off cores. The 8800GTS 512 came six weeks later, and that was the fully functional G92.
 
I would pay $1k+ for a consumer GK110. What people don't understand is that this product isn't made for games, but rather for the HPC market. Even at $3k a pop, this is VERY cheap compared to buying dozens of Intel processors. The features NVIDIA is adding (e.g., Hyper-Q and dynamic parallelism) are worth a substantial amount of money to many people. NVIDIA is really driving serious innovation here. I only wish AMD had some cash to invest in this area as well. But NVIDIA is really doing an amazing job, especially with their toolkit support.

I would be very surprised to see them release this product as a consumer gaming card, especially since a significant amount of die real estate goes to DP, which consumers do not need at all.

Many clusters are lining up for this chip, and we are as well. Even though we are buddy-buddy with NV, we still have to wait in line for a while 🙁
 
That is not correct. G92 was first released as the 8800GT, which had some fused-off cores. The 8800GTS 512 came six weeks later, and that was the fully functional G92.

The 8800GT was a counter to AMD's RV670, and G92 was nVidia's first 65nm product.
 
I only wish AMD had some cash to invest in this area as well. But NVIDIA is really doing an amazing job, especially with their toolkit support.

The problem is AMD has no chance in HPC markets; they are playing catch-up in a field that's evolving rapidly. They are years behind, and their software support is pathetic compared to CUDA's wide usage. The only reason for server makers to consider AMD is if they only care about SP ops or bitcoin-esque code.

I don't see AMD having much success in HPC for a long, long time. There are other markets they should be pushing their GPU lead in: smartphones, tablets, the mobile space, and crushing Intel's iGPUs. These consumer markets are easy to penetrate if your product has superior perf/W; these users aren't going to care what chip is in the phone, as long as it's fast and has long battery life. Wasting R&D to compete with NV in HPC is a wasted effort, IMO.
 
The problem is AMD has no chance in HPC markets; they are playing catch-up in a field that's evolving rapidly. They are years behind, and their software support is pathetic compared to CUDA's wide usage. The only reason for server makers to consider AMD is if they only care about SP ops or bitcoin-esque code.

I don't see AMD having much success in HPC for a long, long time. There are other markets they should be pushing their GPU lead in: smartphones, tablets, the mobile space, and crushing Intel's iGPUs. These consumer markets are easy to penetrate if your product has superior perf/W; these users aren't going to care what chip is in the phone, as long as it's fast and has long battery life. Wasting R&D to compete with NV in HPC is a wasted effort, IMO.

AMD's HSA roadmap, with complete architectural CPU and GPU integration by 2014, gives AMD the foundations for their long-term HPC strategy. With server APUs having unified memory and maybe even stacked DRAM, Nvidia will face a different challenge. Let's see what Nvidia's Project Denver brings to the table. With support for OpenCL and C++ AMP, AMD is also getting broad ecosystem support from other companies. Nvidia's long-term growth depends on how well they can compete with Intel and AMD in the integration era.

With Samsung and Apple dominating the phone market and going for their own CPU designs, AMD should avoid that market. Nokia, RIM, and Motorola are all casualties in the consumer phone market because Samsung and Apple have destroyed the competition. The tablet market is worth pursuing.
 
But it was still cut down, and it was still released before Fermi, making your statement as you said it incorrect.

The 8800GT was released as a counter. But at that time there was no real supply of G92, so they released G92 with 7 clusters to put out "enough" cards; until the 8800GTS there was nearly nothing on the market.
 
The 8800GT was released as a counter. But at that time there was no real supply of G92, so they released G92 with 7 clusters to put out "enough" cards; until the 8800GTS there was nearly nothing on the market.

I am neither arguing nor disagreeing with your assessment of G92; I merely pointed out that you incorrectly stated "until GF100 nVidia always released the full chip." G92 came before Fermi, and the first product based on G92 had fused-off cores. Plain and simple.
 
I am neither arguing nor disagreeing with your assessment of G92; I merely pointed out that you incorrectly stated "until GF100 nVidia always released the full chip." G92 came before Fermi, and the first product based on G92 had fused-off cores. Plain and simple.

They released the 8800GTS six weeks after the 8800GT, and only then was there enough supply of G92. That is not like Fermi and the GTX 480.

The 8800GT was always a "get go" product for G92. And we are talking here about a 323mm² die. The GTX 280 had a full GT200 chip and came 7 months after the 8800GTS.

Charlie implies here that the 14 SMX K20s are "special" chips for ORNL (and maybe other supercomputers), and the 13 SMX K20 is the "real" version that will actually be available on the market.

He has no clue, judging from the material. 🙄

nVidia claims that 90% of the 20 PetaFLOP/s comes from the 18,688 K20 cards. Which means every card must deliver only ~964 GFLOP/s of real DP throughput.

So let us do the math:
nVidia said that GK110 reaches >80% efficiency in DGEMM. Assuming 85%, each card needs a peak of only ~1,134 GFLOP/s DP (964 / 0.85). K20 gets that with:
15 SMX and ~590MHz
14 SMX and ~630MHz
13 SMX and ~680MHz
12 SMX and ~740MHz
11 SMX and ~805MHz

Yeah, they were desperate to bin a K20 card with the needed performance...
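The binning arithmetic above is easy to reproduce. A quick sketch, using the figures from this thread plus the commonly cited GK110 layout of 64 FMA-capable DP units per SMX (2 FLOPs per unit per cycle):

```python
# Rough check of the binning math, using the figures from the post.
TOTAL_PFLOPS = 20.0      # claimed system performance, PFLOP/s
GPU_SHARE = 0.90         # fraction nVidia attributes to the K20 cards
NUM_CARDS = 18688        # number of K20 cards
DGEMM_EFF = 0.85         # assumed DGEMM efficiency (nVidia quotes >80%)

# Real DP throughput each card must deliver, in GFLOP/s (~963).
per_card_real = TOTAL_PFLOPS * 1e6 * GPU_SHARE / NUM_CARDS
# Peak DP throughput needed to sustain that at the assumed efficiency (~1,133).
per_card_peak = per_card_real / DGEMM_EFF

# GK110 has 64 DP units per SMX, each doing 2 FLOPs per cycle (FMA).
def clock_needed_mhz(smx_count):
    return per_card_peak * 1e3 / (smx_count * 64 * 2)

for smx in range(15, 10, -1):
    print(f"{smx} SMX: ~{clock_needed_mhz(smx):.0f} MHz")
```

The exact clocks shift by a few MHz depending on rounding and on the efficiency you assume, but the conclusion is the same: even an 11 SMX part at ~805 MHz would hit the target.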
 
The problem is AMD has no chance in HPC markets; they are playing catch-up in a field that's evolving rapidly. They are years behind, and their software support is pathetic compared to CUDA's wide usage. The only reason for server makers to consider AMD is if they only care about SP ops or bitcoin-esque code.

I don't see AMD having much success in HPC for a long, long time. There are other markets they should be pushing their GPU lead in: smartphones, tablets, the mobile space, and crushing Intel's iGPUs. These consumer markets are easy to penetrate if your product has superior perf/W; these users aren't going to care what chip is in the phone, as long as it's fast and has long battery life. Wasting R&D to compete with NV in HPC is a wasted effort, IMO.

HPC margins are like no other. AMD is probably tempted by the fact that incremental effort on top of their GPUs can get them into a market with significant profits. Maybe it's working a bit; they do have several design wins in the Top500. But NVIDIA seems to have a bigger chunk of the market.
 
The problem is AMD has no chance in HPC markets; they are playing catch-up in a field that's evolving rapidly. They are years behind, and their software support is pathetic compared to CUDA's wide usage. The only reason for server makers to consider AMD is if they only care about SP ops or bitcoin-esque code.

I don't see AMD having much success in HPC for a long, long time. There are other markets they should be pushing their GPU lead in: smartphones, tablets, the mobile space, and crushing Intel's iGPUs. These consumer markets are easy to penetrate if your product has superior perf/W; these users aren't going to care what chip is in the phone, as long as it's fast and has long battery life. Wasting R&D to compete with NV in HPC is a wasted effort, IMO.

Sorry, but what you are saying doesn't make sense. HPC is a lucrative market with very little competition. AMD has the product to compete. That wasn't always true, but Tahiti has changed that. The gaming card market doesn't have a lot of growth opportunity.

The workstation video and animation market is the one that they are going to have the most difficulty with. They need to schmooze Autodesk.

HPC margins are like no other. AMD is probably tempted by the fact that incremental effort on top of their GPUs can get them into a market with significant profits. Maybe it's working a bit; they do have several design wins in the Top500. But NVIDIA seems to have a bigger chunk of the market.

True. Although margins will shrink in HPC if AMD is successful with OpenCL.
 
Sorry, but what you are saying doesn't make sense. HPC is a lucrative market with very little competition. AMD has the product to compete. That wasn't always true, but Tahiti has changed that. The gaming card market doesn't have a lot of growth opportunity.

K20 will easily beat the W9000. The opportunity in this generation is over.
 
AMD is going to have to compete with NV, and NV owns the market. It's worse than the CPU story vs Intel's domination.

I disagree about competing with nVidia being worse than competing with Intel. Intel is 14X the size of nVidia. Intel is miles ahead of TSMC and GloFo (AMD/nVidia suppliers) on manufacturing process. nVidia does own the market. That can be changed, though.
 
I disagree about competing with nVidia being worse than competing with Intel. Intel is 14X the size of nVidia. Intel is miles ahead of TSMC and GloFo (AMD/nVidia suppliers) on manufacturing process. nVidia does own the market. That can be changed, though.

Intel is 14x the size of Nvidia but can't compete against Nvidia in the compute arena. Not even with a two-node process advantage could they compete. Not even with Knights Corner. So it isn't sheer size that makes you superior. It surely makes it easier to "become" superior, but the product is key.
Anyway, just thinking out loud.
 
AMD has the product to compete. That wasn't always true, but Tahiti has changed that.

Not really, not even close. nV's actual throughput is ~20% higher than the S9000's theoretical peak. They have strong parts for SP throughput, to be sure, but SP doesn't get you into the big boy game.
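That gap can be ballparked from the commonly cited peak specs (K20: ~1.17 TFLOP/s DP peak; FirePro S9000: 806 GFLOP/s DP peak — published figures, not numbers from this thread), together with the ~85% DGEMM efficiency implied by nVidia's >80% claim:

```python
# Ballpark of K20 *sustained* DP vs. FirePro S9000 *peak* DP.
# Peak figures are commonly cited published specs, not thread numbers.
K20_PEAK_DP = 1170.0     # GFLOP/s, K20 peak double precision
K20_DGEMM_EFF = 0.85     # nVidia quotes >80% DGEMM efficiency
S9000_PEAK_DP = 806.0    # GFLOP/s, S9000 (Tahiti) peak double precision

k20_sustained = K20_PEAK_DP * K20_DGEMM_EFF       # ~995 GFLOP/s
advantage = k20_sustained / S9000_PEAK_DP - 1.0
print(f"K20 sustained vs S9000 peak: +{advantage:.0%}")
```

That works out to roughly +23%, in the same ballpark as the ~20% figure above.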

The workstation video and animation market is the one that they are going to have the most difficulty with. They need to schmooze Autodesk.

They need to have a better driver team. People can say what they want about AMD's consumer Windows drivers; once you step away from that narrowly defined segment, AMD's drivers have a well-earned reputation for being crap. They don't need Autodesk's support; they need to support Autodesk.
 
Sorry, but what you are saying doesn't make sense. HPC is a lucrative market with very little competition. AMD has the product to compete. That wasn't always true, but Tahiti has changed that. The gaming card market doesn't have a lot of growth opportunity.

The workstation video and animation market is the one that they are going to have the most difficulty with. They need to schmooze Autodesk.



True. Although margins will shrink in HPC if AMD is successful with OpenCL.

Unfortunately OpenCL is miles behind CUDA in features. 🙁
 
Intel is 14x the size of Nvidia but can't compete against Nvidia in the compute arena. Not even with a two-node process advantage could they compete. Not even with Knights Corner. So it isn't sheer size that makes you superior. It surely makes it easier to "become" superior, but the product is key.
Anyway, just thinking out loud.

Unless I misunderstood him, he was saying that competing against nVidia in HPC is worse than competing against Intel in the CPU market. It's the only comparison that makes sense.
 
Not really, not even close. nV's actual throughput is ~20% higher than the S9000's theoretical peak. They have strong parts for SP throughput, to be sure, but SP doesn't get you into the big boy game.

Sea Islands will be faster than Southern Islands, as well. Remember that by the time K20 hits the retail market, Tahiti will be a year old. We'll have to see if CI can compete.



They need to have a better driver team. People can say what they want about AMD's consumer Windows drivers; once you step away from that narrowly defined segment, AMD's drivers have a well-earned reputation for being crap. They don't need Autodesk's support; they need to support Autodesk.

It's a chicken-or-the-egg scenario. As long as Autodesk engineers have nVidia cards in their workstations, their software is going to run better on nVidia hardware. It makes nVidia's support far easier: if there's a bug, Autodesk tells nVidia, likely supplies them with the code, a bug report, etc., and nVidia gets to fix it. While it's real easy to say AMD's driver team sucks, that's an almost impossible advantage to overcome.

I never said Autodesk would or should support AMD. I said that AMD needs to schmooze Autodesk. That means AMD has to make it worth Autodesk's time to work with them. That's a relatively small market, though; I wouldn't make it a top priority if I were AMD. It would be on my to-do list, though. It's just that a lot of people tend to lump it all together as "The Professional Market".
 