mm...good idea...let's compare some mythical, non-existent chip with AMD's freely available production chip...that'll work... :whistle:
http://www.theregister.co.uk/2012/11/12/nvidia_tesla_k20_k20x_gpu_coprocessors/
The GK104 versus the GK110
The GK104 weighs in at 3.54 billion transistors, and bundles 192 single-precision CUDA cores into what Nvidia calls a streaming multiprocessor extreme (SMX) bundle.
The GK104 has eight SMX units for a total of 1,536 cores. Each SMX unit has 64KB of L1 cache, with 768KB of L2 cache added for the SMX units to share. Unlike the predecessor "Fermi" GPUs, the Kepler GK104 chip has 48KB of read-only cache memory tied to the texture units. On the Tesla K10 card, all of the 1,536 cores are fired up, and there are two GK104s on the PCI-Express card, each with 4GB of GDDR5 graphics memory and 160GB/sec of memory bandwidth off the card.
The Tesla K10 card has a piddling 190 gigaflops of double-precision math, but a whopping 4.58 teraflops of single-precision. For weather modeling, life sciences, seismic processing, signal processing, and other workloads that don't care a bit about DP floating point, this GPU coprocessor is exactly what the scientist ordered.
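Those headline numbers can be sanity-checked from the core count: peak throughput is roughly cores × clock × 2 (one fused multiply-add, i.e. 2 flops, per core per cycle), and GK104 runs double precision at 1/24 the single-precision rate. A rough sketch, with the ~745MHz clock assumed from the K10's published spec sheet:

```python
def peak_gflops(cores, clock_ghz, flops_per_cycle=2):
    """Theoretical peak: each CUDA core retires one FMA (2 flops) per cycle."""
    return cores * clock_ghz * flops_per_cycle

# Tesla K10: two GK104 chips, 1,536 cores each, ~745 MHz (assumed)
sp = peak_gflops(2 * 1536, 0.745)
dp = sp / 24  # GK104 executes double precision at 1/24 the SP rate
print(f"SP: {sp:.0f} GFLOPS, DP: {dp:.0f} GFLOPS")
# lands right around the 4.58 teraflops SP / 190 gigaflops DP quoted above
```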
Did you want to buy one? A K20?
http://www.tigerdirect.com/applicati...?EdpNo=7462665
mm...good idea...let's compare some mythical, non-existent chip with AMD's freely available production chip...that'll work... :whistle:
GK110 does exist, and it would be fair game to compare it to GCN's top dog, Tahiti (especially considering that Tahiti is used in the best professional card AMD sells). This thread isn't just about what has been made available for desktop graphics cards; it's about the architecture in general.
The problem with GK110, though, is that hardly any benchmarks for it are available. So while it exists, that fact hardly helps evaluate just how good the Kepler architecture is.
Right now it's only in a Tesla card, so gaming benchmarks are out of the question. Still, it has pretty conservative clocks. That's quite normal for professional cards, but I would like to see whether it handles the same clocks as GK104, given proper cooling of course. 732MHz at stock is pretty damn little. Maybe it just can't clock well and that's why we'll never see it in a GeForce card, although NV greed seems like a more probable explanation.
BTW, where are the compute benchmarks? We should have seen tons of them by now.
If they have enough to sell as a GeForce card, I'm fairly certain they will. After all, it's just the first chip that costs million$ to make; after that, selling them for ~$600 each would make plenty of money, assuming decent yields and that they don't have to take fab resources that would be better used for higher-volume parts.
In the end, I think what will determine that is whether they can compete with AMD's top chip using a smaller chip. If AMD focuses on making a top consumer chip that's also good at compute, nVidia might be able to do that.
Chief Executive of AMD: "We Are Not Interested in Low-Volume Customers."
But reducing manufacturing costs in many ways causes market share to decrease. Many criticized Nvidia Corp. for pumped-up OpEx due to implementation costs and other manufacturing-related charges that the company faced during the Kepler GPU family ramp-up. As time has shown, Nvidia is now the No. 1 supplier of notebook GPUs (based on data from Mercury Research provided by Nvidia) because of AMD's reluctance to help integrate its Mobility Radeon products based on the recent architecture.
mm...good idea...let's compare some mythical, non-existent chip with AMD's freely available production chip...that'll work... :whistle:
You might want to google "the world's fastest supercomputer".
The one with all those AMD processors? Yea, I can't afford one of those yet.
Fail, lol. Is this about the most affordable GPU? Stick with integrated.
Kepler. Just look at the new mobile GPUs to see the power efficiency. The GTX 680MX is as fast as the desktop GTX 580.
I can't comment on compute as it doesn't matter to me. But even though the 7970 GE is clearly faster than the 680, I wonder how close it would be if the 680 had a 384-bit memory bus. Sure, it would raise the power consumption of the Kepler chip as well, but I wonder whether it would still use less power and be equal to, or perhaps faster than, the 7970 GE.
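The what-if is easy to put numbers on, since peak memory bandwidth is just bus width times effective data rate. A quick sketch, with the 6 Gbps GDDR5 data rates assumed from the cards' published specs:

```python
def bandwidth_gb_s(bus_bits, data_rate_gbps):
    """Peak bandwidth in GB/s: bus width in bytes times effective GDDR5 data rate."""
    return bus_bits / 8 * data_rate_gbps

gtx680_actual = bandwidth_gb_s(256, 6.0)  # 192 GB/s, as the card shipped
gtx680_whatif = bandwidth_gb_s(384, 6.0)  # the hypothetical 384-bit 680
hd7970_ge     = bandwidth_gb_s(384, 6.0)  # the 7970 GE's 384-bit bus
print(gtx680_actual, gtx680_whatif, hd7970_ge)
```

So a 384-bit 680 at the same memory data rate would match the 7970 GE's bandwidth, at the cost of a wider memory controller and more DRAM power.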
Honestly, who are we to actually discuss the merits of the architectures? As if the majority really know what any of it actually means.
We just sit back eating Cheetos, reading what the smart people wrote, copying and pasting big words and appropriate terminology as we go. A few days later and a few beers deep, we can then be found at Best Buy talking to people in the video card aisle, telling them how we are members of an elite AMD-biased video forum and directing them away from that 5200 Ultra they had their unsuspecting fingers on, learning them about the promised land we call Newegg.
After a few years of repeating this behavior, we then earn the right to call everybody and anybody a fanboi based on what video card they have, and throw stones at their mothers.
Ehh, I don't bother with Best Buy; I hang around Fry's. And who needs copy and paste? I R smurt.
Even if they could make enough of them to release as a GeForce card, I'm not so sure they would do it. Even with slower and more profitable cards, they still outsell AMD's cards, and despite all that, NV's market share seems to be rising at AMD's expense. They don't need the fastest single-GPU card on the market to outsell the competitor. The GTX 680 is slower and more expensive to boot, yet it still outsells the 7970 GHz by a lot. For example, in my country the cheapest GTX 680 costs 25% more than the cheapest 7970 I could find, a Gigabyte clocked at 1GHz (I'm not sure if it's the GHz Edition or just a normal overclocked card, but it doesn't really matter). And guess what? NV sells more GTX 680s than AMD sells 7970s, and it's not some trivial difference; it sells way more.
A lot of people hold out for nVidia's release because they believe they will release something that is faster than AMD. Most people believe that will be true next round, as well. Let the 8970 be faster than the 780 and I think you'll see a lot of nVidia customers jump ship.
Nvidia is lucky that AMD is largely incompetent on the software side.
