BenSkywalker
Diamond Member
Nine megawatts, and people still don't understand why Tesla has tighter wattage concerns than GeForce?
so you're thinking 14/15 SMX for power purposes? It could just as well be yields, no?
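Back-of-the-envelope, using only the figures in this thread (9 MW total and roughly 19,000 GPU-equipped nodes), the per-node power budget is tight:

\[
\frac{9\,\text{MW}}{\approx 19{,}000\ \text{nodes}} \approx 474\ \text{W per node, shared by CPU, GPU, memory and interconnect}
\]

So even a 20-30 W difference per GPU adds up to hundreds of kilowatts across the whole machine.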
The GPU decision was just as simple. NVIDIA has been focusing on non-gaming compute applications for its GPUs for years now. The decision to partner with NVIDIA on the Titan project was made around 3 years ago. At the time, AMD didn't have a competitive GPU compute roadmap. If you remember back to our first Fermi architecture article from back in 2009, I wrote the following:
"By adding support for ECC, enabling C++ and easier Visual Studio integration, NVIDIA believes that Fermi will open its Tesla business up to a group of clients that would previously not so much as speak to NVIDIA. ECC is the killer feature there."
At the time I didn't know it, but ORNL was one of those clients. With almost 19,000 GPUs, errors are bound to happen. Having ECC support was a must have for GPU enabled Jaguar and Titan compute nodes. The ORNL folks tell me that CUDA was also a big selling point for NVIDIA.
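A quick illustration of why ECC becomes non-negotiable at that scale (p here is a hypothetical per-GPU memory soft-error probability per unit time, not a measured figure):

\[
P(\text{at least one error somewhere}) = 1 - (1 - p)^{19\,000} \approx 19\,000\,p \quad \text{for small } p,
\]

so an error rate that is negligible on a single card becomes a near-certainty across the full machine unless it is detected and corrected.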
As someone else figured out, it's 13 SMX, if the numbers are correct. It could be either reason, but most likely it's yields. If it were just power, they could reduce clocks and use fully functioning chips.
I do think that it's 14, according to the numbers I've seen. The AT K20 preview said the max core count was 15x192=2880 http://www.anandtech.com/show/5840/gtc-2012-part-1-nvidia-announces-gk104-based-tesla-k10-gk110-based-tesla-k20 and the ORNL Titan AT write-up says 2688 CUDA cores. So 14 SMX.
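The arithmetic behind that, using only the figures from those two write-ups:

\[
\frac{2688\ \text{CUDA cores}}{192\ \text{cores per SMX}} = 14 \quad \text{SMX enabled, out of 15 on a full GK110.}
\]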
Hardly any surprise:
That is why I laugh when people try to use OpenCL or DirectCompute for gauging GPGPU performance... CUDA is way more relevant... even if AMD cannot run CUDA.
Yeah, the same way D3D has become extinct compared to OGL.
That is not NV's fault. AMD could have licensed it. NV simply has a vastly superior ecosystem.
I wasn't aware that D3D couldn't be used on hardware not owned by Microsoft?
I fail to see the comparison.
I don't laugh, because CUDA is going to become extinct simply because it's not open source.
You mean like Windows vs Linux?
Like DirectX vs OpenGL?
Wishful thinking is just that... wishful thinking.
0/10
While I don't agree that CUDA is going to go away, I don't think your comparisons really prove your point for HPC. Who uses Windows and Dx?
CUDA (and its ecosystem) is light-years ahead of the competition... like it or not.
The majority of PC users and gamers.
Just like CUDA is the most-used GPGPU language.
The open source crowd are a boring bunch... always with the "wishful" arguments... to heck with facts or reality.
Bottom line is still:
CUDA (and its ecosystem) is light-years ahead of the competition... like it or not.
I'm not talking about liking anything. Stop making it personal. I'm only pointing out that we aren't talking about PCs or gamers. In just about everything else, and particularly HPC, Linux rules. Specifically because it's open source and you can do whatever you need to do to it to fit it to your needs.
Point me to something similar to CUDA.
Not something WISHING it were CUDA... but something that actually is a real competitor?
If not... your point is nothing but a red herring.
Xeon Phi would like to argue that point with you.
Not saying how it will turn out, but at the current time it seems like the HPC space is going to be owned by the two of them.
It is still x86, and if you pair it with a GPU it can still be accelerated via OpenACC. I don't see x86 rivaling GPUs in parallel workloads anytime soon. Moreover, all the existing apps need to be redesigned, not just recompiled with the -mmic flag.
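For anyone who hasn't seen it, this is roughly what "accelerated via OpenACC" looks like in practice: a plain C loop with a directive that an OpenACC compiler (e.g. PGI's pgcc with -acc) offloads to the GPU. A minimal sketch, not taken from any code actually running on Titan:

```c
#include <stdlib.h>

/* SAXPY: y = a*x + y, offloaded with a single OpenACC directive.
   A compiler without OpenACC support simply ignores the pragma and
   runs the loop on the host CPU. */
void saxpy(int n, float a, const float *x, float *y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(n, 3.0f, x, y);   /* every y[i] is 5.0f afterwards */

    free(x);
    free(y);
    return 0;
}
```

That is the directive-based path; a native Xeon Phi build with -mmic compiles the same loop unchanged, but getting it to actually perform well on the MIC cores usually still means restructuring and vectorizing the code, which is the poster's point.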
