Big Kepler news


Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
The problem with any non-proprietary software is support. When a company invests in a project, the first thing they want is top-notch customer support. You can't really build an application when you aren't certain about the support you are going to get in the future. AMD has always been extremely lazy in this regard. I am all for open-source stuff, but unless they provide a strong support mechanism for corporate customers, I see no growth potential in them.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
The above post (wall of text) keeps saying that CUDA is falling apart / failing, yet not one shred of evidence is presented. Presenting an Nvidia press release as some kind of proof that they are giving up is ironic.
Yes, let's just discount supercomputing. It's one of the areas that drives computing forward. Weather prediction has many applications, and many of them are military driven.
Supercomputers Power Social Networks to Cancer Research

These massive systems, which have thousands of processors and require huge amounts of electricity, are in use 24×7, 365 days a year. There is a constant demand to make them both faster and more energy efficient.
Graphics processors – originally developed to blast pixels in video games – help speed up these systems and increase their energy efficiency. GPUs are currently deployed in three of the world’s fastest supercomputers, and the adoption rate of GPUs in supercomputing centers is increasing dramatically.
We hope this video dispels some of the mystery around these systems. It’s important to know what they’re used for, because your next car, jet engine, energy source or medical device will likely be developed by one. And when you retweet this story, those messages ricochet worldwide, instantly, thanks to computers like these.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
OpenCL isn't AMD's, and its "support" is up to the people who code for it. The same goes for CUDA and nVidia, actually. I don't understand this "support" issue...

AMD has always been slow to push software support, but thankfully for them, Samsung, Apple, Microsoft, Google and even nVidia are pushing OpenCL forward. So it really doesn't matter how hard AMD pushes the software side (it's Khronos's standard, not AMD's); they just need to provide the hardware for it, just as Apple, Samsung and even nVidia will. Your OpenCL browser will work on your laptop, your phone and your desktop, whereas Google couldn't give a crap about CUDA outside of their HPCs.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
I have used CUDA for medical imaging and I definitely enjoyed it. We developed shared libraries with Nvidia for our project. OpenCL may be great, but I have never used it at all.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
OpenCL isn't AMD's, and its "support" is up to the people who code for it. The same goes for CUDA and nVidia, actually. I don't understand this "support" issue...

AMD has always been slow to push software support, but thankfully for them, Samsung, Apple, Microsoft, Google and even nVidia are pushing OpenCL forward. So it really doesn't matter how hard AMD pushes the software side (it's Khronos's standard, not AMD's); they just need to provide the hardware for it, just as Apple, Samsung and even nVidia will. Your OpenCL browser will work on your laptop, your phone and your desktop, whereas Google couldn't give a crap about CUDA outside of their HPCs.

You misunderstood my point. When I start a venture, I need to be certain about the support. If you have CUDA-related issues, Nvidia provides ticket-based support, which is entirely absent for OpenCL.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
The above post (wall of text) keeps saying that CUDA is falling apart / failing, yet not one shred of evidence is presented. Presenting an Nvidia press release as some kind of proof that they are giving up is ironic.
Yes, let's just discount supercomputing. It's one of the areas that drives computing forward. Weather prediction has many applications, and many of them are military driven.

Don't mistake CUDA's popularity in HPC for CUDA's overall popularity. CUDA was essentially the first to truly support GPGPU in a big way, and the HPC community ate it up.

And they still are eating it up, but it's only the HPC/workstation crowd that seems to care about it, and that's because it is still, despite nVidia's half-hearted attempt at open-sourcing it, an nVidia-only HPC thing. I don't disagree with you that it's doing well in HPC; in fact, I said so myself. Where it's failing is everywhere else, and that's a big problem.

Multiple comparisons have been drawn between CUDA and OpenCL since its inception.[57][58] They both draw the same conclusions: if the OpenCL implementation is correctly tweaked to suit the target architecture, it performs no worse than CUDA. Because the key feature of OpenCL is portability (via its abstracted memory and execution model), the programmer is not able to directly use GPU-specific technologies, unlike CUDA. CUDA is more acutely aware of the platform upon which it will be executing because it is limited to Nvidia hardware, and therefore, it provides more mature compiler optimisations and execution techniques.

http://en.wikipedia.org/wiki/OpenCL

Companies don't want to rely on nVidia hardware and pay a CUDA-tax on Teslas if they don't have to.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
IBM built its business, in part, on guaranteed support for the hardware it sold. When you get into situations that are considered mission-critical, you want support you can count on.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
But that argument falls apart when you consider that Intel doesn't, and more importantly can't, use CUDA either.

CUDA's been slowing down lately and open-sourced alternatives are picking up, namely openCL. Unlike CUDA which can only be used on nVidia hardware (which makes it utterly useless for the mobile platform) the open-sourced software can be bridged to nearly all hardware regardless of who makes it.

Don't mistake CUDA's popularity in HPC with CUDA's overall popularity. CUDA was essentially the first to truly support GPGPU in a large fashion and the HPC community ate it up. You're right in that AMD doesn't push their chip's proprietary anything but that's why other companies along with AMD push forward the open-sourced stuff. Hell, the reason openCL was started and is succeeding is due to Apple and in a very un-Apple-like fashion, it's available to everyone.

CUDA's going to fall apart. There's no reason for another x86-like monopoly when you lock yourself out of any competition and charge inflated prices (Tesla). nVidia got hammered by GCN, make no mistake. nVidia locked themselves in a room with their shiny toy and asked everyone to pay a fee to get in but consumers have been realizing they can find much the same toys in other rooms that don't charge anything at all. Even nVidia realized this wasn't the brightest move and that their strategy needed fixing.

You realize Nvidia supports OpenCL and has done so better than AMD right?
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
Better than AMD? I'm assuming you mean pushing for OpenCL as a standard? Otherwise, in OpenCL itself nVidia haven't done very well at all.

[chart: LuxMark OpenCL benchmark results]

[chart: SiSoft Sandra GPGPU benchmark results]


Moreover, Nvidia limits 64-bit double-precision math to 1/24 of single-precision, protecting its more compute-oriented cards from being displaced by purpose-built gamer boards. The result is that GeForce GTX 680 underperforms GeForce GTX 590, 580 and to a much direr degree, the three competing boards from AMD.

As far as performance goes, it's an absolute slaughter. The Tesla cards will certainly close that gap and likely pull ahead, but at a far higher cost. Recently AMD have been pushing OpenCL/DirectCompute far harder than nVidia has, and with good reason: they want to put their APUs and GCN architecture to use in the HPC segment as well as mobile and desktop. nVidia, on the other hand, has neutered its desktop discrete cards in order to push its Tesla designs.
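The 1/24 double-precision rate quoted above translates into rough peak numbers. A back-of-the-envelope sketch, using commonly cited board specs (1536 CUDA cores at about 1.006 GHz for the GTX 680; 512 cores at about 1.544 GHz shader clock and a 1/8 DP cap for the GTX 580); treat these figures as assumptions rather than vendor-verified data:

```python
# Rough theoretical peak: cores * 2 ops/clock (fused multiply-add) * clock in GHz
def sp_gflops(cores, ghz):
    return cores * 2 * ghz

# GTX 680 (Kepler, GeForce): double precision capped at 1/24 of single precision
gtx680_sp = sp_gflops(1536, 1.006)   # ~3090 GFLOPS single precision
gtx680_dp = gtx680_sp / 24           # ~129 GFLOPS double precision

# GTX 580 (Fermi, GeForce): double precision capped at 1/8 of single precision
gtx580_sp = sp_gflops(512, 1.544)    # ~1581 GFLOPS single precision
gtx580_dp = gtx580_sp / 8            # ~198 GFLOPS double precision

print(f"GTX 680 DP: {gtx680_dp:.0f} GFLOPS, GTX 580 DP: {gtx580_dp:.0f} GFLOPS")
```

On these rough numbers the older GTX 580 comes out well ahead of the GTX 680 in peak double precision, which is consistent with the quoted observation that the 680 underperforms its predecessor in compute.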
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
50% more of those. And the memory will probably be clocked lower than the GTX 680 if the bus is 512-bit to save on power consumption. I'd put memory bandwidth at around 75% higher than the GTX 680.

And most things you mentioned, namely the CUDA cores and memory bandwidth, are there to improve compute performance.

The GTX 680 is around 23% faster than the GTX 580. If the GTX 780/GTX 685 is 25% faster than that it would mean an improvement of almost 50%, which isn't bad.

As technology progresses we get to higher points of diminishing returns.

My sources say +100%. If properly fed, more CUDA cores mean more gaming performance as well. The 680 is 35-40% faster than the 580, see the [H] review.
I only agree about the diminishing returns, but I don't think we're really there yet at 28nm from a technological standpoint. Economically (yields, prices), yes, it hampers performance increases. But I fully expect the GTX 780 to be 80% faster than the GTX 580 once the process is more mature.

We should get used to more even increases. So no big increase of 60% and then another 15% for the refresh, but each time 30-40% instead.
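The numbers in this exchange are easy to sanity-check. A quick sketch: the GTX 680's 256-bit bus at 6.008 GT/s effective is a published spec, while the 512-bit card at 5.25 GT/s is purely an illustrative assumption matching the "lower-clocked, wider bus" scenario above:

```python
# Memory bandwidth in GB/s: (bus width in bits / 8 bytes) * effective rate in GT/s
def bandwidth_gbs(bus_bits, gtps):
    return bus_bits / 8 * gtps

gtx680_bw = bandwidth_gbs(256, 6.008)        # ~192 GB/s
hypothetical_bw = bandwidth_gbs(512, 5.25)   # 336 GB/s: twice the width, lower clock
print(f"Hypothetical 512-bit card: {hypothetical_bw / gtx680_bw - 1:.0%} more bandwidth")

# Compounding the quoted speedups: +23% (580 -> 680), then +25% (680 -> next chip)
total = 1.23 * 1.25
print(f"Cumulative speedup over the GTX 580: {total - 1:.0%}")
```

Note that 1.23 x 1.25 compounds to roughly 54%, slightly more than the "almost 50%" quoted above, and the wider-bus sketch lands at about 75% more bandwidth, matching the earlier estimate.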
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
You realize Nvidia supports OpenCL and has done so better than AMD right?

How has their support been better? nVidia consistently pushes its proprietary solution in place of OpenCL. Sure, they support OpenCL, but IMHO it's more of a bullet point to put on their spec sheet.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
From a narrow perspective, the call for non-proprietary standards is basically saying that intellectual property, ideas, and solutions that someone or some group creates should be free for all. That's great and all, but it does not pay your bills if you are a programmer, etc.
Nvidia supports higher-learning centers all over the world, and I can only see this as a positive: they donate hardware and software, create tuition initiatives, etc. And it's why they are expanding.
CUDA Center of Excellence


http://research.nvidia.com/content/cuda-centers-excellence
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
What are you talking about? How does CUDA being proprietary lead you to some weird pseudo-communist sentence?

How about this...

Only nVidia can use CUDA, and only nVidia does, due to hardware locking and the way it's built from the ground up. OpenCL allows the use of a variety of hardware from many manufacturers, so it doesn't rely on a single hardware source, or on any particular hardware at all.

What does this mean?
I can only use CUDA if I have an nVidia GPU, and for serious compute not your regular GPU (which is neutered now) but a Tesla designed specifically for that purpose. We *might* see CUDA in Tegra designs but, again, only on nVidia hardware.
OpenCL can be used on anything from x86 to ARM, by any manufacturer that wants to use it, from your phone to your desktop to HPC.

They're expanding how, exactly? In the same HPC space that I said they were? How's CUDA doing everywhere else?
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
What are you talking about? How does CUDA being proprietary lead you to some weird pseudo-communist sentence?

How about this...

Only nVidia can use CUDA, and only nVidia does, due to hardware locking and the way it's built from the ground up. OpenCL allows the use of a variety of hardware from many manufacturers, so it doesn't rely on a single hardware source, or on any particular hardware at all.

What does this mean?
I can only use CUDA if I have an nVidia GPU, and for serious compute not your regular GPU (which is neutered now) but a Tesla designed specifically for that purpose. We *might* see CUDA in Tegra designs but, again, only on nVidia hardware.
OpenCL can be used on anything from x86 to ARM, by any manufacturer that wants to use it, from your phone to your desktop to HPC.

They're expanding how, exactly? In the same HPC space that I said they were? How's CUDA doing everywhere else?


AMD could license Cuda.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
AMD could license Cuda.

OpenCL isn't AMD's baby; it's a baby cared for by Apple, MS, Google, Samsung, Intel and even nVidia themselves.

If those companies aren't and haven't jumped on the CUDA bandwagon (outside of HPC) then why would AMD do it?

CUDA offers one clear advantage: it allows more direct contact with the hardware itself. It inherently isn't designed for anything other than nVidia GPUs (CUDA cores), because otherwise it would lose that advantage and essentially be no different than OpenCL, except that you'd pay nVidia a tax for it. That makes no sense.

I don't think some of you guys grasp the intention of openCL for GPGPU.

OpenCL is an open standard maintained by the non-profit technology consortium Khronos Group. It has been adopted by Intel, Advanced Micro Devices, Nvidia, and ARM Holdings.

OpenCL was initially developed by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, IBM, Intel, and Nvidia. Apple submitted this initial proposal to the Khronos Group. On 16 June 2008, the Khronos Compute Working Group was formed[3] with representatives from CPU, GPU, embedded-processor, and software companies.

The original intent was to be cross-platform and cross-hardware compatible, because IBM, nVidia, Intel, Samsung, Apple, etc. all have their own hardware. For a user to run an application on their phone and their desktop without jumping through hoops (as you would with CUDA), an open platform benefits all parties: it reduces time and cost across the board, and the consumer benefits. Software benefits from this too, which is why Microsoft and Google have also joined the bandwagon; they both have OSes that stretch across various hardware platforms, from cell phones to desktops to servers.

CUDA was never designed to be used outside of nVidia and more specifically, outside of HPC.
 
Last edited:

OVerLoRDI

Diamond Member
Jan 22, 2006
5,490
4
81
I am most curious about BigK. Obviously it is very compute-oriented, but I want to see the differences between the architectures. Also, how similar it is to the 7970, which is very compute-oriented as well.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
I am most curious about BigK. Obviously it is very compute-oriented, but I want to see the differences between the architectures. Also, how similar it is to the 7970, which is very compute-oriented as well.
Big Kepler and the 7970 won't even be a contest. We need AMD to create something new.

Things are really heating up. Hopefully AMD has another card in the wings that will tape out soon and compete with Big Kepler. At this point, though, nVidia is only really competing with themselves at the super high end, and Big Kepler will probably cost $600+ with the GTX 680 remaining at $500; perhaps $450 after MIR or something.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
What are you talking about? How does CUDA being proprietary lead you to some weird pseudo-communist sentence?

How about this...

Only nVidia can use CUDA, and only nVidia does, due to hardware locking and the way it's built from the ground up. OpenCL allows the use of a variety of hardware from many manufacturers, so it doesn't rely on a single hardware source, or on any particular hardware at all.

What does this mean?
I can only use CUDA if I have an nVidia GPU, and for serious compute not your regular GPU (which is neutered now) but a Tesla designed specifically for that purpose. We *might* see CUDA in Tegra designs but, again, only on nVidia hardware.
OpenCL can be used on anything from x86 to ARM, by any manufacturer that wants to use it, from your phone to your desktop to HPC.

They're expanding how, exactly? In the same HPC space that I said they were? How's CUDA doing everywhere else?

You have great faith in open-source software. How many modern games have you played that were developed for OpenGL? Once upon a time OpenGL was heralded as a DirectX killer, but it never worked out.

You are constantly missing one vital point: big companies in HPC don't care much about Tesla prices as long as the ROI is meaningful. Before venturing into an expensive project, they typically have SLAs with the vendor. In the end it doesn't matter how much you paid for the Tesla card, but rather what your ROI was. From my experience, it's quite profitable for the end business.

Linux was likewise heralded as a Windows replacement, but see how that worked out. If you have programmed for Linux, you already know there are minor differences between distros that can break your code. Optimization is the key to extracting performance from open-source software, and sometimes developers have to walk the extra mile. It's easier to optimize software for a particular architecture than for a general one. You don't see CUDA everywhere because programming for GPUs is no easy task, be it NVIDIA or AMD. Okay, enough derailing the thread :biggrin:

BigK should be a compute monster, but for gaming purposes I don't think it will be much of an upgrade.
 

blckgrffn

Diamond Member
May 1, 2003
9,686
4,345
136
www.teamjuchems.com
You have great faith in open-source software. How many modern games have you played that were developed for OpenGL? Once upon a time OpenGL was heralded as a DirectX killer, but it never worked out.

You are constantly missing one vital point: big companies in HPC don't care much about Tesla prices as long as the ROI is meaningful. Before venturing into an expensive project, they typically have SLAs with the vendor. In the end it doesn't matter how much you paid for the Tesla card, but rather what your ROI was. From my experience, it's quite profitable for the end business.

Linux was likewise heralded as a Windows replacement, but see how that worked out. If you have programmed for Linux, you already know there are minor differences between distros that can break your code. Optimization is the key to extracting performance from open-source software, and sometimes developers have to walk the extra mile. It's easier to optimize software for a particular architecture than for a general one. You don't see CUDA everywhere because programming for GPUs is no easy task, be it NVIDIA or AMD. Okay, enough derailing the thread :biggrin:

BigK should be a compute monster, but for gaming purposes I don't think it will be much of an upgrade.

CUDA is more Glide than DirectX.

In a year or two, every new x86-based PC shipped is likely to have OpenCL support. Even DirectCompute might be there.

While CUDA, like the Power architecture, will likely retain its (very profitable) niche for some time, it seems unlikely to persevere in the mainstream long term due to its closed ecosystem and limited vendor support (i.e., nVidia only).

Nvidia just doesn't (and won't) have the level of market share to pull that off - and they have said as much.

You can talk about ROI all you want, but vendor lock-in is a huge offsetting risk in that regard. You don't want to make a huge investment in the next Silicon Graphics, or similar. Those Octane workstations still look pretty sweet, though...

In any case, we need to see AMD get their big boy pants on and show us the fruit of their Compute focus.

If BigK is to compute what the 680 is to gaming, I am ready to be impressed. Bring it on :)
 
Last edited:

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
CUDA is more Glide than DirectX.

Glide was simplified GL, built to run on cards that couldn't do full GL (e.g. 16-bit textures).

CUDA is not simplified OpenCL; it's the other way around. OpenCL is far behind CUDA in functionality and features, not ahead.