nVidia drops CUDA for GTX465?

VirtualLarry

No Lifer
Aug 25, 2001
56,571
10,205
126
Never happen. CUDA is NV's bread and butter. Besides, CUDA is just a software interface into their drivers to take advantage of the computational units that are already in the hardware. Removing driver support for CUDA wouldn't speed anything up, and would negatively affect the marketability of the product.
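To put it concretely, the "software interface" part is just a runtime query away; here's a rough sketch (my own example, nothing official) of how an app probes for CUDA support and falls back to the CPU if it isn't there:

Code:
// Minimal sketch: ask the CUDA runtime how many CUDA-capable devices the driver exposes.
// If driver support for CUDA were removed, this is where an application would find out.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        printf("No CUDA devices available (%s), using the CPU path instead\n",
               cudaGetErrorString(err));
        return 1;
    }
    printf("Found %d CUDA-capable device(s)\n", count);
    return 0;
}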

I LOLed.
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
Unpossible. CUDA support is what makes an NV card an NV card. Without it you lose PhysX and acceleration of various third party apps (encoding, etc) in the ecosystem NV has been trying to grow for the past few years.

What *is* possible is driver limitations on CUDA performance. For instance, the 480 delivers something like 1/4 the DP performance of the exact same hardware sold as a Tesla card.
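If you wanted to see that cap for yourself, a rough double-precision microbenchmark along these lines (my own untuned sketch, not from any real tool) would show the gap between a GeForce and a Tesla built from the same silicon:

Code:
// Rough sketch: time a long double-precision FMA loop and report throughput.
// On a driver-capped GeForce this should come out well below the same chip sold as a Tesla.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dp_fma(double *out, int iters) {
    double a = 1.000001, b = 0.999999;
    double c = threadIdx.x * 1e-6;
    for (int i = 0; i < iters; ++i)
        c = fma(a, c, b);                 // one double-precision fused multiply-add per iteration
    out[blockIdx.x * blockDim.x + threadIdx.x] = c;  // store result so the loop isn't optimized away
}

int main() {
    const int blocks = 256, threads = 256, iters = 1 << 18;
    double *d_out;
    cudaMalloc(&d_out, blocks * threads * sizeof(double));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    dp_fma<<<blocks, threads>>>(d_out, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double flops = 2.0 * blocks * threads * (double)iters;   // 2 flops per FMA
    printf("DP throughput: ~%.1f GFLOPS\n", flops / (ms * 1e6));

    cudaFree(d_out);
    return 0;
}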

Another possibility is removing or limiting the amount of cache and other Fermi improvements over the previous series.

I'm not sure if either approach will win on the power and heat front, but it would definitely help with product differentiation.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
I could see them downgrading the level of CUDA support back to the level of the G80 or GT200 chips, i.e. primarily a normal GPU and not a compute GPU, and thus Fermi in name only, but they would have had to do that a long time ago. (If they did, it could potentially make the GTX 465 a very competitive and compelling product.)
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
That article is pure speculation. The only thing that they could really do would be to somewhat gimp CUDA performance by reducing shaders or cache.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
The concept makes sense, although not the execution. Rumors have the GTX 460 being a further cut-down version of the GF100 core (probably as a sink for truly screwed chips). The GTX 465, IIRC, was supposed to be the GF104 (or 106, I forget) version that's a true "midrange" chip with a TDP of ~150W. Probably the easiest way for them to cut the chip down was to remove a lot of the "deadweight" transistors that are used for GPGPU but not by most of the consumer market.

I'm surprised that it's everything, though, as I would think that would cut out PhysX and CUDA functions in games (just recently, Just Cause 2's extra graphical enhancements were CUDA-only). Maybe they're coming to the realization that, despite all their market speak, their midrange GPUs aren't capable of producing a satisfying PhysX experience. Who knows, we'll see.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Fox5 said:
I could see them downgrading the level of CUDA support back to the level of the G80 or GT200 chips, i.e. primarily a normal GPU and not a compute GPU, and thus Fermi in name only, but they would have had to do that a long time ago. (If they did, it could potentially make the GTX 465 a very competitive and compelling product.)

G80 and GT200 are more compute GPUs than graphics GPUs as well... nVidia just hadn't put as much emphasis on it yet, in terms of marketing.
The G80 was nothing short of revolutionary in terms of GPGPU.
It is essentially what OpenCL and DirectCompute are based on, and an original G80 card actually supports both these APIs, and supports them well.
Does a Radeon 2900 support them? Nope.
A Radeon 3800 then? Nope...
A Radeon 4000 maybe? Yes, but with very limited performance compared to the older G80 architecture.

If you had 'GT200 features', you'd still have a VERY potent GPGPU solution. I suppose you'd mainly be missing out on the C++ programmability, but neither OpenCL nor DirectCompute support that anyway (because AMD's hardware can't do it).
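That "level" of support is just the compute capability the hardware reports: G80 is 1.0/1.1, GT200 is 1.3 (which adds double precision), Fermi is 2.0 (which adds the C++ and recursion features). A rough sketch of checking it (my own example, assuming device 0):

Code:
// Minimal sketch: read the compute capability and infer the feature level from it.
// G80-class parts report 1.0/1.1, GT200 reports 1.3, Fermi reports 2.0.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);                    // query device 0
    printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);

    bool hasDoublePrecision = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
    bool hasCppAndRecursion = (prop.major >= 2);
    printf("double precision: %s, C++/recursion in kernels: %s\n",
           hasDoublePrecision ? "yes" : "no",
           hasCppAndRecursion ? "yes" : "no");
    return 0;
}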
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Scali said:
G80 and GT200 are more compute GPUs than graphics GPUs as well... nVidia just hadn't put as much emphasis on it yet, in terms of marketing.
The G80 was nothing short of revolutionary in terms of GPGPU.
It is essentially what OpenCL and DirectCompute are based on, and an original G80 card actually supports both these APIs, and supports them well.
Does a Radeon 2900 support them? Nope.
A Radeon 3800 then? Nope...
A Radeon 4000 maybe? Yes, but with very limited performance compared to the older G80 architecture.

If you had 'GT200 features', you'd still have a VERY potent GPGPU solution. I suppose you'd mainly be missing out on the C++ programmability, but neither OpenCL nor DirectCompute support that anyway (because AMD's hardware can't do it).

G80 and R58xx are about on par in GPGPU. If consumer GPGPU apps ever take off, I'd imagine they'll target the G80 level of support.

Fermi is quite a bit more GPGPU-oriented than G80 or GT200. G80 is still relatively compact for the amount of performance it offers, and even GT200 would likely make a cost-competitive gaming solution at 40nm.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
They can't cut CUDA entirely anyway, since that would kind of screw them in games like Just Cause 2, where there are extra features that use CUDA. It makes no sense to have NV-only features coded and for NV to then release new high-end cards which don't support those exact features.
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
I thought CUDA was just a programming language that worked with their architecture. How and why would they drop it for the GTX 465? It doesn't make much sense. (Of course, neither does disabling PhysX on computers that don't use an nVidia card as the primary card, since it costs them sales and money, but they do that anyway.)
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
It makes absolutely no sense to reduce the market penetration and footprint of CUDA-capable compute devices.

Doing this would lower the incentive (market opportunity) for any software developers considering investing in CUDA code paths, as TMPGEnc's developers have.

That said, and considering this logic holds equally true for PhysX and we've all borne witness to how well that asset has been managed, I'm compelled to conclude the rumor is most likely 100% true. :p
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Fox5 said:
Fermi is quite a bit more GPGPU-oriented than G80 or GT200. G80 is still relatively compact for the amount of performance it offers, and even GT200 would likely make a cost-competitive gaming solution at 40nm.

No it isn't.
And that is my professional opinion as a developer.

If you want to look at it from die size alone, let me point out that Fermi includes DX11 functionality and a parallel triangle setup/tessellation stage: things that increase the transistor count considerably while being 100% graphics features.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Martimus said:
I thought CUDA was just a programming language that worked with their architecture.

CUDA is not a programming language.
It is a 'framework'. nVidia uses the CUDA name to refer both to the hardware architecture itself and to the software layer used to program it.
C/C++ for CUDA, OpenCL and DirectCompute are languages implemented on top of CUDA.
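For instance, "C for CUDA" is just C with a kernel qualifier and a launch syntax; OpenCL and DirectCompute express the same kind of kernel in their own languages, but on nVidia hardware they all run on the same CUDA architecture underneath. A trivial sketch (my own example):

Code:
// Minimal "C for CUDA" example: a C-style kernel compiled by nvcc and launched via the runtime.
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;    // one thread per element
    if (i < n)
        data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n); // the <<<...>>> launch is the CUDA-specific part
    cudaDeviceSynchronize();
    cudaFree(d_data);
    return 0;
}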
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Scali said:
No it isn't.
And that is my professional opinion as a developer.

So, I take it you're a fan of C? Construct-wise, C++ support and recursive programming are big features for Fermi over G80, though I don't know if Fermi still retains a big advantage over a CPU while using them. G80 got the low-hanging fruit of GPGPU; Fermi gets the rest.
Or at least that's my take on it; feel free to educate me.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Fox5 said:
So, I take it you're a fan of C? Construct-wise, C++ support and recursive programming are big features for Fermi over G80, though I don't know if Fermi still retains a big advantage over a CPU while using them. G80 got the low-hanging fruit of GPGPU; Fermi gets the rest.
Or at least that's my take on it; feel free to educate me.

Why would C++ support be a compute-only feature?
The stream processing units are shared between compute and graphics tasks.
This means that new functionality can be used for graphics shaders as well. The shaders are compiled on the fly by the driver, so it can optimize them for the underlying architecture.

I think my point is more that G80 was VERY heavily compute-oriented. Yes, it was also a fantastic architecture for graphics, but compared to ATi's architectures at the time (the 2900/3800 series), the G80 was clearly MUCH more compute-oriented.
Just because it also creamed ATi cards in graphics doesn't mean it was a graphics-oriented architecture.
The 3000- and 4000-series were also quite a bit smaller than nVidia's competing offerings.
It's just that nVidia's architectures were very energy-efficient compared to ATi's offerings, and the pricing was very competitive, so the die size and transistor count weren't that apparent.
Nevertheless, G80 and GT200 were very compute-heavy architectures, and Fermi is no exception.
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
Scali said:
No it isn't.
And that is my professional opinion as a developer.

If you want to look at it from die size alone, let me point out that Fermi includes DX11 functionality and a parallel triangle setup/tessellation stage: things that increase the transistor count considerably while being 100% graphics features.

I think you should go back and read what you quoted again, then read your retort. You aren't talking about the same thing, and you continue to go on about different things further in this thread.

Fox said that the GTX 480 was a better GPGPU processor than the G80 or the GT200. You say that it isn't better, then go off on how the GTX 480 has features that have nothing to do with GPGPU. How does that tell you anything about their relative GPGPU capabilities? If the GTX 480 is worse at GPGPU than the G80 or GT200, just show the difference in the actual GPGPU capabilities, not the differences in the non-GPGPU abilities.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Martimus said:
I think you should go back and read what you quoted again, then read your retort. You aren't talking about the same thing, and you continue to go on about different things further in this thread.

Fox said that the GTX 480 was a better GPGPU processor than the G80 or the GT200. You say that it isn't better, then go off on how the GTX 480 has features that have nothing to do with GPGPU. How does that tell you anything about their relative GPGPU capabilities? If the GTX 480 is worse at GPGPU than the G80 or GT200, just show the difference in the actual GPGPU capabilities, not the differences in the non-GPGPU abilities.

He compared die size relative to gaming performance, and then drew the conclusion that the balance between GPGPU and graphics has shifted more towards GPGPU in the Fermi design, which I disagree with. Hence I pointed out that a lot of the growth in die size/transistor count is related to graphics, not GPGPU... and aside from that, G80/GT200 are also VERY much geared towards GPGPU, I would almost say *despite* being great performers in graphics... but as I pointed out, a lot of GPGPU functionality is 'recycled' by the graphics pipeline.
You could trim a LOT of fat in the G80/GT200 architecture if you were to optimize it for graphics rather than GPGPU.
 

SHAQ

Senior member
Aug 5, 2002
738
0
76
In related news... Coca-Cola will no longer use kola nuts in its soft drinks.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
There's a reason NV calls them CUDA cores. If they can't do CUDA, what are they then?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Skurge said:
There's a reason NV calls them CUDA cores. If they can't do CUDA, what are they then?

I wonder what Nvidia calls these now:
[Image: chip_and_memory.jpg]

http://www.pcper.com/article.php?aid=244&type=expert