nVidia GT300 in October?

alcoholbob

Diamond Member
May 24, 2005
6,390
469
126
I'd be more interested in finding out what GT300 really is--will it be a single card?

I find it hard to believe it will be faster than a GTX295, which makes me wonder if nvidia will simply say sayonara--EOL to you, mister 295!
 

s44

Diamond Member
Oct 13, 2006
9,427
16
81
Originally posted by: Astrallite
I'd be more interested in finding out what GT300 really is--will it be a single card?

I find it hard to believe it will be faster than a GTX295, which makes me wonder if nvidia will simply say sayonara--EOL to you, mister 295!
Why? They did it to the 7950gx2...

Anyway, the amount of rage in Charlie's posts is appalling.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
I'm guessing that since DX11 is more akin to DX10.1 than to DX10, the architecture GT300 is based on will be quite different from GT200 (finally). I can imagine nVIDIA spending some resources on DP performance, seeing as their current approach is painfully slow compared to the competition, not to mention the introduction of CUDA 3.0 along with their new puppy. There's quite a bit nVIDIA can work on to improve, such as AA performance (by finally getting rid of their 3-year-old ROP design!), so I'm quite looking forward to what nVIDIA can offer. Hopefully they will also pay a bit more attention to performance/mm^2 by optimizing the layout/die size instead of literally going the Godzilla route.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I'm guessing that since DX11 is more akin to DX10.1 than to DX10, the architecture GT300 is based on will be quite different from GT200 (finally).

In essence there is next to no difference between DX10 and DX10.1 in terms of chip architecture, but there is a huge one moving to DX11. Compute shaders change the complexity of the shader designs to a rather staggering degree. I really don't like it from a 3D rendering angle; it is going to be FAR more expensive than it will ever be worth, although it does enable a lot of things on the GPGPU side.
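As a rough sketch of what that GPGPU angle buys, here is a minimal example written as a CUDA kernel (the closest analog to a DX11 compute shader, not the D3D API itself): a thread group cooperating through shared memory and a barrier, which a DX10 pixel shader has no way to express. Names and sizes are illustrative only.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// One thread group sums 256 elements cooperatively: shared memory plus a
// barrier, the two features a compute shader adds over a DX10 pixel shader.
__global__ void blockSum(const float *in, float *out, int n)
{
    __shared__ float tile[256];                // on-chip memory shared by the group
    int tid = threadIdx.x;
    int gid = blockIdx.x * blockDim.x + tid;

    tile[tid] = (gid < n) ? in[gid] : 0.0f;    // each thread loads one element
    __syncthreads();                           // group-wide barrier

    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            tile[tid] += tile[tid + stride];   // tree reduction in shared memory
        __syncthreads();
    }
    if (tid == 0)
        out[blockIdx.x] = tile[0];             // one partial sum per thread group
}

int main()
{
    const int n = 1 << 20, threads = 256, blocks = n / threads;
    std::vector<float> h_in(n, 1.0f), h_out(blocks);
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, blocks * sizeof(float));
    cudaMemcpy(d_in, h_in.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    blockSum<<<blocks, threads>>>(d_in, d_out, n);
    cudaMemcpy(h_out.data(), d_out, blocks * sizeof(float), cudaMemcpyDeviceToHost);

    std::printf("first partial sum: %.1f\n", h_out[0]);   // expect 256.0
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```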

I can imagine nVIDIA spending some resources on DP performance, seeing as their current approach is painfully slow compared to the competition

In a theoretical sense, maybe, but in the real world nV has more usable DP performance than their competitors by a rather huge margin. Not saying that nV won't consider increasing it, but for the applications that can make use of GPGPU DP, nV is the only real game in town atm. nV's design is actually closer to full IEEE DP compliance than several HPC CPUs; the competition isn't close (not knocking them, they have made no claims that they are even in that market, so it isn't as if we should expect anything else).
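To show what "usable" means here, a minimal double-precision daxpy in CUDA follows. It assumes a DP-capable part (GT200-class or later) and is a sketch rather than tuned code; the point is only that IEEE doubles are exposed straight to C.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Plain double-precision daxpy: y = a*x + y. The point is simply that IEEE 754
// doubles are usable directly from C code here; throughput is another matter.
__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    std::vector<double> hx(n, 1.0), hy(n, 2.0);
    double *dx, *dy;
    cudaMalloc(&dx, n * sizeof(double));
    cudaMalloc(&dy, n * sizeof(double));
    cudaMemcpy(dx, hx.data(), n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(double), cudaMemcpyHostToDevice);

    daxpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);
    cudaMemcpy(hy.data(), dy, n * sizeof(double), cudaMemcpyDeviceToHost);

    std::printf("y[0] = %f\n", hy[0]);   // expect 5.0
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```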

There's quite a bit nVIDIA can work on to improve, such as AA performance (by finally getting rid of their 3-year-old ROP design!), so I'm quite looking forward to what nVIDIA can offer.

I see modifying their ROPs as largely a wasted effort. With the latest drivers, what examples would you point to of them having sub-par AA performance? At this point even the 8x gap has closed in almost everything, and honestly the difference between 8xAA and 4xAA isn't one I think is worth serious engineering effort. I would much rather have far more powerful shader hardware, all things considered (don't get me wrong, with unlimited resources I'd want it all, but alas, we don't get those options ;) ).

Hopefully they will also pay a bit more attention to performance/mm^2 by optimizing the layout/die size instead of literally going the Godzilla route.

From a consumer standpoint, I'd much rather worry about performance/watt. If I held a lot of nVidia stock, then I would be more in line with your mindset, but I don't :p
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Ben, there is a world of difference between DX10 and DX10.1. NV's present architecture can't do DX10.1, so NV has to change architecture for DX11, since DX10.1 is a big part of DX11 as far as NV is concerned. But not ATI. ATI still has to improve their DX10.1 in order to do DX11.

Ben, NV has good tech, but to do ray tracing at a cheaper transistor cost NV has to change architecture. Intel's RT is an unknown, but I can tell ya ATI has the right tech for the direction the industry is going. AMD/ATI are a lot stronger than industry insiders are saying. Let us not forget this is the future, which is unknown, and that future is NOW. It's just that Intel went down a road that leads to better convergence. No way does Intel want to keep x86, but for now Intel gets to use x86 to their advantage, same as when x86 was a disadvantage to the Itanic 64-bit instruction line. Intel learned from that lesson. Larrabee isn't Intel's future vision; it's just not a straight line like EPIC was or CUDA is. It's a crooked road designed for compatibility and change, much like what Apple has done with OS X and 64-bit. Now in Snow Leopard, 64-bit is native.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: Nemesis 1
Ben, there is a world of difference between DX10 and DX10.1. NV's present architecture can't do DX10.1, so NV has to change architecture for DX11, since DX10.1 is a big part of DX11 as far as NV is concerned. But not ATI. ATI still has to improve their DX10.1 in order to do DX11.

Ben, NV has good tech, but to do ray tracing at a cheaper transistor cost NV has to change architecture. Intel's RT is an unknown, but I can tell ya ATI has the right tech for the direction the industry is going. AMD/ATI are a lot stronger than industry insiders are saying. Let us not forget this is the future, which is unknown, and that future is NOW. It's just that Intel went down a road that leads to better convergence. No way does Intel want to keep x86, but for now Intel gets to use x86 to their advantage, same as when x86 was a disadvantage to the Itanic 64-bit instruction line. Intel learned from that lesson. Larrabee isn't Intel's future vision; it's just not a straight line like EPIC was or CUDA is. It's a crooked road designed for compatibility and change, much like what Apple has done with OS X and 64-bit. Now in Snow Leopard, 64-bit is native.

God I can't stand this crap. Ben says there is no difference at all between DX10 and 10.1, and Nemesis says there is a "world" of difference between them. Are you both right? Wrong?
Nemesis, what is this "world" of difference? Ben, if there is no difference, why call it 10.1 at all? There has to be something different about it, no?

 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: Nemesis 1
AMD/ATI are a lot stronger than industry insiders are saying.

What inside information do you have that these "insiders" do not?
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
All he seems to have are broad analogies that do not apply to the subject matter.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Ben, if there is no difference, why call it 10.1 at all? There has to be something different about it, no?

It allows you to simplify some shader code, which can speed those particular shaders up if used. The difference between a DX10.1 part and a DX10.0 part is considerably less than the difference between nV's G8x and GTX2xx series parts, which are both 10.0 offerings (well, we know the GTX2xx can already handle a decent amount of 10.1's features; not sure if there is more functionality nV may be hiding due to poor performance).

But to do ray tracing at a cheaper transistor cost NV has to change architecture.

Anyone that pushes real-time ray tracing as their main goal inside the next two years, software or hardware, is going to fail to hit mass-market success. There is no chance for it to succeed unless it first becomes a key part of the console market, so that ports can be offered at reasonable cost, and for that it needs a console backing it. Maybe MS will decide to go that route, but it would be very surprising, as they aren't terribly stupid. I've explained the staggering technical limitations of RTRT to you; you can go ahead and dig up that thread, they haven't changed.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,005
126
Originally posted by: BenSkywalker

With the latest drivers, what examples would you point to of them having sub-par AA performance?
8xMSAA performance in OpenGL.

At 1920x1440 a 4850 is equal to a GTX260+ in Prey, Quake Wars & Riddick, and the 4850 is a lot faster in Doom 3 and Quake 4.

These are recent benchmarks using 182.50.

Given the weakling specs of the 4850 compared to the GTX260+, this simply should not be happening.
 

Zap

Elite Member
Oct 13, 1999
22,377
7
81
Originally posted by: s44
Anyway, the amount of rage in Charlie's posts is appalling.

1 part news
3 parts blogging
14 parts speculation
38974 parts rage

Blend

Voila, a Charlie post.

I don't think he's even trying to make it "news" anymore. He seems to have a flock of fans lapping up his words no matter what he writes, as long as he's pissing on NV.

I guess it is human nature to hate... which is why we have Charlie posts, wars, holocaust...

Regarding GT300, for me it is just easier to "wait and see." Same thing with Larrabee and whatever future ATI cards are... in the cards.

Right now I don't have the energy to hate, love, or speculate about stuff that doesn't currently exist for me to buy.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: BenSkywalker
Ben, if there is no difference, why call it 10.1 at all? There has to be something different about it, no?

It allows you to simplify some shader code, which can speed those particular shaders up if used. The difference between a DX10.1 part and a DX10.0 part is considerably less than the difference between nV's G8x and GTX2xx series parts, which are both 10.0 offerings (well, we know the GTX2xx can already handle a decent amount of 10.1's features; not sure if there is more functionality nV may be hiding due to poor performance).

But to do ray tracing at a cheaper transistor cost NV has to change architecture.

Anyone that pushes real-time ray tracing as their main goal inside the next two years, software or hardware, is going to fail to hit mass-market success. There is no chance for it to succeed unless it first becomes a key part of the console market, so that ports can be offered at reasonable cost, and for that it needs a console backing it. Maybe MS will decide to go that route, but it would be very surprising, as they aren't terribly stupid. I've explained the staggering technical limitations of RTRT to you; you can go ahead and dig up that thread, they haven't changed.

Ok. Pretty much what I thought it was. Thanks.
Nemesis, your turn. Worlds of difference. ??

 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: Wreckage
Originally posted by: Nemesis 1
AMD/ATI are a lot stronger than industry insiders are saying.

What inside information do you have that these "insiders" do not?

Because I am outside the industry, I see things more objectively than those who have stakes in the tech.

It's clear that everything is heading towards EPIC (VLIW). ATI's machine code is VLIW; too many people are overlooking that. We know Intel wanted to leave x86 a long time ago, so they set a new path when the direct path failed: they started buying software IP. That takes us down a winding road from Larrabee to Haswell. AMD already has the VLIW back end; they just lack the compilers. If AMD and IBM are so buddy-buddy, I see no problem with AMD implementing HT. AMD is safe now. They have time, unless they broke the agreement. The rotating registers and mask on Larrabee show it to be EPIC rather than VLIW, which can't do this. But both work well together.

Intel's strength: compilers. AMD's weakness: compilers.

 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: BenSkywalker
In essence there is next to no difference between DX10 and DX10.1 in terms of chip architecture, but there is a huge one moving to DX11. Compute shaders change the complexity of the shader designs to a rather staggering degree. I really don't like it from a 3D rendering angle; it is going to be FAR more expensive than it will ever be worth, although it does enable a lot of things on the GPGPU side.

Actually, there is. That's why nVIDIA hasn't opted for DX10.1: it requires a complete redesign of their TMUs to be DX10.1 compliant.

In a theoretical sense, maybe, but in the real world nV has more usable DP performance than their competitors by a rather huge margin. Not saying that nV won't consider increasing it, but for the applications that can make use of GPGPU DP, nV is the only real game in town atm. nV's design is actually closer to full IEEE DP compliance than several HPC CPUs; the competition isn't close (not knocking them, they have made no claims that they are even in that market, so it isn't as if we should expect anything else).

Proof? What do you mean by usable? nVIDIA's DP runs at 1/8 of its SP speed, whereas the competition's runs at 1/5 of its SP speed, IIRC. If you're right about nVIDIA being closer to full IEEE DP compliance, that's nice, but not when it's painfully slow.
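Back-of-the-envelope numbers for those ratios, using the commonly quoted specs for these parts (treat the exact figures as approximate, not measurements):

```cuda
#include <cstdio>

// Rough peak-rate arithmetic only -- shader counts and clocks are the
// commonly quoted figures for the GTX 280 and HD 4870.
int main()
{
    double nv_sp  = 240 * 1.296 * 2;   // GTX 280: 240 SP ALUs, ~1.296 GHz, MAD = 2 flops -> ~622 GFLOPS
    double nv_dp  =  30 * 1.296 * 2;   // 30 dedicated DP units                           -> ~78 GFLOPS
    double ati_sp = 800 * 0.750 * 2;   // HD 4870: 800 SP ALUs at 0.75 GHz                -> 1200 GFLOPS
    double ati_dp = 160 * 0.750 * 2;   // one DP MAD per VLIW-5 unit (160 of them)        -> 240 GFLOPS

    std::printf("GTX 280 : SP %4.0f GFLOPS, DP %3.0f GFLOPS (1/%.0f rate)\n", nv_sp, nv_dp, nv_sp / nv_dp);
    std::printf("HD 4870 : SP %4.0f GFLOPS, DP %3.0f GFLOPS (1/%.0f rate)\n", ati_sp, ati_dp, ati_sp / ati_dp);
    return 0;
}
```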

I see modifying their ROPs as largely a wasted effort. With the latest drivers, what examples would you point to of them having sub-par AA performance? At this point even the 8x gap has closed in almost everything, and honestly the difference between 8xAA and 4xAA isn't one I think is worth serious engineering effort. I would much rather have far more powerful shader hardware, all things considered (don't get me wrong, with unlimited resources I'd want it all, but alas, we don't get those options ;) ).

Look at BFG10K's post. nVIDIA cards lose quite a bit of performance as soon as one goes over 4xAA. I would also like them to do some work on memory management, because from what I can remember, nVIDIA cards use a lot more memory than ATi when doing similar work.

From a consumer standpoint, I'd much rather worry about performance/watt. If I held a lot of nVidia stock, then I would be more in line with your mindset, but I don't :p

That's also true. But no need to imply that I own nVIDIA stock! Just how an engineer naturally thinks, I guess. :p
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: Nemesis 1
Global illumination

Go on. Sorry if these questions are annoying to you, but I figured since you don't have any problems with annoying others, well, what's good for the goose.......

So far we have: "I know better about the industry than insiders are revealing because I'm on the outside. AMD/ATI are far stronger than insiders reveal."

DX10 and 10.1 are worlds apart because of "global illumination".

And we are all headed toward EPIC (VLIW).

Please, go on.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
When I refer to EPIC on Larrabee, it's in reference to how the vector unit works. Very EPIC-like, and very Transmeta-like also. It's interesting. I still think NV should buy that Tile64 company; it's a startup we're going to be hearing about. I would buy in a heartbeat.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
10.1 is vaporware anyway. Maybe Duke Nukem: Forever will utilize it.


Can't wait for these new chips. This place should get interesting.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: Keysplayr
Originally posted by: Nemesis 1
Global illumination

Go on. Sorry if these questions are annoying to you, but I figured since you don't have any problems with annoying others, well, what's good for the goose.......

So far we have: "I know better about the industry than insiders are revealing because I'm on the outside. AMD/ATI are far stronger than insiders reveal."

DX10 and 10.1 are worlds apart because of "global illumination".

And we are all headed toward EPIC (VLIW).

Please, go on.

Global illumination is all I had to say, because it does matter, now and in the future. Sure, NV can sidestep it, but it's not real butter.

 

shangshang

Senior member
May 17, 2008
830
0
0
Originally posted by: Keysplayr
Originally posted by: Nemesis 1
Global illumination

Go on. Sorry if these questions are annoying to you, but I figured since you don't have any problems with annoying others, well, what's good for the goose.......

So far we have: "I know better about the industry than insiders are revealing because I'm on the outside. AMD/ATI are far stronger than insiders reveal."

DX10 and 10.1 are worlds apart because of "global illumination".

And we are all headed toward EPIC (VLIW).

Please, go on.

LOL hehe... this is why I enjoy reading your posts, Keys.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: OCguy
10.1 is vaporware anyway. Maybe Duke Nukem: Forever will utilize it.


Can't wait for these new chips. This place should get interesting.

10.1 may be vaporware, but if you don't do 10.1, your DX11 hardware is vaporware.

 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Originally posted by: Astrallite
I'd be more interested in finding out what GT300 really is--will it be a single card?

I find it hard to believe it will be faster than a GTX295, which makes me wonder if nvidia will simply say sayonara--EOL to you, mister 295!

This wouldn't surprise me at all. It always seems the next generation's top card is faster than the previous generation's X2 versions.

 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Charlie's just upset that NVIDIA will be first to market with a DX11 card, while ATI may not follow until next year.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Cookie Monster
Actually there is. Thats why nVIDIA hasn't opted to DX10.1 because it requires a complete re-design of their TMU specs to be DX10.1 compliant.
Actually, Derek seems to think the only difference in requirements between DX10 and DX11 is the inclusion of fixed-function hardware tessellation units. He's pretty clear about DX11 being a strict superset of DX10.1 with regard to hardware requirements. You can read his DX11 write-up; it's in there somewhere.

Your post is the first mention of TMU incompatibilities that I've heard of, especially given that the DX10.1 enhancements we've seen in games don't seem to rely on TMU design at all. Ben's mention of SP differences is also news to me, but given that Nvidia and ATI have both improved their SP designs while remaining backward compatible with prior versions, it's less surprising.

Look at BFG10K's post. nVIDIA cards lose quite a bit of performance as soon as one goes over 4xAA. I would also like them to do some work on memory management, because from what I can remember, nVIDIA cards use a lot more memory than ATi when doing similar work.
This is actually a much more plausible benefit of DX10.1 as seen in games today; the most documented improvements are the ability to read the multisample depth buffer (actually listed among DX10 features, but apparently only widely used in DX10.1 titles) and a new "gather" API function that returns four texture or pixel samples with a single call.

DX10.1 features listed @ Technet
DX10.1 features implemented in StormRise
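As a rough illustration of what that Gather call buys, here is a sketch of the 2x2 footprint access pattern written in plain CUDA (an analog, not the D3D10.1 API; the kernel and names are purely illustrative). Without Gather a shader issues the four reads separately, as below; with Gather they come back from a single call.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Fetch the 2x2 footprint of one channel around texel (x, y). DX10.1's Gather
// returns the equivalent four values from one call; here they are read one by one.
__global__ void footprint2x2(const float *tex, int width, int height,
                             int x, int y, float4 *out)
{
    int x1 = min(x + 1, width - 1);    // clamp so the footprint stays in bounds
    int y1 = min(y + 1, height - 1);

    float t00 = tex[y  * width + x ];
    float t10 = tex[y  * width + x1];
    float t01 = tex[y1 * width + x ];
    float t11 = tex[y1 * width + x1];

    *out = make_float4(t00, t10, t01, t11);
}

int main()
{
    const int w = 4, h = 4;
    float h_tex[w * h];
    for (int i = 0; i < w * h; ++i) h_tex[i] = (float)i;

    float *d_tex;
    float4 *d_out;
    cudaMalloc(&d_tex, sizeof(h_tex));
    cudaMalloc(&d_out, sizeof(float4));
    cudaMemcpy(d_tex, h_tex, sizeof(h_tex), cudaMemcpyHostToDevice);

    footprint2x2<<<1, 1>>>(d_tex, w, h, 1, 1, d_out);

    float4 r;
    cudaMemcpy(&r, d_out, sizeof(r), cudaMemcpyDeviceToHost);
    std::printf("2x2 footprint: %.0f %.0f %.0f %.0f\n", r.x, r.y, r.z, r.w);  // expect 5 6 9 10

    cudaFree(d_tex);
    cudaFree(d_out);
    return 0;
}
```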

It's important to note that reading the MS depth buffer can also be accomplished in DX10 on Nvidia cards, as seen with Far Cry 2, where a patch and a press release from Ubi specifically stated as much. The benefits are also clearly visible, as Nvidia's 8xAA performance is better than ATI's in both raw FPS and % drop from 4xAA.

Personally I think ATI's DX10.1 parts are able to apply some of these hardware sampling and bandwidth optimizations even in non-DX10.1 titles, but at a cost. I've read numerous reports of Z/depth-buffer and rendering issues on ATI DX10.1 parts, ranging from the obvious, like Z-fighting, to the more subtle, like missing effects on water, missing HDR/bloom, missing transparencies, and various other random rendering issues.