Why? They did it to the 7950GX2...
Originally posted by: Astrallite
I'd be more interested in finding out what GT300 really is--will it be a single card?
I find it hard to believe it will be faster than a GTX295, which would make me wonder if nvidia will simply say sayonara--EOL to you mister 295!
I'm guessing that since DX11 is more akin to DX10.1 than DX10, the architecture GT300 is based on will be quite different from GT200 (finally).
I can imagine nVIDIA spending some resources on DP performance, seeing as their current approach is painfully slow compared to the competition.
There are quite a few things that nVIDIA can work on to improve, such as AA performance (by finally getting rid of their 3-year-old ROP design!), so I'm quite looking forward to what nVIDIA can offer.
Hopefully they will also pay a bit more attention to performance/mm^2 by optimizing the layout/die size instead of going the Godzilla route again.
Originally posted by: Nemesis 1
Ben, there is a world of difference between DX10 and DX10.1. NV's present architecture can't do DX10.1, so NV has to change architecture for DX11, since DX10.1 is a big part of DX11 as far as NV is concerned. But not ATI; ATI still has to improve its DX10.1 in order to do DX11.
Ben, NV has good tech, but to do ray tracing at cheaper transistor cost NV has to change architecture. Intel's RT is unknown, but I can tell you ATI has the right tech for the direction the industry is going. AMD/ATI are a lot stronger than industry insiders are saying. Let us not forget this is the future, which is unknown, and that future is now. It's just that Intel went down a road that leads to better convergence. No way does Intel want to keep x86, but for now Intel gets to use x86 to its advantage, same as when x86 was a disadvantage to the Itanic 64-bit instruction line. Intel learned from that lesson. Larrabee isn't Intel's future vision; it's just not a straight line like EPIC was or CUDA is. It's a crooked road designed for compatibility and change, much like what Apple has done with OS X 64-bit. Now in Snow Leopard, 64-bit is native.
Originally posted by: Nemesis 1
AMD/ATI are a lot stronger than industry insiders are saying.
Ben, if there is no difference, why call it 10.1 at all? There has to be something different about it, no?
But to do ray tracing at cheaper transistor cost NV has to change architecture.
8xMSAA performance in OpenGL.
Originally posted by: BenSkywalker
With the latest drivers, what examples would you point to for them having sub par AA performance?
Originally posted by: s44
Anyway, the amount of rage in Charlie's posts is appalling.
Originally posted by: BenSkywalker
Ben, if there is no difference, why call it 10.1 at all? There has to be something different about it, no?
It allows you to simplify some shader code, which can speed those particular shaders up if used. The difference between a DX10.1 part and a DX10.0 part is considerably less than the difference between nV's G8x and GTX 2xx series parts, which are both 10.0 offerings (well, we know the GTX 2xx can already handle a decent amount of 10.1's features; not sure if there is more functionality nV may be hiding due to poor performance).
But to do ray tracing at cheaper transistor cost NV has to change architecture.
Anyone that pushes real-time ray tracing as their main goal inside the next two years, software or hardware, is going to fail to hit mass-market success. There is no chance for it to succeed unless it first becomes a key part of the console market so ports can be offered at reasonable cost, and for that it needs a console backing it. Maybe MS will decide to go that route, but it would be very surprising, as they aren't terribly stupid. I've explained the staggering technical limitations of RTRT to you; you can go ahead and dig up that thread, they haven't changed.
Originally posted by: Wreckage
Originally posted by: Nemesis 1
AMD/ATI are a lot stronger than industry insiders are saying.
What inside information do you have that these "insiders" do not?
Originally posted by: BenSkywalker
In essence there is next to no difference at all between DX10 and DX10.1 in terms of chip architecture; there is a huge one moving to DX11. CS changes the complexity of the shader designs to a rather staggering degree. I really don't like it from a 3D rendering angle, as it is going to be FAR more expensive than it will ever be worth, although it does enable a lot of things on the GPGPU side.
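As a rough illustration of what CS adds over a DX10-class pixel shader, here is a minimal compute-style kernel. It is written in CUDA purely as the nearest shipping analogue (DX11 Compute Shaders express the same ideas in HLSL), and the kernel name, buffer sizes, and launch configuration are invented for the sketch. Group-shared memory, barriers, and writes to program-chosen locations are the extra hardware flexibility being argued about here.

```cuda
// Minimal sketch (illustrative only): a block-wise sum reduction using
// on-chip shared memory and a group barrier, two things a classic
// DX10-era pixel shader cannot express.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void blockSum(const float* in, float* out, int n)
{
    __shared__ float tile[256];          // scratch memory shared by the thread group
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (gid < n) ? in[gid] : 0.0f;
    __syncthreads();                     // group-wide barrier

    // Tree reduction within the thread group.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        out[blockIdx.x] = tile[0];       // write to a program-chosen location,
                                         // not a fixed render-target pixel
}

int main()
{
    const int n = 1 << 20;
    float *dIn, *dOut;
    cudaMalloc((void**)&dIn, n * sizeof(float));
    cudaMalloc((void**)&dOut, (n / 256) * sizeof(float));
    cudaMemset(dIn, 0, n * sizeof(float));
    blockSum<<<n / 256, 256>>>(dIn, dOut, n);
    cudaDeviceSynchronize();
    printf("reduction done\n");
    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}
```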
In a theoretical sense perhaps, but in the real world nV has more usable DP performance than their competitors by a rather huge margin. Not saying that nV won't consider increasing it, but for the applications that can make use of GPGPU DP, nV is the only real game in town atm. nV's design is actually closer to full IEEE DP standards than several HPC CPUs; the competitor's isn't close (not knocking them, they have made no claims that they are even in that market, so it isn't as if we should expect anything else).
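To put the DP argument in concrete terms, the GPGPU workloads in question look something like this double-precision DAXPY. This is a hypothetical, minimal sketch (array sizes and launch parameters are arbitrary), not benchmark code; it needs a compute-capability 1.3 or newer part (GT200 or later) to execute the doubles in hardware, and on GT200 the dedicated DP units run at a small fraction of the single-precision rate, which is why DP throughput is the number people argue about.

```cuda
// Minimal double-precision GPGPU sketch: y = a*x + y over 64-bit floats.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void daxpy(int n, double a, const double* x, double* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];          // multiply-add in double precision
}

int main()
{
    const int n = 1 << 20;
    double *x, *y;
    cudaMalloc((void**)&x, n * sizeof(double));
    cudaMalloc((void**)&y, n * sizeof(double));
    cudaMemset(x, 0, n * sizeof(double));
    cudaMemset(y, 0, n * sizeof(double));
    daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
    cudaDeviceSynchronize();
    printf("daxpy done\n");
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```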
I see modifying their ROPs as largely a wasted effort. With the latest drivers, what examples would you point to for them having sub-par AA performance? At this point even the 8x gap has closed in almost everything, and honestly the difference between 8x AA and 4x AA is one that I don't think is worth serious engineering effort. I would much rather have far more powerful shader hardware, all things considered (don't get me wrong, with unlimited resources I'd want it all, but alas, we don't get those options).
From a consumer standpoint, I'd much rather worry about performance/watt. If I held a lot of nVidia stock, then I would be more along your mindset, but I don't!
Originally posted by: Nemesis 1
Global illumination
Originally posted by: Keysplayr
Originally posted by: Nemesis 1
Global illumination
Go on. Sorry if these questions are annoying to you, but I figured since you don't have any problems with annoying others, well, what's good for the goose.......
So far we have: "I know better about the industry than insiders are revealing because I'm on the outside. AMD/ATI are far stronger than insiders reveal."
DX10 and 10.1 are worlds apart because of "global illumination".
And we are all headed toward EPIC (VLIW).
Please, go on.
Originally posted by: OCguy
10.1 is vapor-ware anyway. Maybe Duke Nukem: Forever will utilize it.
Can't wait for these new chips. This place should get interesting.
Originally posted by: Astrallite
I'd be more interested in finding out what GT300 really is--will it be a single card?
I find it hard to believe it will be faster than a GTX295, which would make me wonder if nvidia will simply say sayonara--EOL to you mister 295!
Actually, Derek seems to think the only difference in requirements between DX10 and DX11 is the inclusion of fixed hardware tessellation units. He's pretty clear about DX11 being a strict superset of DX10.1 with regard to hardware requirements. You can read his DX11 write-up; it's in there somewhere.
Originally posted by: Cookie Monster
Actually there is. That's why nVIDIA hasn't opted for DX10.1: it requires a complete redesign of their TMU specs to be DX10.1 compliant.
This is actually a much more plausible benefit of DX10.1 as seen in games today, the most documented improvements being the ability to read the MSAA depth buffer (actually listed among DX10 features, but apparently only widely used in DX10.1 titles) and also a new "gather" API function that returns four texture or pixel samples with a single call.
Look at BFG10K's post. nVIDIA cards lose quite a bit of performance as soon as one goes over 4xAA. I also would like them to have done work on the memory management, because from what I can remember, nVIDIA cards use a lot of memory compared to ATi when doing similar work.
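For context on the "gather" function mentioned above: Gather4 returns the four texels of a 2x2 bilinear footprint in a single fetch, which is handy for shadow-map filtering and similar kernels. Below is a minimal CUDA-flavoured sketch (not D3D/HLSL code; the kernel and buffer names are invented for illustration) spelling out the four separate reads that the single gather call replaces.

```cuda
// Illustrative sketch of a manual 2x2 footprint fetch from a depth buffer
// stored as a plain 2D array; DX10.1's Gather4 does this in one instruction.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void gather2x2(const float* depthTex, int width, int height,
                          const float2* samplePos, float4* footprint, int nSamples)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nSamples) return;

    // Integer texel coordinates of the top-left texel in the 2x2 footprint.
    int x = min(max((int)samplePos[i].x, 0), width  - 2);
    int y = min(max((int)samplePos[i].y, 0), height - 2);

    // Four separate loads where Gather4 would need one.
    float d00 = depthTex[(y    ) * width + (x    )];
    float d10 = depthTex[(y    ) * width + (x + 1)];
    float d01 = depthTex[(y + 1) * width + (x    )];
    float d11 = depthTex[(y + 1) * width + (x + 1)];

    footprint[i] = make_float4(d00, d10, d01, d11);
}

int main()
{
    const int width = 512, height = 512, nSamples = 1024;
    float  *dTex;
    float2 *dPos;
    float4 *dOut;
    cudaMalloc((void**)&dTex, width * height * sizeof(float));
    cudaMalloc((void**)&dPos, nSamples * sizeof(float2));
    cudaMalloc((void**)&dOut, nSamples * sizeof(float4));
    cudaMemset(dTex, 0, width * height * sizeof(float));
    cudaMemset(dPos, 0, nSamples * sizeof(float2));
    gather2x2<<<(nSamples + 255) / 256, 256>>>(dTex, width, height, dPos, dOut, nSamples);
    cudaDeviceSynchronize();
    printf("gathered %d footprints\n", nSamples);
    cudaFree(dTex);
    cudaFree(dPos);
    cudaFree(dOut);
    return 0;
}
```

On DX10.0-class hardware each of those reads is a separate texture instruction, which is where the shader-simplification claim earlier in the thread comes from.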