NVIDIA Quadro FX 1000/2000

It looks like it will compete with a WildCat 4, except it only has 128MB of RAM and we will have to see about the drivers.

But it has all the other features, plus OpenGL 2.0 and DX9 (the Wildcat 4 only supports OpenGL 1.3 and DX7; the VP supports 2.0).

I guess no one can complain about its size. It's only two-thirds the size of the Wildcat 4, but that's probably because the Wildcat 4 has almost three times as much memory.
 
Originally posted by: Mem
wonder what the prices are like.


NVIDIA has described to us the FX1000 as being 1.5x the performance of their former top Quadro4 980 XGL... and, as NVIDIA commonly does, this model will be released at the same MSRP as the former Quadro4 980 XGL: $1,295, with an ESP of $900.

More info here at Bjorn3D.

50% more isn't that impressive. Even the VP9xx from 3Dlabs can beat the Quadro9xx XGL by 10-50%.
 
Originally posted by: dexvx
Originally posted by: Mem
wonder what the prices are like.


NVIDIA has described to us the FX1000 as being 1.5x the performance of their former top Quadro4 980 XGL... and, as NVIDIA commonly does, this model will be released at the same MSRP as the former Quadro4 980 XGL: $1,295, with an ESP of $900.

More info here at Bjorn3D.

50% more isn't that impressive. Even the VP9xx from 3Dlabs can beat the Quadro9xx XGL by 10-50%.
Are you just making up numbers?
 
Originally posted by: Vespasian
Originally posted by: dexvx
Originally posted by: Mem
wonder what the prices are like.


NVIDIA has described to us the FX1000 as being 1.5x the performance of their former top Quadro4 980 XGL... and, as NVIDIA commonly does, this model will be released at the same MSRP as the former Quadro4 980 XGL: $1,295, with an ESP of $900.

More info here at Bjorn3D.

50% more isn't that impressive. Even the VP9xx from 3Dlabs can beat the Quadro9xx XGL by 10-50%.
Are you just making up numbers? The ViewPerf scores for a Wildcat4 and a Quadro 980XGL (in a similar system) are just about identical. 😕

Go look at the P10 benchmarks for the Wildcat VP series. I can't think of any off the top of my head, but Tom's Hardware posted one a long time ago. The VP beat the 9xx XGL hands down (10-50% leads), except for a single texture-intensive benchmark.
 
The 3Dlabs web site compares a Quadro 900 XGL to the Wildcat4 7110 and 7210:

ProE-01:

Wildcat4 7210 - 21.14
Wildcat4 7110 - 20
Quadro4 900XGL - 16.33

UGS-01:

Wildcat4 7210 - 18.56
Wildcat4 7110 - 17.2
Quadro4 900XGL - 17.09

3dsmax-01:

Wildcat4 7210 - 16.91
Wildcat4 7110 - 15.38
Quadro4 900XGL - 12.87

"System Used: Intel-based 2.54 GHz P4 system with AGP 8X and 1 GB of system memory."

http://www.3dlabs.com/product/wildcat4/comparison.htm

BTW, the 900XGL (unlike the 980XGL) can't utilize AGP 8X.
 
It looks like it will compete with a WildCat 4, except it only has 128MB of RAM and we will have to see about the drivers.
The Wildcat4 7110 and 7210 have dedicated texture caches, thus they have more onboard memory.
 
I don't know WHERE you get 50%. Most of the tests at Tom's show tenths-of-a-point leads for the VP970, except in one or two tests, and even in those it doesn't have more than a 5% lead. Needless to say, 50% is way over-inflated. It's more like 3% in most cases, and as much as 10% in a couple of cases. If what NVIDIA claims is true, the FX1000 and FX2000 beat the hell out of the VP cards, and top the Wildcat 4s as well.
 
Quite interesting; I've been looking forward to the QuadroFX. To me it looks considerably more interesting than ATi's FireGL X1.
A number of small advancements that won't mean anything to the gaming market could be beneficial within the next year or so for many Pro3D consumers.

The bandwidth is still a little less than I'd like, so I'm hesitant about its performance in some 3D modelling applications, but beyond that it looks nice. Spatial precision is fantastic; it's impressive how quickly nVidia has improved sub-pixel rendering precision from the original Quadro boards through the Quadro DCC and now the QuadroFX.

I'm still not wholly sold on nVidia's drivers in the Pro3D realm, and strongly prefer 3DLabs/FireGL.
I'm not big on SPEC ViewPerf numbers, and the varying test systems and vendor reports effectively invalidate them for comparative measurement of any individual element, so I won't comment on the actual performance yet, but it looks promising.

It may potentially be a real competitor to 3DLabs' Wildcat 4 (something I don't consider the FireGL X1 to be).
I wonder when the QuadroFX will be available on the market. I'd be interested to see whether nVidia limits the QuadroFX 2000 purely to pre-built systems through an SI and, if not, whether it will be priced and marketed as a direct competitor to the Wildcat 4.
In the past, nVidia's been happy competing for the low to mid range Pro3D market but hasn't made any real effort to target the high end.
 
FYI, nVidia reports the QuadroFX clock speeds as 300/300 for the FX 1000 and 400/400 for the FX 2000.
400/400 should be quite adequate to beat out the rather unimpressive (thus far, IMO) FireGL X1, but whether it'll be able to consistently outperform the Wildcat 4 7210 remains to be seen.

BTW, anyone else feel nVidia needs to drastically improve their FSAA implementation on their workstation boards?
 
Originally posted by: LH
I don't know WHERE you get 50%. Most of the tests at Tom's show tenths-of-a-point leads for the VP970, except in one or two tests, and even in those it doesn't have more than a 5% lead. Needless to say, 50% is way over-inflated. It's more like 3% in most cases, and as much as 10% in a couple of cases. If what NVIDIA claims is true, the FX1000 and FX2000 beat the hell out of the VP cards, and top the Wildcat 4s as well.

And I don't know where you get your numbers. FYI, if the 980XGL and 900XGL's only difference is AGP 8X, that will have no effect on actual performance.

Link

ViewPerf 7.0:

VP970   900XGL   %Diff
10.8    9.64     12
12.8    11.4     13
15.64   10.98    42.4

SolidWorks:

3.12    3.03     -
2.95    2.94     -

SolidEdge:

8.82    5.78     53
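For anyone checking the %Diff column: it's just the VP970's percentage lead over the 900 XGL. A quick sketch (the helper name is my own, not from the link) reproduces it from the scores above; a couple of the posted figures look rounded:

```python
# Hypothetical helper: percentage lead of score_a over score_b,
# matching the %Diff column above (some posted values appear rounded up).
def pct_lead(score_a, score_b):
    return round((score_a - score_b) / score_b * 100, 1)

print(pct_lead(10.8, 9.64))    # ViewPerf row 1: 12.0
print(pct_lead(15.64, 10.98))  # ViewPerf row 3: 42.4
print(pct_lead(8.82, 5.78))    # SolidEdge: ~52.6, posted as 53
```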
 
Originally posted by: Vespasian
BTW, anyone else feel nVidia needs to drastically improve their FSAA implementation on their workstation boards?
What's wrong with it? Does it sacrifice too much performance?

Frankly, IMHO the quality outright stinks, and increasing the number of samples is only going to very slightly offset this.
RG MSAA is simply not a very good option for antialiasing in almost all Pro3D applications IMHO, and nVidia doesn't openly allow the other modes that the graphics chips hypothetically support.

From what I've heard thus far, the FX still seems to use RG MSAA combined with some form of SS to provide more sampling points beyond 4X.
Their line AA also has significant weaknesses when rendering an edge with two stark color gradients.

I have no real complaints with the relative performance; it's quite decent relative to the competition for the most part. Performance is a distant second to quality, however.
 
Originally posted by: Rand
Originally posted by: Vespasian
BTW, anyone else feel nVidia needs to drastically improve their FSAA implementation on their workstation boards?
What's wrong with it? Does it sacrifice too much performance?

Frankly, IMHO the quality outright stinks, and increasing the number of samples is only going to very slightly offset this.
RG MSAA is simply not a very good option for antialiasing in almost all Pro3D applications IMHO, and nVidia doesn't openly allow the other modes that the graphics chips hypothetically support.

From what I've heard thus far, the FX still seems to use RG MSAA combined with some form of SS to provide more sampling points beyond 4X.
Their line AA also has significant weaknesses when rendering an edge with two stark color gradients.

I have no real complaints with the relative performance; it's quite decent relative to the competition for the most part. Performance is a distant second to quality, however.
Is rotated grid supersampling the highest quality implementation of antialiasing?
 
Originally posted by: Vespasian Is rotated grid supersampling the highest quality implementation of antialiasing?


That's highly debatable; almost any implementation has its merits, and I don't think one can truly pick out any given implementation that is the best at every angle, compatible with all rendering formats and textures, etc.

All in all though, I'd probably give the nod to a truly random RG SSAA implementation as being the best of the 'typical' antialiasing implementations in terms of sheer image quality.
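For what it's worth, the edge-resolving advantage of a rotated grid is easy to show numerically. Here's a minimal sketch (function names and the 26.6-degree angle are illustrative, not any vendor's actual pattern) comparing the sample x-coordinates of a plain 4-sample ordered grid with the same grid rotated:

```python
import math

# Sketch: 4 sample offsets within a pixel, ordered grid vs rotated grid.
# Rotating the pattern gives every sample a distinct x (and y) projection,
# which is why RG patterns resolve near-horizontal/vertical edges better.
def ordered_grid():
    return [(-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25)]

def rotated_grid(angle_deg=26.6):
    a = math.radians(angle_deg)
    return [(x * math.cos(a) - y * math.sin(a),
             x * math.sin(a) + y * math.cos(a)) for x, y in ordered_grid()]

distinct_x_ordered = {x for x, _ in ordered_grid()}            # only 2 columns
distinct_x_rotated = {round(x, 4) for x, _ in rotated_grid()}  # all 4 distinct
```

A near-vertical edge crosses 4 distinct sample columns with the rotated pattern but only 2 with the ordered one, so coverage is quantized into more steps.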

There are certainly more obscure techniques such as prefiltered antialiasing that are almost certainly preferable, but almost all of them have fatal flaws that prevent common usage. For prefiltering, that flaw is that it's virtually impossible to implement effectively in a scene in which the visibility of any given polygon is not constant... it's great for simple 2D fonts and lines, but all but unusable in consumer-level 2D/3D rendering.

Stochastic Poisson-disk sampling used in combination with SSAA, with 16 samples/pixel, might be ideal for games, however. It'll reduce aliasing, and the slight increase in noise wouldn't be a big deal for most games.
Of course that's not exactly a viable option as it would effectively kill performance even with the most powerful of graphics cards.
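To illustrate the idea (this is a textbook dart-throwing sketch, not any shipping implementation): random candidate positions are kept only if they stay a minimum distance from every already-accepted sample, which is what trades regular, structured aliasing for unstructured noise:

```python
import math
import random

# Dart-throwing sketch of Poisson-disk sampling inside a unit pixel:
# accept a random candidate only if it keeps min_dist from all accepted points.
def poisson_disk(n_target=16, min_dist=0.18, max_tries=10000, seed=1):
    rng = random.Random(seed)
    samples = []
    tries = 0
    while len(samples) < n_target and tries < max_tries:
        tries += 1
        p = (rng.random(), rng.random())
        if all(math.dist(p, q) >= min_dist for q in samples):
            samples.append(p)
    return samples

samples = poisson_disk()  # up to 16 well-separated sample positions
```

The rejection loop is exactly why this is expensive in hardware: the cost per accepted sample grows as the pixel fills up.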

BTW, gamma-correct antialiasing is a HIGHLY underrated ability. It's about time that it's being brought to the consumer level in DX9-compliant gaming boards.


It's too bad that consumer level graphics cards have to be so concerned with benchmark performance, otherwise we might see some decent antialiasing implementations.
As is, anything that offers excellent quality usually comes with a performance hit... and 99.9% of reviewers will ignore that benefit in quality and focus purely on the performance hit.
The perfect example of that, IMHO, would be ATi's R200 -> R300 transition: the R200's antialiasing implementation is clearly superior in most respects to that used on the R300. The R300 core is capable of the same implementation, but ATi refuses to open it up in the drivers, as sites would use it as the "highest quality" mode to benchmark... and said benchmarks would put them in an unfavourable light, as its performance would be comparatively low.
Public perception, even among the enthusiast community, would immediately label it an inferior implementation regardless of quality, purely due to its somewhat lower performance.

 