my GMA x3000 benchmarks...


ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: kreacher
I don't think Intel will improve integrated graphics much if the rumors/speculation about Intel launching discrete graphics solutions next year are true.
It could go either way. ATI is out of the game now that they're AMD, and Nvidia just flat-out doesn't like the integrated market, but if Intel stalls, that could get Nvidia's attention or even make an opening for SiS or VIA.
 

tcsenter

Lifer
Sep 7, 2001
18,491
319
126
Originally posted by: eurotrance4life
If anyone's interested, Intel X3000/X3100 users should rejoice! It's finally here! New Pre-Beta Drivers
Vertex Shader support targeting VS2.x hardware vertex processing
Wow, I thought Intel had hardware VS2.0 working and it was VS3.0 that was not ready. Sheesh!

BTW, according to a new graphics guide from Intel, the ETA for fully implemented hardware SM3.0 on X3000 has been pushed back to August 2007. Also, it looks like Intel has pegged DX9.0C and SM3.0 as the limit for X3000 feature set.

X3100 and X3500 will support DX10 and SM4.0 (some time in 2008...see footnotes).
I don't think Intel will improve integrated graphics much if the rumors/speculation about Intel launching discrete graphics solutions next year are true.
X3100 (965GM) has already shipped and Intel announced June 5th that X3500 (G35) will ship within 90 days. Both will [eventually] support SM4.0 in hardware.

A discrete GPU is not going to cannibalize Intel's IGP sales or demand. Consumers already have a choice between discrete or IGP. They aren't going to suddenly say "Hey, there's another discrete GPU on the market, now I don't have to buy that stinking IGP!"
 

eurotrance4life

Junior Member
Mar 11, 2004
22
0
0
Originally posted by: MercenaryForHire
THREAD RESURRECTION

Anyone tried the new "pre-beta" drivers that enable T&L yet?

- M4H
I would be glad to run some benchmarks for everyone, but unfortunately I'm running Vista, and Vista drivers aren't out yet.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
My G965 benchmarks

System:
Intel Core 2 Duo E6600
Intel DG965WH
2x1GB Transcend DDR2-800 5-5-5-15
WD 360GD Raptor for OS and main hard drive
160GB Seagate 7200RPM 8MB SATA2 300 for newer games
14.31.1 graphics driver (aka 6.14.10.4864)
Windows XP SP2

Company of Heroes

800x600 (can't run at a higher res as my monitor only optimally supports 1024x768; some games run at 1024x768, some don't)
Everything at High Settings, AA off

Average 10.7
High 27.5
Low 3.6

Model Quality-High
Texture Detail/Physics/Effects Density/Model Detail-Medium
Shader Quality/Reflections/Building Detail/Tree Quality/Terrain Detail/Effects Fidelity-Low
Shadows/Object Scarring-off

Average 33.4
High 61.2
Low 7.0

Everything low

Average 38.3
High 61.0
Low 8.8

Supreme Commander

Enabling Shadow Fidelity will slow things down a lot; disabling it will help performance more than anything else. Everything low at the default res (1024x768) runs the game at 20-30 fps and I didn't notice lag. It seems it's somehow limited to 30 fps; probably fillrate becomes the limit at low detail.

Everything High and Shadow Fidelity off will play at 6-12 fps. It is possible to play if you want to :).

The optimal setting for performance and image quality would be setting fidelity to low and then manually setting level of detail and terrain to medium/high. It doesn't impact performance too much. You can also play with Medium Fidelity and low/medium LoD and terrain detail.

Update: With Anandtech's uATX benchmark settings I get 8.366 average fps, which is on par with the other IGPs (I use a faster CPU, while Anandtech uses faster memory).

Command & Conquer 3

Command & Conquer 3 needs low settings to be playable. No need to go ultra low, just low. Putting the overall quality bar all the way down gets you around 30-40 fps. The second-lowest setting is the most optimal auto setting IMO; that results in 25-30 fps. You may want to experiment with it for the best balance of visual quality and performance.

I noticed a 50% performance improvement in Company of Heroes at 1024x768 with the MQ settings shown above when going from 14.31 to 14.31.1.

Age of Empires 3


It runs superbly at 1024x768 with Shader Quality at Medium and everything on. It gets 25-35 fps. If you want to push it, you can go to High and get 15-20.

Battlefield 2

Has minor graphical glitches on the terrain (also mentioned by Intel). It gets 35-45 fps at 800x600 low quality, and 15-20 fps at 800x600 medium quality. I need to get a timedemo-like thing for this so I can test it properly lol

More as I test, downloading Bioshock and World in Conflict.
 

eurotrance4life

Junior Member
Mar 11, 2004
22
0
0
Great write-up, IntelUser2000. I'm running Vista, though, and the drivers are so bad compared with the XP ones; the Vista drivers still don't seem to support hardware Transform and Lighting (T&L) or vertex shaders.

Here are the system info stats from 3DMark06 running the 14.31 XP beta drivers:
Driver File igxprd32.dll
Driver Version 6.14.10.4842
Driver Date 6-8-2007
Driver WHQL Certified false
Max Texture Width 4096 px
Max Texture Height 4096 px
Max User Clipping Planes 6
*Max Active Hardware Lights 8
Max Texture Blending Stages 8
Fixed Function Textures In Single Pass 8
*Vertex Shader Version 3.0
Pixel Shader Version 3.0
Max Vertex Blend Matrices 0
Max Texture Coordinates 8
VGA Memory Clock 0.0 Hz
VGA Core Clock 0.0 Hz
Max VGA Memory Clock 0.0 Hz
Max VGA Core Clock 0.0 Hz
which all looks good there, but:

Here's the system info with the new non-beta 15.6 Vista drivers:
Driver File igdumd32.dll
Driver Version 7.14.10.1322
Driver Date 8-24-2007
Driver WHQL Certified true
Max Texture Width 4096 px
Max Texture Height 4096 px
Max User Clipping Planes 6
*Max Active Hardware Lights 0
Max Texture Blending Stages 8
Fixed Function Textures In Single Pass 8
Pixel Shader Version 3.0
Max Vertex Blend Matrices 0
Max Texture Coordinates 8
VGA Memory Clock 0.0 Hz
VGA Core Clock 0.0 Hz
Max VGA Memory Clock 0.0 Hz
Max VGA Core Clock 0.0 Hz
*no Vertex Shader Version appears in mine

The new non-beta 15.6 Vista driver is a slight improvement over the 15.6 beta drivers, as it adds Max User Clipping Planes (used to be 0). However, support for hardware transform and lighting (T&L) and vertex shaders is still missing.
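
For anyone who wants to verify this directly rather than through 3DMark's SysInfo, here's a minimal C++ sketch (assuming the DirectX 9 SDK headers and linking d3d9.lib) that queries the same D3D9 caps those readouts reflect: the hardware T&L flag, the vertex/pixel shader versions, and the max hardware lights.

#include <windows.h>
#include <d3d9.h>    // link with d3d9.lib
#include <cstdio>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps = {};
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps))) {
        d3d->Release();
        return 1;
    }

    // Hardware transform & lighting support (the "T&L" everyone keeps asking about).
    const bool hwTnL = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT) != 0;

    std::printf("Hardware T&L:      %s\n", hwTnL ? "yes" : "no");
    std::printf("Vertex Shader:     %u.%u\n",
                (unsigned)D3DSHADER_VERSION_MAJOR(caps.VertexShaderVersion),
                (unsigned)D3DSHADER_VERSION_MINOR(caps.VertexShaderVersion));
    std::printf("Pixel Shader:      %u.%u\n",
                (unsigned)D3DSHADER_VERSION_MAJOR(caps.PixelShaderVersion),
                (unsigned)D3DSHADER_VERSION_MINOR(caps.PixelShaderVersion));
    std::printf("Max active lights: %u\n", (unsigned)caps.MaxActiveLights);

    d3d->Release();
    return 0;
}

Presumably the 15.6 Vista driver would report the T&L cap clear and a 0 vertex shader version here, matching the missing SysInfo entries above.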

So I also did some benchmarks and here are my results:
3DMark2001SE
15.6 beta 4208
15.6 final 4310
15.4.3 5562
22.5% decrease in performance from 15.4.3

3dmark03
15.6 beta 1269
15.6 final 1710
15.4.3 1652
3.5% increase in performance from 15.4.3

3dmark05
15.6 beta 671 *Force software vertex shader off
15.6 final 882 *Force software vertex shader on
15.4.3 867
1.7% increase in performance from 15.4.3

3dmark06
15.6 beta 425 *Force software vertex shader off
15.6 final 493 *Force software vertex shader on
15.4.3 529
6.8% decrease in performance from 15.4.3, specifically because of a drop in the HDR/SM3.0 Canyon Flight game test
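
(Those percentages are each driver's final score relative to the 15.4.3 baseline; a quick sketch of the arithmetic using the scores above, in case anyone wants to sanity-check it:)

#include <cstdio>

// Change of a driver's score relative to the 15.4.3 baseline, in percent.
static double PercentChange(double baseline, double score)
{
    return (score - baseline) / baseline * 100.0;
}

int main()
{
    std::printf("3DMark2001SE: %+.1f%%\n", PercentChange(5562, 4310)); // about -22.5
    std::printf("3DMark03:     %+.1f%%\n", PercentChange(1652, 1710)); // about +3.5
    std::printf("3DMark05:     %+.1f%%\n", PercentChange(867,  882));  // about +1.7
    std::printf("3DMark06:     %+.1f%%\n", PercentChange(529,  493));  // about -6.8
    return 0;
}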
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
Bioshock

It reboots the computer with a bluescreen after a while. It gets around 10-15 fps at 800x600 with everything low.

World in Conflict

For some reason I can't get it to load the menu before the computer restarts with a bluescreen.

For reference, I tested Wolfenstein ET with the same settings/map Hans007 used.

I got 34.7 fps.

Though I used 1280x1024 and "Normal" settings.

Update: I got 32.4 fps at 1280x1024 with everything on High, so I don't think details matter too much lol.

That is 60% better than Hans007's score. I do have a faster CPU and better memory, but graphics drivers are also playing a part here.
 

tcsenter

Lifer
Sep 7, 2001
18,491
319
126
It may be better to use software vertex processing in many games when you have a mid-range or higher dual- or quad-core CPU, simply because those processors can now do vertex/geometry processing faster than low-end fully programmable execution units. Intel has even issued an application/support note about it:

What applications benefit more with software vertex processing?
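
For context, "software vertex processing" is something a D3D9 application picks at device creation via the behavior flags (there's also a mixed mode that can switch at runtime). A minimal, illustrative C++ sketch of the two paths; the helper name is mine, and hwnd plus the present parameters are assumed to be set up by the caller:

#include <windows.h>
#include <d3d9.h>    // link with d3d9.lib

// Illustrative helper: create a device with either software or hardware
// vertex processing. Software VP runs vertex shaders / T&L on the CPU;
// hardware VP uses the GPU's vertex units.
IDirect3DDevice9* CreateDeviceWithVP(IDirect3D9* d3d, HWND hwnd,
                                     D3DPRESENT_PARAMETERS& pp, bool softwareVP)
{
    const DWORD behavior = softwareVP ? D3DCREATE_SOFTWARE_VERTEXPROCESSING
                                      : D3DCREATE_HARDWARE_VERTEXPROCESSING;

    IDirect3DDevice9* device = NULL;
    if (FAILED(d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                                 behavior, &pp, &device)))
        return NULL;
    return device;
}

On a fast dual/quad-core CPU paired with a weak IGP, the software path can genuinely be the faster of the two, which is the point of Intel's note.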

Fully programmable execution units, being generalists, are less efficient than refined 2nd- or 3rd-gen fixed-function processing units, which are specialists. This isn't a big deal on discrete GPUs because you can just scale up the number of execution units and make other architectural tweaks to offset the performance/efficiency penalty, in addition to the secondary benefit of load balancing. But this is prohibitive on an IGP due to the significantly added cost, complexity, and transistor count.

e.g. ATI's R200 (R8500) had approx. 60 million transistors, roughly twice as many as found in an entire contemporary Northbridge.

Adding 10 million transistors to a GPU is no big deal, but it's a big deal in a Northbridge. Intel jumped the shark on its GMA X3000 design in a much-needed attempt to bring its badly-lagging IGP performance and features in line with NVIDIA and ATI offerings.

Hopefully, GMA X3500 will be much improved and actually come close to the X3000's hype.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
Originally posted by: tcsenter
Adding 10 million transistors to a GPU is no big deal, but it's a big deal in a Northbridge. Intel jumped the shark on its GMA X3000 design in a much-needed attempt to bring its badly-lagging IGP performance and features in line with NVIDIA and ATI offerings.

Hopefully, GMA X3500 will be much improved and actually come close to the X3000's hype.

GMA X3000 is up to the level of the Nvidia/ATI IGPs now. The problem with the hype was people's interpretation that it would be like a GeForce 6600. No; as Intel even says, it's meant to compete against the 690G, and it's able to do that now. You can see from my Supreme Commander results that it's roughly equal to the 690G and the Nvidia IGP tested in Anandtech's uATX article.

The G965 doesn't really use multiple cores, though. Going by what people report on their P4s, my gains roughly match the single-thread speed difference: I got 3x over my Celeron D 2.53GHz, which is about how much faster a Core 2 Duo E6600 is at the same thread count.

GMA X3500 is GMA X3000 with DX10 and OpenGL 2.0 enabled, along with two more shader units, so I wouldn't expect more than a 25% improvement. In my talk with Gary, he said he would provide some XP results so we can see how it performs in comparison with the other IGPs.