Can someone show me the math for the GMA 950 bandwidth?

TheDarkKnight

Senior member
Jan 20, 2011
This is just for fun. I am using an older computer that has an Intel GMA 950 on the motherboard for my graphics. Intel specs on this chip are here:
http://www.intel.com/products/chipsets/gma950/index.htm

Under Specifications, it states that total memory bandwidth goes up to 10.6 GB/second and that the memory bus width is 256-bit. I've tried a few different equations and can't figure out how to get 10.6 GB/second.

I translated the 256-bit bus to 32 bytes and then multiplied that by the 400 MHz speed of the graphics core.

So that's 32 * 400,000,000 = 12,800,000,000, or 12.8 GB/second.
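
Here's the same attempt as a quick Python check (the variable names are just mine, for illustration):

Code:
# my attempt: 256-bit bus at the 400 MHz core clock
bus_bits = 256
core_hz = 400_000_000                  # graphics core clock, 400 MHz

bytes_per_transfer = bus_bits // 8     # 256 bits = 32 bytes
bandwidth = bytes_per_transfer * core_hz
print(bandwidth / 1e9)                 # 12.8 GB/second, not Intel's 10.6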

What am I doing wrong here? Thanks for any help.
 

Vesku

Diamond Member
Aug 25, 2005
I'd imagine it's based on the RAM speed, not the GPU clock, since it's using the system RAM.
 

TheDarkKnight

Senior member
Jan 20, 2011
Vesku said:
I'd imagine it's based on the RAM speed, not the GPU clock, since it's using the system RAM.

Well, if you are right, that would result in even "more" bandwidth, since the system RAM speed is faster than the GPU clock. So I'm still lost as to how Intel is getting 10.6 GB/second.

Edit: I think you are wrong, however, because it seems the graphics can only go as fast as the slowest part of the pipeline. If the graphics core is running at 400 MHz, it probably doesn't matter that the system RAM is faster. But I'm no genius, so someone who is, please help. Thanks.

Edit: Okay, you must be right about the system RAM being used. I think I found the equation on my own, but I'm not sure why it's divided by 2.

32 bytes * 667,000,000 transfers/second = 21,344,000,000 bytes/second (21.344 GB/second); 21,344,000,000 / 2 = 10,672,000,000 = 10.672 GB/second
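
Here's that equation as a quick Python check (the /2 is just what makes it match Intel's number):

Code:
# system RAM at 667 MT/s, using the 32 bytes from the 256-bit figure
transfers_per_sec = 667_000_000
raw = 32 * transfers_per_sec           # 21,344,000,000 bytes/second
bandwidth = raw // 2                   # the mystery divide by 2
print(bandwidth / 1e9)                 # 10.672 GB/second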
 

Itchrelief

Golden Member
Dec 20, 2005
I would imagine it's 666 MT/s * 16 bytes = 10,656 Mbytes/s = 10.6 Gbytes/s. It's 16 bytes because there's no dedicated GPU memory; it's all system memory, which I believe is 128-bit for most consumer dual-channel systems.
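
For what it's worth, a quick Python check shows that both figures people use for DDR2-667 (the nominal rate is 666.67 MT/s) round to Intel's 10.6 GB/s:

Code:
# 16 bytes per transfer = 128-bit dual-channel memory bus
for mt_per_sec in (666_000_000, 667_000_000):
    print(mt_per_sec * 16 / 1e9)       # 10.656 and 10.672 GB/s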
 

TheDarkKnight

Senior member
Jan 20, 2011
Itchrelief said:
I would imagine it's 666 MT/s * 16 bytes = 10,656 Mbytes/s = 10.6 Gbytes/s. It's 16 bytes because there's no dedicated GPU memory; it's all system memory, which I believe is 128-bit for most consumer dual-channel systems.

Sounds like you figured it out. It does raise another question for me, though: why would they even use a 256-bit memory bus for the graphics core if they're just going to hook it up to a 128-bit system memory bus?