AMD & ATI coming together.

Neos

Senior member
Jul 19, 2000
881
0
0
I've long been an AMD user & I hope to see them regain the crown soon. There's a pretty good writeup on graphics cards over at Tom's Hardware: http://www.tomshardware.com/2006/08/08/graphics_beginners_3/
What intrigued me most was the part where they discuss the difference in 'bit' performance (64-bit being pretty poor all in all).

Made me wonder if AMD, in conjunction with ATI, could integrate say the new AM2 CPUs with a lower-cost, high-bit graphics processor to really smoke Intel in games??

Comments? Speculation?
 

Furen

Golden Member
Oct 21, 2004
1,567
0
0
Huh? Are you talking about memory interface width? 256 bits basically means quad-channel memory, so A64s have a 128-bit memory interface. The 64-bit thing for A64s is the amount of integer data that can be stored in a single x86 register (and, of course, manipulated in a single clock), the amount of memory that can be addressed (currently "only" 40-bit addresses are used, and AMD plans to expand this to 48 bits eventually, but this is mostly an internal thing), and various other extensions to the x86 ISA.
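To put those address widths in perspective, here's a quick back-of-the-envelope sketch in Python (my own numbers, just 2^bits converted to more familiar units):

# How much memory an n-bit address can reach (rough illustration only).
for bits in (32, 40, 48, 64):
    gib = 2 ** bits / 2 ** 30  # bytes -> GiB
    print(f"{bits}-bit addressing can reach {gib:,.0f} GiB of memory")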
 

Neos

Senior member
Jul 19, 2000
881
0
0
HUH is right - I am sure.
I am not that up on 'BITS'. I just saw that article, and I have read how important the graphics processor can be to a gamer. It seemed that AMD's acquisition of ATI could make for a great scenario to bring a higher-end graphics processor into an AM2 setup (on the board) at a really competitive price. That would just add to AMD's share of the market and its status as a gaming PC, I would think.
If I am totally off base - just let this thread die a quick death, please!
 

Furen

Golden Member
Oct 21, 2004
1,567
0
0
Well, I decided to expand a bit on the memory bandwidth thing to make it clearer for you:

Video cards have extremely high-clocked memory because the memory is soldered onto the same PCB as the chip and the trace lengths are WAY WAY shorter than they are for regular system memory (trace lengths being the physical distance between the memory controller and the memory chips). Video cards, because they process massive amounts of data concurrently, require lots and lots of memory bandwidth; this is especially so with the higher-end parts, which are much "wider" than their low-end counterparts. You calculate total realizable bandwidth by multiplying the memory width (32 bits for most single chips, 64 bits for single-channel, 128 bits for dual, 256 bits for quad) by the "effective" clock speed. For example, if you have GDDR4 running at 1GHz (effective 2GHz) and have a 128-bit width then you have 128 bits * 2 billion transfers/sec, which gives you a humongously big number (256Gbits/sec). You then divide this by 8 (8 bits in a byte) and you'll get your bandwidth (32GB/sec, in this case).
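If it helps, here's the same arithmetic as a tiny Python sketch (the clocks and widths are just the examples from this post, nothing official):

# Rough memory-bandwidth estimate: width (bits) * effective transfer rate,
# then divide by 8 to go from bits to bytes.
def bandwidth_gb_per_sec(width_bits, effective_clock_ghz):
    gbits_per_sec = width_bits * effective_clock_ghz  # Gbit/s
    return gbits_per_sec / 8                          # GB/s

# GDDR4 example above: 128-bit bus, 1GHz base (2GHz effective)
print(bandwidth_gb_per_sec(128, 2.0))  # -> 32.0 GB/s

# Dual-channel DDR2 on an A64: 128-bit bus, DDR2-1000 (1GT/s effective)
print(bandwidth_gb_per_sec(128, 1.0))  # -> 16.0 GB/s theoretical peak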

Now, I'll tell you why throwing an ATI chip into an AMD socket (or onto an AMD CPU) WILL NOT give you better performance than a standalone graphics card. Did you see the comparison between 128 bits and 256 bits @ TH? Well, the bits themselves don't mean anything; they only give you the number you need to multiply by the memory clock to get bandwidth. An FX-62 can only get about 10GB/s out of DDR2 1000, which is about 60% of what a GeForce 6600GT can provide its chip (it uses GDDR3 @ 500MHz--1000 effective--on a 128-bit bus), not to mention that the FX-62 would also have to feed itself out of that same bandwidth, not just the GPU. I experience performance losses with that video card if I lower the RAM clock even slightly, and higher-end video cards are significantly more bandwidth-hungry. Now, if ATI can throw 128MB of GDDR3 onto the package or something then you'll very likely see significant performance improvements, but this is probably PHYSICALLY impossible.
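To make the comparison concrete, here's a rough sketch of the numbers I'm working with (the ~10GB/s realizable figure for the FX-62 is an estimate, not a measured spec):

# Bandwidth available to an integrated GPU sharing system memory
# vs. a discrete card with its own GDDR3 (rough figures from above).
system_bw = 10.0              # GB/s an FX-62 roughly realizes from DDR2-1000
gf6600gt_bw = 128 * 1.0 / 8   # 128-bit GDDR3 at 1000MT/s effective = 16 GB/s

print(f"6600GT local memory: {gf6600gt_bw:.0f} GB/s, all for the GPU")
print(f"System memory:       {system_bw:.0f} GB/s, shared between CPU and GPU")
print(f"Ratio: {system_bw / gf6600gt_bw:.0%} -- and the CPU still needs its cut")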

EDIT: A bit is the smallest piece of data you can have; it can either be a 0 or a 1. The bits on the CPU tell you how big the data stored in the registers and manipulated can be (an unsigned 64-bit integer is a number from 0 to 2^64 - 1, something like 18.4 quintillion). This helps out if you want to add really huge numbers, since doing this in 32 bits requires you to break up the operation, using more registers to store the parts of the numbers and usually completing the operation in more than one clock. SSE operations in a CPU, for example, can be up to 128 bits, but usually consist of more than one datum.

Video cards also handle different amounts of precision, which is usually given in bits; floating-point precision is what we usually hear about. The bits on your display resolution tell you the color depth, the amount of different colors your current mode can handle. Then there are transfer rates: the memory transfer rate is given as the amount of data that the memory can transfer in a second (20GB/sec, for example), or the amount of transfers the memory does per pin per second (1Gbit/sec per pin). This last one is usually the effective clock.
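If the "break up the operation" part is unclear, here's a little Python sketch of what a 32-bit machine conceptually has to do to add two 64-bit numbers (illustration only; real hardware does this with an add-with-carry instruction):

# Adding two 64-bit numbers using only 32-bit-wide pieces.
MASK32 = 0xFFFFFFFF
MASK64 = 0xFFFFFFFFFFFFFFFF

def add64_with_32bit_ops(a, b):
    lo = (a & MASK32) + (b & MASK32)        # add the low halves
    carry = lo >> 32                        # did the low add overflow?
    hi = (a >> 32) + (b >> 32) + carry      # add the high halves plus carry
    return ((hi << 32) | (lo & MASK32)) & MASK64

x, y = 2**63, 2**62 + 12345
assert add64_with_32bit_ops(x, y) == (x + y) & MASK64
print(hex(add64_with_32bit_ops(x, y)))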
 

Neos

Senior member
Jul 19, 2000
881
0
0
Thanks. That is a lot to drink in, but I will study it a bit. Actually, I was not suggesting a graphics processor built into the CPU - just on the board, optimized for the board and the CPU - in this case an AM2 setup.

On first read it seems to say that no CPU can keep up with what is fed to it by a high-end vid card. Right?