When comparing cards based on the same technology (6600GT vs. 6800GT, for example), there are three factors to consider, in this order:
1) Pixel throughput. This is calculated by multiplying the core clock speed by the number of pixel pipelines (Newegg lists these as Pixel Pipelines):
6600GT = 500 MHz * 8 = 4000 megapixels / sec
6800GT = 350 MHz * 16 = 5600 megapixels / sec
2) Memory throughput. This is calculated by multiplying the memory clock speed by the bus width in bits, then dividing by 8 (8 bits per byte); both throughput calculations are also worked through in a quick code sketch after this list:
6600 GT = 1000 MHz * 128 / 8 = 16000 megabytes per second
6800 GT = 1000 MHz * 256 / 8 = 32000 megabytes per second
3) Memory size. This is a much less significant factor than the two above. Memory size mostly affects texture quality; its impact on speed is small as long as you set an appropriate texture quality level. Few games have enough textures to require setting texture quality below max on a 128MB card, though they are starting to appear (BF2 is one). Once you turn texture quality down for a 128MB card, the performance impact is minimal.
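To make the arithmetic concrete, here is a minimal Python sketch of the first two calculations, using the same example numbers as above. The function names and the spec table are just illustrative, not anything from a real tool.

# Rough throughput comparison for cards of the same generation.
# Clock speeds are in MHz, memory bus width is in bits.

def pixel_throughput(core_mhz, pixel_pipelines):
    """Core clock * number of pixel pipelines -> megapixels per second."""
    return core_mhz * pixel_pipelines

def memory_throughput(mem_mhz, bus_width_bits):
    """Memory clock * bus width / 8 -> megabytes per second."""
    return mem_mhz * bus_width_bits / 8

cards = {
    "6600GT": {"core_mhz": 500, "pipes": 8,  "mem_mhz": 1000, "bus_bits": 128},
    "6800GT": {"core_mhz": 350, "pipes": 16, "mem_mhz": 1000, "bus_bits": 256},
}

for name, c in cards.items():
    print(name,
          pixel_throughput(c["core_mhz"], c["pipes"]), "Mpixels/s,",
          memory_throughput(c["mem_mhz"], c["bus_bits"]), "MB/s")

# Output:
# 6600GT 4000 Mpixels/s, 16000.0 MB/s
# 6800GT 5600 Mpixels/s, 32000.0 MB/s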
In my experience, pixel throughput is generally the most important of these factors, especially in games that use the most recent DirectX features.
You cannot directly compare these throughputs across brands or generations because the cores have different designs (e.g., comparing a 7800 GTX to a 6800GT this way is not an apples-to-apples comparison; you would need to add a factor to account for the new features in the newer generation, which skew the numbers). Within a specific design generation, though, this kind of comparison is pretty valid.