Why not use SRAM for main memory?

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
How much more does SRAM really cost to produce than DRAM? Shouldn't it be a bit less than 6x, since SRAM has 6 transistors per bit while DRAM has 1, and the pins out of the chip & other control mechanisms don't need to be reproduced 6x?
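
Rough back-of-envelope (the cell sizes below are ballpark assumptions in F², not figures for any particular process):

```python
# Back-of-envelope: why SRAM cost per bit isn't just "6 transistors vs. 1".
# Cell areas in F^2 (units of squared feature size) are rough assumptions.
dram_cell_area_f2 = 8      # 1 transistor + 1 capacitor, roughly 6-8 F^2
sram_cell_area_f2 = 140    # 6 transistors, roughly 120-150 F^2

area_ratio = sram_cell_area_f2 / dram_cell_area_f2
print(f"die area per bit: ~{area_ratio:.0f}x larger for SRAM")  # ~17x, not 6x
```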

It occurs to me that a lot of enthusiasts would be happy to pay 6x the cost for RAM to have the entire system running at cache speed. This would entirely eliminate the need for L2 as well, thereby simplifying CPU design, and the lack of a refresh cycle would simplify chipset design too, I would think.

This would make a lot of sense for multi-CPU servers also.

Why don't we at least have SRAM/DRAM hybrid modules, i.e. increase burst size from 64 bytes to 256 bytes or so and cache the first 1/4 of each 256 bytes in SRAM for low latency while waiting for the DRAM section of the burst to start?
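
A toy timing model of that hybrid idea, with every latency and bandwidth number invented purely for illustration:

```python
# Toy timing model of the hybrid module: the first 64 bytes of each 256-byte
# burst are mirrored in on-module SRAM, the rest streams from DRAM.
# Every number here is an invented placeholder, just to show the shape of the win.
sram_latency_ns = 10.0    # assumed time to first data from the SRAM slice
dram_latency_ns = 50.0    # assumed time to first data from the DRAM array
bytes_per_ns    = 3.2     # assumed burst bandwidth (~3.2 GB/s)

burst_bytes = 256
sram_bytes  = burst_bytes // 4   # first quarter of the burst lives in SRAM

sram_done    = sram_latency_ns + sram_bytes / bytes_per_ns
hybrid_total = max(sram_done, dram_latency_ns) + (burst_bytes - sram_bytes) / bytes_per_ns
plain_total  = dram_latency_ns + burst_bytes / bytes_per_ns

print(f"hybrid: first data at {sram_latency_ns:.0f} ns, burst done at {hybrid_total:.0f} ns")
print(f"plain DRAM: first data at {dram_latency_ns:.0f} ns, burst done at {plain_total:.0f} ns")
```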
 

zsouthboy

Platinum Member
Aug 14, 2001
2,264
0
0
Expense is the main reason... but also, isn't it hard to make SRAM in large sizes for some reason?

I too wonder this... I would _definitely_ pay a lot more for like 512 megs of essentially L3 cache :D
 

borealiss

Senior member
Jun 23, 2000
913
0
0
Yields are much lower for SRAM than for SDRAM, and SDRAM can be made in bigger quantities too. The main cost of DRAM doesn't come from the auxiliary circuits like clock control mechanisms or traces; it comes mainly from yield and die space.
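
A quick sketch of how that plays out, using a standard exponential defect-density yield model with invented numbers:

```python
import math

# Toy yield model: yield = exp(-defect_density * die_area) (Poisson defects).
# Defect density and die sizes are invented; only the shape of the curve matters.
defects_per_cm2 = 0.5
dram_die_cm2    = 1.0                  # assumed die size for some DRAM capacity
sram_die_cm2    = dram_die_cm2 * 17    # same bits in SRAM at ~17x the cell area

yield_dram = math.exp(-defects_per_cm2 * dram_die_cm2)
yield_sram = math.exp(-defects_per_cm2 * sram_die_cm2)

print(f"DRAM-sized die yield: {yield_dram:.0%}")   # ~61%
print(f"SRAM-sized die yield: {yield_sram:.2%}")   # ~0.02% -- effectively unmanufacturable
# Cost per good die scales roughly with area / yield, so the area penalty compounds.
```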
 

Shalmanese

Platinum Member
Sep 29, 2000
2,157
0
0
Considering that high-end server chips only have ~1MB of L3 cache, I don't think it's exactly cheap. What I think would be even cooler is if the CPU could access video memory.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: glugglug
How much more does SRAM really cost to produce than DRAM? Shouldn't it be a bit less than 6x, since SRAM has 6 transistors per bit while DRAM has 1, and the pins out of the chip & other control mechanisms don't need to be reproduced 6x?

It occurs to me that a lot of enthusiasts would be happy to pay 6x the cost for RAM to have the entire system running at cache speed. This would entirely eliminate the need for L2 as well, thereby simplifying CPU design, and the lack of a refresh cycle would simplify chipset design too, I would think.

This would make a lot of sense for multi-CPU servers also.

Why don't we at least have SRAM/DRAM hybrid modules, i.e. increase burst size from 64 bytes to 256 bytes or so and cache the first 1/4 of each 256 bytes in SRAM for low latency while waiting for the DRAM section of the burst to start?

You can't do hybrids because the manufacturing processes are extremely different.

Also, IIRC SRAM uses a LOT more power, so you'd need big heatsinks (not heat spreaders - REAL heatsinks) on each of the memory chips. Another thing to consider is the hit rate of a given cache (how often what you need is already IN cache). If in most applications the hit rate is very high, you won't see much performance gain from further increasing the amount of L2-speed memory.
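
For example, plugging made-up numbers into the standard average-access-time formula shows how little headroom a very high hit rate leaves:

```python
# AMAT = hit_time + miss_rate * miss_penalty, with invented numbers,
# to show why a high hit rate leaves little room for improvement.
hit_time_ns     = 2.0     # assumed L2 hit time
miss_penalty_ns = 100.0   # assumed cost of going to DRAM on a miss

for hit_rate in (0.90, 0.95, 0.99):
    amat = hit_time_ns + (1.0 - hit_rate) * miss_penalty_ns
    print(f"hit rate {hit_rate:.0%}: average access time {amat:.0f} ns")

# At a 99% hit rate the average is already ~3 ns; all-SRAM main memory could
# only push that toward ~2 ns, a small gain for a very large cost.
```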

You do need some control stuff for SRAM as well, and I think the number of transistors in the storage area would be much greater than the number needed for control, so you would see an almost exact 6x increase in transistors.
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Originally posted by: CTho9305
Also, IIRC SRAM uses a LOT more power, so you'd need big heatsinks (not heat spreaders - REAL heatsinks) on each of the memory chips. Another thing to consider is the hit rate of a given cache (how often what you need is already IN cache). If in most applications the hit rate is very high, you won't see much performance gain from further increasing the amount of L2-speed memory.

You do need some control stuff for SRAM as well, and I think the number of transistors in the storage area would be much greater than the number needed for control, so you would see an almost exact 6x increase in transistors.

My understanding was that SRAM is more power efficient (uses less) than DRAM because it doesn't "leak" charge or need to be refreshed.
 

zsouthboy

Platinum Member
Aug 14, 2001
2,264
0
0
Originally posted by: Shalmanese
Considering that high-end server chips only have ~1MB of L3 cache, I don't think it's exactly cheap. What I think would be even cooler is if the CPU could access video memory.

Yes! Why not make your extremely fast (and nowadays 128 megs or MORE) video memory some sort of cache? Of course, only when it's not being used... but still... what would be the technical difficulties of this?
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Originally posted by: zsouthboy
Yes! Why not make your extremely fast (and nowadays 128 megs or MORE) video memory some sort of cache? Of course, only when it's not being used... but still... what would be the technical difficulties of this?


The AGP bus to the video card is still slower than RAM, and it would add latency. (The main difference between RAM and L3 is latency, not burst rate. Theoretically, an SRAM main memory should have even lower latency than L3, because it doesn't need to check a tag RAM first to see whether the data you want is there, and if so, where.)

Video memory on a high-end card has a higher burst rate than your main system RAM, but the latency is about the same, and sending the data across AGP would kill both. (Besides, one of the main features of AGP is the reverse of that: your video card DOES use its own memory as a cache, and if you have too many textures the excess is stored in main system RAM.) So that suggested system is sort of already in place, if your code is compiled to run directly on the NV30 or Radeon 9700 rather than on your main CPU.
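
Some rough numbers (era-appropriate guesses, not measurements) to show how badly the extra hop hurts:

```python
# Rough fetch-time comparison for a 64-byte cache line: main system RAM vs.
# video memory reached across AGP. Latencies and bandwidths are guesses
# (AGP 4x ~1 GB/s, DDR system RAM ~2.7 GB/s), not measured values.
line_bytes = 64

sysram_latency_ns, sysram_bytes_per_ns = 120.0, 2.7
agp_latency_ns,    agp_bytes_per_ns    = 400.0, 1.0   # 1 GB/s == 1 byte/ns

sysram_fetch = sysram_latency_ns + line_bytes / sysram_bytes_per_ns
agp_fetch    = agp_latency_ns    + line_bytes / agp_bytes_per_ns

print(f"system RAM: ~{sysram_fetch:.0f} ns per line")        # latency dominates
print(f"video RAM over AGP: ~{agp_fetch:.0f} ns per line")   # extra hop makes it worse
```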
 

AbsolutDealage

Platinum Member
Dec 20, 2002
2,675
0
0
Originally posted by: glugglug
My understanding was that SRAM is more power efficient (uses less) than DRAM because it doesn't "leak" charge or need to be refreshed.

DRAM "leaks", but requires very little steady-state power (other than the refresh pulse, that is).
 

Shalmanese

Platinum Member
Sep 29, 2000
2,157
0
0
Why not create a dedicated, low-latency bridge between the CPU and GPU specifically for this purpose?
 

Haden

Senior member
Nov 21, 2001
578
0
0
Originally posted by: zsouthboy
Yes! Why not make your extremely fast (and nowadays 128 megs or MORE) video memory some sort of cache? Of course, only when it's not being used... but still... what would be the technical difficulties of this?

Why is the current situation bad? I mean, video RAM is for video data; programs can (and should) keep sprites/buffers in video memory when possible, so the data doesn't need to travel anywhere when drawing.