Cache Memory explained...

Nater21

Senior member
Jun 20, 2000
I have seen several posts asking about the difference between 4-way and 8-way associative cache. Here is the answer I posted on several other message boards.

Ok here goes. Anyone feel free to correct me if I'm wrong, but this is what I remember from class.
It all has to do with the way data is stored in the cache. Cache memory is a go-between for the RAM and the CPU, needed because of the incredible difference in speed between the two. The cache stores the most frequently used data so that the CPU can access it quickly instead of having to go to the SLOW RAM.

Cache memory can be set up in several different ways, each having different advantages in speed and cost of implementation. When the CPU has to access data, it knows the address of the data it is accessing. Before looking to that memory address, it checks the cache to see if the data is already there. Therefore, in order for the CPU to know that it is getting the right data, each block of memory in the cache must have several bits (the "tag") associated with it which tell the CPU what location in memory it came from. The way this address is checked is where your question comes into play.
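To make the address-checking idea concrete, here is a small sketch (my own illustration, not from the post above) of how a CPU might split a byte address into tag, set-index, and block-offset fields. The block size and set count are made-up example values:

```python
# Illustrative only: split an address into tag / set index / block offset.
# BLOCK_SIZE and NUM_SETS are example parameters, not from any real CPU.

BLOCK_SIZE = 64   # bytes per cache line  -> 6 offset bits
NUM_SETS = 128    # sets in the cache     -> 7 index bits

OFFSET_BITS = BLOCK_SIZE.bit_length() - 1   # 6
INDEX_BITS = NUM_SETS.bit_length() - 1      # 7

def split_address(addr):
    """Return (tag, set_index, offset) for a byte address."""
    offset = addr & (BLOCK_SIZE - 1)                 # low bits: position in line
    index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)   # middle bits: which set
    tag = addr >> (OFFSET_BITS + INDEX_BITS)         # remaining bits: the tag
    return tag, index, offset

tag, index, offset = split_address(0x12345678)
```

The stored tag bits are what the cache compares against the tag portion of the requested address to decide whether the line really came from that memory location.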

Fully associative cache is the most desirable, as it simultaneously (using comparators) checks every entry in the cache for the address it is looking for. This, however, is extremely expensive to implement. Therefore, the cache is usually broken down into sets, and each memory address maps to exactly one set. An N-way set-associative cache holds N blocks per set, so on a lookup the CPU only has to compare the N tags in that one set in parallel. The larger the number of ways per set, the closer the cache is to being fully associative, and the less often two addresses are forced to fight over the same slot. That is why 8-way associative generally performs better than 4-way.
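A minimal sketch of that set-associative lookup (my own toy model, with made-up parameters): each set holds up to N tags, and only the tags in one set are compared, not the whole cache.

```python
# Toy N-way set-associative cache: only tags, no data, illustrative only.
WAYS = 4        # 4-way set associative
NUM_SETS = 128  # example value

# cache[set_index] is a list of up to WAYS stored tags
cache = [[] for _ in range(NUM_SETS)]

def lookup(tag, set_index):
    """Return True on a hit: the tag is among the N ways of its set."""
    return tag in cache[set_index]

def fill(tag, set_index):
    """Load a tag into its set, evicting the oldest entry if full (FIFO)."""
    ways = cache[set_index]
    if len(ways) >= WAYS:
        ways.pop(0)   # simple FIFO eviction; real caches often use LRU
    ways.append(tag)
```

With WAYS = 1 this degenerates to a direct-mapped cache; with one giant set holding every line it would be fully associative.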

As far as cache hits vs. misses: if the CPU finds the data it is looking for in the cache, it is a hit; if it is not in the cache and must be loaded from RAM, it is a miss. Obviously, on a miss the access time is much longer due to the slow nature of RAM, so more hits is better. This is controlled by the replacement policy employed by the cache. The data in the cache is constantly changing due to the incredible size discrepancy between the cache and main memory, so when new data comes in, the cache must decide which old data to remove. Do you remove the data that has been accessed the fewest times? Or do you remove the data that hasn't been used for the longest time? Either way, if you remove the wrong data, your miss count is going to increase, slowing down your cache.

Well, I hope this answered your question. Sorry if it was too technical, or too simple for that matter (I don't know what your understanding of these things is), but that is the best that I, an electrical and computer engineering student, can offer you. Like I said, if any of this information is incorrect, then feel free to correct me.


Goi

Diamond Member
Oct 10, 1999
Pretty much correct...also, the higher the associativity, usually the higher the hit time, where hit time is the access time in the case of a cache hit. This is because more ways have to be searched within each set, and more wires and comparator logic are needed, causing higher capacitance and delay.