Is there a technical reason why some applications get a speedup from more cache?

Lord Banshee

Golden Member
Sep 8, 2004
1,495
0
0
OK, I am just finishing up Digital Logic, and the latest (and hardest) thing we have learned was hand-assembling programs for a simple CPU that closely resembles the Motorola 68HC12 (as we will be using that microcontroller this summer for our microprocessor class).

From what I can tell, a programmer in a high-level language doesn't have access to the cache like we did when hand assembling. But then I thought: what is the point of some CPUs having more cache than others, and why do only some applications see speed boosts from it?

My first thought was that these applications actually had assembly code in them to take advantage of this, but I gave up on that idea pretty quickly.

My other thought was that maybe they had specific code where their compiler saw ways to speed things up if run on a certain processor with more cache... but wouldn't that limit it and make the code run a lot slower on a CPU with less cache?

So, anyone with an answer, would you mind ending my guessing here?

Thanks, and now I need to get back to studying for my exam in Digital Logic :)

 

BrownTown

Diamond Member
Dec 1, 2005
5,314
1
0
The more spatial and temporal locality there is in the code, the more the cache will help.

It's not really all that difficult to understand: the cache stores the most recently used data, so if you keep using the same data over and over again, it will always be there and you won't have to go out to memory. Also, if the code is going through an array or something, the cache loads full 64-byte lines, so the data you need will already be loaded before the loop or whatever gets to it.
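For example, here is a minimal C sketch (my own illustration, not BrownTown's code) of the difference locality makes: both functions sum the same array, but only one walks it in the order it sits in memory.

```c
/* Minimal sketch of spatial locality: the row-major walk touches memory
 * sequentially, so each 64-byte line fill brings in the next several
 * elements "for free", while the column-major walk jumps a whole row
 * ahead on every step and keeps missing. */
#include <stddef.h>

#define N 1024

double sum_rows(const double a[N][N])      /* good spatial locality */
{
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];                  /* consecutive addresses */
    return s;
}

double sum_cols(const double a[N][N])      /* poor spatial locality */
{
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];                  /* stride of N * 8 bytes */
    return s;
}
```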

Also, the cache is a CPU feature, so the compiler or an assembly writer cannot directly control how it is used. However, you can pretty much assume a modern microprocessor has a cache, so when you are writing a program you can try to access data intelligently, assuming it will go into the cache.
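Here is a rough sketch of that kind of cache-conscious structuring: transposing a matrix in small tiles so the lines being read and written stay resident while you work on them. The BLOCK size is an assumption I picked so a tile fits comfortably in L1; the right number depends on the CPU.

```c
/* Blocked (tiled) matrix transpose: work on one BLOCK x BLOCK tile at a
 * time so its cache lines are reused before they get evicted. */
#include <stddef.h>

#define N     1024
#define BLOCK 32   /* assumed tile size; tune so a tile fits in L1 */

void transpose_blocked(double dst[N][N], const double src[N][N])
{
    for (size_t ii = 0; ii < N; ii += BLOCK)
        for (size_t jj = 0; jj < N; jj += BLOCK)
            /* inner loops stay inside one small tile */
            for (size_t i = ii; i < ii + BLOCK; i++)
                for (size_t j = jj; j < jj + BLOCK; j++)
                    dst[j][i] = src[i][j];
}
```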
 

Lord Banshee

Golden Member
Sep 8, 2004
1,495
0
0
Thanks BrownTown,

I guess I was already thinking of its simple use and just figured there was more to it, lol :)
 

Goi

Diamond Member
Oct 10, 1999
6,766
7
91
Also, the more cache you have, the larger the working set of the program can be before you start running into capacity problems and missing in the cache. In the most extreme case, if your cache only has one block, then you're always gonna miss in the cache unless you keep using that same block over and over again (probably not a very meaningful program then). Cache size also isn't everything. There are many other cache parameters that affect performance, such as associativity, block size, replacement policy, write policy, cache latency, etc.
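A quick way to see the capacity effect Goi describes is a micro-benchmark that walks buffers of growing size: once the working set stops fitting in a cache level, the time per access jumps. This is my own sketch, and the sizes, stride, and pass count are assumptions you would tune per machine.

```c
/* Walk buffers of increasing size, touching one byte per (assumed 64-byte)
 * cache line, and time the accesses. Expect a step up in time per line
 * each time the buffer outgrows a cache level. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define STRIDE 64          /* assume 64-byte cache lines */
#define PASSES 100

int main(void)
{
    for (size_t kb = 4; kb <= 16 * 1024; kb *= 2) {
        size_t bytes = kb * 1024;
        volatile char *buf = malloc(bytes);
        if (!buf) return 1;

        clock_t start = clock();
        for (int p = 0; p < PASSES; p++)
            for (size_t i = 0; i < bytes; i += STRIDE)
                buf[i]++;              /* touch one byte per line */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("%6zu KB: %.1f ns per line\n",
               kb, secs * 1e9 / (PASSES * (bytes / STRIDE)));
        free((void *)buf);
    }
    return 0;
}
```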
 

SamurAchzar

Platinum Member
Feb 15, 2006
2,422
3
76
Adding to the above, you CAN control some caches. On ARM cores, for example, you can lock down some cache lines. So basically, you can preload and lock down some of your data, making sure it never leaves the cache.

 

Lord Banshee

Golden Member
Sep 8, 2004
1,495
0
0
Originally posted by: SamurAchzar
Adding to the above, you CAN control some caches. On ARM cores, for example, you can lock down some cache lines. So basically, you can preload and lock down some of your data, making sure it never leaves the cache.

Wow, that's pretty cool.

I can't wait till I start learning how to implement all this cool computer architecture :)
 

dmens

Platinum Member
Mar 18, 2005
2,274
959
136
Caches can be controlled by software to some extent, for example through prefetches and evicts. One of the major drawbacks of the x86 spec is the lack of explicit cache-control hints, but it is still a useful technique. In spite of the ISA limitations, the hardware prefetch algorithms used today are usually pretty good, even when working alone without software hints.
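To make the first sentence concrete, here is a minimal sketch (mine, not dmens') using the software prefetch and evict primitives that GCC/Clang and x86 SSE2 do expose; the PREFETCH_AHEAD distance is an assumed value you would have to tune.

```c
/* Software cache hints while summing an array: prefetch a few elements
 * ahead of the loop, then explicitly flush a line when we're done with it.
 * Requires an x86 target with SSE2 for _mm_clflush. */
#include <stddef.h>
#include <emmintrin.h>       /* _mm_clflush (SSE2) */

#define PREFETCH_AHEAD 16    /* elements ahead; assumed, tune per machine */

long sum_with_hints(const long *data, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_AHEAD < n)
            /* GCC builtin: read-only access (0), low temporal locality (0) */
            __builtin_prefetch(&data[i + PREFETCH_AHEAD], 0, 0);
        sum += data[i];
    }
    /* explicit evict: flush the line holding data[0] from all cache levels */
    _mm_clflush(&data[0]);
    return sum;
}
```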