Whether a dual core or a quad core is better depends on what you use your computer for, and how long you plan to keep it in service.
In the simplest terms, processor cores do the computations that make your computer work. Every time you open a program, execute a command (rename a file, create a document, etc.), or even type or move the mouse, the operating system sends a set of tasks (computations, better known as instructions) to the processor to be resolved. Generally speaking, the more cores there are to make computations, the faster a long list of instructions can be completed. Think of the classic supermarket example - with only one cashier, you'd be standing in line all day; with multiple cashiers, the line can be split up fairly evenly to get people checked out in a reasonable amount of time. The more cashiers (cores) there are available, the less time you have to wait.
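The cashier analogy can be sketched in a few lines of Python. This is a toy model I made up purely for illustration (real CPU scheduling is far more complicated), but it shows why adding workers shortens the wait:

```python
import math

def wait_time(customers, cashiers, secs_per_customer=60):
    """Toy model: customers split evenly across cashiers, so the
    worst-case wait is the time for the longest line to drain."""
    longest_line = math.ceil(customers / cashiers)
    return longest_line * secs_per_customer

print(wait_time(12, 1))  # one cashier:  12 * 60 = 720 seconds
print(wait_time(12, 4))  # four cashiers: 3 * 60 = 180 seconds
```

Just like with cores, the benefit only appears if the line can actually be divided up - which is exactly the catch described next.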
Unfortunately, it isn't quite that simple. While a multi-core processor is more than happy to complete tasks on any and all of its cores, the programs that create and send the instructions must be designed to target more than one core. Programs that make use of more than one processor core are considered to be multi-threaded (since a "thread" is a chain of execution instructions, multiple threads mean multiple chains, and each thread can go to a different core and be executed in parallel with the others). Though multi-threaded applications are becoming more common, many still only make use of one or two cores. Generally, only very CPU-intensive applications (video encoding, 3D rendering, etc.) make use of four or more cores at this point in time. Now that multi-core processors are becoming more common, that trend is slowly changing. In the future, applications will hopefully make use of however many cores are available.
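To make "multi-threaded" concrete, here's a minimal Python sketch of a program splitting one job across four threads. (One caveat I should flag: CPython's GIL means these threads won't truly run CPU-bound work in parallel - compiled languages like C or C++ do get real parallelism - but the structure of dividing work into threads is the same idea.)

```python
import threading

def partial_sum(nums, results, i):
    # Each thread sums its own chunk and records the result in its slot.
    results[i] = sum(nums)

data = list(range(1_000_000))
n_threads = 4
chunk = len(data) // n_threads
results = [0] * n_threads
threads = [
    threading.Thread(target=partial_sum,
                     args=(data[i * chunk:(i + 1) * chunk], results, i))
    for i in range(n_threads)
]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for every chain of instructions to finish

total = sum(results)   # same answer as sum(data), work split four ways
```

The point is that the programmer has to do this splitting explicitly - a single-threaded program hands the OS one chain of instructions, and no number of extra cores will speed that chain up.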
So, now that all of that has been said, you need to decide what you use your computer for and how "future-proof" you want it to be. Unfortunately, there is no magic number at which clock speed beats core count (though if the difference were immense, I'd certainly choose one over the other - for example, a 5.0GHz dual core vs. a 2.0GHz quad core, or a 3.0GHz dual core vs. a 2.0GHz 16-core processor). Applications that are single- or dual-threaded (such as many games and office applications) will make better use of the additional clock speed on a dual core, while more heavily multi-threaded applications will run faster on quad cores.
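One rough way to reason about this tradeoff is Amdahl's law: only the parallel portion of a program benefits from extra cores, while the serial portion benefits from clock speed alone. The sketch below is a simplified model with made-up clock speeds and a made-up "parallel fraction" for each kind of workload - real performance depends on far more than this - but it captures why the answer flips depending on the software:

```python
def effective_speed(clock_ghz, cores, parallel_fraction):
    """Toy Amdahl's-law model: the serial part runs on one core,
    the parallel part is split evenly across all cores.
    Returns relative throughput (higher is better)."""
    serial = 1 - parallel_fraction
    relative_time = serial + parallel_fraction / cores
    return clock_ghz / relative_time

# A mostly single-threaded game (say, 20% parallel):
effective_speed(3.0, 2, 0.20)   # 3.0 / 0.90  ≈ 3.33  <- fast dual core wins
effective_speed(2.4, 4, 0.20)   # 2.4 / 0.85  ≈ 2.82

# A well-threaded video encoder (say, 95% parallel):
effective_speed(3.0, 2, 0.95)   # 3.0 / 0.525 ≈ 5.71
effective_speed(2.4, 4, 0.95)   # 2.4 / 0.2875 ≈ 8.35 <- quad core wins
```

Same two hypothetical chips, opposite winners - which is exactly why "what do you run?" matters more than any single spec.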
In summary, here are some benefits to choosing either a dual core or quad core. Read the advice from others and decide from there.
Dual Core
- generally higher clock speed, so better performance in single-threaded applications (such as many games)
- generally cheaper
- dissipates less heat
- uses less power
Quad Core
- better performance in applications that support many threads (video encoding, 3D rendering, etc.)
- more "future-proof" as applications and games become more multi-threaded
- only negligibly more expensive than dual cores considering the possible added benefits
- can be overclocked to achieve the clock speed of similarly priced dual cores
Now that that's out of the way: cache is a bank of memory located on the processor, outside of the cores. If you look at a die shot of a processor (such as this one:
http://www.hardwarezone.com.au/img/data/articles/2008/2537/Phenom-Die-shot_sm.jpg), you can physically see the different areas, some of which are cores and some of which are cache, data paths, etc. In that picture, I believe the cache is the blocky areas near the center-left and center-right, though I'm no expert and could certainly be wrong.
Sets of instructions, such as the threads (chains) I talked about previously, need to be "loaded" into the processor's cache before they can be executed. Cache is blazing fast (and extremely expensive) compared to normal system memory, and a processor spends a considerable amount of time fetching instructions from system memory and placing them in cache before executing them (modern processors generally grab several extra blocks from system memory "just in case" they're related to the instruction(s) it was told to fetch, which reduces the overall time spent fetching; this is called pre-fetching). Since more cache means more instructions can be stored at once without needing to swap anything out to make room for new fetches from system memory, a larger cache generally equates to more performance. Exactly how much cache is enough, and how much extra performance a bigger cache actually buys? I honestly have no clue. I also don't fully understand the difference between L1, L2, and L3 cache, unfortunately.
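To show why fetching whole blocks at a time pays off, here's a toy direct-mapped cache simulator. Everything about it is simplified and the sizes are made up for illustration (real caches are set-associative and far bigger), but it demonstrates the core idea: sequential access reuses each fetched line many times, while scattered access misses almost every time.

```python
import random

def hit_rate(addresses, num_lines=64, line_size=8):
    """Toy direct-mapped cache: each memory block maps to exactly one
    slot; a fetch pulls in the whole line (line_size words) at once."""
    cache = [None] * num_lines          # each slot remembers one tag
    hits = 0
    for addr in addresses:
        block = addr // line_size       # which memory block this word is in
        index = block % num_lines       # the one slot that block can occupy
        tag = block // num_lines        # identifies which block is resident
        if cache[index] == tag:
            hits += 1                   # already in cache: fast
        else:
            cache[index] = tag          # miss: fetch the line from slow RAM
    return hits / len(addresses)

random.seed(0)
sequential = list(range(4096))
scattered = random.sample(range(1 << 20), 4096)

# Sequential: each fetched 8-word line serves the next 7 accesses too,
# so the hit rate is exactly 7/8 = 0.875. Scattered: almost all misses.
print(hit_rate(sequential))
print(hit_rate(scattered))
```

This is also the intuition behind pre-fetching: if the processor guesses (usually correctly) that you'll want the neighboring words next, grabbing them early turns would-be misses into hits.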
Once again, I'm not an expert on cache (or processors in general), so don't take my word as gospel. There may be others here who can correct me on a few points or elaborate on some things that I left unclear. Either way, I hope this helped to explain the mystery behind processors without making your head spin. I find that going "too far" in depth is sometimes the best way to really help people understand why something is the way it is, rather than just making a "quad cores are better!" blanket statement.