@tweakboy: OP did not ask for an apples-to-oranges comparison.
Yet the answer depends on the era. There was a time when the cache was not in the CPU. There was a time when the memory controller was not in the CPU. There was a time when "dual-core" was implemented by dropping two single-core designs onto the same die.
The real question is: where is the bottleneck? That depends not only on the hardware but also on the resource-usage pattern of the algorithm. It is easy to write a (useless) loop whose working set fits in the registers of one core, just as it is easy to write a routine that is mainly network-bound.
As already said, forcing the cores of a dual-core to share something (like the L3 cache) that separate CPUs don't share can be either good or bad. So while it may be difficult to answer "which is faster", one can at least enumerate which components are shared and which are not.
Alas, we are back at "which era". In the current generation of CPUs the memory controller is on-die, so it is common to the cores of a dual-core but separate between two single-core CPUs (i.e. it differs); in earlier generations it sat on the motherboard, so there was no difference.
But in the end it is difficult to say whether the hardware differences in this hypothetical comparison actually cause a substantial performance difference. Perhaps they do.