486-multicore vs. opteron dual-core

jhu

Lifer
Oct 10, 1999
Found this on Aceshardware. Perhaps someone could shed some light on something like a 12-core 486 vs. an Opteron? I.e., why hasn't anyone tried something like this?
 

Smilin

Diamond Member
Mar 4, 2002
Putting in 12x the cores will not give you 12x the speed. You'll end up with pipelines starved for data to work on. You'd also need 12x the cache, and even then you won't hit 12x the speed, simply due to multiprocessor overhead. The die would be huge (meaning an expensive chip to produce) and not really perform all that well.

You'll see dual and quad cores soon. Anything beyond that and the performance won't be worth the price.
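Quick back-of-the-napkin Amdahl's law sketch of what I mean (the 10% serial share and the per-core sync cost below are invented numbers, just to show the shape of the curve):

```
# Amdahl's law with a crude sync-overhead term:
# speedup(n) = 1 / (serial + parallel/n + overhead*(n-1))
# the serial fraction and per-core overhead are made-up numbers.

def speedup(cores, serial=0.10, overhead=0.02):
    parallel = 1.0 - serial
    sync = overhead * (cores - 1)      # coordination cost grows with each core
    return 1.0 / (serial + parallel / cores + sync)

for n in (1, 2, 4, 12):
    print(f"{n:2d} cores -> {speedup(n):.2f}x")

# prints roughly 1.00x, 1.75x, 2.60x, 2.53x:
# with these numbers, 12 cores actually loses to 4.
```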
 

byosys

Senior member
Jun 23, 2004
209
0
76
Originally posted by: Smilin
You'll see dual and quad cores soon. Anything beyond that and the performance won't be worth the price.

Not yet, anyway. Eventually we'll have fast enough buses to support 8- and 16-core processors, small enough fabs to make them economically feasible, and reasonable cooling for the task, but that time isn't now or in the near future. My guess is 7-10 years, but that is strictly a guess.
 

Smilin

Diamond Member
Mar 4, 2002
Unless something unforeseen happens (it often does), multicore designs will be the only thing that allows Moore's law to continue for the next decade, maybe two. Not sure what it will take after that. Probably photon-based processing, or nanotechnology/quantum computing.
 

thermalpaste

Senior member
Oct 6, 2004
The 486 is crippled by a weak floating-point unit. Besides, the cache design of the primitive 486 is a mess, considering that it had a single 8 KB L1 cache and no separate data and instruction caches.
There is also no branch prediction unit to guess which way branches will go and keep the pipeline fed, and no MMX, SSE, et al.
Thus it's not feasible to have a 486 multi-core.
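To show what that missing predictor buys, here's a toy 2-bit saturating-counter predictor (my own sketch, nothing from the article). Even this trivial scheme guesses a typical loop branch right about 90% of the time, and every correct guess is a pipeline flush avoided:

```
# toy 2-bit saturating-counter branch predictor (the rough kind of
# mechanism the 486 lacks). states 0-1 predict "not taken", 2-3 "taken".
def predictor_hit_rate(outcomes):
    state, hits = 2, 0
    for taken in outcomes:
        if (state >= 2) == taken:      # did the prediction match?
            hits += 1
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return hits / len(outcomes)

# a loop branch: taken 9 times, then falls through, over and over
loop_branch = ([True] * 9 + [False]) * 100
print(f"{predictor_hit_rate(loop_branch):.0%} predicted correctly")  # 90%
```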
 

jhu

Lifer
Oct 10, 1999
Originally posted by: thermalpaste
The 486 is crippled by a weak floating-point unit. Besides, the cache design of the primitive 486 is a mess, considering that it had a single 8 KB L1 cache and no separate data and instruction caches.
There is also no branch prediction unit to guess which way branches will go and keep the pipeline fed, and no MMX, SSE, et al.
Thus it's not feasible to have a 486 multi-core.

True, but the 486 only had about a million transistors. How many transistors do current CPUs have? The Opteron has about 105 million, though that probably includes the cache. If you read the link, the guy talks about a modified 486 with an improved FPU including MMX and SSE2/3. You could probably fit more than 12 cores and still use the same amount of cache as the Opteron.
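Back-of-envelope version of that transistor math (all figures below are the rough guesses from this thread plus the usual 6-transistors-per-SRAM-bit rule of thumb, not real die data):

```
# rough transistor budget using the guesses above:
# ~105M transistors total for an Opteron, ~1 MB of L2,
# and about 6 transistors per SRAM bit.
OPTERON_BUDGET = 105_000_000
L2_BYTES       = 1 * 1024 * 1024
SRAM_T_PER_BIT = 6

cache_t = L2_BYTES * 8 * SRAM_T_PER_BIT    # ~50M for the cache alone
logic_t = OPTERON_BUDGET - cache_t         # ~55M left for cores

# a stock 486 was ~1.2M transistors; guess the modified core
# (better FPU, MMX, SSE2/3) triples that
CORE_T = 3 * 1_200_000

print(f"cache: {cache_t/1e6:.0f}M, logic left: {logic_t/1e6:.0f}M")
print(f"cores that fit: {logic_t // CORE_T}")   # about 15
```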
 

thermalpaste

Senior member
Oct 6, 2004
Originally posted by: jhu
Originally posted by: thermalpaste
The 486 is crippled by a weak floating-point unit. Besides, the cache design of the primitive 486 is a mess, considering that it had a single 8 KB L1 cache and no separate data and instruction caches.
There is also no branch prediction unit to guess which way branches will go and keep the pipeline fed, and no MMX, SSE, et al.
Thus it's not feasible to have a 486 multi-core.

True, but the 486 only had about a million transistors. How many transistors do current CPUs have? The Opteron has about 105 million, though that probably includes the cache. If you read the link, the guy talks about a modified 486 with an improved FPU including MMX and SSE2/3. You could probably fit more than 12 cores and still use the same amount of cache as the Opteron.


This is a pretty interesting post. I need to do a bit of research on this; I'll get back to you soon.


 

thermalpaste

Senior member
Oct 6, 2004
Rather than using a 486, we could use a Pentium MMX (16 KB data + 16 KB instruction caches, not the older P54C).
Have you also noticed that if we were to share one L2 cache among 12-odd cores, the hit rate may drop drastically if the cache size is insufficient? Besides, that L2 would be difficult to implement for 12-odd 486s; I don't think it's a wise idea to have so many cores contending for one shared L2.
We could use the Pentium because of its superscalar architecture (the U and V pipes being the obvious reason) and its separate L1 caches for instructions and data. By my estimate, a Pentium occupies about 4 times the area of a 486. But what they mentioned in the article was that they were going to heavily modify the 486, so the Pentium core would only occupy about 3 times the area of that beefed-up 486, which is pretty much okay.
Besides, if AMD starts manufacturing cores on a 65 nm process, then compared to cores built at 0.13µ the new core takes about a third of the area (a quarter under ideal scaling), so we could squeeze about 3 cores into the same space. And if each individual core maxes out at 65 nm, they could possibly deepen the pipeline a bit, though that may slow the CPU down a bit.
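The shrink math as a quick sketch (the starting die size is a made-up ballpark, and the ideal square-law is the best case; real shrinks do a bit worse, which is why I said a third rather than a quarter):

```
# ideal process shrink: die area scales with the square of the
# feature-size ratio. real shrinks fall short of this.
def shrunk_area(area_mm2, old_nm, new_nm):
    return area_mm2 * (new_nm / old_nm) ** 2

CORE_AT_130NM = 20.0   # made-up ballpark for a small core, in mm^2

a_65 = shrunk_area(CORE_AT_130NM, 130, 65)
print(f"same core at 65nm: {a_65:.1f} mm^2")            # 5.0 mm^2
print(f"ratio: {a_65 / CORE_AT_130NM:.2f}x the area")   # 0.25x, i.e. 1/4
```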
 

tinyabs

Member
Mar 8, 2003
Originally posted by: Smilin
Putting in 12x the cores will not give you 12x the speed. You'll end up with pipelines starved for data to work on. You'd also need 12x the cache, and even then you won't hit 12x the speed, simply due to multiprocessor overhead. The die would be huge (meaning an expensive chip to produce) and not really perform all that well.

You'll see dual and quad cores soon. Anything beyond that and the performance won't be worth the price.

Agreed!

Actually the gist is that the 486 was not designed for multiprocessing. Using an Opteron will give you better performance than twelve 486s.

I guess the idea is that multiple simpler, faster processors can do more total work than one Opteron. But if twelve 486s can beat an Opteron, they aren't really 486s anymore. Moreover, that is scaling out instead of scaling up. A more recent example is the Athlon MP.

Along the lines of scaling out, applications on multiple cores have a more predictable response time. It may not be fast, but it is consistent.

Using the 486 would be terribly inefficient and a waste of engineering time.
 

SuperTool

Lifer
Jan 25, 2000
Originally posted by: Smilin
Putting in 12x the cores will not give you 12x the speed. You'll end up with pipelines starved for data to work on. You'd also need 12x the cache, and even then you won't hit 12x the speed, simply due to multiprocessor overhead. The die would be huge (meaning an expensive chip to produce) and not really perform all that well.

You'll see dual and quad cores soon. Anything beyond that and the performance won't be worth the price.

It depends. If you use more, simpler cores, you save on R&D because you design one simple core and replicate it. And it depends on your applications. There probably won't be more than quad cores for consumer CPUs soon, but massively multicore chips will be coming out for servers in the not-too-distant future.
http://blogs.sun.com/roller/page/jonathan/20040916
Scroll down to "Niagara".
 

Calin

Diamond Member
Apr 9, 2001
Sun will create a similar system on its Niagara chips:
"Sun's Niagara is expected to use eight highly streamlined Ultrasparc IIi cores, each running four threads, on a 340 mm2 die. It will also integrate a memory controller, multiple Gigabit Ethernet media-access controllers and hardware acceleration for triple-DES and Rc4 security. Sun intends to use Niagara as the CPU in its 2005-class uniprocessor server blade designs and as a network processor in other systems."
Taken from here: http://www.eetimes.com/printab...7&_requestid=57485

Calin