
Would it be hard to remanufacture older CPU designs on smaller processes?

MadRat

Lifer
I remember a while back someone posted a link to a site where somebody makes modern-day 486's (only on a smaller process than the originals) for their mini-PC units. As processors become more complex they draw more power. The Tualatin is probably going to be one of the most advanced processors in the mobile market, but it will still draw enough power to suck a laptop battery dry in two hours of regular use. It's not the processor that burns all of the juice, though; it tends to be the hard drive and other moving parts that do the worst of the wasting.

The P200 (non-MMX) was a pretty solid little unit in its day, running on 0.35-micron BiCMOS technology at up to 200 MHz. (It incorporated a mere 3.3 million transistors, and performed at 284 MIPS.) Imagine a 0.15-micron process pushing these puppies out at 600 MHz with a sub-1W draw for a palm-size PC running Windows CE! An integrated L2 cache of no more than 32 KB would probably more than suffice.

Why is this not practical?
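To put rough numbers on the imagined shrink: under first-order (Dennard-style) constant-field scaling, frequency grows roughly as 1/s and the power of the same circuit falls roughly as s², where s is the ratio of new to old feature size. A hedged back-of-envelope sketch, not a real prediction: the P200 was actually fabbed at 0.35 micron, and the ~15.5 W figure below is an assumed TDP.

```python
# First-order (Dennard) constant-field scaling estimate for shrinking
# a Pentium 200 from 0.35 um to 0.15 um. Illustrative numbers only:
# the 15.5 W TDP is an assumption, and real shrinks never scale ideally.

def scaled(freq_mhz, power_w, old_um, new_um):
    """Frequency scales ~1/s, dynamic power of the same circuit ~s^2,
    where s = new feature size / old feature size."""
    s = new_um / old_um
    return freq_mhz / s, power_w * s * s

freq, power = scaled(200.0, 15.5, 0.35, 0.15)
print(f"~{freq:.0f} MHz at ~{power:.1f} W")
```

First-order scaling lands around 467 MHz at under 3 W: short of the hoped-for 600 MHz, but in the same ballpark, and well under the sub-1W goal only if you also drop voltage and gate off idle logic.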
 
Your idea is right. However, embedded devices such as handhelds, cell phones, etc can achieve a higher MIPS rate by using a small and simple RISC chip. These chips are much cheaper to both design and produce in mass quantities. Compilers are also much easier to design, and compiler support is critical for an embedded chip to succeed. x86 is a large and cumbersome instruction set with many special-purpose instructions that are not frequently used. Therefore, the transistors for those instructions represent under-utilized die space, which translates into more cost to produce and buy.

There are many more issues associated with embedded device chips, but these are some of the key ones. I hope this helps answer your question.
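The "under-utilized die space costs money" point can be put in rough numbers: die cost scales worse than linearly with area, because bigger dies mean fewer (and lower-yielding) dies per wafer. This is a hedged sketch using the classic dies-per-wafer approximation; the wafer cost, die areas, and yields below are made-up illustrative figures, not real foundry data.

```python
import math

# Why extra transistors cost real money: die cost is roughly
# (wafer cost) / (good dies per wafer), and dies per wafer falls
# quickly as die area grows. All numbers here are illustrative.

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic approximation: usable wafer area over die area,
    minus a perimeter correction for partial dies at the edge."""
    r = wafer_diameter_mm / 2
    return (math.pi * r * r / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def die_cost(wafer_cost, wafer_diameter_mm, die_area_mm2, yield_frac):
    return wafer_cost / (dies_per_wafer(wafer_diameter_mm, die_area_mm2) * yield_frac)

# A lean RISC core vs. the same core plus rarely-used CISC baggage:
lean  = die_cost(3000.0, 200.0, 50.0, 0.85)   # 50 mm^2, 85% yield
bulky = die_cost(3000.0, 200.0, 80.0, 0.80)   # 80 mm^2, 80% yield
print(f"lean ~${lean:.2f}, bulky ~${bulky:.2f}")
```

With these made-up inputs, 60% more area costs roughly 75% more per die, since yield drops as area grows.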
 
Actually MadRat, you happen to have hit a homerun! I just had a brain fart and remembered that Intel is producing EMP-proof 0.18-micron Pentium 1-class systems for use in military hardware such as spy satellites and tanks. They are encasing the processor and supporting chips/board in a radiation-proof casing so that when a nuke goes off, these devices can still operate. I recall reading the brief about 2 years ago. This was one rare case when the gov't opted for a CISC chip because of pre-existing x86 software. The military has a large installed base of x86 Ada code, and so their best option is to buy x86 processors.
 
I just ran across an article on Motorola's latest offering in the embedded RISC sector. It is also a brief overview of what's happening in that area and mentions other competitors. Here
 
Ah... but I was told that designers preferred x86 for all sorts of embedded uses years ago, because x86 programmers/engineers were plentiful in Silicon Valley. Developing software, and debugging/maintaining it, are also very high costs.
 
I think I'd prefer current designs over much older designs that just happen to be shrunk. For instance, take a look at the new Motorola Dragonball that JJ8 has linked. Dragonball Brochure PDF. Rather than having just a plain old 486, which by today's standards doesn't do a whole lot, you have an integrated solution which has most of what you want for its purpose, without too many superfluous transistors. I guess it's akin to the new nVidia nForce chipset. Smaller (ie. shrunk) is better, but better is also better. And yes, it will support both PalmOS and PocketPC.
 


<< The embedded market years ago is vastly different than the embedded market today. >>

Yeah, the specs of this new Dragonball make a 486 look like child's play in many ways:

Runs at up to 200 MHz
Colour TFT support
External Flash storage support
USB
Bluetooth
etc.

All of this stuff is built into the chip, and much of this didn't even exist when the 486 was around.
 
Interesting question. I just set up a "classic" system for use as a server with a 200 MHz Pentium Pro that is actually a pretty good performer under Win2K. I was surprised how hot these 0.35-micron chips run... much hotter than my Tbird does. I actually burned my hand on the heatsink. I wonder how much cooler a 0.15-micron PPro would run...
 
Another issue is that some chips use a large number of pins, and at a small enough process they might become pad limited. Of course, you could add cache to alleviate that, but then it wouldn't be a straightforward process shrink.
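The pad-limit point can be sketched with rough numbers. This is a hedged illustration, not real packaging math: the 100 µm wire-bond pad pitch and the die edge lengths are assumptions, it ignores corner exclusions, and real designs can also move pads into an area array (flip-chip) to dodge the problem.

```python
# Rough sketch of pad limiting: with perimeter (wire-bond) pads, the
# number of pads that fit grows with the die *edge* length, so when a
# process shrink cuts the edge, a high pin count may no longer fit.
# Pad pitch and die sizes below are illustrative assumptions.

def max_perimeter_pads(die_edge_mm, pad_pitch_um):
    """Pads that fit around the perimeter of a square die
    (ignoring corner exclusions)."""
    perimeter_um = 4 * die_edge_mm * 1000
    return int(perimeter_um // pad_pitch_um)

PINS_NEEDED = 296          # e.g. a Pentium-class pinout
PITCH_UM = 100.0           # assumed wire-bond pad pitch

for edge in (12.0, 8.0, 5.0, 3.0):   # die edge shrinking with the process
    pads = max_perimeter_pads(edge, PITCH_UM)
    status = "ok" if pads >= PINS_NEEDED else "PAD LIMITED"
    print(f"{edge:4.1f} mm die: {pads:4d} pads -> {status}")
```

Under these assumptions the 296-pin part fits fine at a 12 mm or 8 mm die edge, but goes pad limited somewhere before 5 mm: exactly the "enough shrinks and things get tight" situation described above.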
 


<< 1.5 GHz 286 ... that would kick ass.

and be about as big as your pinky nail too! woooha
>>



Um.... there are architectural reasons why a 286 wouldn't make it up to 1.5 GHz. Plus, if they shrunk it to 0.18 micron, that'd be absurdly small - think in terms of a tiny fraction of your pinky nail, not the whole thing 😉
 


<< Um.... there are architectural reasons why a 286 wouldn't make it up to 1.5 GHz >>

Not to mention you'd be missing 32-bit flat addressing, pipelining, on-die L1 and L2 cache, an integrated pipelined FP unit, superscalar execution, out-of-order execution, and SIMD instructions, among other things. 😉
 
Sohcan-

Is SIMD really necessary in a non-Multimedia unit?

BurntKooshie-

Did you see the new P4 layout? It's like two-thirds the size of the Socket 370 chip. A lot of pins can be built into a tiny form factor these days. The Pentium had like 296 pins, about three-fifths as many as the new P4 design. I don't think it would be out of the question that such a design could be produced.
 


<< Is SIMD really necessary in a non-Multimedia unit? >>

Probably not in the form of SSE1/2... SIMD operates on vectors of data, such as the arrays you deal with in for loops. One of the first SIMD computers was the Illiac IV which, IIRC, came online in 1972 (long before multimedia was a buzzword 😉).
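To illustrate the "arrays in for loops" point, here's a toy sketch in plain Python (no real SIMD hardware involved): a 4-wide "SIMD" add applies one operation to four array elements per step instead of one element per step.

```python
# Toy illustration of SIMD: one operation applied to a whole vector of
# elements at once, exactly the shape of a typical for loop over an
# array. Here we mimic a 4-wide SIMD add by processing chunks of four.

def scalar_add(a, b):
    # One element per "instruction".
    return [x + y for x, y in zip(a, b)]

def simd4_add(a, b):
    # One 4-wide operation per loop iteration.
    out = []
    for i in range(0, len(a), 4):
        out.extend(x + y for x, y in zip(a[i:i + 4], b[i:i + 4]))
    return out

a = list(range(8))
b = [10] * 8
assert scalar_add(a, b) == simd4_add(a, b) == [10, 11, 12, 13, 14, 15, 16, 17]
```

Same result either way; the win on real hardware is that the 4-wide version issues a quarter as many instructions, whether the data is pixels, audio samples, or plain scientific arrays.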
 
Madrat - I understand, but you're missing my point. The die-size:pin-count ratio gets smaller with every process shrink. Enough shrinks, and things get awful tight around the core. There is such a thing as being pad limited, and it does happen.

As such, I wasn't implying anything about the P4, but I see your point. My point is that for processors designed on large processes, when you shrink 'em enough, the pins get too close. As I mentioned, adding cache can bump the die size back up to be large enough. I never said that 486's or other embedded x86 CPUs would be pad limited, as I don't know exactly how bad the ratio has to be before a chip is pad limited - I just mentioned that it can happen, which is why newer designs are often used for embedded chips rather than simply shrinking older ones ad nauseam.

As for the Pentium vs. P4 issue, while it may have 2/3 the pins, it is far less than 2/3 the size in the same process technology. See my point?
 
The Pentium is probably 12-15% of the transistor count of the P4, right? You're probably correct about it being pad limited. I assume that means not enough room for the interconnects?
 
The Pentium Classic has ~3.3 million transistors, while the P4 has ~42 million, so the Pentium has more like 8% of the P4's transistor count. The P4 has more cache though, which is denser than logic, so it's not that straightforward.

But yes, I was talking about the physical, tiny little wires that connect the pins to the core itself.
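For what it's worth, the ratio quoted above checks out:

```python
# Sanity check on the transistor-count ratio: Pentium Classic vs. P4.
pentium = 3.3e6   # ~3.3 million transistors
p4 = 42e6         # ~42 million transistors
ratio = pentium / p4
print(f"{ratio:.1%}")   # ~7.9%, i.e. "more like 8%"
```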
 
<< Um.... there are architectural reasons why a 286 wouldn't make it up to 1.5 GHz. Plus, if they shrunk it to 0.18 micron, that'd be absurdly small - think in terms of a tiny fraction of your pinky nail, not the whole thing 😉 >>

Architectural shmarchitectural, me & my band of merry Bavarian elves can make anything possible with enough solder.

As for the size, the chip needs pins too; I don't think even with BGA you could shrink it smaller than MY pinky.
 