As I recall, the initial shift to a slot architecture was to enable Intel to place the L2 cache modules on the PCB of the CPU. Once they were able to shift the L2 cache to on-die, the need for slot architecture vanished.
Is this the same for AMD? They made the same switch at roughly the same time (slightly afterwards), which raises the question: were they buying Intel technology or reverse engineering their own solution?
Actually, the switch to a slot design for the Pentium II was not needed to keep the L2 cache close to the core: the Socket 8 Pentium Pro had already managed that with its integrated on-package L2. The real reason Intel switched to Slot 1 was to briefly monopolize the market and try to kill AMD's cheap new K6, which was humiliating their P55C chips.
Well there is a grain of truth in what modus said.
Slot 1 was to help shift away from Socket 7 for competitive reasons.
But the Pentium 2 didn't have on-package cache like the Socket 8 P-Pros. At the time, with manufacturing processes of .35 and even .5 micron, on-die cache was quite expensive, so it wasn't totally for monopoly's sake. Besides, even on Socket 7 the P2 would have had little to fear from the K6 line.
The AMD K6s and Intel Pentiums and P-MMXs all had the L2 cache running at front-side bus speed, off-chip.
The Athlon Classic, P2, and P3 Katmai all had L2 cache running at half the clock speed. It wasn't on-chip, but it was on the little processor-cartridge PCB; they couldn't have run the L2 cache at half speed with it out on the motherboard.
The T-Bird and P3 Coppermine have full-speed L2 cache built onto the die, so they don't need the slot cartridge anymore; they moved back to a socket to save a little money on the CPUs.
The point is, a slot design was not necessary just to have L2 cache running at some fraction of the CPU clock. The PPro did this by integrating the cache into the CPU package -- basically strapped to the top of the core. Intel could have implemented something similar and released a new Socket 7 chipset. They didn't, because they wanted Socket 7 to die a quick death, which they knew would lock out AMD and Cyrix (still a player at that time) for quite a while until they regrouped and either released processors that worked on Intel boards (which they would be sued for) or came up with their own standard.
What Intel didn't bank on was the outstanding longevity of Socket 7, and AMD's success at building on the standard to create Super7 and appeal to the value-conscious market.
<<Besides...even the P2 hardly had anything to fear from the K6 line even if the P2 was socket7.>>
Not quite. In its heyday, a K6-2/300 could match or beat a P2/300 in the majority of benchmarks. 3DNow! was given a fairly good reception considering the relatively small company that promoted it. Had Intel stayed with Socket 7 and given AMD the opportunity to use their chips on Intel boards, the last few years would have been quite different. AMD would have used the extra income to flood the market, capture mindshare, and rush the Athlon project.
A K6-2/300 could compete with a P2/300, but wasn't it much later that the K6-2 came out? Maybe I've messed up my timelines a bit. Don't get me wrong, I had a K6-2/350 and it was a great chip, and I'm not meaning to dig at AMD at all here, but I thought that by the time AMD had the K6-2, Intel's chips were much faster. Perhaps I was wrong there?
OK, though I can agree with what you are saying, I guess it couldn't have been that expensive for Intel to make a Socket 7 P2 with integrated L2 cache; I suppose AMD did it for the K6-3, after all.
But then why did AMD go to SlotA?
Why not go straight from Socket7 to SocketA?
What Noriaki wrote is correct. With Modus's explanation I respectfully disagree - and not just because I work for Intel, but also because what he wrote doesn't really make sense.
The Pentium Pro used Socket 8, which uses GTL+ signalling and is not backwards compatible with Socket 7. Switching to a slot didn't change the signalling method or protocol, and had a negligible impact on pin-level timing. There's no reason why a competitor who could create a Socket 8 part couldn't switch to a slot instead. It's just a form-factor change, not a signalling change.
The reason the Pentium Pro's caching method wasn't on the Pentium II was cost. The Pentium Pro was targeted at the server/workstation market and was (and still is) quite expensive. Part of the reason for this cost was that the MCM (multi-chip module) used on the Pentium Pro was more expensive to produce and introduced yield issues compared to conventional single-chip packages. There's a reason no one else has ever produced an MCM part for CPUs, memory, chipsets, or video cards targeted at the home market, and that reason is cost. A cost-effective method was needed that put the cache very close to the CPU for high-bandwidth, low-latency operation, and this led to Slot 1.
The Slot 1 design allowed Intel to integrate the core and L2 cache into one package until they could bring down the cost of integrating the cache on-die. The PPros were not exactly cheap. Of course, once they were able to do this, the slot design didn't make sense anymore. The popular opinion is that they were also trying to drive AMD out of the market at the time by steering the architecture where AMD couldn't follow. AMD retaliated with the Super 7 concept.
AMD went to a slot for a similar reason, the difference being they had yet to come up with an on-die cache design.
In addition to what pm said, there was a huge economic factor not mentioned. When introduced, PPro yields were below 90%; to test the processor they had to integrate the cache on the module and mount it in the packaging. If the processor was bad, they would end up throwing away all that VERY expensive cache memory. Intel's solution was the larger board with the slot, allowing the processor to be mounted and tested before the cache chips were integrated. This would cost them only the PCB to test the processor, not those expensive cache chips. I have no doubt they used the slot switch to kill Socket 7 dead (and tried to kill AMD with it), but the switch itself was based on technical and economic factors. For the most part, conspiracy theories are usually garbage...
Quote from Tom's Archive at the release of the Klamath: Intel will also shortly release Pentium Pro CPUs with 1 MB L2 cache, which is very interesting for server system, but fairly neglectable for the average user, especially since it'll cost more than 2000 bucks.
For the most part conspiracy theories are usually garbage, but it's worth making an exception in Intel's case.
What lurks in the back of a CEO's mind... "Only the paranoid survive" yes thank you, Andy Grove. Funny - or scary? You decide.
Intel created Socket 8 to stop AMD from 'copying' them. Socket 8, not Slot 1, was what Intel used to stop AMD from copying and competing. Intel patented some of the pins on the PPro, which makes it impossible for AMD to copy it without infringing.
The reason for going to Slot 1 is also pretty simple. With the PPro's design, the CPU could not be tested unless the whole thing was in place (the L2 cache next door as well), so a slight problem in any of the parts made the whole CPU useless. This is part of the reason Intel wanted to go to Slot 1, which is much cheaper, using third-party cache modules and running them at 1/2 the CPU speed.
<<The Pentium Pro used Socket 8 which uses GTL+ signalling and is not backwards compatible with socket 7>>
Good point. I forgot about that.
Still, the success of the K6-x demonstrated that Socket7 had just as much potential for high performance CPU design as Slot1/Socket8/GTL+ did. Heck, tack a stronger FPU onto the K6-2+ and give it a 133 MHz FSB and you've got a part that easily competes with today's fastest Celerons. That Intel dropped the P55C at .35 micron and 233 MHz -- when it clearly had the potential to ramp higher and reap the benefits of Super7 bus speeds and, perhaps, on die L2 down the road -- showed that they wanted Socket 7 to die before its time and pull AMD and Cyrix down with it. Abandoning Socket 7 so quickly was as much a predatory business move as it was a "necessity to deliver high performance technology to demanding consumers."
<< Still, the success of the K6-x demonstrated that Socket7 had just as much potential for high performance CPU design as Slot1/Socket8/GTL+ did. >>
I guess my first rebuttal is to turn your argument on its head: if Socket 7 was so great, then why did AMD abandon it in favor of the EV6 bus? When they did that, they left Cyrix and WinChip behind while they used DEC's proprietary bus interface. Clearly AMD could have continued to move the K6-2 forward into 0.18um technology, and they probably could have ramped it substantially faster; there was plenty of technology headroom left. Like you said, AMD could have raised the bus frequency to 133MHz (well, I disagree with this, but let's assume it's true) and improved the processor and been competitive.
And eventually, when they wanted to move forward in technology, they should have used an open specification so that everyone could have competed together. They should have spent a lot of money and time and engineering effort and released it freely (like, for example, Intel did with USB and PCI) so that WinChip and Cyrix could compete effectively with them. Modus, you seem to be arguing that technology shouldn't move forward and that, if it does, and I invent a better way to do something, then I really should give it away freely so that my competition can use the fruits of my labor to compete better with me.
If socket 7 is so wonderful, why didn't AMD put the Athlon on the super7/socket7 platform?
If the slot 1 design was purely a move designed to shake off the competition, then why did AMD create slot A?
But these questions aren't really a good argument for why a newer bus, or even a different socket or slot, was needed. First, there are the greatly increased power requirements of current processors. We have dramatically cut the voltage (from ~3V to ~1.7V) while dramatically increasing the current (because power is higher now than it was back then, when we had the higher voltage). This mandates more power pins to avoid huge IR drops. But if you add pins to a socket, it's not really the same socket anymore.
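To put rough numbers on the IR-drop point: since I = P / V, cutting the supply voltage while raising power multiplies the current the socket's pins must deliver. The wattage and voltage figures below are ballpark assumptions for a late Socket 7 class part versus an early Athlon class part, not datasheet values.

```python
# Rough sketch of why lower core voltage plus higher power forces more
# power pins. All figures are assumed ballpark values, not datasheet specs.

def supply_current(power_w: float, voltage_v: float) -> float:
    """Current the power pins must carry: I = P / V (Ohm's-law arithmetic)."""
    return power_w / voltage_v

old_amps = supply_current(power_w=17.0, voltage_v=3.3)  # Pentium MMX class (assumed)
new_amps = supply_current(power_w=50.0, voltage_v=1.7)  # early Athlon class (assumed)

print(f"old: {old_amps:.1f} A, new: {new_amps:.1f} A")
print(f"current grew by {new_amps / old_amps:.1f}x")
```

Under these assumptions, roughly 5 A becomes roughly 29 A; spreading that across many more power and ground pins is what keeps the per-pin current, and hence the IR drop, manageable.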
Second, I'm nearly certain that the signalling technology used in Socket 7 couldn't run much faster than 100MHz. IIRC, the low voltage levels of Gunning Transceiver Logic (GTL) are required for high-frequency operation on a PCB. In GTL+ the signals use termination to a fixed reference voltage (1.5V) and swing from there; this minimizes noise on the PCB. I don't think you could achieve anywhere near 133MHz using the (IIRC) TTL-style logic used on the older socket. I have a clear idea of how the Pentium Pro/Pentium II/Pentium III bus works, but I don't quite recall how the Pentium bus worked (it's been too long). But I feel confident that if it could have run at 133MHz, someone would have ramped it up there. I think they needed to switch the voltage levels and the logic swing to get it above 100MHz.
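One way to see why the reduced swing helps: the driver has to slew the line across the full logic swing comfortably within each bus clock period. A minimal sketch under assumed numbers (a ~3.3 V TTL-style swing versus a ~1 V GTL+ swing, and an arbitrary fixed slew rate); the exact levels vary by implementation.

```python
# Smaller signal swing -> less of each bus cycle spent slewing the line.
# The slew rate and swing values here are illustrative assumptions only.

SLEW_V_PER_NS = 1.0  # assumed driver slew rate

def transition_time_ns(swing_v: float, slew_v_per_ns: float = SLEW_V_PER_NS) -> float:
    """Time to traverse the full logic swing at a constant slew rate."""
    return swing_v / slew_v_per_ns

period_133mhz_ns = 1e3 / 133.0  # ~7.5 ns per bus clock

for name, swing in [("TTL-style, ~3.3 V swing", 3.3), ("GTL+, ~1.0 V swing", 1.0)]:
    t = transition_time_ns(swing)
    print(f"{name}: {t:.1f} ns slewing out of a {period_133mhz_ns:.1f} ns period")
```

Under these assumptions the full-swing signal eats nearly half the 133 MHz period just transitioning, while the small GTL+ swing leaves most of the cycle for settling and sampling.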
So, my point is that I think it would be impossible to put an Athlon or a Pentium III on the old Socket 7 bus without a substantial bandwidth cut and a dramatic redesign of the power consumption (to get it below ~10W).
But discussions of older technology being propped up to avoid incompatibility ignore the fact that the computer revolution is built on finding newer, and nearly always better, ways of doing things. We probably could have really fast 500MHz 486s nowadays if everyone had continued to shrink the 486 core onto 0.25um and 0.18um processes. But instead we use much faster (and larger and more complex) superscalar cores running at 1GHz+. Sure, we abandoned the 486-era sockets in favor of much faster designs, and that messed things up for a while, but the newer sockets are both necessary (for power) and higher bandwidth.
OK, the gist is (disregarding the conspiracy theories): technology didn't allow Intel to create a socketed chip with on-die cache cost-effectively, so they went to a slot until they figured it out, and AMD followed for the same reason.
I doubt that AMD followed for the exact same reason, since they had the K6-3, which had integrated L2. But then the K6-3's yields sucked, so I'm going to assume they did it to match Intel with the slot thing, and for the same reason Intel did (marketing), until they could "perfect" their manufacturing methods and the plant in Dresden came online ;-)