Originally posted by: iwantanewcomputer
anybody see the new article about all the AMD and Intel processors?

it shows an Itanium in the future with ~24 MB L3 cache and 1700 million transistors. how much would one of these cost, since isn't cache a very expensive thing to add?
Transistor for transistor and die area for die area, cache is much "cheaper" than core logic in terms of design effort, power consumption (static and dynamic), and performance ROI.
now here's some fun math, assuming this thing is 90nm, and based on the size of Prescott vs. its transistor count:

112 mm² / 125 million transistors

with 1700 million transistors that would be a die size of ~1523 mm². that's huge.

just rambling, any other opinions?
Cache has a transistor density close to 10x higher than core logic, especially for higher levels of cache. For what it's worth, the maximum die size allowed with 90nm lithography tools is between 600 mm² and 700 mm² (I can't recall exactly).
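A quick back-of-envelope sketch (using only the numbers in this thread; the 10x density factor is from the post above, and the 6T-SRAM transistor count and cache/logic split are assumptions for illustration) shows why the naive 1523 mm² figure overshoots:

```python
# Naive estimate: scale Prescott's overall density (~125M transistors
# in ~112 mm^2) up to 1700M transistors.
PRESCOTT_AREA_MM2 = 112.0
PRESCOTT_XTORS_M = 125.0
naive_area = 1700.0 / PRESCOTT_XTORS_M * PRESCOTT_AREA_MM2  # ~1523 mm^2

# Correction: most of those 1.7B transistors are cache, and cache packs
# roughly 10x denser than core logic. Assuming 6T SRAM cells, a 24 MB
# L3 alone is about 24 * 2^20 bytes * 8 bits * 6 transistors:
cache_xtors_m = 24 * 2**20 * 8 * 6 / 1e6   # ~1208M transistors
logic_xtors_m = 1700.0 - cache_xtors_m     # remainder: logic, tags, etc.

logic_density = PRESCOTT_XTORS_M / PRESCOTT_AREA_MM2  # Mxtors per mm^2
cache_density = 10 * logic_density                    # ~10x denser

est_area = logic_xtors_m / logic_density + cache_xtors_m / cache_density
print(f"naive: {naive_area:.0f} mm^2, density-corrected: {est_area:.0f} mm^2")
```

With those assumptions the corrected estimate comes out in the mid-500 mm² range, i.e. comfortably under the reticle limit mentioned above rather than an impossible 1500+ mm².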
Originally posted by: Dman877
Seems to me that a proc with 24 MB of high-speed cache will most likely cost 5 figures...
Not at all, see the new price list in IntelUser2000's post. "Cost" and "price" are two entirely different issues... FWIW, I remember reading a Microprocessor Report article that estimated the cost of McKinley (421 mm², 1 GHz Itanium 2) to be around $125. And that chip was produced on 200mm wafers, while the upcoming 90nm Itanium will be produced on 300mm wafers at a much higher volume. In addition, Itanium 2's L3 cache architecture, which occupies much of the chip area, has a lot of redundancy built in, which, combined with the defeaturing of the L3 cache for lower-priced variants, has a large positive effect on yield. Itanium 2's core area is actually relatively small, a product of both the architecture and design methodology.
Originally posted by: Dman877
Just out of curiosity, isn't 24 MB of cache a complete waste of die space? Wouldn't they see much more performance gain from adding more FPUs, memory controllers, and integer units to the core? 24 MB seems excessive. I don't even know of applications that require such capabilities.
It's not like all we're doing is adding more cache...there's a HUGE host of new features going into the 90nm Itanium 2. (I might be a little defensive since I'm on the design team.)
As for the decision to go with the large L3 cache, it definitely is very useful for Itanium 2's target market: big databases that are many terabytes in size. For mid-range and back-end database applications, each 1/2 MB of cache adds about 1% in performance. Witness IBM POWER5, which has on-die memory controllers and a huge amount of scalable bandwidth, yet still retains an (off-die) 144 MB L3 cache shared among 4 chips.
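The rule of thumb above can be put into numbers (a hypothetical illustration only; the ~1% per 1/2 MB figure is from this thread, and real scaling flattens out eventually):

```python
def est_db_speedup(extra_cache_mb: float, pct_per_half_mb: float = 1.0) -> float:
    """Estimated % performance gain for database workloads from adding
    cache, using the thread's ~1% per 0.5 MB rule of thumb."""
    return extra_cache_mb / 0.5 * pct_per_half_mb

# e.g. growing an L3 from 6 MB to 24 MB under this (linear) rule:
print(f"+{est_db_speedup(24 - 6):.0f}%")  # +36%
```

A double-digit gain on the target workload is why the die area goes to cache rather than more execution units, which those cache-miss-bound workloads couldn't keep fed anyway.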
Out of curiosity, would it not be cheaper and more efficient for Intel to develop an SMP setup similar to what AMD does with the Opterons?
I wish (off the record) that Itanium had on-chip links and memory controllers already, but there are also non-technical reasons to consider. Itanium uses custom chipsets from HP, SGI, IBM, Unisys, NEC and Bull to support single systems with up to 256 CPUs (and 512 soon from SGI). While a single Itanium 2 node uses a shared bus with up to 4 CPUs, these chipsets allow memory bandwidth to scale across multiple nodes as the number of CPUs increases. These systems were all introduced within the last year or so, and a lot of R&D goes into them...we can't just start changing the system interface on a whim. So one of the unfortunate side effects of designing a common CPU used across high-end systems from many different manufacturers is that the system interface can't change quite so often.
Originally posted by: Zebo
Itanium runs what? windows?
...and HPUX, Linux, OpenVMS and NSK, and on a virtualization layer, GCOS and z/OS (Bull and IBM's mainframe OSs). And based on the rumblings from Sun, Solaris may make a return to Itanium soon.