
ClawHammer in October?



<< There is quite a bit of info on Clawhammer as well.

Here are some details:
-256kb L2 cache
-103 mm^2 die
-67 million transistors
-.13u
>>



The original K7 had 21 million transistors. Thunderbird added 256KB of L2 cache and has about 37 million transistors, which means roughly 16 million transistors went to the L2 cache and the victim buffer. Let's apply this to Clawhammer: with only 16 million transistors for the L2 (an estimate, but good enough for this calc), that leaves 51 million for the core, up from 21 million for the K7.

Hammer is based on the K7 design and contains quite a lot of new features, but there is NO WAY those extra features add up to another 30 million transistors. Also consider the size of the Hammer die: 103mm^2 vs ~80mm^2 for Tbred. How on earth could you pack 30 million more transistors into the die and add only a meager 20mm^2 unless most of them are high-density L2 cache? The only reasonable conclusion is that Clawhammer will have more than 256KB of L2 cache... probably 512KB. Either that or Jerry was lying about the 67 million number, and I seriously doubt that.

By the way, do the calc for 1MB of cache for Sledgehammer, see what number you come up with, and compare it with previous statements... Use Clawhammer's 67 million as a baseline.



<< I believe Intel only made it to 1GHz with the Coppermine. They released a 1.13GHz version, but they recalled it. >>



Didn't Intel release a 1.1GHz Celeron based on the Coppermine core? That was shortly before the Tualatin Celeron came out, IIRC.
 
When we are talking about third-party chipsets, aren't we talking about disabling the Hammer memory controller? Or VIA, for example, just making the AGP chip and SB? I will welcome nVidia, SiS, etc. AGP bridges and south bridges, but I am very much against these third-party chipsets using their own memory controller.
 


<< When we are talking about third-party chipsets, aren't we talking about disabling the Hammer memory controller? Or VIA, for example, just making the AGP chip and SB? I will welcome nVidia, SiS, etc. AGP bridges and south bridges, but I am very much against these third-party chipsets using their own memory controller. >>


Why would the memory controller even come into play with 3rd party board makers? It's integrated onto the Hammer die. So for all intents and purposes, the memory controller is taken care of.
 


<< Why would the memory controller even come into play with 3rd party board makers? It's integrated onto the Hammer die. So for all intents and purposes, the memory controller is taken care of. >>



I remember reading a VIA interview a while back where Hammer chipsets were mentioned. VIA said the integrated memory controller could kill the performance for integrated graphics solutions. I suspect he meant it would kill performance more than their usual integrated chipset offerings ;-) I don't have a link but it's quite likely that you posted the interview in the news section 🙂

So it is quite possible that VIA and others might add a memory controller to their Hammer integrated graphics chipsets. In this case the RAM might be added into the Northbridge in a eDRAM configuration or soldered directly on the motherboard (old style). I guess we'll see though.
 


<<

<< Why would the memory controller even come into play with 3rd party board makers? It's integrated onto the Hammer die. So for all intents and purposes, the memory controller is taken care of. >>



I remember reading a VIA interview a while back where Hammer chipsets were mentioned. VIA said the integrated memory controller could kill the performance for integrated graphics solutions. I suspect he meant it would kill performance more than their usual integrated chipset offerings ;-) I don't have a link but it's quite likely that you posted the interview in the news section 🙂

So it is quite possible that VIA and others might add a memory controller to their Hammer integrated graphics chipsets. In this case the RAM might be added into the Northbridge in a eDRAM configuration or soldered directly on the motherboard (old style). I guess we'll see though.
>>


Yeah, but Anand said that eDRAM was still a bit down the road. I dunno. If VIA is going to be doing their own memory controller, I'd wait til at least the second revision of the chipset😀
 
They only need 8 to 16MB of DDR RAM for integrated graphics. No reason they cannot integrate the video memory controller into the graphics chipset itself.
 
Truthfully, andreasl, it was probably that interview you referred to that got me thinking VIA might use their own memory controller. See here, and below is the key quote from Richard Brown on this matter:

<< But if you look at the history of the PC industry, DRAM speed enhancements occur more times within a given CPU generation than CPU interface enhancements. Therefore, for a CPU-Memory cluster server multiprocessor system, you will need to make a trade-off between the memory size/locality and the memory speed. For the Hammer architecture, such a trade off will definitely be worth it.

For high performance desktops, on the other hand, memory speed and access latency from components other than the CPU are also critical to the performance of the system - just as the VIA Apollo KT266A has demonstrated. A Hammer CPU with local DRAM may not be the best choice for this type of application. With the AGP card for example, it would probably be necessary to increase its onboard memory in order to reduce the dependence on the Hammer CPU's HT bus. For an integrated SMA chipset, it would be unavoidable for the chipset to have a DRAM interface that could serve as a display frame buffer or system memory.

So the way we see things, there will be no single "one size fits all" chipset solution for the Hammer processor. There are a lot of factors that will need to be carefully considered in developing chipsets for each particular market segment. One thing's for sure, however: VIA will have the first cost effective and high performance volume Hammer chipset and will be the validation partner with AMD. VIA will continue to provide the highest speed DRAM controller as a building block of the VIA VMAP architecture. We will provide multiple levels of chipset integration and cost/performance to enable Hammer for different market segments.
>>

I agree totally with you NFS4, I just was thinking in weird terms (I can do that sometimes😱)
 


<< I agree totally with you NFS4, I just was thinking in weird terms (I can do that sometimes) >>


I just didn't quite understand what you were getting at 😉 But I understand now
 
I have to agree with Anand here. It is very likely that K8 will be announced in November; however, I suspect there won't be bunches available at launch and that AMD will stockpile in anticipation of demand.

Some of that demand will be mine 🙂 As soon as they are available, a Clawhammer system will be on my desk!

PM,
Thanks for the info on SOI.

What are your thoughts on AMD's low-k dielectrics (I think they use this on the metallization interconnects only) and their effect on reducing interconnect delay? Surely there must be some advantage, since they are licensing the technology from Dow. I was under the impression that K8 would use this technology as well, even though it hasn't been nearly as obvious as SOI.
 


<<

<< << Can we say ... BitBoys Oy? >>

Yes we can 😉
>>



Does that mean that BitBoys Oy is a possible reality or a total joke? Not sure by the inference you made, Anand and I'd LOVE to know... Thanks.
>>



I know "a bit" about Bitboys, but I can't share all that I know. Here's what I can share:

Basically, Bitboys had the design of their chip ready. They were good to go. But then Infineon (their fab) decided to kill its eDRAM business at the end of last year. Because Infineon broke its agreement with BB, it had to pay compensation. Part of that compensation was not money but fabbing some of the chips. Those chips will be used for testing and sent to reviewers and others in the industry. Basically, BB wants to clear their image as "kings of vaporware".

What BB are currently doing is designing their follow-up chip to Avalanche (the chip Infineon was supposed to fab). To my knowledge, those plans are proceeding well. The fab will be different (of course, since Infineon is exiting the eDRAM market).

In short: Is Bitboys a joke? No, they are not! What they have had over the last few years is incredibly bad luck! Think about it. First, they designed Pyramid3D, the most advanced 3D accelerator of its time (back when the Riva128 and Voodoo Graphics ruled the market). It had (among other things) microcode-programmable geometry (rudimentary programmable T&L), pixel shaders, EMBM... But as they were preparing to launch the chip, TriTech (the company they designed the chip for) lost a patent lawsuit in their audio business and went bankrupt. That killed Pyramid3D. Then they had a new chip ready... and just as they were about to launch it, their fab decided to exit the market. Bad, bad luck.

Usually, people who flame Bitboys (and there are a lot of them) don't know a bit about what's going on in that company.
 