
AMD demos Bulldozer at Investors conference

busydude

Diamond Member
http://hothardware.com/News/AMD-To-Demo-Bulldozer-At-Investor-Conference/

AMD's 2H Investor Day is tomorrow and rumors whisper that the company will display Bulldozer performance for the first time ever. In the past, AMD has often used Analyst Days to demonstrate upcoming products or to at least discuss them in more detail than it's done previously. If Bulldozer does make an appearance tomorrow it'll have a lot of weight to carry. AMD's share of the server market was flat in Q3 compared to Q2, despite the rapid proliferation of Magny-Cours processors and the AMD 6000/4000 platforms.
I feel this is true, based on JFAMD's statement a few days back:

Why don't we all wait and see what the industry analyst day on the 9th brings, instead of following speculation on the web?

Info on Bulldozer posted by John.

http://blogs.amd.com/work/2010/11/09/server-highlights-from-financial-analyst-day/
 
1MB L2 per core, 1 MB L3 per core.

So, cores won't be fighting for Cache resources?

And does each core have access to only 1MB of L2 and 1MB of L3, not 2MB and 8MB respectively? With this kind of cache hierarchy, won't there be performance degradation compared to current-generation processors?
 
So, cores won't be fighting for Cache resources?

And does each core have access to only 1MB of L2 and 1MB of L3, not 2MB and 8MB respectively? With this kind of cache hierarchy, won't there be performance degradation compared to current-generation processors?

I think he was just referring to ratios.

The L2 is shared in a module (between pairs of cores). The L3 is shared chip-wide.
 
L2 and L3 cache – I have been saying for a long time that we would hold cache details until launch, but there were some compelling reasons to include this information in some of the compiler updates. Having the proper cache sizes helps in the optimization of applications, so we decided that helping our customers and ISV partners optimize ahead of the release outweighed the competitive concerns. Each module will have a massive 2MB L2 cache for the 2 integer cores to share and you’ll see an 8MB L3 cache shared per die (16MB on the 16-core “Interlagos” processor.)
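The per-core numbers being debated in this thread fall out of simple division over the figures John quotes above (2MB L2 per module, 8MB L3 per die). A minimal Python sketch of that arithmetic; the function name and structure are mine, purely for illustration:

```python
# Cache-per-core arithmetic for the figures quoted above. Sizes are in MB;
# the function and its defaults are illustrative, not anything from AMD.

def cache_per_core(modules, l2_per_module_mb=2, l3_per_die_mb=8):
    cores = modules * 2                  # each Bulldozer module has 2 integer cores
    l2_per_core = l2_per_module_mb / 2   # L2 is shared by the 2 cores in a module
    l3_per_core = l3_per_die_mb / cores  # L3 is shared across the whole die
    return cores, l2_per_core, l3_per_core

# An 8-core (4-module) die, matching the "1MB L2 per core, 1MB L3 per core" ratio:
print(cache_per_core(modules=4))   # (8, 1.0, 1.0)
```

The 16-core Interlagos part is two such dies in one package, which is where the 16MB total L3 in John's post comes from.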

John I don't understand why you would have NOT released the cache details before launch 😕

If you want applications to take advantage of your architecture on the day of release then you need to be building compiler support before release...and that means getting the cache info (plus other ISA stuff) into the hands of the compiler teams in advance of the release.

Or are you delineating between the kinds of info you release publicly versus the kinds of info you release to businesses under NDA?
 
John I don't understand why you would have NOT released the cache details before launch 😕

If you want applications to take advantage of your architecture on the day of release then you need to be building compiler support before release...and that means getting the cache info (plus other ISA stuff) into the hands of the compiler teams in advance of the release.

Or are you delineating between the kinds of info you release publicly versus the kinds of info you release to businesses under NDA?

We released it for compilers, as I said in the blog. We were trying to keep it from getting too public, but obviously that is not going to happen, so we went wide with it at analyst day.
 
– This is the most asked question that I get. Today we gave granularity down to the quarter. We expect to launch the client version of “Bulldozer” (code named “Zambezi”) in Q2 2011. The server products (“Interlagos” and “Valencia”) will first begin production in Q2 2011, and we expect to launch them in Q3 2011.

If I'm thinking right, I thought in the past, server parts were projected to come out first. Did Zambezi get pulled ahead, or Valencia get pushed back?
 
so what happened, did they demo this?
A cache memorandum!?!? That's it!?!?

To be frank.. the demo has been a bit underwhelming. New info regarding the CPU is great.. but to demo a server chip just playing an HD video.. is lame.

These got me interested though:

Turbo CORE – We have disclosed in the past that we would include AMD Turbo CORE technology, so this should not be a surprise to anyone. But what is news is the uplift – up to 500MHz with all cores fully utilized. Today’s implementations of boost technology can push up the clock speed of a couple of cores when the others are idle, but with our new version of Turbo CORE you’ll see full core boost, meaning an extra 500MHz across all 16 threads for most workloads.
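The contrast John draws above can be sketched as a toy comparison; the 500MHz all-core uplift is from the blog post, but the 2.5GHz base clock and the 2-core threshold for older boost schemes are made-up placeholders:

```python
# Toy comparison of the two boost schemes described above. The 500 MHz
# all-core uplift is from the blog post; the base clock and the 2-core
# limit for older schemes are hypothetical placeholders.

OLD_BOOST_CORE_LIMIT = 2   # earlier boost: only a couple of cores may boost

def old_boost(active_cores, base_mhz, uplift_mhz):
    """Earlier boost schemes: uplift only while most cores are idle."""
    return base_mhz + uplift_mhz if active_cores <= OLD_BOOST_CORE_LIMIT else base_mhz

def new_turbo_core(base_mhz, uplift_mhz=500):
    """Per the post: the uplift applies even with all cores fully utilized."""
    return base_mhz + uplift_mhz

print(old_boost(16, 2500, 500))   # 2500 MHz: no boost with all 16 cores busy
print(new_turbo_core(2500))       # 3000 MHz: full-core boost
```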
So, does this boost of 500MHz on all cores depend on the application? Or is it application-agnostic?

Also, do you have TDP info that you could share?
 
@busydude:
TDP should stay the same as for Magny-Cours/Lisbon, since Interlagos/Valencia go into the same sockets. For DT it's likely that TDP will stay the same as well.

@Janooo:
It's the photoshopped die photo as shown before. Don't read too much into it, as some over at Investorshub are doing, assuming a 5th, hidden module for yield reasons 😉
 
@busydude:
TDP should stay the same as for Magny-Cours/Lisbon, since Interlagos/Valencia go into the same sockets. For DT it's likely that TDP will stay the same as well.

@Janooo:
It's the photoshopped die photo as shown before. Don't read too much into it, as some over at Investorshub are doing, assuming a 5th, hidden module for yield reasons 😉
Why stop at five? Six is more like it. 🙂
So modules could be even smaller?
 
1MB L2 per core, 1 MB L3 per core.

Sounds like the same mistake made in the original Barcelona: the L3 cache not being larger than the L2, so the L3 can't compensate for its higher latency with a better hit rate.
 
Sounds like the same mistake made in the original Barcelona: the L3 cache not being larger than the L2, so the L3 can't compensate for its higher latency with a better hit rate.
Just ratios. In reality, each core will have a max of 2MB of L2 it can use (2MB is shared within a module) and 8MB of L3 (the L3 is shared for the entire CPU).
 