Bulldozer has 33% fewer ALUs than K10

Dec 30, 2004
12,553
2
76
What makes Euler unoptimized? The only real multi-threaded applications in which HT doesn't show improvements are things like LINPACK, which are well enough optimized that they already maximize the floating-point throughput of a core, and things like encryption that depend on a few key instructions. Otherwise HT shows measurable gains in rendering, video encoding, computation, distributed computing, database and commercial server applications.

Yeah, guess you're right. I'm being too aggressive.
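
FWIW, a quick way to see the effect being described is to pin two copies of an FP-heavy loop onto sibling logical CPUs and watch the elapsed time: roughly 2x the single-thread time means the core was already saturated and SMT had nothing to fill; well under 2x means there were idle issue slots it could use. A toy C sketch under those assumptions (it assumes Linux/GCC and that logical CPUs 0 and 1 are SMT siblings, which you'd want to verify against your topology first):

Code:
/* smt_probe.c: toy probe, not a benchmark. Assumes Linux/GCC and that
 * logical CPUs 0 and 1 are SMT siblings; check
 * /sys/devices/system/cpu/cpu0/topology/thread_siblings_list first.
 * Build: gcc -O2 -pthread smt_probe.c -o smt_probe
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <time.h>

static volatile double sink[2];

/* Four independent multiply-add chains, to approximate code
 * (LINPACK-style) that already keeps the core's FP pipes busy. */
static void *fp_worker(void *arg)
{
    double a0 = 1.0, a1 = 2.0, a2 = 3.0, a3 = 4.0;
    const double b = 1.000000001, c = 0.999999999;
    for (unsigned long i = 0; i < 200000000UL; i++) {
        a0 = a0 * b + c; a1 = a1 * b + c;
        a2 = a2 * b + c; a3 = a3 * b + c;
    }
    *(volatile double *)arg = a0 + a1 + a2 + a3; /* keep results live */
    return NULL;
}

static void spawn_pinned(pthread_t *t, int cpu, volatile double *out)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_create(t, NULL, fp_worker, (void *)out);
    pthread_setaffinity_np(*t, sizeof(set), &set);
}

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    pthread_t t[2];
    double t0;

    t0 = now();                         /* one thread, core to itself */
    spawn_pinned(&t[0], 0, &sink[0]);
    pthread_join(t[0], NULL);
    printf("1 thread : %.2fs\n", now() - t0);

    t0 = now();                         /* two threads, sibling CPUs  */
    spawn_pinned(&t[0], 0, &sink[0]);
    spawn_pinned(&t[1], 1, &sink[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    /* ~2x the single-thread time: core was already saturated, SMT had
     * nothing to fill. Well under 2x: there were idle issue slots.   */
    printf("2 threads: %.2fs\n", now() - t0);
    return 0;
}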
 

Dadofamunky

Platinum Member
Jan 4, 2005
2,184
0
0
TBH, as long as we keep getting more cores, I'm not worried.
1). AMD wouldn't be developing this architecture if their PhDs hadn't found that it could increase throughput more cheaply than adding more cores.
2). My single-threaded performance is fast enough for everything I need. Firefox is dog slow no matter how fast a core you throw at it. I have about 20 tabs that load as my homepage, and Firefox goes unresponsive for seconds at a time; it even freezes the Windows 7 UI so that I can't drag the window from one monitor to the other. All while using only 25% of my available resources, because it's running on just one core.

Alternatively, I open the same 20 tabs in Chrome, and all 4 of my cores peg to 100% for about 3 seconds until all the pages are finished rendering. Love it.

I must say, FF has been looking like crap lately. Our company develops a management UI for networking devices and it runs far better in IE than in the latest version of FF, the use of which is almost a religious issue for some of the engineers. It might be time to take another look at Chrome.

To address the actual thread, I'll mention from my own standpoint that I'd prefer having actual cores, all other things being equal. I respect AMD for sticking with the monolithic approach and trying to refine their existing 45nm architecture, even if they are firmly one process level behind Intel now. With Thuban, they really seem to have made progress, and it's my next upgrade, with AM3 supporting BD. OTOH, it looks like Intel is deep-sixing LGA1366 with Sandy Bridge. That was my next chosen platform but now... I don't think so.
 

JFAMD

Senior member
May 16, 2009
565
0
0
Well, this has been a great read, and I don't want to ruin this thread, so I would like to ask: on Sandy, Intel uses AVX with a VEX prefix, and AMD is using (forgive me for not taking the time to look it up) XOP or something like that. JFAMD, could you explain the difference to me in a meaningful way?

I am not an expert on it, but as I recall, there was pretty much a standard that the market was moving toward, and Intel pushed in a different direction, adding proprietary extensions, which broke the whole thing up.
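
To make the fragmentation concrete: software ends up probing a separate CPUID leaf for each camp's extensions. The bit positions below are the documented ones (AVX in standard leaf 1, XOP/FMA4 in AMD's extended leaf 0x80000001); the rest of this little C probe is just an illustrative sketch:

Code:
/* feature_probe.c: sketch of probing for the two extension families
 * being discussed. The CPUID bit positions are from the vendor manuals;
 * everything else here is illustrative.
 * Build: gcc feature_probe.c -o feature_probe (GCC/Clang, x86 only)
 */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Standard leaf 1: AVX (the VEX-encoded ISA) is ECX bit 28. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("AVX : %s\n", (ecx & (1u << 28)) ? "yes" : "no");

    /* AMD extended leaf 0x80000001: XOP is ECX bit 11, FMA4 bit 16. */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        printf("XOP : %s\n", (ecx & (1u << 11)) ? "yes" : "no");
        printf("FMA4: %s\n", (ecx & (1u << 16)) ? "yes" : "no");
    }
    return 0;
}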
 

JFAMD

Senior member
May 16, 2009
565
0
0
Otherwise HT shows measurable gains in rendering, video encoding, computation, distributed computing, database and commercial server applications.

Then why do some server software vendors recommend turning HT off?
 

JFAMD

Senior member
May 16, 2009
565
0
0
I respect AMD for sticking with the monolithic approach and trying to refine their existing 45nm architecture, even if they are firmly one process level behind Intel now.

Well, if I compare Intel's 32nm server parts to AMD's 45nm server parts, we see that from a performance standpoint we are at parity, from a power standpoint we are at parity and from a price standpoint we are significantly below them.

I would argue that the 32nm benefits aren't really playing out for the end customer. Maybe Intel is getting benefit from them, in which case they are not sharing it with their customers. Or maybe there are 32nm issues. I don't know for sure, but there was a lot of promise built up, and right now AMD's 45nm is very competitive with their 32nm, so I wouldn't say that we are behind at all.
 

StrangerGuy

Diamond Member
May 9, 2004
8,443
124
106
I reckon AMD's rationale is that since their one decoder cannot saturate 3 ALUs, they might as well have 2x decoders each serving just 2 ALUs, for better execution throughput while saving die space.
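
Napkin math version of that argument in C; the widths are the commonly cited ones (K10: 3-wide decode and 3 ALUs per core; Bulldozer: 4-wide decode shared per two-core module, 2 ALUs per core) and should be read as assumptions, not a die-shot audit:

Code:
/* napkin.c: back-of-the-envelope version of the argument above.
 * Widths below are the commonly cited ones; treat as assumptions.
 */
#include <stdio.h>

int main(void)
{
    /* Per-core ceilings: you can't retire more ALU ops than you decode. */
    double k10_decode = 3.0, k10_alus = 3.0;
    double bd_decode  = 4.0 / 2.0;  /* 4-wide front end shared by 2 cores */
    double bd_alus    = 2.0;

    printf("K10 core ALU ceiling      : %.1f ops/clk (decode %.1f)\n",
           k10_alus < k10_decode ? k10_alus : k10_decode, k10_decode);
    printf("Bulldozer core ALU ceiling: %.1f ops/clk (decode %.1f)\n",
           bd_alus < bd_decode ? bd_alus : bd_decode, bd_decode);
    /* Point of the post: if real code rarely sustains ~3 ALU ops/clk,
     * the third ALU mostly idles, so trading it for a second 2-ALU
     * cluster buys throughput per mm^2 instead. */
    return 0;
}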
 
Dec 30, 2004
12,553
2
76
I reckon AMD's rationale is that since their one decoder cannot saturate 3 ALUs, they might as well have 2x decoders each serving just 2 ALUs, for better execution throughput while saving die space.

Not for 90% of tasks, at least; yeah, probably something like that.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I am not an expert on it, but as I recall, there was pretty much a standard that the market was moving toward, and Intel pushed in a different direction, adding proprietary extensions, which broke the whole thing up.

Thanks much for your HONEST answer. But really, how could Intel move the market in a different direction? AVX was Intel's standard, and the VEX prefix works with Intel's compiler. So Intel didn't move in another direction at all. Wouldn't it be more accurate to say AMD simply doesn't have Intel's compiler?

I very much enjoy your spending time here at the AT forums. Thank you.
 

JFAMD

Senior member
May 16, 2009
565
0
0
Intel owns a compiler; they can do anything that they want.

We don't own a compiler, so we need to work with the compiler companies.

Some would say that this is a disadvantage, but seeing how certain compiler tricks have been done in the past, I would tend to say that it is better for customers if processor companies work with compiler companies than own them.

In the RISC world, where you own the platform top to bottom, this is a different business, but x86 is more of an "open" environment and does not benefit from vendor lock-in.
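
For anyone wondering what "compiler tricks" refers to: the widely criticized pattern was dispatching on the CPUID vendor string rather than on feature bits, so a competitor's CPU reporting the very same features still landed on the slow path. A rough reconstruction in C, illustrative only and not actual compiler output:

Code:
/* dispatch.c: reconstruction of the much-criticized dispatch pattern,
 * branching on the CPUID vendor string instead of on feature bits, so
 * a non-Intel CPU with the same features still gets the slow path.
 * Illustrative only; not actual compiler output.
 */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

static void fast_path(void) { puts("optimized (e.g., SSE) path"); }
static void slow_path(void) { puts("generic fallback path"); }

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);   /* vendor string comes back in */
    memcpy(vendor + 4, &edx, 4);   /* EBX, EDX, ECX order         */
    memcpy(vendor + 8, &ecx, 4);

    if (strcmp(vendor, "GenuineIntel") == 0)
        fast_path();               /* feature bits never consulted */
    else
        slow_path();               /* "AuthenticAMD" lands here    */
    return 0;
}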
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
Intel owns a compiler; they can do anything that they want.

We don't own a compiler, so we need to work with the compiler companies.

Some would say that this is a disadvantage, but seeing how certain compiler tricks have been done in the past, I would tend to say that it is better for customers if processor companies work with compiler companies than own them.

In the RISC world, where you own the platform top to bottom, this is a different business, but x86 is more of an "open" environment and does not benefit from vendor lock-in.

No one's forced to use Intel's compiler.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Intel owns a compiler; they can do anything that they want.

We don't own a compiler, so we need to work with the compiler companies.

Some would say that this is a disadvantage, but seeing how certain compiler tricks have been done in the past, I would tend to say that it is better for customers if processor companies work with compiler companies than own them.

In the RISC world, where you own the platform top to bottom, this is a different business, but x86 is more of an "open" environment and does not benefit from vendor lock-in.

Yeah, if Intel didn't have Boris and the Elbrus compiler team, I would say the same thing. But Intel bought Elbrus and made Boris a Fellow; the man is way ahead of his time. Also, Intel is now challenging the RISC processors with x86. Be honest now, you have to admit it's impressive. It's a big deal to me, as Zinn2b called this years ago and had the forum in an uproar, which led to a ban. x86 is becoming Itanic, as I said it would. So it was written, so it shall be.
 

JFAMD

Senior member
May 16, 2009
565
0
0
Maybe the same reason why sometimes higher clocked Istanbuls beat Magny Cours?

A higher clocked processor will beat a lower clocked processor in a single threaded application.

Nobody buys Magny Cours to run single threaded applications. It is a throughput platform, and the customers buying 16-48 cores are looking for throughput.

There is no Istanbul configuration that delivers higher throughput than Magny Cours.
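
The arithmetic behind that is one multiply; the clocks below are round placeholder numbers for the sake of argument, not a price list:

Code:
/* clocks_vs_cores.c: the single-thread vs. throughput trade in one
 * multiply. Clocks are placeholders: say an Istanbul-class part at
 * 2.8 GHz x 6 cores vs. a Magny Cours-class part at 2.3 GHz x 12.
 */
#include <stdio.h>

int main(void)
{
    double ist_ghz = 2.8, ist_cores = 6;
    double mc_ghz  = 2.3, mc_cores  = 12;

    /* One thread sees one core's clock... */
    printf("single thread: %.1f vs %.1f GHz -> higher clock wins\n",
           ist_ghz, mc_ghz);
    /* ...while a loaded box sees the sum (same per-core IPC assumed). */
    printf("all cores    : %.1f vs %.1f core-GHz -> more cores win\n",
           ist_ghz * ist_cores, mc_ghz * mc_cores);
    return 0;
}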
 

Accord99

Platinum Member
Jul 2, 2001
2,259
172
106
A higher clocked processor will beat a lower clocked processor in a single threaded application.

Nobody buys Magny Cours to run single threaded applications. It is a throughput platform, and the customers buying 16-48 cores are looking for throughput.

There is no Istanbul configuration that delivers higher throughput than Magny Cours.
Just like customers who are interested in throughput will use Hyperthreading as it increases throughput, especially for commercial server applications.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Intel's Hyper-Threading increases throughput because having it is better than nothing, and it doesn't cost much die space. However, I'm afraid this increase is still not as large as adding a couple more physical cores. Is AMD's architecture slower per-core? Yeah, a little, but with Magny you get 12 real cores instead of 6 real cores + HT, and you get very respectable performance with that in most areas, and better-than-Gulftown performance in a few niches.

It's a pity that Anand's IT benchmark suite is so dull, but you can see Magny, with a 700 MHz clock disadvantage, doing some serious clean-up in very parallel work like rendering and fluid dynamics. I wish we could see more workstation and fewer server apps, though. Something more serialized, like a database benchmark, is going to go straight to Intel. Their core is simply faster than AMD's, and that's the way it's been for a long time. Throw a 700 MHz clock advantage on top of it and yeah, it's better. It may not be explicitly written down in Intel's strategy, but they are holding onto the fastest core; they have the best performance per thread and they're keeping it that way.

AMD just has a lot more physical cores, clocked a little slower, and that brings definite victories in some areas. These are two distinct but easy-to-understand strategies that Intel and AMD have chosen, because not all the work computers do is the same, and you can't have a best-for-everything architecture. Was AMD supposed to just not pursue this strategy? Were they supposed to keep copying Intel? That would still leave these embarrassingly parallel niches unoccupied.
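
To put rough numbers on that trade-off: the SMT uplift and per-core speed factors below are assumptions picked for illustration, not measurements from any review:

Code:
/* smt_vs_cores.c: rough model of the point above. The 1.25x SMT uplift
 * is an assumed ballpark for well-threaded server code, not a measured
 * number; the per-core speed factors are likewise assumptions.
 */
#include <stdio.h>

int main(void)
{
    double ht_uplift  = 1.25; /* assumed throughput gain from SMT   */
    double intel_core = 1.15; /* assumed per-core speed advantage   */
    double amd_core   = 1.00;

    double six_ht = 6  * intel_core * ht_uplift; /* 6C/12T, Gulftown-ish */
    double twelve = 12 * amd_core;               /* 12 real cores        */

    printf("6 fast cores + SMT : %.2f units\n", six_ht); /* ~8.6 */
    printf("12 slower cores    : %.2f units\n", twelve); /* 12.0 */
    /* With these (debatable) inputs, SMT narrows the gap but doesn't
     * replace physical cores on embarrassingly parallel work. */
    return 0;
}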
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Since I can't start new topics, I thought I'd post here. Someone else can OP it.

http://www.xbitlabs.com/news/other/..._Acquire_IBM_s_Semiconductor_Fab_Analyst.html

Interesting idea; that would really be an acquisition that would give GF the ability to compete head to head with Intel.
Still, it would pretty much end the IBM alliance, no? Granted, at this point I think GF has acquired a few members of the IBM alliance, but that would really turn the semiconductor industry into a three-horse race: Intel, TSMC, GF.
IBM is more useful as a research and development house than as a production source, but they need fabs in order to do their process research, and they need fabs to make their high-margin server products. Well, I suppose POWER7 and the like are high-margin enough that outsourcing probably wouldn't make much difference in costs. They could even do a sweetheart stipulation for the sale of their fabs, like AMD did with GF.
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
Just like customers who are interested in throughput will use Hyperthreading as it increases throughput, especially for commercial server applications.
A lot of professional server applications actually lose performance with HTT enabled.
 

JFAMD

Senior member
May 16, 2009
565
0
0
Yes, Exchange is one of those "niche applications" that could be impacted.

http://blogs.amd.com/work/2010/01/21/it%e2%80%99s-all-about-the-cores/

I will not argue the fact that there are places where SMT can bring a throughput increase. I will, however, argue that it is not the best way to deliver more throughput. Anytime I see an enterprise software application recommend turning off a feature in order to provide more stability or accuracy, it's a big red flag for me.