Intel's secret weapon against Hammer?


Snoop

Golden Member
Oct 11, 1999
1,424
0
76
Originally posted by: MadRat
The latency between memory and the controller and from controller to the core is lower in the Hammer. This could be where they gain in the server.
The 30% gain came from a recompile of existing Half-Life server code: the 64-bit compile ran 30% faster on the Opteron than the highly optimized 32-bit code. Since both were run on the same CPU, that would most likely eliminate the variable you mentioned.
 

MadRat

Lifer
Oct 14, 1999
11,999
307
126
I missed that part, Snoop.

Correct me if I am wrong, but AMD never had x86-32 compiler optimizations. Perhaps the 30% gain is the compiler taking advantage of AMD's design philosophies, whereas most code is optimized for Intel's architecture.
 

Snoop

Golden Member
Oct 11, 1999
1,424
0
76
Originally posted by: MadRat
I missed that part, Snoop.

Correct me if I am wrong, but AMD never had x86-32 compiler optimizations. Perhaps the 30% gain is the compiler taking advantage of AMD's design philosophies, whereas most code is optimized for Intel's architecture.
Could be. Or the Opteron doesn't handle the 32-bit code as efficiently as the 64-bit code due to its design.

It is also possible that x86-64 is yielding results which have not been adequately explained. This would also clarify why many game programmers (not just Valve), including lead Unreal franchise programmer Tim Sweeney, are supporting x86-64. I doubt EPIC would spend the time porting UT2k3 to x86-64 if there were minimal, if any, performance gains, as we have been led to believe.

Tim Sweeney on X86-64.
 

Wingznut

Elite Member
Dec 28, 1999
16,968
2
0
So, here's the quote from Valve about the 30% gain...
In a straight port of code highly optimized for x86-32, Counter-Strike dedicated server tests with both 32- and 64-bit versions revealed a 30% clock-for-clock gain, and is expected to show further performance gains in future upgrades.
I'm not sure if I read that as 64-bit having a 30% advantage over 32-bit.... But rather that the Hammer showed a 30% gain in BOTH 32-bit and 64-bit versions.
 

Snoop

Golden Member
Oct 11, 1999
1,424
0
76
Originally posted by: Wingznut
So, here's the quote from Valve about the 30% gain...
In a straight port of code highly optimized for x86-32, Counter-Strike dedicated server tests with both 32- and 64-bit versions revealed a 30% clock-for-clock gain, and is expected to show further performance gains in future upgrades.
I'm not sure if I read that as 64-bit having a 30% advantage over 32-bit.... But rather that the Hammer showed a 30% gain in BOTH 32-bit and 64-bit versions.
IMO, it was speaking directly to the 32- to 64-bit code conversion and the difference this made.
Posted for the lazy :D
VALVE ANNOUNCES SUPPORT FOR UPCOMING AMD OPTERON™ PROCESSOR

64-bit Counter-Strike Linux Server Provides Performance Gain for World's #1 Online Action Game

San Jose, Calif. - March 6, 2003 - Valve, L.L.C., creators of Counter-Strike and Half-Life, today announced immediate availability of a 64-bit version of the Counter-Strike dedicated server using the upcoming AMD Opteron™ processor. Counter-Strike has the largest service footprint of any game on the Internet, with 35,000 servers generating over 4.5 billion player minutes per month.

"These server operators are extremely sophisticated, and were some of the earliest adopters of Linux," said Gabe Newell, Valve managing director. "We expect them to be leading-edge adopters of the AMD64 platform. AMD's approach to 64-bit computing looked great on paper, and it's nice to see that with real processors and development tools that it fulfills that promise. Every PC developer should be looking to get their server code and development tools running in 64-bits right away."

In a straight port of code highly optimized for x86-32, Counter-Strike dedicated server tests with both 32- and 64-bit versions revealed a 30% clock-for-clock gain, and is expected to show further performance gains in future upgrades.

"Valve Counter-Strike servers with 64-bit computing can offer customers a better overall gaming experience, and AMD processors will enable this performance boost," said Barry Crume, director of server segment product marketing, Computational Products Group, AMD. "With 64-bit dedicated servers using AMD Opteron processors, Valve will offer online gamers increased reliability, improved stability and greater throughput."

About Valve
Founded in 1996, Valve develops entertainment software titles such as Half-Life, Team Fortress and Counter-Strike. Half-Life, Valve's debut title, was first released in November 1998, has won over 50 Game of the Year Awards, and was named "Best Game Ever" by PC Gamer. Valve's portfolio accounts for over 8 million retail units sold worldwide, and over 88% of the online action market. More information about Valve is available through the company's Web site at www.valvesoftware.com.
 

Wingznut

Elite Member
Dec 28, 1999
16,968
2
0
What do you mean, "If you read the news piece..."??? Where do you think I got the quote from?
 

paralazarguer

Banned
Jun 22, 2002
1,887
0
0
Yeah, the news piece did not specify that the 32-bit code was also being run on an Opteron. That being said, it is a dedicated server, which is only acting as a server; it's not really even playing the game. So, we already knew that 64-bit was faster for servers. What's the news here?
 

Snoop

Golden Member
Oct 11, 1999
1,424
0
76
Yeah, the news piece did not specify that the 32-bit code was also being run on an Opteron. That being said, it is a dedicated server, which is only acting as a server; it's not really even playing the game. So, we already knew that 64-bit was faster for servers. What's the news here?
The news is that when running a Counter-Strike server using x86-64 compiled code (which could still be further optimized), they are already seeing a 30% increase in speed. According to the x86-64 myths and realities FAQ, this is an enigma; as I understand it, there should be at most a 20% increase under ideal conditions for integer-related code. Further, your statement that "we already knew that 64-bit was faster for servers" has not been claimed in this thread, nor have I seen any detailed proof that x86-64 code makes an Opteron server (I assume you're saying any type of server?) 30% faster by simply recompiling code.
 

Snoop

Golden Member
Oct 11, 1999
1,424
0
76
Originally posted by: Wingznut
What do you mean, "If you read the news piece..."??? Where do you think I got the quote from?

You could have directly quoted my quote :D
Sorry :)
 

Snoop

Golden Member
Oct 11, 1999
1,424
0
76
Originally posted by: paralazarguer
Actually, it has been claimed in this thread several times.
I assume you're parsing my sentence, that it has not been claimed in this thread?

So, we already knew that 64-bit was faster for servers. What's the news here?
I can only assume that you agree that a 30% increase in performance will be the norm for servers by a simple recompile of 32-bit code to 64-bit. Do you have any source to corroborate this?
 

Sohcan

Platinum Member
Oct 10, 1999
2,127
0
0
Originally posted by: grant2
Originally posted by: Sohcan
I was merely trying to dispel the myth that microprocessors somehow "operate" on some fixed-sized dataset, and that 64-bit microprocessors can somehow "churn" through the data at twice the rate, and that this will somehow lead to a direct speedup in desktop applications.

This "myth of twice the performance" has so far only been brought up by 64bit skeptics. Certainly no one in *THIS* thread claimed anything more than SOME performance increase from 32-64 bit. Yet, as usual, all sorts of naysayers are compelled to crawl out of the woodwork harping on about "diminishing returns" and "limited use of 64bit data" and "if the FAQ says so, it *MUST* be true!!"
The discussion also extends to clarifying the reasons why 64-bit computing will add functionality and performance (from better memory addressing), not just the quantitative speedups achieved. It seems the explanation is still applicable, since you are harboring the wrong notions about why 64-bit computing is needed.

I've explained the driving force behind the concept, and supported it with a primary source from other computer architects as well as a very succinct page from HP explaining the motivation behind 64-bit computing. If you're going to dismiss all of this as harping from naysayers, I certainly can't stop you. If you're only going to believe a source from AMD, here you go.

Q: Why is 64-bit technology important? What applications benefit from 64-bits?

A: The need for 64-bit technology is driven by applications that address large amounts of physical and virtual memory, such as high performance servers, database management systems, and CAD tools. AMD's evolutionary approach to 64-bit technology allows the gradual software transition from 32-bit to 64-bit. Only those applications that benefit from the features of 64-bit technology need to be ported.

Still no one has answered my question "what did that 16->32 have that 32->64 doesn't?" ... the closest I hear is "64 bit variables aren't used very much" ... The simple retort is "maybe because there was a price to pay for using them."
To answer your question, "what did that 16->32 have that 32->64 doesn't?", in a more explicit manner: applications in the late 70s through mid-80s did need to make use of 32-bit datatypes, since 8-bit and 16-bit integer datatypes express a very limited range. The bit-level parallelism limitation of 8- and 16-bit microprocessors led to a more substantial speedup with the introduction of 32-bit microprocessors. If, at some point in the future when 64-bit computing on the desktop is universal, software development makes larger use of 64-bit datatypes, then that would be for increased functionality, not an increase in the performance of previously existing applications due to higher bit-level parallelism. This is one of the differences between the playing field now and in the mid-80s.
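To make the range point concrete, here's a toy C sketch (my own illustration, not from the FAQ or any of the sources above). Code of that era genuinely needed the wider type just for correctness, which is a different situation from today, where 32 bits already covers the range most desktop integer code uses:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Toy values of my own choosing: a 16-bit counter wraps long
       before it can index the files and arrays that were already
       common by the mid-80s, while 32 bits has ample headroom. */
    uint16_t narrow = 65000;
    uint32_t wide   = 65000;

    narrow += 1000;   /* wraps: (65000 + 1000) mod 65536 = 464 */
    wide   += 1000;   /* no wrap: range extends to ~4.29 billion */

    printf("16-bit result: %u\n", narrow);  /* prints 464 */
    printf("32-bit result: %u\n", wide);    /* prints 66000 */
    return 0;
}
```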

But, your faq doesn't at all discuss strings. Yet other articles I've read make the reasonable assumption that there can be HUGE performance gains by handling strings in 8-character variables (64 bit) rather than 4-character (32 bit).

Maybe it's a crock, or maybe it's an idea you didn't consider. Frankly, it's hard to imagine NOT getting a performance increase if 2x as much string can be shuffled each operation.
It's easy to point out pathological cases. In order to support your case, you're going to need to show not only that faster string move operations yield a substantial performance increase in something other than a microbenchmark, but also that the benefit is universal enough to qualify as a distinguishing, motivating factor behind widespread 64-bit computing.
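For what it's worth, the kind of string shuffling you describe looks roughly like this (my own simplified sketch, not code from Valve or the FAQ). Copying in 64-bit words does halve the loop trip count versus 32-bit words, but only over long runs; a real string routine also has to scan for the NUL terminator and handle unaligned heads and tails, which is exactly why the gain tends to vanish outside microbenchmarks:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified word-at-a-time copy, ignoring alignment and
   strict-aliasing concerns for clarity. sizeof(uintptr_t) is
   8 on x86-64 and 4 on x86-32, so the 64-bit build moves twice
   as many bytes per iteration. */
static void copy_words(void *dst, const void *src, size_t n)
{
    size_t w = sizeof(uintptr_t);
    size_t words = n / w;
    uintptr_t *d = dst;
    const uintptr_t *s = src;

    while (words--)
        *d++ = *s++;                 /* one load + one store per word */

    memcpy((char *)dst + n - n % w,  /* byte-copy the leftover tail */
           (const char *)src + n - n % w,
           n % w);
}
```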

Snoop: It's too difficult to garner enough information from the one sentence in the Valve press release to draw any solid conclusions about the 30% number... there are too many unknowns. Did the server use 4 to 16GB of memory in a segmented mode for the 32-bit code, resulting in a substantial speedup from flat 64-bit addressing? Did the 64-bit code see a substantial speedup from the extra 8 logical registers in x86-64, resulting in better code generation and memory performance? Was the 32-bit code optimized for the Athlon microarchitecture, while the 64-bit code (since it was targeting x86-64 and Hammer) had more explicit Hammer microarchitecture optimizations?
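The register question, at least, is easy to poke at yourself. Here's the kind of toy test I mean (my own sketch with a hypothetical file name; it assumes a gcc built to target both modes via -m32/-m64): give a loop more live accumulators than x86-32's 8 general-purpose registers can hold and compare the two builds.

```c
/* pressure.c -- my own toy experiment, not anything Valve published.
 * Compile the same file twice and compare the generated code/timings:
 *
 *   gcc -O2 -m32 pressure.c -o p32
 *   gcc -O2 -m64 pressure.c -o p64
 *
 * With ten live accumulators, the x86-32 build must spill some of
 * them to the stack (only 8 GPRs, fewer once esp/ebp are accounted
 * for); the x86-64 build can keep all of them in its 16 GPRs. */
#include <stdio.h>

long sum10(const long *a, int n)
{
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0, s4 = 0,
         s5 = 0, s6 = 0, s7 = 0, s8 = 0, s9 = 0;
    int i;

    for (i = 0; i + 10 <= n; i += 10) {
        s0 += a[i];     s1 += a[i + 1]; s2 += a[i + 2];
        s3 += a[i + 3]; s4 += a[i + 4]; s5 += a[i + 5];
        s6 += a[i + 6]; s7 += a[i + 7]; s8 += a[i + 8];
        s9 += a[i + 9];
    }
    return s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7 + s8 + s9;
}

int main(void)
{
    static long a[10000];
    int i;

    for (i = 0; i < 10000; i++)
        a[i] = i;
    printf("%ld\n", sum10(a, 10000));   /* 49995000 in both builds */
    return 0;
}
```

Of course, this only isolates the register count; it says nothing about the addressing or microarchitecture questions above.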

I must stress that the x86-64 performance increases that I mentioned in the FAQ were just hand-waving and shouldn't be used for any definitive conclusions... they were based roughly on statements by Fred Weber (one of the Hammer architects) that Hammer yielded (IIRC) a 15% speedup in SPECint 2000 in x86-64 mode from microarchitecture-specific optimizations in the code and from better code generation with the increased number of logical registers.
 

mrman3k

Senior member
Dec 15, 2001
959
0
0
It will be very interesting to see how Opteron performs in relation to the Xeons.

I heard that the release day is April 22; can anyone confirm?
 

xxsk8er101xx

Senior member
Aug 13, 2000
298
0
0
riiiight. AMD increased their prices because they need to make money.

I still don't like the fact Prescott has Palladium built into the core.
 

Duvie

Elite Member
Feb 5, 2001
16,215
0
71
Originally posted by: xxsk8er101xx
riiiight. AMD increased their prices because they need to make money.

I still don't like the fact Prescott has Palladium built into the core.


can you give me a link to that last part??? And also, the fact it is in the hardware means it also needs the software to coexist, right??? Meaning until it's put into the OS, it won't do anything...
 

xxsk8er101xx

Senior member
Aug 13, 2000
298
0
0
Originally posted by: Duvie
Originally posted by: xxsk8er101xx
riiiight. AMD increased their prices because they need to make money.

I still don't like the fact Prescott has Palladium built into the core.


can you give me a link to that last part??? And also, the fact it is in the hardware means it also needs the software to coexist, right??? Meaning until it's put into the OS, it won't do anything...


"Intel did mention advanced power management with Prescott, but has yet to disclose exactly what they mean by that. We'll be touching on the improved HT and Prescott New Instructions in the next page, so we'll leave them out of discussion here. With Prescott, Intel will also be introducing LaGrande Support, which is something Intel has talked about at a very high level in the past as a hardware level security standard. "

Prescott's Enhancements (continued)

LaGrande - Intel Provides Palladium Support
Intel also announced their LaGrande initiative, which will bring hardware-level security support that could work with Microsoft's Palladium. There wasn't much revealed about LaGrande (a codename for the forthcoming productized features) other than that it will be implemented through a series of CPU and chipset extensions; this led us to believe that some of Prescott's new instructions could be LaGrande-related, but we have yet to confirm that.

LaGrande - Intel Provides Palladium Support
 

Wingznut

Elite Member
Dec 28, 1999
16,968
2
0
Originally posted by: xxsk8er101xx
I still don't like the fact Prescott has Palladium built into the core.
I'm not going to get into when or if LaGrande technology will be used, since it hasn't officially been announced (that I am aware of)... But you can't deny that the internet is in dire need of better security. The amount of money lost to identity theft via the internet is disgusting. And that's really what LaGrande is about: e-business security.

Oh btw, Intel has stated that you'll be able to turn off LaGrande if you choose not to use it.

I'm sure you've heard TONS of hype about digital content protection and whatnot. But I have yet to read anything showing that any of it is definite fact. It reminds me a lot of when the P3 started using hard-coded serial numbers. Years later, it's not even an issue.

Just don't get caught up in the paranoia.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Wingznut
Originally posted by: xxsk8er101xx
I still don't like the fact Prescott has Palladium built into the core.
I'm not going to get into when or if

But you can't deny that the internet is in dire need for better security.

Just don't get caught up in the paranoia.
Being concerned about Palladium is not paranoia. "Security" is a terrible reason for Palladium.

:p





 

paralazarguer

Banned
Jun 22, 2002
1,887
0
0
Oh btw, Intel has stated that you'll be able to turn off LaGrande if you choose not to use it.

I find it funny, Wingznut, that earlier in your post you say that you don't think Intel has announced LaGrande. But they have announced that it can be turned off... that what can be turned off... umm... nothing, never mind. I said nothing. NOTHING!
Just thought that was funny.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: paralazarguer
Oh btw, Intel has stated that you'll be able to turn off LaGrande if you choose not to use it.

I find it funny, Wingznut, that earlier in your post you say that you don't think Intel has announced LaGrande. But they have announced that it can be turned off... that what can be turned off... umm... nothing, never mind. I said nothing. NOTHING!
Just thought that was funny.
You missed the key words: "Officially announced".

:D


 

majewski9

Platinum Member
Jun 26, 2001
2,060
0
0
I am kinda impressed with the Prescott, but I think Intel is making a big mistake relying on the lackluster P4 architecture. They need to completely rework some of the architecture of the Pentium 4. All signs indicate Prescott will be called Pentium 5, which is good for marketing, I suppose. Still, all things considered, 1MB of Level 2 cache, even with a 90 nanometer process, is going to be expensive. It's probably going to be a long while before an application is optimized for the Prescott.

AMD definitely has the advantage with 64 bits and the on-die memory controller. It kinda makes Intel's line a little stale. Innovation, it seems, is more important to AMD, at least at a hardware level. Intel's current Northwood doesn't stand a chance against the Athlon 64 in my mind. AMD, get the thing out! My god, why can't they just run regular 32-bit XP? Demoing @ 2GHz is impressive and just makes me want one of these chips even more. They have to be trying to ramp up the clock and perfect everything. I just don't get it!


 

Wingznut

Elite Member
Dec 28, 1999
16,968
2
0
apoppin... I completely agree that everyone should be "concerned" with new technologies, and the possible threats they can impose. But you must separate the hype, the rumors, and the truth. We read things on this board on almost a daily basis that get people in an uproar, both positive and negative. But many times, at the end of the day, the rumors turn out to be pretty dissimilar to the original version or completely false altogether. Again, take the example of the P3's serial numbers. Many people were crying foul, but in the end it is of little (read: NO) concern to anyone.

That being said, the internet is anything but truly secure. With the amount of money that flows every day across the web, true security would be a godsend. You must hear the stories of people's credit cards, soc sec numbers, etc. being stolen. How could anyone NOT want a system that would come close to eliminating identity theft over the internet? I think that's a hell of a good reason for hardware security.

I have read all the hype surrounding Palladium and the like. If the hype is true, then these technologies will do nothing but alienate the developers' customers... I can't see any semiconductor or software corporation thinking that they could somehow benefit by making their customers angry.

If anyone has any factual links about these technologies, and how they would be used for anti-piracy, I would really like to read them. Everything that I have read is complete speculation, and doesn't come close to outlining how MS or Intel (or anyone else) could earn more money by releasing some technology with the rumored intent.


paralazarguer... Intel has talked about LaGrande in a few interviews, and has a few press releases that mention it. But (afaik), they have yet to officially announce what, where, when, and IF it will be used. I am not at liberty to go into details about things that haven't been officially announced by Intel.

I love my job way too much to do that. :)


 

KenAF

Senior member
Jan 6, 2002
684
0
0
They need to completely rework some of the architecture of the Pentium 4. All signs indicate Prescott will be called Pentium 5, which is good for marketing, I suppose. Still, all things considered, 1MB of Level 2 cache, even with a 90 nanometer process, is going to be expensive.
Intel has confirmed that Prescott is 109mm^2 in size on 90 nanometer. This means that Prescott is 20% smaller than Northwood, even with all that added cache. It also makes Prescott roughly the same size as the Athlon 64 with 256K L2 cache (104mm^2), and much smaller than the Hammer with 1MB L2 (150+mm^2).

 

Wingznut

Elite Member
Dec 28, 1999
16,968
2
0
Originally posted by: KenAF
They need to completely rework some of the architecture of the Pentium 4. All signs indicate Prescott will be called Pentium 5, which is good for marketing, I suppose. Still, all things considered, 1MB of Level 2 cache, even with a 90 nanometer process, is going to be expensive.
Intel has confirmed that Prescott is 109mm^2 in size on 90 nanometer. This means that Prescott is 20% smaller than Northwood, even with all that added cache. It also makes Prescott roughly the same size as the Athlon 64 with 256K L2 cache (104mm^2), and much smaller than the Hammer with 1MB L2 (150+mm^2).
And don't forget that Intel is using 300mm wafers (which yield about 2.5x as many dies as a 200mm wafer), and is not using the significantly more expensive SOI wafers.
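As a rough sanity check on that 2.5x figure (my own back-of-envelope, ignoring defect yield), the raw area ratio is:

```latex
\frac{\pi \cdot (150\,\text{mm})^2}{\pi \cdot (100\,\text{mm})^2}
  = \left(\frac{300}{200}\right)^2
  = 2.25
```

Partial dies lost at the edge waste proportionally less of the larger wafer, so for a die around 109mm^2 the usable-die ratio comes out a bit above 2.25, roughly 2.4, which is where the "about 2.5x" rule of thumb comes from.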
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Wingznut
apoppin... I completely agree that everyone should be "concerned" with new technologies, and the possible threats they can impose. But you must separate the hype, the rumors, and the truth. We read things on this board on almost a daily basis that get people in an uproar, both positive and negative. But many times, at the end of the day, the rumors turn out to be pretty dissimilar to the original version or completely false altogether. Again, take the example of the P3's serial numbers. Many people were crying foul, but in the end it is of little (read: NO) concern to anyone.

That being said, the internet is anything but truly secure. With the amount of money that flows every day across the web, true security would be a godsend. You must hear the stories of people's credit cards, soc sec numbers, etc. being stolen. How could anyone NOT want a system that would come close to eliminating identity theft over the internet? I think that's a hell of a good reason for hardware security.

I have read all the hype surrounding Palladium and the like. If the hype is true, then these technologies will do nothing but alienate the developers' customers... I can't see any semiconductor or software corporation thinking that they could somehow benefit by making their customers angry.

If anyone has any factual links about these technologies, and how they would be used for anti-piracy, I would really like to read them. Everything that I have read is complete speculation, and doesn't come close to outlining how MS or Intel (or anyone else) could earn more money by releasing some technology with the rumored intent.
We only read speculation from the anti-M$ crowd because we still don't have a clear picture (unclouded by M$'s BS) of Palladium. However - knowing M$'s "track record" of control - we "know" it can't be "good". ;)



And I too would like more info... I do know that "known good existing security practices" are largely ignored in favor of "cheap and fast".