Apple releases new Dual G5 2.5 GHz Power Mac

Page 4 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

Sohcan

Platinum Member
Oct 10, 1999
2,127
0
0
Originally posted by: drag
Originally posted by: Accord99
Originally posted by: drag
You don't want to fall into the same trap as those morons calling BS on Apple's benchmarks by saying that SPEC2000 tests (using the Gcc compiler) don't match the SPEC2000 results found elsewhere (made using the Icc compiler).

(Icc compiler doesn't work on PowerPC, but Gcc works on both x86 AND PowerPC, which is why it was used.)

No, it was used because it was the only way Apple could have a chance at beating the P4 at SPEC2000 scores. Using the "same compiler", especially when the backend is radically different, is not the point of SPEC (probably breaks some rules as well). Anyways, SPEC scores for the G5 using IBM's high performance xlc/xlf compilers would suggest the G5 is still no match for top-of-the-line x86 processors.

unhuh.

So using the ICC compiler would be more fair? B.S.

And "the backend is radically different"? GCC is specifically designed to be cross-platform. It sacrifices many of the features of ICC in order to work on many different computers.

I pretty much use GCC every day; what do you know about it?

ICC is SPECIFICALLY DESIGNED TO MAKE INTEL PROCESSORS LOOK GOOD IN BENCHMARKS. It's made by Intel for Intel processors. Get a grip.

Gcc is made by the GNU project to make a cross-platform compiler.

That's why when Intel/Dell/whoever pays for it they use Icc. They do it so it looks good in advertisements.

The point of benchmarks is to try to even out the playing field as much as possible. So VeriTest compared a Unix-style OS running GCC on x86 vs a Unix-style OS running GCC on PowerPC.

That's as good as it gets when comparing different platforms, and it's actually much better than most benchmarks I've seen comparing different platforms.

Accord's comment is fair; using the same compiler does not necessarily "level the playing field" across platforms. The back-end code generation and scheduling, different for each architecture and microarchitecture, is crucial for performance and may not be ideal for a particular microprocessor with a particular compiler. As an extreme example, gcc produces poor code for Itanium (unless things have changed recently, this certainly used to be true). It schedules a maximum of one bundle (3 instructions) per cycle, even though Itanium 2 can issue 6 instructions per cycle. It schedules a high proportion of NOPs (no-operations) compared to HP's and Intel's compilers, and takes little advantage of Itanium's software features (predication, control and load speculation, software pipelining, etc.). So using gcc to compare Itanium to a processor of another architecture, for which gcc may produce good code, does not equate to a fair test. Given the extreme importance of code scheduling for performance, even with out-of-order execution processors, using the best compiler for each platform would be a better choice IMO. The PPC 970 can take advantage of IBM's AIX compiler, which performs quite a bit better than gcc.
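To put that Itanium scheduling point in rough numbers, here's an illustrative back-of-the-envelope sketch. The NOP fractions below are assumptions chosen for the example, not measurements; only the bundle width (3) and Itanium 2's issue width (6) come from the post above.

```python
# Rough upper bound on how much of Itanium 2's issue width a compiler
# can use if it only schedules one 3-instruction bundle per cycle.

ISSUE_WIDTH = 6    # Itanium 2 can issue up to 6 instructions per cycle
BUNDLE_SIZE = 3    # one IA-64 bundle holds 3 instruction slots

def peak_fraction(bundles_per_cycle, nop_fraction):
    """Fraction of peak issue slots doing useful work per cycle.

    bundles_per_cycle: how many bundles the compiler schedules per cycle
    nop_fraction: fraction of scheduled slots that are NOPs (assumed)
    """
    slots_used = bundles_per_cycle * BUNDLE_SIZE
    useful = slots_used * (1.0 - nop_fraction)
    return useful / ISSUE_WIDTH

# One bundle/cycle with, say, a third of the slots as NOPs: 1/3 of peak.
print(peak_fraction(1, 1/3))   # 0.333...
# Two bundles/cycle with few NOPs gets much closer to peak.
print(peak_fraction(2, 0.1))   # 0.9
```

Even before counting NOPs, a one-bundle-per-cycle schedule caps the machine at half its issue width.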

And icc is certainly not just for show; here's a sampling of some customers.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Sohcan
Originally posted by: drag
Originally posted by: Accord99
Originally posted by: drag
You don't want to fall into the same trap as those morons calling BS on Apple's benchmarks by saying that SPEC2000 tests (using the Gcc compiler) don't match the SPEC2000 results found elsewhere (made using the Icc compiler).

(Icc compiler doesn't work on PowerPC, but Gcc works on both x86 AND PowerPC, which is why it was used.)

No, it was used because it was the only way Apple could have a chance at beating the P4 at SPEC2000 scores. Using the "same compiler", especially when the backend is radically different, is not the point of SPEC (probably breaks some rules as well). Anyways, SPEC scores for the G5 using IBM's high performance xlc/xlf compilers would suggest the G5 is still no match for top-of-the-line x86 processors.

unhuh.

So using the ICC compiler would be more fair? B.S.

And "the backend is radically different"? GCC is specifically designed to be cross-platform. It sacrifices many of the features of ICC in order to work on many different computers.

I pretty much use GCC every day; what do you know about it?

ICC is SPECIFICALLY DESIGNED TO MAKE INTEL PROCESSORS LOOK GOOD IN BENCHMARKS. It's made by Intel for Intel processors. Get a grip.

Gcc is made by the GNU project to make a cross-platform compiler.

That's why when Intel/Dell/whoever pays for it they use Icc. They do it so it looks good in advertisements.

The point of benchmarks is to try to even out the playing field as much as possible. So VeriTest compared a Unix-style OS running GCC on x86 vs a Unix-style OS running GCC on PowerPC.

That's as good as it gets when comparing different platforms, and it's actually much better than most benchmarks I've seen comparing different platforms.

Accord's comment is fair; using the same compiler does not necessarily "level the playing field" across platforms. The back-end code generation and scheduling, different for each architecture and microarchitecture, is crucial for performance and may not be ideal for a particular microprocessor with a particular compiler. As an extreme example, gcc produces poor code for Itanium (unless things have changed recently, this certainly used to be true). It schedules a maximum of one bundle (3 instructions) per cycle, even though Itanium 2 can issue 6 instructions per cycle. It schedules a high proportion of NOPs (no-operations) compared to HP's and Intel's compilers, and takes little advantage of Itanium's software features (predication, control and load speculation, software pipelining, etc.). So using gcc to compare Itanium to a processor of another architecture, for which gcc may produce good code, does not equate to a fair test. Given the extreme importance of code scheduling for performance, even with out-of-order execution processors, using the best compiler for each platform would be a better choice IMO. The PPC 970 can take advantage of IBM's AIX compiler, which performs quite a bit better than gcc.

And icc is certainly not just for show; here's a sampling of some customers.

I'm willing to bet that the majority of gcc developers use x86 primarily. Other archs are generally an afterthought. If gcc doesn't work well enough with Intel's chips, it's because Intel hasn't put much work into it.
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: drag
Originally posted by: ViRGE
Originally posted by: PorBleemo
The performance penalty from monitoring twenty-one sensors and actively modifying fan and liquid speeds must be high. Ouch. :(
Naa, I used a G4 Xserve for a couple of years, and it had absolutely no problem keeping up with all the sensors. The PowerMac has a few more, but it's also faster, so I don't see that being a problem.

PS drag, your register explanation isn't entirely correct, but I don't know enough about registers to exactly explain why. I do know however that the A64's 16 GPRs is a big deal, so take that as you will

Having more registers available to programmers is supposed to make code more flexible or something. But the reality is that the x86 ISA only warrants 8 GPRs, and yet the Pentium 4 does have 128 GPRs.

It's weird stuff.

16 registers vs 8 registers is definitely a good thing, but x86 actually really describes the layer of hardware abstraction between the actual CPU/RAM/hardware and the assembly code used in software. It doesn't necessarily become obsolete, and isn't obsolete.

For instance, take the Transmeta processor. It's something that is completely alien to RISC/CISC/PowerPC/x86 and all other types of hardware before it. It's something completely new, and weird.

But since the software sees it as x86, it's still x86. The nice part is that with the same technology it could be a PowerPC, RISC, or MIPS CPU if the designers wanted it to be.
Ok, here we go: take a look at this ArsTech article, about half-way down, and it mentions the whole register ordeal. While it's true the P4 has 128 registers, the traditional x86 limitation still applies: only 8 can actually be directly accessed by the programmer; the rest are handled by the hardware directly. A check on Wikipedia offers a complex but manageable explanation, stating that for the renamed registers, the P4 re-maps certain register calls to its hidden rename registers, so that code that is programmed to run in serial (one after another) due to register constraints can instead run in parallel.

Now, to bring us back to our original discussion, it would be incorrect to state that the P4 has 128 GPRs; the more appropriate term would be that it has 128 rename registers mapped to the 8 x86 GPRs. In relation to the PowerPC ISA, this still puts the P4 at a disadvantage, since register renaming only helps you parallelize some code; you can't use algorithms that need more than 8 GPRs at once. The most classic example is why the G4 was so fast at RC5 cracking: it could use a "bitslice" method that required dozens of GPRs. In short, the PowerPC still has a register advantage, since you can directly access 32 GPRs instead of the 8 on x86, which, as mentioned with the bitslice method, allows more complex algorithms to run thanks to the extra concurrent space available. The P4 does make up for this via register renaming and parallelization, but I know recent PowerPC chips do RR too (although I don't have the specs on that), so the chips sort of settle back into how things were before we went to RR.
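The renaming mechanism described above can be sketched as a toy model. This is purely illustrative Python, not how any real P4 rename hardware works: every write to an architectural register gets a fresh physical register, which removes the false dependencies between otherwise independent chains, while code still can't name more than the 8 architectural registers at once.

```python
# Toy register renamer: every write to an architectural register is
# mapped to a fresh physical register, so two code sequences that both
# reuse eax no longer conflict and can execute in parallel.

from itertools import count

def rename(instructions, num_phys=128):
    phys = count()     # allocator handing out fresh physical registers
    mapping = {}       # architectural register -> current physical reg
    renamed = []
    for dest, srcs in instructions:
        # reads use the *current* mapping; unmapped sources pass through
        read = tuple(mapping.get(s, s) for s in srcs)
        new = f"p{next(phys) % num_phys}"  # fresh physical register
        mapping[dest] = new
        renamed.append((new, read))
    return renamed

# Two independent chains that both reuse eax in the original code:
code = [
    ("eax", ("a",)),      # eax = a
    ("ebx", ("eax",)),    # ebx = f(eax)
    ("eax", ("c",)),      # eax = c      <- reuse creates a false dep
    ("ecx", ("eax",)),    # ecx = g(eax)
]
out = rename(code)
# After renaming, the two writes to eax land in different physical
# registers (p0 and p2), so the chains no longer serialize on eax.
print(out)
```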

In short, x86 is still old: it just has a good cosmetic surgeon. ;)

Edit: Damn, beat by Sohcan. This is what I get for doing research :p
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: Eug
The new G5 has a funky new heatsink cover.

New G5 vs. Old G5.

It seems that the new G5's so-called heatsink cover might just be a cover for the liquid cooling apparatus underneath.
Speaking of pictures, Eug, what happened to the new Power Mac's supposedly smaller motherboard? Looking at those two pictures, it looks roughly the same size.
 

Eug

Lifer
Mar 11, 2000
24,167
1,812
126
Originally posted by: ViRGE
Originally posted by: Eug
The new G5 has a funky new heatsink cover.

New G5 vs. Old G5.

It seems that the new G5's so-called heatsink cover might just be a cover for the liquid cooling apparatus underneath.
Speaking of pictures, Eug, what happened to the new Power Mac's supposedly smaller motherboard? Looking at those two pictures, it looks roughly the same size.
The smaller board only has 4 memory slots. It was probably a prototype single-CPU board (with only one G5 logo on the cover).

The dual 1.8 has four slots, but has a dual mobo (of course), so I suspect it will be the same mobo as the dual 2.0, but just populated differently.

We'll see soon enough.
 

Sohcan

Platinum Member
Oct 10, 1999
2,127
0
0
Originally posted by: n0cmonkey
Originally posted by: Sohcan

Accord's comment is fair; using the same compiler does not necessarily "level the playing field" across platforms. The back-end code generation and scheduling, different for each architecture and microarchitecture, is crucial for performance and may not be ideal for a particular microprocessor with a particular compiler. As an extreme example, gcc produces poor code for Itanium (unless things have changed recently, this certainly used to be true). It schedules a maximum of one bundle (3 instructions) per cycle, even though Itanium 2 can issue 6 instructions per cycle. It schedules a high proportion of NOPs (no-operations) compared to HP's and Intel's compilers, and takes little advantage of Itanium's software features (predication, control and load speculation, software pipelining, etc.). So using gcc to compare Itanium to a processor of another architecture, for which gcc may produce good code, does not equate to a fair test. Given the extreme importance of code scheduling for performance, even with out-of-order execution processors, using the best compiler for each platform would be a better choice IMO. The PPC 970 can take advantage of IBM's AIX compiler, which performs quite a bit better than gcc.

And icc is certainly not just for show; here's a sampling of some customers.

I'm willing to bet that the majority of gcc developers use x86 primarily. Other archs are generally an afterthought. If gcc doesn't work well enough with Intel's chips, it's because Intel hasn't put much work into it.

And gcc has historically produced poor code for the Pentium 4, even using P4-specific flags (though it may have improved). The fact that gcc was first developed for and is widely used with x86 does not mean that it produces good code for any x86 microprocessor...code scheduling is more of a microarchitectural issue than an architectural one.

I wouldn't be surprised if, for example, icc (or Microsoft's VC++) produces a greater normalized performance boost over gcc for the P4 than it will for the P3. If a particular compiler produces better code for one microprocessor than another, then using the same compiler is not leveling the playing field.
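That "normalized boost" argument is easy to see with made-up numbers. The scores below are purely hypothetical, chosen only to illustrate the bias; they are not real SPEC results for any chip:

```python
# Hypothetical scores: chip A's best native compiler gains much more
# over gcc than chip B's does. A gcc-only comparison then flips the
# ranking relative to a best-compiler-for-each comparison.

scores = {
    # chip: (score with gcc, score with its best native compiler)
    "chip_A": (800, 1100),   # big boost from the vendor compiler
    "chip_B": (900, 950),    # gcc is already near-optimal here
}

gcc_winner  = max(scores, key=lambda c: scores[c][0])
best_winner = max(scores, key=lambda c: scores[c][1])

print(gcc_winner)    # chip_B  -- "same compiler" comparison
print(best_winner)   # chip_A  -- "best compiler for each" comparison
```

Neither comparison is wrong in itself; they just answer different questions, which is the point of the disagreement in this thread.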
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Sohcan
Originally posted by: n0cmonkey
Originally posted by: Sohcan

Accord's comment is fair; using the same compiler does not necessarily "level the playing field" across platforms. The back-end code generation and scheduling, different for each architecture and microarchitecture, is crucial for performance and may not be ideal for a particular microprocessor with a particular compiler. As an extreme example, gcc produces poor code for Itanium (unless things have changed recently, this certainly used to be true). It schedules a maximum of one bundle (3 instructions) per cycle, even though Itanium 2 can issue 6 instructions per cycle. It schedules a high proportion of NOPs (no-operations) compared to HP's and Intel's compilers, and takes little advantage of Itanium's software features (predication, control and load speculation, software pipelining, etc.). So using gcc to compare Itanium to a processor of another architecture, for which gcc may produce good code, does not equate to a fair test. Given the extreme importance of code scheduling for performance, even with out-of-order execution processors, using the best compiler for each platform would be a better choice IMO. The PPC 970 can take advantage of IBM's AIX compiler, which performs quite a bit better than gcc.

And icc is certainly not just for show; here's a sampling of some customers.

I'm willing to bet that the majority of gcc developers use x86 primarily. Other archs are generally an afterthought. If gcc doesn't work well enough with Intel's chips, it's because Intel hasn't put much work into it.

And gcc has historically produced poor code for the Pentium 4, even using P4-specific flags (though it may have improved). The fact that gcc was first developed for and is widely used with x86 does not mean that it produces good code for any x86 microprocessor...code scheduling is more of a microarchitectural issue than an architectural one.

I wouldn't be surprised if, for example, icc (or Microsoft's VC++) produces a greater normalized performance boost over gcc for the P4 than it will for the P3. If a particular compiler produces better code for one microprocessor than another, then using the same compiler is not leveling the playing field.

Then Intel should correct the situation. Apple put some work into gcc to improve performance, get rid of some bugs, and help out the Objective-C stuff. I don't see how using a P4-specialized compiler would be "more fair" than using a cross-platform compiler.

gcc 3 has improved performance of applications (at the expense of compile times :|). I refuse to buy a pentium 4 to do any testing on improvements in gcc though. ;)
 

Eug

Lifer
Mar 11, 2000
24,167
1,812
126
Do any heatpipes use water mixed with propylene glycol? It sure does sound like liquid cooling but I'm still not 100% convinced it's not just a super-duper heatpipe.
 

Sohcan

Platinum Member
Oct 10, 1999
2,127
0
0
Originally posted by: n0cmonkey
Originally posted by: Sohcan

And gcc has historically produced poor code for the Pentium 4, even using P4-specific flags (though it may have improved). The fact that gcc was first developed for and is widely used with x86 does not mean that it produces good code for any x86 microprocessor...code scheduling is more of a microarchitectural issue than an architectural one.

I wouldn't be surprised if, for example, icc (or Microsoft's VC++) produces a greater normalized performance boost over gcc for the P4 than it will for the P3. If a particular compiler produces better code for one microprocessor than another, then using the same compiler is not leveling the playing field.

Then Intel should correct the situation. Apple put some work into gcc to improve performance, get rid of some bugs, and help out the Objective-C stuff. I don't see how using a P4-specialized compiler would be "more fair" than using a cross-platform compiler.

gcc 3 has improved performance of applications (at the expense of compile times :|). I refuse to buy a pentium 4 to do any testing on improvements in gcc though. ;)

My point is that a claim of fairness can't be made if a compiler produces better code for one microprocessor than another. Using the same compiler removes the front-end optimizations as a variable in the tests, but back-end code generation is perhaps even more important for performance...if the code generation is bad for one microprocessor, then the test can't be claimed to be fair.

Why not use the best that's available for each platform? And you don't have to use icc; even MS VC++, which is perhaps the most common compiler for commercial software, produces much better code and performance for the P4. IBM's tests from nearly two years ago on the 1.8 GHz PPC 970 using its compiler produced around 1000 SPECint IIRC, compared to Apple's result of 800 using the 2 GHz PPC 970. I'm not trying to rag on the 970FX...I'm sure that with the increased frequency, bus bandwidth, and upswing from software improvements, IBM's score with the 970FX could meet or surpass that of the P4 or A64 with icc.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Sohcan
Originally posted by: n0cmonkey
Originally posted by: Sohcan

And gcc has historically produced poor code for the Pentium 4, even using P4-specific flags (though it may have improved). The fact that gcc was first developed for and is widely used with x86 does not mean that it produces good code for any x86 microprocessor...code scheduling is a more of a microarchitectural issue rather than architectural.

I wouldn't be surprised if, for example, icc (or Microsoft's VC++) produces a greater normalized performance boost over gcc for the P4 than it will for the P3. If a particular compiler is producing more ideal code for one microprocessor over another, then using the same compiler is not leveling the playing field.

Then Intel should correct the situation. Apple put some work into gcc to improve performance, get rid of some bugs, and help out the objective C stuff. I don't see how using a p4 specialized compiler will be "more fair" than using a cross platform compiler.

gcc 3 has improved performance of applications (at the expense of compile times :|). I refuse to buy a pentium 4 to do any testing on improvements in gcc though. ;)

My point is that a claim of fairness can't be made if a compiler produces better code for one microprocessor than another. Using the same compiler removes the front-end optimizations as a variable in the tests, but back-end code generation is perhaps even more important for performance...if the code generation is bad for one microprocessor, then the test can't be claimed to be fair.

And I'm saying that if gcc is better at producing correct code for PPC processors than it is at producing correct code for the P4, then Intel is to blame.

Why not use the best that's available for each platform? And you don't have to use icc; even MS VC++, which is perhaps the most common compiler for commercial software, produces much better code and performance for the P4. IBM's tests from nearly two years ago on the 1.8 GHz PPC 970 using its compiler produced around 1000 SPECint IIRC, compared to Apple's result of 800 using the 2 GHz PPC 970. I'm not trying to rag on the 970FX...I'm sure that with the increased frequency, bus bandwidth, and upswing from software improvements, IBM's score with the 970FX could meet or surpass that of the P4 or A64 with icc.

Run the tests then. Post the results.

Apple, IBM, and Motorola contribute to gcc (just a guess, not sure what Motorola really does these days ;)). Intel contributes to gcc (or do they keep their fingers out of this too?). If gcc works better for Apple than it does Intel, isn't it Intel's fault?
 

Eug

Lifer
Mar 11, 2000
24,167
1,812
126
Originally posted by: Sohcan
My point is that a claim of fairness can't be made if a compiler produces better code for one microprocessor than another. Using the same compiler removes the front-end optimizations as a variable in the tests, but back-end code generation is perhaps even more important for performance...if the code generation is bad for one microprocessor, then the test can't be claimed to be fair.

Why not use the best that's available for each platform? And you don't have to use icc; even MS VC++, which is perhaps the most common compiler for commercial software, produces much better code and performance for the P4. IBM's tests from nearly two years ago on the 1.8 GHz PPC 970 using its compiler produced around 1000 SPECint IIRC, compared to Apple's result of 800 using the 2 GHz PPC 970. I'm not trying to rag on the 970FX...I'm sure that with the increased frequency, bus bandwidth, and upswing from software improvements, IBM's score with the 970FX could meet or surpass that of the P4 or A64 with icc.
Nah, it was 1051 with SPECfp, with whatever OS they were running (AIX? Linux?) on a different mobo. SPECint was 937, but the scores dropped lower after it was released. (I can't remember the exact numbers.)

The bus bandwidth now seems OK (1.25 GHz although "only" half that in each direction), and the frequency is OK for now at 2.5 GHz. The cache ain't the greatest though. The xl compilers will be improved overall and will have autovectorization added, but we shouldn't expect those to come out until near the end of 2004. But even with all this, I think icc/ifc on x86 will have the upper hand for SPEC2000.

I did make the comment earlier about vector. A lot of Apple's multimedia software seems centred around Altivec (which SPEC doesn't directly test). They pretty much had to push SIMD because the G4 sucked so bad otherwise, esp. at fp. But now with fast GHz and Altivec, that software is doing some nice tricks.

Anyone care to comment on IBM's implementation of SIMD vs the x86 implementation and its overall impact?
 

Sohcan

Platinum Member
Oct 10, 1999
2,127
0
0
Originally posted by: n0cmonkey
Originally posted by: Sohcan
Originally posted by: n0cmonkey
Originally posted by: Sohcan

And gcc has historically produced poor code for the Pentium 4, even using P4-specific flags (though it may have improved). The fact that gcc was first developed for and is widely used with x86 does not mean that it produces good code for any x86 microprocessor...code scheduling is more of a microarchitectural issue than an architectural one.

I wouldn't be surprised if, for example, icc (or Microsoft's VC++) produces a greater normalized performance boost over gcc for the P4 than it will for the P3. If a particular compiler produces better code for one microprocessor than another, then using the same compiler is not leveling the playing field.

Then Intel should correct the situation. Apple put some work into gcc to improve performance, get rid of some bugs, and help out the Objective-C stuff. I don't see how using a P4-specialized compiler would be "more fair" than using a cross-platform compiler.

gcc 3 has improved performance of applications (at the expense of compile times :|). I refuse to buy a pentium 4 to do any testing on improvements in gcc though. ;)

My point is that a claim of fairness can't be made if a compiler produces better code for one microprocessor than another. Using the same compiler removes the front-end optimizations as a variable in the tests, but back-end code generation is perhaps even more important for performance...if the code generation is bad for one microprocessor, then the test can't be claimed to be fair.

And I'm saying that if gcc is better at producing correct code for PPC processors than it is at producing correct code for the P4, then Intel is to blame.

:confused: I'm not denying that, I partially agree. I'm talking about testing methodology, not who's to blame for the P4's performance on gcc. Normalizing the compiler is simply not in the spirit of the SPEC CPU test: "These benchmarks measure the performance of the processor, memory and compiler on the tested system."

These complaints are minor...my biggest complaint about the Apple/VeriTest SPEC CPU tests is simply that the results were not submitted to spec.org, making peer review and complaints impossible. The purpose of the SPEC committee and submission process is to send results to a central body for review, after which the accepted results can be used for marketing purposes. SPEC does not want companies or even independent groups to use unsubmitted or unaccepted results in comparisons for marketing purposes (though it does happen sometimes).

Why not use the best that's available for each platform? And you don't have to use icc; even MS VC++, which is perhaps the most common compiler for commercial software, produces much better code and performance for the P4. IBM's tests from nearly two years ago on the 1.8 GHz PPC 970 using its compiler produced around 1000 SPECint IIRC, compared to Apple's result of 800 using the 2 GHz PPC 970. I'm not trying to rag on the 970FX...I'm sure that with the increased frequency, bus bandwidth, and upswing from software improvements, IBM's score with the 970FX could meet or surpass that of the P4 or A64 with icc.

Run the tests then. Post the results.
[/quote]With a non-existent 970FX AIX system? :)
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Sohcan

:confused: I'm not denying that, I partially agree. I'm talking about testing methodology, not who's to blame for the P4's performance on gcc. Normalizing the compiler is simply not in the spirit of the SPEC CPU test: "These benchmarks measure the performance of the processor, memory and compiler on the tested system."

You should be able to use the "official" posted benchmarks and compare them to the Apple results. But if I'm looking at benchmarks when buying a system (not sure why I would be :p), I'd want comparisons based on software I'd use. I don't have icc or MS's compiler, so gcc results would be more interesting to me.

With a non-existent 970FX AIX system? :)

One excuse after another. :p
;)

Overall, it's marketing. Don't pay attention to it; you'll feel better and live longer. ;)

EDIT: Fixed quoting, I think.
 

Sohcan

Platinum Member
Oct 10, 1999
2,127
0
0
Originally posted by: n0cmonkey
Originally posted by: Sohcan

:confused: I'm not denying that, I partially agree. I'm talking about testing methodology, not who's to blame for the P4's performance on gcc. Normalizing the compiler is simply not in the spirit of the SPEC CPU test: "These benchmarks measure the performance of the processor, memory and compiler on the tested system."

You should be able to use the "official" posted benchmarks and compare them to the Apple results. But if I'm looking at benchmarks when buying a system (not sure why I would be :p), I'd want comparisons based on software I'd use. I don't have icc or MS's compiler, so gcc results would be more interesting to me.

I agree, I would never base purchase decisions on SPEC CPU, although I do use a few of the programs in the integer suite (Perl, gzip, gcc). It's purely an academic exercise for me....though I can guarantee, regardless of what processor you use, SPEC CPU was used to test the effect of design decisions on performance during its design. :)

To be honest, I find TPC and SAP SD and SAP APO more interesting.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Sohcan
Originally posted by: n0cmonkey
Originally posted by: Sohcan

:confused: I'm not denying that, I partially agree. I'm talking about testing methodology, not who's to blame for the P4's performance on gcc. Normalizing the compiler is simply not in the spirit of the SPEC CPU test: "These benchmarks measure the performance of the processor, memory and compiler on the tested system."

You should be able to use the "official" posted benchmarks and compare them to the Apple results. But if I'm looking at benchmarks when buying a system (not sure why I would be :p), I'd want comparisons based on software I'd use. I don't have icc or MS's compiler, so gcc results would be more interesting to me.

I agree, I would never base purchase decisions on SPEC CPU, although I do use a few of the programs in the integer suite (Perl, gzip, gcc). It's purely an academic exercise for me....though I can guarantee, regardless of what processor you use, SPEC CPU was used to test the effect of design decisions on performance during its design. :)

To be honest, I find TPC and SAP SD and SAP APO more interesting.

Way above my head. ;) I just go for what I can afford that has the support and features I want. ;)
 

yhelothar

Lifer
Dec 11, 2002
18,409
39
91
If this had real liquid cooling... you'd find average Joes asking you for help changing the coolant in the radiator...
 

dullard

Elite Member
May 21, 2001
26,185
4,844
126
Originally posted by: ViRGE
20-30% isn't enough to keep competitive? Their competition (*cough*Intel*cough*) hasn't gone anywhere in the last year, so I don't really think this is hurting them. Now, it isn't enough to help them recover from situations they were behind in by major amounts with the 2 GHz system, but it's still a sizable boost.
No, I don't think it is enough. I was being generous with the 30% mark (it only went up 25% in clock speed). Apple's own benchmarks here say that in Photoshop the 2.5 GHz G5 gets a 1.98, the 2.0 GHz G5 gets a 1.82, and the 3.4 GHz P4 gets a 1.00. So Apple's own benchmark puts the 2.5 GHz at 1.98 / 1.82 = 8.8% faster in Photoshop. Other numbers: the 2.5 is 14% faster than the 2.0 in Bibble, 13% faster in Final Cut Pro, and 13% faster in Logic Pro. All of those numbers are far below the more generous 20%-30% I gave it. Now let's look at the competition.

Intel last year:
3.06 GHz Xeon
533 MHz FSB
512 kB cache
32-bit

Intel 1 year later (rumored parts as we don't know for sure) - Release date June 27 for a lot of these
3.4 GHz (possibly higher)
Note: 3.6 GHz Xeons are scheduled for Q3 2004 (pushed back from Q2) and 3.8 GHz Xeons for Q4 2004
800 MHz FSB
1 MB or 2 MB cache
64-bit

Tough to say what some of those improvements will bring. But a 10% clock speed boost should help performance by 7-8%. When the 20% faster 3.6 GHz comes out, that should add ~15% to the speed. The 800 MHz FSB should be another 5-10% boost. The doubled cache adds 0%~25%, but that varies a lot with programs, so let's give it a total 5% boost. As for 64-bit, who knows. Total it up and it should be quite competitive with the 20%-30% number I used for the G5. So to say the Xeon hasn't gone anywhere is quite a stretch.

AMD July 1st last year:
Opteron 244 1.8 GHz

AMD in July this year:
Opteron 250 2.4 GHz
Opteron 252 2.6 GHz coming soon (anyone know an exact date, I'm thinking Sept)

Thus on MHz alone, AMD went up 33% (44% at release of the 252). So again, the competition went up at least as much if not more than the G5.

All I'm saying is that I'm disappointed. 20%-30% just isn't enough to really impress.
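The percentages in the post above check out; here's the arithmetic spelled out (the inputs are exactly the figures quoted in the post, nothing new):

```python
# Speedup percentages from the figures quoted above.

def pct_faster(new, old):
    """Percent improvement of `new` over `old`."""
    return (new / old - 1) * 100

# Apple's Photoshop scores: 2.5 GHz G5 = 1.98 vs 2.0 GHz G5 = 1.82
print(round(pct_faster(1.98, 1.82), 1))   # 8.8

# Clock-speed bumps:
print(round(pct_faster(2.5, 2.0), 1))     # 25.0  (G5, 2.0 -> 2.5 GHz)
print(round(pct_faster(2.4, 1.8), 1))     # 33.3  (Opteron 244 -> 250)
print(round(pct_faster(2.6, 1.8), 1))     # 44.4  (Opteron 244 -> 252)
```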
 

Erasmus-X

Platinum Member
Oct 11, 1999
2,076
0
0
They really ought to up the RAM on those bad boys.
2 GB for the top-of-the-line, and 1 GB for the others.

Why the heck would you want to pay the extra $$$ to have more standard memory from Apple anyway? Whenever I order one, I always go with minimal RAM, making sure the other slots are open, and add as much as I want myself.
 

LethalWolfe

Diamond Member
Apr 14, 2001
3,679
0
0
Originally posted by: Eug
Originally posted by: crazycarl
wait, so is it true that these only have 2 internal hard drive bays?????
Yeah, and I agree it's kinda lame, especially when there is an extra IDE connection available but just no place to put that extra IDE drive.

I'd say the next Power Mac revision needs space for 3 drives. A nice system for video or Photoshop would be a single boot drive and a RAID0 scratch disk. Actually people have done this. They've slaved an IDE drive to the optical drive and then just strapped it on top of the optical drive. (Yes there's room there, but no housing for it.) Then they just software RAID0'd the two SATA drives for the scratch.

FWIW, the G5 does come with two Firewire 400 ports and one Firewire 800 port. There is no significant drive speed bottleneck with Firewire 800.


At least in regards to video editing, that was my thought at first. But after I thought about it I realized that not having lots of internal space for HDDs is not a big deal. If you are editing at the DV level, all you need is an OS/software drive and a media drive (which can easily be FW). If you are editing in higher quality formats, you'll be running off of a bunch of external drives anyway.

With the exception of a possible niche here and there, there is no need for having 3+ internal HDDs anymore.


Lethal
 

dannybin1742

Platinum Member
Jan 16, 2002
2,335
0
0
Goguen said it had proved difficult to accommodate the power-density curve presented by the chip: "It's a challenge to cool the part." However, he added that from a user standpoint, the more efficient water-based cooling combined with the existing segmented air-flow technology lets the faster, hotter systems run just as quietly as previous models.

Simple thermodynamics shows us that the wattage going into the CPU has to come off the heatsink; in other words, it heats your room faster.
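The "watts in = heat out" point is just conservation of energy. As a rough sketch of what that means for a room (the 200 W draw and 30 m³ room are assumed round numbers, not measured figures, and a real room leaks heat through walls and ventilation):

```python
# Every watt a CPU draws eventually ends up as heat in the room. Rough
# estimate of how fast a dual-CPU box warms a sealed, unventilated room.

AIR_DENSITY = 1.2         # kg/m^3, approximate for room-temperature air
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K), approximate for dry air

def temp_rise_per_hour(power_watts, room_volume_m3):
    """Kelvin per hour, assuming all power heats the air and none escapes."""
    air_mass = AIR_DENSITY * room_volume_m3
    joules_per_hour = power_watts * 3600
    return joules_per_hour / (air_mass * AIR_SPECIFIC_HEAT)

# Say the machine dissipates ~200 W into a 30 m^3 room:
print(round(temp_rise_per_hour(200, 30), 1))   # 19.9 (K per hour)
```

In practice walls, furniture, and airflow soak up most of that, but the energy has to go somewhere.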
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: dannybin1742
Goguen said it had proved difficult to accommodate the power-density curve presented by the chip: "It's a challenge to cool the part." However, he added that from a user standpoint, the more efficient water-based cooling combined with the existing segmented air-flow technology lets the faster, hotter systems run just as quietly as previous models.

Simple thermodynamics shows us that the wattage going into the CPU has to come off the heatsink; in other words, it heats your room faster.

So do the 3 Athlon 2400's in my bedroom.