Apple releases new Dual G5 2.5 GHz Power Mac

Page 3 - AnandTech Forums

NightCrawler

Diamond Member
Oct 15, 2003
3,179
0
0
Originally posted by: Eug
Originally posted by: NightCrawler
Looks nice, but $3000 is outrageous; you can build a great PC for:

Case and PSU: .....................$60
Motherboard: .......................$56 ( Epox KT600 )
Athlon XP 2200 Mobile: ..........$77 ( overclocking of course )
Corsair 512MB PC3200 ..........$85
Hitachi 160GB 7200 RPM ........$104 ( Near Raptor performance without the price and loads of space plus RAID if you buy two )
128 meg Video Card ..............$100? ( Ati or nvidia so many to choose from )
8x DVD+R and Dual Layer ......$93
OS: Linux or Windows .............$$$
======================================================

BANG FOR YOUR BUCK :)
That's all fine and dandy, but the dual G5s should be compared to dual 3.2 GHz Xeons or dual 2.2 GHz Opterons. Plus there are things like optical outputs, Firewire 800, free software, etc.

Yes, but in terms of performance a single PC processor is usually plenty for desktop users. A dual Opteron is overkill for home users and actually not any faster with most software. The Opteron is intended for servers.

Dual G5s are a niche market intended for the wealthy.
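For what it's worth, the parts in that list can be totaled up in a couple of lines (a quick sketch; the $100 video card is NightCrawler's own guess, and the unpriced OS line is left out):

```python
# Totaling the parts list above (the video card figure is a guess from the
# original post; the OS line has no price, so it's omitted).
parts = {
    "Case and PSU": 60,
    "Epox KT600 motherboard": 56,
    "Athlon XP 2200 Mobile": 77,
    "Corsair 512MB PC3200": 85,
    "Hitachi 160GB 7200 RPM": 104,
    "128MB video card (ATI or NVIDIA)": 100,
    "8x DVD+R dual layer": 93,
}
total = sum(parts.values())
print(f"Build total: ${total}")  # prints "Build total: $575"
```

That's $575 against a roughly $3,000 dual G5, which is the "bang for your buck" point, though as Eug notes it leaves out what the G5's price actually buys.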
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: Eug
Originally posted by: NightCrawler
Looks nice, but $3000 is outrageous; you can build a great PC for:

Case and PSU: .....................$60
Motherboard: .......................$56 ( Epox KT600 )
Athlon XP 2200 Mobile: ..........$77 ( overclocking of course )
Corsair 512MB PC3200 ..........$85
Hitachi 160GB 7200 RPM ........$104 ( Near Raptor performance without the price and loads of space plus RAID if you buy two )
128 meg Video Card ..............$100? ( Ati or nvidia so many to choose from )
8x DVD+R and Dual Layer ......$93
OS: Linux or Windows .............$$$
======================================================

BANG FOR YOUR BUCK :)
That's all fine and dandy, but the dual G5s should be compared to dual 3.2 GHz Xeons or dual 2.2 GHz Opterons. Plus there are things like optical outputs, Firewire 800, free software, etc.

I don't know about the dual Xeons (personally I wouldn't pay for them if I could get an Opteron instead), but dual Opterons at 2.2 GHz will spank the dual G5 in most cases.

But they would be right around the same price. You can get cheap dual Opteron motherboards, but they suck because both CPUs share the same memory channel, which cripples the performance advantage of dual Opterons. (Multiple Xeon processors share the same memory channel; the G5's processors have independent channels.)

Nicer boards are expensive, around 400-500 dollars. And 2.2 GHz Opterons are around 700 dollars. ;)

Originally posted by: dannybin1742
typical and actual are two different things
Yes, that's why I posted actual numbers... for a chip that IBM rates as hotter than the 2.5 GHz 970FX.


IBM rates its CPU wattage differently than AMD or Intel. With Intel, for instance, the wattage rating you see is the expected normal wattage under load.

IBM rates them at the highest possible wattage. So if IBM says they are 50 watts, it's closer to 30-40 watts if you translate it to Intel's ratings.

I'm not entirely sure about that, but I believe it's true.
 

Eug

Lifer
Mar 11, 2000
24,176
1,816
126
Originally posted by: NightCrawler
Yes, but in terms of performance a single PC processor is usually plenty for desktop users. A dual Opteron is overkill for home users and actually not any faster with most software. The Opteron is intended for servers.

Dual G5s are a niche market intended for the wealthy.
You're right that dual G5s are not for general consumers, but they are intended for people who want to get work done and who can afford to pay for the hardware. These are the same people who buy dual Xeon and dual Opteron workstations.

Originally posted by: drag
I don't know about the dual Xeons (personally I wouldn't pay for them if I could get an Opteron instead), but dual Opterons at 2.2 GHz will spank the dual G5 in most cases.
I don't believe that's correct. It really depends on software. For multimedia encoding I think the G5 would probably win, but for gaming the Opteron would win for sure. But I think it's reasonable to compare a 2.2 GHz Opteron to a 2.5 GHz G5.

Now, this is an extreme example, but at NAB 2004 Apple demonstrated a dual G5 2.0 decoding H.264 high-definition video. A competing vendor used THREE Opteron machines linked together to accomplish the same feat, with more glitchiness. A lot of that is bad coding on the Opteron side, but it does show you that gaming, for example, is not necessarily a good measure of performance unless you happen to play that game.

Originally posted by: drag
IBM rates its CPU wattage differently than AMD or Intel. With Intel, for instance, the wattage rating you see is the expected normal wattage under load.

IBM rates them at the highest possible wattage. So if IBM says they are 50 watts, it's closer to 30-40 watts if you translate it to Intel's ratings.

I'm not entirely sure about that, but I believe it's true.
Actually that is not true.

IBM's ratings are in the ballpark of half of real-life max. They don't publish max numbers at all for general consumption.

Intel's TDP is essentially real-life max.
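To make the two rating styles concrete, here is a toy conversion under that claim (illustrative numbers only; neither company publishes an official formula for this):

```python
# Illustrative sketch of the claim above: IBM's published figure is roughly
# half of real-life max, while Intel's TDP is roughly real-life max.
def ibm_rating_to_real_max(ibm_watts):
    # "in the ballpark of half of real-life max" -> double it
    return ibm_watts * 2

print(ibm_rating_to_real_max(50))  # a 50 W IBM rating suggests ~100 W real max
```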
 

Wahsapa

Diamond Member
Jul 2, 2001
3,004
0
0
Originally posted by: Eug
Originally posted by: NightCrawler
Dual G5s are a niche market intended for the wealthy.
You're right that dual G5s are not for general consumers, but they are intended for people who want to get work done.

work = money = wealth


g5's pay for themselves! :roll:
 

Eug

Lifer
Mar 11, 2000
24,176
1,816
126
Originally posted by: Wahsapa
does anybody know of a g5 vs opteron review?

After Effects

Video Editing

PC Mag (Xeon/G5 only)

Basically, what we know is that on average an Opteron 2.2 is much faster than a G5 2.0. There have been no tests of the G5 2.5 yet, but I do think it's competitive on average overall. The Opteron 2.4 would still be faster than the G5 2.5 on average, though.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: Eug
Originally posted by: drag
I don't know about the dual Xeons (personally I wouldn't pay for them if I could get an Opteron instead), but dual Opterons at 2.2 GHz will spank the dual G5 in most cases.
I don't believe that's correct. It really depends on software. For multimedia encoding I think the G5 would probably win, but for gaming the Opteron would win for sure. But I think it's reasonable to compare a 2.2 GHz Opteron to a 2.5 GHz G5.

Now, this is an extreme example, but at NAB 2004 Apple demonstrated a dual G5 2.0 decoding H.264 high-definition video. A competing vendor used THREE Opteron machines linked together to accomplish the same feat, with more glitchiness. A lot of that is bad coding on the Opteron side, but it does show you that gaming, for example, is not necessarily a good measure of performance unless you happen to play that game.

Well, a networked cluster isn't going to scale as well as if the CPUs were in the same machine. And the software can make a huge difference, a bigger difference than it seems. You can see a 3-4x difference in speed when encoding media, even on the same machine, depending on the codec/software you use. So it's not really a valid comparison.
 

Eug

Lifer
Mar 11, 2000
24,176
1,816
126
Originally posted by: drag
Well, a networked cluster isn't going to scale as well as if the CPUs were in the same machine. And the software can make a huge difference, a bigger difference than it seems. You can see a 3-4x difference in speed when encoding media, even on the same machine, depending on the codec/software you use. So it's not really a valid comparison.
That was exactly my point. For some software the Opteron will rule, and for other software the G5 will rule.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: tart666
Alrightie then. Finally all three major CPU vendors have arrived at the same brick wall: HEAT (edit: or at least all three have the same power dissipation levels AND the same technology). Now we can see who has the better architecture... (and it looks like x86 is really showing its age)

Remember, x86 really means the x86 ISA, and this ISA is actually a layer of hardware abstraction.


For instance, one of the big gripes about the "old age" of x86 is that x86 CPUs can only have 8 general-purpose registers, versus the PowerPC's 32 GPRs. (A register is where information is put inside the CPU, and you have several different types; another type is floating-point registers.)

However, the Pentium 4 has 128 GPRs, which is 4 times as many as the PowerPC ISA uses.

It can do this and still be x86 because it continuously renames the registers, so each time the software sees the hardware it's presented with a different group of 8 registers. Thus the Pentium 4 can process much more at once than would normally be possible.

The ISA is just a hardware abstraction layer nowadays; the hardware itself can really be anything, and as long as the software sees the computer as an x86 computer, it will run.

So x86 can't really become obsolete per se, at least not in the way people generally think it can.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: Eug
Originally posted by: drag
Well, a networked cluster isn't going to scale as well as if the CPUs were in the same machine. And the software can make a huge difference, a bigger difference than it seems. You can see a 3-4x difference in speed when encoding media, even on the same machine, depending on the codec/software you use. So it's not really a valid comparison.
That was exactly my point. For some software the Opteron will rule, and for other software the G5 will rule.

Ya, but when comparing CPUs you want to compare the same software situation across different CPUs.

Comparing different software on one CPU in a cluster vs. different software on another CPU in an SMP configuration is worthless.

You might as well compare the FPS scores of Quake 3 on a dual G5 vs. the FPS scores of UT2004 on a Pentium III.

You don't want to fall into the same trap as those morons calling BS on Apple's benchmarks by claiming that the SPEC2000 tests (made using the GCC compiler) don't match the SPEC2000 results found elsewhere (made using the ICC compiler).

(The ICC compiler doesn't work on PowerPC, but GCC works on both x86 AND PowerPC, which is why it was used.)
 

Eug

Lifer
Mar 11, 2000
24,176
1,816
126
Well...

Photoshop - G5 faster or slower depending on which filters used
Video encoding - G5 2.0 faster than Xeon 3.06 using two cross platform benches
Lightwave - G5 slower than Xeon
After Effects - G5 faster or slower depending on which benchmarks used

Etc. etc.

Anyways, since you mentioned icc and gcc: I'll note that the IBM xl compilers are MUCH faster on the Mac than gcc, but they still aren't as fast as icc/ifc for SPEC. However, they don't autovectorize either (although Intel's speedup mostly comes from aggressive optimization, not so much from autovectorization). It's interesting to note, though, that IBM will be releasing new tweaked, autovectorizing xl compilers for Mac OS X this year, and the speedup vs. last year's xl release is supposed to be very nice. But Intel will still win for SPEC.

I wonder if it's possible to create a vector test. SPEC doesn't test vector performance at all (beyond whatever can be gained from autovectorizing compilers). One of the reasons Macs seem to do very well with certain software (BLAST, Apple's video software) is that Apple has excellent Altivec programmers. The reason I brought up that H.264 example was to illustrate this. Correct me if I'm wrong, but I believe Apple is the only company ever to demonstrate decoding of normally encoded H.264 HD content on a dual-processor desktop machine. This has never been demonstrated with dual Opteron or Xeon workstations.
 

Accord99

Platinum Member
Jul 2, 2001
2,259
172
106
Originally posted by: drag
You don't want to fall into the same trap as those morons calling BS on Apple's benchmarks by claiming that the SPEC2000 tests (made using the GCC compiler) don't match the SPEC2000 results found elsewhere (made using the ICC compiler).

(The ICC compiler doesn't work on PowerPC, but GCC works on both x86 AND PowerPC, which is why it was used.)

No, it was used because it was the only way Apple could have a chance at beating the P4 at SPEC2000 scores. Using the "same compiler", especially when the backend is radically different, is not the point of SPEC (and probably breaks some rules as well). Anyways, SPEC scores for the G5 using IBM's high-performance xlc/xlf compilers suggest the G5 is still no match for top-of-the-line x86 processors.
 

InlineFive

Diamond Member
Sep 20, 2003
9,599
2
0
The performance penalty from monitoring twenty-one sensors and actively modifying fan and liquid speeds must be high. Ouch. :(
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: PorBleemo
The performance penalty from monitoring twenty-one sensors and actively modifying fan and liquid speeds must be high. Ouch. :(
Naa, I used a G4 Xserve for a couple of years, and it had absolutely no problem keeping up with all the sensors. The PowerMac has a few more, but it's also faster, so I don't see that being a problem.

PS: drag, your register explanation isn't entirely correct, but I don't know enough about registers to explain exactly why. I do know, however, that the A64's 16 GPRs are a big deal, so take that as you will.
 

crazycarl

Senior member
Jun 8, 2004
548
0
0
Originally posted by: killershroom
Originally posted by: crazycarl
wait, so is it true that these only have 2 internal hard drive bays?????

Yup.


Wow.
Well, I guess we are in the era of 400 and 500 GB drives... maybe not such a necessity anymore...
 

0roo0roo

No Lifer
Sep 21, 2002
64,795
84
91
Originally posted by: killershroom
Originally posted by: crazycarl
wait, so is it true that these only have 2 internal hard drive bays?????

Yup.

Well, considering with those two bays you can get what, 600 GB or more these days, it's no biggie. Firewire external drives are pretty fast these days to boot, and you can attach enough to satisfy most anyone.

But yeah, OS-controlled liquid cooling with no user maintenance? That's just badass. :) If it's like the fan control before it, the OS anticipates heavy usage and ups the cooling before heat builds up. :) Just look at heat piping: the Zalman case can barely keep one CPU cool, and it costs over $1k alone.
 

0roo0roo

No Lifer
Sep 21, 2002
64,795
84
91
Originally posted by: ViRGE
Originally posted by: PorBleemo
The performance penalty from monitoring twenty-one sensors and actively modifying fan and liquid speeds must be high. Ouch. :(
Naa, I used a G4 Xserve for a couple of years, and it had absolutely no problem keeping up with all the sensors. The PowerMac has a few more, but it's also faster, so I don't see that being a problem.

PS: drag, your register explanation isn't entirely correct, but I don't know enough about registers to explain exactly why. I do know, however, that the A64's 16 GPRs are a big deal, so take that as you will.

Eh, yeah, it's nothing. Think about how much cr@p we have running in our system trays in Windows, well, most of us at least. It doesn't drag our systems down.
 

Eug

Lifer
Mar 11, 2000
24,176
1,816
126
Originally posted by: crazycarl
wait, so is it true that these only have 2 internal hard drive bays?????
Yeah, and I agree it's kinda lame, especially when there is an extra IDE connection available but no place to put that extra IDE drive.

I'd say the next Power Mac revision needs space for 3 drives. A nice setup for video or Photoshop would be a single boot drive plus a RAID 0 scratch disk. Actually, people have done this: they've slaved an IDE drive to the optical drive and strapped it on top of the optical drive (yes, there's room there, but no housing for it), then software-RAID 0'd the two SATA drives for the scratch.

FWIW, the G5 does come with two Firewire 400 ports and one Firewire 800 port. There is no significant drive-speed bottleneck with Firewire 800.
 

Eug

Lifer
Mar 11, 2000
24,176
1,816
126
Hmmm... Maybe the chip is hotter than we were led to believe...

eWeek: Apple Pumps Up Power Macs with Dual Processors

Goguen said it had proved difficult to accommodate the power-density curve presented by the chip. "It's a challenge to cool the part," he said. However, he added that from a user standpoint, the more-efficient water-based cooling combined with the existing segmented air-flow technology lets the faster, hotter systems run just as quietly as previous models.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: Accord99
Originally posted by: drag
You don't want to fall into the same trap as those morons calling BS on Apple's benchmarks by claiming that the SPEC2000 tests (made using the GCC compiler) don't match the SPEC2000 results found elsewhere (made using the ICC compiler).

(The ICC compiler doesn't work on PowerPC, but GCC works on both x86 AND PowerPC, which is why it was used.)

No, it was used because it was the only way Apple could have a chance at beating the P4 at SPEC2000 scores. Using the "same compiler", especially when the backend is radically different, is not the point of SPEC (and probably breaks some rules as well). Anyways, SPEC scores for the G5 using IBM's high-performance xlc/xlf compilers suggest the G5 is still no match for top-of-the-line x86 processors.

unhuh.

So using the ICC compiler would be more fair? B.S.

And "the backend is radically different"? GCC is specifically designed to be cross-platform. It sacrifices many of the features of ICC in order to work on many different computers.

I use GCC pretty much every day; what do you know about it?

ICC is SPECIFICALLY DESIGNED TO MAKE INTEL PROCESSORS LOOK GOOD IN BENCHMARKS. It's made by Intel for Intel processors. Get a grip.

GCC is made by the GNU project to be a cross-platform compiler.

That's why when Intel/Dell/whoever pays for a benchmark, they use ICC. They do it so it looks good in advertisements.

The point of benchmarks is to try to even out the playing field as much as possible. So VeriTest compared a Unix-style OS running GCC on x86 vs. a Unix-style OS running GCC on PowerPC.

That's as good as it gets when comparing different platforms, and it's actually much better than most cross-platform benchmarks I've seen.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: ViRGE
Originally posted by: PorBleemo
The performance penalty from monitoring twenty-one sensors and actively monitoring and modiying fan and liquid speeds must be high. Ouch. :(
Naa, I used a G4 Xserve for a couple of years, and it had absolutely no problem keeping up with all the sensors. The PowerMac has a few more, but it's also faster, so I don't see that being a problem.

PS drag, your register explaination isn't entirely correct, but I don't know enough about registers to exactly explain why. I do know however that the A64's 16 GPRs is a big deal, so take that as you will

Having more registers available to programmers is supposed to make code more flexible or something. But the reality is that the x86 ISA does expose only 8 GPRs, and the Pentium 4 does have 128 registers.

It's weird stuff.

16 registers vs. 8 registers is definitely a good thing, but my point is that x86 really describes the layer of hardware abstraction between the actual CPU/RAM/hardware and the assembly code used in software. It doesn't necessarily become obsolete, and it isn't obsolete now.

For instance, take the Transmeta processor. It's completely alien to RISC/CISC/PowerPC/x86 and all other types of hardware before it. It's something completely new, and weird.

But since the software sees it as x86, it's still x86. The nice part is that with the same technology it could be a PowerPC, RISC, or MIPS CPU if the designers wanted it to be.
 

Sohcan

Platinum Member
Oct 10, 1999
2,127
0
0
Originally posted by: drag
However, the Pentium 4 has 128 GPRs, which is 4 times as many as the PowerPC ISA uses.

It can do this and still be x86 because it continuously renames the registers, so each time the software sees the hardware it's presented with a different group of 8 registers. Thus the Pentium 4 can process much more at once than would normally be possible.

Just a nit-pick: the P4's 128 rename registers cannot be called GPRs; they are not accessible to software. Your understanding is off on rename registers vs. logical (architected) registers.

Having a pool of physical renaming registers that is larger than the number of logical registers is necessary to support out-of-order execution. Take the following snippet of code (the register to the left of "=" is the destination register; the two to the right are the source operands):

add r1 = r2,r3
add r4 = r2,r1

There is a RAW (read-after-write) dependency between the two instructions; the second add uses r1, a result of the first add, so the two instructions cannot be executed out-of-order. But say you had the following code:

add r1 = r2,r3
add r3 = r2,r4

There is a WAR (write-after-read) dependency...the second add writes to r3, which the first add uses as a source operand. While the second add does not use the result of the first, you still cannot execute these instructions out-of-order, because the second add would overwrite whatever value used to be in r3, changing the result for the first add. Say there are only 4 registers in the instruction set...the compiler, running out of logical registers, was forced to re-use register 3 as a destination operand for the second add...its ordering does not reflect any data actually flowing between the two instructions.

The solution to the WAR (and WAW, write-after-write) problem is register renaming. Instead of using the architected registers to feed data to the operands of instructions, we have a larger pool of physical registers. A structure maintains the current mapping of logical registers to physical registers...instructions are "renamed" in program order, and each "right hand side" (of the equals sign) source register uses the current logical-to-physical mapping, and the destination register receives a new logical-to-physical mapping. Any subsequent instruction that uses this logical register as a source uses this new mapping.

Reusing the WAR example, let's say (magically) that currently logical register r1 maps to physical register r1 (ditto for r2, r3, r4). Say there are 8 physical registers, so registers r5 to r8 are currently "free" (they have no mapping). The first instruction is renamed, and it just so happens that it uses the same physical register numbers. The second instruction is renamed, and the logical source registers r2 and r4 receive the current mappings to physical registers r2 and r4. The destination register, r3, gets a new mapping to physical register r5. So the code now looks like:

add r1 = r2,r3
add r5 = r2,r4

The instructions can now be executed out-of-order, and store their results without fear of violating some dependency between the instructions. If, for example, the first add depended on a memory access and was stalled, the second add could continue to execute. Any subsequent instruction that uses logical r3 as a source will use the mapping that the second add produced, so it will get mapped to r5. Thus, once the second add has executed, any subsequent instruction that used its destination register mapping can execute.

At the end of the pipeline, the instructions have to be "retired" in program order. The physical register used for an instruction's destination gets returned to the list of free physical registers. This method I've described is an overview of how the MIPS R10000 does out-of-order and renaming....other out-of-order CPUs have used similar or different methods of achieving renaming.

Anyway, what this gets down to is that the physical renaming registers are not software visible. Having more physical registers allows you to keep more instructions in-flight...the P4 can have 126 instructions in-flight, so it has 128 renaming registers. The POWER4/PPC 970 can have 200 instructions in-flight, so it has somewhere around 180-190 renaming registers as I recall.

But this doesn't get around the fundamental 8 architected register limitation of x86, which is all the software can see. Regardless of how many physical registers are present, if a compiler needs more than 8 registers for a particular procedure, it has to start spilling registers to memory, which is going to reduce the amount of instruction-level parallelism. Having fewer architected registers also limits compiler optimizations, such as loop-unrolling and software pipelining.

Hrrm, that ended up being a lot longer than I intended...hopefully it makes a little bit of sense. :)
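Sohcan's renaming scheme can be sketched in a few lines of toy Python (a simplification, not how any real CPU implements it: every destination simply takes a fresh physical register from a free list, and sources read the current logical-to-physical map, so the register numbers below won't line up exactly with the walkthrough):

```python
# Toy register renamer: instructions are (dest, src1, src2) tuples of
# logical register numbers; the result uses physical register numbers.
def rename(instructions, n_logical=8, n_physical=16):
    mapping = {r: r for r in range(n_logical)}   # logical -> physical map
    free = list(range(n_logical, n_physical))    # free physical registers
    renamed = []
    for dest, src1, src2 in instructions:
        # Sources use the *current* mapping...
        p1, p2 = mapping[src1], mapping[src2]
        # ...then the destination gets a fresh physical register.
        p_dest = free.pop(0)
        mapping[dest] = p_dest
        renamed.append((p_dest, p1, p2))
    return renamed

# Sohcan's WAR example:  add r1 = r2,r3  /  add r3 = r2,r4
print(rename([(1, 2, 3), (3, 2, 4)]))  # -> [(8, 2, 3), (9, 2, 4)]
```

After renaming, the second add no longer writes any register the first one reads, so the WAR hazard is gone and the two can execute out of order. (Retirement, which returns physical registers to the free list, is omitted here.)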
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Megatomic
Originally posted by: Marble
Maybe if the OS was better I would buy one.
OS X is Unix (a form of BSD): stable, feature-rich, and quite beautiful aesthetically. I'd love to have a dual G5, I just can't afford one. :(

Be careful. Calling things a unix can get you in trouble. ;)

Plus, it's a Mach kernel with a (mostly) FreeBSD userland. ;)