Luxology speaks up about its dual G5 vs. dual Xeon demo benchmark.

Eug

Lifer
Mar 11, 2000
24,029
1,646
126
Luxology talks about its demo pitting the dual Xeon 3.06 vs the dual G5 2.0. See here:

The impressive SPEC benchmark results presented at the Apple G5 launch have been greeted with skepticism and a host of questions about their validity and methodology.

Luxology was one of the "real world" applications used to showcase the speed of the new processor. We would like to outline a few key facts about our development practices and how we went about putting the demo together. We hope this will shed some light on the performance users can expect from the G5, and clear up some confusion surrounding the various speed measurements.

Before I get started, let me mention something about the platforms we support. We project that 65 to 70% of our sales over the next three years will be to Windows customers. We plan to support Mac, Windows and quite possibly Linux. On our development team, 75% of the engineers use Windows machines as their primary workstation and 25% use Macs. We like to consider ourselves platform agnostic. As a business we gravitate toward the platform with the fastest CPU and OpenGL, as it makes our applications look better. Also, in visual effects, compute time is money, so we must be acutely aware of which systems will be most economical for our customers. As artists, however, many of us simply prefer OS X for its attention to detail and everyday workflow.

Luxology uses a custom-built cross-platform toolkit to handle all platform-specific operations such as mousing and windowing. All the good bits in our app, the 3D engines and so on, are identical code that is simply recompiled on the various platforms and linked with the appropriate toolkit. For this reason our code is actually well suited to a cross-platform performance test. It also allows us to support multiple platforms with a relatively small development team. Huzzah.

The performance demo made with Luxology technology shows our animation playback tools utilizing motion-capture data. Typically with 3D animation playback the application taxes the GPU (graphics processor), using OpenGL or Direct3D to handle the heavy 3D computations. In the case of our demonstration we actually moved many of those functions to the CPU, and stated as much in the presentation. After all, this was a test of raw CPU power, not of the graphics cards in the boxes (which were identical Radeon 9800s, by the way).

We did quite a bit of performance tuning during the preparation for the demo. However, we did absolutely no AltiVec or SSE coding. The performance tuning was done on both Windows and OS X, using Intel's VTune, AMD's CodeAnalyst and Apple's Shark. Again, 75% of our engineers were on Windows boxes and 25% on Macs, not to mention that only one engineer had access and security clearance for the G5 prototype. That is hardly relevant, however, as any optimization done on one system is implicitly passed on to the other, since these were general optimizations to our raw 3D engines.

The demo setup itself was designed to require a large number of computations and to push a large amount of data in and out of the chip, to show both processor speed and bandwidth. I believe the demo accomplished this in an effective and very fair manner.

While we do not currently have commercial software on the market (we are in the R&D phases for several concurrent products) we would be more than happy to host any press persons (contact) and/or some technical experts to come to our Bay Area R&D facility to evaluate these claims.

Brad Peebler
President
Luxology, LLC


(BTW, Luxology was formed by the developers of LightWave.)

EDIT:

1) I suspect the differentiating factor here is bus bandwidth and NOT CPU speed. (I'm ignoring compiler optimizations at the moment.)

2) I'm 100% sure this thread will deteriorate into a flame fest. :p
 

PrinceXizor

Platinum Member
Oct 4, 2002
2,188
99
91
Nice post Eug. Surprised no one else has posted. Ah well.

<Awaits more 3rd party comparisons>

P-X
 

PrinceXizor

Platinum Member
Oct 4, 2002
2,188
99
91
Yeah. Saw that one already :)

Very interesting stuff. Glad the macophiles have some nice hardware for a change.

P-X
 

Dug

Diamond Member
Jun 6, 2000
3,469
6
81

Typically with 3D animation playback the application taxes the GPU (graphics processor), using OpenGL or Direct3D to handle the heavy 3D computations. In the case of our demonstration we actually moved many of those functions to the CPU, and stated as much in the presentation.

This is the problem with trying to make a cross-platform benchmark.

Sure, you can move those functions to go through the CPU. But you don't do that in real life. You would use the graphics card.

It's like running 3DMark to determine if a video card is best for your games. There's no way of telling, because you don't use 3DMark in real-life games.



 

mpitts

Lifer
Jun 9, 2000
14,732
1
81
Originally posted by: Dug
Typically with 3D animation playback the application taxes the GPU (graphics processor), using OpenGL or Direct3D to handle the heavy 3D computations. In the case of our demonstration we actually moved many of those functions to the CPU, and stated as much in the presentation.

This is the problem with trying to make a cross-platform benchmark.

Sure, you can move those functions to go through the CPU. But you don't do that in real life. You would use the graphics card.

It's like running 3DMark to determine if a video card is best for your games. There's no way of telling, because you don't use 3DMark in real-life games.

But by moving it all to the CPU, you get an adequate measure of how one CPU matches up against another.