Can't wait until we measure CPUs in a standard MIPS format...

MadRat

Lifer
Oct 14, 1999
11,967
280
126
P233MMX faster than P2-233

P!!!-500B Slot-1 faster than P!!!-500E FCPGA

P!!! faster than P4

K7-Athlon faster than Duron

533 MHz G4 faster than 733 MHz G4



One of these days we'll find our old P5-200MMXs outrun the 10 GHz chips coming out tomorrow. When will this madness end?
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Isn't MIPS "millions of instructions per second"? Wouldn't you need a specific definition of "instruction"? Otherwise a RISC-based chip would appear to be much faster than it is... counting plain instructions is almost as bad as MHz (what if you had to do 16 instructions for an emulated floating-point divide?)
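
The emulated-divide point in numbers. This is a hypothetical sketch with made-up figures (no real chips), just to show that a higher MIPS rating can mean less useful work per second:

```python
# Hypothetical chips: B posts 8x the MIPS of A, but needs 16
# instructions to emulate one floating-point divide, while A
# does a divide in a single instruction.
chip_a_mips = 100.0   # millions of instructions per second
chip_b_mips = 800.0

instr_per_divide_a = 1
instr_per_divide_b = 16

divides_a = chip_a_mips / instr_per_divide_a   # millions of divides/sec
divides_b = chip_b_mips / instr_per_divide_b

# The "slower" chip by MIPS actually completes twice the divides.
assert divides_a > divides_b
```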
 

MadRat

Lifer
Oct 14, 1999
11,967
280
126
It shouldn't be too hard to figure out standards for x86 chips. Some type of algorithm would suffice. Just don't let Intel or AMD set it up; perhaps some intermediary like Apple would do it for the good of the x86 industry! hehe
 

Cat

Golden Member
Oct 10, 1999
1,059
0
0
You cannot measure a CPU by its instructions per second. RISC vs. CISC, as pointed out above, and MMX / SSE / SIMD / 3DNow! instructions further complicate the matter.

Say I've got a chip that can only do 100 instructions/sec, but each instruction is so complex that it can process all the data that Quake 3 needs in a frame. Voila! 100 FPS.

A simple analogy. You and I are digging two separate pits. I can dig at one shovelful a second, and you can dig at two. However, I have a HUGE shovel, and yours is a plastic beach trowel. :) Who wins?
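
The shovel analogy as arithmetic, with made-up numbers (the shovel sizes here are assumptions for illustration, not anything measured): throughput is rate times work per operation, not rate alone.

```python
# Made-up values: digs per second * dirt per dig = dirt per second.
my_rate = 1.0        # shovelfuls per second (slower digger)
my_shovel = 10.0     # liters per shovelful (HUGE shovel)

your_rate = 2.0      # shovelfuls per second (faster digger)
your_shovel = 0.5    # liters per shovelful (beach trowel)

my_throughput = my_rate * my_shovel       # 10.0 liters/sec
your_throughput = your_rate * your_shovel  # 1.0 liters/sec

assert my_throughput > your_throughput    # the big shovel wins
```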
 

MadRat

Lifer
Oct 14, 1999
11,967
280
126
The FPS test is too dependent on video card drivers, so it's not valid. The size of the shovel doesn't matter, either, since the smaller shovel will be more than twice the speed of the larger shovel...
 

lifeguard1999

Platinum Member
Jul 3, 2000
2,323
1
0
The real measure of any CPU is a benchmark. That benchmark should be a program that you use. If you are a gamer, then Q3/UT/etc. If all you do is surf the web, then a 533 MHz Celeron is as good as a 1.3 GHz Athlon.

Another example: the Athlon beats many single CPUs on supercomputers in the number of floating point operations per second (flops) it can perform. The CPUs on supercomputers cannot play Quake3. (Well, they could if someone ported it. Someone did that as an example for an immersive, virtual reality experiment. But that is getting off on a tangent.)

What good is a supercomputer then? The benchmark to use on them is not Quake3, or even FLOPS. The benchmark to use on them is the program that you intend to run. For example, CTH (a shock physics code) or GASP (computational fluid dynamics) or GAMESS (computational chemistry). Supercomputers use up to thousands of CPUs with a fast interconnect that gigabit Ethernet cannot touch.

Even then, an Athlon in a Beowulf cluster can beat a supercomputer, if you choose the correct benchmark.

To close, I will repeat myself.

The real measure of any CPU is a benchmark. That benchmark should be a program that you use.