AMD may pass Intel, and release a 3400 first.......(Intel in trouble they say, I doubt it)


justly

Banned
Jul 25, 2003
493
0
0
There are many good replies, but there is one thing I get tired of hearing: the claim that Hyperthreading = multitasking.

I think it best we clear up what "multitasking" is; dictionary.com does this quite well.
http://dictionary.reference.com/search?q=multitasking

After reading this it should be clear that multitasking is not limited to the P4 or HyperThreading (just as PetNorth stated). It would be more accurate to describe HyperThreading as parallel processing with two logical CPUs.
http://dictionary.reference.com/search?q=parallel%20processing
With this newfound understanding (for some) of how these differ, and the realization that the P4 architecture's long pipeline hurts performance when switching threads, the statement accord99 made, "So with Hyperthreading, you're trading decreases in single-thread performance for overall increase in work per unit time," is somewhat correct, but only in respect to the P4 architecture, and only if both threads can run at or near optimal performance with only half the P4's capabilities.
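The trade-off accord99 describes can be sketched with made-up numbers (the 65% per-thread figure below is an assumption for illustration, not a measurement of any real P4):

```python
# Hedged illustration of the Hyperthreading trade-off (numbers assumed,
# not measured): suppose each of two threads runs at 65% of full
# single-thread speed when sharing one physical core.
single_thread_speed = 1.00   # normalized work per unit time, HT off
per_thread_speed_ht = 0.65   # assumed per-logical-CPU speed, HT on

# Two logical CPUs working at once: aggregate throughput goes up,
# while each individual thread gets slower.
throughput_ht = 2 * per_thread_speed_ht

print(f"per-thread slowdown: {1 - per_thread_speed_ht:.0%}")
print(f"aggregate throughput gain: {throughput_ht / single_thread_speed - 1:.0%}")
```

With these assumed numbers, each thread loses 35% of its speed but total work per unit time rises 30%, which is exactly the "decrease in single-thread performance for overall increase in work per unit time" being discussed.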

The Athlon 64 architecture, with its integrated memory controller and shorter pipeline, can shift tasks almost seamlessly in comparison to the P4, preventing its processing power from being cut in half for multiple threads.

It is not so much that one multitasks better than the other, but more the question of how it multitasks. The characteristics/demands of the apps being run may very well be the determining factor between one solution or the other being better.
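To make the multitasking-versus-Hyperthreading distinction concrete, here is a minimal Python sketch (the workload and thread count are arbitrary choices for illustration): any OS can time-slice these two threads on a single logical CPU, which is multitasking; Hyperthreading merely exposes a second logical CPU per core so the scheduler can run them in parallel.

```python
import os
import threading

def busy_work(n):
    # Trivial CPU-bound stand-in for a real workload.
    total = 0
    for i in range(n):
        total += i * i
    return total

results = {}

def worker(name, n):
    results[name] = busy_work(n)

# Two threads: the OS multitasks them regardless of CPU count.
threads = [threading.Thread(target=worker, args=(f"t{i}", 100_000))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both threads complete whether the machine reports 1 logical CPU or
# many; multitasking is an OS property, not a Hyperthreading feature.
print("logical CPUs visible to the OS:", os.cpu_count())
print("results:", results)
```

What Hyperthreading changes is only the last line's count: the scheduler sees two logical CPUs per physical core and can dispatch both threads simultaneously instead of alternating between them.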
 

yhelothar

Lifer
Dec 11, 2002
18,409
39
91
I thought Intel already had a 3.4GHz P4EE out?
Or are the ones in the reviews OC'ed?
 

Bovinicus

Diamond Member
Aug 8, 2001
3,145
0
0
Originally posted by: Cerb
The cheaper CPU is better than the more expensive CPU. Otherwise, they are even.

"The P4 is better at media encoding!"
OK, for LAME I'll give it to you.
Video? You're either insane or you don't watch what you encode. I'd be surprised if a P4EE could even manage 30FPS for a rip that you could play and compare to the DVD it came from. And I'd be surprised if it could manage 20FPS for animation of similar quality. You just have to resign yourself to going to bed and not worrying about how long it takes if you want quality compressed video. Period.

"The Athlon64 is better in gaming!"
Yes, it is...but if you can notice 5%, you're too picky and need to get out more (and when you hear that from a geek with no GF whose main social event is a local LARP, you're in bad shape).
The A64 is mainly better for gaming because it's cheaper, especially with the 3000+.

3D rendering...
If you're serious about this, you'll get a nice workstation: dual Opteron, dual Xeon, P4C or AthlonFX, a small 15K SCSI main HD, maybe a Quadro FX, and you'd STILL have to be patient with it. Ultimately, architectural differences gradually become removed the more you need to actually work, and the ability of the system integrator makes or breaks it. What benchmarks most show, except for gaming and the Business Winstone, are performance numbers far too stripped down to be of real use (though many of the synthetic benchmarks are useful, if just to make sure everything is running like it should).

Oh, and we need better benchmarking.
I want to see the difference between the A64/AFX and P4C/P4EE when they have Norton realtime protection going, winamp, trillian, mail client checking every minute or two, weather channel thingie, and MBM running.
Then we'll finally get some real-world performance scores. 'Cause see, most people have this stuff running, and it would definitely show if either CPU/chipset setup gets hindered by light multitasking.
Good point on the video encoding. I have tried a little of it in my day. It takes a ridiculous amount of time for quality video that is highly compressed. Patience is a virtue that one must have when encoding high quality videos.

True, a 5% difference in performance is not really tangible. However, there are a few games that really do benefit from the architectural enhancements of the Athlon64. Unreal 2, UT2003, and Age of Mythology all improve upon the 3.2C by 10% or more on similar platforms (30%+ in AoM's case). There are probably other games as well, but the variety of games that actually get benchmarked is relatively low in comparison to the number of games out there. I can definitely notice 10% in very taxing moments of gameplay.

If you are serious about 3D rendering, then you may purchase a system with the high-dollar components that you are talking about. However, there are a lot of people that like to delve into many areas of computing but simply don't have the money that professionals have for the same equipment. Unfortunately, all of this is moot because of what previous posters have said: the actual rendering of a scene is only a small part of creating 3D animation, and that is the statistic that most benchmarks report.

Damn straight. Benchmarking does need to be improved. We have to deal with what we can get in the meantime. A rough idea of real world performance is better than no idea at all.