
Moore's Law Re-examined -- Are Computers Slowing?

Caveman

Platinum Member
Seems like it... I bought my "state of the art" system about a year and a half ago with the intent to replace it after two years this November...

I assumed that what would be available this fall/winter would be a machine about 4x as fast. Perhaps I assumed incorrectly that computer speed doubles each year, but that seemed to be the case based on my research nearly two years ago, before I built my rig.

I'm an avid "power user": flight simulation, engineering apps, high speed internet, etc... and I'm disappointed with what will be available to buy in 6 months. Even if the Clawhammer is a success, won't it still only be about twice the speed of my current machine, or am I underestimating what's hitting the shelves this fall?

Looking for something at least 3x the speed of what I have now, assuming I buy state-of-the-art components throughout; will I make it or am I dreaming?
 
Moore's law doesn't state anything about performance directly. It states that, on average, the number of transistors in any particular MPU will double every 18 months or so. That does not translate directly into performance for that MPU and most certainly doesn't translate directly into double the system performance every 18 months.
 
But Moore's law has nothing to do with the speed of the cpu... It's about transistor density.

(EDIT: Damn! Beaten by mere seconds.)
 
or am I dreaming?
yes.

Speed will be up about 50% over last June as of 6-23, when the 3.2 GHz P4 is released. But (as discussed in two other threads a couple of days ago) there's very little incentive right now for Intel to push for more big speed increases. The Intel roadmap shows steady increases over the next year and a half, but no wild leaps.
 
You could buy a 1800+ a year and a half ago?

Well, I'm not complaining. At least this way we can still play new games on PCs without much trouble.

I remember my 233 MHz machine... two years after I built that in 1998, I couldn't play anything without lowering the resolution to the minimum and turning down details on most settings.

But I would be so happy provided my rig is still pretty good a year from now... but considering Athlon 64, PCI Express, Tejas and whatever else, I doubt that.
 
According to the incorrect version of Moore's law (which is nonetheless the most useful version), performance should only be 2.5x greater after 2 years.
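The arithmetic behind that 2.5x figure is easy to check. A minimal sketch, assuming performance doubles every 18 months (the popular misreading of Moore's law, not what Moore actually said):

```python
# Projected speedup after `months`, assuming performance doubles
# every `period` months (the popular misreading of Moore's law).
def projected_speedup(months: float, period: float = 18.0) -> float:
    return 2 ** (months / period)

print(projected_speedup(24))  # 2 ** (24/18), about 2.52x after 2 years
print(projected_speedup(18))  # exactly 2.0 after one full doubling period
```

So "only 2.5x after 2 years" is just what an 18-month doubling period gives you; a 12-month period would give 4x over the same span.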
 
my philosophy on upgrading is at least 2x total performance (well, hard drives have only come so far... so..)

you are guaranteed to notice things happening 2 times quicker, 2 times the frame rates, etc 🙂
 
Originally posted by: imgod2u
Moore's law doesn't state anything about performance directly. It states that, on average, the number of transistors in any particular MPU will double every 18 months or so. That does not translate directly into performance for that MPU and most certainly doesn't translate directly into double the system performance every 18 months.

Not entirely correct... Moore's law ascribed the rate of decreasing cost per function as being a function of time with an exponential character.

The exponential character gives rise to a near-linear slope on a semi-log plot, with a time function that originally centered on a 12-month cycle, meaning every 12 months the cost per function dropped to half of what it was 12 months prior.

This near-linear slope moved up (slowed down) to an 18-month cycle nearly 7 years ago, and 2 years ago it moved to a 24-month cycle.
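Those cycle lengths make a big difference when compounded. A quick sketch of the halving math (the start cost of 1.0 is just a placeholder, not a real dollar figure):

```python
# Cost per function after `months`, assuming it halves every `cycle` months.
def cost_per_function(start_cost: float, months: float, cycle: float) -> float:
    return start_cost * 0.5 ** (months / cycle)

# The same 24 months under the three historical cycle lengths:
print(cost_per_function(1.0, 24, 12))  # 0.25   -- original 12-month cycle
print(cost_per_function(1.0, 24, 18))  # ~0.397 -- the later 18-month cycle
print(cost_per_function(1.0, 24, 24))  # 0.5    -- the current 24-month cycle
```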

Now, cost per function is only loosely correlated to transistors. Doubling the number of functions does not necessarily double the number of transistors. Sometimes more transistors are needed per function (additional SRAM to support throughput, etc.) when you double the number of functions per chip, other times fewer (hyperthreading).

The fact that the rate of increased performance roughly correlates to the rate of decreasing cost per function is tied intimately to the prevalent technique of reducing the cost per function, namely shrinking the geometry of the transistors needed to describe a "function".

So, if the complexity (number of functions) on a chip is increased monotonically with the decrease in cost of producing such functions, so as to maintain equivalent manufacturing cost of producing the chip (not to be confused with market price), then we can expect the number of functions on a state-of-the-art chip to be roughly twice that of a state-of-the-art chip produced two years ago. Memory (DDR, etc.) is a good example of this. Logic (CPUs, ASICs, etc.) is a moderately good example, but the correlation is muddied by the changing ratios of the types of functions on ever more complex chips (SRAM-to-logic ratio, etc.).

Even more of a stretch is to say that the rough correlation of doubling the number of functions every 24 months, in monotonic agreement with the decrease in cost of those functions, also correlates to a monotonic doubling of performance every 24 months.

Moore's law is one of the most readily confused aspects of semiconductor manufacturing and economics, and yet it is one of the easiest concepts to comprehend out of the myriad of technical issues surrounding mfg'ing ICs and selling them. Maybe the fact that it is almost too easy to understand is why everyone tries to be an expert on Moore's law (including myself), hence all the confusion that comes with everyone expressing their opinion as fact.

My $0.02, although that may be an overvaluation of the cost per my function; come back in 24 months and we'll see what it's worth 😀
 
Yeah, I know that Moore's law quantitatively deals with the number of transistors, but qualitatively it relates to overall speed as well...

And what I'd hoped to get this fall was a machine at least twice as fast in final framerate as the machine I have now...

I was thinking a CPU at about 3.4 GHz with 1 gig RAM and the latest video card... Probably ATI's next incarnation after the 9800...

BTW, when can we first start to expect CPUs on the order of 4-5 GHz???
 
Caveman I know what you're saying...

I bought an AMD Athlon 1.4 GHz in fall of 2001 and I upgraded to an XP2100+ last August. It's really nice to open apps that quickly, but I see zero MANDATORY need to get the latest AMD or P4 3.0GHz CPU just to do normal stuff.

I would get the latest system for poorly coded, over-the-top-graphics games like Splinter Cell, or perhaps just get the latest video card.

I think today's programmers are spoiled by the amount of resources available.

Look at the games from the 80s - they had KILOBYTES and 1-10 MHz to work with and made amazing games. Now programmers have GIGABYTES and GIGAHERTZ to mess around with, and yet the amount of pure crap is astonishing (Enter the Matrix - they're selling it just based on the name of the movie).
 
I thought I would ignore Moore's law and try a bit to answer your question.

Anandtech

The Comanche 4 benchmark uses a lot of CPUs, from 1.5GHz to 3.0GHz, which should give an indication of the sort of speed you'd need to double your computer's speed in actual terms (not necessarily 1.5GHz to 3.0GHz).

A 1.8GHz Pentium 4 is about 50% of the speed of the 3.0GHz P4 with 800MHz FSB, but the XP1800+ is about 65% of the speed.

You might be able to get something like 2-2.5x the speed at the end of the year, depending on what the Prescott and Hammer thingies manage to put out in terms of added performance from cache/FSB etc, not just numbers.

Dunno if that will help any, but if you manage to trace back CPU release dates, it might help indicate where we might be in the future and where we have gone in the last 18 months or so.
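One way to use fractions like those: if your current CPU scores some fraction of a reference chip, the score needed for a given speedup is just that fraction times the target multiple. A sketch, treating the rough 65% XP1800+ figure above as an assumed input:

```python
# Benchmark score (as a fraction of a reference chip, e.g. the P4 3.0C)
# needed to reach `target_multiple` times your current CPU's performance.
# The 0.65 figure is the rough XP1800+ fraction quoted in the thread.
def required_fraction(current_fraction: float, target_multiple: float) -> float:
    return current_fraction * target_multiple

print(required_fraction(0.65, 2.0))  # need ~1.3x the P4 3.0C's score to double
print(required_fraction(0.65, 3.0))  # tripling needs nearly 2x the P4 3.0C
```

Which is why the thread's 2-2.5x estimate by year's end sounds plausible, but 3x looks like a stretch.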
 
Prescott is going to be the CPU that will give you the speed you want, because it will be very scalable like the Northwood and Willamette cores were. They went from 1.4GHz to 3.2GHz. Prescott will go from 3.2GHz all the way up through 5GHz and a little beyond. It will also feature a larger cache and newer Hyper-Threading. Also, some interviews have mentioned even more cache than 1MB, so if they get to 1.5MB or even 2MB of cache, that could improve performance a lot.
 