
Is the PC hardware industry slowing down?

I think computers have reached the point of being good enough for the majority of people. The focus these days is on mid-range and low-end. Truth be told, the iPad is plenty of computer for your average user. It's not like the '90s when people needed the latest and greatest. Most people just use their systems to surf the web, watch YouTube vids, and IM. You don't need an i7 for that.

I disagree. It is easy to write something that taxes the hardware beyond what it is currently capable of (x264 can easily bring an i7 down to a crawling 1.5 fps, or lower with the right filters). I think it is more of a problem with architectural design. Just about every corner that could be cut, every known optimization, etc., has already been implemented in hardware. Now they have been stagnating, adding more and more cores in hopes that some magical enhancement will come along.

True. x86 is getting a little long in the tooth. Revisions aside, most computers today are built on technology that's 30 years old. There's probably a better way of doing it, but logistics gets in the way. Everything has to be built from the ground up again. People want that legacy support, especially businesses. Just going to 64-bit wasn't exactly smooth. Even if we do develop a new architecture, it's going to have to incorporate some sort of x86 virtualization.
 
While it's true that more cores is a solution that has been forced on us by limitations in lithography, it's not just some temporary hack. Multicore computing, along with properly threaded software, is a very legitimate way of greatly improving the processing power of modern computers. It will inevitably run up against some of the same miniaturization and heat-dissipation barriers that stopped the GHz wars, but it is the way forward.

To a point. By the time you get to about 8 cores, software that is able to effectively use those cores becomes very scarce. (Heck, software that can use 2 cores effectively is somewhat on the scarce side.)

With predictions of 16 or 32 cores, you start to get into a "diminishing returns" situation. I didn't mean to come off as saying multicore isn't an improvement, just that it was more of a "Shoot, I can't think of what to do next" solution. I think Intel and AMD realized that parallel programming is a pain, and I think that is part of the reason developing dual-core CPUs took as long as it did. They both hit clock and power walls, and that is what forced them to go in a new direction.
 
While it's true that more cores is a solution that has been forced on us by limitations in lithography, it's not just some temporary hack. Multicore computing, along with properly threaded software, is a very legitimate way of greatly improving the processing power of modern computers. It will inevitably run up against some of the same miniaturization and heat-dissipation barriers that stopped the GHz wars, but it is the way forward.

This is really a matter of mindset for programmers. For years applications have been designed to take advantage of a single thread. The faster that thread can be processed, the faster the application will run. Recently programmers have (as a generalization) begun to think in terms of dual cores; that is, running 2 threads simultaneously. Eventually they will begin to think in terms of taking advantage of an unlimited number of cores, meaning applications will scale to the number of cores automatically. While we have reached a real cap in terms of MHz/GHz (at least with our current fabrication process), we have only begun to explore what can be done running several processes at once (multiple cores). A gross simplification, but you get the idea. Technology has always preceded our ability to take advantage of it. I suspect it will continue in that direction for some time.
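To make that concrete, here is roughly what I mean by "scaling to the number of cores," as a minimal C++11 sketch (the array-summing job is just a stand-in for real work): ask the runtime how many hardware threads there are and carve the data into that many chunks.

#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Split one big summation into one chunk per hardware thread.
int main() {
    const std::size_t n = 50000000;
    std::vector<double> data(n, 1.0);

    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 2;  // the call is allowed to return 0, so fall back to something sane

    std::vector<double> partial(cores, 0.0);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < cores; ++t) {
        workers.emplace_back([&, t] {
            std::size_t begin = n * t / cores;
            std::size_t end = n * (t + 1) / cores;
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();

    std::cout << "threads: " << cores << "  sum: "
              << std::accumulate(partial.begin(), partial.end(), 0.0) << "\n";
}

The usual caveat applies: something this memory-bound may not scale linearly with core count, which ties back into the points below about profiling before you bother splitting anything.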
 
We have also reached the limits of what we need to accomplish with personal computers. Much of the business world still runs on P3- and P4-class machines, and they only upgrade when one goes dead. The number of people who need faster machines is dwindling.

Yep.

PC manufacturers and even Apple have now messed themselves up by selling computers that are too fast for 90% of the work they do. Some random family that just uses a computer to surf the web and whatnot won't have to upgrade for years if they bought a half-decent one. I mean, most mid-range Dells have fast dual- to quad-core CPUs, 4-8GB RAM, plenty of drive space, Blu-ray, etc...rather fast machines. So...unless it breaks, why upgrade even now...even 2 years from now...5-10 years? Even some Gateway Celeron POS from 2003 can still run just about everything...not games, but if you have a 2003 Gateway you aren't worried about games...
 
This is really a matter of mindset for programmers. For years applications have been designed to take advantage of a single thread. The faster that thread can be processed, the faster the application will run. Recently programmers have (as a generalization) begun to think in terms of dual cores; that is, running 2 threads simultaneously. Eventually they will begin to think in terms of taking advantage of an unlimited number of cores, meaning applications will scale to the number of cores automatically. While we have reached a real cap in terms of MHz/GHz (at least with our current fabrication process), we have only begun to explore what can be done running several processes at once (multiple cores). A gross simplification, but you get the idea. Technology has always preceded our ability to take advantage of it. I suspect it will continue in that direction for some time.

Not necessarily. Threading has its own shortcomings that aren't easily overcome. Just spawning and destroying a thread requires significant overhead. Not only that, but there are just some code paths and data routes that don't lend themselves to being threaded.

It really isn't as simple as saying, "Hmm, I'm doing this and this, let's run them at the same time!" The problem is, you can only make statements like that after doing a large amount of profiling to make sure that splitting the data route will actually yield any benefits.

One example of this came up in the programming forums. We were developing an algorithm for fast Pythagorean triplet finding. Once we achieved the optimal solution, adding threading on top of it actually slowed things down significantly. Applying threading to some of the slower solutions did yield some benefit; however, if the solution was already fast enough, nothing was gained.

There are other issues. For example, you can't simply say, "Spawn X threads because I have Y cores." It isn't always that simple. Sometimes running twice as many threads as cores will yield higher speeds; sometimes running half as many will. There is often little rhyme or reason to it.

And don't get me started on the standard problems such as race conditions and deadlocking.

There is only so much you can do with threads, and it isn't as simple a task as just saying "Well, split it!" Many programmers look at it this way: "Is it worth my time to implement this?" Anymore, the software model has become "Is the program fast enough?" rather than "How much faster can I make it?" Most programs simply don't need the full speed of the CPU.
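To put a rough number on the spawn-and-teardown overhead mentioned above, here is a minimal C++11 timing sketch (the 10,000-element job is a made-up toy workload, not anything I've profiled for real): do the tiny job inline, then hand the same job to a freshly created thread. On a job this small the threaded version generally loses.

#include <chrono>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Compare a trivial job done inline vs. handed off to a brand-new thread.
int main() {
    std::vector<int> work(10000, 1);

    auto t0 = std::chrono::steady_clock::now();
    long long inlineSum = std::accumulate(work.begin(), work.end(), 0LL);
    auto t1 = std::chrono::steady_clock::now();

    long long threadedSum = 0;
    auto t2 = std::chrono::steady_clock::now();
    std::thread worker([&] {
        threadedSum = std::accumulate(work.begin(), work.end(), 0LL);
    });
    worker.join();  // thread creation + join overhead is the whole point here
    auto t3 = std::chrono::steady_clock::now();

    using us = std::chrono::microseconds;
    std::cout << "inline:   " << std::chrono::duration_cast<us>(t1 - t0).count()
              << " us (sum " << inlineSum << ")\n"
              << "threaded: " << std::chrono::duration_cast<us>(t3 - t2).count()
              << " us (sum " << threadedSum << ")\n";
}

Thread pools exist precisely to amortize that creation cost, but then you're into work-queue design, which is its own can of worms.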
 
This is really a matter of mindset for programmers. For years applications have been designed to take advantage of a single thread. The faster that thread can be processed, the faster the application will run. Recently programmers have (as a generalization) begun to think in terms of dual cores; that is, running 2 threads simultaneously. Eventually they will begin to think in terms of taking advantage of an unlimited number of cores, meaning applications will scale to the number of cores automatically. While we have reached a real cap in terms of MHz/GHz (at least with our current fabrication process), we have only begun to explore what can be done running several processes at once (multiple cores). A gross simplification, but you get the idea. Technology has always preceded our ability to take advantage of it. I suspect it will continue in that direction for some time.

That Amdahl guy is a real asshole. 😡
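For anyone who hasn't actually run the numbers, that joke is the whole "diminishing returns" problem in one formula. Amdahl's law says speedup = 1 / ((1 - p) + p/n), where p is the fraction of the program you can parallelize and n is the core count. Quick C++ sketch (the 90% figure is just a made-up example, not a measurement of anything):

#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n)
// p = parallel fraction of the work, n = number of cores.
int main() {
    const double p = 0.90;  // hypothetical: 90% of the work parallelizes
    const int counts[] = {1, 2, 4, 8, 16, 32, 64};
    for (int n : counts) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("%2d cores -> %5.2fx\n", n, speedup);
    }
    return 0;
}

Even with 90% of the code parallel, 32 cores only buys you about 7.8x and 64 cores barely pushes past 8.7x, because the serial 10% ends up dominating. Hence the diminishing returns mentioned a few posts up.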
 
We have also reached the limits of what we need to accomplish with personal computers. Much of the business world still runs on P3- and P4-class machines, and they only upgrade when one goes dead. The number of people who need faster machines is dwindling.

This. Hence the advent of netbooks. I <3 my $180 old used ThinkPad tablet.
 
On the CPU front, I'd say the apparent slowdown is from a lack of competition. AMD is putting out products that are, for the most part, speed-competitive with Intel's Core 2 Quad generation. Intel has no real motivation to quickly put out new, faster processors; they're already the fastest.
 
I don't think 6 months is an eternity but I do find it amusing that the best $100 video card has been the Radeon 4850 for over a year. I have a 4870 in my rig, a 2-year-old card, and it still handles everything I throw at it just fine.

Not that I'm complaining!
 
What's sad here is that most posts indicate that overall computer processing power/usability can be measured by the ability to handle/play games.

Pitiful.
 
What's sad here is that most posts indicate that overall computer processing power/usability can be measured by the ability to handle/play games.

Pitiful.

Not really. Computer games are some of the most hardware-intensive things that the general population can do. The rubric of game performance is really a good one for how far the industry has advanced. (Especially for the GPU industry, as games are just about the only thing that takes advantage of a GPU's processing power.)
 
I came in here to voice my agreement with many of the above sentiments. My machine is 2.5 years old (C2D 4500 and nVidia 8500GT) and it's still going VERY strong. Here's to another two years, machiney :beer:

But, in all honesty, computing power now exceeds computing needs significantly. We are approaching a new technology horizon, however, where video will be much, much more prevalent. Think about how everything you do now is through text (text menus, forums, etc.). Now visualise a "personal assistant", a new and improved version of the ol' paperclip from MS Office. Think about the possibilities of holograms, like those in Star Wars and Star Trek. Think about 3D. Can't wait 🙂
 
Not really. Computer games are some of the most hardware-intensive things that the general population can do. The rubric of game performance is really a good one for how far the industry has advanced. (Especially for the GPU industry, as games are just about the only thing that takes advantage of a GPU's processing power.)

As was stated in one post, the video game industry shifted its design focus during this latest generation of consoles. Prior to that, games were most often programmed and designed for the PC and then ported over to the consoles. So the limiting factor was the PC hardware available at the time, and features would be scaled back in order to adapt the game to the consoles' inferior hardware. Now consoles are the primary design focus, and games are being designed for aging hardware. Developers rarely program games to scale up and make full use of the resources available on a PC built within the last few years, which would outstrip any of the current consoles.

But there are other areas where PC hardware still is fully taxed. I believe bigi would be referring to 3D modeling, CAD work, and the like.
 
As was stated in one post, the video game industry shifted its design focus during this latest generation of consoles. Prior to that, games were most often programmed and designed for the PC and then ported over to the consoles. So the limiting factor was the PC hardware available at the time, and features would be scaled back in order to adapt the game to the consoles' inferior hardware. Now consoles are the primary design focus, and games are being designed for aging hardware. Developers rarely program games to scale up and make full use of the resources available on a PC built within the last few years, which would outstrip any of the current consoles.

But there are other areas where PC hardware still is fully taxed. I believe bigi would be referring to 3D modeling, CAD work, and the like.

Even though it isn't programmed as well as it could be, that doesn't mean it isn't still taxing. It just means it isn't running as fast as it possibly could (or using all the features available). A while(true) loop can be very taxing to the system, even though it isn't using the SSE registers.
 
Part of the issue is that Intel is sitting on the best CPUs right now, and can just sit there and not mess with prices much because AMD has nothing competitive.
 
The ATI Evergreen chips are less than a year old. I wouldn't really expect a successor yet, particularly since it's only in the last month that NVIDIA came out with a truly compelling part.
 
Moore's law says nothing about speed. Besides which, it's just an observation that has happened to hold for a long time.

Excluding Intel, Moore's law isn't going so well 😛

It also appears that Intel has been sitting on their hands with both chipset and CPU SKUs and price drops because of an uncompetitive AMD.

The graphics market has stagnated almost entirely due to issues at TSMC and UMC getting their processes ready for mass production.
 
Part of the issue is that Intel is sitting on the best CPUs right now, and can just sit there and not mess with prices much because AMD has nothing competitive.

I don't know if I would say that. AMD's chips might not be able to go toe to toe w/ Intel, but for their price they are pretty good choices.
 
The thing that boggles me is that prices really have stagnated. If I went back in time to last November and built the PC I have now again, it'd have been cheaper then.
 
Some parts have actually gone up in price. Things have been kind of wonky for a few years now. Hell, most of my previous upgrades have been pointless.
 