Are CPUs advancing faster than software requires?

Page 4 - AnandTech Forums

mikeymikec

Lifer
May 19, 2011
17,675
9,516
136
What OS did Microsoft produce that was bloated? 2000 and XP had a reputation for being lightweight. Vista wasn't really bloated, it was just misunderstood. Anything that can run Windows 7 can run Vista, given the two operating systems are very similar.

I built a machine about a month ago with a Core i5 processor and 4GB RAM and installed Vista on it (the customer has a volume Vista licence that they're still using up). It took a heck of a lot longer to settle down than XP or Win7 does, and that was with just the necessary drivers and Windows updates, nothing else, not even anti-virus.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
In short, no. Maybe yes if all you do is browse the web and type some emails, but games are constantly pushing current HW to its limits. Many modern games won't even run on a single-core CPU, so don't wish for that 10GHz P4.
 

PaGe42

Junior Member
Jun 20, 2012
13
0
0
And in 10 years' time even games won't be able to make full use of the available hardware. Theoretically they could, but it would simply be too expensive to develop. Hardware is improving faster than software: exponential growth beats linear growth. It may take some time, but in the end it always does.
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
In short, no. Maybe yes if all you do is browse the web and type some emails, but games are constantly pushing current HW to its limits. Many modern games won't even run on a single-core CPU, so don't wish for that 10GHz P4.
I wish that were the case for arithmetic power. There are still only a handful of games that use more than 4 threads.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
I wish that were the case for arithmetic power. There are still only a handful of games that use more than 4 threads.

Writing software that scales well to multiple threads is not easy, and some applications lend themselves much better to parallel processing than others. Games in general are not what I'd call embarrassingly parallel algorithms. That's why I want to see new CPUs keep increasing single-threaded performance, not just adding more cores.
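The distinction Munky draws can be made concrete with a small sketch (illustrative Python, not from any post): an embarrassingly parallel workload has fully independent iterations, while a game-style simulation loop carries a serial dependency from one step to the next, so its iterations cannot run concurrently no matter how many cores you have.

```python
from concurrent.futures import ThreadPoolExecutor

# Embarrassingly parallel: each pixel is independent, so iterations can
# be handed to any number of workers in any order. (The point here is
# independence, not raw speed -- a real speedup would need processes or
# a GPU for CPU-bound work.)
def brighten(pixel):
    return min(pixel + 50, 255)

pixels = [0, 100, 200, 250]
with ThreadPoolExecutor() as pool:
    bright = list(pool.map(brighten, pixels))

# NOT embarrassingly parallel: each physics step needs the previous
# step's result, so the loop has an inherent serial dependency.
def step(state, dt=0.1):
    pos, vel = state
    return (pos + vel * dt, vel - 9.8 * dt)

state = (0.0, 10.0)
for _ in range(3):
    state = step(state)  # these iterations cannot run concurrently
```

The first loop parallelises trivially; the second is the shape most game logic takes, which is why games resist multithreading.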
 

Ajay

Lifer
Jan 8, 2001
15,429
7,847
136
Eh, depends on the application. F@H doesn't give me 250K PPD on my machine. When I can get an i7 that'll give me 250K PPD at 25 watts, then I'll take 4, and Intel can slow down development a bit ;)
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
As a developer I would love to push image recognition onto everyone's computers. I could make software that searched through images based on what they contained. But alas, the hardware isn't even remotely fast enough. This is just one example of a problem a computer can solve, but for which the hardware isn't fast enough. Fundamentally, the software you see today is the software that will run on your hardware. Everything else is theoretical, or running on big clusters of machines in a massive lab somewhere. Once you understand this, you'll realise there is no reason why you couldn't run Google's entire algorithm on your local machine, except that your machine isn't fast enough.

Hardware leads software, always. No one builds software that no one can run - it certainly wouldn't sell, because no one could run it.

An unfortunate consequence of the clock speed barrier and the end of single-threaded performance gains is that it's harder to get performance gains at all. Hardware companies are delivering extra performance in ways that, as a software developer, I don't want. I can utilise extra clock speed and IPC, and I can sometimes utilise additional cores, but I can very rarely utilise GPGPU. So while it's "just a software problem", it's a struggle to deal with the complexity of the systems we have built today, and adding yet more complexity hasn't got us to that performance in any but a few circumstances.

I made some code go twice as fast yesterday by making it multithreaded: moving it from a single-threaded algorithm to one that fully uses 6 cores got me a 2x speed-up. It now runs at 1/3 of the old speed on a single core. As a trade-off, that means anyone with fewer than 3 cores gets no benefit from this change - it will actually hinder them. It's less power efficient for the same work and, worse than that, it likely won't see any benefit past 8 cores. Was it a worthwhile change or not?
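BrightCandle's numbers line up neatly with Amdahl's law: a 2x speed-up on 6 cores implies that only about 60% of the work is parallelizable, which caps the achievable speed-up at 2.5x no matter how many cores are added. A quick check (illustrative Python, not from the post):

```python
# Amdahl's law: speed-up on n cores when fraction p of the runtime
# is parallelizable: S(n) = 1 / ((1 - p) + p / n)
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Solving S(6) = 2 for p gives p = 0.6 (60% parallel).
p = 0.6
print(amdahl_speedup(p, 6))      # 2.0
print(amdahl_speedup(p, 10**9))  # approaches 1 / (1 - p) = 2.5
```

This also explains why the change "likely won't see any benefit past 8 cores": with a 40% serial fraction, extra cores quickly run into the 2.5x ceiling.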
 

Puppies04

Diamond Member
Apr 25, 2011
5,909
17
76
As a developer I would love to push image recognition onto everyone's computers. I could make software that searched through images based on what they contained. But alas, the hardware isn't even remotely fast enough. This is just one example of a problem a computer can solve, but for which the hardware isn't fast enough. Fundamentally, the software you see today is the software that will run on your hardware. Everything else is theoretical, or running on big clusters of machines in a massive lab somewhere. Once you understand this, you'll realise there is no reason why you couldn't run Google's entire algorithm on your local machine, except that your machine isn't fast enough.

Hardware leads software, always. No one builds software that no one can run - it certainly wouldn't sell, because no one could run it.

An unfortunate consequence of the clock speed barrier and the end of single-threaded performance gains is that it's harder to get performance gains at all. Hardware companies are delivering extra performance in ways that, as a software developer, I don't want. I can utilise extra clock speed and IPC, and I can sometimes utilise additional cores, but I can very rarely utilise GPGPU. So while it's "just a software problem", it's a struggle to deal with the complexity of the systems we have built today, and adding yet more complexity hasn't got us to that performance in any but a few circumstances.

I made some code go twice as fast yesterday by making it multithreaded: moving it from a single-threaded algorithm to one that fully uses 6 cores got me a 2x speed-up. It now runs at 1/3 of the old speed on a single core. As a trade-off, that means anyone with fewer than 3 cores gets no benefit from this change - it will actually hinder them. It's less power efficient for the same work and, worse than that, it likely won't see any benefit past 8 cores. Was it a worthwhile change or not?

Some very good points here.
 

Ajay

Lifer
Jan 8, 2001
15,429
7,847
136
I made some code go twice as fast yesterday by making it multithreaded: moving it from a single-threaded algorithm to one that fully uses 6 cores got me a 2x speed-up. It now runs at 1/3 of the old speed on a single core. As a trade-off, that means anyone with fewer than 3 cores gets no benefit from this change - it will actually hinder them. It's less power efficient for the same work and, worse than that, it likely won't see any benefit past 8 cores. Was it a worthwhile change or not?

Wow, only a 2x speed-up going from 1 to 6 cores. It must have been difficult to extract parallelism from that algorithm. Is there too much shared data blocking execution due to spin locks or the like?

I've been playing around with C++ AMP in the VS2012 RC. So far it looks promising. IIRC, image processing has a high degree of inherent parallelism - I would think using a GPU to execute this kind of code would give a huge speed-up. The problem, of course, is that most people just have built-in Intel graphics. Starting with Haswell, we should see some large gains from using the GPU and AVX2 for this sort of code. Perhaps in 5 years your dream may be realisable, with a sufficiently large market for profitable sales.
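The "inherent parallelism" of image processing comes from the fact that tiles (or pixels) can be processed with no shared state at all, which is exactly the shape of work that maps well onto GPU cores or C++ AMP's `parallel_for_each`. A plain-Python stand-in for the pattern (hypothetical sketch, not C++ AMP; `threshold_tile` and the tile layout are invented for illustration):

```python
# Per-tile thresholding: every tile is independent, so the work can be
# farmed out to a process pool with no locks or coordination -- the same
# data-parallel structure a GPU kernel would exploit per pixel.
from concurrent.futures import ProcessPoolExecutor

def threshold_tile(tile, cutoff=128):
    """Binarize one tile: pixel -> 255 if >= cutoff, else 0."""
    return [[255 if px >= cutoff else 0 for px in row] for row in tile]

def threshold_image(tiles, workers=4):
    # No shared mutable state between tiles: embarrassingly parallel.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(threshold_tile, tiles))

if __name__ == "__main__":
    tiles = [[[10, 200], [130, 90]],
             [[255, 0], [128, 127]]]
    print(threshold_image(tiles))
```

Because each tile needs no data from its neighbours, the speed-up here scales with worker count (up to memory bandwidth), unlike the lock-contended case discussed above.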
 

PaGe42

Junior Member
Jun 20, 2012
13
0
0
As a developer I would love to push image recognition onto everyone's computers. I could make software that searched through images based on what they contained. But alas, the hardware isn't even remotely fast enough. This is just one example of a problem a computer can solve, but for which the hardware isn't fast enough.

Actually, the hardware can easily run the software. It is the amount of data that causes problems, and there is so much data to be found that hardware will never be able to catch up with it. But that is an entirely different point.
 

videogames101

Diamond Member
Aug 24, 2005
6,777
19
81
Look at the AT homepage review of the new MacBook Pro. Even the GPU and CPU combined can't drive Apple's new display at any frame rate that could be called smooth.