I imagine we'll be seeing 8, 16, 32, 64, etc. core CPUs in the future, as clock speed can't really increase that much more... or can it, with smaller and smaller processes?
Originally posted by: heyheybooboo
Too bad the overwhelming majority of software can only run a single major program thread on a core. That hasn't changed in 10 years (much less the last 2). It's easier to optimize code for specific instruction sets (like SSE4/SSE4a) than to completely rewrite a program's code for multicore thread parallelism.
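To illustrate what "optimizing for a specific instruction set" means in practice, here's a minimal sketch (hypothetical function and data, but standard SSE intrinsics) that processes four floats per instruction on a single core:

```cpp
#include <xmmintrin.h>  // SSE intrinsics

// Adds two float arrays element-wise, four elements per SSE instruction.
// Hypothetical example: this speeds up one thread on one core;
// it does not create or require any additional threads.
void add_arrays_sse(const float* a, const float* b, float* out, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);              // load 4 floats (unaligned)
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));   // 4 additions at once
    }
    for (; i < n; ++i)                                // scalar tail
        out[i] = a[i] + b[i];
}
```

A change like this stays inside one thread, which is why it's a much cheaper retrofit than restructuring a whole program around multiple threads.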
Originally posted by: Shortass
Originally posted by: heyheybooboo
Too bad the overwhelming majority of software can only run a single major program thread on a core. That hasn't changed in 10 years (much less the last 2). It's easier to optimize code for specific instruction sets (like SSE4/SSE4a) than to completely rewrite a program's code for multicore thread parallelism.
For now. This will likely change sooner than later.
Not trying to be sarcastic, I'm really curious. Why would you say it's not necessary if processes push a single core to 100% while my other three cores do nothing? Is making apps multithreaded really that difficult? Why can't the OS split tasks across different cores?
Originally posted by: heyheybooboo
Originally posted by: Shortass
Originally posted by: heyheybooboo
Too bad the overwhelming majority of software can only run a single major program thread on a core. That hasn't changed in 10 years (much less the last 2). It's easier to optimize code for specific instruction sets (like SSE4/SSE4a) than to completely rewrite a program's code for multicore thread parallelism.
For now. This will likely change sooner than later.
You are deluding yourself if you think so.
There is no compelling reason to do it. It's not necessary. There is no specific need for it. The overwhelming state of your CPU is 'Idle'.
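For what it's worth, the answer to "why can't the OS split tasks to different cores?" is that the split has to be written into the program: the OS only schedules threads it is handed. A minimal sketch of doing that split explicitly (modern C++, so anachronistic for this thread, and the function is hypothetical):

```cpp
#include <algorithm>
#include <numeric>
#include <thread>
#include <vector>

// Sums a vector by giving each hardware thread its own slice.
// The OS can spread these threads across cores only because the
// program created them; a plain single-threaded loop stays on one core.
double parallel_sum(const std::vector<double>& data) {
    unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(n_threads, 0.0);
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / n_threads;

    for (unsigned t = 0; t < n_threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == n_threads) ? data.size() : begin + chunk;
        workers.emplace_back([&, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}
```

Even a simple reduction like this needs the data split, the per-thread partials, and the final merge spelled out by hand; for code with shared state it gets much harder, which is a big part of why so much desktop software stays single-threaded.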
Originally posted by: DSF
In terms of silicon we're going to hit the limit pretty soon as far as process size goes.
Originally posted by: Foxery
Also note that Moore's Law is sometimes misquoted as "CPUs double in speed" every 2 years, but it's actually "CPUs double in transistors."
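As a rough worked example of the transistor version (numbers approximate): doubling every two years means N(t) ≈ N0 × 2^(t/2). Starting from the ~42 million transistors of a 2000-era Pentium 4, eight years of doubling projects to roughly 42M × 2^4 ≈ 670 million, which is in the right ballpark for 2008 desktop quad-cores; clock speeds over that same stretch only went from about 1.5GHz to about 3GHz, nowhere near a 16x increase.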
Originally posted by: Extelleron
Increasing the number of cores is the way the industry is heading. How far that will continue is the question. Increasing cores is a good (and power efficient) way to improve performance, but as you get above 4-8 processing cores scaling starts to get inefficient even in applications with solid multicore support.
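The post doesn't name it, but the scaling limit being described is essentially Amdahl's law: if a fraction p of the work can run in parallel, the best-case speedup on N cores is 1 / ((1 - p) + p/N). A quick worked example with p = 90% (a fairly well-threaded app): 4 cores give about 1 / (0.1 + 0.9/4) ≈ 3.1x, 8 cores about 4.7x, and 16 cores only about 6.4x; each doubling of cores buys less, which is the inefficiency above 4-8 cores being described here.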
Originally posted by: Extelleron
Eventually the increase in cores will become the same thing as the increase in frequency.... eventually it will be unsustainable and inefficient when it comes to power usage.
Originally posted by: Extelleron
What we might see in 2010 with Sandy Bridge or perhaps in the next new architecture in 2012 is a heterogeneous CPU where you see several large OoO cores and a larger number of simple in-order cores, like the Cell processor.
Originally posted by: Extelleron
Increasing cores is a good (and power efficient) way to improve performance, but as you get above 4-8 processing cores scaling starts to get inefficient even in applications with solid multicore support. .... eventually it will be unsustainable and inefficient when it comes to power usage.
Originally posted by: heyheybooboo
There is no compelling reason to do it. It's not necessary. There is no specific need for it. The overwhelming state of your cpu is 'Idle'.
Originally posted by: Idontcare
Originally posted by: Extelleron
Increasing the number of cores is the way the industry is heading. How far that will continue is the question. Increasing cores is a good (and power efficient) way to improve performance, but as you get above 4-8 processing cores scaling starts to get inefficient even in applications with solid multicore support.
In my simplistic view it is no different than the evolution of L2 and L3 cache designs from off-die to on-package (MCM) to on-die... which, once accomplished, became an iterative march to larger and larger caches.
It's always a tradeoff of cache size versus cost and power consumption, and it's no different with the size of the cores, the complexity of the cores, and the power consumption of the cores (where the evolution was multi-socket motherboards, MCM dies on package, on-die multicore, etc.).
Originally posted by: Extelleron
Eventually the increase in cores will become the same thing as the increase in frequency.... eventually it will be unsustainable and inefficient when it comes to power usage.
This is the part where I don't follow the logic. Core scaling is no less sustainable than clockspeed ramping: at any given time, if the design is intentionally crazy for the process it's built on, then sure, it will be unsustainable.
An 8GHz Netburst chip was not a problem unless you were silly enough to attempt it on a 90nm node; at 32nm it would probably be just fine. Likewise, you could argue that a "native" quad-core chip is proper for the 45nm node and beyond but ludicrous for 65nm and earlier.
Sustainability and efficiency come down to implementation and timing, not some inherent fundamental limitation in CMOS itself (unless you are talking about clocking chips in the THz region, where silicon has fundamental physics-based limitations).
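For context on the power half of this exchange: dynamic CMOS power scales roughly as P ≈ C·V²·f. As a rough worked example, doubling frequency usually also requires extra voltage, so if V has to rise about 20%, power goes up about 2 × 1.2² ≈ 2.9x; adding a second identical core at the same clock instead roughly doubles power while, for parallel work, doubling throughput. That's the basic reason core counts scale power more gracefully than clock speed, right up until the software can't use the extra cores.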
Originally posted by: Extelleron
What we might see in 2010 with Sandy Bridge or perhaps in the next new architecture in 2012 is a heterogeneous CPU where you see several large OoO cores and a larger number of simple in-order cores, like the Cell processor.
Being an old-school Beowulf cluster builder, I agree with the concept: it is superior to have your serialized code run on a beefier, more complex core while the parallelized code (the multi-threaded portions) is farmed out to more numerous but simpler cores.
However, the complexity saved at the hardware level is merely transferred to the software. The software must become complex enough to manage where threads are allocated, migrated, and spawned as hardware resources are utilized.
In Beowulf applications (such as Roadrunner, Blue Gene, and just about every HPC system out there) this is an accepted part of the system... applications are intentionally coded to manage their threads.
But will Microsoft do what is needed to make this feasible on the desktop by 2010? I won't hold my breath, not for a second.
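To make the "software has to manage where threads run" point concrete, here is a minimal sketch of pinning threads by hand on Linux (pthread_setaffinity_np is a glibc extension, and the big-core/little-core assignment below is purely hypothetical), the kind of placement decision a heterogeneous chip needs from either the OS or the application:

```cpp
#include <pthread.h>   // pthread_setaffinity_np (Linux/glibc extension)
#include <sched.h>     // cpu_set_t, CPU_ZERO, CPU_SET
#include <thread>

// Pins a std::thread to one logical CPU. Assumes a Linux libstdc++ build
// where native_handle() returns a pthread_t.
void pin_to_cpu(std::thread& t, int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(t.native_handle(), sizeof(set), &set);
}

int main() {
    std::thread serial_work([] { /* latency-sensitive, mostly serial code */ });
    std::thread parallel_work([] { /* one of many throughput workers */ });

    pin_to_cpu(serial_work, 0);    // pretend CPU 0 is the large out-of-order core
    pin_to_cpu(parallel_work, 1);  // pretend CPU 1+ are the simpler in-order cores

    serial_work.join();
    parallel_work.join();
}
```

On a real heterogeneous desktop part this placement would ideally be the scheduler's job rather than every application's, which is exactly the OS-support question raised above.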
Originally posted by: Idontcare
Originally posted by: DSF
In terms of silicon we're going to hit the limit pretty soon as far as process size goes.
I'd argue it's not quite like that. There is a limit to scaling process technology, and it's impractical to do what it takes to scale toward atoms just for the desktop market segment, but the limit we are approaching faster still is the financial one.
The cost of developing successive process technology nodes is exquisitely prohibitive as you go below 45nm. For one, the materials of choice become more and more exotic (as far as the industry is concerned), which means elevated risk, which means elevated costs to quantify and characterize that risk, and so on.
This is why you saw consolidation of R&D efforts in the form of the Crolles Alliance and the IBM Ecosystem (aka the fab club) develop at the 90nm and 65nm nodes. The situation gets even more dire at 45nm and beyond.
Intel has the revenue stream to justify the R&D cost structure necessary to fund 22nm and 16nm node development. But do AMD and the associated IBM Ecosystem? Yes, but not at a cadence of 2 yrs/node... they will be forced to either throw in the towel (a la Texas Instruments) or slow their process technology cadence to something whose annual R&D commitment their revenue stream can justify.
The economic limitations will dominate process technology cadence for everyone but Intel going forward (beyond 45nm) more so than the technology challenges of scaling towards atoms.
That's not to say it isn't a technical challenge; the money is needed to afford the tools that solve those challenges. EUV at $180M per tool is a barrier to entry for developing 16nm process technology for any company whose annual sales volume is <$10B.
MIT: Optical lithography good to 12 nanometers
Optical lithography can be extended to 12 nanometers, according to Massachusetts Institute of Technology researchers who have so far demonstrated 25-nm lines using a new technique called scanning beam interference lithography.
http://www.eetimes.com/news/se...HA?articleID=209400807