Linus Torvalds: Too many cores = too much BS

Dec 30, 2004
Or just make it competitive?

For the rest of us, we had a great-performing CPU for YEARS vs. a slower one. Nothing against the FX-8xxx series, but trying to justify a past purchase with UPCOMING software is pretty sad.

You should buy for what you need, not what 'might' come out later. That's foolish for any component IMHO.

It's competitive right now. It gives you >45 fps in all the games that matter, 60 in most, and is only going to pull ahead of the i3s with concurrency, just like it handily beats the i3s at Handbrake encoding.

That's how I did it with my E8400. Guess how much those run you on eBay?

Yeah, I got 7x that for my Phenom II X4 965.

And better game performance, of course.
 
Dec 30, 2004
So you're going to play the "I'll just stick my head in the sand and assume everyone's lazy, because it's really much easier than anyone else thinks it is" card. Great. It can't be made as limited as pathfinding, for example, can be. Each frame there may be many iterations of decisions for every actor, and those will need to vary throughout development, possibly well into the support portion of the life cycle (meaning they may not be reducible to a small number of deterministic paths up front, like pathfinding).
Yeah, basically. I never thought anyone would be able to make good use of the Cell (the SPEs in particular, but also the gimped single PPE), and then we got The Last of Us. UE3 is a great engine, BF4 appears to make substantial use of >4 cores, etc.

Pathfinding is a poor example. A 4-fold performance increase isn't going to get you much better results no matter how good the algorithm, and poor pathfinding isn't what keeps me from believing I'm playing a bot.

How many games have had faces with skin that moves over the bone and muscle structure, without stretching like it's a rubber mask? I don't disagree that Valve did it as well as anyone has to date, but it only works right for young and/or female and/or hairless-alien faces. Adolescent or older male faces I still haven't seen done well, outside of TV/movies. It's 10x worse if it's a male face with dimples, too.

And they pulled off what they did with lowly 2 GHz dual-core processors. Pulling off the older faces is more art than math IMO, just like the younger faces.
 
Dec 30, 2004
Research institutions have loads of smart people who work for peanuts (PhD post-docs). Still nothing in wide usage other than Mach.

But team size doesn't really have much to do with it. For example, the NetBSD and OpenBSD teams are significantly smaller than the number of people working on Linux, yet they were able to produce a robust and production-ready OS.

So what's the problem?

Also, which has the greatest theoretical performance if architected properly?
 

cbn

Lifer
Mar 27, 2009
In the previous post here, I mentioned Intel adding iGPU rather than CPU cores on the mainstream dies.

But what if, instead of adding more die size to the mainstream processors, Intel decided to keep the processor quad-core and actually reduce the iGPU and total die size?

I bet most people (for all practical purposes) wouldn't even notice the difference in performance.

And the smaller iGPU would give Intel much greater freedom to lower prices in the future if necessary.

P.S. The same goes for AMD. I really hope they decrease iGPU size as well. In fact, for AMD (on desktop) the iGPU problem appears to be more serious, in part because AMD does not have the node advantage and generous transistor budget of Intel. For AMD mobile, my opinion is a little different, because I can see some benefit to the integration in a gaming laptop, etc.
 

Lepton87

Platinum Member
Jul 28, 2009
In the previous post here, I mentioned Intel adding iGPU rather than CPU cores on the mainstream dies.

But what if, instead of adding more die size to the mainstream processors, Intel decided to keep the processor quad-core and actually reduce the iGPU and total die size?

I bet nobody (for all practical purposes) would even notice the difference.

Because Intel cheaped out on solder and used TIM instead, reducing the amount of dark silicon would surely make thermals worse, and that would be bound to be noticed by overclockers. The average Joe? Not so much.
 

cbn

Lifer
Mar 27, 2009
Because Intel cheaped out on solder and used TIM instead, reducing the amount of dark silicon would surely make thermals worse, and that would be bound to be noticed by overclockers. The average Joe? Not so much.

I thought about the benefit of the dark silicon, but even with the iGPU active I'd imagine maximum clocks are not affected that much under typical air cooling conditions.

In fact, even with just the stock cooler, my G3258 doesn't overclock that much worse with the iGPU active vs. it being dark silicon.
 

Carson Dyle

Diamond Member
Jul 2, 2012
What happened to multitasking, people? There are infinite combinations of real-world multitasking needs that quad cores do not satisfy, let alone dual cores.

My wife opens like 15 browser tabs; in those tabs you have Adobe Reader and Flash playback decoding video and audio.

Your wife is an amazing woman if she can listen to 15 audio and video streams at once. I have about 50 browser tabs open right now and my attention is focused on exactly one. There's very little, if any, processing happening in the other 49.

People "multitask" by duplexing, not parallel processing.
 

AnandThenMan

Diamond Member
Nov 11, 2004
People "multitask" by duplexing, not parallel processing.
People don't all use their systems the same way. For many, an i3 is fine; for others, it completely chokes when you have a bunch of stuff going on at once. The i3 is an absolute sloth when using a bunch of applications, a VM, an audio player, etc. It's unbearable.
 

ninaholic37

Golden Member
Apr 13, 2012
What happened to multitasking, people? There are infinite combinations of real-world multitasking needs that quad cores do not satisfy, let alone dual cores.

My wife opens like 15 browser tabs; in those tabs you have Adobe Reader and Flash playback decoding video and audio. At the same time she uses Skype, such a heavy program for its intended usage, plus Thunderbird for e-mail. Then, based on the information in the browser and e-mails, add ArcGIS and MATLAB to do the actual work. The Core i7 QM-based laptop workstation just sweats at those workloads. Add to that any background virus checks, Windows updates, or unzipping files. Quad-core is simply too little.

I'm not going to be talking about my needs, since I do things that almost nobody does on their PC, but I could use 16 cores, easy.

Let's discuss a gamer who wants to stream on twitch.tv and chat with his viewers. There we go again: tons of browser tabs running JavaScript or whatever, Skype for communication, the actual game itself (plus the need to sustain a minimum of 60 fps), the streaming software that captures and encodes in real time, plus whatever background tasks are necessary. You think a quad core is enough? No, it is not.

God forbid anyone ever needs to use Virtual Machines.

The point is, if a freaking video-game needs 6 cores to work as intended, then I need another 2 cores to have a processor that is reasonably utilized.

And I bet, if an 8-core 3 GHz Intel i7 cost $300, using something like cheap triple-channel DDR3 and a cheap platform similar to Z97, everyone on these forums would be using the 8-core instead of the 4 GHz 4-core i3. Then those quad-core flagship laptops would seem pretty weak, wouldn't they?

And this is where we have to blame the lack of competition, which caused Intel to grow complacent in the desktop segment.
I have a single core and it works fine. Your wife sounds very disorganized.
 

lyssword

Diamond Member
Dec 15, 2005
5,630
25
91
[Benchmark charts: skyrim_1920n.png, w2_1920n.png, sc2_1920n.png, tw_1920n.png, fsx_1920n.png]


The 4.7 GHz FX-6300 beats the 4.7 GHz FX-4300 in every game.

Sometimes the advantage is not much, but other times the advantage is substantial.

One thing to keep in mind, though (as Enigmoid pointed out earlier), is that the FX-6300 will have less of a module penalty in lightly threaded games (owing to the fact that it has one more module).

I love how the OC'd G3258 (which I have) keeps up with the OC'd 6300 in a lot of games (even AutoCAD), and even leads in quite a few. And yet in the "death of dual core" thread people are betting their house that the FX-6300 absolutely destroys the G3258.
 

cbn

Lifer
Mar 27, 2009
I love how the OC'd G3258 (which I have) keeps up with the OC'd 6300 in a lot of games (even AutoCAD), and even leads in quite a few. And yet in the "death of dual core" thread people are betting their house that the FX-6300 absolutely destroys the G3258.

Looking back at those games in posts #90 and #91:

The 4.7 GHz G3258 beats the 4.7 GHz FX-6300 in 8 out of 15 games.

The 4.7 GHz G3258 beats the 4.7 GHz FX-4300 in 12 out of 15 games.

The 4.7 GHz G3258 beats the 4.5 GHz Athlon X4 860K in 13 out of 15 games.

With that mentioned, I feel the FX-6300 is really not that bad, because even though it only has 7 wins out of 15 against the G3258, some of its wins are fairly substantial. (Of course, the G3258 has some substantial wins as well.)

Pertaining to the AMD quad cores (FX-4300 / Athlon X4 860K), in general I feel those processors are very lackluster (although still very workable). Sure, they can win sometimes, but not like the FX-6300.

Now if only AMD could get the price lower.
 

Cerb

Elite Member
Aug 26, 2000
With my native language being English and all...are those merely average FPS charts, as they appear to be?
 

PPB

Golden Member
Jul 5, 2013
The G3258 has been reported to be very lackluster in the min frametimes/smoothness area. Avg FPS can't tell the whole story nowadays, I guess.
 

velis

Senior member
Jul 28, 2005
Linus is right in saying that some problems are not parallelisable. He is not, however, right in saying that scaling beyond 4 cores doesn't happen. That's a made-up number that may fit today's situation, but may not fit tomorrow's. It's the same as saying "one core is enough" or "4096 cores are enough". It all depends on the problem being solved. When dual cores were being introduced, many review sites simply concluded that no game scaled to a second core...

Parallelisation will always have problems with synchronisation, but synchronisation itself is also a parallelisable problem, again to the limit of the problem being solved.

So, ultimately, more cores is good, but cores should remain powerful for problems that can't be parallelised.
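
To put rough numbers on the "depends on the problem" point, here is a quick Amdahl's-law sketch (the serial fractions below are made-up examples, not measurements of any real workload):

```c
/* Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n), where s is the serial
 * fraction of the work and n is the number of cores. The fractions below
 * are illustrative examples only. */
#include <stdio.h>

static double amdahl(double serial_fraction, int cores)
{
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
}

int main(void)
{
    const double serial[] = { 0.05, 0.25, 0.50 };
    const int cores[] = { 2, 4, 8, 16 };

    for (size_t s = 0; s < sizeof serial / sizeof serial[0]; s++) {
        printf("serial %2.0f%%:", serial[s] * 100.0);
        for (size_t c = 0; c < sizeof cores / sizeof cores[0]; c++)
            printf("  %2d cores = %.2fx", cores[c], amdahl(serial[s], cores[c]));
        putchar('\n');
    }
    return 0;
}
```

With 25% of the work stuck in serial code, 16 cores only buy about a 3.4x speedup; with 5% serial, the same 16 cores manage roughly 9x. Which row you live in is exactly the "problem being solved".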

I myself like the current development of things: you have super-powerful CPU cores handling mostly serial/complex problems and rapidly evolving GPU compute to help with parallelisable problems. It's not there yet, but it will be in a few years.

Since GPUs are also being integrated with CPUs, that will also help reduce latencies, making this a reality that much sooner. So everyone will then have their dream solution: fast CPU cores for Linus, many GPU cores for (various forms of) content processing.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
Check out this thread count:

[Screenshot: MWSnap_2015_01_07_08_21_53.jpg showing the process and thread counts]


This is a program I wrote myself, designed to hammer the crap out of an Ethernet port. My program spawns over 1000 processes and 3000+ threads. By all rights this should run faster on an i7-920 than on an overclocked G3258, right? But it doesn't. Probably because each thread only requires a few hundred thousand CPU cycles. It's mostly I/O.
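
As a toy sketch of why that happens (my own illustration, not the actual program): each worker below spends almost all of its time blocked on what stands in for I/O and burns only a brief burst of cycles per wakeup, so the aggregate CPU demand stays well under one core and extra cores go unused.

```c
/* Toy illustration (not the original program): many threads, almost all I/O wait.
 * nanosleep() stands in for a blocking send/recv on a socket; the small loop is
 * the short burst of real work per wakeup. At a fraction-of-a-percent duty cycle
 * per thread, 256 threads together need well under one core, so an i7-920 and an
 * overclocked dual core finish in about the same wall time.
 * Build: gcc -O2 -pthread io_bound.c */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define THREADS    256
#define ITERATIONS 100

static void *worker(void *arg)
{
    (void)arg;
    volatile unsigned long sink = 0;
    struct timespec io_wait = { .tv_sec = 0, .tv_nsec = 50 * 1000 * 1000 }; /* 50 ms "I/O" */

    for (int i = 0; i < ITERATIONS; i++) {
        nanosleep(&io_wait, NULL);          /* blocked: needs no core at all */
        for (int j = 0; j < 100000; j++)    /* brief burst of CPU work */
            sink += j;
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[THREADS];

    for (int i = 0; i < THREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(tid[i], NULL);

    puts("done: wall time is dominated by I/O waits, not core count");
    return 0;
}
```

Wall time ends up pinned near ITERATIONS x 50 ms regardless of core count; only when the per-wakeup work grows, or the waits shrink (as the next post argues they are), do extra cores start to matter.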
 

ashetos

Senior member
Jul 23, 2013
Check out this thread count:

[Screenshot: MWSnap_2015_01_07_08_21_53.jpg showing the process and thread counts]


This is a program I wrote myself, designed to hammer the crap out of an Ethernet port. My program spawns over 1000 processes and 3000+ threads. By all rights this should run faster on an i7-920 than on an overclocked G3258, right? But it doesn't. Probably because each thread only requires a few hundred thousand CPU cycles. It's mostly I/O.

You are right and wrong at the same time.

I have also written storage and network drivers, and what you don't know is that I/O is getting faster, fast.

Non-volatile memory is coming, with huge throughput over PCI-E interfaces. CPU cores are beginning to be the bottleneck instead of I/O.

My tests over 10 Gigabit Ethernet using the traditional TCP/IP stack destroy the CPU cores even if you are not doing particularly intelligent stuff. If you add intelligence, there is no CPU left for the actual applications.

The trend is that I/O is closing the gap with CPU and memory, and we need more cores to handle the increased loads.
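
A back-of-the-envelope calculation (my assumptions: full 1500-byte MTU frames and a single 3 GHz core; real per-packet costs depend on the stack and on NIC offloads) shows how thin the per-packet cycle budget gets at 10 Gb/s:

```c
/* Rough per-packet cycle budget at line-rate 10 GbE. Assumes 1500-byte frames
 * and one 3 GHz core; smaller frames or extra per-packet "intelligence" make
 * the budget far tighter. */
#include <stdio.h>

int main(void)
{
    const double link_bps   = 10e9;          /* 10 Gb/s link          */
    const double frame_bits = 1500.0 * 8.0;  /* 1500-byte MTU frames  */
    const double core_hz    = 3e9;           /* one 3 GHz core        */

    double packets_per_sec = link_bps / frame_bits;      /* ~833,000 pkt/s */
    double cycles_per_pkt  = core_hz / packets_per_sec;  /* ~3,600 cycles  */

    printf("%.0f packets/s leaves ~%.0f cycles per packet on one core\n",
           packets_per_sec, cycles_per_pkt);
    return 0;
}
```

A traditional TCP/IP stack traversal plus a copy can easily eat a few thousand cycles, and at 64-byte packets the budget drops to roughly 150 cycles per packet, which is why more cores (or smarter NICs) end up carrying the load.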
 

III-V

Senior member
Oct 12, 2014
Linus is right in saying that some problems are not parallelisable. He is not, however, right in saying that scaling beyond 4 cores doesn't happen. That's a made-up number that may fit today's situation, but may not fit tomorrow's. It's the same as saying "one core is enough" or "4096 cores are enough". It all depends on the problem being solved. When dual cores were being introduced, many review sites simply concluded that no game scaled to a second core...
Linus's argument was not "more cores are never useful." It's that they have a cost, directly in the form of die size and transistor count -- in most cases, for most people, it is better to spend those transistors elsewhere, or simply omit them and save on cost.

I don't know why his argument is being so frequently, so tremendously misunderstood. He's not trying to take your cores away... for those of you that actually make use of them.
 

Ajay

Lifer
Jan 8, 2001
Linus's argument was not "more cores are never useful." It's that they have a cost, directly in the form of die size and transistor count -- in most cases, for most people, it is better to spend those transistors elsewhere, or simply omit them and save on cost.

I don't know why his argument is being so frequently, so tremendously misunderstood. He's not trying to take your cores away... for those of you that actually make use of them.

Well, Linus tends to state things in an inflammatory, 'my way or the highway' manner. That throws a lot of people off. I thought about contributing to Linux development years ago, but Linus was such a bully on the mailing list that I decided I didn't need the potential aggravation.

Once you get past the loud rhetoric, he is usually being sensible.
 
Feb 25, 2011
Once you get past the loud rhetoric, he is usually being sensible.

There's an ROI question too - do I hire a bunch of developers and parallelize this legacy codebase, or do I wait for Intel to release faster CPUs?
 

Lepton87

Platinum Member
Jul 28, 2009
There's an ROI question too - do I hire a bunch of developers and parallelize this legacy codebase, or do I wait for Intel to release faster CPUs?

Relying on advancement in single-threaded performance is not a wise bet nowadays. It has been advancing really slowly ever since SB, and how many years ago was that? Almost 4, and during that time ST performance has advanced only by about 20%, or even less with OC, because top CPUs nowadays have far less headroom than SB did. That's for mainstream CPUs, but how about the E series?
http://anandtech.com/bench/product/552?vs=1316

The improvement here is really pathetic.
 

TuxDave

Lifer
Oct 8, 2002
Well, Linus tends to state things in an inflammatory, 'my way or the highway' manner. That throws a lot of people off. I thought about contributing to Linux development years ago, but Linus was such a bully on the mailing list that I decided I didn't need the potential aggravation.

Once you get past the loud rhetoric, he is usually being sensible.

I saw some email exchanges from him and they mostly went along the lines of:

"You're an idiot and this is why you're an idiot..."
<bunch of technical facts>
"So yes, you're an idiot."
 

Lepton87

Platinum Member
Jul 28, 2009
I saw some email exchanges from him and they mostly went along the lines of:

"You're an idiot and this is why you're an idiot..."
<bunch of technical facts>
"So yes, you're an idiot."

Seems like a lovely dude. He reminds me of Steve Jobs.