
i5/i7 Difference - Hyperthreading?

flexy

Diamond Member
Sort of a "noob" question, but I am somewhat confused.

I am talking about, say, an i5 4690K vs. an i7 4790K.

BOTH CPUs have four cores, right?

The main difference between i5 and i7 is "Hyper-Threading", which sort of means "the i7 can make two logical CPUs from one core". I read that the i7 would present itself as having 8 threads, as compared to the i5 which "only" has 4 threads, right?

In some recent reviews it seemed to me that people are heavily pushing the i7, since it would be far superior to the i5 when it comes to applications that use multithreading/multiple cores, like video editing etc., and SOME games. (In those comparisons it sounded almost as if the i5 would not be able to do multithreading at all.)

But isn't the difference really merely that the i7 has 8 threads, yet an i5 STILL has 4 real CORES and 4 threads... so the difference is solely in the numbers? (Yes, obviously 8 is "better" than 4, for what it's worth.) And then of course the criterion for whether to get an i5 or an i7 is how many apps/games would actually utilize EIGHT threads? (Obviously not too many.)

Sort of torn, seeing that the i7 costs €80 more, but then realizing it's definitely NOT the case that an i5 would "suck" at multi-threading... I mean, it still has 4 cores. Am I seeing something wrong there?

Short: Would a "normal" user who does some gaming, some typical work (MS Office, Excel, PaintShop), web browsing, etc. ever see a real performance advantage from an i7 over an i5?
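The cores-vs-threads question above can be put into a rough model. This is a hypothetical back-of-the-envelope sketch, not a benchmark: the ~25% Hyper-Threading uplift and the use of base clocks are assumptions chosen purely for illustration.

```python
# Hypothetical throughput model for 4-core i5 vs. 4-core i7 with HT.
# ASSUMPTIONS (not measured): HT adds ~25% throughput when all threads
# are busy; throughput scales linearly with cores x clock.

def relative_throughput(cores: int, ht: bool, clock_ghz: float,
                        ht_uplift: float = 0.25) -> float:
    """Crude throughput score: cores x clock, plus an HT bonus."""
    score = cores * clock_ghz
    if ht:
        score *= 1.0 + ht_uplift
    return score

i5 = relative_throughput(cores=4, ht=False, clock_ghz=3.5)  # i5-4690K base clock
i7 = relative_throughput(cores=4, ht=True, clock_ghz=4.0)   # i7-4790K base clock

print(f"i7/i5 ratio: {i7 / i5:.2f}")  # -> 1.43
```

Even under these generous assumptions the i7 comes out around 1.4x the i5 in a fully threaded workload, not 2x: the "8 threads" never behave like 8 real cores.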
 
What makes the i7 more attractive in this case is the 4GHz base clock and 4.4GHz turbo. Unlike previous generations, where the difference was 100MHz, there is now a 500MHz difference.
 
14% higher clock speed, 33% larger L3 cache, and Hyper-Threading, which increases utilization of the cores in multithreaded workloads.

But for a "normal" user? Nah, you probably won't notice the difference. Just go with the i5.
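For reference, the percentages quoted above can be sanity-checked against the stock specs (assuming i5-4690K: 3.5GHz base, 6MB L3; i7-4790K: 4.0GHz base, 8MB L3):

```python
# Sanity check of the "14% clock / 33% cache" figures, assuming stock
# specs: i5-4690K = 3.5GHz base, 6MB L3; i7-4790K = 4.0GHz base, 8MB L3.

def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new / old - 1.0) * 100.0

clock = pct_increase(3.5, 4.0)  # base clock delta
cache = pct_increase(6.0, 8.0)  # L3 cache delta

print(f"clock: +{clock:.0f}%, L3 cache: +{cache:.0f}%")  # -> clock: +14%, L3 cache: +33%
```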
 
What makes the i7 more attractive in this case is the 4GHz base clock and 4.4GHz turbo. Unlike previous generations, where the difference was 100MHz, there is now a 500MHz difference.

Suppose the first gen was just to test the waters at 100MHz?

I remember scratching my head trying to figure out what the point of another 100MHz was.
 
Suppose the first gen was just to test the waters at 100MHz?

I remember scratching my head trying to figure out what the point of another 100MHz was.

The difference for the last 5 years has been 100MHz: speed bin + HT. With DC (Devil's Canyon) they changed that, so it's 500MHz + HT. The non-K is still 100MHz + HT.

It's quite generous with the K model. Premium parts cost premium money. But you could argue that the 4790K, with its +500MHz, is rather cheap in that sense.
 
I wouldn't bother with a 4790K for gaming. Might as well spend a little bit more and get a 5820K then mildly OC it to 4GHz. 6 cores would be a better choice in 2015.
 
You can overclock the 4690K to similar speeds as the 4790K. It requires a Z motherboard, of course. Considering the high stock clocks and limited additional OC headroom, I would strongly consider just getting a 4790K and running it stock. You can save money on the motherboard and should get by with the stock cooler.
 
X99 Extreme 4 isn't that expensive. RAM, well, it's the early adopter tax. I'd take 6 cores over a 4790 any day now for gaming.

Yes it is. It's $240. You can get decent Z97 mobos for half of that.

Expensive RAM is still expensive, regardless of 2 more cores on the 5820K. I'd also like to see evidence for '2 more cores are better'.
 
Yes it is. It's $240. You can get decent Z97 mobos for half of that.

Expensive RAM is still expensive, regardless of 2 more cores on the 5820K. I'd also like to see evidence for '2 more cores are better'.

Look at all the recent GameGPU benchmarks. The 5960X is at the top of the tree. Haswell with 6 cores would be next in line if they bothered to test a 5820/5930. Unity and Inquisition can both scale to 6 cores, I'd bet money that GTA V and Witcher 3 will follow. $120 extra for the mobo and $150 or so for the RAM over the lifetime of the build is insignificant.
 
Look at all the recent GameGPU benchmarks. The 5960X is at the top of the tree. Haswell with 6 cores would be next in line if they bothered to test a 5820/5930. Unity and Inquisition can both scale to 6 cores, I'd bet money that GTA V and Witcher 3 will follow. $120 extra for the mobo and $150 or so for the RAM over the lifetime of the build is insignificant.

Hmm. Interesting.
http://www.gamegpu.ru/images/stories/Test_GPU/Action/Assassins_Creed_Unity/test/ac_proz.jpg

http://www.gamegpu.ru/images/stories/Test_GPU/Action/Assassins_Creed_Unity/test/ac_intel.jpg

http://www.gamegpu.ru/images/stories/Test_GPU/Action/Assassins_Creed_Unity/test/ac_amd.jpg
 
I wouldn't bother with a 4790K for gaming. Might as well spend a little bit more and get a 5820K then mildly OC it to 4GHz. 6 cores would be a better choice in 2015.

Right now I see "spending a little bit more" as silly, even with my extremely outdated system. The reason is simply that Skylake is coming "some time soon", and I don't think it's too smart to spend €300+ JUST on a CPU (which I never did in the past) for an architecture which WILL be outdated 6-12 months down the road. A 4690K would make more sense; in fact, right now I am even debating whether I should build an Ivy Bridge system really "cheap" from second-hand parts to tide me over the 12 months till Skylake.
 
Right now I see "spending a little bit more" as silly, even with my extremely outdated system. The reason is simply that Skylake is coming "some time soon", and I don't think it's too smart to spend €300+ JUST on a CPU (which I never did in the past) for an architecture which WILL be outdated 6-12 months down the road. A 4690K would make more sense; in fact, right now I am even debating whether I should build an Ivy Bridge system really "cheap" from second-hand parts to tide me over the 12 months till Skylake.

Would not count on "some time soon" being anytime soon.

Everyone on the planet knows there are only so many more nodes left before it is game over, and only the most foolish of CEOs is going to rush to that end-point full steam ahead.

Haswell Refresh popped up on the roadmap out of thin air and delayed Broadwell by essentially a year, plus there were the delays in 14nm itself. Who is to say we won't suddenly catch wind of a "Broadwell Refresh" for next Xmas season, with Skylake pushed to fall 2016?

Stranger things have happened.
 
Haswell Refresh popped up on the roadmap out of thin air and delayed Broadwell by essentially a year, plus there were the delays in 14nm itself. Who is to say we won't suddenly catch wind of a "Broadwell Refresh" for next Xmas season, with Skylake pushed to fall 2016?

Stranger things have happened.
It didn't pop up on the roadmap out of thin air: the rumor appeared on June 6, 2013. But you're obviously being facetious, since putting Skylake in fall 2016, after all the rumors of Q2 and Intel's statement of H2, must be the worst prediction of 2014!

Everyone on the planet knows there are only so many more nodes left before it is game over, and only the most foolish of CEOs is going to rush to that end-point full steam ahead.

Wrong, wrong, wrong. Some refutations from Intel I could easily find:

Wrong: “We will not take the foot off the [Moore's law] pedal here.” --Brian Krzanich

Wrong: “The mission is to really utilize Moore's Law. We have it. We believe we lead at it. We drive it. We define Moore's Law as a company.” --Brian Krzanich

Wrong: “And you'll have to trust a little bit the 50 year history we have with Moore's Law and that we should be able to keep it going for 51 or 52 years.” --Brian Krzanich, CEO Intel

Wrong: “We are in fact accelerating Moore's Law.” --William Holt

Wrong: http://www.reddit.com/r/IAmA/comments/1ycs5l/hi_reddit_im_brian_krzanich_ceo_of_intel_ask_me/cfjchh8

in my 30 years i think i have seen the forecasted end of Moore's law at least 5 or 6 times... so i tend to be a skeptic when people say it will end.. At any one point we can typically see about 10 years out.. with pretty good clarity in the 3 to 5 years and much less clarity 5 to 10 years.. but so far in that 10 year horizon.. we don't see anything that says it will end in that time frame..


Wrong (the first question!): http://intelstudios.edgesuite.net/im/2013/archive/qa1/archive.html:

William Holt said:
“Well, let me start with saying that I'm not about to start predicting the end, since anybody who's tried that has been wrong. So I'm not going to try that. The other thing that I'd refer back to is, you know, Craig many years ago said when asked this kind of a question is that, yes there's a wall out there.. somewhere, potentially; and he was going to run into it as fast as he could. So we have no intention of slowing down. If we slow down, it will just be because we can't keep up. So we'll see. The goal is to keep pushing that wall out, and that's what we're doing right now, and as far as hitting it, we're not going to slow down because we see it on the horizon.” --William Holt, IM'13

FYI, Intel already knows that it will outperform Moore's Law at 7nm. Running into Moore's Wall full steam ahead is the wisest thing any CEO could ever decide!
 
FYI, Intel already knows that it will outperform Moore's Law at 7nm. Running into Moore's Wall full steam ahead is the wisest thing any CEO could ever decide!

Absolute nonsense. Intel cannot possibly know whether it will outperform Moore's Law until it knows exactly when it will ship 7nm parts. 14nm was delayed; 10nm and 7nm could both face delays. It's a fact of life when performing cutting-edge R&D: you can only estimate launch dates, you can't know. Especially when the launch in question is 4 years away (at the very least).
 
Right now I see "spending a little bit more" as silly, even with my extremely outdated system. The reason is simply that Skylake is coming "some time soon", and I don't think it's too smart to spend €300+ JUST on a CPU (which I never did in the past) for an architecture which WILL be outdated 6-12 months down the road. A 4690K would make more sense; in fact, right now I am even debating whether I should build an Ivy Bridge system really "cheap" from second-hand parts to tide me over the 12 months till Skylake.

Yup, I am looking at the hexacore this way: 20% more performance (and energy consumption) at double the cost. Seems like a tough buy to me.
 
Absolute nonsense. Intel cannot possibly know whether it will outperform Moore's Law until it knows exactly when it will ship 7nm parts. 14nm was delayed; 10nm and 7nm could both face delays. It's a fact of life when performing cutting-edge R&D: you can only estimate launch dates, you can't know. Especially when the launch in question is 4 years away (at the very least).

I meant in terms of cost-per-transistor improvement, not TTM (time to market), but their 2-year beat rate is still possible:

We have done no changes or shift to our 10-nanometer schedule but we won’t really talk about 10-nanometer schedules until next year.

That was in July, BTW. So others might have delays, but apparently not Intel. Same for 450mm:

The 450, let's start with that. We haven't changed. We've said that actually our 450 is similar in the latter half of this decade, right? So, we're still saying that. You're going to see gives and takes on 450 spending.

These are long, drawn-out programs over multiple years. And so I think don't grade the whole program by one shift in when we buy a tool or when we move out some spending, in some cases.

I certainly wouldn't jump on the doom-and-gloom bandwagon because of one data point. FYI, 14nm, which is both denser and more bleeding edge, is still ramping faster than TSMC's shift from 28nm to 20nm. 10nm could also suffer from yield problems, which is obviously possible, but unfortunately Intel is not going to talk about that. Just look at all the quotes I've given, combined with the fact that Skylake did not suffer from Broadwell's monstrous delay (and don't forget how Intel talked about its 14nm ramp being the fastest ever): there's no reason to put Cannonlake anywhere but a year after Skylake.
 
I meant in terms of cost per transistor improvement, not TTM

But Moore's Law has a fundamental time element to it. If the time to market is too long then it doesn't fulfil Moore's Law, plain and simple. Any other "interpretation" is either a misrepresentation or misunderstanding of the fundamental principle.
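The time element can be made concrete with a small sketch: assuming an idealized 2x density gain per node step, stretching the cadence from 2 to 3 years noticeably cuts the annualized improvement. The numbers are illustrative, not any company's actual figures.

```python
# Illustrative: annualized density gain implied by node cadence.
# ASSUMPTION: each node step delivers a full 2x density gain (idealized).

def annualized_gain(step: float, cadence_years: float) -> float:
    """Per-year density multiplier for one `step` every `cadence_years`."""
    return step ** (1.0 / cadence_years)

classic = annualized_gain(2.0, 2.0)    # classic 2-year cadence
stretched = annualized_gain(2.0, 3.0)  # cadence stretched to 3 years

print(f"2-year cadence: {classic:.2f}x/yr, 3-year: {stretched:.2f}x/yr")  # -> 1.41 vs 1.26
```

At a 2-year cadence the implied gain is about 41% per year; at 3 years it drops to about 26% per year, even though each individual node step still "doubles". That is why cadence, not just the per-node jump, matters.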
 
Would not count on "some time soon" being anytime soon.

Everyone on the planet knows there are only so many more nodes left before it is game over, and only the most foolish of CEOs is going to rush to that end-point full steam ahead.

Haswell Refresh popped up on the roadmap out of thin air and delayed Broadwell by essentially a year, plus there were the delays in 14nm itself. Who is to say we won't suddenly catch wind of a "Broadwell Refresh" for next Xmas season, with Skylake pushed to fall 2016?

Stranger things have happened.

Shoosh! You're going to ruin the fun! Hitting this wall seems quite scary, actually.
 
But Moore's Law has a fundamental time element to it. If the time to market is too long then it doesn't fulfil Moore's Law, plain and simple. Any other "interpretation" is either a misrepresentation or misunderstanding of the fundamental principle.

I already addressed the time part of Moore's Law in my reply. BTW, since your reply is quite ad hominem, I'll say that I'd also find it interesting if he would take that outperform-Moore's-Law slide and replace the node names with a time scale.

But clearly, you simply can't argue that Intel is slowing down (because then you'd also have to be consistent and point your finger at every other company). The timing may slip by a few months, but Intel is still moving forward while TSMC is slowing down, and that's what counts. It took TSMC a lot of effort to reduce the cost per 20nm transistor below 28nm levels, and it remains to be seen by how much; we'll also have to see if they can change the flat forecast for 10nm cost per transistor. Even if you moved at 1 node per year, if the cost of a transistor stays flat, that's not advancing Moore's Law; you could just as well have remained at the current node. So while the average timing from 22 to 10 might be longer than from 65 to 32, the accelerated cost-per-transistor reduction might (even if only partly) make up for that.
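The cost-per-transistor point can be sketched numerically. The figures below are made up purely for illustration; the mechanism is simply wafer cost divided by transistors per wafer.

```python
# Illustrative cost-per-transistor arithmetic; all numbers are made up.
# A shrink only advances Moore's Law economically if transistors per
# wafer grow faster than wafer cost does.

def cost_per_transistor(wafer_cost: float, transistors_per_wafer: float) -> float:
    return wafer_cost / transistors_per_wafer

old_node = cost_per_transistor(1.0, 1.0)  # baseline (normalized units)
flat = cost_per_transistor(2.0, 2.0)      # 2x density, 2x wafer cost: no gain
improved = cost_per_transistor(1.3, 2.0)  # 2x density, 1.3x wafer cost: 35% cheaper

print(flat == old_node, improved < old_node)  # -> True True
```

If wafer cost doubles along with density, cost per transistor stays flat and the shrink buys nothing economically: exactly the flat-forecast scenario discussed here.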
 
I already addressed the time part of Moore's Law in my reply. BTW, since your reply is quite ad hominem, I'll say that I'd also find it interesting if he would take that outperform-Moore's-Law slide and replace the node names with a time scale.

But clearly, you simply can't argue that Intel is slowing down (because then you'd also have to be consistent and point your finger at every other company). [...]

I'm not arguing that Intel is going to launch 7nm at a specific time, because I clearly have no idea what is going to happen in the next 4 years. Neither do you. And you know what? Neither does Intel!

I am pointing out that your claims that Intel "knows" that it will outperform Moore's Law are ridiculous. They can certainly predict and hope that they will, but there are always unknown unknowns. Never take anything as a certainty, especially when dealing with risky cutting edge technology.
 