[Eurogamer] i3 6100 review

Aug 11, 2008
10,451
642
126
But this is comparing the FX-8350 and the FX-6300!! How can having one third more cores and at best maybe 10-15% greater clockspeed lead to a 2.5x speedup??

Would be unlikely, but it is possible if there are background tasks using up a lot of resources.
For instance, if other tasks are consuming five cores' worth of resources, a 6-core has one core's worth available while an 8-core has three. Kind of like a linear regression with a huge offset. I am not saying that is happening, in fact it seems quite unlikely, but the scaling might not be linear if other tasks are using up most of the resources of the 6-core.
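The arithmetic in that contention argument can be sketched as a toy calculation. All the numbers here, including the five cores' worth of background load, are illustrative assumptions, not measurements:

```python
# Toy model of the contention argument: if background tasks pin down a
# fixed amount of CPU resources, the *free* resources scale very
# non-linearly between a 6-core and an 8-core part.

def free_cores(total_cores: int, background_load: float) -> float:
    """Cores' worth of resources left for the benchmark after background tasks."""
    return max(total_cores - background_load, 0.0)

background = 5.0  # hypothetical: background tasks eat ~5 cores' worth

fx6300_free = free_cores(6, background)  # 1 core's worth free
fx8350_free = free_cores(8, background)  # 3 cores' worth free

# A 33% core advantage becomes a 3x advantage in free resources.
print(fx8350_free / fx6300_free)  # 3.0
```

Under this (admittedly contrived) assumption, a 3x gap is possible, which is in the ballpark of the 2.5x result being questioned.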

Granted though, the data seems bogus to me too, as well as some of the i3 data.
 

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
That's Haswell. Skylake has a newer IMC and uses a different memory technology. Are you sure I can extrapolate?

Yes, because it's also historically accurate, and since Skylake isn't a radical departure from the previous architecture (CPU or GPU), the likelihood of this being true for the HD 530 is extremely high. So unless HD 530 is designed unlike any other GPU out there, past or present (it isn't), then it's pretty safe to extrapolate.

http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/7

The last chart in that article shows a relatively small difference in performance between DDR3 @ 1866MT/s and DDR4 @ 2133MT/s, with the DDR4 kit having a much higher absolute latency. So I doubt 2133MT/s vs 2666MT/s (at the same absolute latency) would make a significant difference, if any, for an i3, which needs far less bandwidth than an i5 or i7 with more cores and threads to feed.
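To put rough numbers on that comparison, here's a back-of-the-envelope sketch. The CL9/CL15 timings are assumed typical kits of the era, not necessarily the ones AnandTech tested:

```python
# Peak theoretical bandwidth and absolute CAS latency for the
# DDR3-1866 vs DDR4-2133 comparison discussed above.

def bandwidth_gbs(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Peak theoretical bandwidth in GB/s (64-bit bus per channel)."""
    return mt_per_s * bus_bytes * channels / 1000

def latency_ns(cl: int, mt_per_s: int) -> float:
    """Absolute CAS latency in ns: CL cycles at half the transfer rate."""
    return cl / (mt_per_s / 2) * 1000

ddr3_bw = bandwidth_gbs(1866)  # ~29.9 GB/s
ddr4_bw = bandwidth_gbs(2133)  # ~34.1 GB/s
print(ddr4_bw / ddr3_bw)       # ~1.14, i.e. only ~14% more bandwidth

print(latency_ns(9, 1866))   # ~9.6 ns  (DDR3-1866 CL9)
print(latency_ns(15, 2133))  # ~14.1 ns (DDR4-2133 CL15)
```

So with those assumed timings, the DDR4 kit trades a modest bandwidth gain for noticeably worse absolute latency, which fits the small differences shown in the chart.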
 
Last edited:

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,250
3,845
75
I think Ryse is the only major game that has been released that's using CryEngine with no number. But one major unreleased game using it is Star Citizen. I wonder if there's any bandwidth benchmarks on that?
 

DrMrLordX

Lifer
Apr 27, 2000
21,631
10,841
136
Yes because it's also historically accurate and since Skylake isn't a radical departure from the previous architecture (CPU or GPU)

I'm gonna have to disagree, at least on the basis that memory speed seems to be making a fairly big difference for Skylake right now. It was almost universally reported that memory speed/latency made very little difference in Haswell performance. So, obviously there's something different between the two architectures in how they respond to memory bandwidth and latency, at least on the CPU side.

On top of all that, the data being discussed in this thread seems a bit odd, so additional tests for clarity would be nice.
 

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
I'm gonna have to disagree, at least on the basis that memory speed seems to be making a fairly big difference for Skylake right now. It was almost universally reported that memory speed/latency made very little difference in Haswell performance. So, obviously there's something different between the two architectures in how they respond to memory bandwidth and latency, at least on the CPU side.

On top of all that, the data being discussed in this thread seems a bit odd, so additional tests for clarity would be nice.

Memory performance was actually pretty important for Haswell; most recommended grabbing low-latency RAM running at 1600MT/s or 1866MT/s at a minimum.

I don't see how the data is odd. If you are referring to the difference between the Haswell i3 and the Skylake i3 in Ryse they made a comment in reference to those results in the article.

In fact the anandtech article I linked regarding Haswell memory scaling proves this.
 
Last edited:

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
Memory performance was actually pretty important for Haswell; most recommended grabbing low-latency RAM running at 1600MT/s or 1866MT/s at a minimum.

I don't see how the data is odd. If you are referring to the difference between the Haswell i3 and the Skylake i3 in Ryse they made a comment in reference to those results in the article.

In fact the anandtech article I linked regarding Haswell memory scaling proves this.

Yes, but by Haswell, and even before with Ivy Bridge and Sandy Bridge, I think most people in the know considered 1600/1866 to be standard speeds. Hell, even my i5-750 had RAM at both these speeds at one point or another.

When high-speed DDR3 is suggested I tend to think 2000MHz+, where even the AnandTech article showed diminishing returns. Only one 2000+ speed had the right balance between bandwidth and latency to compete with 1866MHz, IIRC.

You also have to remember that the memory controller itself will have improved, and DDR4 bandwidth at high speeds is around double that of 1600 DDR3, and it seems that Skylake can make good use of it.
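As a sanity check on that "around double" figure (using DDR4-3200 as an example of a fast kit; the exact speed is my assumption):

```python
# Dual-channel peak bandwidth: DDR3-1600 vs a fast DDR4 kit.

def peak_gbs(mt_per_s: int) -> float:
    """Dual-channel peak bandwidth in GB/s (64-bit bus per channel)."""
    return mt_per_s * 8 * 2 / 1000

ddr3_1600 = peak_gbs(1600)    # 25.6 GB/s
ddr4_3200 = peak_gbs(3200)    # 51.2 GB/s
print(ddr4_3200 / ddr3_1600)  # 2.0, exactly double at these speeds
```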
 

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
You also have to remember that the memory controller itself will have improved and the DDR4 bandwidth at high speeds is around double that of the 1600 DDR3, and it seems that Skylake can make good use of it.

That was my original point, I was just pointing out a historical trend to DrMrLordX.
 

Shivansps

Diamond Member
Sep 11, 2013
3,855
1,518
136
I was discussing this benchmark with Arachnotronic and ShintaiDK in the Skylake thread.

The Skylake i3 performance is all well and good, but to get that performance you need faster (overclocked) DDR4 RAM, and to use overclocked RAM you have to pay Intel's "chipset tax" for their most expensive chipset, the Z170. If you were going to go with a semi-budget locked i3, why would you want to pay for a Z170 mobo? And if you were, why not just go whole-hog and get an unlocked Skylake i5?

Some things that Intel does, don't make that much sense. (Other than they are incredibly greedy, and short-sighted.)

The x86 (desktop/laptop) empire is under assault from ARM like never before. And Intel just continues to spend money on R&D (billions!) to make a faster CPU, then they cripple them when used with non-enthusiast chipsets, resulting in performance that is barely or not even faster than the prior generation of CPUs.

It's not that much really; the ASRock Z170M Pro4S sells at Newegg for as little as $100, and H170 costs about the same. Only H110 boards are cheaper, and I don't need to tell you that a Z170 board offers a lot more than an H110 to make it worth it by itself.
 
Aug 11, 2008
10,451
642
126
Perhaps, but the point remains that the chip will be used either in an OEM system or a DIY system. In an OEM system, I think the chances are slim to none that they will use a Z170 motherboard and fast enough RAM to give the best performance. There goes the already small gain over Haswell, down the drain.

For a DIY system, it seems absurd to be forced to buy an overclocking motherboard for a chip that you can't overclock. You can't even do Multi-Core Enhancement, can you? Under these conditions, I would not give even a minute of consideration to the i3. The obvious choice is to spend the extra hundred bucks on an i5 and actually use the overclocking feature you are being forced to pay for, not to mention getting two extra real cores.
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
Lack of reviews sucks. From a price/performance/efficiency point of view, this "CPU/GPU combo" is a must-upgrade for 775/1156/1155/1150/AMD dual-core office systems. Along with Windows 10 / HT efficiency improvements, this is looking quite good and worth messing around with.

I still wouldn't recommend any i3 for serious 2016+ gaming, though. But we need more reviews to clear this up. Definitely the best i3 in years, heh.
 
Last edited:

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
I may be buying a B150 mobo (with DDR3) and i3 6100.

If you have some DDR3 to reuse, check out the Asrock H170 Combo, B150 Combo and B150M Combo-G.

Gives you the option of a DDR4 upgrade down the road, without swapping mainboards.

I hope to see full G4400 and G4500 reviews with iGP gaming.

Since the Pentiums have (finally) been upgraded to first-class citizens graphics wise, the graphics performance you're seeing from the 6100 also applies to them. The G44xx ones will be a bit slower due to lower frequency.
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
[Attachment: 2c_2t.png]


Skylake i3 6100 2C/4T versus Haswell i7 4770K 2C/4T @ 3.70 Ghz. Enjoy :)
 

crashtech

Lifer
Jan 4, 2013
10,524
2,111
146
I just realized that some of the settings used for my 6600K may not have reverted to defaults... oops.

edit: yeah, that was it. It's running 1.232 max in LinX now. I'm re-running tests and re-taking screenshots now... sigh...
 
Last edited:

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
[Attachments: cpu_z.png, cine.png, geek.png]


Skylake is quite a bit faster in compression/encryption/MT parts :)
 
Last edited:

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
But this is comparing the FX-8350 and the FX-6300!! How can having one third more cores and at best maybe 10-15% greater clockspeed lead to a 2.5x speedup??

The 6300 has less cache and, as far as I know, the way threads are scheduled or apps are compiled is not favorable to a tri-module part. Plenty of gaming data shows the tri-module parts being left in the dust by quad-module parts.

I do take issue with them testing the 8350 at stock. No one runs FX chips at anything less than 4.5 GHz if they have any brains. That is where the sweet spot is for Vishera in terms of performance per watt per dollar.

Testing a quad module FX at stock is dumb unless it's a 9000 series. 4.5 is readily achievable with cheap motherboards like the UD3P and common tower air cooling. They also need to turn off the power-saving gimmicks because they can neuter framerates.
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
21,631
10,841
136
I do take issue with them testing the 8350 at stock. No one runs FX chips at anything less than 4.5 GHz if they have any brains. That is where the sweet spot is for Vishera in terms of performance per watt per dollar.

That isn't necessarily true. Plenty of people have picked up discounted FX 8300, 8310, and (if they were lucky) 8320E parts, all with listed TDPs of 95W. If you're on a 4+1 phase 760G or 970 board, you don't want to push your luck with those chips; 4.0-4.3 GHz is often the most the VRMs can handle, with or without board mods. Setting all modules to the max turbo for the chip and leaving it there is a perfectly acceptable solution in such situations.

Granted, that's sub-optimal, but if saving every penny counts, you'll find people doing it anyway.