6700K or 5820K?

Page 12 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

X99 + 5820K or Z170 + 6700K

  • X99 + 5820K, no question!

  • Z170 + 6700K, without a doubt!

  • I'm not sure.



toyota

Lifer
Apr 15, 2001
12,957
1
0
Well, I am not.

BTW, why wouldn't you like your video response?
I edited my comment just to add the Crysis 3 comparison. The same goes for the other games they tested in that earlier video. My only point is that I would not trust anything Techno-Kitchen posts.
 

readers

Member
Oct 29, 2013
93
0
0
I edited my comment just to add the Crysis 3 comparison. The same goes for the other games they tested in that earlier video. My only point is that I would not trust anything Techno-Kitchen posts.

That number is off for sure, but I don't think you were running 4K.

Very interesting; going to look up some 980 Ti SLI 4K Crysis 3 results.
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
IPC too. While I also picked X99 after seeing Skylake's performance, the difference in IPC does exist, so it's not just a little bit of clock speed; it's also a little faster clock for clock per core.

I still maintain AotS means nothing; I would not even have heard of this game had it not had the first public DX12 benchmark.

Fair enough. I think AotS is less important as a game and more important for what it represents: a sign that developers are finally moving toward new game engines built from the ground up to scale with hardware beyond single-core performance.

As a gamer it's nice to see what looks like a spiritual successor to SupCom: Forged Alliance. As a computer enthusiast it's nice to see what looks like a new era in game software design.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
That number is off for sure, but I don't think you were running 4K.

Very interesting; going to look up some 980 Ti SLI 4K Crysis 3 results.
The res I was running was not relevant. The argument when people linked to that video was that the 4790K was a CPU limitation, when in reality it's not.
 

readers

Member
Oct 29, 2013
93
0
0
The res I was running was not relevant. The argument when people linked to that video was that the 4790K was a CPU limitation, when in reality it's not.

It is relevant: when you change anything in a setup, it can change everything. A single 980 Ti at 1080p can put a completely different load on the CPU than 980 Ti SLI at 4K.

So far I have failed to find anything comparable; almost all 980 Ti SLI 4K Crysis 3 footage on YouTube is with 6-8 core X99/X79 systems. One video has a 4770K, but the fps counter is so tiny I can hardly read it. In the few moments I can, it shows 55ish fps.

https://www.youtube.com/watch?v=QVqBEPeN_Nk



More than likely you are correct; I just want to find something better that shows it.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
It is relevant: when you change anything in a setup, it can change everything. A single 980 Ti at 1080p can put a completely different load on the CPU than 980 Ti SLI at 4K.

So far I have failed to find anything comparable; almost all 980 Ti SLI 4K Crysis 3 footage on YouTube is with 6-8 core X99/X79 systems. One video has a 4770K, but the fps counter is so tiny I can hardly read it. In the few moments I can, it shows 55ish fps.

https://www.youtube.com/watch?v=QVqBEPeN_Nk



More than likely you are correct; I just want to find something better that shows it.
Again, you are missing the whole point. Some people used that video claiming the OCed 4790K would drop into the 40s and was suffering in newer games. The system in that video is NOT performing badly because of the CPU.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Must be an SLI driver issue and additional overhead, as I just tested there myself and don't drop below 61, and he is hitting 50 fps there because of the CPU. It's always been known that SLI can need more CPU power, and in that spot I am hitting 90% CPU usage with a single 980 Ti, so he is probably pegging his CPU with 980 Ti SLI. A quad-core CPU without HT would have crapped itself right there trying to push two high-end GPUs.
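The reasoning here (GPU-bound when the GPU is saturated, CPU-bound when the GPU is starved while the CPU is pegged) can be sketched as a quick heuristic. This is purely illustrative: the function name, the thresholds, and the sample numbers are my assumptions, not data from the video.

```python
# Rough heuristic for classifying a frame-rate cap from monitoring logs
# (e.g. an on-screen overlay). Thresholds are illustrative assumptions.
def classify_bottleneck(cpu_util, gpu_util, gpu_cutoff=95, cpu_cutoff=90):
    """cpu_util: busiest-thread CPU usage %, gpu_util: GPU usage %."""
    if gpu_util >= gpu_cutoff:
        return "GPU-bound"      # GPU saturated: a faster CPU won't help much
    if cpu_util >= cpu_cutoff:
        return "CPU-bound"      # GPU starved while the CPU is pegged
    return "other (driver/SLI overhead, frame cap, etc.)"

# Made-up samples: single-GPU case where the GPU is maxed out,
# then an SLI-style case where both GPUs wait on the CPU.
print(classify_bottleneck(cpu_util=90, gpu_util=99))   # GPU-bound
print(classify_bottleneck(cpu_util=98, gpu_util=70))   # CPU-bound
```

With SLI the driver's extra submission work pushes the CPU side up, which is why a quad core that was fine with one GPU can tip into the CPU-bound case with two.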
 

readers

Member
Oct 29, 2013
93
0
0
Must be an SLI driver issue and additional overhead, as I just tested there myself and don't drop below 61, and he is hitting 50 fps there because of the CPU. It's always been known that SLI can need more CPU power, and in that spot I am hitting 90% CPU usage with a single 980 Ti, so he is probably pegging his CPU with 980 Ti SLI. A quad-core CPU without HT would have crapped itself right there trying to push two high-end GPUs.

So the takeaway really is that for a single GPU, a quad-core Haswell is perfectly fine, but for dual high-end GPUs it can start to have some issues, although not much to worry about for now. No idea how Skylake fares; can't find anything yet.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
So the takeaway really is that for a single GPU, a quad-core Haswell is perfectly fine, but for dual high-end GPUs it can start to have some issues, although not much to worry about for now. No idea how Skylake fares; can't find anything yet.
A quad core is not fine if you want smooth framerates the whole time and always get 60 fps. Watch Dogs runs like crap in spots if I disable HT and has lots of little hitches and fps drops. Batman: Arkham Knight is much less smooth with HT off too, though with no big hitches like in Watch Dogs. Crysis 3 cannot stay above 60 fps in some spots without HT on either. Some spots in Witcher 3 also run better and smoother with HT on. GTA 5 and even Dying Light will completely peg my CPU at times without HT, although I see no ill effects of that in Dying Light and only a small fps penalty in GTA 5.
 

Tecnoworld

Member
Sep 28, 2015
49
0
0
I have my setup up and running: 5820K rock stable at 4.4GHz with 1.26V (I'm not interested in more for now) and memory at 2666MHz CAS 14. Should I test the memory at lower CAS? Do the other parameters count a lot? They are at 15-15-35-N2 right now.
 

njdevilsfan87

Platinum Member
Apr 19, 2007
2,330
251
126
You can try; just don't be too surprised if you go one step too far and end up having to clear CMOS. :p

2666 CL14, and if 1T, is pretty good, imo.
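For what it's worth, you can compare kits by converting CAS into actual time: true latency is CL cycles divided by the memory clock, which is half the effective transfer rate. A quick back-of-the-envelope sketch (the function name is mine):

```python
# True CAS latency in nanoseconds.
# The memory clock is transfer_rate/2 MHz, so one cycle is 2000/rate ns.
def cas_latency_ns(transfer_rate_mts: int, cl: int) -> float:
    return 2000 * cl / transfer_rate_mts

# DDR4-2666 CL14 ~= 10.50 ns, CL13 ~= 9.75 ns, DDR4-3000 CL15 = 10.00 ns
for rate, cl in [(2666, 14), (2666, 13), (3000, 15)]:
    print(f"DDR4-{rate} CL{cl}: {cas_latency_ns(rate, cl):.2f} ns")
```

So dropping from CL14 to CL13 at 2666 buys well under a nanosecond; worth testing, but not worth fighting for stability over.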
 

YBS1

Golden Member
May 14, 2000
1,945
129
106
I'm a bit disappointed by the result I got in Geekbench 3. CPU @ 4.4GHz, SpeedStep disabled, memory at 2666MHz, 13-13-13-29-2:

http://browser.primatelabs.com/geekbench3/3773388

What's wrong with my system? In their database I see similar systems scoring in the 5xxx-3xxxx range.

There is no way to tell with that particular benchmark what frequency the CPUs were running at, so I wouldn't sweat it too much. I did notice you ran the 64-bit tests; in an earlier thread a member who purchased it stated the scores were considerably higher running 64-bit vs. the free 32-bit tests. Can you confirm?
 

Tecnoworld

Member
Sep 28, 2015
49
0
0
The CPU was at 4.4GHz (locked with SpeedStep off, monitored with CPU-Z). Could you suggest other benchmarks? I downloaded Sandra, but I have a hard time reading its results... it's a mess!

Yes, the 64-bit version of Geekbench gives higher results compared to 32-bit, if that's what you were asking.
 

YBS1

Golden Member
May 14, 2000
1,945
129
106
Asus Realbench is a varied benchmark that has a pretty large database of scores to compare to.
 

warzeta

Member
Mar 5, 2015
32
0
66
When you're doing SLI, it's more likely the PCIe lanes that bottleneck and not the 4770K; that's the advantage with 2011 and 2011-3.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
^ That one. 1.35V with 15-16-16-35 timings. I would also consider 1.2V DDR4-2800, as YBS1 advised, since chances are future CPUs might not support 1.35V DDR4 but will probably work with 1.2V DDR4. It would be a nice bonus to be able to reuse the DDR4 should you decide to upgrade in 3-4 years. Having said that, I bet you can run 1.35V DDR4-3000 at 1.2V with lower clocks and/or looser timings, so it's not the end of the world.

Tell us which later CPU that had a DDR3 IMC didn't allow 1.65V even when its predecessors did?

None. Chances are, as the JEDEC spec for DDR3 said 1.5V ±5%, the same would apply to DDR4 and 1.2V; 1.35V would already be out of standard. But the JEDEC spec doesn't mean anything in reality; with DDR3 we even had XMP kits at 1.8-1.9 volts!

So go for 1.35V, enjoy the tighter timings and be done with it. Even XMP DDR4 kits that go for 1.35V are just applying as much voltage as DDR3L kits; it's totally fine.
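The voltage argument can be put in numbers. The nominal voltages below are the ones mentioned in the thread (DDR3 1.5V, DDR3L 1.35V, DDR4 1.2V); the ±5% tolerance is carried over from the DDR3 figure quoted above, so treat this as an illustrative sketch rather than the JEDEC text itself.

```python
# Nominal VDD per memory generation, as cited in the discussion above.
JEDEC_NOMINAL_V = {"DDR3": 1.50, "DDR3L": 1.35, "DDR4": 1.20}

def within_spec(generation: str, volts: float, tolerance: float = 0.05) -> bool:
    """True if `volts` is within +/- tolerance (fraction) of nominal."""
    nominal = JEDEC_NOMINAL_V[generation]
    return abs(volts - nominal) <= nominal * tolerance

print(within_spec("DDR4", 1.20))   # True: standard DDR4 voltage
print(within_spec("DDR4", 1.35))   # False: XMP-style overvolt, out of spec
print(within_spec("DDR3", 1.90))   # False: the extreme DDR3 XMP kits mentioned
```

Which is exactly the point: 1.35V DDR4 is out of spec on paper, but it's the same absolute voltage DDR3L kits ran at nominally.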
 

dsl

Junior Member
Oct 13, 2015
1
0
0
Interesting thread, with a lot of great info.

I was looking at both and decided on a 5820K/X99 for my current Linux build. My use is similar to what Tecnoworld said for his: photo editing (darktable,lightzone,gimp,shotwell), video editing (Pitivi, OpenShot, Lightworks), and some audio work. Also web research (50-100 open tabs). No games at all, just a media work machine.

I'm looking at 32GB RAM, a 500GB SSD for the OS and media libraries, and 5TB for user data -- I'll need the drive space! But I do have a budget and am trying to keep costs down. Right now I'm looking at the ASRock X99M Extreme4 Micro ATX LGA2011-3 motherboard, but am open to anything that's lower cost and still good for my purposes. And I have no idea which GPU to go with. Gaming is a non-issue; this is just for photography and video, and it needs good open-source support.