
6700K or 5820K?


X99 + 5820K or Z170 + 6700K

  • X99 + 5820K, no question!

  • Z170 + 6700K, without a doubt!

  • I'm not sure.


DX12 is automatically backward compatible with DX11.3, just like DX11 was automatically backward compatible with DX10. You just have a reduction of features.

As for Windows 10 adoption, the rollout has been extremely successful so far. There are already tens of millions of PCs with Windows 10 installed.

DX11.3 is really just the feature level. Otherwise DX12 still runs with DX11.0 features.

As long as it's backwards compatible, you don't get much benefit. It's essentially Mantle all over again: you get the lower CPU overhead, but it's the same game.

The change comes when you can make a DX12-only game that won't run on anything but DX12. Then you can start to utilize whatever hardware resources you freed up.
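To make the feature-level point concrete, here's a minimal C++ sketch against the public D3D12 API (error handling omitted, and the helper name is just illustrative): you can create a D3D12 device that only requires feature level 11_0, so the low-overhead API runs on DX11-class hardware, and you can then query how high the hardware actually goes.

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Illustrative helper: create a D3D12 device that only needs DX11-class
    // hardware, then ask which feature level the GPU actually supports.
    ComPtr<ID3D12Device> CreateDeviceAndReportFeatureLevel()
    {
        ComPtr<ID3D12Device> device;
        // Feature level 11_0 as the minimum: DX12's API on a DX11-class GPU.
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

        D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
                                          D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1 };
        D3D12_FEATURE_DATA_FEATURE_LEVELS fl = {};
        fl.NumFeatureLevels        = _countof(requested);
        fl.pFeatureLevelsRequested = requested;
        device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &fl, sizeof(fl));
        // fl.MaxSupportedFeatureLevel now holds the highest level the GPU exposes
        // (11_0 on older cards, up to 12_1 on the newest ones).
        return device;
    }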
 
Zero tangible benefit. I run two cards at PCIe 3.0 x8 and don't see performance suffering in any significant way.

But what if you want to add other devices that use PCI-e bandwidth? Say a SATA Express SSD or a dedicated PhysX card, for instance. More and more devices are tapping into PCI-e bandwidth, so if you have lots of PCI-e dependent devices, it makes sense to go with the CPU that has the most available lanes.
 
But what if you want to add other devices that use PCI-e bandwidth? Say a SATA Express SSD or a dedicated PhysX card, for instance. More and more devices are tapping into PCI-e bandwidth, so if you have lots of PCI-e dependent devices, it makes sense to go with the CPU that has the most available lanes.

Who uses a dedicated PhysX card, though? Yeah, if you wanna use a PCIe SSD (which is expensive, so the cost difference for the 40-lane CPU shouldn't matter to you) then you gotta think about that.

Most people, though, have one video card and maybe a sound card. For most people the performance benefit isn't there for their usage at the ~$200 cost difference. My point was only about the x16/x16 SLI or Crossfire scenario, where x16/x8 doesn't reduce performance by anything you can really see outside of a benchmark.
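Rough numbers behind the x16/x8 claim, assuming PCIe 3.0's 8 GT/s per lane with 128b/130b encoding:

    \[
    \text{per lane} \approx \frac{8\,\text{GT/s} \times \tfrac{128}{130}}{8\,\text{bits/byte}} \approx 0.985\,\text{GB/s}
    \qquad\Rightarrow\qquad
    x16 \approx 15.8\,\text{GB/s}, \quad x8 \approx 7.9\,\text{GB/s}
    \]

Even at x8 a card still has roughly 7.9 GB/s each way, which is far more than games typically stream per frame, hence the negligible difference outside synthetic benchmarks.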
 
DX11.3 is really just the feature level. Otherwise DX12 still runs with DX11.0 features.

As long as it's backwards compatible, you don't get much benefit. It's essentially Mantle all over again: you get the lower CPU overhead, but it's the same game.

The change comes when you can make a DX12-only game that won't run on anything but DX12. Then you can start to utilize whatever hardware resources you freed up.

You mean DX-11 games that can also run on the DX-9 API aren't using all the DX-11 features in DX-11 mode??? 🙄

You can have a DX-11 game that also runs on Mantle, with the full features of both APIs.
 
5820K, just because in the UK it is a lot cheaper and has more cores. I also think once overclocked it will beat a 6700K. Oh, also the 5820K has more PCI-E lanes.
 
As long as it's backwards compatible, you don't get much benefit. It's essentially Mantle all over again: you get the lower CPU overhead, but it's the same game.

I don't see your point. DX11.3 does not have the CPU-overhead-lowering attributes of DX12. No major developer is going to waste time developing games primarily for DX11.3 when Windows 10 adoption is already at very high levels and DX11 hardware can still take advantage of DX12's CPU overhead benefits.

DX11.3 is really for smaller and less skilled developers that lack the capability to deal with low-level APIs. All the major 3D engine developers are developing DX12 versions of their engines.

The change comes when you can make a DX12-only game that won't run on anything but DX12. Then you can start to utilize whatever hardware resources you freed up.

Like I said earlier, DX11.3 is basically the highly abstracted version of DX12, so theoretically any DX12 game is going to be capable of running on DX11.3, just with much lower performance.
 
We still have to see the actual Windows 10 adoption rate.

And we can follow it here month by month.
http://store.steampowered.com/hwsurvey

It was 2.3% in July.

Everyone in the industry is very focused on the telemetry here, because it will dictate the when and the if for a lot of things gaming-wise.

Like I said earlier, DX11.3 is basically the highly abstracted version of DX12, so theoretically any DX12 game is going to be capable of running on DX11.3, just with much lower performance.

It's entirely different.

And DX11.3 is actually DX12.1, feature-level-wise.
 
Add DX-12 multithreading in games and anyone going for a quad (HT or not) in 2015 will run into serious trouble soon.

Dood, you gotta go quad man! It's 2010 man! Go quad! Duos won't last man! 3-4 years later, quad is actually required. It's like DirectX: the latest card supports the newest DirectX that no games will be using for 1-2 years. Perhaps I'm just bitter since I went quad in Jan 2010 and it wasn't really necessary until early 2012, simply because the best price/perf chips were all quad by then.

Also, look at those power consumption numbers for the overclocked Haswell-Es. Power hogs man. Gotta save the earth man.
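On the "DX-12 multithreading" claim quoted above, the concrete mechanism is that D3D12 lets each worker thread record its own command list, and the main thread submits them all in one call, so extra cores can actually be fed. A minimal sketch against the public API (the device and queue are assumed to exist already; the function and its structure are illustrative, not from any shipping engine):

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>
    using Microsoft::WRL::ComPtr;

    // Illustrative: record one command list per worker thread, then submit them
    // together. A real engine would record its share of draw calls in each thread.
    void RecordAndSubmitInParallel(ID3D12Device* device, ID3D12CommandQueue* queue, int workers)
    {
        std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workers);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);
        std::vector<std::thread> threads;

        for (int i = 0; i < workers; ++i) {
            threads.emplace_back([&, i] {
                device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                               IID_PPV_ARGS(&allocators[i]));
                device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                          allocators[i].Get(), nullptr,
                                          IID_PPV_ARGS(&lists[i]));
                // ... this thread records its slice of the frame here ...
                lists[i]->Close();
            });
        }
        for (auto& t : threads) t.join();

        // One submission covering all the independently recorded lists.
        std::vector<ID3D12CommandList*> raw;
        for (auto& l : lists) raw.push_back(l.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    }

DX11's deferred contexts tried something similar, but the driver still serialized most of the work; under DX12 the recording cost genuinely spreads across cores, which is the basis of the "more than four cores will matter" argument.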
 
After more reading, the really big benefit for Z170 is the extra PCIe 3.0 lanes offered by the chipset. You get a total of 36 lanes now: 16 from the CPU and 20 from the chipset. Intel says the chipset offers enough bandwidth for M.2 cards and USB 3.1 without dropping PCIe slot bandwidth to compensate. I noticed during my research that a lot of X99 boards offering USB 3.1 and WiFi say those features share bandwidth with certain PCIe slots, and that those slots will drop to x1 speed, effectively killing SLI setups when the other features are in use. Asus, for example, doesn't specify whether this behavior exists with a 40-lane CPU as well as the 28-lane CPU; it may be built into some of their boards by default, but I cannot find any info one way or the other. A rough lane budget is sketched below.

Just more to think about depending on usage.
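For reference, a rough lane and bandwidth budget for the two platforms, assuming Intel's published numbers (Z170's DMI 3.0 uplink is roughly a PCIe 3.0 x4 link, X99's DMI 2.0 roughly half that):

    \[
    \underbrace{16}_{\text{6700K CPU}} + \underbrace{20}_{\text{Z170 PCH}} = 36 \text{ lanes, but the 20 PCH lanes share one DMI 3.0 uplink} \approx 4 \times 0.985 \approx 3.9\,\text{GB/s}
    \]
    \[
    \underbrace{28}_{\text{5820K CPU}} = \underbrace{16 + 8}_{\text{two GPUs}} + \underbrace{4}_{\text{spare, e.g. NVMe}}, \quad \text{plus } 8 \text{ PCIe 2.0 lanes on the X99 PCH behind DMI 2.0} \approx 2\,\text{GB/s}
    \]

So the Z170 "extra" lanes are real, but everything hanging off the chipset (M.2, USB 3.1, SATA, NIC) contends for that single ~3.9 GB/s uplink, whereas the 5820K can feed a second GPU and an SSD straight from the CPU.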
 
I voted: Both. And you should too.

6700k for adorable itx desktop jewellery system

X99 and 6+ core monster for heavy desktop productivity lifting.

Why should you get both? Think about it. Your DIY overclocking future depends on it.


the-iDarkness.jpg

OK, don't think about it. Just think about this:


Intel is still the big bouncy bazillion dollar Chipzilla. For now. They have missed (by years and years) the iGold mobile market. They still have a chance in the iFoolsGold market, but at horrible non-Intelian margins.

Yes, they make massive profits servicing the always expanding tracking & surveillance internet, including the government citizenwatchniche. But, enthusiasts can not live on locked Xeons alone.

So...before it's too late and the last standing desktop chip foundry falls in a comatose heap of over-priced, clock-locked server, tablet and laptoppy chips, we need to send Intel as much desktop funding as we can spare.

That's why if you really want to keep the hot 'n heavy desktop overclockin' party rocking like it's 2009, you need to buy more than one new system this year. In fact, why take the chance, buy 2 of each. Don't ask what Intel can do for you today, but what can you do for Intel tomorrow?

This Message brought to you by (buy) the Save Intel Foundation.
 
For just typical use and single-card gaming, then of course the 6700K. The 5820K if you just happen to do things that need more than 4 cores/8 threads. For top-end multi-GPU setups I would go 5930K, NOT 5820K.
 
Add DX-12 multithreading in games and anyone going for a quad (HT or not) in 2015 will run into serious trouble soon.

You keep promoting the digital equivalent of Snake Oil.

There is no way that Game Developers are going to cut out 90% of their market.
 
I will take 5820K/X99 in a heartbeat.

I don't think the mobo vendors realise how dangerously close they are pricing their Z170 boards to a $200 X99 board.
 
Since I already have a 5820k I won't be switching. Plus I've had it for months now and the prices are still the same. Only thing I'd consider switching to would be a 5960x if I found a deal for around $500.
 
You keep promoting the digital equivalent of Snake Oil.

There is no way that Game Developers are going to cut out 90% of their market.

Just a few examples:

Crysis: how many gamers upgraded their hardware to play that game?
BF4 MT: how many gamers upgraded their hardware to play that game?

Also, there is NO game that has been made targeting 100% of the market. And if you do have a quad, you will still play the game, just at lower fps.

It seems that people have forgotten that PC gaming is not like consoles: newer PC games will need faster PCs, and you will have to upgrade sooner or later. You do that with your GPUs; now it will be the CPU's turn to be upgraded.
 
Neither. Get a Broadwell. 65W, yet comes with a big fat L4 cache that massively improves memory performance and improves gaming smoothness. And if DX12 asymmetrical multi-GPU ever takes off, you've got a kickass IGP.
 
Neither. Get a Broadwell. 65W, yet comes with a big fat L4 cache that massively improves memory performance and improves gaming smoothness. And if DX12 asymmetrical multi-GPU ever takes off, you've got a kickass IGP.
yeah let me know where you can buy one in the USA...
 
yeah let me know where you can buy one in the USA...

http://www.amazon.com/gp/offer-listing/B00YAEA0U2/ref=dp_olp_0?ie=UTF8&condition=all&qid=1438937862&sr=1-1 - $508.38 but with free shipping!!!

There are a few caveats:
"This is a brand new product and of made in Japan. The product will be shipped from Japan. The product may be sold out due to the parallel sales of the product. We will refund the money if the product or services are not satisfactory. We will make our best commitments to improve our services. Thank you. "

Other than the price and the uncertainty of stock, you're good to go!
 
I really doubt that in this age of downgraded games that we're going to see developers push a ton of extra objects in the PC version over the console versions. Maybe a PC exclusive can take advantage of it because they aren't shackled to the limitations of the consoles, but the intersection of big devs and PC exclusives isn't that big.

Next installment of Civ, sure, but that game is single-thread bound on AI turn times more than anything else.

Ashes of the Singularity, maybe? SupCom did fine on my Q6600 until someone turned on the AI, but those were the sad times of being CPU-bottlenecked by someone else's CPU.

I think come the next generation of consoles the 5820K will really pull ahead, but I see CPU performance requirements stagnating until then. Either way there will be better things than either CPU at that point... I hope.
 
I voted: Both. And you should too.

6700k for adorable itx desktop jewellery system

X99 and 6+ core monster for heavy desktop productivity lifting.

Why should you get both? Think about it. Your DIY overclocking future depends on it.


the-iDarkness.jpg

OK, don't think about it. Just think about this:


Intel is still the big bouncy bazillion dollar Chipzilla. For now. They have missed (by years and years) the iGold mobile market. They still have a chance in the iFoolsGold market, but at horrible non-Intelian margins.

Yes, they make massive profits servicing the always expanding tracking & surveillance internet, including the government citizenwatchniche. But, enthusiasts can not live on locked Xeons alone.

So...before it's too late and the last standing desktop chip foundry falls in a comatose heap of over-priced, clock-locked server, tablet and laptoppy chips, we need to send Intel as much desktop funding as we can spare.

That's why if you really want to keep the hot 'n heavy desktop overclockin' party rocking like it's 2009, you need to buy more than one new system this year. In fact, why take the chance, buy 2 of each. Don't ask what Intel can do for you today, but what can you do for Intel tomorrow?

This Message brought to you by (buy) the Save Intel Foundation.

It is not wise to overclock a machine that is used for productivity.
 
It is not wise to overclock a machine that is used for productivity.

It is also not wise to make gross generalizations. And yet peeps still do it.

Many have been telling me about the hidden yet extreme dangers of OCing for over 10 years. You'd think I would have learned by now, with a failure rate of 0 and a hidden data corruption rate of 0. I don't overclock to the extreme ('cept for some benching): my rigs get a battery of stability testing and then are battle-proven.

I use my rigs for video editing, animation, rendering and general development - all overclocked. When a project render goes bad, it ain't hidden. The failures I've seen have been all traced to buggy software and bad drivers. My overclocked systems increase my productivity by 30-50% - that's a risk worth taking. For me. Maybe not for others.

Thanks for the wise advice, and for enjoying the OC hobby as much as you seem to, while completely missing the entire playfulness of my post.
 
the upgrade itch needed to be scratched.

I did a completely new build around a
5820K @ 4.4 GHz stable
MSI X99A mobo
16GB of Corsair Vengeance DDR4
Gigabyte G1 980 Ti
256GB Samsung 850 EVO

I kept my older computer, which has a 3570K + 780 SC, and am going to use it in my office since my present work computer is a POS.

Skylake looks very good, but I had some good combo deals at a local Micro Center so I just pulled the trigger.

May look into a Skylake-Y part later this year if I can get a fanless SP4. Need to justify it though, since I have an SP3 🙁
Maybe I can give the SP3 to the wife or something.
 