If you have anything less than a 2500K and you want to play AAA games with decent minimums, it's upgrade time.
Do lower-end cards like the R7 360 or the 750 Ti get a considerable boost from a good CPU (versus something like a Celeron or an 860K)?
You need balance in any system. Still, those GPUs will hold you back more than an old CPU will; they can barely sustain 30 FPS.
So there would be no noticeable improvement even if you played those games at lower settings?
Unless it's a Core 2 or something of similar vintage, anything remotely modern should be fine. What CPU do you have in mind?
I was just curious in general. I was surprised that a CPU that isn't even twice as fast overall (4670 vs. 4100) can have such a big impact on performance when paired with higher-end GPUs.
Do lower-end cards like the R7 360 or the 750 Ti get a considerable boost from a good CPU (versus something like a Celeron or an 860K)?
If you consider Cinebench to be a valid benchmark, a stock 4670 is actually a little more than twice as fast as a 4100, scoring 6.41 and 3.12 respectively.
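For what it's worth, the "a little more than twice" claim checks out from the scores quoted above (a quick sanity check, not new benchmark data):

```python
# Ratio of the Cinebench scores quoted above (stock clocks).
score_4670 = 6.41  # i5-4670
score_4100 = 3.12  # FX-4100

ratio = score_4670 / score_4100
print(f"4670 is {ratio:.2f}x the 4100")  # ~2.05x
```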
Still, that's almost a linear performance hit in terms of CPU speed. Which part of the process causes this, considering games are much more GPU-heavy?
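One way to see why the hit can look linear: whichever side of the pipeline is slower sets the frame time, so once the CPU is the bottleneck, FPS tracks CPU speed almost one-to-one. A toy model with made-up numbers (not measurements):

```python
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    # CPU work (game logic, draw-call submission) and GPU work
    # overlap, so the slower stage dictates the frame time.
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

gpu_ms = 10.0       # a strong GPU needs 10 ms per frame
fast_cpu_ms = 12.0  # fast CPU feeds a frame every 12 ms -> CPU-bound
slow_cpu_ms = 24.0  # a CPU half as fast doubles that

print(fps(fast_cpu_ms, gpu_ms))  # ~83 FPS
print(fps(slow_cpu_ms, gpu_ms))  # ~42 FPS: halving CPU speed halves FPS
```

With a weak GPU the `max()` flips the other way, which is why low-end cards see little benefit from a better CPU.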
Lol, modern-day Rollowell. If we're moving the goalposts, then I guess you didn't know there are used, cheaper FXs out there 🙄
Isn't that kinda an oxymoron, or are you saying 4 threads and that's it?
Witcher 3 doesn't scale beyond four threads, but it also loves Hyper-Threading. Any increase from m0ar cores likely stems from the larger caches of those CPUs more than anything.
Isn't that kinda an oxymoron or are you saying 4 threads and that's it?
http://wccftech.com/intel-skylake-6700k-6600k-amd-fx-8370/
I'm saying it uses four threads and that's it. HT can improve performance in other ways, like reducing memory latency and mitigating thread stalls. So the boost from HT-enabled CPUs in that graph isn't because the game itself is using those threads, but probably because the code is latency-bound.
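The "four threads and that's it" idea can be sketched as a toy scaling model. All the parameters here are made up purely to illustrate the shape of that graph, not taken from any benchmark:

```python
def game_speedup(hw_threads, game_threads=4, ht_bonus=1.15):
    # The game only spawns `game_threads` workers, so cores beyond
    # that add nothing to raw throughput...
    effective = min(hw_threads, game_threads)
    # ...but SMT/Hyper-Threading can still help a latency-bound game
    # by hiding memory stalls (crudely modeled as a flat bonus when
    # there are spare hardware threads to schedule onto).
    bonus = ht_bonus if hw_threads > game_threads else 1.0
    return effective * bonus

for n in (2, 4, 8):
    print(f"{n} hardware threads -> {game_speedup(n):.2f}x")
```

The flat `ht_bonus` is the hand-wavy part; the real gain depends on how often the code actually stalls on memory.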