ITT: We list the current-generation CPUs most in need of improvement


cbn

Lifer
Mar 27, 2009
12,968
221
106
And in other cases? Any other games tested in that Anandtech review?

In games that are not as CPU intensive, the A8-7650K is a bit faster than the R7 240.

The reason is that in those games the GPU becomes the bottleneck, and the A8-7650K's iGPU has roughly 10% more raw shader throughput than an R7 240 at full turbo (i.e., 384 SPs @ 720 MHz vs. 320 SPs @ 780 MHz turbo).
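A rough back-of-the-envelope check of that ~10% figure (treating peak throughput as simply SP count × clock; real-world results also depend on memory bandwidth and drivers):

```python
# Rough check of the ~10% figure: treat peak shader throughput as
# stream processors x clock (x2 for one FMA per cycle).
# Real-world performance also depends on memory bandwidth and drivers.

def peak_gflops(stream_processors, clock_mhz):
    """Peak single-precision GFLOPS assuming 2 FLOPs (one FMA) per SP per cycle."""
    return stream_processors * (clock_mhz / 1000.0) * 2

a8_7650k_igpu = peak_gflops(384, 720)   # ~553 GFLOPS
r7_240_turbo = peak_gflops(320, 780)    # ~499 GFLOPS

print(f"A8-7650K iGPU : {a8_7650k_igpu:.0f} GFLOPS")
print(f"R7 240 (turbo): {r7_240_turbo:.0f} GFLOPS")
print(f"iGPU advantage: {100 * (a8_7650k_igpu / r7_240_turbo - 1):.1f}%")  # ~10.8%
```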
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
1. Braswell N3000: At 4W, I think Intel could do a lot better. (Even the N2807 on 22nm has faster clocks than this 14nm processor)

I missed the Celeron G470 Sandy Bridge days from 2013. In terms of market positioning, the Braswell Celeron N3050 is its successor for 2016, having replaced both the J1800 and the N2840 from 2014-2015.

Yes, the Celeron G470 (2 GHz Sandy Bridge, 1C/2T) sounded like a good chip.

In fact, I was thinking 1C/2T Broadwell or Skylake @ 4.5W would be really interesting for a N3000 replacement:

http://forums.anandtech.com/showthread.php?p=37869125#post37869125

One reason is that the N3050 is a 6W chip, so it can't exactly replace the 4W N3000 in all applications. However, a 4.5W Core-based chip should be able to (or, if necessary, could have its specs reduced to hit 4W).
 

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,676
136
In games that are not as CPU intensive, the A8-7650K is a bit faster than the R7 240.
Wait, so in many games the integrated $5 GPU is just as fast or even faster than a $70 dGPU!? I'm shocked!

I'm starting to think AMD should bundle a free R7 240 with every X4 860K.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Wait, so in many games the integrated $5 GPU is just as fast or even faster than a $70 dGPU!? I'm shocked!

1.) The Athlon X4 860K is $70 (on Amazon).

And the A8-7650K is $96 (on Amazon). Plus, this APU wants DDR3-2133, which adds roughly $10 to the cost compared to DDR3-1600.

Now how do you figure the iGPU costs the consumer $5? It's already $36 more once the RAM is factored in on top of the $26 premium for the APU itself, and that's before accounting for the A8-7650K being a lower CPU bin than the Athlon X4 860K.

Using the A8-7670K with DDR3-2133 in place of the A8-7650K would increase the price premium to $38 over the Athlon X4 860K, but at least the CPU bin is almost as good as the Athlon X4 860K's.


2.) The R7 240 @ $70 is a very poor value for a dGPU. The PNY Nvidia GT 730 GDDR5 is regularly $55-58 shipped after a $10 rebate, and it is much better in terms of GPU core and memory bandwidth (384 Kepler cores @ 902 MHz with 40 GB/s of bandwidth vs. 320 GCN cores @ 720/780 MHz with 25.6 GB/s). The bandwidth figures are worked out in the sketch at the end of this post.



3.) Anandtech only tested five games for that review: Alien Isolation, Total War: Attila, GTA V, GRID Autosport, and Shadow of Mordor.

I would say that, at a minimum, a game like Battlefield 4 (64-player) should also be tested (by someone), simply because it is so popular. EDIT: Some additional games worth adding as well ---> http://store.steampowered.com/stats/

From the Steam link above:
Top games by current player count

Current Players Peak Today Game

442,901 877,218 Dota 2
238,734 709,278 Counter-Strike: Global Offensive
105,916 122,020 Fallout 4
57,893 61,405 Team Fortress 2
40,565 53,538 Warframe
38,737 45,239 Sid Meier's Civilization V
35,072 37,972 Garry's Mod
30,162 48,313 Grand Theft Auto V
29,608 41,801 ARK: Survival Evolved
25,039 34,404 The Elder Scrolls V: Skyrim
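To back up the 40 GB/s and 25.6 GB/s figures in point 2 above, here is a quick sketch of the bandwidth arithmetic. The bus widths and effective data rates are the commonly listed specs for these cards; treat them as assumptions if a particular board differs.

```python
# Quick check of the memory bandwidth figures in point 2 above.
# Bus widths and effective data rates are the commonly listed specs for
# these cards; treat them as assumptions if a particular board differs.

def mem_bandwidth_gbs(bus_width_bits, effective_rate_gtps):
    """Peak bandwidth = bytes per transfer x transfers per second."""
    return (bus_width_bits / 8) * effective_rate_gtps

gt730_gddr5 = mem_bandwidth_gbs(64, 5.0)    # 64-bit GDDR5 @ 5.0 GT/s -> 40 GB/s
r7_240_ddr3 = mem_bandwidth_gbs(128, 1.6)   # 128-bit DDR3 @ 1.6 GT/s -> 25.6 GB/s

print(f"GT 730 (GDDR5): {gt730_gddr5:.1f} GB/s")
print(f"R7 240 (DDR3) : {r7_240_ddr3:.1f} GB/s")
```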
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Regarding the issue of fixing the APU's throttling behavior: judging by what is happening in the following post, it should be possible. (re: the throttling happens despite relatively low power consumption and plenty of cooling)

http://www.tomshardware.com/answers/id-2263550/amd-a10-7850k-throttling-igpu-load.html

Dear All,

I'm turning to you for help.

My brand new, out-of-the-box AMD A10-7850K is throttling when the iGPU is under load during gameplay. The CPU frequency drops from 3.7 GHz to around 3.0 GHz, causing low frame rates and thus unplayable games.
(I did not buy this APU for heavy gaming, only occasional gaming. I'm actually playing SpaceEngine, which isn't as demanding on hardware as BF4, for example.)

It throttles practically immediately after the iGPU gets a load. For example, when I minimize the game window, the CPU frequency goes back to 3.7 GHz, and when I maximize it, the frequency is down at 3.0 GHz in the same second. (This also happens when stressed with AIDA64.)

I gathered from other threads that the problem is throttling due to the heat/TDP limit being reached. I don't really understand why this is happening when I get the following reading from AMD Overdrive (under full load from AIDA64): thermal margin: 45°C/113°F (stable after 15 minutes of stressing, so plenty of room here).

Power use of the whole APU does not exceed 50W under full load according to AIDA64 (this APU's TDP is 95W, as far as I'm aware).

The whole system is in a Thermaltake case that has more venting holes than solid material. I'm also using a Zalman Optima CNPS10X for the APU and three 12 cm fans for cross-ventilation of the case. The VRMs, the RAM and the NB also get a direct flow of ambient air (25°C/77°F).

Temp readings under full load (stable after 15 minutes, stressed for 1 hour):
- APU: 42°C/108°F
- system: 32°C/90°F

Settings:
I turned off every limiting factor in BIOS I could find, such as AMD power management, cTDP, AMD Cool&Quiet, Turbo Boost, etc.
Tried many combinations of CPU/iGPU/RAM/NB speed that were stable (without voltage tweaks).
Set Windows 7 power options to "performance" (detailed CPU settings are not available where other threads describe them; I can only configure the cooling profile).

My hardware setup:
AMD A10 7850K (no OC, I'm using it @ 3.7 GHz and 1.3125V)
Gigabyte G1.Sniper A88X (BIOS version F8)
Kingston HyperX Beast 2400 MHz running @ 2133 MHz
Samsung SSD
Corsair 500W PSU

^^^^ That is a really nice kit he has, too. Better FM2+ hardware than most people have.
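Not from that thread, but for anyone who wants to reproduce the symptom quickly, here is a minimal sketch (assuming Python with the psutil package installed) that samples the reported CPU clock once per second; run it while a game or stress test is loading the iGPU and the drop from ~3.7 GHz toward ~3.0 GHz should show up within a sample or two. On some systems psutil reports only nominal clocks, in which case a monitoring tool like AMD OverDrive or HWiNFO is more reliable.

```python
# Minimal throttling check (assumes the psutil package is installed).
# Run this while a game or stress test is loading the iGPU and watch whether
# the reported clock drops from the rated ~3.7 GHz toward ~3.0 GHz.
import time

import psutil


def log_cpu_clock(samples=30, interval=1.0):
    for _ in range(samples):
        freq = psutil.cpu_freq()             # current/min/max, in MHz
        load = psutil.cpu_percent(interval=None)
        print(f"CPU clock: {freq.current:7.1f} MHz   CPU load: {load:5.1f}%")
        time.sleep(interval)


if __name__ == "__main__":
    log_cpu_clock()
```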
 
Aug 11, 2008
10,451
642
126
Yea, well, if you are the company with 15% or whatever marketshare, I would think you would want your products to "just work" without having to be tweaked by the user.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Apparently the CPU also throttles when the APU is in dual graphics mode (see the poster voted as best response in that Tom's Hardware thread I linked in post #80).

Edit: even using the R7 250 in dual graphics, it still throttles. Get an R7 260 or higher card and disable the iGPU to stop the CPU from throttling.

Makes sense to me that would happen too.

In fact, I wonder if this is one reason why we see poor performance with APUs in dual graphics. (re: adding an additional GPU can remove the graphics bottleneck, which could then shift to the CPU, but this only helps if the CPU does not throttle)

Therefore, I think at this point it would be great to see AMD fix at least these two things:

1. APU CPU throttling when the iGPU is in use.

2. Pricing on low-end dGPUs like the R7 240 and R7 250. (re: to make them more competitive with Nvidia and actually provide an incentive to use them in dual graphics)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Another reason to fix the CPU throttling for the APU (when the iGPU is in use) would be DX12 with a "large" discrete card added. Supposedly the iGPU will be usable in this scenario even if it is not being used for dual graphics.

So reasons (up to this point) for CPU throttling fix include:

1. CPU-intensive games when the APU is used standalone.
2. Dual graphics (re: adding an additional GPU can remove the graphics bottleneck, but the CPU must run at full speed to take advantage of this).
3. iGPU use in DX12 when a large discrete card (i.e., non-dual-graphics) is used.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Yea, well, if you are the company with 15% or whatever marketshare, I would think you would want your products to "just work" without having to be tweaked by the user.

This is especially true with popular games.

Of the five games Anandtech tested in the A8-7650K review (linked in post #73), only GTA V was in the Steam top 10 (it was ranked #8).

And that was the game that ran 10 FPS better when substituting the smaller 320 SP dGPU for the larger 384 SP iGPU.

---------------------------------------------------------------------------------------

P.S. Here is how the other four games used in the Anandtech A8-7650K review ranked on the Steam list linked above (click "view all of the top 99 most played games"):

Alien Isolation: Not ranked in top 99.

Total War Attila: 36th

GRID Autosport: Not ranked in top 99.

Shadow of Mordor: Not ranked in top 99.
 

Ranulf

Platinum Member
Jul 18, 2001
2,331
1,138
136
On price/performance at Intel's lower end:

i3-4130 (3.4 GHz, 54W) vs. i3-4130T (2.9 GHz, 35W)

$10-20 more for the lower-power "green" chip. It will be interesting to see where the Skylake version falls in pricing.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Another reason to fix the APU throttling beyond the three reasons listed in post #84 would be GDDR5.

If GDDR5 prices drop sufficiently, perhaps a board manufacturer could make use of the GDDR5 option in Kaveri/Godavari and install it as soldered-down memory.

The higher bandwidth would remove the iGPU bottleneck in many scenarios, increasing the advantage of the higher (non-throttling) CPU clocks.

So up to this point there are at least four reasons to remove the CPU throttling:

1. CPU-intensive games when the APU is used standalone (see post #73 for an example).
2. Dual graphics (re: adding an additional GPU can remove the graphics bottleneck, but the CPU must run at full speed to take advantage of this).
3. iGPU use in DX12 when a large discrete card (i.e., non-dual-graphics) is used.
4. GDDR5 (re: the additional memory bandwidth can remove the graphics bottleneck, but the CPU must run at full speed to take advantage of this).
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Another reason to fix the APU throttling beyond the three reasons listed in post #84 would be GDDR5.

If GDDR5 prices drop sufficiently, perhaps a board manufacturer could make use of the GDDR5 option in Kaveri/Godavari and install it as soldered-down memory.

The higher bandwidth would remove the iGPU bottleneck in many scenarios, increasing the advantage of the higher (non-throttling) CPU clocks.

If you need special memory, you'd be better off going back in time and having a separate GPU integrated onto the motherboard at that point. That would be the step between a low-powered on-die GPU and a discrete solution.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
GDDR5 latency is about twice as bad as standard DDR3. That would be bad for the x86 cores.
 

NTMBK

Lifer
Nov 14, 2011
10,208
4,940
136
If you need special memory, you'd be better off going back in time and having a separate GPU integrated onto the motherboard at that point. That would be the step between a low-powered on-die GPU and a discrete solution.

But then you need to pay for two separate memory pools, two separate dies, and a more complex PCB.

GDDR5 latency is about twice as bad as standard DDR3. That would be bad for the x86 cores.

I thought that in nanosecond terms they were roughly the same? Higher number of memory cycles, but the higher frequency balanced that out. (Certainly the PS4 seems to do fine.)

I would have loved to see a GDDR5 Kaveri with 130W power limit and higher GPU clock speed, but I doubt that it will happen at this point.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I thought that in nanosecond terms they were roughly the same? Higher number of memory cycles, but the higher frequency balanced that out. (Certainly the PS4 seems to do fine.)

According to the following article, that is true. (This is in contrast to the specs we see for DDR3 vs. DDR4, where the CAS timing is definitely looser on DDR4 at a given clock speed. Example: DDR3-2133 at CAS 10 or 11 vs. DDR4-2133 at CAS 15.)

http://www.redgamingtech.com/ps4-vs-xbox-one-gddr5-vs-ddr3-latency/

But what about the actual latency of DDR3 vs GDDR5?

It's worth remembering that if you do a little bit of Googling around, you can easily come across the specs of various GDDR5 memory. Hynix, for example, make PDFs of their products freely available on the internet.

The latency of GDDR5 and DDR3 memory does vary some, but the typical CAS latency of most sticks of DDR3 is between 7 and 9 (see the earlier Tom's Hardware link for more information). GDDR5 RAM, meanwhile, has a typical CAS latency of about 15. Once again, all of this depends on the make and model of the RAM.

So, there's the answer, right? Let's say CAS 8 vs. CAS 15? No, it's not. We have to remember that the speeds are, for all intents and purposes, relative. If you take the CAS of each and divide it by the clock speed, you get the delay in nanoseconds: a CAS of 15 at 1.35 GHz gives 15/1.35 ≈ 11 ns.

I'll save you the trouble and say that it's between 10 and 12 ns for both DDR3 and GDDR5, varying heavily with the clock speed of the DDR3 or GDDR5 AND with the timings that have been chosen.

Also, take into account what Mark Cerny has said about GDDR5 latency:

"Latency in GDDR5 isn't particularly higher than the latency in DDR3. On the GPU side... of course, GPUs are designed to be extraordinarily latency tolerant, so I can't imagine that being much of a factor."

And here is the link to the original article quoting Mark Cerny:

http://www.eurogamer.net/articles/digitalfoundry-face-to-face-with-mark-cerny
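To make the article's arithmetic concrete, here is a small sketch (the timings are illustrative examples, not vendor specs): absolute CAS latency in nanoseconds is the CAS cycle count divided by the memory command clock in GHz, which is why CAS 15 GDDR5 and CAS 9-11 DDR3 land in roughly the same 10-12 ns range.

```python
# Sketch of the arithmetic in the article above (illustrative timings, not
# vendor specs): absolute CAS latency in nanoseconds is the CAS cycle count
# divided by the memory command clock in GHz.

def cas_latency_ns(cas_cycles, command_clock_ghz):
    return cas_cycles / command_clock_ghz

# DDR3-1866 at CAS 9: the command clock is half the transfer rate (~933 MHz).
ddr3_ns = cas_latency_ns(9, 0.933)    # ~9.6 ns
# GDDR5 at CAS 15 with a ~1.35 GHz command clock (the article's own example).
gddr5_ns = cas_latency_ns(15, 1.35)   # ~11.1 ns

print(f"DDR3-1866 CL9        : {ddr3_ns:.1f} ns")
print(f"GDDR5 CL15 @ 1.35 GHz: {gddr5_ns:.1f} ns")
```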
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
GDDR5 latency is about twice as bad as standard DDR3. That would be bad for the x86 cores.

If GDDR5 were used, the designers should be able to set the bandwidth at whatever level they need to cover the iGPU and CPU without going too high. This would keep the CL as low as possible.

P.S. The Anandtech article I linked in post #87 mentions that quad channel is available. If it is indeed possible to use this 256-bit bus for GDDR5, that would help the designers choose a low CL. (re: wider buses allow tighter timings for any given bandwidth target)
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
I thought that in nanosecond terms they were roughly the same? Higher number of memory cycles, but the higher frequency balanced that out. (Certainly the PS4 seems to do fine.)

That might well be the case. The PS4 does seem to indicate that it CAN be used as a memory solution for general-purpose computing. Whether or not it's a good idea, or how GDDR5 responds to random reads/writes vs sequential is another issue.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Rather than going 256-bit GDDR5, AMD could go 256-bit with DDR3 as well.

256-bit DDR3-2133 = 68 GB/s of bandwidth @ CAS 10 or 11. (So good bandwidth and a low CL.)

The trouble is that (like 256-bit GDDR5) the FM2+ socket doesn't have (according to this article) enough pins for quad-channel DDR3, so a new socket (either PGA or BGA) would need to be developed.

Another option that *might* be possible would be to use the existing FM2+ socket design for dual-channel (128-bit) GDDR5, solder the GDDR5 memory onto the board, and make a specific Godavari SKU validated to work with both conventional DDR3 FM2+ boards and the special FM2+ boards with 128-bit GDDR5 soldered on. Even better would be if the existing Kaveri and Godavari processors could work with the special 128-bit GDDR5 boards.
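For reference, here is the bandwidth arithmetic behind these options (the GDDR5 data rate is just an assumed example): peak bandwidth is the bus width in bytes times the effective transfer rate.

```python
# Bandwidth arithmetic behind the memory options discussed above. The GDDR5
# data rate is an assumed example; peak bandwidth is simply bus width in
# bytes times the effective transfer rate.

def mem_bandwidth_gbs(bus_width_bits, effective_rate_gtps):
    return (bus_width_bits / 8) * effective_rate_gtps

options = {
    "128-bit DDR3-2133 (today's dual channel)": mem_bandwidth_gbs(128, 2.133),
    "256-bit DDR3-2133 (quad channel)":         mem_bandwidth_gbs(256, 2.133),
    "128-bit GDDR5 @ 5 GT/s (soldered down)":   mem_bandwidth_gbs(128, 5.0),
}

for name, bw in options.items():
    print(f"{name}: {bw:.1f} GB/s")   # ~34, ~68, and 80 GB/s respectively
```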
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
Yea, well, if you are the company with 15% or whatever marketshare, I would think you would want your products to "just work" without having to be tweaked by the user.
And that's really bad, even more so considering that the OEMs are screwing up every chip with insane voltages... Intel gets its share of that too... It seems that leaving the motherboard business may turn out badly over time due to the lack of standards. VIA has an easier time since they still make their own motherboards.

PS: Agreed on the Sempron 2650...
If AMD wanted to show off a strong dual core, they should have just launched that Sempron at 1.65 GHz... or a dual-core Athlon at 2.4 GHz.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
I would also add that Kaveri is essentially dead in the water. It's fine for right now if you want an AMD APU, but it has no future on any platform of which I am aware. Carrizo/Stoney Ridge will show up in mobile configurations and Bristol Ridge is on desktop, and that's it for Construction cores. AMD isn't changing anything about it, since the 7870k is probably the last Kaveri you'll ever see.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I would also add that Kaveri is essentially dead in the water. It's fine for right now if you want an AMD APU, but it has no future on any platform of which I am aware. Carrizo/Stoney Ridge will show up in mobile configurations and Bristol Ridge is on desktop, and that's it for Construction cores. AMD isn't changing anything about it, since the 7870k is probably the last Kaveri you'll ever see.

I do realize that when March 2016 comes, Bristol Ridge on AM4 will be the newest thing for Construction-core APUs.

However, even when Bristol Ridge launches I'm sure there will be a lot of Godavari still in the channel (and probably for quite some time after that).

So, assuming GDDR5 pricing has dropped to tolerable levels, if there is anything AMD can do to enable GDDR5 on existing APUs, then I think that would be a good thing. (This, and fixing the throttling issue in Windows.)

P.S. For a gamer playing the top-ten titles, I think I would rather have 4GB of GDDR5 than 8GB of DDR3 (especially if it is only 1600 speed). And for a higher-end motherboard, 8GB of GDDR5 would work.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Regarding the GDDR5 mode on the Kaveri memory controller, wouldn't this be essentially the same memory controller we find on existing video cards?

Example: the R7 250 (Oland), which is available in both DDR3 and GDDR5 versions.