Question: The FX 8350 revisited. Good time to talk about it, because reasons.


DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
27,420
17,146
146
Is it only in that particular game or others too? Also, is the FX chip OC'd?
It is indeed overclocked via multiplier, 4.5GHz for now, and I pair it with either 16GB of 2133MHz or 32GB of 1866MHz memory. And yes, so far Witcher 3 is the only game I have played where the FX's extra threads matter more than the Ryzen's stronger single-thread performance and better memory bandwidth. The Ryzen is noticeably better in Fallout 4, for example, where the FX's weak IPC struggles at times.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
27,420
17,146
146
The FX will do better in Battlefield V 64-player because 4-core/4-thread CPUs have frame pacing issues on those maps when things get hectic. Assassin's Creed Odyssey will merk both. EDIT: probably Origins too, since it uses Denuvo.
 
Jul 27, 2020
13,329
7,919
106
Curious question: Does FX-8350 work better with Nvidia drivers or AMD drivers? Ironic if Nvidia drivers perform better. That would be like AMD treating their child as a bastard.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
27,420
17,146
146
Curious question: Does FX-8350 work better with Nvidia drivers or AMD drivers? Ironic if Nvidia drivers perform better. That would be like AMD treating their child as a bastard.
I don't have a decent AMD card at this time, though I intend to pick up a 6600 if it gets even cheaper. I have taken to using overkill cards like the 3060 Ti and 2070 Super with the FX, despite it being a PCIe 2.0 x16 platform, because the GTX 1650 Super I was using can't even max out Witcher 3, a 2015 game, at 1080p.

When I owned one, up until building a Ryzen 1500X in 2017, I used it with an RX 580 8GB and a FreeSync monitor. VRR was a real saving grace for the FX in games like Fallout 4, where it could not always hold 60fps for me.

Generally speaking, with weak CPUs Nvidia is better in DX11 and AMD is better in DX12, due to driver overhead differences.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,632
1,193
136
waitwaitwaitwaitwait: is someone saying that there is somewhat active development of Construction-core processors on less-than-bleeding-edge nodes going on?
The core after Excavator, and Excavator's shrinks, were dropped from the high-performance roadmap, with Zen completely taking over the high-performance side.

First instance:
Next-generation "Low Power" x86 cores; Not Leopard or Margay (cancelled in early 2015 officially) and the development appeared around late 2015 around designs complete of Zen.

Second instance:
Around the 12FDX/7LP announcement in late 2016, it switched to a second overarching concealed codename project: the next-generation "Ultra Low Power" x86 cores.
Zen2 as the big core at 7LP and ULP1 as the small core at 12FDX.

The work appears to cover both "22FDX" and "12FDX", but more recent changes at AMD|GF appear to focus on "12FDX". Also, the third form of the Bulldozer/Roadwork/Construction cores targeted the 14nm generation in general, so early simulations would have used expected 14nm design rules. The only 22FDX product that ever neared release was a refresh of the 60 GHz Nitero chip.

AMD didn't adopt 28HP, instead waiting for 28SHP for production parts. The analogous move here is waiting for high-mobility strained FDSOI wafers, which appear first in 12FDX and will be backported to 22FDX later.

Around the 2016-2018 timeframe:
ULP CPU Cores
ULP GPU Cores
ULP SoC architecture
ULV Branch Predictor
ULV Cache Design
etc.
All popped up in association with the NG ULP cores.

The evolution of the earlier Low Power cores indicates a steady rise in frequency: Bobcat = 1.75 GHz -> Jaguar = 2.4 GHz. So an ultra-low-power design with low delay in both architecture and process means especially low FO4/gate/wire delay. Getting around the power consumption is the Vdd scaling offered by FDSOI. Small islands of repeated schedulers, PRFs, and execution units are better for this than the single large island of schedulers, PRFs, and execution units found in Zen.
[Image: srandzn.jpg]
(I can't read spaghetti, but this should provide the idea... Clustered = non-monolithic, standard SMT = monolithic)

Architectural threading:
1998-2004 = Clustered Microarchitecture (original ground-up K8 design - David Witt/James Keller patents)
2005-2007 = Cluster-based Multithreading
2008-2012 = Chip-level Multithreading
2013-now = Cluster-based Multithreading

Clustered Microarchitecture = Shared Retire/Rename because it is a single processor core.
Cluster-based Multithreading = Shared Retire/Rename because it is a single processor core, but with SMT enabled for increased utilization of the second execution core.
Chip-level Multithreading = Two Retire/Rename because it is two processor cores.
Repeat of Cluster-based MT ...

In Cluster-based Multithreading the retire/rename isn't fully duplicated across both sets of schedulers/execution units/PRFs; it is one unit with SMT.

Basically, this line:
"The core hardware can stay lean by supporting execution resources and bandwidth for a single thread, instead of scaling up to cover SMT throughput. As a result, the core remains small and enables a higher-frequency design"
but add [execution] to core.

Scale-out for SMT(TLP), rather than scale-up for SMT(TLP).

This in turn is based on an earlier single-threaded architecture, "High Frequency, Wide Issue Microprocessor" from 1997:
[Image: amdclusteredarchitecture.png]

Under the K8 patents, the register file is duplicated and the L0i removed (J.K.'s patents also show off a smaller core variant with just one integer core).
Under the K9 patents, the AGUs were shifted to Instruction Window 2, which connects directly to the Load/Store unit, the way Bobcat does it.

In the above architecture, both execution cores can run their own slice of a single thread; AMD's processors do happen to be OoO.
Scale-out for ILP, rather than scale-up for ILP.

Now combine the two:
Thread0 => Cluster0+Cluster1
or
Thread1 => Cluster0+Cluster1
or
Thread0+Thread1 or Thread1+Thread0 => Cluster0+Cluster1
Dynamically load the core with TLP or ILP. The core is most efficient when it is off or fully loaded.
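
Here is a quick Python sketch of that dispatch policy (the names are hypothetical illustrations, not any real hardware interface):

```python
# Toy model of a two-cluster core: load it with TLP when two threads
# are runnable, give a lone thread both clusters for ILP, power-gate
# when idle. Names are made up for illustration.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Cluster:
    name: str
    running: Optional[str] = None  # thread currently issued here

def dispatch(clusters: List[Cluster], runnable: List[str]) -> None:
    if len(runnable) >= 2:
        # TLP mode: one thread per cluster (scale-out for SMT)
        for cluster, thread in zip(clusters, runnable):
            cluster.running = thread
    elif len(runnable) == 1:
        # ILP mode: the lone thread owns every cluster (scale-out for ILP)
        for cluster in clusters:
            cluster.running = runnable[0]
    else:
        # Idle: the core is most efficient off or fully loaded
        for cluster in clusters:
            cluster.running = None

core = [Cluster("Cluster0"), Cluster("Cluster1")]
dispatch(core, ["Thread0", "Thread1"])  # Thread0=>Cluster0, Thread1=>Cluster1
dispatch(core, ["Thread0"])             # Thread0 => Cluster0+Cluster1
print(core)
```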

Now all we have to do is wait for that "Ultra Low Power" x86 core to come out. Knowing where the origin is and knowing where the project is going: Safe to conclude that it will be FX at a lower TDP and price. (If FX used the originally planned K10 core)

Bobcat vs Bulldozer:
4.9 mm2 * 2 (Double core count) * 2 (Half the lean for High-Performance) => ~19.6 mm2 ~~ near actual of 21 mm2.

Jaguar vs Excavator (same node, same libs)
3.1 mm2 * 2 (Double core count) * 2 (Half the lean for High-Performance) => ~12.4 mm2 ~~ near actual of 14.48 mm2.

Jaguar vs Zen vs Ultra low power (similar node, different libs)
1.8 mm2 * 1.5 (Double execution core count) * 1.5 (increased frequency with extra-lean) * 0.9 => ~3.645 mm2 ~~ likely very near the actual mm2.
Zen = 5.5 mm2... the ULP core gets the same ILP/TLP and higher frequency in less area. The higher frequency comes from the architecture design and the process node.
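
Quick sanity-check of the arithmetic above, using the same scaling factors (which are assumptions, not measured data):

```python
# Re-run the die-area estimates with the same assumed factors.
def scale(base_mm2, *factors):
    area = base_mm2
    for f in factors:
        area *= f
    return area

# Bobcat -> Bulldozer: double cores, double again for high-perf libs
print(scale(4.9, 2, 2))            # 19.6 mm2, actual ~21 mm2

# Jaguar -> Excavator (same node, same libs)
print(scale(3.1, 2, 2))            # 12.4 mm2, actual ~14.48 mm2

# Jaguar -> hypothetical ULP core (similar node, different libs)
print(scale(1.8, 1.5, 1.5, 0.9))   # 3.645 mm2, vs Zen at 5.5 mm2
```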
Curious question: Does FX-8350 work better with Nvidia drivers or AMD drivers? Ironic if Nvidia drivers perform better. That would be like AMD treating their child as a bastard.
Questionable answer: FX-family processors work best under open-source operating systems and open-source drivers.
 

bigboxes

Lifer
Apr 6, 2002
37,916
11,829
146
Wow. Memory lane time. Love it. Never tried out the Bulldozer stuff. In 2009, I moved to the dark side and got me an i7 920 for $200. In 2014, I upgraded to an i7 4790K. Made the switch back last year with the 5950X. I still have all my AMD CPUs from the 2000s, though. AMD just wasn't competitive with Intel once the Core processors came out. Glad things have changed in that department.

Thanks for the thread @DAPUNISHER
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
27,420
17,146
146
Hmm...interesting. So I should compare the FX-8350 to Ryzen 1500X on Phoronix. Would that be a fair comparison?
The 1500X was a substantial upgrade from my FX 8350. No more dips in games, and everything felt snappier. I swapped to a Ryzen 1600 so I could give the 1500X to my friend when his MSI Gaming 970 died like mine did. I don't know what became of his FX 8350 after that, but I know he still uses the 1500X for a server.

I am fairly certain that Windows evolving has helped FX too. All those reviews were originally done on Win 7. I recall reading, in one of the articles from 2012, a statement from AMD that FX was best with the upcoming Windows 8, and that 7 was not fully leveraging it. The latest version of Win10 Pro is showing it in its best light, methinks.
 

Asterox

Golden Member
May 15, 2012
1,002
1,735
136
What are you even on about? First, comparing it to an 8-thread Ryzen is a well DUH! Second, the 1400 wasn't the "worst" model; there were a 1200 and a 1300X that were 4/4. And seeing how my FX 8350 handles Witcher 3 better than my 3200G, it would be better than both of those 1st-gen Ryzens.

Well, it was a long time ago (five years), so I almost forgot about them. :laughing:

But again, the same comparison is very logical: 8-thread R5 1400 vs. 8-thread FX 8350.

It's not my fault that the FX-8350 is blah, and in several games it turns out worse even vs. the 4-thread/4-core Zen 1000 series.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
27,420
17,146
146
Well, it was a long time ago (five years), so I almost forgot about them. :laughing:

But again, the same comparison is very logical: 8-thread R5 1400 vs. 8-thread FX 8350.

It's not my fault that the FX-8350 is blah, and in several games it turns out worse even vs. the 4-thread/4-core Zen 1000 series.
I honestly don't know what point you are making. If it is that Ryzen is better than FX, well again, DUH! :p
 

Abwx

Lifer
Apr 2, 2011
10,600
3,071
136
Well, it was a long time ago (five years), so I almost forgot about them. :laughing:

But again, the same comparison is very logical: 8-thread R5 1400 vs. 8-thread FX 8350.

It's not my fault that the FX-8350 is blah, and in several games it turns out worse even vs. the 4-thread/4-core Zen 1000 series.
On CB the 8350 is 16% faster than a Ryzen 1300X but slower than a 4C/8T Zen 1.
In integer, which is representative of gaming perf, it is faster than both the 4/4 and the 4/8 in 7-Zip; FTR, it is 5% faster than a 1500X, which has the full L3, and 21% faster than a 1400. Over the long term it should roughly match, if not exceed, those CPUs, assuming the game is not mainly dependent on a particular thread.

On the games charts it is not far from a 1400, despite games being less multithreaded at the time...

 

NostaSeronx

Diamond Member
Sep 18, 2011
3,632
1,193
136
No, that isn't happening.
It is happening, but it isn't labeled under Family 15h, under the Bulldozer tree, or as a high-performance core. It is a new core that will primarily be fabbed at GlobalFoundries, replacing the 14nm Zen processors (practically all of them) that are going EOL next year.
[Image: amdgf.png]

There is also a roadmap beyond 12FDX:
12FDX = FEOL + Mx
12FDX-3D = FEOL + Mx + FEOL + Mx
12FDX-M3D+3D = FEOL + M1-3 + FEOL + Mx + FEOL + Mx
Scaling down the dies without cost-prohibitive shrinks => Reducing the end cost.

New products sell, old products rot. Growth in order volumes is attributed to new products; declines in order volumes are thus old products.
I recall reading in one of the articles from 2012, a statement from AMD that said FX was best with the upcoming windows 8, and that 7 was not fully leveraging it. The latest version of win10 pro is showing it in its best light me thinks.
This is generally not actually the case. The "scheduler fix" actually ruined scaling and power savings: it made the half-load boost less likely to occur, and in the majority of cases the half-load boost was more efficient than the full-load boost.

AMD FX is best when the scheduling architecture from the operating system actually doesn't assume it is a bare standard SMT processor.

FX-8300 & FX-8310 3.3/3.4 GHz = half-load boost(2 modules loaded) => 4.2/4.3 GHz (900 MHz boost) versus = full-load boost(4 modules loaded) => 3.6 GHz/3.7 GHz (300 MHz boost).

It was also noticeable in the top versions of the APUs;
A10-4600M => 2.3 GHz => 3.2 GHz(1 module loaded)
A10-5750M => 2.5 GHz => 3.5 GHz( ... )
FX-7600P => 2.7 GHz => 3.6 GHz( ... )
FX-8800P => 2.1 GHz => 3.4 GHz( ... )
FX-9800P/FX-9830P => 2.7/3 GHz => 3.6/3.7 GHz( ... )

It is unnoticeable on the FX-8350, which is 4.0 GHz plus 200 MHz (half-load) or 100 MHz (full-load). The assumption that client workloads would be heavily FPU-invested was overblown.
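
Quick sketch of the half-load vs. full-load deltas using the clocks above (base and boost clocks as cited; the deltas just fall out of subtraction):

```python
# Half-load vs. full-load boost deltas, clocks in GHz as cited above.
chips = {
    # name: (base, half_load_boost, full_load_boost)
    "FX-8300": (3.3, 4.2, 3.6),
    "FX-8310": (3.4, 4.3, 3.7),
    "FX-8350": (4.0, 4.2, 4.1),
}

for name, (base, half, full) in chips.items():
    print(f"{name}: +{(half - base) * 1000:.0f} MHz half-load, "
          f"+{(full - base) * 1000:.0f} MHz full-load")
# FX-8300/8310 gain 900 vs. 300 MHz; FX-8350 only 200 vs. 100 MHz,
# which is why the scheduler change is unnoticeable there.
```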
Really! there's that many games for PC.... wow... just wow!!!!!!!!!!!!!!! o_O
Pulling these statistics out of nowhere, with no source, only rough proportions:
~40% of ~50,000+ are asset flips = ~20,000+ asset flips
~30% of ~50,000+ are low quality clones(sometimes different engine) = ~15,000+ clones
~30% of ~50,000+ are novel games ranging from awful quality (FFF-tier) to "wow, one guy did this?" quality (SSS-tier) = ~15,000+ covering a wide range of novel variations on the same genres.

Also... not to mention that there are something like ~5,000 CYOA/interactive novels on PC.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,632
1,193
136
As simplified as possible:

Segment zero:
"Bulldozer 1" that released => "Bulldozer" & "Piledriver" <-- No Shrink
"Bulldozer 2" that released => "Steamroller" & "Excavator" <-- 28nm to 14nm; evolutionary design iterations.
"Bulldozer 3" that didn't release => "Tunnelborer" <- Straight to 14nm generation; revolutionary design.

Segment one:
"Bulldozer 3" was put into small team limbo by "Zen" and "K12"

Segment two:
"Leopard" plus "Margay" was cancelled and "K12" was canned but also put in limbo by early 2015. Edit: looked further back and this was actually done in late 2014(killed LP cores), not early 2015.

Segment three:
What was originally "Bulldozer 3", a.k.a. "Tunnelborer", was shifted from the HP cores to the LP cores and renamed.
"Bulldozer 3"/"Tunnelborer" in 2014 -> "Not Bulldozer 3"/"Not Tunnelborer" in late 2015, with it instead being called the next-generation "Low Power" x86 core, thus reviving the LP cores.

Segment four:
GlobalFoundries shows off dual-track roadmap: 7LP and 12FDX in 2016.

Segment five(white text):
The renamed LP core was further shifted to the ULP cores; no codename was announced for it as an LP core, so there is no codename yet for the ULP core. It is now called the next-generation "Ultra Low Power" x86 core, with a bunch of ultra-low-voltage design/architecture/methodology work being applied to it.

Example:
“Bobcat” AMD's new low-power x86 core architecture
"Jaguar": A next-generation low-power x86-64 core
"Zen": A next-generation high-performance ×86 core

The core that iterates on AMD's clustered microarchitecture after K10 (Bulldozer), given the IEEE titles of the prior cores, would follow the above as:
"----------": AMD's new/next-generation ultra-low-power x86 core architecture.

Low-power cores at AMD had been getting faster in clocks:
Geode LX 130nm => Peak: 600 MHz
Bobcat(canned) 65nm => Peak: 1 GHz
Bobcat 40nm => Peak: 1.75 GHz
Jaguar 28nm => Peak: 2.4 GHz
That is a ~600 MHz average increase each low-power generation... projecting Leopard = ~3 GHz, Margay = ~3.6 GHz, and ULP = ~4.2 GHz peaks, with minimums of 2.8 GHz, 3.2 GHz, and 3.6 GHz respectively.

So even if it is "ULP", that doesn't necessarily mean it is slow or low-performance. These cores instead aim for a gradual increase in performance that doesn't increase area and power.
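
Sketch of that cadence (the Leopard/Margay/ULP peaks are projections from the ~600 MHz average step, not shipping parts):

```python
# Peak-clock cadence of the low-power line, with projected next steps.
peaks_ghz = [
    ("Geode LX (130nm)", 0.6),
    ("Bobcat, canned (65nm)", 1.0),
    ("Bobcat (40nm)", 1.75),
    ("Jaguar (28nm)", 2.4),
]

gains = [b[1] - a[1] for a, b in zip(peaks_ghz, peaks_ghz[1:])]
avg_step = sum(gains) / len(gains)           # 0.6 GHz per generation
print(f"average step: {avg_step:.2f} GHz")

clock = peaks_ghz[-1][1]
for name in ("Leopard", "Margay", "ULP"):    # projections, not products
    clock += avg_step
    print(f"{name}: ~{clock:.1f} GHz peak")  # ~3.0, ~3.6, ~4.2
```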

Basically, AMD has avoided using certain marketing terms with "Zen":
"FX"
"Opteron"
"Sempron"
"G-series"

Given the timeline, AMD's switch from FX/Opteron to Ryzen/EPYC happens to coincide with the ramp-up of development for the ULP core. Another product, the ULP core, will be using those names.

Given the mass-production cues from GlobalFoundries, 12FDX is set for next year, which is where we will see AMD revisit FX.

FX-8350 (125W-AM3+) => https://browser.geekbench.com/processors/amd-fx-8350
Ryzen 3 2300X (65W-AM4) => https://browser.geekbench.com/processors/amd-ryzen-3-2300x
Ryzen 3 4100 (65W-AM4) => https://browser.geekbench.com/search?utf8=✓&q=Ryzen+3+4100
(I am avoiding the ones with GPUs and with a lower price.)

FX-480 or FX-580 (25W, AM4), at ~0.9x the absolute perf of the Zen CPUs. The reduced digit count denotes a lower-end SKU: FX has three digits while Ryzen/Athlon currently have four.

FX-480 => Under Zen2(Renoir)
FX-580 => Under Zen2(Lucienne)
ST = 0.9x and MT = 0.9x, with the FPU being 128-bit for FX rather than 256-bit like Zen 2. A cut-down Zen 2 and the new FX should have similar SIMD results per core.
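
Back-of-envelope on that SIMD parity claim (the pipe counts and clocks here are illustrative assumptions, not published specs):

```python
# Peak FP32 throughput per core: lanes * 2 (FMA) * pipes * GHz.
# Two FMA pipes and these clocks are assumptions for illustration.
def peak_gflops_fp32(simd_bits, fma_pipes, ghz):
    lanes = simd_bits // 32               # FP32 lanes per pipe
    return lanes * 2 * fma_pipes * ghz    # 2 ops per FMA

print(peak_gflops_fp32(256, 2, 3.5))  # full Zen 2 core: 112 GFLOPS
print(peak_gflops_fp32(128, 2, 3.5))  # cut-down 128-bit Zen 2: 56
print(peak_gflops_fp32(128, 2, 4.2))  # hypothetical FX at higher clock: ~67
```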

2023 => Single CPU in package
2024 => Dual CPU in package or CPU+Discrete GPU(Malta's 3D DRAM-on-GPU production line) in package

FX-xxx (up to 8 clusters), FX-xxx Duo (up to 2x8 clusters), or FX-x8x with integrated RX x1x through x4x (up to 1x8 CPU + 1x8 GPU). The Opterons seem to have 4P configs in the private demos: socket (SP3? = 4x CPUs per socket) : slot (PCIe? = 1x CPU in BGA per slot) : cartridge (extra PCIe? = 4x CPUs in BGA per cartridge).

There is also the return of AMD's lost FIVR: https://ieeexplore.ieee.org/document/5433981
AMD's 32nm development node (2010):
[Image: fivr1.jpg]
Peer-reviewed 32nm production node (2015):
[Image: fivr2.jpg]

AMD indicated that fully integrated switched voltage regulation is unsuitable at Vp7 = 1.1 V and Vp0 = 1.2 V, which is about where current/old AMD CPUs operate. They later pointed out in 2014, for Vp7 = 0.75 V and Vp0 = 1.2 V, that there is a 30% power gain available to the cores with the FIVR.

Hence why the stock voltage would be 0.75 V, with the boost voltage scaling upward: the cores could consume the 30% of power that the system would otherwise lose. For extremely low output voltages (0.35-0.55 V), SC-type regulators are preferred over LDO-type.
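
For scale, here is the standard CMOS dynamic-power relation, P_dyn ~ C * V^2 * f, applied to those operating points (the exact 30% figure is from the cited work, not derived here):

```python
# CMOS dynamic power scales as P ~ C * V^2 * f (C held fixed here).
def dyn_power_ratio(v_new, v_old, f_new=1.0, f_old=1.0):
    return (v_new / v_old) ** 2 * (f_new / f_old)

print(dyn_power_ratio(0.75, 1.2))  # ~0.39: ~61% less dynamic power at 0.75 V
print(dyn_power_ratio(1.2, 0.75))  # ~2.56: headroom reclaimed when boosting
```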
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,006
1,010
136
BF5 breaks every old CPU, including the Intel stuff. I had to junk my 3570K because of that game. The FPS was good enough, but the FPS counter didn't account for the dropped frames and choppiness that benchmarks miss. 6 cores/12 threads solved all those issues. I know someone who dumped his 7700K because of BF5 when the 7700K was less than two years old. 4 cores/8 threads was not enough for BF5.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,963
1,680
136
The FPS was good enough, but the FPS counter didn't account for the dropped frames and choppiness that benchmarks miss. 6 cores/12 threads solved all those issues.

Just curious. How much RAM was involved? In my experience having "too little" memory can have pretty much the same effect.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
27,420
17,146
146
Just curious. How much RAM was involved? In my experience having "too little" memory can have pretty much the same effect.
@Hans Gruber is on point about how demanding BF5 gets, and how you won't see that from any mainstream source's benchmarks.

RA Tech showed gameplay with an Ivy Bridge i5 and the FX 8350, direct competition at MSRP, though realistically the FX could regularly be found for $50-$75 less. Mine was under $100, making it cheaper than a locked Haswell i3 when I bought it. Even the dumb talking point forum trolls used about having to buy aftermarket cooling with the FX was, well, dumb. I STILL have the Hyper 212 I bought for mine, all these years later; it is currently on a Z490 system. It is NEVER a bad idea to buy a quality cooler that gets adapter kits for use with future boards, which most good cooler companies have offered for years now.

Back to the game results: the FX was not flawless during his session, with occasional frame pacing issues due to how CPU-demanding 64-player is. Contrastingly, the i5 really struggled.

Hence, it would not surprise me that a 12-thread CPU was needed to completely smooth things out. That was my experience with the DRM-infested Assassin's Creed Odyssey: my overclocked 4770K was having issues with intense combat. Even the Ryzen 2600 system I replaced it with for HTPC duty could struggle in the most intense situations. It took a Ryzen 3600 to completely smooth the game out, playing on an HDTV without VRR.

Far from the first time I have heard of someone ditching 7th gen because of a lack of threads. 7th gen aged like warm milk, and within two years the $300+ 7700 was effectively a $100 i3. Heck, 8th gen was rushed out before the year was even over. The sheer hubris, and the reviewers backing them up with their terrible testing: shameful.