Question The FX 8350 revisited. Good time to talk about it because reasons.


lightmanek

Senior member
Feb 19, 2017
508
1,245
136
Are the Phenom II X6's still pretty good? To my understanding, at least in the past, they actually did a bit better than their successors, the Bulldozer CPUs.
For legacy software they mostly are better, but the lack of modern instructions keeps Phenoms from running certain games and programs altogether.
 

DrMrLordX

Lifer
Apr 27, 2000
22,543
12,412
136
Are the Phenom II X6's still pretty good? To my understanding, at least in the past, they actually did a bit better than their successors, the Bulldozer CPUs.

No. There are many platform reasons not to use Thuban as anything but an airgapped hobbyist machine.
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
8,039
2,985
146
No. There are many platform reasons not to use Thuban as anything but an airgapped hobbyist machine.
Platform reasons? Could you elaborate? My understanding is that if the motherboard can run an FX Bulldozer/Piledriver CPU, it can also run a Thuban?
 

NTMBK

Lifer
Nov 14, 2011
10,400
5,636
136
Are the Phenom II X6's still pretty good? To my understanding, at least in the past, they actually did a bit better than their successors, the Bulldozer CPUs.

Fun to mess around with, but they're missing SSE4.1 and SSE4.2. That makes them incompatible with a lot of modern software.
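A quick way to see why such software simply refuses to start is to probe the CPU's feature flags at runtime. A minimal sketch using GCC/Clang's __builtin_cpu_supports (my own illustration, not taken from any particular game or library):

Code:
/* probe.c - build with: gcc -O2 probe.c -o probe
 * Phenom II (K10) reports SSE3 and SSE4a but not SSE4.1/SSE4.2,
 * so a binary compiled to assume those paths simply won't run on it. */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();  /* populate the compiler's CPU feature cache */

    printf("sse4.1: %s\n", __builtin_cpu_supports("sse4.1") ? "yes" : "no");
    printf("sse4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
    printf("avx2  : %s\n", __builtin_cpu_supports("avx2")   ? "yes" : "no");
    return 0;
}

On a Thuban the first two print "no", which is the whole point: it isn't a speed problem, the instructions just aren't there.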
 

DrMrLordX

Lifer
Apr 27, 2000
22,543
12,412
136
Platform reasons? Could you elaborate? My understanding is that if the motherboard can run an FX Bulldozer/Piledriver CPU, it can also run a Thuban?

Missing SIMD instructions and likely vulnerable to Spectre.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,803
1,286
136
Pretty much everything earlier than Summit Ridge is obviously a bad choice for a modern system, since those parts missed out on Adaptive Clocking (Steamroller), AVFS (Excavator), and Pure Power (Zen).

AVX2 is definitely needed to get the most out of the Linux/GCC/LLVM x86-64-v3 ports.
However, Zen 4 is clearly going to be the best CPU in the long run given its support for x86-64-v4. Better to save up and get something that will last until x86's death.
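For reference, x86-64-v3 essentially means an AVX2-class baseline, which no FX chip meets since they top out at AVX; distros targeting that level just build everything with -march=x86-64-v3. A rough sketch of how a single binary can still cover both worlds, using GCC's function multi-versioning (target_clones is a real GCC attribute; the function itself is made up for illustration):

Code:
/* sum.c - build as part of a program with: gcc -O3 -c sum.c
 * target_clones makes GCC emit a baseline x86-64 version and an AVX2
 * (x86-64-v3-class) clone of the same function; the ifunc resolver
 * picks one at load time, so the binary still starts on FX or Phenom II. */
#include <stddef.h>

__attribute__((target_clones("default", "avx2")))
double sum(const double *v, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += v[i];   /* the avx2 clone can be auto-vectorized with 256-bit ops */
    return s;
}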
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,803
1,286
136
Any ETA on that? 5 years? 10 years?
Why would I have an ETA... but totally off-topic now:

I guess it will depend on how fast AMD switches over to another ISA with more flexibility:
As an AMD RISC-V CPU/GPU micro-architect/RTL designer, CPU-RISCV (Compiler Developer), etc
- High speed GPUs
- RISC-V RV64 CPUs
- RV64 ISA Extensions M, A, B, F, V
- CPU principles: branch predictors, register renaming, out-of-order execution, speculative execution

Watch this for now:
- https://github.com/riscv-admin/graphics
Which should allow mainstreaming of:
- https://riscv.org/wp-content/uploads/2019/12/12.10-10.46a-ThinkSiliconLightningTalk.pdf
"(If) the main CPU is also RISC-V based, it is possible to dynamically off-load the main CPU of some tasks making some of the GPU cores appear as additional system cores"
A Rembrandt-esque design with a RISC-V CPU+GPU and the above capability would have 8 main CPUs and 24 off-load CPUs (schedulers = cores).

On legacy support, box86/box64 is no longer ARM-only but also covers POWER, so a RISC-V port is just somewhere in the time stream.
 
Last edited:

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
8,039
2,985
146
I agree that a Phenom II X6 is not a good fit for any modern gaming system. Obviously a new Zen 3 or Alder Lake system would be the proper thing to have, though many somewhat older Intel platforms would also be fine for the most part (Coffee/Comet/Rocket Lake, as well as X99).

My question was more along the lines of, what has better performance today, a 1090T / 1100T, or something like the FX 8350?

Though for a very basic system with no modern gaming or demanding compute, using a light Linux distro, either would be fine I am sure. I have a s939 Athlon X2 running Lubuntu just fine. The system has an SSD though :p
 
  • Like
Reactions: AnitaPeterson

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
Are the Phenom II X6's still pretty good? To my understanding, at least in the past, they actually did a bit better than their successors, the Bulldozer CPUs.

In certain cases they were a bit faster than the original Bulldozer. Piledriver was pretty much equal.

Any ETA on that? 5 years? 10 years?

I doubt x86 is going anywhere soon.

But it's restricted to the PC platform, especially since Apple went ARM, and that platform may become less and less relevant as time goes by. People are increasingly buying phones and tablets rather than full-fledged PCs. Perhaps with good reason, since most have no use for Windows'/Linux's more advanced features and can't even be bothered to learn how to change the wallpaper. Most are more interested in getting something that Just Works™ without too much configuration.

So rather than dying outright it may just become more niche.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
No. There are many platform reasons not to use Thuban as anything but an airgapped hobbyist machine.

That's actually what I'm doing. For retro software it's very good. Particularly since XP doesn't want to play nice with Bulldozer.

Due to energy costs I did have to move my 2D retro gaming onto an E350-based system. Can't really justify Thuban's power consumption for those old titles.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
31,219
29,881
146
Piledriver is faster than any Phenom, and 10-15 percent faster than Bulldozer on average.

RA Tech compared them - stock - same clockspeed - overclocked. Phenom's IPC kept it dang competitive at similar clocks, but Piledriver was a good overclocker. Tweak both for best performance, and the 8350 walks away from its older stable mate. As the years have gone by, the 2 extra threads have mattered more as well.

 

Thunder 57

Diamond Member
Aug 19, 2007
3,494
5,796
136
Piledriver is faster than any Phenom, and 10-15 percent faster than Bulldozer on average.

RA Tech compared them - stock - same clockspeed - overclocked. Phenom's IPC kept it dang competitive at similar clocks, but Piledriver was a good overclocker. Tweak both for best performance, and the 8350 walks away from its older stable mate. As the years have gone by, the 2 extra threads have mattered more as well.


I would agree Piledriver was faster. With Bulldozer though it was more hit or miss. Unfortunately we never got Steamroller or Excavator in the desktop FX line, but those cores were pretty decent. Nothing compared to Zen though.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
31,219
29,881
146
So...what you're telling me is...the Butler did it. :p

I have not been using my FX 8350 lately. With an SSD, 32GB of 1866MHz DDR3, and an RTX 2070 Super, it has zero problems with daily-driver stuff. Of course, there are games that definitely expose it. Though a 1440p VRR monitor helps smooth things out some, the FX can certainly remind you how bad its single-core performance is. I want to try Halo Infinite and Gears 5 with it.
 
Jul 27, 2020
24,268
16,925
146
I have not been using my FX 8350 lately. With an SSD, 32GB of 1866MHz DDR3, and an RTX 2070 Super, it has zero problems with daily-driver stuff.
Your GPU is 40% faster than mine. Try running the Matrix Awakens demo.



Set the following values in the INI file to make it easier on your GPU:

sg.ShadowQuality=1
sg.GlobalIlluminationQuality=1
 
  • Wow
Reactions: AnitaPeterson

NostaSeronx

Diamond Member
Sep 18, 2011
3,803
1,286
136
Always makes you wonder what could have been if AMD just kept shrinking and tweaking and adding cores to the existing K architecture in Thuban instead of dumping all kinds of R&D into the Dozerpile.
My understanding is that the K projects were always going for actual CMT2.
Early K8 designs were all clustered architectures with 2x K6-IV execution cores.
Early K10 was derived from that earlier clustered implementation (the K8 by David Witt/Jim Keller); it added CMT2 (both clusters can run different threads) and removed the integrated FPU (K6's Multimedia/Floating Point Unit) from each execution cluster.
So the K8/K9 clustered patents describe a K6-esque execution core (Int/Mem/FPU), whereas K10 ditched the duplicated FPU.

This is the K10 CMT patent;
cmt-k10.png
Notice that in Bulldozer's early days there was only ever ONE retire queue.

The point where the architecture's building block stopped being described as clusters and started being described as CPUs is when the retirement logic went from shared to dedicated.

---
Meet the Bulldozer genius

Moore was the first Bulldozer chief architect and then became a senior fellow on another project: https://ieeexplore.ieee.org/document/4771772
He is also the one who coined "Cluster-based Multithreading" in 2005.

Going through it, though, by 2009 Butler/Moore weren't singing the praises of a brand-new Cluster-based Multithreading architecture, but rather of a brand-new conjoined-cores architecture.
"Chuck Moore, chief technical officer of AMD's technology development group, said a new chip, code-named Bulldozer, 'is designed from the bottom up to take advantage of low-power technologies.'" =>> Each chip has conjoined cores, the big management portions of the chip, which share some real estate and architecture.

In this case, they aren't using Cluster-based Multithreading; in the IEEE description the term becomes:
"This new micro-architecture contains two processor cores that implement chip-level multi-threading (CMT)."
----
The simpler what-if to wonder about is whether AMD had actually implemented Cluster-based Multithreading as intended.
bulldozerlin1.png
Linear scaling => Clustered
bulldozerlin2.png
Singular core => Clustered

AMD's internal numbers for multithreaded scaling:
SMT = ~1.3x scaling, +5% area
CMP = ~1.7x scaling, +100% area
Cluster-based Multithreading = ~1.8x scaling, +50% area; The 1.8x scaling can also be used against monolithic gains: Zen's ~+52% * 0.8 => ~+41.6% single-threaded improvement just by actually doing clusters instead of cores.

Roadmap-wise, the removal of Cluster-based Multithreading was set sometime before November 2008, since by 2009 they were already saying it was tightly linked cores.

If it had launched as intended, with the correct threading/architectural layout, AMD would have been at least two years ahead of Haswell's 4-ALU implementation and would have been able to dodge most, if not all, of the negatives that popped up.

And Cluster-based Multithreading was well researched:
"Note that the cycle-time of these clustered architectures is much smaller than that of the centralized SMT. Indeed, Palacharla and Jouppi [12] estimate that the cycle-time for an 8-issue processor will be twice as long as a 4-issue processor when using 0.18um technology. In the light of their observations, clustered SMT, with two 4-issue clusters, may have a frequency that is twice higher than centralized SMT." - A Clustered Approach to Multithreaded Processors - 1998

"Clustering is an architectural technique that allows the design of wide superscalar processors without sacrificing cycle time, but at the cost of longer communication latencies. Simultaneous multithreading architectures effectively tolerate instruction latency, but put even more pressure on timing-critical processor resources. This paper shows that the synergistic combination of the two techniques minimizes the IPC impact of the clustered architecture, and even permits more aggressive clustering of the processor than is possible with a single-threaded processor." - Clustered Multithreaded Architectures – Pursuing Both IPC and Cycle Time - 2004

"The corresponding clustered multi-threaded (CMT) architecture is highly competitive with un-realizable SMT processors, achieving 90-96% of the cycle-level performance of a partitioned SMT (which improves on the base SMT), while dissipating about 50% of its energy." - Partitioning Multi-Threaded Processors with a Large Number of Threads - 2005

Post-RISC architectures were all clustering up as well:
The initial design was a simple CMT4 of 3-wide VLIW (Int+Mem+FP) clusters with 5 temporal threads.

Meanwhile... CMT2 in the FPU is just sitting there passive aggressive like.
fpucmt2.png
Shared Retire, Shared Rename, Independent Repeated Scheduler/Execution Units/Physical Register Files, Shared Load/Store.
 
Last edited:
  • Like
Reactions: GodisanAtheist

Abwx

Lifer
Apr 2, 2011
11,786
4,695
136
That may be true, but it's more of a footnote than anything. How many people bought an Excavator on AM4? It was too little far too late. In that benchmark it still couldn't catch an i5-2500k from six years earlier.

Its contenders were the 2C/4T i3s; if it wasn't for the outdated process and power consumption, it was somewhat competitive.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
31,219
29,881
146
That's basically a nice way of calling it garbage. It was an interesting idea and I thought AMD might have really been on to something, but it was a flop. Those were some dark years for AMD.
When members start discussing the features and arch of the latest CPUs, this is me reading the discussion from up in the cheap seats.

unga.jpg

Despite that, my hands-on experience with many of them can be extensive. And that practical experience does not always sync up with the technical discussions. It doesn't matter to me how many white papers or whatever are cited, or how many bar graphs a member spams from deficient major reviews. If my experience doesn't reflect theirs, I have the hubris to state it is because I use the hardware far more extensively than they ever have. The FX is the poster child for it. Years of roasting it, and endless bar charts, and they are mostly inadequate. Play the games, use the system as a daily driver, then post results. It would never happen, because that is not going to maximize revenue.

One current example of a divergence from the norm is Jayz2cents' experience with 12th gen. He has been using it as his personal system, and he has had issues. He is worried about it not being solid enough for his live streams, which is something he demands perfect stability for. He is considering ditching 12th gen altogether. That's definitely not the official narrative Intel wants told. And while it is likely one of the components, it is something he doesn't usually suffer, making it notable enough to do a video on.

The FX was always better than what all those reviews show. But you have to stop looking at fps, have a frame-time graph running, and get deep into the games to see where those extra threads shined. The FX always looked bad because of how and where games get tested, which makes the old i3s look better. But get into Witcher 3, Crysis 3, BF:V, or other games with high CPU demands at times, and the i3 starts having a bad time while the FX 6300 and 8350 were providing superior gaming experiences. Again, those bar charts were never going to show it. Richard from Eurogamer/DF was the first sizable reviewer to show how the i3 was great in Witcher 3 until it wasn't, and how the 8350 was doing a much better job. Those two CPUs were priced very closely to each other at the time; AMD obviously resorted to price cuts to move inventory.

The benchmarking usually did a terrible job of reflecting how FX overclocking could be a big boost too. All you would read was that the power and heat weren't worth it for such minimal returns. Certainly there were games where it did almost nothing. But there were also games where a very average 4.5GHz overclock, with the DDR3-1600 run at 2133MHz, could improve performance by as much as 25 percent. I don't include NB overclocking because, despite claims it can help frame pacing, I never saw anything worthwhile from it.
 
Jul 27, 2020
24,268
16,925
146
And while it is likely one of the components, it is something he doesn't usually suffer, making it notable enough to do a video on.
It has to be one of the components, or some trouble with the E-cores. My friend with an i5-12400 and H610M has yet to report a single deal-breaking issue, and I'm the first one he discusses any IT issues with when he starts having them.