Question: The FX 8350 revisited. Good time to talk about it because reasons.


Abwx

Lifer
Apr 2, 2011
10,931
3,423
136

VirtualLarry

No Lifer
Aug 25, 2001
56,315
10,031
126
One current example of a divergence from the norm is Jayz2cents' experience with 12th gen. He has been using it as his personal system, and he has had issues. He is worried about it not being solid enough for his live streams, which is something he demands perfect stability for. He is considering ditching 12th gen altogether. That's definitely not the official narrative Intel wants told. And while it is likely one of the components, it is something he doesn't usually suffer, making it notable enough to do a video on.
That's really interesting, because when I built my 12th-Gen Pentium Gold G7400 rig, the first mobo I used was completely dead-dead, not a peep, not sure why. It was ASRock's cheapest LGA1700 board on the market at the time.

Replaced it with an Asus Z690 mATX, which ran better, with 2x8GB of Silicon Power DDR4-3200, pretty generic stuff, but when I was testing it, I got a BSOD one time.

I didn't think that much of it, exactly, although it did make me worry a little bit about selling off the rig later on, in case there were problems.

Now I know I'm not alone in having stability issues with 12th Gen. (BTW, the G7400 was not overclocked at all; that's not possible in this config.)
 
Jul 27, 2020
16,127
10,191
106

The comments are interesting. Some are having problems with DDR5. One guy solved his problems by switching from one XMP profile to the other. I guess it's luck. Didn't the Ryzen 5000 series have USB problems that drove people nuts?

Replaced it with an Asus Z690 mATX, which ran better, with 2x8GB of Silicon Power DDR4-3200, pretty generic stuff, but when I was testing it, I got a BSOD one time.

Now I know I'm not alone in having stability issues with 12th Gen. (BTW, the G7400 was not overclocked at all; that's not possible in this config.)
Should have done an extensive memtest to rule out the RAM. Also, ASUS was responsible for the reversed capacitor on their Z690. Something is really wrong with their validation. I mean, I'm not an engineer, but seriously, how can you miss a component placed in reverse???
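
As an illustrative aside: the idea behind an extensive memory test is simple, even if the real tools are not. Below is a minimal sketch in C, assuming a hypothetical ~256MB test buffer; real tools such as MemTest86 or memtester run many more patterns and cover far more of the address space.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void) {
    /* Hypothetical buffer size; real tools test as much of the RAM as they can reach. */
    size_t words = ((size_t)256 * 1024 * 1024) / sizeof(uint64_t);
    uint64_t *buf = malloc(words * sizeof(uint64_t));
    if (!buf) { perror("malloc"); return 1; }

    const uint64_t patterns[] = { 0x0ULL, ~0x0ULL,
                                  0xAAAAAAAAAAAAAAAAULL, 0x5555555555555555ULL };
    size_t errors = 0;

    for (size_t p = 0; p < sizeof patterns / sizeof patterns[0]; p++) {
        for (size_t i = 0; i < words; i++)          /* write a pattern */
            buf[i] = patterns[p] ^ (uint64_t)i;
        for (size_t i = 0; i < words; i++)          /* read it back and compare */
            if (buf[i] != (patterns[p] ^ (uint64_t)i))
                errors++;
    }

    printf("%zu mismatches found\n", errors);
    free(buf);
    return errors ? 1 : 0;
}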
 

Asterox

Golden Member
May 15, 2012
1,026
1,775
136
That may be true, but it's more of a footnote than anything. How many people bought an Excavator on AM4? It was too little, far too late. In that benchmark it still couldn't catch an i5-2500K from six years earlier.

Yes, but the main point is that only Excavator is, hmm, OK; everything below it is kind of lame and miserable. As for Cinebench R15, Piledriver can't beat the old Phenom II X4.


For what it is, the A8-7600 (Steamroller) is an OK APU if you can get it cheap on the FM2+ socket.

Blah, I found an AM4 Excavator, an Athlon X4 970, locally for 20 euro. :grinning:

 

Ranulf

Platinum Member
Jul 18, 2001
2,345
1,164
136
It has to be one of the components, or some trouble with the E-cores. My friend with an i5-12400 and H610M has yet to report a single deal-breaking issue. I'm the first one he discusses any IT issues with when he starts having them.

My guess would be he's running Win10, and there are no E-cores on that CPU.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,670
3,788
136
FX was always better than what all those reviews show. But you have to stop looking at fps, have a frame-time graph running, and get deep into the games to see where those extra threads shined. FX always looked bad because of how games get tested, and where they are tested. It makes the old i3s look better. But get into Witcher 3, Crysis 3, BF:V, or other games with high CPU demands at times, and the i3 starts having a bad time, while the FX-6300 and 8350 were providing superior gaming experiences. Again, those bar charts were never going to show it.

I would agree that the Con(struction) cores aged well for games like BF. My Ivy Bridge i5 was a bottleneck in BF1.
 

DrMrLordX

Lifer
Apr 27, 2000
21,609
10,803
136
Didn't the Ryzen 5000 series have USB problems that drove people nuts?

Yes, though it was more of a platform issue with a certain range of AGESA versions. I think people with Matisse on newer AGESA versions could have the same problems.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
FX was always better than what all those reviews show. But you have to stop looking at fps, have a frame-time graph running, and get deep into the games to see where those extra threads shined. FX always looked bad because of how games get tested, and where they are tested. It makes the old i3s look better. But get into Witcher 3, Crysis 3, BF:V, or other games with high CPU demands at times, and the i3 starts having a bad time, while the FX-6300 and 8350 were providing superior gaming experiences. Again, those bar charts were never going to show it. It was Richard from Eurogamer/DF who was the first sizable reviewer to show how the i3 was great in Witcher 3 until it wasn't, and how the 8350 was doing a much better job. Those two CPUs were priced very close to each other at the time; AMD obviously resorted to price cuts to move inventory.

The benchmarking usually did a terrible job of reflecting how big a boost FX overclocking could be, too. All you would read was that the power and heat were not worth it for such minimal returns. Certainly there were games where it did almost nothing, but there were also games where a very average 4.5GHz, DDR3-1600 @ 2133MHz overclock could improve performance by as much as 25 percent. I don't include NB overclocking because, despite claims it can help frame pacing, I never saw anything worthwhile from it.

Yeah, the FX-8350 was way better for BF multiplayer vs. the Core i3 at the time. I fought huge battles on this very forum to point out the difference between the two, but only a few would listen at the time.
In BF3 the FX-8350 was even very competitive against the Core i7-3770K at the time.



In BF4 with Mantle the FX-8350 was a beast, capable of more than 120 fps.

 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
28,426
20,402
146
@AtenRa

I recall; I was an avid reader of your APU content. I always liked IGPs, whether on the board or on the CPU.

I weighed in now and then, mostly with the "WTFudge, mate?" style of posts, because the usual suspects would be flaming FX, and that is what would pop into my head. I was not having the bad time I was supposed to be having with FX. My 8320E was, in fact, tremendous bang for the buck. Plus I was using a Samsung monitor that supported FreeSync over HDMI.

My only complaint from those years was that the 8350 burned up my MSI 970 Gaming, and the RMA replacement. The other tech who worked with me had the same combo, and his 970 Gaming burned up too. I don't know what the problem with that model was; he didn't even overclock. The thermal pads on the VRMs would get so hot that they would start to liquefy and soak through the board to the other side. The Gigabyte 990FX I have now is much better suited to use with the 8350, though I have not subjected it to the Total War: Warhammer marathons the 970 had to endure.

I have been playing Witcher 3 on the 8350 lately, and it does a locked 60 fps except for cutscenes. Probably just need an ini edit for that? I use a Wraith Prism with it, but I am planning to go 2133MHz with my 1866 RAM and shoot for 4.7GHz with a Noctua D15. Should be fun to see how badly Cyberpunk'd will beat it up.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
I have been playing Witcher 3 on the 8350 lately, and it does a locked 60 fps except for cutscenes.

The Witcher isn't as CPU-heavy as one might think. Even my Athlon X3 445 (with a GTX 970) can do 30 FPS @ 1440p. It'll run at 100%, but it'll run.

After all, they got it running on the Switch with some compromises.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
28,426
20,402
146
The Witcher isn't as CPU-heavy as one might think. Even my Athlon X3 445 (with a GTX 970) can do 30 FPS @ 1440p. It'll run at 100%, but it'll run.

After all, they got it running on the Switch with some compromises.
For most areas of the game, yes. Novigrad, however, induces frame-pacing issues on older 4C/4T and below CPUs. How does the X3 do there? Does it stay at a smooth, locked 30 while riding Roach through the busy streets and marketplace? I sold my Athlon II X4 setup or I'd try that out.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
Novigrad, however, induces frame-pacing issues on older 4C/4T and below CPUs. How does the X3 do there? Does it stay at a smooth, locked 30 while riding Roach through the busy streets and marketplace?

Not entirely; there are occasional drops, but it's playable. I think the extra physical core, even if somewhat weak, does wonders compared to 2C/4T CPUs. Hyperthreading can only do so much.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
28,426
20,402
146
Updating with the FX-8350 vs. Ryzen 3 3200G in Witcher 3. To quote Mortal Kombat: FIGHT!

Spoiler: the FX wins, but not a flawless victory. How many reading this would have picked it going in?

The FX is overclocked to 4.5GHz with 16GB of DDR3-2133 now. The Ryzen is at its stock 4GHz boost with 16GB of DDR4-3000. Both run the game from the same newer-gen WD Black HDD, and both use a 3060 Ti.

I don't know when the last performance patch for PC was, but the game can use up to 12 threads. As I mentioned in the PC gaming forum, this game turns 8 years old in 2 days. It must have been serious business for i3 and i5 users and below, at least with everything maxed in the settings and post-processing.

The 3200G takes longer to finish loading everything initially. It maxes out far more often, and there are far more frame drops and pacing issues. Don't get me wrong, I can adventure through most of Skellige and rarely have that happen. But at the Witcher castle, and in the real CPU test, the city of Novigrad when the place is poppin' and the NPC crowds are heavy, it is fully loaded, and the experience, while playable, is not what I would consider acceptable. There was another issue, even in Skellige, that you won't find mentioned in reviews; more on that in a moment.

The FX also takes a bit of time to load initially, but significantly less than the Ryzen. The 5700G is flawless, and none of these observations apply to it; it does 1080p locked 60 at max settings while yawning. The FX might have a second or two of maxed-out threads and drop from a locked 60 to 58 where it is particularly busy and I am sprinting. In that same situation, the 3200G full-on hitches and drops into the low 50s, and 100% usage is far more common too.

Now to a difference that won't show up in bar graphs or frame-time charts, an issue where the FX wins again: stuff popping in, most notably while sprinting or on Roach. The 5700G yawns, and the FX does a much better job with it, while the 3200G does it even when holding a good frame time. I have no idea how an X3 can be playable, not to mention enjoyable, in that situation. Less demanding settings with background crowds turned down?

I can't see any way an i5 was EVER a better pick for this game, even when it launched. Conclusion: CPU reviewers suck.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
28,426
20,402
146
One more pre-exercise-stack-fueled rant. :p

Witcher 3 is a great example of how uncapped frame rates obfuscate the gaming experience. RA Tech questioned the HUB and GN revisits of FX because their results were so bad, and my experiences support his findings. That the Ryzen can pump out a higher max FPS than the FX without a frame-rate limit in Witcher 3 means nothing to me. It will look great on paper; the max and average fps will be better than the FX's. But after spending 5-hour sessions in-game on both systems, I'd much rather play on the FX.
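
As an illustrative aside, here is a minimal sketch in C of why averages hide pacing problems; the frame times below are made up, not measurements from either rig. A run that averages close to 50 fps can still contain a hitch that drags the 1% low down to roughly 10 fps, which is exactly what a bare max/average number glosses over.

#include <stdio.h>
#include <stdlib.h>

/* Sort frame times in descending order so the worst frames come first. */
static int cmp_desc(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x < y) - (x > y);
}

int main(void) {
    /* Made-up frame times in milliseconds: 19 smooth frames plus one big hitch. */
    double ft[] = { 16.7, 16.7, 16.7, 16.7, 16.7, 16.7, 16.7, 16.7, 16.7, 16.7,
                    16.7, 16.7, 16.7, 16.7, 16.7, 16.7, 16.7, 16.7, 16.7, 95.0 };
    size_t n = sizeof ft / sizeof ft[0];

    double total = 0.0;
    for (size_t i = 0; i < n; i++) total += ft[i];
    double avg_fps = 1000.0 * n / total;            /* what the bar chart shows */

    qsort(ft, n, sizeof ft[0], cmp_desc);
    size_t worst = n / 100 ? n / 100 : 1;           /* the worst 1% of frames */
    double worst_total = 0.0;
    for (size_t i = 0; i < worst; i++) worst_total += ft[i];
    double low_fps = 1000.0 * worst / worst_total;  /* closer to what you feel */

    printf("average: %.1f fps, 1%% low: %.1f fps\n", avg_fps, low_fps);
    return 0;
}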
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
@Insert_Nickname said it is playable on his AMD X3 system. I was expressing a bit of wonderment that it is so capable, based on how demanding it can be.
Ah, okay. Makes sense. Thanks.

Remember, that's only at 30 FPS. Given it's running on the Switch, I'm not that surprised. The K10 core in the X3 is also slightly more powerful than the Jaguar cores in the PS4/Xbox, and it runs at a higher frequency (3.1GHz). Crowds are of course set to minimum, so that probably helps too.

The third core apparently helps a lot. This actually happens more than one might think: having a core to take care of the OS really helps more than tests/reviews might show.
 

DrMrLordX

Lifer
Apr 27, 2000
21,609
10,803
136
Witcher 3 doesn't require SSE4.x? Interesting. Lots of modern software will bug out on CPUs without full SSE4.x support; K10 doesn't support SSE4.1, for example.
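
As an illustrative aside, here is a minimal sketch of how a program can check at runtime for SSE4.1 (the level K10 lacks), assuming GCC or Clang on x86; this is not something from the post, just a quick way to see what a given CPU reports.

#include <stdio.h>

int main(void) {
    __builtin_cpu_init();   /* populate the CPU feature data used below */
    if (__builtin_cpu_supports("sse4.1"))
        printf("SSE4.1 is available on this CPU\n");
    else
        printf("No SSE4.1 -- code compiled for it would die with an illegal instruction\n");
    return 0;
}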
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,686
1,221
136
This really irks me. We literally got all the K-numbers sorted out by AMD just a while ago.

K10 Microarchitecture = Bulldozer
Bulldozer cores (microprocessor based on K10 microarchitecture)
The design and verification of the 45 nm clock skew and clock tree test chip for the K10 (Bulldozer)

K9 Microarchitecture = Greyhound
Mike Clark@AMD (October 26, 2021): "From there, I did the Greyhound (K9) core, I was the lead architect there, which was a derivative of K8."

Also, Witcher 3's mandatory instructions appear to only go up to SSE3.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,670
3,788
136
This really irks me. We literally got all the K-numbers sorted out by AMD just a while ago.

K10 Microarchitecture = Bulldozer
Bulldozer cores (microprocessor based on K10 microarchitecture)
The design and verification of the 45 nm clock skew and clock tree test chip for the K10 (Bulldozer)

K9 Microarchitecture = Greyhound
Mike Clark@AMD (October 26, 2021): "From there, I did the Greyhound (K9) core, I was the lead architect there, which was a derivative of K8."

Also, Witcher 3's mandatory instructions appear to only go up to SSE3.

I don't know why it irks you so much when you are wrong. BD was never referred to as K10. K9 might actually have been a thing, but of course you provide no evidence.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,686
1,221
136
I don't know why it irks you so much when you are wrong. BD was never referred to as K10. K9 might have actually been a thing though, but of course you provide no evidence.
Officially, none of them were labelled with a K-number after Greyhound.
amd2.png

Before the switch to the Family 10h Software Optimization Guide, the actual Software Optimization Guide was the Deerhound/Greyhound one, i.e. the DH/GH Software Optimization Guide.
amd1.png

However, information and valid sources point towards:
K9 = Greyhound
K10 = Bulldozer

The lead architect of K9 (Greyhound) was Mike Clark, supporting Greyhound being K9.

Greyhound was never called K10 at AMD.
Bulldozer was always called K10 at AMD.

So, if you want to be 100% accurate: it is either Greyhound or K9, but not K10, since that name was used exclusively for Bulldozer at AMD. It is just another symptom of one news outlet saying K10 because of Family 10h, and the misinformation echo chamber running away with it.

The general idea you are putting forward, Thunder 57:
From the K7 architecture:
-> Architecture 1/Never leaked-only patent = K8
-> Architecture 2/Never leaked-only patent = K9
-> Fred Weber's Architecture = K10
-> Architecture 1/Leaked by Mitch Alsup and patented = K11
-> Architecture 2 = K12 (Greyhound)

Rather, it was: K7 (Argon), then:
Architectures that never came to be... don't impact the count
Fred Weber's architecture = K8 (Hammer)
Architectures that never came to be... don't impact the count
The architecture that releases = K9 (Greyhound)
Bulldozer 45nm/32nm are both listed as K10, so both iterations, regardless of changes, = K10 (Bulldozer)

The only reason Greyhound/Bulldozer are K9/K10 is that their microarchitecture project names, K9/K10, were holdovers from how old those projects were.

Dirk Meyer's explanation in 2006 of the changes from 2004 to 2006:
K8 Revision Hound => K9 Performance/Greyhound/Family 10h
K8 Revision Leo => K9 Mobile/Lion/Family 11h
--- From a separate source shortly after ---
Bulldozer started under the K10 name in 2004, only getting the Bulldozer name around the same time Greyhound got its.

The handover from Fred Weber to Phil Hester pretty much killed the K-iteration naming scheme, opting for core names or hexadecimal family numbers instead of K9/K10 post-2005.

Andy Glew:
July 2002-June 2004: Advanced Micro Devices (AMD), Sunnyvale, California. K10 architecture
kaigai233_02l.gif

Andy Glew:
Nov 15, 2009, 9:54:22 AM
"There were several K10s. While I wanted to work on low power when I went
to AMD, I was hired to consult on low power and do high end CPU, since
the low power project was already rolling and did not need a new chef.
The first K10 that I knew at AMD was a low power part. When that was
cancelled I was sent off on my lonesome, then wth Mike Haertel, to work
on a flagship, out-of-order, aggressive processor, while the original
low power team did something else. When that other low-power project was
cancelled, that team came over to the nascent K10 that I was working on.
My K10 was MCMT, plus a few other things. I had actually had to
promise Fred Weber that I would NOT do anything advanced for this K10 -
no SpMT, just MCMT. But when the other guys came on board, I thought
this meant that I could leave the easy stuff for them, while I tried to
figure out how to do SpMT and/or any other way of using MCMT to speed up
single threads. Part of my motivation was that I had just attended ISCA
2003 in San Diego, where several of outstanding problems in big machines
had been solved, and I was scared that Intel would come out with
something if we did not."

Chuck Moore took over the architecture after Andy Glew left; he joined and was there for the name change from K10 to Bulldozer. Sometime around 2008 he became Senior Fellow for Accelerated Compute, handing K10 (pre-2005)/Bulldozer (post-2005) over to Mike Butler.

Timeline of Greyhound becoming K9:
2004 - AMD internally cancelled the Ultra-Deep Pipeline Architecture (the original K9).
2005 - Dirk Meyer and Phil Hester opt for an evolutionary iteration to K9.
The original K9/Ultra-Deep Pipeline Architecture was officially cancelled (the press was told that the 5 GHz K9 wasn't coming out), but DM/PH stated that K8H (2004) had evolutionary improvements over K8, and thus wasn't K8 but rather K9:

K7 - 32-bit
K8 - 64-bit
K9 - 128-bit (FE/LS/FPU changes for single-op 128-bit)

Family 10h && Greyhound = K9
Family 15h && Bulldozer = K10

Timeline of weirdness:
Under Andy Glew 2002+:
Processor = Core
Cluster A/B can only respectively run Thread A/B

Under Chuck Moore 2005+:
Processor = Core
Cluster A/B can respectively run Thread A/B and collaboratively run A or B.

Under Mike Butler 2008+:
Processor = Module
Core A/B can only respectively run Thread A/B.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,670
3,788
136
Officially, none of them were labelled with a K-number after Greyhound. (snip)

You're so full of bull your eyes are brown. I stopped reading after you called K12 something else. What was it supposed to be other than an ARM CPU?

K10 was Phenom. K9 never existed for obvious reasons. There was K8L.

And you just magically pulled up old slides from 10+ years ago? Nonsense.

OK I read a bit more. "told press". You have slides from over 10 years ago but can't provide evidence of "told press"? Nice homework there.

Bulldozer was also never 45nm; rather, iterations of it began at 32nm and went to 28nm.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,686
1,221
136
You're so full of bull your eyes are brown. I stopped reading after you called K12 something else. What was it supposed to be other than an ARM CPU?
Obviously, you aren't getting it.

The original K8 core was from David B. Witt; it was then handed over to Jim Keller, and then they decided to go with Fred Weber's Hammer architecture.
K7 (Argon) -> K9 (Hammer)? We don't see AMD skipping K-numbers, regardless.

David Witt's and Jim Keller's architectures being canned didn't lead to a jump from K7 to K9, so why would Mitch Alsup's canned architecture cause AMD to leap to K10?

It would be as if the prior architectures caused the numbers to bump up:
K7 (Argon)
K8 (David Witt core)
K9 (Jim Keller core)
K10 (Fred Weber core) -> K10 "Hammer"
K11 (Mitch Alsup core)
K12 (etc.) -> K12 "Greyhound"
K13 (Andy Glew) -> K13 "K10"
K14 (Chuck Moore) -> K14 "K10"
K15 (Mike Butler) -> K15 "K10/Bulldozer"

However, this isn't the case. Cores that never came to be do not bump up the numbers.
K7(Argon) -> K8(Hammer) -> K9(Greyhound) -> K10(Bulldozer)

The only reason Greyhound/Bulldozer can be attached to K9/K10 is that they were planned all the way back then.
K10 was Phenom. K9 never existed for obvious reasons. There was K8L.

And you just magically pulled up old slides from 10+ years ago? Nonsense.

OK I read a bit more. "told press". You have slides from over 10 years ago but can't provide evidence of "told press"? Nice homework there.
K7 = Argon
K8 = Hammer
K9 = Greyhound
K10 = Bulldozer

2002 - Usenet:
k9-greyhound.png
2010 - AMD:
greyhound.jpeg
2021 - AMD:
mikeclark.jpeg
2006 - AMD (translated from Japanese): "Incidentally, the core generation that AMD had previously described to customers as Rev. H now appears to be called Hound. As a result of the feature enhancements, it may be positioned as no longer being K8."
HOCP's removal of the K name from press coverage:
amdbacktrack.png

2004 -> 2006+
K8H = K8 Rev. Hound => K9 Performance "Greyhound" Core
K8L = K8 Rev. Leo => K9 Mobile "Lion" Core

Not once did AMD officially call, or support calling, any of the cores within Agena/Deneb/Thuban/Llano "K10". It was only ever K9, Greyhound (Greyhound+/Husky), or Family 10h (12h).
Hardware architectural name: Greyhound, K9
Software development name: Family 10h
Marketing name: Stars CPU cores (Desktop), Greyhound CPU cores (Server)

The source of "K10" by the way is satire I'll let you find the TEE HEE(Th.Val.) man himself.
k8lisfake.png
Called it K8L = You've been got.
Called it K10 = You've been got.
Bulldozer was also never 45nm; rather, iterations of it began at 32nm and went to 28nm.
Sandtiger = 45nm Bulldozer CPU
- Eight Bulldozer cores (1 die) or sixteen Bulldozer cores (2 dies)
Eagle (Falcon) = 45nm Bulldozer APU
- Single Bulldozer core + integrated M9x-RV71x

Sometime between July 2007, when they announced the above cores, and December 2007~January 2008, they decided to replace the originally envisioned (AG/CM) K10 Bulldozer with Mike Butler's envisioned K10 Bulldozer.

2002-2005 => Predecessor K10 via Andy Glew
2005-2008 => Original K10 Bulldozer via Chuck Moore
2008-2011 => Replaced K10 Bulldozer via Mike Butler

K9 Mobile "Lion" -> K9 Performance "Greyhound"
Power optimization only -> addition of performance optimization.
Around the turnover from Fred Weber to Phil Hester, they dropped the K-number and since then have stopped using K for everything: -march:K8 to -march:Fam10, -march:Fam11

Then the original modular-architecture successors to "Lion" and "Greyhound":
Mobile "Bobcat" -> Performance "Bulldozer"
Basically, if anyone gets Family 11h and Family 10h, they should understand what was supposed to happen. Of course, I'll explain in short:
~2007 => Bobcat and Bulldozer were to share macros, synthesized for different markets, to achieve a faster time to market for both.
Hence why anyone can find these:
bobcatbd45nm.png
Hence why Sandtiger at 45nm had eight dual-cluster cores, while Orochi only had four dual-core modules.
 