Speculation: AMD's response to Intel's 8-core i9-9900K


How will AMD respond to the release of Intel's 8-core processor?

  • Ride it out with the current line-up until 7nm in 2019: 129 votes (72.1%)
  • Release Ryzen 7 2800X, using harvested chips based on the current version of the die: 30 votes (16.8%)
  • Release Ryzen 7 2800X, based on a revision of the die, taking full advantage of the 12LP process: 17 votes (9.5%)
  • Something else (specify below): 3 votes (1.7%)

Total voters: 179

rbk123

Senior member
Aug 22, 2006
748
351
136
Fair point, but not all of us feel that way. Intel has been consistent for the last 12 years - pushing the frontiers of performance and technology on the PC.
This is the fundamental point where the anti-Intel crowd will disagree with you vehemently. They stopped pushing the frontiers 10 years ago and have been milking tiny improvements and gouging high-end chips for as long as they can.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Not only did I not say it was better or even OK; on the contrary, I implied it resulted in bad behavior. As for the irony: your self-appointed role of fighting for truth, justice and the Intel way has you trying to kill all the AMD FUD, yet oddly enough, you don't lift a finger trying to kill any Intel FUD. The fact that you think the FUD is so heavily one-sided speaks volumes about your built-in bias.

What anti-AMD FUD needs to be called out? Note that a couple of posts above yours, I am questioning the supposed 15% Intel IPC advantage. I don't like exaggerations from either side.

IMO, Ryzen and Coffee Lake cores are highly competitive clock for clock, with pros/cons for each. I think Intel's supposed IPC advantage is negligible outside of gaming.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
What anti-AMD FUD needs to be called out? Note that a couple of posts above yours, I am questioning the supposed 15% Intel IPC advantage. I don't like exaggerations from either side.

IMO, Ryzen and Coffee Lake cores are highly competitive clock for clock, with pros/cons for each. I think Intel's supposed IPC advantage is negligible outside of gaming.

Just to be clear, this is where I got the 15% IPC difference from:
https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-72
Technically, CFL is ~17% faster per clock than Zen 1, but I used 15% because Zen's SMT scales slightly better than HT and I was referencing MT performance, not ST. Obviously, if you exclude all AVX2 tests then the IPC difference is smaller, as these charts show, which may be the angle you're coming from?
[Image: IPC comparison charts]
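To make that ST-to-MT adjustment concrete, here is a minimal sketch. The ~17% per-clock figure is The Stilt's; the SMT/HT scaling factors are hypothetical round numbers chosen only to illustrate how better SMT scaling shrinks the gap to roughly 15%:

```python
# Minimal sketch of the ST -> MT IPC adjustment described above.
# The ~17% single-thread per-clock lead is from The Stilt's numbers;
# the SMT/HT scaling factors below are hypothetical illustrations.

st_ipc_ratio = 1.17   # CFL vs. Zen 1, single-thread, per clock (The Stilt)
zen_smt_gain = 1.40   # hypothetical: Zen gains ~40% from SMT
cfl_ht_gain = 1.375   # hypothetical: CFL gains slightly less from HT

# Per-clock multi-thread ratio: better SMT scaling claws back part of the gap.
mt_ipc_ratio = st_ipc_ratio * cfl_ht_gain / zen_smt_gain
print(f"Effective MT per-clock lead: {(mt_ipc_ratio - 1) * 100:.1f}%")  # ~15%
```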
 

Hitman928

Diamond Member
Apr 15, 2012
6,695
12,370
136
That's easy, actually; it consumes less than TDP at stock clocks.

Source: https://www.anandtech.com/show/1185...-lake-review-8700k-and-8400-initial-numbers/5

The bad rep Intel gets comes from people using AVX stress-testing figures to compare to AMD (which clearly does a poor job of running AVX code as 'efficiently' as Intel, due to AMD playing catch-up in that realm and in implementation). Like I opined in another thread, Coffee Lake is very efficient at Ryzen clocks, but the opposite is far from the case, if not impossible altogether.

Edit: If Mr. Abwx slaps his usual -23% VRM efficiency deficit on top of these figures, like he does with all his Ryzen power consumption calculations, you can imagine how low these numbers can be.

AnandTech uses the internal CPU sensors for power usage, so VRM efficiency doesn't apply here. Additionally, AnandTech's own numbers posted in the 2700X review contradict the previous power measurements:


[Image: power consumption chart from the 2700X review]


Either they used a different program when measuring power, or perhaps they used a different motherboard that allowed for higher turbo clocks and thus higher power? I don't know; maybe Ian could tell us. It would seem weird to use one program to test the 8700K's power and then use a different program six months later. My bet would be that they used a different mobo that allowed for more liberal breaking of TDP (thus allowing higher performance). I'm not familiar enough with all the Intel motherboard models to know.
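For what it's worth, here's a minimal sketch of why a VRM-efficiency correction doesn't belong on top of sensor readings; the wattage and the 88% efficiency are assumed illustrative values, not measurements from any review:

```python
# Sketch: package-sensor power vs. power drawn upstream of the VRMs.
# Both numbers below are assumed, illustrative values.

package_power = 120.0    # watts reported by the CPU's internal sensors
vrm_efficiency = 0.88    # assumed VRM conversion efficiency

# The board draws more than the package consumes; the VRMs waste the rest:
input_power = package_power / vrm_efficiency
vrm_loss = input_power - package_power

print(f"Package (sensor) power: {package_power:.0f} W")
print(f"Drawn ahead of VRMs:    {input_power:.0f} W ({vrm_loss:.0f} W lost in conversion)")
# Applying a further "VRM deficit" to the sensor number would double-count this loss.
```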
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
AnandTech uses the internal CPU sensors for power usage, so VRM efficiency doesn't apply here. Additionally, AnandTech's own numbers posted in the 2700X review contradict the previous power measurements:


[Image: power consumption chart from the 2700X review]


Either they used a different program when measuring power, or perhaps they used a different motherboard that allowed for higher turbo clocks and thus higher power? I don't know; maybe Ian could tell us. It would seem weird to use one program to test the 8700K's power and then use a different program six months later. My bet would be that they used a different mobo that allowed for more liberal breaking of TDP (thus allowing higher performance). I'm not familiar enough with all the Intel motherboard models to know.
Turbo is not disabled in this test. It seems even multi-core enhancement might be enabled.
What I find annoying is that there's no mention of the actual testing methodology there. If there is one, it should be clearly highlighted with a subheading or something.

https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/8
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Just to be clear, this is where I got the 15% IPC difference from:
https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-72
Technically, CFL is ~17% faster per clock than Zen 1, but I used 15% because Zen's SMT scales slightly better than HT and I was referencing MT performance, not ST. Obviously, if you exclude all AVX2 tests then the IPC difference is smaller, as these charts show, which may be the angle you're coming from?
[Image: IPC comparison charts]

The issue with comparing different architectures is: what software do you compare them on? The clock-for-clock Techspot test I linked earlier used off-the-shelf software/benchmarks, and it showed give and take outside of games.

https://www.techspot.com/article/1616-4ghz-ryzen-2nd-gen-vs-core-8th-gen/page5.html
In applications such as Cinebench R15 we see that the single core performance is down just 3% but where SMT is well leveraged AMD can be up to 4% faster.

We found that AMD was 3% slower in the Corona benchmark but much the same for our Excel, V-Ray and video editing tests. Then while it was 15% slower in HandBrake it was also 8% faster for the PCMark 10 gaming physics test. Of course, there is still the matter of gaming and I bet a few AMD fans were hoping we could put most of the gaming performance deficit down to clock speed. Sadly though, that's not the case.

I couldn't find The Stilt's exact test suite, but it seems to be a group of math-heavy open-source benchmarks compiled with the latest Intel compiler, so it can heavily leverage AVX2.

If I had to say which situation more accurately reflects practical usage, I would have to say Techspot's testing of off-the-shelf software does.

But ultimately, when choosing between CPUs with different strengths and weaknesses, each user should look at the benchmarks for the workloads they personally run, to see which architecture better meets their needs.

So IMO, there is no clear winner on the desktop. They both offer great CPU cores (for the first time in a decade).
 

Hitman928

Diamond Member
Apr 15, 2012
6,695
12,370
136
The issue with comparing different architectures is: what software do you compare them on? The clock-for-clock Techspot test I linked earlier used off-the-shelf software/benchmarks, and it showed give and take outside of games.

https://www.techspot.com/article/1616-4ghz-ryzen-2nd-gen-vs-core-8th-gen/page5.html


I couldn't find The Stilt's exact test suite, but it seems to be a group of math-heavy open-source benchmarks compiled with the latest Intel compiler, so it can heavily leverage AVX2.

If I had to say which situation more accurately reflects practical usage, I would have to say Techspot's testing of off-the-shelf software does.

But ultimately, when choosing between CPUs with different strengths and weaknesses, each user should look at the benchmarks for the workloads they personally run, to see which architecture better meets their needs.

So IMO, there is no clear winner on the desktop. They both offer great CPU cores (for the first time in a decade).

I agree with the last statement. The main difference between Techspot's tests and Stilt's (besides application choice) is that Stilt forced single-threaded tests every time, whereas Techspot mostly used multi-threaded testing. There's nothing wrong with either, but you just need to know that they are different and how. Mostly, Ryzen has better SMT (hyperthreading) effectiveness, so where they lack in IPC, they make up for it in SMT for heavily multi-threaded tests. In the end, I think it all kind of balances out, which is why both are a great choice, but it also explains (in part) why the Intel CPUs showed a higher "IPC" advantage in Techspot's tests, because most games aren't heavily multi-threaded.
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
This is the fundamental point where the anti-Intel crowd will disagree with you vehemently. They stopped pushing the frontiers 10 years ago and have been milking tiny improvements and gouging high-end chips for as long as they can.
This is speculation, at best. In reality, there was tick-tock, for example, in a period where AMD wasn't even in the picture. Plus, we don't know what revolutionary technologies Intel is working on in-house across all their numerous projects over the years. Finally, x86 has that legacy-compatibility burden hanging around its neck. Stray too far from the tree (Itanium) and you'll find yourself stranded in a desert. People also forget that the technologies we're all crying out for cost money. Intel's perceived R&D extravagances and forays into tech misadventures all require financing. That money has to come from somewhere.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
I agree with the last statement. The main difference between Techspot's tests and Stilt's (besides application choice) is that Stilt forced single-threaded tests every time, whereas Techspot mostly used multi-threaded testing. There's nothing wrong with either, but you just need to know that they are different and how. Mostly, Ryzen has better SMT (hyperthreading) effectiveness, so where they lack in IPC, they make up for it in SMT for heavily multi-threaded tests. In the end, I think it all kind of balances out, which is why both are a great choice, but it also explains (in part) why the Intel CPUs showed a higher "IPC" advantage in Techspot's tests, because most games aren't heavily multi-threaded.

Thanks for the additional info. HT/SMT is more of a mixed bag in gaming loads; there are even games that go faster just by disabling HT/SMT, though that tends to be rarer today than it was in the early days.

Though, here as well, I do think running multi-core with HT/SMT enabled is more indicative of real-world performance differences than single-core tests, since no one actually runs anything on one core anymore, so the multi-core/HT/SMT impact matters in the real world.
 

Abwx

Lifer
Apr 2, 2011
11,885
4,873
136
The problem with The Stilt's test suite is that he gets only 1.5% from Summit Ridge to Pinnacle Ridge, while the measured difference elsewhere is 3%. Second, he uses a few debatable programs that skew the results; CASELab publicly states that they favoured Intel and that it was a deliberate choice not to use the Intel code path for anything other than Intel.

http://www.caselab.okstate.edu/research/euler3dbenchmark.html

Also, the use of Embree, an Intel-designed renderer, is not neutral from the start; same with Linpack, which is also Intel software...

https://embree.github.io/
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
Thanks for the additional info. HT/SMT is more of a mixed bag in gaming loads; there are even games that go faster just by disabling HT/SMT, though that tends to be rarer today than it was in the early days.

Though, here as well, I do think running multi-core with HT/SMT enabled is more indicative of real-world performance differences than single-core tests, since no one actually runs anything on one core anymore, so the multi-core/HT/SMT impact matters in the real world.
I don't agree. There's too much mixed-bagging going on in that review, with the higher thread count and bigger caches going to AMD. It certainly doesn't conform to the reality of the average Intel user using those K chips. Even the reviewer acknowledged the skew:
The 1800X and 2700X have an advantage being that they are 8-core/16-thread CPUs while the 8600K is at a disadvantage as it's a 6-core/6-thread CPU, so please keep all that in mind as we proceed. Let's get to the results.
And it doesn't help when a desktop chip consumes as much as an HEDT chip:
Power consumption really was all over the place. In some workloads the 2700X only used slightly more than the 1800X, while in others it used quite a bit more. What’s key to note here is that second-gen Ryzen CPUs are at least on par with the Skylake-X series in terms of performance per watt and were often a little better.
I'm sure you all remember how much fun was poked at SKX HEDT chips for their power consumption. How did AMD get a free pass once again for this? Sometimes people act like some of us don't read reviews or something. Intel could have easily released the 7820X and the 7800X on desktop last year, especially with the 1800X consuming as much as the 7820X around the 4GHz mark. AMD releases this more than a year later and people want to act like it's the best thing since sliced bread. Far from it.
 

Zucker2k

Golden Member
Feb 15, 2006
1,810
1,159
136
The problem with The Stilt's test suite is that he gets only 1.5% from Summit Ridge to Pinnacle Ridge, while the measured difference elsewhere is 3%. Second, he uses a few debatable programs that skew the results; CASELab publicly states that they favoured Intel and that it was a deliberate choice not to use the Intel code path for anything other than Intel.

http://www.caselab.okstate.edu/research/euler3dbenchmark.html

Also, the use of Embree, an Intel-designed renderer, is not neutral from the start; same with Linpack, which is also Intel software...

https://embree.github.io/
So, that is two codes out of...?
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
This is the fundamental point where the anti-Intel crowd will disagree with you vehemently. They stopped pushing the frontiers 10 years ago and have been milking tiny improvements and gouging high-end chips for as long as they can.
This is the fundamental point where anyone with an unbiased, sensible brain would disagree, for obvious reasons. A corporation exists to profit and to grow profit. There's zero reason for them to ever push the frontier unless they have to for competitive reasons, which is why corporations almost never do so without competition. This goes for every corporation. The major ones in computing are AMD, Nvidia, and Intel.

You can easily pin this down by examining the dichotomy between enterprise and consumer. There are wildly different profit margins. As a result, "innovations" happen more so in enterprise, stay in enterprise, and never get pushed down to consumers. Both AMD and Intel are guilty of this because they're corporations. There are features in enterprise that are over two to three decades old and have never graced a desktop computer, because doing so would destroy enterprise MARGINS.

So, let's stop playing pretend and partisan. Both AMD and Intel engage in this practice. However, Intel was far ahead and milked hard because there was little to no competition. I have no doubt that AMD would do the same if afforded a similar opportunity. Corporations aren't your friends. They don't care about being fair. It's about profits. Capitalism. The quicker an individual understands this at the consumer level, the quicker they'll make far more value-driven and unbiased decisions. Whoever provides the best value gets my $$$. What type of special idiot would I be to defend and parade around a corporation? For free...
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,730
136
The problem with The Stilt's test suite is that he gets only 1.5% from Summit Ridge to Pinnacle Ridge, while the measured difference elsewhere is 3%. Second, he uses a few debatable programs that skew the results; CASELab publicly states that they favoured Intel and that it was a deliberate choice not to use the Intel code path for anything other than Intel.

http://www.caselab.okstate.edu/research/euler3dbenchmark.html

Also, the use of Embree, an Intel-designed renderer, is not neutral from the start; same with Linpack, which is also Intel software...

https://embree.github.io/
If we are going to talk about testing "neutral" software, we might as well throw out all benchmarks that utilize AVX2/512 - Blender, x265, 3DPM, NAMD, etc. - because all those show Zen in a poor light, right? That's extremely myopic, because far more modern-day applications can take advantage of Intel's higher 256-bit vector SIMD capabilities than people would like to admit.

Even if Embree is written by Intel, the actual performance variance is too small to affect the average result. Oh, and The Stilt has a column labelled ER, or "extremities removed", meaning the average result after removal of the best and worst results on each platform. That pretty much excludes Linpack (which, by the way, is the industry standard for measuring peak theoretical FPU performance), and even then Skylake(-X) enjoys a 14-15% lead in IPC.
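For clarity, that "extremities removed" average works like a light trimmed mean. A minimal sketch, with made-up per-test Intel/AMD performance ratios standing in for the real data:

```python
# Minimal sketch of an "extremities removed" (ER) average: drop the single
# best and single worst result, then average the rest. The ratios below
# are made-up placeholders, not The Stilt's actual data.

def er_average(ratios):
    """Mean after removing the highest and lowest value."""
    if len(ratios) < 3:
        raise ValueError("need at least 3 results to trim both extremes")
    trimmed = sorted(ratios)[1:-1]
    return sum(trimmed) / len(trimmed)

results = [1.31, 1.18, 1.02, 1.15, 1.22, 0.97, 1.12]  # hypothetical Intel/AMD ratios
print(f"ER average: {er_average(results):.3f}")
```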
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
I don't agree. There's too much mixed-bagging going on in that review, with the higher thread count and bigger caches going to AMD. It certainly doesn't conform to the reality of the average Intel user using those K chips. Even the reviewer acknowledged the skew:

And it doesn't help when a desktop chip consumes as much as an HEDT chip:

I'm sure you all remember how much fun was poked at SKX HEDT chips for their power consumption. How did AMD get a free pass once again for this? Sometimes people act like some of us don't read reviews or something. Intel could have easily released the 7820X and the 7800X on desktop last year, especially with the 1800X consuming as much as the 7820X around the 4GHz mark. AMD releases this more than a year later and people want to act like it's the best thing since sliced bread. Far from it.

Serves the extreme clock meme chasers right. 14nm has a physical limit. Even when 7nm comes, I guarantee you there will be a peanut gallery of hecklers yakking on and on about clock speeds.
[Image: GlobalFoundries 7nm power vs. frequency curve, IEDM 2017]

It seems both Intel and AMD figured out that a certain class of consumers is unable to do value-based calculations and will do anything for higher clocks. Automotive companies realized this from how booming the after-market industry was, and began integrating such features into their cars.

So yeah, both AMD and Intel integrated increasing levels of dynamic overclocking into their processors. The end result is wild swings in power utilization. It's like having a busload of rabid OCers enslaved on your die, constantly trying to minimize power efficiency.

All of you scream ridiculously about higher clocks; then, once you finally get it packaged in, you want to cry about it? Higher clocks have diminishing returns on power utilization up to the wall of a particular process-size family. Nothing at these 'dude bro' clocks is efficient or the best thing since sliced bread. It's why you don't see them in the enterprise, which is extremely concerned with efficiency and value. You don't see this stupidity in enterprise GPUs. You don't see it in enterprise CPUs. The clocks are always lower than in consumer processors because they are centered on the sweet spot of power efficiency.
The overclocker enthusiast market went mainstream and to absurd levels. It existed some time ago because clocks were around 300MHz, and a 20-50MHz boost was the difference between a lagging, unusable experience and a usable one.

Both Intel and AMD are capitalizing on a legacy that is largely misunderstood by a group caught up in a mainstream (TAKE MY MONEY) movement. So, enjoy your riced-out Civics w/ NOS boost... Spec is not what these people are after... Thus the X/K markers.
[Image: modified car]


Bonus points for slowly rusting/corroding, high-maintenance AIO/water meme cooling.
[Image: AIO water cooler]

Even more bonus points if you manage to spring a leak that ruins thousands of dollars of computing equipment because MUH CLOCK SPEEDZ.
 

maddie

Diamond Member
Jul 18, 2010
5,156
5,544
136
This is the fundamental point where anyone with an unbiased, sensible brain would disagree, for obvious reasons. A corporation exists to profit and to grow profit. There's zero reason for them to ever push the frontier unless they have to for competitive reasons, which is why corporations almost never do so without competition. This goes for every corporation. The major ones in computing are AMD, Nvidia, and Intel.

You can easily pin this down by examining the dichotomy between enterprise and consumer. There are wildly different profit margins. As a result, "innovations" happen more so in enterprise, stay in enterprise, and never get pushed down to consumers. Both AMD and Intel are guilty of this because they're corporations. There are features in enterprise that are over two to three decades old and have never graced a desktop computer, because doing so would destroy enterprise MARGINS.

So, let's stop playing pretend and partisan. Both AMD and Intel engage in this practice. However, Intel was far ahead and milked hard because there was little to no competition. I have no doubt that AMD would do the same if afforded a similar opportunity. Corporations aren't your friends. They don't care about being fair. It's about profits. Capitalism. The quicker an individual understands this at the consumer level, the quicker they'll make far more value-driven and unbiased decisions. Whoever provides the best value gets my $$$. What type of special idiot would I be to defend and parade around a corporation? For free...
A critical point.

Off-topic question: Why do we, humans that is, a lot of the time only have 10-20 years of a natural resource available?
 

rbk123

Senior member
Aug 22, 2006
748
351
136
This is the fundamental point where anyone with an unbiased, sensible brain would disagree, for obvious reasons. etc....
Yep. Everything you said was true, and it's why there are so many irate consumers on here. And yet there are those who want to disguise it and say Intel was innovative and great for consumers over the past decade-plus.
 

rbk123

Senior member
Aug 22, 2006
748
351
136
This is speculation, at best. In reality, there was tick-tock, for example, in a period where AMD wasn't even in the picture.
Please spare me the Intel justification propaganda. ub4ty summed it up correctly.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
I don't agree. There's too much mixed-bagging going on in that review, with the higher thread count and bigger caches going to AMD. It certainly doesn't conform to the reality of the average Intel user using those K chips. Even the reviewer acknowledged the skew:

This was all about using the 15% from The Stilt's test as a means of comparing processors based on core counts, like Epsilon did:

4.7 GHz × 8 cores × 1.15 = 43.24 (1.15 is the relative IPC advantage)
3.7 GHz × 12 cores = 44.4

IMO, a number derived only from single-core, open-source tests compiled with the latest Intel compiler is less representative than one using off-the-shelf software with SMT/HT enabled. But neither is really generally applicable.

IMO there are no blanket IPC numbers.

You really need a formula like:

Clock Speed × Core Count × IPC of the specific load (accounting for HT/SMT)

So in the case of comparing an 8C 9900K to a 1920X, it depends heavily on what you are doing. Playing games, the 9900K will clearly be better; doing 3D rendering, the 1920X will have a sizable lead.
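As a rough sketch of that formula, reusing the figures from Epsilon's comparison above (the 1.15 IPC factor is the contested Stilt-derived number, not a universal constant):

```python
# Sketch of the rough-throughput formula above:
#   throughput ~ clock (GHz) x core count x relative IPC for the load,
# with any SMT/HT effect folded into the IPC term. Numbers reuse Epsilon's
# comparison (8C @ 4.7 GHz vs. 12C @ 3.7 GHz); the 1.15 factor is the
# contested Stilt-derived figure, not a universal constant.

def rough_throughput(clock_ghz, cores, rel_ipc=1.0):
    return clock_ghz * cores * rel_ipc

intel_8c = rough_throughput(4.7, 8, 1.15)   # 43.24
amd_12c = rough_throughput(3.7, 12)         # 44.4

print(f"8C @ 4.7 GHz, 15% IPC edge: {intel_8c:.2f}")
print(f"12C @ 3.7 GHz:              {amd_12c:.2f}")
# The point: the "winner" flips depending on which per-load IPC you plug in.
```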
 

Abwx

Lifer
Apr 2, 2011
11,885
4,873
136
If we are going to talk about testing "neutral" software, we might as well throw out all benchmarks that utilize AVX2/512 - Blender, x265, 3DPM, NAMD, etc. - because all those show Zen in a poor light, right? That's extremely myopic, because far more modern-day applications can take advantage of Intel's higher 256-bit vector SIMD capabilities than people would like to admit.

Even if Embree is written by Intel, the actual performance variance is too small to affect the average result. Oh, and The Stilt has a column labelled ER, or "extremities removed", meaning the average result after removal of the best and worst results on each platform. That pretty much excludes Linpack (which, by the way, is the industry standard for measuring peak theoretical FPU performance), and even then Skylake(-X) enjoys a 14-15% lead in IPC.

Where did I say that Blender, 3DPM, or x264/265 are irrelevant?

Whatever; if Intel has an advantage in x265, then what is the need for Embree, CASELab, and Linpack if regular software already takes advantage of SIMD instructions?

Btw, Corona uses Embree as its engine:

[Image: Corona renderer benchmark chart]


It's 100% coincidental that it shows the lowest performance for AMD vs. Intel of the four renderers...
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
A critical point.
Off-topic question: Why do we, humans that is, a lot of the time only have 10-20 years of a natural resource available?
Because we create institutions, groupings, and societies, and sink into norms that are very short-term oriented... Because we as human beings have lifetime limits and perception limits that impose a natural sense of urgency/selfishness, in contrast to the broader, more selfless and timeless universe.

This broad truth has been stated and exemplified many times over, but it can't override the individual who will retort: "That's fine and all, but what's in it for me?" So, the visionaries among us who see beyond the current condition define and create these grand ideas/technologies/institutions/civilizations which, at inception, are great, with lots of vision and foresight, and they have a good run... However, halfway through, the population [the staff of a company] becomes filled with people who stop building and take liberties with the excess (perceived excess), basking in the glory, so to speak. In this moment (which lasts far beyond a moment) they fail to realize or care that such actions have set the whole ship upon a decline due to inefficiency and stagnation.

Take a look around the universe... Everything is always in motion... When it's not, bad things happen.
We realize this as human beings at a deep level, but we don't always know where to put this energy most efficiently, especially in good times. This is why we have boom/bust economic cycles. This is why corporations grind to a halt in innovation during good times and at scale. This is why Intel is in a funk, like every tech company before it that grew large and had great success. AMD would be in the same boat if the roles were reversed.

Why do we, humans that is, a lot of the time only have 10-20 years of a natural resource available?

We as human beings like to project things forward. The people who have the power/money to make decisions and control the course of things are guaranteed to become disconnected from the engine of society as the wealth-inequality gap increases. This gap, and also their bias towards making projections that result in positive outcomes for themselves, causes them to make vastly incorrect projections at the peak of a civilization/resource count. This is why, if you look back throughout history, you'll often see grand achievements/architecture/art/etc. developed right at the peak and subsequent decline of a civilization. Those with the power/$$ at the top take no stake in the underlying engine/resources that power the civilization/institution (because cutting back would have a personal consequence for them) and grossly misallocate at the worst time. When there is a big enough gap and a lack of cohesiveness, this trickles down all the way to the line worker. No one cares at the moment when there should be the most caring.

This is why there's an inside joke that a company is on deathwatch 5-10 years after it constructs a grand HQ in the Bay Area... This is why there are economic cycles. This is why companies that have grand success grow fat and lazy and mismanage resources at exactly the wrong time. This is why disruption exists. This is why there's decay. Death/rebirth. This is why the tech companies of one age die in another.

The circle and cycle of life... until we find a way to disrupt it.


On topic... slightly.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
This is the fundamental point where anyone with an unbiased, sensible brain would disagree, for obvious reasons. A corporation exists to profit and to grow profit. There's zero reason for them to ever push the frontier unless they have to for competitive reasons, which is why corporations almost never do so without competition. This goes for every corporation. The major ones in computing are AMD, Nvidia, and Intel.

Actually, it's not that one-sided. A lot of the slowdown is related to having plucked most of the low-hanging fruit on the architecture side, and to being in an era of diminishing returns. Note that Ryzen didn't leapfrog Intel's *Lake desktop processors; it caught up.

There is still an incentive to innovate, and that is to drive upgrades. Some enthusiasts would upgrade almost annually if the performance jump were there. The incentive is obviously not as strong as a competitor stealing market share, but it still exists.

Look at the process side. Competitive pressure on process has done nothing but steadily increase, and yet Intel's process advancement has gotten progressively worse (14nm late, 10nm ridiculously late). That's an inverse relationship between competition and advancement.

Naturally, having negligible competition can lead to inflated pricing (HEDT/server) and questionable behavior (disabling PCIe lanes on the 7820X for no good reason). But it doesn't mean progress stops.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
Yep. Everything you said was true, and it's why there are so many irate consumers on here. And yet there are those who want to disguise it and say Intel was innovative and great for consumers over the past decade-plus.
They, along with many other companies like AT&T, had free rein to gouge the mess out of people, including myself, and that's exactly what they did. This is why I made it an investment strategy for some time to buy stock in the companies that pissed me off the most. I figured if they were wrecking me and there was nothing I could do about it, I might as well get a cut of the action.