I guess I will write more later upon finding time...
Purely performance-wise, AMD has done an amazing job in every respect. No one can sniff at that. It is AMD's Conroe, albeit only vs AMD themselves.
Only if you ignore, like, all the review data: throughput per core is massive, throughput per watt is massive. This market (HEDT) is the one AMD will find it hardest to compete in. In 6 months this core and infrastructure design is going to be in every market AMD/Intel compete in. The scariest one for Intel is actually laptop: 8-core Zen idles amazingly well, 4-core will do even better, and it isn't going to require a chipset while offering 2x NVMe and 4x USB 3.1 along with a significantly better GPU.
This is far more than "Conroe'ing themselves". If you believe that, care to make a wager on what the market looks like in 12 months? By your logic AMD's market share across the board won't move and neither will Intel's prices/SKUs, because they have only Conroe'd themselves...........
For anything media/DAC/encryption related, Ryzen is a sure bet. MT performance in a lot of applications is absolutely awesome, you have got to admit. It gives Broadwell-E a very good challenge at a fraction of the cost.
So the contradictions in logic start.
Any actual data to back this one up? I already know of OEMs (6 months ago in fact) saying that the 32-core part is extremely impressive; how does that work across 8 CCXs if cache coherence is poor between 2?
and mem latencies are very poor.
The jury is still out on this one; I have seen no good methodical data to back it up. Separating what software tests report, how that changes with memory clock and latency, and the actual impact on performance hasn't even begun to be tested.
Along with SMT and driver problems, they will produce sucky results in many memory-sensitive and non-optimized benchmarks. Games, for certain. Going forward this should improve somewhat, but don't hold your breath on it. AMD should've known all of this.
So they did, and in the places where they can "fix" it themselves it's all patched, like in the Linux kernel. But what's funny here is that you have taken two completely unverified, completely contestable points you can't back up and then fed them into a straw man of "non-optimized benchmarks".
But then we "shouldn't hold our breath" because it's so hard for MS to change the scheduler and the behaviour that's causing such a performance difference between the high-performance and balanced power settings.
More like Debunk your post.....
AMD has a strong internet fanbase. I consider myself a fan.
Really? Because you look like a white anter, making the generic "I'm a fan" claim but then at every point taking not even the neutral position but the negative one.
Then there are what we call AMD fanatics, far removed from reality. Like their counterparts from Intel, they don't understand science, data or reason. Their purpose is just to try and spin everything AMD in the best, craziest light possible.
Much like you, because you have made a whole bunch of claims so far that you can't back up with facts!
The fact is that simply disabling SMT nets you back 40% of the average gaming performance difference between Ryzen and the 6900K/7700K.
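To make that claim concrete, here is the arithmetic it implies as a minimal sketch. The FPS figures below are invented for illustration only; the only number taken from the post is the "40% of the gap" fraction.

```python
# Illustrative arithmetic only: the FPS numbers are made up, not from any review.
# Only the "disabling SMT recovers ~40% of the gap" claim comes from the post.
intel_fps = 120.0        # hypothetical 7700K average FPS
ryzen_smt_on = 100.0     # hypothetical Ryzen average FPS with SMT enabled

gap = intel_fps - ryzen_smt_on            # 20 fps behind with SMT on
recovered = 0.40 * gap                    # SMT off claws back ~40% of that gap
ryzen_smt_off = ryzen_smt_on + recovered

print(ryzen_smt_off)  # 108.0 -> the deficit shrinks from 20 fps to 12 fps
```

In other words, if the claim holds, a 20 fps average deficit becomes a 12 fps one just by flipping SMT off, before any scheduler or game patches.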
The fact is that in the very few reviews that measured gaming power consumption, where AMD's results were poor the system power was massively down, and then you find reviews like this:
https://youtu.be/V5RP1CPpFVE?t=4m41s / videos of the benchmark runs here
https://www.youtube.com/watch?v=BXVIPo_qbc4&t=0s where AMD is completely competitive across the board.
Then you find interesting data like this (from the man you credit with generating awesome data):
https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-8#post-38775732
All of a sudden things don't look so bad.
Posting support in hordes doesn't bolster the accuracy of your belief.
And yet look at the data.............
They were seriously delusional on many fronts for the past 4 months on here, creating this hugely wishful hype that has in turn made Ryzen look average upon release. They pushed unrealistic expectations in everyone's face, which has hurt AMD's image in the end.
Hardly; the only disappointing part has been the overclocks, and those will hopefully improve over time. I think the amount of time AMD gave reviewers to test is also a decision they might be regretting, because in the space of 2 days the issues that are largely in the software/infrastructure space are being found, and the doom and gloom of the 1080p gaming reviews wouldn't have happened.
Upon reviews, they post frenetically trying to make the same excuses to defend AMD, excuses we've heard since Phenom. This is a sorry state.
It's going to be interesting to look back on your post in 2 weeks, 1 month, 1 year, 5 years (because that's how long many people have had their Sandy Bridge CPUs for) and see just where the dice land.
1. Blender/POVRay was AMD's best case. Selected marketing. All of Horizon was pure marketing.
Complete BS not supported by actual data. AMD never gave POVRay results; how about 7-Zip, or the many benchmarks The Stilt ran? But we all know only WinRAR and LINPACK matter, right, because everyone uses those all the time......
2. Doing everything altogether in one uarch, it's obvious the platform has A LOT of teething issues, and clocks were problematic. No wonder the delays. The platform is a beta. End users and reviewers NEVER have to wait for all this to be sorted. It is judged how it is sold.
Are you just as judgmental of Intel (appears not)? Because DDR4 memory support was a mess across multiple platforms for them too..... Just more white anting from you.
3. Low Power Plus is Low Power Plus! I heard so much irrational pseudoscience nonsense in the buildup here. Every one of it has been debunked by data now.
Yep, 4.1GHz single-core turbo for their first 14nm CPU. The data in your post doesn't appear to support your position of being an "AMD fan", because, you know, clocking at stock just as high as Intel's 8-core parts is a complete fail.......
It is obvious power or process is absolutely nowhere close to Intel. That 1800X is choking, being pumped +30W over the model below.
What does the Broadwell-E power/clock curve look like, mate? The highest-clocked chip is always going to have the worst perf per watt. For someone who has constantly looked down on people, the fact you think this is somehow relevant just shows that you either have no clue or are deliberately being dishonest.
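The point above can be sketched with the usual first-order dynamic-power approximation, P ∝ C·V²·f: voltage has to rise steeply near the top of the frequency curve, so the fastest-binned SKU always pays the worst perf-per-watt penalty. The voltage/frequency pairs below are hypothetical, not measured from any Ryzen or Broadwell-E part.

```python
# Why the highest-clocked SKU has the worst perf/watt, to first order:
# dynamic power scales roughly with C * V^2 * f, and V rises steeply
# near the top of the curve. These (GHz, V) points are invented.
volt_freq = [(3.0, 0.90), (3.5, 1.00), (4.0, 1.20), (4.1, 1.35)]

for f, v in volt_freq:
    power = v**2 * f            # relative dynamic power (capacitance folded in)
    perf_per_watt = f / power   # simplifies to 1 / V^2: depends only on voltage
    print(f"{f} GHz @ {v:.2f} V -> relative perf/W = {perf_per_watt:.2f}")
```

Note that perf/watt here reduces to 1/V², so every extra 100MHz bought with more voltage drops efficiency quadratically, which is exactly why comparing the top-clocked 1800X's perf/watt against lower bins tells you nothing about the process.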
Clocks/volts/currents are at the ceiling, OC headroom is minimal, XFR is a gimmick suited to mobile, and power is way above 90W, and above Intel's 140W chips when properly tested.
That's the point of XFR, more goalpost-moving white ant BS. Either you choose to be ignorant as to what XFR was designed to do or you are being deliberately dishonest: XFR ignores TDP and power consumption if system stability and cooling are in order.
No, sorry to all irrational pseudoscience. No magic 0.9v 4GHz at less than 80W because of a Neon FPU.
And no data to back up anything in your post so far, either.
4. Piledriver vs Excavator tests for IPC show a 2% average difference now.
Complete BS, showing either 1. your lack of knowledge or 2. your continual white-anting, goalpost-moving, dishonest BS! I'll let you pick which one of the two it is!
It is completely workload dependent: Piledriver benefits massively from having an L3 cache (look at things like SPEC results) and Excavator benefits massively from anything that can be kept within its L1/L2 (look at things like Prime95 etc). Equating them as the same is to ignore the actual data.
And Ryzen ST isn't 1-7% like the hype, but 10-20% behind Intel.
But again you continue with your white-anting, goalpost-moving BS ("AMD fan", hey!). Maybe you should go look at The Stilt's ST data, all @ 3.5GHz.
So against Haswell, across all tests, Zen does very well; not quite as well against Skylake.
But remove the results for tests where 256-bit ops make a difference and Zen looks even better; remove the few statistical outliers from both sides (2 in Zen's favour, 2 in Skylake's) and look where we end up.......
Funny what happens when you actually analyse the data. Then remember that there has been next to zero optimization for Zen so far, compared to the aggregate of Haswell + Skylake optimization.
5. For the average guy, Ryzen is certainly not the gamer's CPU. 4C, high IPC is still king. Intel has better buys, especially for futureproofing. Excuses don't mitigate that CPU load tests - which give a proper picture at all ranges - show it well behind.
You may want to check your data again,
And seriously. Argue all you like but...110fps vs 100fps is NO DIFFERENCE to a gamer! I played competitive FPS for years since Quake. Charts showing +100FPS are only good to ascertain the technical 'better' but not for actual playability.
Look at that goalpost moving and white anting again: now all gamers are competitive gamers... ROFL. Also look how, after just 2 days, the gaming performance data is trending upwards........
6. BitsandChips fed all the wrong zealous hype trains. Seems apparent they just wanted to cash in. Their latest, linking a 1% runtime variation in CB to 'Neural Net Prediction', is equally ludicrous. It's called margin of error, for Christ's sake.
I can't stand BitsandChips, but you just look desperate here: there is a very clear trend line for Zen while the 6700 is having 1% variances. Either way it will be very easy to disprove, so why don't you disprove it?
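The distinction being argued over here is easy to check numerically: run-to-run noise has no consistent direction, while a warm-up or prediction effect shows a one-way drift. A minimal sketch, with Cinebench-style scores invented purely for illustration:

```python
# Sketch: is a set of repeated benchmark runs just margin of error,
# or a directional trend? The scores below are invented, not real CB data.
from statistics import mean, stdev

runs = [1570, 1578, 1585, 1591, 1596]  # hypothetical successive scores

cv = stdev(runs) / mean(runs)                   # run-to-run spread, relative
slope = (runs[-1] - runs[0]) / (len(runs) - 1)  # average gain per run

print(f"coefficient of variation: {cv:.3%}")
print(f"average gain per run: {slope:.1f} points")
# A ~1% CV with gains and losses in random order is margin of error;
# a monotonic upward drift run after run is what a "trend line" claim needs.
```

With this framing, "1% variation" and "clear trend line" are not the same observation: the first is about spread, the second about direction, and only the second would support the neural-net-prediction story.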
7. HEDT doesn't care for price or power. It cares about absolute performance. Which is, still, ruled by Intel.
Yes, we are all running 6950Xs, are we..... ROFL
You should have been around for the K7/Prescott days then; you would know that's simply not true. The number of 2500+/2600+ chips that AMD sold (best perf per $) while Intel had the performance crown with the 3.2GHz HT P4 is what kept AMD alive while K8 was delayed.
I am a hardware and gaming enthusiast; go look at some of the data I have provided in threads, and the hours I spend generating it. My last CPU was an IVB 3770K @ $450 AUD. There is no way I am spending $1469 on a 6900K. So that leaves me really at $485 for a 7700K, or $469/$569 for a 1700 or 1700X. Now, I know that over the next 3-5 years games are only going to use more threads, not fewer, and I'm also happy to overclock. In my opinion the CPU I would recommend to gamers, especially if they are going to keep it for 3-5 years, has to be the 1700, by a significant margin.
Any serious gamer would put the money saved into more SSD (who wants to load games from spinning rust!) and More GPU!
AMD has now given Intel a challenger for certain workloads, however.
If by that you mean basically everything that isn't benefiting from 256-bit ops, then yes.
8. TheStilt did an awesome job! Should be renamed TheKanterStilt.
Now that's actually a quality data point.
Maybe you should actually look at his data then......
Fing hell this post took me 3 hours to write........lol