moinmoin
Diamond Member
- Jun 1, 2017
- 4,934
- 7,619
- 136
Hey, at least "Most powerful in the market" comes in second. That has to be good for something sometime.

If you think that's bad, look at the line for "lower TCO".
Now that you mention that, the list is clearly missing a "Balance between air and water cooling" entry.

I guess? It just encourages the kind of thinking that leads to 400W server CPUs:
Intel Announces Socketed 56-Core Cooper Lake Processors
Intel tries to upstage AMD's EPYC Rome announcement with its new 56-core processor that comes in a socketed form factor. (www.tomshardware.com)
Sadly those still may not be the "most powerful on the market".
Of course you have AMD to blame for it. You won't see it in official MSRPs, but ever since Naples hit, and even more so since Rome, Intel has practically been giving away top-end Platinum-segmented products at high-Silver to low-Gold prices.
What I am more concerned with, and irritated by, is "business as usual": even in light of the security concerns, heat generation, and performance of Intel CPUs, business still is not really adopting Rome, and given Intel's security problems, even Naples should be preferred. Even if AMD is making inroads into the data center, in light of all these facts it should be happening faster. When I retired, all servers had a 3-year life span. PCs, servers, everything got refreshed in 3 years. So Naples should have started the conversion, and Rome accelerated it.
Mind you, my company alone had several square MILES of datacenter floor, and was security-conscious to the max. The smallest patch had to go in; no security flaw was allowed to remain on any server.
I wish I had a way to find out what they are doing now, but since the phone is virtually useless to me (being deaf), I can't call my friends in the DBA group who knew everything that was going on in the datacenters.
I don't want to mention the company name, but they were in the health care business, and as such, security is paramount.

My two cents...
Security, and what the market will bear, is one of those things that always comes as an utter shock to me. Your average consumer is only willing to pay an average of 15-20 cents (USD) for security. Corporations are somewhere in the 25-30 cent range, if I'm recalling the latest metrics (pre-coffee) correctly.
But then, it all depends on the company's risk posture. The long-term cost and the impact of reputation damage from a breach aren't immediately quantifiable to many institutions, so they opt for solutions that show a more meaningful ROI in standard market metrics. They may be right, though, as the consumer markets seem overly forgiving of their negligence.
All that being said, I really do hope that AMD continues to innovate, and bring security with it. Breaches will only increase, and software solutions ultimately mean nothing when resident on vulnerable hardware.
- Korp
Attackers usually don't bother making exploits themselves, they buy them (see e.g. the market for zero-day exploits) if they aren't available for free in the wild already. The excuse that something is hard to exploit when it's already known how to exploit it is little more than pseudo-security.

Thing is, it's incredibly difficult to make an actual exploit from hardware flaws like Spectre and Meltdown. It's more theoretical at this point than anything else. An attacker won't even bother, not when there's plenty of vulnerable software out there that can be exploited.
The misconception most people have about exploits is that they'd be limited to some specific weaknesses. That's not how they are used, though. They are better described as suites of exploits that are used in combinations or chains, depending on what hardware and software is available to be exploited and what patch status it is at. It pays more the higher the coverage is. Many of the Spectre and Meltdown vulnerabilities are open purulent wounds that Intel doesn't even bother to fully resolve anymore; as such they are an easy, if possibly costly, target. If not already there, once it's worth the investment, exploits for the many vulnerabilities definitely will happen. And you won't hear about it anymore, since they will be used for actions the public is not privy to.

It's one thing to make a proof of concept like researchers do; it's another to make an actual usable exploit.
A defense-in-depth approach to security should strive to prevent any possible future exploitation of theoretical threats.

But you are still talking about theoretical threats at this point.
I get your point, but after these two years, where every week you've heard about a different large or global-scale company getting hacked of TONS of confidential user data, I don't think any sort of security issue can be taken lightly, especially when it takes only one 'genius' to figure out, from one (or more likely many) of these, how to get it to work with some exotic exploit. After all, it's all about Intel, which enjoys some 95%+ market share among enterprise office PCs (with mostly clueless users and lazy, complacent admins) and servers.

Which is why it's worth it to enable the mitigations. But you are still talking about theoretical threats at this point.
What is perplexing to me is that AMD is gaining market share at a much slower rate than they did during the K8 heyday.
Sadly those still may not be the "most powerful on the market".

No, but they're the most power-using, and that's got to count for something.

Intel just has to put on the Xeon marketing materials: MOST POWER EVER.
Don't forget that the *discounts* are targeted more towards the decision maker himself, if need be ;p

Kinda makes me wonder who, in 2020, is really making the decisions when it comes to outfitting a server room. If you have to use water cooling on all your server CPUs, then how can you possibly save enough money on targeted discounts to deal with the power draw and the expense of maintaining all that cooling equipment? Even really good industrial water cooling equipment has to be serviced eventually. Far more often than you have to service an HSF in a climate- and particulate-controlled server room.
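The cooling-vs-discount question can be sanity-checked with some back-of-the-envelope arithmetic. Everything below is an illustrative assumption (the 150 W per-socket delta, $0.10/kWh rate, 1.5 PUE, and 3-year cycle are made-up figures, not vendor data):

```python
# Back-of-the-envelope TCO check: does a purchase discount outweigh
# the extra power and cooling cost over a 3-year refresh cycle?
# All numbers here are illustrative assumptions, not vendor figures.

def energy_cost(extra_watts, years=3, usd_per_kwh=0.10, pue=1.5):
    """Electricity cost of extra draw, scaled by PUE to include cooling overhead."""
    hours = years * 365 * 24
    return extra_watts / 1000 * hours * usd_per_kwh * pue

# Assume a hypothetical 150 W higher draw per socket vs. the competition.
extra = energy_cost(150)
print(f"Extra energy + cooling cost over 3 years: ${extra:,.0f} per socket")
```

Under those assumptions the discount would have to exceed several hundred dollars per socket just to break even on energy, before counting the servicing of the water-cooling gear itself.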
This thread saved me the search for a previous similar one, because I have something to add that I have said often already.
True. Just go to an OEM site and see what they offer. More on that later.
In the company I work for it's much more than 3 years. I have access to a virtual machine on a server to do some compute (CPU) and long-running stuff. That server runs a Broadwell-based Xeon.
This brings me right to the next point. Since they also run more important stuff on that server, my gobbling up lots of CPU irregularly for hours led them to reduce the cores for this VM from 8 to 4. 8 wasn't great already, but 4 is a joke, so we started discussions about new hardware for my needs. And there is so much red tape and cluelessness it's infuriating.
I specced out a reasonable 32-core TR workstation, which would cost around $8000 (with an expensive GPU for deep learning). But no, can't do that, because it's not standard, and non-standard hardware can't be connected to the network (which is mandatory for accessing the data). Hence it must be a server of a specific type from either of 2 big OEMs. The guy getting quotes has 0, absolutely 0, clue about hardware. Shocking. He said the only standard-compliant server with a GPU they can offer would be a dual 28-core Xeon Platinum (sic), and you can only buy them in 2 nodes. He didn't mention a price, but that thing would probably cost around $100k, is in some ways completely overpowered, but is still slower in single-thread workloads.

The discussion isn't finished, but this is what AMD is competing with. Just having a good CPU at a good price doesn't mean all that much. This stuff can't be made up. BTW, cloud is not an option, because company policy also prohibits any kind of sensitive data leaving the company network.
Send them to my house, and I will show them what an EPYC 7742 will do.

This is an important point, and I want to reiterate it. If most organizations are like mine, they have a contract with one or more vendors who provide higher-tier support, supply their hardware, and define their hardware and software baseline, which cannot be deviated from.
So what you end up with is a scenario where, despite the existence of a superior product, you are stuck with the status quo because that product is tested, certified, and outlined in an agreement/contract.
It will take time for AMD to infiltrate these markets, more due to how slowly the bureaucratic cogs turn than anything else, I suspect.
What is perplexing to me is that AMD is gaining market share at a much slower rate than they did during the K8 heyday.
I'd honestly put that down more to the age of the companies (and their nested bureaucracies), as well as the maturity (in a negative sense here) of the local IT markets, than to any geographical or cultural influence. Though it would be very interesting to see how other Far East countries would fare compared to China.

I'm surprised no one has commented on the huge differences between geographical/cultural areas, so I'll do it. China seems much more amenable to new or novel solutions than the rest of the world. I'll leave the significance of that to the individual.
True; however, Zen+ (Pinnacle Ridge) was only a small improvement over Zen (circa 3 percent higher IPC), while Zen 2 has delivered about 15 percent.
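Those per-generation figures compound, so taken together the quoted ~3% and ~15% steps imply a cumulative IPC gain of roughly 18% for Zen 2 over the original Zen (the generation-to-generation percentages are the forum's figures; actual gains vary by workload):

```python
# Compound the per-generation IPC gains quoted above
# (~3% Zen -> Zen+, ~15% Zen+ -> Zen 2; illustrative, workload-dependent).
zen_plus = 1.03          # Zen+ IPC relative to Zen
zen2 = zen_plus * 1.15   # Zen 2 IPC relative to Zen
print(f"Zen 2 vs original Zen IPC: +{(zen2 - 1) * 100:.1f}%")  # roughly +18%
```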