Question Which is faster for SQL Server data access and web page load: AMD Epyc (faster CPU) or Intel Xeon with Optane Persistent Memory DIMMs?

BizAutomation

Junior Member
Mar 1, 2020
7
0
11
Say you're thinking of upgrading your servers and your goal is fast access to SQL Server data (a database with lots of joins and tables, rendering web pages over the web). In this case your RAM ceiling will be 128GB.

While I appreciate the raw power and value of Epyc vs Xeon, it seems to me that for SQL Server and web apps, the slower Xeon CPU when paired with Storage Class Memory (SCM) is the clear performance winner. I'm not an expert in this realm, and I did look to TPC-E for guidance (didn't find any, really), but given these parameters I wanted to get feedback from anyone with knowledge in this particular area.

Thoughts, anyone?
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,846
3,189
126
slower Xeon CPU when paired with Storage Class Memory (SCM) is the clear performance winner.


It's not only that, but ECC registered RAM has been validated over and over again on Xeons.

Epyc has the advantage in cores and overall operation, which makes it the better system when you're doing a lot of virtualization.
However, SQL Server has been validated on Xeon for almost a decade, has mission-critical deployments behind it, and Xeon is simply the better all-around choice for SQL operations until we see some mileage on Epyc.
 

BizAutomation

Junior Member
Mar 1, 2020
7
0
11
Yes, but neither of these is the priority, nor within the parameters of my question. The focus of the question isn't max RAM (again, in my spec the limit is 128GB) nor validation. The question is all about random-access performance of pages against a SQL Server database, and the latency from when a user clicks a button or link to when the page is refreshed in their browser, nothing more.
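
Since click-to-render latency is the stated metric, it may be worth measuring it directly on the current hardware before deciding between platforms. A minimal sketch of such a probe, assuming Python with the requests package; the URL is a hypothetical placeholder standing in for one of the heavy report pages:

```python
# Rough end-to-end latency probe for a data-heavy page.
# Assumes `requests` is installed and the placeholder URL is replaced.
import statistics
import time

import requests

URL = "https://example.com/reports/heavy-join-page"  # hypothetical page
SAMPLES = 50

latencies = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    resp = requests.get(URL, timeout=30)
    resp.raise_for_status()
    latencies.append(time.perf_counter() - start)

latencies.sort()
p95 = latencies[int(0.95 * (len(latencies) - 1))]  # approximate 95th percentile
print(f"median: {statistics.median(latencies) * 1000:.1f} ms")
print(f"p95:    {p95 * 1000:.1f} ms")
```

Running the same probe before and after any hardware or schema change gives an apples-to-apples number for exactly the metric described above.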
 

sdifox

No Lifer
Sep 30, 2005
94,999
15,122
126
Yes, but neither of these is the priority, nor within the parameters of my question. The focus of the question isn't max RAM (again, in my spec the limit is 128GB) nor validation. The question is all about random-access performance of pages against a SQL Server database, and the latency from when a user clicks a button or link to when the page is refreshed in their browser, nothing more.

Sounds like shooting an arrow and then painting the target around it. How big of a DB are we talking about, and how much of the transaction load is writes?
 
  • Like
Reactions: MrGuvernment

BizAutomation

Junior Member
Mar 1, 2020
7
0
11
Up to 500 GB of structured data (unstructured files and images are going to be on NVMe drives anyway, and would be considered cold data), so small potatoes by "Big Data" standards, but big enough to create havoc when dealing with hundreds of tables, thousands of joins, and indexes galore to keep everything tuned. We even use Elasticsearch to off-load super-high-performance tasks, such as search and loading of large partitions, onto what are effectively copies of the real data that get shuffled back and forth between table and copy (a "shard," to be exact). All good for now, but scalability is always a thing, and if you're going to invest in hardware, it's always good to make the most future-proof decision you can. Transactions per second are relatively small, way less than would bottleneck today's NVMe for the next 5 years. The only metric of concern is page rendering based on hot-to-warm data access tiers.
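
For readers unfamiliar with the off-load pattern described above: the hot search path queries an Elasticsearch index that mirrors the SQL data instead of running joins in SQL Server. A hedged sketch against the plain REST `_search` endpoint; the host, index, and field names are hypothetical placeholders, not anything from this thread:

```python
# Hedged sketch: search a hypothetical Elasticsearch index that mirrors hot
# SQL Server data, via the standard REST _search endpoint.
import requests

ES_SEARCH_URL = "http://localhost:9200/orders_hot/_search"  # placeholder index

query = {
    "size": 25,
    "query": {"match": {"customer_name": "acme"}},  # placeholder field/value
    "sort": [{"order_date": "desc"}],
}

resp = requests.post(ES_SEARCH_URL, json=query, timeout=5)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])
```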
 

sdifox

No Lifer
Sep 30, 2005
94,999
15,122
126
Up to 500 GB of structured data (unstructured files and images are going to be on NVMe drives anyway, and would be considered cold data), so small potatoes by "Big Data" standards, but big enough to create havoc when dealing with hundreds of tables, thousands of joins, and indexes galore to keep everything tuned. We even use Elasticsearch to off-load super-high-performance tasks, such as search and loading of large partitions, onto what are effectively copies of the real data that get shuffled back and forth between table and copy (a "shard," to be exact). All good for now, but scalability is always a thing, and if you're going to invest in hardware, it's always good to make the most future-proof decision you can.


I just don't understand choosing Optane vs more memory or NVMe storage space.
 

razel

Platinum Member
May 14, 2002
2,337
90
101
Web page load? Get people wired connections and employ local server caching closer to where your users are.

If you are doing this for an enterprise where such a thing matters, your money is best spent on optimization.

Even if you buy both servers and benchmark them, your money and time are seriously best spent elsewhere. One change in a web page's design can change everything.
 
  • Like
Reactions: MrGuvernment

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,846
3,189
126
I just don't understand choosing Optane vs more memory or NVMe storage space.

Ideally it should always be max memory, then Optane, and then NVMe, in that order, I'd assume.
So the OP should max out memory first before adding Optane or an NVMe cache card.
 
  • Like
Reactions: MrGuvernment

BizAutomation

Junior Member
Mar 1, 2020
7
0
11
Ideally it should always be max memory, then Optane, and then NVMe, in that order, I'd assume.
So the OP should max out memory first before adding Optane or an NVMe cache card.

128GB RAM is already maxed out (it's the limit allowed by the version of SQL Server each OLTP database server uses), which is why I mentioned 128GB being the limit. I guess it really doesn't matter if we almost never restart the servers and flush the hot cached data from RAM, but that isn't always going to be the case... but you bring up a good point. Maybe the (in)frequency is such that flushing non-persistent DRAM from time to time, and putting up with the small delays every few months that result (about the cadence at which hard boots are required), is a small price to pay for the benefits of going without Optane SCM, one of them being that we could settle on an Epyc platform. I'd much rather go the Epyc route if I had my druthers, and not just for economic reasons. The thought of a company becoming as political as Intel has is very unsettling to me (I'll leave it at that, unless someone else is brave enough to pick up the mantle).
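
For anyone following along, a quick way to see what the instance is configured to use (as opposed to the edition's hard buffer-pool cap) is to read `max server memory (MB)` from sys.configurations. A minimal sketch, assuming pyodbc and a reachable instance; the connection string is a placeholder:

```python
# Minimal sketch: read SQL Server's configured memory cap from
# sys.configurations. Connection string below is a placeholder.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute(
    "SELECT name, value_in_use FROM sys.configurations "
    "WHERE name = 'max server memory (MB)';"
)
for name, value in cur.fetchall():
    # 2147483647 means 'no explicit cap'; the edition limit still applies.
    print(name, value)
conn.close()
```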
 

Billy Tallis

Senior member
Aug 4, 2015
293
146
116
it's the limit allowed by the version of SQL Server each OLTP database server uses

Is that a licensing limit that checks how much RAM is in the machine, or is it a licensing limit on how much RAM the software will allocate on its own, or something else?

Also, are you aware that taking advantage of the non-volatility of Optane DCPMM requires software changes? If you have software that is unwilling to use more than 128GB of RAM, it is almost certainly not going to be able to benefit from non-volatile memory.
 

BizAutomation

Junior Member
Mar 1, 2020
7
0
11
Is that a licensing limit that checks how much RAM is in the machine, or is it a licensing limit on how much RAM the software will allocate on its own, or something else?

Also, are you aware that taking advantage of the non-volatility of Optane DCPMM requires software changes? If you have software that is unwilling to use more than 128GB of RAM, it is almost certainly not going to be able to benefit from non-volatile memory.

My understanding is that Optane SCM DIMMs used in block mode require no changes to the software stack, while still reaping a 10x improvement vs NVMe, whereas to get the 100x improvement you have to use DAX mode, which does require a change to the underlying software. There's certainly no limit imposed by the ERP software using SQL, so it's only the licensing of SQL Std that limits access (there's a little more allocated for columnstore index use, but we're not currently employing that). My assumption, as is yours if I get your point, is that the database engine doesn't see any distinction between an Optane DIMM and DRAM, hence you end up hitting the 128GB pool regardless. Since this deserves a bit more rigor than idle assumption, and since it was me that posted the question, I'll do the legwork to find out and will post my findings here... stay tuned!
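
One low-effort way to test the assumption above (that the engine sees Memory Mode PMem as ordinary RAM) is to compare the physical memory SQL Server reports against the installed DRAM. A hedged sketch using the standard sys.dm_os_sys_info DMV; the connection string is a placeholder:

```python
# Sketch: ask SQL Server how much physical memory and commit target it sees.
# If Optane PMem is running in Memory Mode, it should appear here as plain
# RAM, indistinguishable from DRAM. Connection string is a placeholder.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute(
    "SELECT physical_memory_kb / 1024 AS physical_mb, "
    "       committed_target_kb / 1024 AS target_mb "
    "FROM sys.dm_os_sys_info;"
)
physical_mb, target_mb = cur.fetchone()
print(f"Physical memory visible to SQL Server: {physical_mb} MB")
print(f"Memory manager commit target:          {target_mb} MB")
conn.close()
```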
 

sdifox

No Lifer
Sep 30, 2005
94,999
15,122
126
My understanding is that Optane SCM DIMMs used in block mode require no changes to the software stack, while still reaping a 10x improvement vs NVMe, whereas to get the 100x improvement you have to use DAX mode, which does require a change to the underlying software. There's certainly no limit imposed by the ERP software using SQL, so it's only the licensing of SQL Std that limits access (there's a little more allocated for columnstore index use, but we're not currently employing that). My assumption, as is yours if I get your point, is that the database engine doesn't see any distinction between an Optane DIMM and DRAM, hence you end up hitting the 128GB pool regardless. Since this deserves a bit more rigor than idle assumption, and since it was me that posted the question, I'll do the legwork to find out and will post my findings here... stay tuned!


How many instances of these SQL servers are you running? Consolidation may be cheaper if you have a high instance count. The Epyc 7F series may suit your current use case, particularly the 7F52 with 256MB of L3 cache.
 
Last edited:

BizAutomation

Junior Member
Mar 1, 2020
7
0
11
Found a very interesting bit of information about SQL Standard Edition's 128GB limit, and how it might still be exploited with persistent memory without affecting the 128GB limit. Needs a bit more use-case decoding, but here it is:

All releases of SQL Server Standard Edition restrict the buffer pool size to 128 GB, including SQL Server 2019. An interesting aspect of SQL Server 2019 Standard Edition is the ability to use features such as Hybrid Buffer Pool, which leverages persistent memory as both a storage device and memory, without the persistent memory capacity contributing to the buffer pool limit. The persistent memory capacity is not counted as part of the 128 GB limit, which is calculated solely on buffer pool usage in DRAM.
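
If it helps anyone test this, hybrid buffer pool is switched on at the instance level and then per database. The statements below are written from memory of the SQL Server 2019 documentation and should be verified before use (an instance restart and data files on a DAX-formatted PMem volume are also required); the database name and connection string are placeholders:

```python
# Hedged sketch: enable SQL Server 2019 hybrid buffer pool so clean data
# pages can be referenced directly on persistent memory. T-SQL syntax is
# recalled from the Microsoft docs -- verify before running in production.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # run server-level configuration changes outside a transaction
)
cur = conn.cursor()

# Instance-level switch; takes effect after the instance is restarted.
cur.execute(
    "ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED HYBRID_BUFFER_POOL = ON;"
)

# Opt a specific database in (database name is a placeholder).
cur.execute("ALTER DATABASE ErpDb SET MEMORY_OPTIMIZED = ON;")

conn.close()
```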
 
Last edited:

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,101
126



Seems not available yet.

The spec mentions DDR-T; is it different from DDR4? Does it require a new motherboard?
 
Last edited:

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,101
126
Well, to use Intel Optane PMem, you need Xeon Scalable processors, and the motherboard BIOS/firmware needs to support the DDR-T protocol (the physical interface is the same as DDR4).

Apparently you can't just buy the PMem modules, plug them into your system, and hope it works.



 
Last edited:

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,101
126
AMD teams up with WD to compete with Intel


But the comments:

The likelihood of Intel extending Optane support to AMD processors is as likely as the Moon reversing its orbital direction. Hence AMD’s working with Western Digital and ScaleMP.

Compared to the use of Optane DIMMs to expand effective memory, the ME200 costs less money, is probably simpler to implement and is available for AMD EPYC as well as X86 processors. Optane-enhanced memory servers may well go faster though.
Back in 2016 ScaleMP said its software can pool Optane SSDs and DRAM as well as NAND SSDs and DRAM. We don’t hear so much about this now.

 
Last edited: