> Long term, your company should really be looking at rewriting the software. That's insanely inefficient to depend on brute-force computing power.

For the record, we didn't write the software. We own licenses from the software company that owns it, so we're at their mercy regarding improvements to the software itself.
> What OS does it support? Buying bleeding-edge hardware means you need an up-to-date OS for support, but many times these engineering CAD programs are slow to support OS updates. I was in a similar position as you a few years ago, and we couldn't go with the latest hardware because it required an OS version that the software wouldn't support. Granted, the software probably could have been made to work on the updated OS, but when you are talking multi-million-dollar projects, people get real antsy at the words "not officially supported" showing up in the design resources plan.

Windows. I don't think Turin is so groundbreaking that it requires a new version of Windows.
> We've spent time in the past evaluating other software that accomplishes the same task but can be run more efficiently, e.g. software that is more parallelizable and can thus take advantage of cloud computing, but nothing stood out as a clear winner. FWIW, the speed at which we can run the analysis is typically not the bottleneck for the project, because the project schedule gives us years to pathfind and refine the design, which is plenty of time for dozens of iterations. Being able to squeeze in another 20 to 30% more iterations generally doesn't get us a significantly more optimized design. Additionally, being a rather conservative engineering profession (civil), we tend to lean away from over-optimizing the design for the performance objective, because the objective when you design for earthquakes is inherently "fuzzy" to begin with. No two earthquakes are the same, and we design the building using only a suite of eleven to fourteen ground motions that are based on past earthquakes. If the fault near your building ruptures in a way that is completely unlike the motions you used to analyze the building, your design had better be pretty robust and insensitive to the spectral characteristics of the shaking. If it were too sensitive to *how* you shake the building, i.e. safe when shaken one way but collapsing when shaken another, it would be a bad design. For these reasons, intentionally not over-optimizing the design to just those 11 or 14 ground motions is the right approach in my opinion.

Do you run inside of VMs? Or do you use Windows Server and a bunch of individual sessions?
> Do you run inside of VMs? Or do you use Windows Server and a bunch of individual sessions?

We use VMs and Windows Server.
> You post schematics without even knowing what they are about, to make people think that you are in the know, and you are saying things that are false. I never said that there are 3 kinds of regulators in Zen, but a single kind, namely capacitors that are charged by high-speed switching MOSFETs and without inductances. Learn to read before you keep trolling.

That kind of regulation won't work well when input and output voltages are close to each other - the needed capacitance would be way more than can practically be implemented in silicon. The Zen LDO is just a simple linear regulator: different resistances are connected between Vin and Vout, controlled by switch transistors, and the logic just switches in the correct amount of resistance between input and output to achieve the wanted voltage for any current demand. And efficiency isn't terrible for small voltage drops even with such simple linear regulation - a 20% voltage drop gives about 80% efficiency.
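To make that efficiency figure concrete, here's a minimal sketch, assuming an ideal linear regulator whose only loss is the series voltage drop (the same current flows in and out):

```python
def ldo_efficiency(v_in: float, v_out: float) -> float:
    """Ideal linear-regulator efficiency: input and output current are
    equal, so P_out / P_in = (V_out * I) / (V_in * I) = V_out / V_in."""
    return v_out / v_in

# The 20% voltage drop from the post above: 1.0 V in, 0.8 V out.
print(ldo_efficiency(1.0, 0.8))  # 0.8 -> about 80% efficient
```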
> We use VMs and Windows Server.

Should be fine then. It's a bit more complicated on the Linux side with these types of situations. Hopefully you can hold out for Turin, but Genoa would be a beast itself and a gigantic upgrade over what you have currently.
> We use VMs and Windows Server.

Any chance they could support buying a 2P 9554 setup? I mean, $8,000-$10,000 to get two 9554s on eBay + 384 GB of DDR5-4800 registered (new) + a 2P motherboard (new). ANY company can afford that. Then you can see what it will really do, and if it works out, save the rest for Turin.
> No decent-sized business is going to touch eBay for major hardware purchases.

He said it was small. If they won't touch eBay, maybe retail? It would be a few thousand more that way.
> He said it was small. If they won't touch eBay, maybe retail? It would be a few thousand more that way.

No, @Hitman928 is right. We would never touch eBay. When I said small, I meant a few hundred people.
> They did have a billion in Operating Income in Client (they don't break that down by Desktop/Notebook).

It seems like the Notebook market is sustaining the entire Intel organization.
> No, @Hitman928 is right. We would never touch eBay. When I said small, I meant a few hundred people.

OK, well, even if you bought the whole server from Dell or something, one would not be that much. My two 9554s are world-record breaking (it's out there, a 2P 9554 config). They are awesome and will convince any sane IT manager.
> And efficiency isn't terrible for small voltage drops even with such simple linear regulation - a 20% voltage drop gives about 80% efficiency.

In practice it looks more like 25-50 mV, so the Vdrop or IR losses don't seem to be of big concern.
> OK, well, even if you bought the whole server from Dell or something, one would not be that much. My two 9554s are world-record breaking (it's out there, a 2P 9554 config). They are awesome and will convince any sane IT manager.

I'll see what I can do. I am not sure of the urgency of the IT department in upgrading, only that we are due. If they tell me it needs to happen, then I'll see if I can get them to upgrade only a portion of the racks rather than the whole lot. I just hope that AMD gives some kind of presentation at CES, or at least within Q1 2024, so that I have something concrete to point to when I make my case to wait 6 months for Turin. I can't be pointing to an "obscure internet forum" as my source lol
> I'll see what I can do. I am not sure of the urgency of the IT department in upgrading, only that we are due. If they tell me it needs to happen, then I'll see if I can get them to upgrade only a portion of the racks rather than the whole lot. I just hope that AMD gives some kind of presentation at CES, or at least within Q1 2024, so that I have something concrete to point to when I make my case to wait 6 months for Turin. I can't be pointing to an "obscure internet forum" as my source lol

Hand them this: https://www.phoronix.com/review/amd-epyc-9654-9554-benchmarks/15 or other reviews like it. Well known and respected.
> It looks encouraging that samples of these are out already. Compared to Zen 4: the desktop Zen 4 launch was ~Sep 2022 and the Phoenix announcement came in Jan 2023, while shipping products only started appearing in volume around June 2023. It looks like AMD may compress this timeline with the Zen 5 generation.

CES 2024 is in 4 months. They are going to somehow present Strix there, since they can't afford to break the sacred AMD Execution™.
> They are going to somehow present Strix there

No.
> We use VMs and Windows Server.

Not trying to get you to divulge your trade secrets, but do you use some form of VM orchestration? Because running dozens of VMs manually on a single server and switching between them sounds like hell for whoever is assigned that job.
> They are awesome and will convince any sane IT manager.

Don't think there are many of those.
> Not trying to get you to divulge your trade secrets, but do you use some form of VM orchestration? Because running dozens of VMs manually on a single server and switching between them sounds like hell for whoever is assigned that job.

I have no idea what VM orchestration even means, but I know that we have an aggregate number of CPU cores that is divided up into a handful of VMs, each running Windows. When people want to run these fancy analyses, we remote-desktop into a VM and run the analysis there. The engineers coordinate internally to make sure no VM gets over-utilized, because the analysis slows down a lot once you exceed one analysis run per core.
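A minimal sketch of the bookkeeping being described, with hypothetical VM names and core counts (the thread gives no actual figures): launching another analysis is fine as long as concurrent runs stay at or below a VM's core count.

```python
# Hypothetical core counts and current run counts per VM.
vm_cores = {"vm-analysis-1": 32, "vm-analysis-2": 32, "vm-analysis-3": 64}
running = {"vm-analysis-1": 30, "vm-analysis-2": 32, "vm-analysis-3": 41}

def can_launch(vm: str) -> bool:
    """True if one more analysis keeps runs <= cores, the rule of
    thumb from the post above."""
    return running[vm] + 1 <= vm_cores[vm]

# Pick the VM with the most spare cores, mirroring the engineers'
# informal coordination.
best = max(vm_cores, key=lambda vm: vm_cores[vm] - running[vm])
print(best, can_launch(best))  # vm-analysis-3 True
```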
> Hand them this: https://www.phoronix.com/review/amd-epyc-9654-9554-benchmarks/15 or other reviews like it.

Won't work on these types of people. They will ask specifically to be shown how fast THEIR software will run. My company wasted almost two months' worth of development costs on a UAT environment in Azure Cloud, running our financial applications, to see if there were any tangible benefits. I was the one doing the benchmarking, and the gains weren't that great considering the money being spent; they could have gotten some really cool Epyc hardware for the same cost as that testing, but no. In the end, they accepted those gains as good enough and moved production to the cloud. I'm not happy about it, but at least it's miles ahead of the Ivy Bridge-E Xeons we had.
> Like that. It's really low-tech, I know.

If the analysis doesn't need to run 24/7, you could just move to Azure/AWS Cloud, get results from the VMs, and then shut them down to save on cost. Depending on how much time you save with the faster Epyc VMs, the running monthly cost could be pretty reasonable, and your IT team wouldn't need to maintain physical infrastructure.
> If the analysis doesn't need to run 24/7, you could just move to Azure/AWS Cloud, get results from the VMs, and then shut them down to save on cost. Depending on how much time you save with the faster Epyc VMs, the running monthly cost could be pretty reasonable, and your IT team wouldn't need to maintain physical infrastructure.

Yeah, good points all around. We've considered AWS and Azure in the past, but the cost-benefit analysis didn't pan out. Unfortunately, turning the cloud-computing spigot on and off isn't as fine-grained as you'd think. If we went with cloud computing, the IT department would turn the spigot on from Day X to Day Y, where X and Y are the start and end dates of the design phase; it would not be the engineer spinning up a cloud instance and shutting it down when the analysis completes. As a result, there would be a lot of idle time where we'd be paying for the cloud server but not getting anything out of it. It wouldn't be a Kubernetes-esque setup that fires up instances on demand. When we evaluated cloud computing a few years ago, prices weren't that great when normalized on a per-design-iteration basis, either. We concluded that the TCO was better if we own the hardware and replace it every 5 to 6 years.
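A toy version of that comparison, with entirely hypothetical prices and utilization (none of these figures come from the thread), just to show how idle time drives the cloud number up:

```python
# All figures are hypothetical, for illustration only.
cloud_rate = 4.0            # $/hour for a large cloud VM, assumed
phase_hours = 2 * 365 * 24  # spigot left on for a 2-year design phase
utilization = 0.30          # fraction of that time analyses actually run

server_cost = 60_000        # owned 2P server, assumed purchase price
lifetime_years = 5          # replacement cycle from the post above

# Cloud: you pay for the whole phase, idle hours included.
cloud_total = cloud_rate * phase_hours
# Owned: amortize the purchase over its lifetime; this phase consumes
# 2 of those years (power and admin costs ignored for simplicity).
owned_share = server_cost * (2 / lifetime_years)

print(f"cloud: ${cloud_total:,.0f}, owned: ${owned_share:,.0f}")
# cloud: $70,080, owned: $24,000 - with these assumptions the idle
# time makes cloud roughly 3x the cost, matching the post's conclusion.
```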
> IT department

I guess they want to keep their jobs. Personally, I would hand a 7945HX laptop to every engineer and let them do their analysis wherever they want and then collaborate in some shared workspace.
> I guess they want to keep their jobs. Personally, I would hand a 7945HX laptop to every engineer and let them do their analysis wherever they want and then collaborate in some shared workspace.

Haha, that's not exactly viable, because the analysis can take a few days to complete. Our laptops can handle most analyses, just not the nonlinear time-history earthquake sims that we use the servers for.