I'm actually waiting for RAM to go obsolete, with Gen7 or Gen8 NVMe's replacing RAM / storage altogether. Gen5 is, I think, considered faster than DDR4 in a RAID-0 utilizing a full x16 PCIe slot. I think by Gen7 we may see the DDR slot disappear altogether.

> Gen5 is, I think, considered faster than DDR4 in a RAID-0 utilizing a full x16 PCIe slot.
Have you set that up in your TR system?
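For what it's worth, the raw numbers behind that Gen5-vs-DDR4 claim do line up on paper. A minimal back-of-the-envelope sketch, using nominal spec figures (sequential bandwidth only; protocol and controller overhead ignored):

```python
# Peak-bandwidth sketch: PCIe 5.0 x16 vs. dual-channel DDR4-3200.
# All figures below are nominal spec values (assumptions), not measured throughput.
PCIE5_GT_PER_LANE = 32      # PCIe 5.0 signals at 32 GT/s per lane
LANES = 16
ENCODING = 128 / 130        # 128b/130b line encoding
pcie5_x16 = PCIE5_GT_PER_LANE * LANES * ENCODING / 8   # GB/s, one direction

DDR4_MTS = 3200             # DDR4-3200 transfers 3200 MT/s
CHANNEL_BYTES = 8           # 64-bit wide channel
CHANNELS = 2
ddr4 = DDR4_MTS * CHANNEL_BYTES * CHANNELS / 1000      # GB/s

print(f"PCIe 5.0 x16 : ~{pcie5_x16:.0f} GB/s")   # ~63 GB/s
print(f"DDR4-3200 2ch: ~{ddr4:.1f} GB/s")        # 51.2 GB/s
```

Sequential bandwidth is the only metric where the comparison holds, though; latency is a different story, as the replies below point out.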
> I'm actually waiting for RAM to go obsolete, with Gen7 or Gen8 NVMe's replacing RAM / storage altogether.
How? NAND flash has terrible latency. You could have a 1 TB/s NVMe drive and you still couldn't replace DRAM. Optane DIMMs were 100x faster in this regard, and they were still significantly behind DRAM.
> Have you set that up in your TR system?
No, not with PCIe 5.0.
> How? NAND flash has terrible latency. You could have a 1 TB/s NVMe drive and you still couldn't replace DRAM. Optane DIMMs were 100x faster in this regard, and they were still significantly behind DRAM.
I am predicting that by the time we hit Gen7, the PCIe lanes will be Gen7 as well, and it will probably work itself out, since the PCIe lanes are all tied directly to the CPU now. It's like how you look at sci-fi movies and see how everything is based off iso crystals of some sort. I am thinking that will be the RAM / OS / storage all in one part, and the CPU / GPU / motherboard will be its own.
> It's like how you look at sci-fi movies and see how everything is based off iso crystals of some sort.
That changes nothing. NAND tech itself is limited to about 50-100 µs, and that's under random reads. It's literally a 1000x difference. Writes are worse, because you need an erase-then-write cycle, which is so slow that without a DRAM buffer it would be absolutely unusable beyond 1-bit SLC SSDs. Under random writes, bufferless 2-bit NAND is in slow-HDD territory for speed. Even DRAM-less SSDs use tricks such as borrowing system DRAM or SLC caches.
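To put that "1000x" in perspective, here is a rough sketch with ballpark figures (the 50-100 µs NAND number from above, ~100 ns for DRAM random access, and a few hundred ns for an Optane DIMM; all assumed typical values, not measurements):

```python
# Ballpark latency comparison; every figure here is an assumed typical value.
DRAM_NS = 100        # DRAM random access: ~100 ns
OPTANE_NS = 350      # Optane DIMM read: a few hundred ns (assumed)
NAND_NS = 75_000     # NAND random read: 50-100 us, midpoint ~75 us

print(f"NAND vs DRAM  : ~{NAND_NS / DRAM_NS:.0f}x slower")    # ~750x
print(f"NAND vs Optane: ~{NAND_NS / OPTANE_NS:.0f}x slower")  # ~214x
print(f"Optane vs DRAM: ~{OPTANE_NS / DRAM_NS:.1f}x slower")  # ~3.5x
```

No number of PCIe generations changes the first two lines; the bottleneck is the NAND cell itself, not the link.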
> How many times have you had DRAM fail? I did once. And since any that is worth a crap has a lifetime warranty, all I had to do was put "Failed in Memtest86" in the RMA and I got a free replacement.
They fail, of course, but those are normal variations, and unpredictable ones at that. NAND life cycles, though? Those are predictable. The endurance is low enough that while most of us may never experience wear-out, it's within the realm of plausibility.
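The predictability is mostly arithmetic: rated program/erase cycles times capacity gives an endurance budget you can divide by your write rate. A minimal sketch with made-up but plausible numbers (none of these are vendor specs):

```python
# NAND endurance estimate; every input is an illustrative assumption.
capacity_tb = 1.0     # 1 TB drive
pe_cycles = 1000      # assumed P/E cycle rating for 3-bit TLC NAND
write_amp = 2.0       # assumed write amplification factor

endurance_tbw = capacity_tb * pe_cycles / write_amp   # ~500 TBW budget
tb_per_year = 0.05 * 365                              # 50 GB/day of host writes
print(f"~{endurance_tbw:.0f} TBW -> ~{endurance_tbw / tb_per_year:.0f} years at 50 GB/day")
```

Which is the point: most users will never exhaust the budget, but unlike a random DRAM failure, you can see NAND wear-out coming.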
> If anything, I wish Intel/Micron hadn't given up on Optane. The performance at low QD levels was unbeatable. Just a bit too pricey to be worth it for the mass market, I guess.
I really think it had potential, but I get their point. If they had succeeded, they would eventually have had to compete in a low-margin market, which means only a moderate amount of money unless you are #1.
I was going to post what @DavidC1 said, but he beat me to it, twice. My first concern was the limited lifespan of NAND, but latency would be a huge factor that I did not even consider at first. CPUs need low latency.
> Novalake Celeron with 24 Xe4 graphics for $150, please!
Possibly more absurd than my P4 wish. You know they won't ever do it, because of the "there's no market for it" excuse. But it would be the quickest way to establish a huge following, especially in third-world markets, enticing people there to ditch low-end dGPUs.
At least with the P4, the millions of people still alive who used to have one might buy it again out of nostalgia.
> Why?
I guess because Intel advertised the hell out of it during that era.
> It was good in integer code with few branches and SSE2, but those were the highlights IMHO.
And it can be a good integer-crunching CPU with upgraded AVX-512 on 18A, cheaper to make due to a smaller die size than the current architectures, and easily hitting 6.5 GHz due to lesser complexity. It could be the modern Celeron for the cash-strapped.
> And it can be a good integer-crunching CPU with upgraded AVX-512 on 18A, cheaper to make due to a smaller die size than the current architectures, and easily hitting 6.5 GHz due to lesser complexity. It could be the modern Celeron for the cash-strapped.
LOL. You know the "Bonnell" Atom? The one that still causes people to criticize E-cores because they can't learn beyond what they read 17 years ago? The in-order design?
> Possibly more absurd than my P4 wish. You know they won't ever do it, because of the "there's no market for it" excuse. But it would be the quickest way to establish a huge following, especially in third-world markets, enticing people there to ditch low-end dGPUs.
You don't know much about CPU design, do you? Not even in a high-level sense?
> Celeron with a fast iGPU can be done right now.
Never said it can't be done. It's just that Intel bigwigs will refuse to do it.
> What you are proposing sounds exactly like Larrabee, except using the P4 as the core rather than a modern Pentium (P5). How'd that work out?
P55C is much more ancient than a P4. And Larrabee was supposed to be a programmable GPU running on a sea of tiny CPU cores. Had Intel managed to make it work, and had it not been abandoned after Pat left, it could've made realtime raytracing fashionable much earlier than GeForce RTX. Intel's problem has always been that they find it hard to stick with their ideas and see them through to completion. They get spooked by anything mildly successful from AMD and then reactively change their tune to one-up AMD, often with disastrous results. Hybrid cores are one such disaster.
> We already got a modern P4; it's called Sandy Bridge. It takes the crazy part of the P4 and optimizes it to be much more efficient. It's 2.5x the perf/clock.
OK, I'm down with that. Let's give Sandy Bridge 35 pipeline stages, AVX-512, and 6.5 GHz all-core clocks with boost clocks a few hundred MHz above that, and we are good.
> P55C is much more ancient than a P4. And Larrabee was supposed to be a programmable GPU running on a sea of tiny CPU cores.
It did work. It got so delayed that they repurposed it as a workstation accelerator.
> Can't the current E-core-only Xeons, like Sierra Forest, be considered a Xeon Phi of sorts? No GPU-wannabe functionality, but they fit the definition of a sea of small cores. They also have socket compatibility with the P-core-only Xeons, which is something Xeon Phi apparently intended but never managed, in the sense that it used a different pinout and motherboards.
Xeon Phi was more accelerator-oriented, hence the low clocks. Also, it was 1S only. It had HMC memory on package, which is Betamax compared to HBM memory. They modified the core to support 4 threads per core and many other small details that would specifically boost performance in HPC. The architect for the chip said the core had been so heavily modified that they didn't call it Silvermont. They touched things like the out-of-order buffers, so it was a redesign.