
Info Data movement is the real power hog

igor_kavinski

Senior member
Jul 27, 2020
331
153
76
For Next-Generation CPUs, Not Moving Data Is the New 1GHz - ExtremeTech

That's crazy. I had no idea that moving data was this power intensive. This explains perfectly how Apple minimized M1's power consumption so drastically. On this note, I've always wondered why desktop motherboards haven't transitioned to SODIMMs instead of DIMMs. CPU socket surrounded on all sides by four SODIMM slots should mean less distance for the data to travel and also less expense in creating the required traces on the motherboard. Do DIMMs exist purely for the sake of overclocking RAM, the cost of which rarely justifies the few percent gained in terms of performance?
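For a sense of scale, here's a back-of-envelope sketch using rough, frequently cited 45nm-era energy figures (from Mark Horowitz's ISSCC 2014 keynote). The exact numbers are assumptions and vary by process node; the ratio is the point:

```python
# Back-of-envelope comparison of energy spent computing vs. moving data.
# Figures are rough 45nm-era numbers often cited from Mark Horowitz's
# ISSCC 2014 keynote; treat them as order-of-magnitude assumptions only.
PJ = 1e-12  # one picojoule, in joules

energy_fp32_mul = 3.7 * PJ    # one 32-bit FP multiply
energy_dram_64b = 640.0 * PJ  # fetching one 64-bit word from DRAM

# Energy to stream 1 GiB out of DRAM vs. one FP multiply per word
words = (1 << 30) // 8        # 64-bit words in 1 GiB
move_j = words * energy_dram_64b
compute_j = words * energy_fp32_mul

print(f"moving 1 GiB from DRAM   : {move_j * 1e3:.1f} mJ")
print(f"one FP32 mul per word    : {compute_j * 1e3:.1f} mJ")
print(f"movement / compute ratio : {move_j / compute_j:.0f}x")
```

With these figures, touching a word in DRAM costs over two orders of magnitude more energy than operating on it, which is why keeping data close to the cores (big caches, on-package DRAM like the M1's) pays off so much.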
 

moinmoin

Platinum Member
Jun 1, 2017
2,775
3,673
136
We've known since Zen, and more specifically Epyc with its huge IOD, first launched that uncore is the area with the big base/idle power usage going forward. (Non-Pro) Threadripper is a good bit more efficient than Epyc just due to using only half the number of RAM channels and PCIe lanes.

It's also why all the low power usage numbers of e.g. ARM-based servers don't really mean much in the end since those (used to) have significantly lower I/O than Epyc.

In any case: as far as efficiency is concerned, not cores but uncore, i.e. I/O, is indeed the main battlefield now and in the future. For mobile that means reducing and optimizing I/O to the minimum possible. For servers it means finding ways to power gate I/O that's not in use without affecting overall performance.
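The effect of a fixed uncore power floor on perf/W can be sketched with a toy model (all wattages below are invented for the example, not measurements of any real chip):

```python
# Toy efficiency model: a chip with high fixed uncore/I/O power loses
# performance-per-watt badly at low utilization, because the uncore
# draw doesn't scale down with load. All numbers are hypothetical.
def perf_per_watt(uncore_w: float, core_w_max: float, utilization: float) -> float:
    """Relative perf/W at a given core utilization (0..1).

    Performance is assumed to scale linearly with utilization;
    uncore power is a fixed floor, core power scales with load.
    """
    perf = utilization
    power = uncore_w + core_w_max * utilization
    return perf / power

# Same cores, different uncore floors, both at 20% load
server_like = perf_per_watt(uncore_w=70, core_w_max=130, utilization=0.2)
mobile_like = perf_per_watt(uncore_w=5,  core_w_max=130, utilization=0.2)
print(f"high-uncore chip: {server_like:.4f} perf/W at 20% load")
print(f"low-uncore chip : {mobile_like:.4f} perf/W at 20% load")
```

In this toy setup the low-uncore chip comes out roughly 3x more efficient at partial load, which is why power-gating unused I/O matters so much for servers that rarely run flat out.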
 
  • Like
Reactions: Tlh97

CakeMonster

Golden Member
Nov 22, 2012
1,046
112
106
If it makes a difference, I'm a bit puzzled why, over the evolution of DDR, the module length hasn't been shrunk a bit between generations and alternative placements haven't been tried.
 

scannall

Golden Member
Jan 1, 2012
1,809
1,309
136
If it makes a difference, I'm a bit puzzled why, over the evolution of DDR, the module length hasn't been shrunk a bit between generations and alternative placements haven't been tried.
Most likely inertia. OEM's in particular like things like connectors and slots to stay the same. Saves on retooling costs for example.
 

mindless1

Diamond Member
Aug 11, 2001
6,365
785
126
For Next-Generation CPUs, Not Moving Data Is the New 1GHz - ExtremeTech

That's crazy. I had no idea that moving data was this power intensive. This explains perfectly how Apple minimized M1's power consumption so drastically. On this note, I've always wondered why desktop motherboards haven't transitioned to SODIMMs instead of DIMMs. CPU socket surrounded on all sides by four SODIMM slots should mean less distance for the data to travel and also less expense in creating the required traces on the motherboard. Do DIMMs exist purely for the sake of overclocking RAM, the cost of which rarely justifies the few percent gained in terms of performance?
I'm thinking that article is focused primarily on servers, where there is little computation and the workload's purpose IS moving data. Given that the article brings up storage-side computing, it's very hard to imagine more power going to data movement than to computation on a compute-intensive desktop task.

Why shrink the DIMM if the board has room for it? The full length allowed more chips per module at whatever DRAM density was current at the time. I also don't think it's practical to put slots on every side of the CPU socket: where would the power and ground planes for the CPU VRM go? VRMs need inductors, whose magnetic fields might interfere with data traces routed too nearby. Even if not, it might require another couple of layers on the board for the data and another ground plane.

There isn't much if any expense in creating the traces on the motherboard; it's all about what copper to etch away and what to leave behind. The exception is when you have to add more layers, which does get more expensive.

Also consider that the expected service life of a desktop system is higher than a laptop, so cooler running memory and less PCB stress from a larger footprint DIMM is desirable.

Otherwise, I'm a bit tired of all the component shrinkage. Connectors like mini-HDMI, 3.5mm headphone jacks, micro-USB, USB-C, even SATA would all be more robust if they were larger, a large part of the problem being the small area available for the solder pads. That tradeoff does seem right in a laptop, tablet, or phone, where space is at a premium.
 
Last edited:

CakeMonster

Golden Member
Nov 22, 2012
1,046
112
106
Connectors is one thing, but shorter RAM slots could have facilitated more practical form factors and mobo designs.
 

mindless1

Diamond Member
Aug 11, 2001
6,365
785
126
^ I don't want a new form factor, am set for life for cases if it stays ATX. :)

Kinda vague too, suggesting some change would automatically be "more practical" when the current layout evolved to be what it is. Making everything higher density means more heat density and more difficulty routing traces, ultimately an increase in cost and a decrease in expected service life. And for what, so my desktop case sitting under my desk can be a few inches shorter when I'd have no use for that space otherwise? Plus I like having enough room in a case to pull individual components out without having to remove many, if any, others to gain clearance.

Things are how they are for good reasons when system portability isn't a high priority, as we see with shorter RAM slots in laptops and memory soldered directly to the board in tablets and phones. Who would suspect that desktop mobo manufacturers have never heard of SO-DIMMs or soldering direct to the PCB?

They could design one yesterday (especially with SO-DIMMs readily available in the laptop market) and have usually chosen not to, except for the crowd that wants a tiny, all-integrated system with limited to no upgradeability. Since those do exist, the more that sales reflect growth in that market segment, the more manufacturers will make them.
 
Last edited:
  • Like
Reactions: lightmanek

Bigos

Member
Jun 2, 2019
45
69
61
The article starts with the statement that 2/3 of power is spent on data movement and 1/3 on computation. But that is blatantly false. According to the Rambus slide shown (which focuses on what Rambus does, i.e. data movement between DRAM devices and the CPU), of the power spent on things other than computation, 1/3 goes to the DRAM devices themselves and 2/3 to the fabric and the on-CPU PHY dedicated to DRAM. Compute power isn't even measured.

I would just assume the whole article is based on false premises and is thus useless. Especially since the cited source is Rambus, a company that sells DRAM-to-CPU interface solutions: they have every incentive to overstate the problems their products solve.

(Each time I say "CPU" I mean a separate silicon device containing compute that accesses DRAM over a separate fabric; you can replace "CPU" with "GPU" or any other device that needs large amounts of DRAM.)
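To restate the slide's breakdown numerically (normalized, illustrative values only): the 1/3 vs 2/3 split lives entirely inside the DRAM-path power budget, and compute power never appears in it, so the article's "2/3 movement vs 1/3 compute" reading doesn't follow:

```python
# Restating the slide's breakdown: the 1/3 vs 2/3 split is within the
# DRAM-interface power budget only; compute power is not part of it.
interface_power = 1.0                      # normalize total DRAM-path power
dram_devices = interface_power / 3         # spent in the DRAM chips themselves
fabric_and_phy = 2 * interface_power / 3   # on-die fabric + DRAM PHY

# The article's claim would need a compute term, which the slide
# never measures:
compute_power = None  # absent from the slide

assert abs(dram_devices + fabric_and_phy - interface_power) < 1e-12
print("DRAM devices :", round(dram_devices, 3))
print("fabric + PHY :", round(fabric_and_phy, 3))
```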
 
