William Gaatjes
Lifer
Originally posted by: Modelworks
Something has to process the instructions from the OS to the various chips on the board. Direct access to onboard chips is not available on the x86 platform. In the embedded world it is done all the time, but that is a totally different architecture. If you use CPU time to get data from your proposed DPU, then it would have to take less time than the CPU would need to do it for itself. Right now the CPU and the data it needs from storage are not a bottleneck for any application except copying of files, and for that there are already controller cards.
Of course there is no direct access for programs, even in embedded systems. And with ARM coming up strong and fast, it is possible, for example with embedded Linux, to have a fully pre-emptive OS. But the OS can do anything you want it to, as long as it knows of the existence of a certain hardware feature and has the code to use it. Just drivers and algorithms. And that is what I mean, and Idontcare explained it perfectly: prefetchers.
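To make the prefetcher point concrete: software can already hint the OS today. This is a minimal sketch, assuming Linux, using `posix_fadvise(POSIX_FADV_WILLNEED)` to ask the kernel to start pulling a file into the page cache before the program actually reads it; the `prefetch_hint` function name is just mine for illustration. A hypothetical DPU would do this kind of thing transparently, in hardware.

```python
import os


def prefetch_hint(path: str) -> bytes:
    """Read a file, but first hint the kernel to prefetch it."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        # Ask the kernel to begin reading the whole file into the page
        # cache in the background (harmlessly skipped if unsupported).
        if hasattr(os, "posix_fadvise"):
            os.posix_fadvise(f.fileno(), 0, size, os.POSIX_FADV_WILLNEED)
        # ...the program could do other work here while the disk reads...
        return f.read()
```

The point of the hint is overlap: the slow storage read happens while the CPU is busy elsewhere, which is exactly the job the DPU idea would offload.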
Combine the OS's knowledge of when data is needed with a specialized core that takes care of that data, and things will speed up greatly. That, plus a one-stop caching system for reads and writes, will improve it. See it like this: you need data on your HDD again right after you modify it.
The DPU gives you the modified data from its large local DRAM cache while that data is still being written to the HDD or SSD. I am sure it sounds familiar now, because that is exactly what the CPU does with its main memory and local on-die cache. But since those chunks of data are relatively small, the hardware can handle it. With the boatloads of data coming from HDD or SSD, and the OS just servicing requests from programs, it is the OS that knows best what will be needed.
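The behaviour I am describing is write-back caching. Here is a minimal sketch of the idea, with a plain dict standing in for the slow HDD/SSD and another dict standing in for the DPU's DRAM; the `WriteBackCache` class and its timings are hypothetical, just to show the shape of it.

```python
import threading
import time


class WriteBackCache:
    """Reads are served from 'DRAM' at once; dirty data is flushed
    to the slow backing store in the background."""

    def __init__(self, backing: dict):
        self.backing = backing   # stands in for the HDD/SSD
        self.cache = {}          # stands in for the DPU's DRAM cache
        self.lock = threading.Lock()

    def write(self, key, value):
        with self.lock:
            self.cache[key] = value  # immediately visible to readers
        # Flush to "disk" asynchronously, off the caller's critical path.
        threading.Thread(target=self._flush, args=(key, value)).start()

    def _flush(self, key, value):
        time.sleep(0.01)             # simulate slow storage latency
        self.backing[key] = value

    def read(self, key):
        with self.lock:
            if key in self.cache:    # cache hit: no disk wait at all
                return self.cache[key]
        return self.backing[key]
```

A read issued right after a write returns instantly from the cache, even though the backing store has not caught up yet; that overlap is the whole win.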
Now of course the HDD uses its local onboard cache for this as well. As do some RAID controllers. As does the OS. All I am saying is: get rid of the separate caching systems and devise one central version.
Applications that need large amounts of data spend more time waiting for the CPU to process the data than they do reading it from storage.
We are coming to a part of PC history where specialized cores take over. When the GPU finally stops being scary, the processing of data will speed up greatly, while the read and write speeds of storage will make the performance gap between HDD/SSD and the CPU/GPU/memory combo even larger.