
Question about NVIDIA CUDA 6 Unified Memory

Revolution 11

Senior member
http://anandtech.com/show/7515/nvidia-announces-cuda-6-unified-memory-for-cuda

Seeing this, I have several questions about the article. NVIDIA claims "complete unified memory support", yet it is only implemented in software. What would a hardware implementation even look like?

If Volta is supposed to be the hardware implementation, does that mean the CPU can use stacked DRAM or VRAM when system RAM is not enough?

I am thinking of a hierarchy of system RAM, then stacked DRAM/VRAM, then the page file, in order of access speed. If a workload requires 10 GB of RAM and the PC only has 8 GB, can the GPU give up its 2 GB to complete the work?
 
As far as I can see, it's not like the GPU's VRAM can be used as a cache for partial data either. An 8 GB array to be sorted can't be passed to a 2 GB card in sections, with the host's larger memory holding the results at slower speeds due to the cached access. It just sounds like all they are doing is automatically handling the copying of the data.
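For what it's worth, here's a minimal sketch of what the feature looks like from the programmer's side. This is a generic illustration, not code from the article; the kernel and sizes are made up. The point is that `cudaMallocManaged` gives you one pointer usable from both CPU and GPU, with the runtime (in software) migrating the data instead of you calling `cudaMemcpy` yourself:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: increment every element of an array.
__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1 << 20;

    // Before CUDA 6: separate host and device allocations, copied by hand.
    //   int *h = (int *)malloc(n * sizeof(int));
    //   int *d; cudaMalloc(&d, n * sizeof(int));
    //   cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);
    //   increment<<<(n + 255) / 256, 256>>>(d, n);
    //   cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);

    // With CUDA 6 Unified Memory: one pointer, and the runtime moves
    // the data between host and device memory behind the scenes.
    int *data;
    cudaMallocManaged(&data, n * sizeof(int));
    for (int i = 0; i < n; ++i) data[i] = i;   // touched from the CPU

    increment<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();   // wait before the CPU reads the results

    printf("data[42] = %d\n", data[42]);
    cudaFree(data);
    return 0;
}
```

As I understand it, on this generation a managed allocation still has to fit in the card's device memory, which fits the suspicion above: the runtime is automating the copies, not letting you oversubscribe VRAM.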
 