- May 1, 2012
- 340
- 0
- 0
http://www.extremetech.com/computin...i-will-fully-share-memory-between-cpu-and-gpu
In a nutshell: Kaveri's CPU and GPU share the same memory instead of mirror-copying the data back and forth (as Trinity does), which should mean a big speed-up.
Seems realistic and all, but Trinity isn't even available yet! More typical AMD slides and promises.
It all sounds good, but I guess I can only hope for the best.
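The difference being described can be sketched in a few lines of Python. With split memory pools (the Trinity approach), data has to be mirrored into a separate "GPU" buffer before a kernel can touch it, then copied back afterward; with fully shared memory (the Kaveri promise), both sides work on the same allocation and the copies disappear. The `gpu_scale` function here is a hypothetical stand-in for a GPU kernel, not a real API:

```python
# Hypothetical stand-in for a GPU kernel: double every element in place.
def gpu_scale(buf):
    for i in range(len(buf)):
        buf[i] *= 2.0

cpu_data = [float(i) for i in range(8)]

# Split memory (Trinity-style): mirror the data into a separate
# "GPU" buffer, run the kernel, then copy the results back.
gpu_copy = list(cpu_data)   # upload (a full copy)
gpu_scale(gpu_copy)
cpu_data[:] = gpu_copy      # download (another full copy)

# Fully shared memory (the Kaveri promise): the "GPU" works on the
# CPU's buffer directly, so both transfers vanish.
gpu_scale(cpu_data)

print(cpu_data[-1])  # 7.0 doubled twice -> 28.0
```

On real hardware those two `copy` steps cross the PCIe bus or a dedicated memory partition, which is exactly the overhead a unified address space is supposed to eliminate.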
AMD is hosting its Fusion Developer Summit this week, and the overarching theme is heterogeneous computing and the convergence of the CPU and GPU.
The industry is definitely moving toward a more blended processing environment, something that began with the rise of specialized GPGPU workstation software and is now making its way into consumer applications. Standards like C++ AMP, OpenCL, and Nvidia's CUDA harness the graphics card for general-purpose tasks. More and more programs use the GPU for at least some work (even if it's just drawing and compositing the UI), and as developers come on board, software should only get better at using every component to its fullest. On the hardware side, we are already seeing GPUs integrated onto the CPU die alongside specialized application processors (at least in mobile SoCs). Such varied configurations are becoming common and continue to evolve toward a combined architecture.
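C++ AMP, OpenCL, and CUDA all share the same data-parallel model: you write a kernel that handles a single element, and the runtime launches it across thousands of GPU threads. A rough Python emulation of that work-item model follows; the `launch` helper and kernel name are invented here for illustration and are not any real framework's API:

```python
def saxpy_kernel(gid, a, x, y, out):
    """Per-work-item kernel: each 'thread' handles one index gid,
    computing out[gid] = a * x[gid] + y[gid] (the classic SAXPY)."""
    out[gid] = a * x[gid] + y[gid]

def launch(kernel, global_size, *args):
    """Stand-in for a GPU runtime. On real hardware these iterations
    would run concurrently, one work-item per element."""
    for gid in range(global_size):
        kernel(gid, *args)

n = 4
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * n
launch(saxpy_kernel, n, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0, 48.0]
```

The point of the heterogeneous push is that a kernel written this way can be scheduled on whichever unit (CPU or GPU) handles it best.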
The mobile industry is a good example of HSA catching on: new system-on-a-chip processors arrive continuously, and mobile operating systems harness GPU horsepower to assist the ARM CPU cores. AMD isn't just looking at low-power devices, however; it's pushing for "one (HSA) chip to rule them all" solutions that combine GPU cores with CPU cores (and even ARM cores!), each processing what it does best to deliver the best user experience.
The overall transition to hardware and software that fully take advantage of both processing types is still a ways off, but we are getting closer every day. Heterogeneous computing is the future, and assuming most software developers can be convinced of the benefits and will program to take advantage of the new chips, I'm all for it. When additional CPU cores and smaller process nodes stop delivering gains, heterogeneous computing is where the industry will look for performance.