NTMBK
Lifer
hUMA means nothing graphics-wise. It's computational only.
Actually, would hUMA not make massive "Megatextures" (to use the Carmack word) much easier to deal with? (Note: this is just a guess, as I am not a games developer. 🙂 )
Yes, hUMA is great for virtual texturing.
Same reason why graphics and hUMA are unrelated on all the AMD slides.
Is there any software written yet that shows this, or is it all theoretical?
http://www.extremetech.com/gaming/1...u-memory-should-appear-in-kaveri-xbox-720-ps4
Game developers and other 3D rendering programs have wanted to use extremely large textures for a number of years, and they've had to go through a lot of tricks to pack pieces of textures into smaller textures, or split the textures into smaller textures, because of problems with the legacy memory model. Today, a whole texture has to be locked down in physical memory before the GPU is allowed to touch any part of it. If the GPU is only going to touch a small part of it, you'd like to only bring those pages into physical memory and therefore be able to accommodate other large textures.
With a hUMA approach to 3D rendering, applications will be able to code much more naturally with large textures and yet not run out of physical memory, because only the real working set will be brought into physical memory.
So the answer is "no", it's not just theoretical. We'll see this stuff in next-gen console games in the not-too-distant future.
It might be as useful as 3DNow! and SSE5, if you get my drift. Unless console ports somehow transfer this to the PC. But that leaves any dGPU out of it, and any Intel product.
The industry might have to follow AMD here, just like the 64-bit deal.
Why? A software solution for virtual texturing is already working on non-hUMA hardware. Rage, for example.
The tiled resources feature in DirectX also works with any hardware. There are three tiers for it: one software and two hardware solutions. If you use middleware like Granite for it, then compatibility won't be a problem.
But performance could be. It's a bit like PhysX on the GPU vs. the CPU.
That's the question. Also, hUMA is not exactly dGPU friendly.
And unlike 64-bit, hUMA is nowhere near as important. We can only wait and see how it turns out. It might also simply be replaced by something else.
Every manufacturer wants to integrate and simplify, and hUMA is just a part of that process for AMD. Both Intel and AMD are in a tight race for the IGP crown, so I don't know how a person as knowledgeable as you could dismiss hUMA's importance.
No, it's fair enough that it may not have much importance as exposed through any AMD proprietary API; the comparison to 3DNow! is apt, since AMD just doesn't have enough penetration to make it universally adopted. But wrapped in some commonly used API like DirectX, it will be a very useful tool.
AMD is not alone this time; hUMA is not 3DNow!. HSA/hUMA has ARM, Samsung, Qualcomm, TI, and more behind it. This time AMD made the right move.
PS: hUMA can make x86 and ARM cores working together in the same IC/SoC possible (Kaveri + ARM), and dGPUs can benefit from it too, as they could use/share system RAM with the APUs.
Being behind something is cheap PR. What products do they have in the pipeline?
No, they can't. They use different memory architectures. You can't replace the x86 cores in Kaveri, for example, with ARM and still have hUMA working. You'd also need a different GCN then.
Are you saying ARM and x86 access memory the same way? And that you can reuse the GCN block for hUMA?
Maybe you should check how ARM maps memory vs x86.
It is, however, a problem for traditional CPU/GPU designs. As mentioned before, in traditional systems, data has to be copied from the CPU's memory to the GPU's memory before the GPU can access it. This copying process is often performed in hardware independently of the CPU. This makes it efficient but limited in capability. In particular, it often cannot cope with memory that has been written out to disk. All the data being copied has to be resident in physical RAM, and pinned there, to make sure that it doesn't get moved out to disk during the copy operation.
hUMA addresses this, too. Not only can the GPU in a hUMA system use the CPU's addresses, it can also use the CPU's demand-paged virtual memory. If the GPU tries to access an address that's written out to disk, the CPU springs into life, calling on the operating system to find and load the relevant bit of data, and load it into memory.
HSA isn't just for CPUs with integrated GPUs. In principle, the other processors that share access to system memory could be anything, such as cryptographic accelerators, or programmable hardware such as FPGAs. They might also be other CPUs, with a combined x86/ARM chip often conjectured. Kaveri will in fact embed a small ARM core for creation of secure execution environments on the CPU. Discrete GPUs could similarly use HSA to access system memory.
The Xbox One and PS4 support HSA. Complete developer tools for these will arrive in 2014.