
Kaveri/Huma and Light Gaming

Yes they can; they could both use the same system memory. Also, if the ARM core can use hUMA, it could work with any other architecture.

No they can't. ARMv8 and AMD64 are different scalar ISAs with different TLBs. For hUMA you must use the same TLB for the LoC and the ToC.
Also, there is no reason to use two different scalar ISAs in one chip. If AMD uses an ARM core in their next-gen products, it will do some special stuff, but nothing more.
 
Why? A software solution for virtual texturing already works on non-hUMA hardware. Rage, for example.

The tiled resources feature in DirectX also works with any hardware. There are three tiers for it: one software solution and two hardware solutions (a limited tier 1 option for Kepler, and a full-feature-set tier 2 option for GCN, and also for the consoles). If you use the Granite middleware for it, compatibility won't be a problem.

Hmm, it could be like this:
other GPU --> software solution
Kepler --> limited tier 1
GCN 1.1 --> tier 1
GCN 2.0 --> tier 2
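That speculated mapping can be written down as a tiny lookup (purely illustrative: the tier assignments follow the guesses in this thread, not official capability data):

```python
# Speculative mapping of GPU architecture to DirectX tiled-resources tier,
# following the guesses in this thread (not official capability data).
TILED_RESOURCES_TIER = {
    "Kepler": "tier 1 (limited)",
    "GCN 1.1": "tier 1",
    "GCN 2.0": "tier 2",
}

def tiled_tier(arch):
    # Anything not listed falls back to the software path.
    return TILED_RESOURCES_TIER.get(arch, "software solution")

print(tiled_tier("GCN 2.0"))   # tier 2
print(tiled_tier("Fermi"))     # software solution
```

On real hardware a title would query this at runtime instead of hardcoding it; the point of the table is just that everything below the hardware tiers degrades to a software path.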

Even DX 11.2 will expose some GCN 2.0 features, and the upcoming DX will expose more and more of them.
No wonder some forum (a Russian one) said they are testing a DX 11.5 or DX 12 alpha.

Also, strangely, the Hawaii launch date is in the same month as the Windows 8.1 release.

Hmm, if tier 2 relates that closely to GCN 2.0,
then the X1 must be a very custom build with GCN 2.0 as its base.
 

So if MS wants to own the full IP of the X1 chip, as it is rumored to be fabbed at a 3 CP fab, then the scalar ISA paired with GCN cannot be x86?

Are you sure?

This is the first time I've read about the relation between the TLB, the LoC/ToC and hUMA, but it rather makes sense.
 
I think the top-end Kaveri would be good for casual gaming, especially if games get optimized for it thanks to AMD being in all three new consoles.

I personally just want to play Dota 2 on my laptop with an iGPU. While I can already do that with Haswell or the A8/A10 (they're in terrible laptops, though), I'd rather wait for Kaveri and see how it goes (yes, I'm an AMD fanboy).
 
A few higher-end mobile systems, and probably some CPU/mobo combos with soldered-on memory, will likely be the only GDDR5-equipped Kaveri systems unless AMD really pushes manufacturers to support it and gets the price down.

It must not be so expensive that low-end laptop OEMs refuse to offer it, nor so expensive that it ends up competing with Intel HD 5000 series IGPs and low- and mid-tier dedicated graphics.

$600 for a generic 15.6" 1366 x 768, 4 GB GDDR5 + Kaveri system seems like a nice sweet spot to me, though there would likely be better sell-through if the GDDR5 were ditched for DDR3 and such a system were sold at $500.

I would almost reserve GDDR5 Kaveri systems for ultrabook-esque form factors, where the premium price buys more than just performance. Something like the 13.3" Lenovo Yoga with the same or a similar chassis and a 1600 x 900 panel, but with a 64 GB SSD, Kaveri and 4 GB GDDR5 in the $800 range (lol good luck). The Yoga is a great design.
 
Kaveri might be nice, especially for laptops, but IMO for a big change in APUs you will have to wait until Excavator and DDR4.
GDDR5 for APUs will not get enough adoption; motherboards and GDDR5 sticks will both be rare and very costly (if available at all).
 

Kaveri doesn't support GDDR5.
 
There was some speculation about GDDR5M, but it's indeed quite unlikely.

Anyway, I can't believe you are so blatantly ignoring the fact that GCN is the first AMD architecture with any serious cache hierarchy at all; that's a pretty big difference (and one of the reasons why the HD 4000 is a lot less sensitive to bandwidth than Trinity). This WILL benefit AMD considerably in some cases (in some games not at all, in others quite a lot).

I still wouldn't expect miracles. In the best real-world situations Kaveri will probably be at most 50% faster than Richland, on average probably about 30%.
 
I don't think anyone is ignoring this. Although a 30% GPU performance increase does not really change that much, tbh. For laptops it's nice, same for people buying low-tier Kaveri chips.
For desktop, that kind of increase does not change anything - it still won't be competitive vs. a low-cost CPU and ~7750 GDDR5 GPU combo.

For APUs to shine, they need a serious increase in memory bandwidth, and that cannot be done on DDR3.

Once we get an APU with a GPU that is a 100-150% increase vs. Richland, that may be a game changer, but not before. I still wonder how long DDR4 will stay competitive anyway; after all, GDDR6 was not shelved and will come eventually, won't it?
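The bandwidth gap behind this argument is easy to put into numbers. A back-of-the-envelope sketch, assuming nominal peak figures (dual-channel DDR3-2133 for the APU, and a 128-bit GDDR5 card like the HD 7750 at 4.5 Gbps):

```python
# Peak theoretical bandwidth: transfer rate (MT/s) x bus width (bytes) x channels.
def peak_bw_gbs(mt_per_s, bus_bytes, channels=1):
    return mt_per_s * bus_bytes * channels / 1000

ddr3 = peak_bw_gbs(2133, 8, channels=2)   # dual-channel DDR3-2133, 64-bit per channel
gddr5 = peak_bw_gbs(4500, 16)             # 128-bit GDDR5 @ 4.5 Gbps (HD 7750)

print(f"DDR3-2133 dual channel: {ddr3:.1f} GB/s")   # ~34.1 GB/s
print(f"HD 7750 GDDR5:          {gddr5:.1f} GB/s")  # ~72.0 GB/s
```

Roughly a 2x gap on paper, and the iGPU also has to share its slice with the CPU cores, which is why faster DDR3 alone can't close it.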
 
Yes, PS4 supports hUMA; Xbox One doesn't. AMD says hUMA is not possible on the Xbox One due to its memory.

Which means multi-platform developers won't be able to use hUMA in both console versions of their games. I wonder if they will bother to implement hUMA exclusively for one console in multi-platform titles.
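What hUMA actually buys a developer can be sketched in plain Python (purely illustrative: a list stands in for a buffer and a function for a GPU kernel; no real GPU API is involved):

```python
# Illustrative only: models the difference between hUMA (CPU and GPU share one
# coherent address space) and a split memory model (explicit copies).

def gpu_kernel(buf):
    # Stand-in for GPU work: double every element in place.
    for i in range(len(buf)):
        buf[i] *= 2

# hUMA-style: the "GPU" works directly on the CPU's buffer, zero copies.
data = [1, 2, 3]
gpu_kernel(data)          # same buffer, no transfer
assert data == [2, 4, 6]

# Split-memory style: copy to "VRAM", run the kernel, copy back.
data = [1, 2, 3]
vram = list(data)         # upload copy
gpu_kernel(vram)
data = list(vram)         # download copy
assert data == [2, 4, 6]
```

Same result either way; the difference is the two extra copies (and the synchronization around them), which is exactly the overhead a multi-platform engine would have to keep for the non-hUMA console anyway.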
 

That's the big sticking issue.

And we're talking about performance that will start to be an issue when the true next-gen 20nm GPUs actually arrive. It's likely that the next run of GPUs will include sub-$100 SKUs that demolish 7770/650.
 

Hi guys,
I'm Sam from AMD and I wanted to provide you with the latest update on this topic: http://www.nowgamer.com/news/2052820/ps4_vs_xbox_one_amd_retracts_performance_comparison.html

To reiterate, AMD is retracting the comments made by an AMD spokesperson during a recent gamescom interview. Inaccurate statements were made regarding the details of our semi-custom APU architectures; AMD will not comment on the Xbox One and PS4 memory architectures and won't speak for Microsoft, Sony or other AMD customers.
 
An Xbox One dev did, and confirmed no hUMA. Instead they go with the software solution in the form of tiled resources in DirectX 11.2.
 

My sentiments exactly. It is not that you CAN'T game on an iGPU; it is just that for very close to the same price you can get a much better experience with a low-end CPU and a discrete card. Except for certain limited scenarios, such as SFF systems, trying to game on an iGPU from either Intel or AMD makes no sense.

And as you said, despite improved APUs coming out, the difference will probably be even greater when the 20nm discrete cards come out.
 

Thank you, Sam.
So it is true: if someone wants semi-custom GCN and wants to fab it themselves, it has to be paired with a non-x86 ISA,

as only a complete AMD APU/SoC has the right to use x86 + GCN (the Intel-AMD problem).
 