I think they need to improve fetch to make sure that, if they have to use the decoders, the instruction code is in L1i rather than in memory.
For throughput, I guess it would be enough if they could make two clusters work together in 1T mode.
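For anyone who wants to see the L1i effect from software: below is a minimal x86-64 Linux sketch (my own illustration, not anything AMD has published) that executes a flat blob of NOPs and varies its size. Once the blob exceeds L1i capacity (32 KB on recent Zen parts), fetch has to stream code from L2 and beyond, and the loop slows down even though every instruction is trivial.

```c
/* Sketch: frontend pressure when code footprint exceeds L1i.
 * x86-64 Linux only; needs a kernel that allows RWX mappings. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

static double run(size_t code_bytes, int iters) {
    unsigned char *buf = mmap(NULL, code_bytes,
                              PROT_READ | PROT_WRITE | PROT_EXEC,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return -1.0;
    memset(buf, 0x90, code_bytes - 1);   /* 0x90 = NOP */
    buf[code_bytes - 1] = 0xC3;          /* 0xC3 = RET */

    void (*fn)(void) = (void (*)(void))buf;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++) fn();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    munmap(buf, code_bytes);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    /* 16 KB fits in L1i; 256 KB spills to L2; 8 MB spills further. */
    size_t sizes[] = { 16 << 10, 256 << 10, 8 << 20 };
    for (int i = 0; i < 3; i++)
        printf("%8zu B of code: %.3f s\n", sizes[i], run(sizes[i], 2000));
    return 0;
}
```

Compile with `gcc -O2` and compare the per-byte execution rate across the three sizes; the drop past 32 KB is the fetch path falling out of L1i.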
A detail that is no accident:
There are too many laptops combining good iGPUs with bad dGPUs. Strix Halo has the biggest iGPU outside of Apple, so the danger of a good iGPU being combined with a bad dGPU is even higher here. So in that regard I consider that excellent news.
Yes, I don't understand the desire for a dGPU on Strix Halo.
To have the best of both worlds. RTX GPU for fake frames up the wazoo.
That just sounds stupid IMHO.
Maybe a good thing. An RTX GPU would probably drive the price way up, because the sheep who look at RTX like something made out of solid gold would buy it whether they really need it or not.
And who cares about fake frames on laptops that likely won't have super high refresh rates?
Not sure about fake frames, but a working software stack for anything other than gaming is something that might appeal to some people.
But then it should only be a workstation-class card. Need to keep these lovely laptops from falling into the hands of immature gamer bros.
With next-gen Halo I agree, especially with LPDDR6 and RDNA4/5.
For once, AMD should just release a product without any rehearsed marketing speak.
Looks like the N4x GPUs are going this way; weird things abound.
If the Strix Halo LP island on the IO die rumors are true, there's a decent chance that we can get all the performance of the "HX" parts of past AMD and Intel processors while having much, much better battery life.
But the better-battery-life part depends on sticking to the iGPU. Won't a dGPU nullify that advantage?
Use the iGPU on battery. Use the dGPU when plugged in. More flexibility.
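As a sketch of what that flexibility boils down to in software, here's a hedged example (my own, using the standard Vulkan enumeration API; the power-source check is stubbed out since it's OS-specific) that prefers the integrated adapter on battery and the discrete one on AC:

```c
/* Hypothetical adapter picker: integrated GPU on battery, discrete on AC. */
#include <stdio.h>
#include <stdbool.h>
#include <vulkan/vulkan.h>

static bool on_battery(void) {
    /* Stub: in a real app, read /sys/class/power_supply/... on Linux
     * or call GetSystemPowerStatus() on Windows. Hardcoded here. */
    return true;
}

int main(void) {
    VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                              .apiVersion = VK_API_VERSION_1_0 };
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                 .pApplicationInfo = &app };
    VkInstance inst;
    if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) return 1;

    uint32_t n = 0;
    vkEnumeratePhysicalDevices(inst, &n, NULL);
    VkPhysicalDevice devs[16];
    if (n > 16) n = 16;
    vkEnumeratePhysicalDevices(inst, &n, devs);

    /* Pick the device type that matches the current power source. */
    VkPhysicalDeviceType want = on_battery()
        ? VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU
        : VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU;

    for (uint32_t i = 0; i < n; i++) {
        VkPhysicalDeviceProperties p;
        vkGetPhysicalDeviceProperties(devs[i], &p);
        if (p.deviceType == want)
            printf("picking: %s\n", p.deviceName);
    }
    vkDestroyInstance(inst, NULL);
    return 0;
}
```

In practice the OS and GPU drivers make this choice per application, but the enumeration above is what such a policy comes down to.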
How many customers are there that need a) 10 hours of battery life, b) better than a 4070-mobile-tier GPU, and c) more CPU performance than Strix Point can offer?
I do see where people are coming from when they say they want such a product. If a customer can compromise on any one of those things, they can save hundreds of dollars. The market for such a laptop would be so small that I wonder if any laptop manufacturer would even be willing to build one.
Given that AMD did not mention any LP cores, either they do not exist or they are disabled for this round. I don't expect AMD to suddenly tell us "oh, and one more thing" about Strix Halo until devices get onto the market.
But why provide a large iGPU then? With all the implications for extra effort and compromises in the SoC's memory interface (double width, memory-side cache, on-chip fabric...), the corresponding RAM layout and BOM, and the larger cooling apparatus.
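For a sense of what that double-width interface buys, here is the back-of-envelope bandwidth math; the 256-bit LPDDR5X-8000 configuration is the reported Strix Halo setup, and the 128-bit one is a typical mobile APU for comparison:

```c
/* Back-of-envelope memory bandwidth: (bus bits / 8) bytes per transfer
 * times the transfer rate in MT/s gives GB/s. */
#include <stdio.h>

static double gb_per_s(int bus_bits, int mtps) {
    return (bus_bits / 8.0) * mtps / 1000.0;
}

int main(void) {
    /* Typical mobile APU: 128-bit LPDDR5X-7500 (Strix Point class). */
    printf("128-bit @ 7500 MT/s: %6.1f GB/s\n", gb_per_s(128, 7500));
    /* Reported Strix Halo configuration: 256-bit LPDDR5X-8000. */
    printf("256-bit @ 8000 MT/s: %6.1f GB/s\n", gb_per_s(256, 8000));
    return 0;
}
```

That works out to roughly 120 GB/s versus 256 GB/s, which is the kind of gap that only makes sense if the iGPU is meant to be fed like a midrange dGPU.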
The thing about battery life is for people actually doing work. You're not going to get 10-20 hours while loading the APU fully. That's the reason people want "20+ hour" battery life, and the reason the MacBook Pro is so popular. It's not so I can browse ATOT for 20 hours; it's so I can run Unreal Engine or perform engineering simulations on the go for at least a few hours at a time without stressing about a charger, and with no loss of performance. It's really pretty liberating, and I'd love to see x86 deliver on the same premise.
Honestly I have very little trust in OEMs to produce compelling Strix Halo products.
I have the feeling it's going to be thin-and-lights, and then giant beefy gaming laptops with an extra dGPU nobody asked for.
AMD itself doesn't help these things. This is really where having some standard reference hardware for the OEMs would help.
Since Strix Halo will likely be the fastest single-thread and multithread CPU on the market, OEMs will want to use it for discrete gaming laptops for the people who want the absolute most performance regardless of cost, and then it will get flamed in reviews for being overpriced for what it is.
If they do a good enough job of fusing off the dGPU when running on battery I might still consider a laptop like that, but I'm not too hopeful. Maybe that's too much pessimism, but we'll see.
Strix Halo doesn't support dGPUs.
ASUS is selling a thin-and-light model. Assuming Dell, HP, and Lenovo release their workstation models with 15.6- and 17-inch displays, there's going to be a lot of space available. They can't put more than a 99 Wh battery in it, so I assume there will be space available for a dGPU and its heatsink, cooling pipes, etc. And it would let them price the dGPU models higher.
