Discussion Intel current and future Lakes & Rapids thread

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,094
16,014
136
What I would like to see is something that stresses the CPU for hours, so the firmware has to deal with the sustained load and the hardware's cooling solution. That would show the REAL max sustained power, and how well the chip performs at that level.

For example, run Rosetta@home or some other DC app, even Folding@home, that takes hours to complete, and then see how long it took for a specific unit and the points awarded.

What I see here is a discussion of how long each CPU stays at what power and what rating it gets, but the timelines are all too limited. A longer-range test would better tell the power at the wall and how the chip performs. Do we really care what software says it's drawing for power? Or what it is REALLY drawing in worst-case scenarios, and how well it performs at that power level?
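A minimal sketch of the kind of analysis a long test enables (Python, with a purely synthetic power trace; the warm-up length and wattages are made-up illustration, not measurements from any real device):

```python
# Given a per-second wall-power log from an hours-long run (DC app, etc.),
# separate the initial boost window from the steady-state tail so the
# sustained draw isn't hidden by the first minute of turbo.

def sustained_stats(samples, warmup_s=60):
    """Return peak power and the average over the post-warm-up tail."""
    tail = samples[warmup_s:] if len(samples) > warmup_s else samples
    return {
        "peak_w": max(samples),
        "sustained_avg_w": sum(tail) / len(tail),
    }

if __name__ == "__main__":
    # Synthetic trace: ~50 W boost for the first minute, ~28 W thereafter.
    trace = [50.0] * 60 + [28.0] * 3540  # one hour of 1 Hz samples
    print(sustained_stats(trace))
```

With a trace like this, the short-timeline reviews would report 50 W, while the number that actually matters for an hours-long workload is the 28 W tail.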
 
  • Like
Reactions: Drazick and Tlh97

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Prior AMD SOCs didn't have separate sensors for GPU/CPU power, it was just SOC power. I doubt it has changed with Renoir but if you have a source for this new capability I'd be happy to be corrected.

You are correct. The Ryzen 3000 mobile doesn't have it. The Ryzen 4000 mobile chips do.

This is true for both HWInfo and GPU-Z, so an extra sensor was added. GPU ASIC power in HWInfo and GPU Chip Power in GPU-Z. I'm not sure exactly how that number works.

What I see here is a discussion of how long each CPU stays at what power and what rating it gets, but the timelines are all too limited. A longer-range test would better tell the power at the wall and how the chip performs. Do we really care what software says it's drawing for power? Or what it is REALLY drawing in worst-case scenarios, and how well it performs at that power level?

Agreed.

Software measurements are fine, as long as it's reporting from the right sensors and we understand what the numbers mean.

Wall power can be misleading, especially on battery. That's a start though, and nearly every site forgoes testing that.
 

Hitman928

Diamond Member
Apr 15, 2012
6,642
12,245
136
You are correct. The Ryzen 3000 mobile doesn't have it. The Ryzen 4000 mobile chips do.

This is true for both HWInfo and GPU-Z, so an extra sensor was added. GPU ASIC power in HWInfo and GPU Chip Power in GPU-Z. I'm not sure exactly how that number works.

Interesting, but that still doesn't explain the difference between Prime95 + FurMark causing a 31-watt maximum system load while Witcher 3 is reported at 57 watts. Their max-load number already engages both the GPU and CPU, so it should be reflected in the power use. Even if it is being set to 25W during gaming, you would expect Witcher 3 to be in the low 40s for power consumption, not 57 watts. I'm still of the opinion that the Witcher 3 number is peak usage and not average, but without the laptop to test, I obviously can't confirm.

Going back to the Anandtech and Tom's results, they both report that the 4800u is set for 15W so they'd have to comment on the certainty of this. I tried mentioning Ryan here but maybe I'll shoot him an email too and see if I get a response.

Edit: Reading the change log for GPU-Z, I'm still not convinced it is not SOC power. It doesn't mention anything about an additional sensor, and I can't find anything about it in the HWinfo changelog either. Any confirmation or documentation of this additional sensor anywhere? I'll keep looking in the meantime.
 

Shivansps

Diamond Member
Sep 11, 2013
3,916
1,570
136
Renoir also boosts well past 15W. Not up to 50W, but it does boost past 15W for a long time. We saw this in the Renoir launch reviews.

At any rate, considering this is the first version with very early drivers, AMD doesn't seem to prioritize iGPU performance anymore, and the RDNA APU may not arrive until mid-to-late 2022, this is a good result for Intel.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Going back to the Anandtech and Tom's results, they both report that the 4800u is set for 15W so they'd have to comment on the certainty of this. I tried mentioning Ryan here but maybe I'll shoot him an email too and see if I get a response.

Can't say for Anandtech, but Tom's Hardware is also likely using 25W. Their R20 peak is nearly 3500 points before it levels off around the 2800 mark. NBC's device (the same model) is at 3200. They aren't showing the sustained results, but a drop similar to the one in R15 would put R20 at around 2500.

I found another device with higher Witcher 3 power use:

They do explain why.
In addition, we also measured high consumption at maximum load, although average load is higher than maximum load. The reason behind this is the throttling of the SoC. We do not measure power consumption at the beginning of the stress test, but after half-an-hour to an hour. The Witcher 3 can also be considered load but seems to not be as demanding on the SoC with a consumption of 38 watts, which is similar to average load.

So Load Maximum by their definition could mean Prime95 + FurMark, while Load Average is something that's realistic for the device, such as Prime95 only. Load Maximum doesn't necessarily mean "peak power use"; it's an arbitrary definition.
 
Last edited:
  • Like
Reactions: lightmanek

Shivansps

Diamond Member
Sep 11, 2013
3,916
1,570
136
Just saw the Notebookcheck results for Witcher 3... that's an excellent result, 36 fps at 1080p High at 28W. To give you some idea, the 3400G gets about 30-35 fps at LOW; that's 4750G performance right there. It makes me wonder why results are all over the place; it may be driver issues or a toggle.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
@Hitman928 Ok, I think I figured out the way they measure things.

Based on this review:
And this one:

It shows the screenshot from the meter they use to test.

3DMark06 scene Average - Load Average
Witcher 3 Average - Witcher 3
Prime 95 or Prime 95 + Furmark(whichever figure is higher) Max - Load Maximum

Just saw the Notebookcheck results for Witcher 3... that's an excellent result, 36 fps at 1080p High at 28W. To give you some idea, the 3400G gets about 30-35 fps at LOW; that's 4750G performance right there. It makes me wonder why results are all over the place; it may be driver issues or a toggle.

Figures are not comparable between sites. They use different methodologies and testing scenes.

From Notebookcheck-
Vega 8 Ryzen 4000 - 23.1 fps
Iris Plus G7(best result) - 20.1 fps
Iris Xe G7 - 35.9 fps

Could be a driver issue, since it's pre-release. Or it could just be strong in Witcher 3.
 
Last edited:

Shivansps

Diamond Member
Sep 11, 2013
3,916
1,570
136
Figures are not comparable between sites. They use different methodologies and testing scenes.

From Notebookcheck-
Vega 8 Ryzen 4000 - 23.1 fps
Iris Plus G7(best result) - 20.1 fps
Iris Xe G7 - 35.9 fps

Could be a driver issue, since it's pre-release. Or it could just be strong in Witcher 3.

I know it's difficult to compare Witcher 3, but unless you have the camera pointed at a wall the entire time, there is NO WAY to get well over 30 fps at 1080p all-High in Witcher 3 on a 3400G. That performance is really impressive.

What I don't know is why no one tests 900p. 720p (and anything below) is unwatchable, and 900p has almost the same visual quality as 1080p with a significant performance boost; it's a must for an APU.
 

Hitman928

Diamond Member
Apr 15, 2012
6,642
12,245
136
@Hitman928 Ok, I think I figured out the way they measure things.

Based on this review:
And this one:

It shows the screenshot from the meter they use to test.

3DMark06 scene Average - Load Average
Witcher 3 Average - Witcher 3
Prime 95 or Prime 95 + Furmark(whichever figure is higher) Max - Load Maximum



Figures are not comparable between sites. They use different methodologies and testing scenes.

From Notebookcheck-
Vega 8 Ryzen 4000 - 23.1 fps
Iris Plus G7(best result) - 20.1 fps
Iris Xe G7 - 35.9 fps

Could be a driver issue, since it's pre-release. Or it could just be strong in Witcher 3.

So looking at that and looking at the Yoga review I've concluded two things:

1) There is no GPU-only sensor for Renoir, at least not one that HWinfo or GPU-Z can tap into. The "GPU ASIC" power sensor shows CPU and GPU power; you can see this in the Prime95-only test, where the GPU is not loaded at all but the "GPU ASIC" power sensor still shows 14 W. This is the same for Picasso, where there is a "GPU Power" sensor, but it doesn't show only GPU power; it is SOC power, or possibly "core" (CPU and GPU) power.


2) Their power measurements need clearer labeling. The Witcher 3 power consumption numbers seem to be the average of a very short window of play time which, with modern SOCs, will by and large fall during max and one-step-down turbo boost, and so aren't representative of an actual gaming session. Beyond that, their "average" and "max" labels are confusing at best and inaccurate at worst.
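To illustrate the point with made-up numbers (this is a sketch, not data from any reviewed laptop): if a chip boosts for the first couple of minutes and then settles, averaging only that opening window badly overstates the session power.

```python
# Hypothetical boosting SoC: ~40 W for the first 2 minutes of a game,
# then ~25 W steady state for the rest of a 30-minute session (1 Hz samples).

def avg(xs):
    return sum(xs) / len(xs)

session = [40.0] * 120 + [25.0] * 1680

short_window_avg = avg(session[:120])  # reviewer samples only the boost window
full_session_avg = avg(session)        # what a real play session draws

print(short_window_avg, round(full_session_avg, 1))
```

Here the short window reads 40 W while the whole session averages 26 W, which is the kind of gap that could explain a 57 W "Witcher 3" figure next to a much lower sustained limit.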

Laptop reviews are hard, and slight modifications can make large differences in performance and battery life. The best bet, I think, is to wait for like-for-like comparisons between TGL and Renoir models (or at least models as close as possible) and then see how they stack up in terms of performance, thermals, and battery life. The Tiger Lake GPU does look quite impressive, though, and is obviously a big step up from ICL. It will be interesting to see how well it scales to their desktop-class products.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
1) There is no GPU-only sensor for Renoir, at least not one that HWinfo or GPU-Z can tap into.

Renoir's sensors are still lacking compared to Intel systems, and quite vague in what they represent.

I'm not sure what they represent then. There are systems where the CPU package power values differ from that GPU power number. Maybe on Ryzen the GPU has its own "up to" power figure it can use.

The other problem on AMD systems is that monitoring utilities don't show a clear TDP number (PL1/PL2 as on an Intel system), so you have to rely on measurements, which vary tremendously as the workload runs.
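For comparison, on the Intel side those limits live in MSR_PKG_POWER_LIMIT (0x610), whose field layout is documented in the Intel SDM. A minimal decoding sketch (the raw value below is fabricated for illustration; on a real system you'd read the MSR, e.g. via /dev/cpu/*/msr on Linux, and take the power unit from MSR_RAPL_POWER_UNIT):

```python
# Decode PL1/PL2 from a raw MSR_PKG_POWER_LIMIT value.
# Bits 14:0 hold PL1 and bits 46:32 hold PL2, both in multiples of the
# RAPL power unit (commonly 1/8 W, i.e. MSR_RAPL_POWER_UNIT bits 3:0 = 3).

def decode_pkg_power_limit(raw, power_unit_w=0.125):
    pl1_w = (raw & 0x7FFF) * power_unit_w          # bits 14:0
    pl2_w = ((raw >> 32) & 0x7FFF) * power_unit_w  # bits 46:32
    return pl1_w, pl2_w

# Fabricated example: PL1 = 120 units (15 W), PL2 = 512 units (64 W).
raw = (512 << 32) | 120
print(decode_pkg_power_limit(raw))
```

This is exactly the kind of unambiguous readout AMD's mobile parts don't expose to the common monitoring tools, which is why people fall back on measuring.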

What i dont know is why no one tests 900p, 720p (and anything below) is unwatchable, and 900p has almost the same visual quality as 1080p with a significant performance boost, it is a must for a APU.

Agreed. Though even on Iris Xe I'd reduce it to Medium settings, or a mix of Medium and High, as personally I found the low 40s to stutter too much.

The game has very beautiful scenes, and it's worth running at Ultra (or in my case, with some settings at High).
 
Last edited:

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
sorry but you're wrong,

Geekbench for ST/power is a joke; it doesn't actually stress the system at all. It uses like 7 watts peak and more like 5 watts average on my 4700U in ST @ 4.2 GHz.
CPU-Z stress uses about 6 watts 1T.
Cinebench R20 uses 9 watts 1T.


What would it look like if clock limits weren't imposed and it could sustain a 20-watt power limit for 1T? I'm betting it gets pretty close to 4.7 GHz and cuts a large amount of the deficit.

The only thing that comes close to using 15 watts at 1T is Prime95 small FFT, which is right around 14 watts @ 4.2.
Also, to add: I have trouble even getting Geekbench 5 to run at peak turbo on my 4700U; that is how light of a benchmark it is. The AMD boost doesn't even think it's worthwhile...
There are plenty of results out there which show the 4700U boosting to 4.2GHz in Geekbench 5. Besides, the point is that doubling power limits does nothing to bring Renoir's ST performance close to Tiger Lake's, like you were claiming it would.
 

uzzi38

Platinum Member
Oct 16, 2019
2,746
6,653
146
Renoir's sensors are still lacking compared to Intel systems, and quite vague in what they represent.

I'm not sure what they represent then. There are systems where the CPU package power values differ from that GPU power number. Maybe on Ryzen the GPU has its own "up to" power figure it can use.

The other problem on AMD systems is that monitoring utilities don't show a clear TDP number (PL1/PL2 as on an Intel system), so you have to rely on measurements, which vary tremendously as the workload runs.



Agreed. Though even on Iris Xe I'd reduce it to Medium settings, or a mix of Medium and High, as personally I found the low 40s to stutter too much.

The game has very beautiful scenes, and it's worth running at Ultra (or in my case, with some settings at High).
Package power is full SoC power on Renoir. The best way to check power limits is to download Ryzen Controller (I think that's the name?) and read out the values from there. Well, it's the easiest way, anyway.
 
  • Like
Reactions: Tlh97

itsmydamnation

Diamond Member
Feb 6, 2011
3,044
3,831
136
There are plenty of results out there which show the 4700U boosting to 4.2GHz in Geekbench 5. Besides, the point is that doubling power limits does nothing to increase ST performance of Renoir to bring it close to Tiger Lake, like you were claiming it would.

Seriously, learn to read.

Yes, doubling the power limit doesn't help, because you hit the CLOCK LIMIT already while well under the power limit.
See my point? Having no clock limit, or a 4.7 GHz clock limit with a single-core power limit in the 20-watt range like Tiger Lake, would increase performance by a non-trivial amount.

Yes, I can make my 4700U boost to 4.2 in Geekbench; I have to play with the power settings. I don't have this problem with anything else, including all my own code. My point also still stands that Geekbench is a very light benchmark and doesn't stress the power system at all, which then factors back into my main point.

I really don't understand what's so hard to understand...
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Seriously, learn to read.

Yes, doubling the power limit doesn't help, because you hit the CLOCK LIMIT already while well under the power limit.
See my point? Having no clock limit, or a 4.7 GHz clock limit with a single-core power limit in the 20-watt range like Tiger Lake, would increase performance by a non-trivial amount.

Yes, I can make my 4700U boost to 4.2 in Geekbench; I have to play with the power settings. I don't have this problem with anything else, including all my own code. My point also still stands that Geekbench is a very light benchmark and doesn't stress the power system at all, which then factors back into my main point.

I really don't understand what's so hard to understand...
This is what you implied would happen if one doubled power limits in Renoir to match Tiger Lake.

At twice the power consumption. If AMD took the same path they could probably get within ~15 percent perf in single thread.

Well, the lack of any significant disparity between the U and H models of Renoir in ST workloads is sufficient to prove your hypothesis wrong. No amount of increase of power limits is going to bring Renoir even close to what Tiger Lake can do in ST.
 

A///

Diamond Member
Feb 24, 2017
4,351
3,160
136
TGL is disappointing unless you work only in ST apps, frequency-sensitive software, or super-niche workloads using niche software. Otherwise, it's out 7+ months after Renoir and doesn't put up much of a fight. It's fine though. TGL will have its buyers, or they'll be confused by Intel and end up buying an AMD: "Hi, can I get the new Tiglerlake 4700U laptop?"
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,809
1,289
136
It's too bad depeche mode sabotaged 7nm.

54nm/36nm 10NF to 60nm/40nm 10SF to 50nm/32nm 7nm. Real shame.
Goodbye; 2020 Grand Ridge & 2021 Sierra Forest. You'll be missed.

:laughing:
 

mikk

Diamond Member
May 15, 2012
4,291
2,379
136
Your linked image is of laptop power measured at the wall.

Both Intel and AMD will use more than 15W for significant amounts of time; you'd have to check both to see exactly how much they were using during gaming, but no one did that as far as I can see.


I know, but that is not the point. You said it runs at 38W until it drops, and as you can see there is no drop in this game test from Notebookcheck. Depending on how long a game bench lasted, it uses much more than Anandtech claims. However, it seems they blindly assumed it runs at the default 15W PL1 all the time.
 

TheGiant

Senior member
Jun 12, 2017
748
353
106
TGL is disappointing unless you work only in ST apps, frequency-sensitive software, or super-niche workloads using niche software. Otherwise, it's out 7+ months after Renoir and doesn't put up much of a fight. It's fine though. TGL will have its buyers, or they'll be confused by Intel and end up buying an AMD: "Hi, can I get the new Tiglerlake 4700U laptop?"
What do you expect from an ultrabook?
TGL is the most powerful low-thread-count chip available.
I see it as the best product in years in the mobile segment.
Renoir was nice because the added cores gave it an advantage, while burst/short/medium load was the same as Intel's lineup.
But now Tiger Lake is 60% faster than the 2-year-old i7 8x lineup, 60% in ST.

Package power is full SoC power on Renoir. Best way to check power limits is to download Ryzen Controller (I think that's the name?) And read out the values from there. Well, it's the easiest way anyway.
That is the point.
There was a Microsoft Surface laptop test with the same hardware except Intel Ice Lake vs. the older Ryzen 3000 U.
Despite the fact that both the AMD and Intel SOC power was 15W long-term, the battery life of the AMD system was 1.5h lower.
AMD SOC power reporting != Intel SOC power reporting, and that is what matters most and ends up in battery life.
Wall power tells you the truth.
 
Last edited:
  • Like
Reactions: mikk

Gideon

Platinum Member
Nov 27, 2007
2,013
4,992
136
TGL is disappointing unless you work only in ST apps or frequency sensitive software, or super niche workloads utilizing niche software. Otherwise, it's out 7+ months after Renoir and doesn't put up much a fight. It's fine though. TGL will have its buyers, or they'll be confused by Intel and end up buying an AMD. "Hi, can I get the new Tiglerlake 4700U laptop?"
I wouldn't be so negative. It's true that a 6-8 core laptop would be better for some developers or people who regularly use software that benefits from 8+ threads, but there aren't that many of them.

TGL still has 4 cores and 8 threads; in the ultrabook form factor this is fine for a lot more than purely single-threaded software. There is a lot of software out there that isn't single-threaded but struggles to really use more than 4 cores. Even encoding isn't a clear win for AMD, as Handbrake uses AVX-512, etc.

Considering it also now has a better iGPU, it's definitely the better overall chip for most people (at least as far as flagships are concerned). The same couldn't really be said about Ice Lake: it was barely faster in some ST tasks and had terrible MT clocks, a worse iGPU, and battery-life issues. IMO Tiger Lake is a very good improvement overall.
 

DrMrLordX

Lifer
Apr 27, 2000
22,692
12,638
136
TGL is disappointing unless you work only in ST apps or frequency sensitive software, or super niche workloads utilizing niche software. Otherwise, it's out 7+ months after Renoir and doesn't put up much a fight. It's fine though. TGL will have its buyers, or they'll be confused by Intel and end up buying an AMD. "Hi, can I get the new Tiglerlake 4700U laptop?"

At least it appears to be significantly better than previous-gen 4c Intel mobile SoCs, which isn't something that could be said for IceLake. It's a different market now, though, so people are going to have to decide for themselves whether they want more clockspeed plus better iGPU or more cores than what they could get in the past. Honestly, when it was confirmed that TigerLake wasn't going to launch with more than 4c, did you really think it was going to beat Renoir in everything? I didn't.
 
  • Like
Reactions: Tlh97

Shivansps

Diamond Member
Sep 11, 2013
3,916
1,570
136
Intel said 8-core TGL is coming; it is likely they did not expect AMD to move to 8C in notebooks, but I would not expect those to be 15W parts.
 

DrMrLordX

Lifer
Apr 27, 2000
22,692
12,638
136
Intel said 8-core TGL is coming; it is likely they did not expect AMD to move to 8C in notebooks, but I would not expect those to be 15W parts.

Let's be honest: Intel shouldn't have been waiting to see what some competitor would do in mobile. And they (Intel) started selling 6c mobile chips some time ago. 10nm should have been an excellent opportunity for Intel to expand core counts in their 15W-25W category, and they didn't do it on Ice Lake, probably because they couldn't, or on Tiger Lake because . . . we still don't know exactly why. I don't buy that it was "oops, we didn't know AMD would do that". Not even a tiny bit.

So Intel claims 8c TigerLake-H is coming, but it remains to be seen when and where.

Let's wait and see pricing first.

Heh well, Intel could cut prices, but do they really want to do that?
 
  • Like
Reactions: Tlh97

Shivansps

Diamond Member
Sep 11, 2013
3,916
1,570
136
Before anything else, Intel needs better drivers. iGPU performance is all over the place: in some cases it is better than a 3400G, almost matching a 65W 4750G, and in others it is below the 4500U's Vega 6.
 
  • Like
Reactions: Tlh97 and Gideon

TheGiant

Senior member
Jun 12, 2017
748
353
106
Let's be honest: Intel shouldn't have been waiting to see what some competitor would do in mobile. And they (Intel) started selling 6c mobile chips some time ago. 10nm should have been an excellent opportunity for Intel to expand core counts in their 15W-25W category, and they didn't do it on Ice Lake, probably because they couldn't, or on Tiger Lake because . . . we still don't know exactly why. I don't buy that it was "oops, we didn't know AMD would do that". Not even a tiny bit.

So Intel claims 8c TigerLake-H is coming, but it remains to be seen when and where.



Heh well, Intel could cut prices, but do they really want to do that?
IMO Intel didn't wait, they just f.d up with 10nm.
If I remember correctly, there was a leak of an 8C Cannon Lake.
Well, we know now that they can deliver; how much, we don't.

IMO an 8C Tiger Lake is ready, but if released now it would compete with the 10700K in MT and crush it in ST, so they can't bury their own 2-month-old chips.
 
  • Like
Reactions: Tlh97