Wasn't this pretty much a given? The Ryzen 2700 is a 65W chip, and the 7nm process should give it a ~25% performance boost at the same power even without any architectural improvements. So simply porting Zen 1 to 7nm would get you that. It would be a huge upgrade for consumers if an 8C/16T Ryzen 3 part delivered 90% (or more) of the Core i9-9900K's gaming performance at a 65W TDP and half the price.
The question is whether this is 14LPP or IBM's 14HP process.
High frequency shouldn't matter for an IO die. What IBM 14HP would provide is the possibility of having eDRAM. The Xbox One SoC had only 32MB of embedded RAM and that was enough to handle 1080p, so if this has 32MB of eDRAM, it might be enough as side-port memory to support a GPU chiplet on a mobile platform.
The IBM one is usually for higher frequency right? Anyone know if higher clocked IO would be better or matter at all?
I don't think it would matter overly much. Running it at a higher frequency than whatever it's communicating with wouldn't net any additional improvements. For example, the IO bus speed for DDR4 memory would only be around 2 GHz for the fastest memory, and I'm not even sure if people are using that because the CAS latency is usually a lot higher.
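As a rough sanity check on that ~2 GHz figure (my own arithmetic, not from any datasheet): DDR4-XXXX names the transfer rate in MT/s, and since DDR transfers twice per clock, the I/O bus clock is half of that.

```python
def ddr4_io_clock_mhz(transfer_rate_mts: int) -> float:
    """I/O bus clock in MHz for a DDR4-<rate> part (DDR = 2 transfers/clock)."""
    return transfer_rate_mts / 2

for speed in (2666, 3200, 4000):
    print(f"DDR4-{speed}: I/O clock = {ddr4_io_clock_mhz(speed):.0f} MHz")
# DDR4-4000 works out to 2000 MHz, i.e. the ~2 GHz mentioned above
```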
Interestingly, Adored came to the same conclusion: it was a 65W ES at the presentation. Your result of 1412 is almost exactly what a 65W R7 1700 scores at stock. Just for fun, I tried to make my R7 1800X system run at the same total system power as the Zen2 ES demo machine.
Like the demo machine, I have an RX Vega, though mine is a Vega FE. I also have some big honking fans on the HSF (NH-D15) which are not standard: Noctua industrialPPC 3000s, running full bore, plus the stock fan from a D15S to complete the 3-fan configuration. Finally, I have the stock case fans from a Rosewill Thor V2. So my system power draw is going to be a little higher just from the fans. The board is an X370 Taichi, and I have a 480GB BPX NVMe SSD. I downclocked my DDR4-3333 to DDR4-2666 to match the demo system's RAM configuration.
Underclocking my chip was kind of hard; I had to use the AMD CBS settings, which are a PITA versus the ASRock OC interface. Regardless, I ran CB R15 at 3200 MHz and turned in a score of 1412. I measured 180W at the wall. In this power range my PSU (EVGA P2 750W) averages maybe 89% efficiency, meaning the pre-loss draw was more in the ballpark of 160W. Subtracting all the extra fan power, I figure my power usage was close to the 135W of the ES demo machine.
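For anyone who wants to redo the power math, here's the back-of-envelope version (the 89% efficiency is read off my PSU's load curve; the extra-fan wattage is a rough guess on my part, not a measurement):

```python
wall_watts = 180.0          # measured at the wall
psu_efficiency = 0.89       # approx. EVGA P2 750W efficiency at this load
extra_fan_watts = 25.0      # rough guess for the non-standard fans

dc_watts = wall_watts * psu_efficiency          # power delivered past the PSU
comparable = dc_watts - extra_fan_watts         # apples-to-apples vs the demo rig

print(f"DC-side draw: {dc_watts:.0f} W")        # ~160 W
print(f"Minus extra fans: {comparable:.0f} W")  # ~135 W
```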
And all I scored was a measly 1412.
Not sure if I can get any more clockspeed at that power level, but I doubt it.
How can you say Zen2 is not a new architecture when its uncore is the biggest departure from convention in over a decade?
High frequency shouldn't matter for an IO die. What IBM 14HP would provide is the possibility of having eDRAM. The Xbox One SoC had only 32MB of embedded RAM and that was enough to handle 1080p, so if this has 32MB of eDRAM, it might be enough as side-port memory to support a GPU chiplet on a mobile platform.
Considering that there will be no APU-chiplet version, I wouldn't mind a smallish GPU inside the I/O die (at least the decode/encode blocks and at least 2 EUs). It would make this CPU a lot easier to market for OEMs and those retail customers who have no need for a standalone GPU. Some software utilizing integrated graphics for GPGPU or the video-encode blocks (such as the Adobe suite) would also work better that way. A small GPU would also mean much better battery life in those desktop-replacement laptops like the ASUS ROG STRIX (with Ryzen 1700). Not that it matters much in that segment.

The IO die is almost exactly the same size as Summit Ridge with both CCXs lopped off (~123mm2 vs 213 - 88 = ~125mm2), so I think we can pretty safely put the eDRAM rumors to rest.
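The die-size arithmetic above, spelled out (the areas are the mm² figures quoted in this thread, not my own measurements):

```python
summit_ridge_mm2 = 213   # full Summit Ridge die
both_ccx_mm2 = 88        # the two CCXs combined
io_die_mm2 = 123         # reported I/O die size

remainder = summit_ridge_mm2 - both_ccx_mm2
print(f"Summit Ridge minus CCXs: {remainder} mm2 vs I/O die: {io_die_mm2} mm2")
# 125 vs 123 -- essentially no area left over for 32MB of eDRAM
```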
I would rather see AVX2 performance.
IMO, this is packaging and not directly architecture.
Considering that there will be no APU-chiplet version, I wouldn't mind a smallish GPU inside the I/O die (at least the decode/encode blocks and at least 2 EUs). It would make this CPU a lot easier to market for OEMs and those retail customers who have no need for a standalone GPU. Some software utilizing integrated graphics for GPGPU or the video-encode blocks (such as the Adobe suite) would also work better that way. A small GPU would also mean much better battery life in those desktop-replacement laptops like the ASUS ROG STRIX (with Ryzen 1700). Not that it matters much in that segment.
We already lose 8 PCIe lanes just so that the socket can support APUs.
I think you are misunderstanding the power of the GPU that is being requested.
1 PCIe lane would be entirely adequate to feed it.
It's something for:
- running the OS (great for debug)
- word processing
- spreadsheets
- basic browsing
- text editor
etc
Power is not required (or requested).
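For scale, the per-lane bandwidth math behind "1 PCIe lane would be entirely adequate" (the signaling rates and 128b/130b encoding are standard PCIe figures; the framing is mine):

```python
def pcie_lane_gb_s(gigatransfers: float) -> float:
    """Usable GB/s for a single PCIe 3.0/4.0 lane (128b/130b line encoding)."""
    return gigatransfers * 128 / 130 / 8

print(f"PCIe 3.0 x1: {pcie_lane_gb_s(8):.2f} GB/s")   # ~0.98 GB/s
print(f"PCIe 4.0 x1: {pcie_lane_gb_s(16):.2f} GB/s")  # ~1.97 GB/s
# ~1 GB/s is far more than desktop/2D work on a tiny display-out GPU needs
```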
There will be no APU chiplets on AM4.
Why? Gaming performance is going to be by and large more important for most of the people on this forum.
But again, the main point is that the IO die doesn't include a GPU and won't, and AMD isn't at the point where the volume of CPUs they are shipping is losing out on sales because their only iGPU CPUs are 4-core or less.
Whilst I agree with what you're saying, PCIe 4 pretty much impacts the discussion; since most devices are PCIe 3, and PCIe 4 lanes can be split into 2x PCIe 3, it may be less of an issue.

I'd have to check why the 8-lane drop, but I thought I remembered it being wiring that is shared with the video outputs. The others are lost because they have no wiring at all, due to the socket's support for AM4 and AMD's decision not to go LGA. It isn't as simple as putting in a 1-3 unit iGPU for remedial tasks; AM4 is packed a little tight for Zen's possible feature set, and no desktop user wants AMD to lose more PCIe lanes just so that people can pretend it's costing them business sales.