- Feb 2, 2008
- 2,219
- 218
- 101
Consider the following:
1) CPUs used to do the work of GPUs. We can see from the evolution of the discrete GPU the advantages of keeping them separate:
a) nimbleness – upgrade the GPU without having to upgrade the CPU/board
b) more chip space – each chip gets its own die area to spend on transistors
c) better tailoring – people who need GPU power can invest in it without being forced to pay for it on the CPU (usually... Broadwell-C is an exception, with half the die being graphics)
d) more power – a separate CPU and GPU make it easier to cool the same wattage with less noise, generally, thanks to the much larger area to work with
e) better hardware tailoring and power – separate VRAM versus system RAM, each with different attributes
2) Intel already has to downclock AVX-2/AVX-512, and Intel enthusiasts are in a quandary about stability testing with Prime95 (with AVX-2 enabled or not?). That doesn't seem very optimal. If the SIMD chip were separately powered and cooled, it could be cranked up instead.
3) Intel seems to be using AVX as a way to differentiate itself from Zen. There is already talk about Zen not being optimal for AVX-2 and not having AVX-512 at all. People say "well, it will be forever before AVX-512 is a thing," but people said the same thing about regular AVX. It will become a thing, and it will take up transistor real estate, power consumption, cooling, etc.
By spinning off AVX-2 and higher to a SIMD chip, AMD could let people counter Intel's instruction-expansion moves. It could even propel AVX-512 into the mainstream, with AMD leading the pack by cranking up the power instead of having to downclock.
I know CPUs have eaten the MMU and FPU, but they have mostly coughed the GPU back out. HBM2 may make the APU more viable, but there will always be the problems of concentrating so much heat into a small area, yields, and the fact that a chip can only be so large. One can put more than one chip into a package, though.
4) Should the separate SIMD chip be cooled by the CPU cooler, via a hybrid socket that can fit both chips but also operate with just the main CPU installed? Or should it go fully separate, like a northbridge chip, or even get a slot like a GPU?
5) Would the latency penalty be too great, even if the chips are close to each other on the board?
6) Would it make sense to move all SIMD onto the separate chip, or to leave AVX and below on the CPU for mainstream users? The latter seems most sensible, so people aren't forced into buying a second chip. However, that also makes for more fragmentation. I can hear the whining now: "With Intel I only need to buy one CPU. With AMD I have to buy TWO!!!" (even if the two together are cheaper than that one chip alone).
AMD has the opportunity, with AM4+ at least, to counter Intel's instruction-advantage strategy. If Intel follows suit and spins off its SIMD, then enthusiasts win regardless, because they won't face the overclocking trouble AVX-2 causes now. As Intel keeps pouring more power consumption into SIMD, it will cause increasing issues with overclocking, at least as long as it stays in the same chip, eh?
It's probably too late for existing AM4 boards, but Zen+ could move to AM4+. It would be nicer for enthusiasts not to have to switch chipsets/boards, but it's not as if Intel doesn't change sockets frequently.