We heard a lot about 3D Tri-Gate transistors, FinFETs, and other scavenged Alien Technology when Intel announced its 22nm process with Ivy Bridge, but I don't think we've fully enjoyed the benefits these technologies are supposed to bring: lower power consumption, less heat, and faster switching. On the CPU side, core clocks have been stagnant in Intel chips for three generations.
Now that TSMC is talking about future nodes and processes, I wonder what effect these technologies will have on GPUs.
GPU makers can pick a maximum power consumption figure and stick with it, using the reduced gate leakage and increased transistor density to cram in more transistors and raise performance. Or they can keep roughly the same transistor count we have today, tweak the architecture, and build smaller, lower-TDP, cooler-running chips for the midrange.
If GPU makers do elect to use every watt available to push performance in their high-end chips, what will the overclocking headroom be like? It will likely suffer from a lack of "electric oxygen", and we won't see the 20-25% overclocks we see today.
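Here's a rough back-of-the-envelope sketch of why that happens, assuming the textbook dynamic power relation P ≈ α·C·V²·f. The 2x transistor count and 10% voltage drop below are illustrative guesses on my part, not actual node data:

```python
# Back-of-the-envelope: dynamic power scales as P ~ alpha * C * V^2 * f.
# If a shrink lets you pack ~2x the transistors (roughly 2x the switched
# capacitance) into the SAME power budget, something has to give on V and f.

def dynamic_power(cap, voltage, freq, activity=1.0):
    """Relative dynamic power: P = activity * C * V^2 * f (arbitrary units)."""
    return activity * cap * voltage**2 * freq

# Today's hypothetical high-end chip, normalized to 1.0 units of power.
p_today = dynamic_power(cap=1.0, voltage=1.0, freq=1.0)

# Next node: 2x transistors, same power budget. Even with an assumed 10%
# voltage drop from the new process, frequency must fall to fit the budget:
#   f_new = P_budget / (C_new * V_new^2)
f_new = p_today / (2.0 * 0.9**2)
print(f"Sustainable relative clock: {f_new:.2f}x")  # ~0.62x
```

That squeeze is exactly where overclocking headroom goes: if the stock clock is already pushed against the power wall, there's nothing left for us to claw back.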
Core clock speed is steadily rising; a 1GHz core is a common frequency these days. But memory speed is plateauing at 6-7Gbps GDDR5. Memory chips aren't produced on the same process as GPUs, so it could be a while before we see any major improvement there.
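For context on the memory side, peak bandwidth is just the per-pin data rate times the bus width. A quick sketch (the 256-bit bus is my assumption for a typical high-end card):

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width / 8 bits-per-byte.
# The 256-bit bus width is an assumed typical high-end configuration.

def bandwidth_gbs(rate_gbps, bus_bits=256):
    """Peak memory bandwidth in GB/s from per-pin rate and bus width."""
    return rate_gbps * bus_bits / 8

for rate in (6, 7):
    print(f"{rate} Gbps GDDR5 on a 256-bit bus: {bandwidth_gbs(rate):.0f} GB/s")
# 6 Gbps -> 192 GB/s, 7 Gbps -> 224 GB/s: only a ~17% step between today's
# fastest chips, which is why memory feels stuck relative to core clocks.
```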
GPU makers are always competing for the performance crown, but I really don't want 95°C operating temperatures to become the norm.
What do you think?