I've long maintained that AMD missed a golden opportunity during the HD 4xxx/5xxx/6xxx era. They pursued the small-die strategy and built incredibly competitive GPUs at a fraction of NV's die sizes, but they never scaled those designs up to NV-class die sizes, which presumably would have crushed NV in the performance category and secured AMD a reputation as a premium brand. Instead they went after the so-called sweet spot and cemented their reputation as "the other GPU maker".
NV arguably flipped this strategy on AMD from the GTX 6xx series onward, and executed it nearly flawlessly... until the RTX series. RTX dies are huge and expensive, and they're pushing a new technology with arguably limited short-term returns. Sounds similar to the GT200 situation.
Granted, we don't have exact performance figures for Turing yet, but this potentially leaves an opening for AMD: even a moderate refinement of their architecture, unburdened by a bunch of RT/Tensor cores, could produce a GPU that delivers better performance in "launch day titles" and win back the performance-at-all-costs crowd (the "whales" and big spenders) in the consumer GPU space.
My understanding is that the GCN architecture in Vega is actually reasonably competent at ray tracing tasks (naturally not as fast as dedicated hardware, but better than Pascal), so AMD could still tout DXR compatibility while focusing their performance gains on standard rasterized workloads.
In short: correct the mistake they made with the small-die strategy during the TeraScale architecture years.
What are your thoughts on how AMD should move forward from here knowing what we know about the Turing shake-up?