This applies to Nvidia as well, but I don't care about them as much.
People always talk about yields being lower on a new process, which makes large dies much more expensive at the start of a new generation. That's why companies like AMD and Nvidia choose smaller die sizes initially and move to bigger ones as yields improve over time.
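To put rough numbers on that, here's a toy sketch using the classic Poisson yield model (yield ≈ e^(−D0·A)). The defect densities and die areas below are invented for illustration, not real foundry figures:

```python
import math

def poisson_yield(defects_per_cm2, die_area_mm2):
    """Fraction of dies expected to come out defect-free (simple Poisson model)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

# Made-up numbers: an immature process (0.5 defects/cm^2) vs. a mature one (0.1).
for d0 in (0.5, 0.1):
    for area_mm2 in (150, 600):  # small die vs. big die
        print(f"D0={d0}/cm^2, {area_mm2} mm^2 die -> {poisson_yield(d0, area_mm2):.0%} good")
```

With these made-up numbers the 600 mm^2 die yields around 5% on the immature process while the 150 mm^2 die yields close to 50%, which is exactly why the big dies wait for the process to mature.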
Multi-GPU often sucks: it depends on developer support, and even with that it often has issues.
So why can't AMD create a Megazord-like GPU? A GPU where several smaller dies come together to form one larger die and, most importantly, BEHAVE like a single larger die? What is the technical constraint there?
Is it the proximity to the memory? The inability to properly share the same data across the different smaller GPU pieces at the same time?
But couldn't there be a common pool of memory somewhere on the board, with optical interconnects to the individual GPU dies? I thought that was the whole point of The Machine that HP was building and later backed away from?
Would optical interconnects between the different GPU dies still not provide enough bandwidth and low enough latency for them to work as a single unit? What else is the constraint here?
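For a sense of scale on the bandwidth question, here's a crude back-of-envelope. Every number below is an assumption (roughly HBM-era magnitudes), not a measurement:

```python
# All figures are rough assumptions for illustration.
vram_bw_gbs = 500        # assumed VRAM bandwidth, ~HBM-class card
fabric_multiplier = 4    # assumed: internal cache/fabric traffic is several x VRAM bandwidth

on_die_traffic_gbs = vram_bw_gbs * fabric_multiplier

# Split the GPU into 4 chiplets; if accesses are spread uniformly,
# most of that traffic now has to cross a die boundary.
chiplets = 4
cross_die_fraction = (chiplets - 1) / chiplets
link_bw_needed_gbs = on_die_traffic_gbs * cross_die_fraction

print(f"Internal fabric traffic (assumed): {on_die_traffic_gbs} GB/s")
print(f"Die-to-die bandwidth needed:       {link_bw_needed_gbs:.0f} GB/s")
print("Compare: PCIe 3.0 x16 is ~16 GB/s each way.")
```

Under those assumptions the chiplets would need on the order of 1.5 TB/s between them to look like one die, far beyond any board-level link, optical or not, and that's before latency even enters the picture.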
There must be an answer. Or is this the kind of thing AMD means on that Navi slide when it talks about a more "scalable" GPU?
If AMD had this tech, there would be no need to choose between targeting midrange and high end: they could build small dies that are cheap because of the way wafer defects affect the economics of chip production, then scale them up to whatever they wanted.
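Extending the yield sketch above to cost: compare one good 600 mm^2 monolithic die against four good 150 mm^2 chiplets (same total silicon). Wafer cost, usable area, and defect density are all invented for illustration:

```python
import math

WAFER_COST = 8000     # assumed $ per wafer
WAFER_AREA = 70000    # assumed usable mm^2 on a 300 mm wafer (ignores edge losses)

def cost_per_good_die(die_area_mm2, defects_per_cm2):
    dies_per_wafer = WAFER_AREA // die_area_mm2
    yield_frac = math.exp(-defects_per_cm2 * die_area_mm2 / 100)
    return WAFER_COST / (dies_per_wafer * yield_frac)

d0 = 0.5  # assumed immature-process defect density, defects/cm^2
print(f"1 x 600 mm^2 monolithic: ${cost_per_good_die(600, d0):,.0f}")
print(f"4 x 150 mm^2 chiplets:   ${4 * cost_per_good_die(150, d0):,.0f}")
```

With these made-up inputs the four chiplets come out roughly an order of magnitude cheaper than the single big die, which is why a working "megazord" GPU would be such a big deal.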