Nvidia Corp. has reportedly taped out its next-generation high-performance graphics processing unit belonging to the “Pascal” family, according to a market rumour. If the information is correct, then Nvidia is on track to release the new GPU around mid-2016. The company needs its “Big Pascal” graphics processor to build next-generation Tesla accelerators for high-performance computing applications and to compete better against AMD in the consumer GPU market.
An anonymous person, presumably with access to confidential information in the semiconductor industry, revealed in a post on the Beyond3D forums that Nvidia has already taped out its next-generation graphics processing unit, code-named GP100. Nowadays, a tape-out means that the design of an integrated circuit has been finalized, but the first actual chips materialize only months after tape-out.
Beyond3D
Time to revive this thread. My info says big Pascal has taped out, and is on TSMC 16nm (Unknown if this is FF or FF+, though I suspect it is FF+). Target release date is Q1'16. This is a change from Kepler and Maxwell where the smallest chip (GK107 and GM107 respectively) taped out first. Maybe the experience with 20nm was enough for NV to go back to their usual big die first strategy. Given the huge gains in performance compared to 28nm, and the fact that the 16nm process is both immature and quite expensive, I suspect the die size may be a bit smaller than what we've seen with GK110/GM200.
Simultaneously, a thread appeared on Chiphell today claiming that GP100 will be the first Pascal chip, debuting on a Tesla card with 8 GB or 16 GB of HBM2 memory.
Here is a top view of a Pascal card (prototype). Remember that with HBM2, each stack is 2 GB instead of 1 GB (HBM1).
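As a rough sanity check of the rumoured capacities, here is a minimal sketch of how 2 GB HBM2 stacks could add up to the quoted 8 GB and 16 GB totals. The stack counts and per-stack capacities below are assumptions for illustration, not confirmed specifications of the GP100 prototype.

```python
# Hedged sketch: how the rumoured 8 GB / 16 GB HBM2 totals might be reached.
# Stack counts and per-stack capacities are assumptions, not confirmed specs.

def total_capacity_gb(num_stacks: int, gb_per_stack: int) -> int:
    """Total on-package memory from identical HBM stacks."""
    return num_stacks * gb_per_stack

# Four 2 GB HBM2 stacks on the interposer:
print(total_capacity_gb(4, 2))  # 8
# Doubling per-stack capacity (e.g. taller stacks) with the same four sites:
print(total_capacity_gb(4, 4))  # 16
```

On this assumption, the 16 GB variant would come from higher-capacity stacks rather than more of them, since interposer space limits the number of stack sites.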
