This is from the same site that claimed the NV30 would be a 4x4 architecture with a 256-bit memory controller and a specialized voxel rendering unit.
Personally I'd say it looks like BS. 0.09u is possible, and knowing nVidia's tendency to bet on the latest process tech they may well push for it, but it's hardly a guarantee.
750-800MHz core is viable enough IMHO.
16MB eDRAM is highly unlikely, especially paired with 45GB/s of main memory bandwidth, not to mention it would require a vast relayout compared to what they've designed in the past.
nVidia has never been one to make significant architectural changes unless absolutely required, and eDRAM would assuredly necessitate that if it's to be used effectively.
8x4 architecture? Not a chance in hell. I don't see 4 TMUs per pipe as being at all beneficial, especially with pixel shaders partially taking over the job of dedicated TMUs in the long term. I'd bet on 2 TMUs per pixel pipeline at most, and even that may be stretching it; 4 fixed TMUs would almost seem to portend a step back in rendering methodology relative to the NV30. A 16-pixel shared rendering pipeline, or 12-16 dedicated pixel pipelines, would cost less in transistor real estate and yield significantly more dividends, as the rough numbers below show.
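To put rough numbers on that last point, here's a quick back-of-envelope comparing peak fillrates (my own sketch, using the 750MHz core figure from above, not anything from the rumour itself):

```python
# Back-of-envelope peak fillrates: pixel rate = pipes * clock,
# texel rate = pipes * TMUs-per-pipe * clock. All clock figures are speculative.
def fillrates(pipes: int, tmus_per_pipe: int, clock_mhz: float):
    """Return (Gpixels/s, Gtexels/s) peak fillrates."""
    pixel_rate = pipes * clock_mhz / 1000
    texel_rate = pipes * tmus_per_pipe * clock_mhz / 1000
    return pixel_rate, texel_rate

print(fillrates(8, 4, 750))   # rumoured 8x4  -> (6.0, 24.0)
print(fillrates(16, 2, 750))  # a 16x2 layout -> (12.0, 24.0)
```

Same 32 texture units either way, but the 16x2 arrangement doubles the pixel rate, which is exactly where the 8x4 rumour falls down.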
16 VS? That's clearly not going to happen. nVidia has already dropped dedicated units with the NV30, and gone the P10 route of a flexible array of small shaders working jointly.
Given all of nVidia's claims of improved vertex shading efficiency it would seem counterproductive to go back to a sheer brute force method.
In the long run 16 dedicated units are a losing proposition.
I don't expect either the R400 or NV40 to use dedicated VS.
The NV30 has already begun to follow the P10's path, and I expect the NV40 to complete the journey.
DDR-II is possible, though I'd bet they'll jump to GDDR-III if it's viable in volume, especially given that ATi is hedging its bets on skipping DDR-II entirely and jumping straight to GDDR-III.
44.8GB/s of main memory bandwidth initially seems like an inordinately large jump to make in one generation, and dubious given the industry's tendency to make small bumps so as to more easily milk each generation's chip sales for every $ they can.
On the other hand a 256-bit memory controller is almost a certainty, and 1.4GHz DRAM should be easily viable by then.
I'd look towards ~38GB/s as the likely figure. 1.2GHz RAM will be cost-efficient by then, so they won't be forced to pay a premium for the fastest parts, and paired with a 256-bit memory controller it still works out to 38.4GB/s, a significant boost in bandwidth.
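For reference, the arithmetic behind those figures is just bus width times effective data rate; a quick sketch of the back-of-envelope (the clock speeds are my own speculation, nothing official):

```python
# Peak bandwidth = bus width in bytes * effective DRAM data rate.
def peak_bandwidth_gbps(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * effective_clock_mhz / 1000

print(peak_bandwidth_gbps(256, 1400))  # 44.8 GB/s - the rumoured figure
print(peak_bandwidth_gbps(256, 1200))  # 38.4 GB/s - my more conservative guess
```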
Personally, 16 dedicated vertex shaders + a 256-bit memory controller + an 8x4 architecture + 16MB eDRAM looks somewhat unlikely if they expect to hit 350M transistors.
DX10 doesn't look likely until very late 2004 at the earliest; MS put PS/VS 3.0 in the DX9 spec specifically so they could keep DX9 viable for as long as possible and give themselves time to work on DX10.
I'd bet on the NV40/R400 being DX9 parts, or DX9.1 should PS/VS 3.0 end up being labelled DX9.1.
Oh, FWIW, the use of eDRAM in a graphics renderer cannot be patented; a specific implementation can be, however. There is nothing forcing nVidia to follow the same path as BitBoys in implementation.
In any case, eDRAM is extremely dubious in my mind.