Is it true that the N64's CPU was on a 350 nm process and that the Saturn's SH2s were done on a 250 nm process?
I thought I had read that was the case. However, it doesn't make sense to me, since the N64's CPU had an FPU, much more cache, and a higher clock speed than the Saturn's SH-2s. Does the SH-2 have any advantages over the MIPS R4000-series derivative used in the N64?
Also, I've heard that some N64 games didn't use a Z-buffer. Did those games use the CPU for depth testing instead of the RDP's Z-buffer? I believe Factor 5 skipped the Z-buffer for Indiana Jones and the Infernal Machine, which would make sense given that the N64 only had a 16-bit Z-buffer (which is very low precision).
What are the likely reasons Nintendo didn't go with a 250 nm process (assuming they used a 350 nm process)? Did it have to do with transistor density, or was it something else?