
Some tech questions for those with programming experience

Anarchist420

Diamond Member
1. Did Flipper (the GameCube GPU) actually have a hardware RGBA8 mode that just couldn't be used because of RAM limitations?

2. Did Flipper do T&L at 32-bit precision, or were its calculations 24-bit?

3. How did the PS1 clip things? It didn't have a hardware z-buffer, IIRC, but it also generally didn't look like there was much precision far away, the way there was in 3D Saturn games... I know the Saturn didn't have a hardware depth buffer either, no matter how much it looked like it used a w-buffer.

4. Did most PS2 games use the 32-bit z-buffer mode? I was guessing so, since a lot of them looked like they used logarithmic depth buffering (Spider-Man: Web of Shadows, the part of Lament of Innocence where you walk up to the "throne room", and Devil May Cry 1-3 all looked like they had pretty even depth distribution), although I realize I could be guessing wrong, which is why I'm asking.

5. Did the Dreamcast's infinite clip planes mean there would be disproportionately more precision far away, or did it mean it would look like a log depth buffer was used? I know it didn't use a depth buffer but instead relied on depth testing; some games looked pretty even while others looked like there was way more precision far away... none of them looked like they had more precision close up.

Anyway, I've generally never been a fan of z-buffers (they've had their uses, and sometimes they looked good enough, like in MDK2, some other OpenGL games, and Serious Sam 2), although I love 32-bit fixed-point log depth buffers, and games from back in the day that used the w-buffer looked a lot better, depth-wise, than what DX9 games were limited to... DX9 games were stuck with partial-precision z-buffers, so nothing looked quite like the original Unreal Engine did.

And of course, 32-bit fixed-point log z-buffers won't see much use in the future either (even though the FPS loss from giving up early-z rejection doesn't take a game from smooth to a slideshow)... The Unreal DX1x renderer only uses a 32-bit float 1-z (reversed-Z) buffer, but at least the DLLs are free to the end user (the guy who was kind enough to spend his time making them didn't charge anything 🙂). However, I wish the guy who did the OpenGL renderer would make one final update so it used a 32-bit fixed-point log depth buffer on modern hardware, but that will probably never happen... I can only remember how sweet the original Unreal and UT99 looked on the Diamond Monster 3D II I had back in the day. Other than that, there was no properly rotated-grid AA, there were some dithering artifacts (due to the lack of a full-precision RGBA buffer) and small textures, and lighting and depth calculations weren't done per pixel when everything should be.
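For anyone curious what "log depth" actually means numerically, here's a minimal Python sketch of a logarithmic depth mapping. The near/far values are made up for illustration, and real engines vary in the exact encoding:

```python
import math

def log_depth(z, near=1.0, far=100000.0):
    # Map camera-space depth z in [near, far] to [0, 1] logarithmically:
    # equal *ratios* of distance get equal slices of the depth range,
    # which is what makes the precision distribution look even at every scale.
    return math.log(z / near) / math.log(far / near)

# Each tenfold increase in distance costs the same slice of the range,
# so a 32-bit fixed-point buffer spends its codes evenly across magnitudes.
for z in (1.0, 10.0, 100.0, 1000.0, 10000.0, 100000.0):
    print(f"z = {z:8.0f} -> depth = {log_depth(z):.2f}")
```

With these numbers, the loop steps evenly from 0.00 to 1.00 in increments of 0.20, one step per decade of distance.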
 
I'm an ESTJ (and I hate myself because of that), so I have no influence over anyone (even though I'm the very MBTI type that likes to try to control everyone).

Bump anyway.
 
I'll respond when I'm not on a 5" touch screen.

Or not. Here's a start.

Vertex clipping and per-pixel depth sorting are two entirely different things, btw.

What is stored in the depth buffer is completely arbitrary: it's determined by what you store in the z field of the outgoing [xxxxyyyy----zzzz] command packet, which is then linearly interpolated by the rasterizer.

The notion of Z- or W-buffering occurs purely in the transformation and clipping process. You have an <x,y,z,w> vertex; it undergoes a homogeneous transform and perspective divide, and the screen-space results go to the rasterizer. What you put in that register or packet is entirely up to you; W-buffering support in hardware just tells it how to interpolate and invert the values. It's really just a game of distributing the depth range evenly between near and far, camera space vs. screen space. A straight interpolation of post-divide Z crams roughly 90% of your depth codes into the first few percent of the view distance, so the distant bulk of the scene, already shrunk by perspective foreshortening, is left fighting over the remaining sliver of the range. Hence using camera-space Z (w-buffering) instead of post-divide Z, or inverting the mapping to flip where the precision lands: either way, you spread the precision far more evenly across the frustum, essentially undoing the effect the perspective divide has on the depth buffer.

Honestly, you should learn to derive a 4x4 homogeneous projection transform yourself by hand; then you'll understand the entire process.
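For anyone who wants to follow that advice, here's a minimal sketch of one common convention (a D3D-style left-handed matrix with [0,1] depth, applied in row-vector order; other APIs transpose the matrix or map depth to [-1,1], so treat the signs and layout as assumptions):

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    # D3D-style left-handed perspective projection, [0,1] depth,
    # intended for row-vector * matrix multiplication.
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    a = far / (far - near)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0,        f,   0.0, 0.0],
        [0.0,        0.0, a,   1.0],  # clip.w picks up camera z here
        [0.0,        0.0, -near * a, 0.0],
    ]

def project(p, m):
    # Homogeneous transform of a camera-space point p = (x, y, z), w = 1,
    # followed by the perspective divide down to normalized coordinates.
    x, y, z = p
    v = [x, y, z, 1.0]
    clip = [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]
    w = clip[3]
    return [c / w for c in clip[:3]]

m = perspective(90.0, 16 / 9, 1.0, 1000.0)
print(project((0.0, 0.0, 1000.0), m))  # depth component lands at ~1.0 (far)
print(project((0.0, 0.0, 1.0), m))     # depth component lands at 0.0 (near)
```

Working the algebra by hand, clip z comes out as a·(z − near) and clip w as z, so the divided depth is a·(z − near)/z, i.e. the same hyperbolic curve discussed above. That's the whole "what goes in the register is up to you" point: swap camera-space z (w) in for the divided value and you have a w-buffer instead.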
 