How many times easier is it to emulate via the CPU only...

Status
Not open for further replies.

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
...from a programmer's perspective?

I was wondering because no DX11 plugin has been released for any DC emulator. I would've thought that would've been pretty damn easy.

I also don't think a mid-range Intel CPU would have trouble with it anyway, since PCSX2 runs pretty smoothly with SSE4.1 or better and with edge AA on.
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
Graphics functions are typically emulated at a high level. There will be CPU emulation of the processor's reads/writes to VRAM and the graphics registers, but the emulator software will know from the register writes and FIFO packets that "this is a triangle" and translate it to a DX/GL wrapper call.
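A toy sketch of that idea, in C++. The packet tag and vertex layout here are made up for illustration; the real PVR tile accelerator parameter formats are more involved:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical packet tag and vertex layout -- NOT the real PVR TA
// formats, just enough to show the high-level-emulation idea.
constexpr uint32_t PKT_TRIANGLE = 0x01;

struct Vertex { float x, y, z, u, v; };

static int triangles_drawn = 0;

// Stand-in for the DX/GL wrapper: a real plugin would issue a
// DrawPrimitive-style call on the host API here.
void host_draw_triangle(const Vertex* /*v*/) { ++triangles_drawn; }

// The emulator traps writes to the graphics FIFO. It never rasterizes
// anything itself -- once the packet stream says "this is a triangle",
// it hands the vertices straight to the host graphics API.
void on_fifo_packet(uint32_t tag, const std::vector<Vertex>& verts) {
    if (tag == PKT_TRIANGLE && verts.size() == 3)
        host_draw_triangle(verts.data());
}
```

The point being: the host GPU only ever sees ordinary triangle draws, no matter what API version sits behind the wrapper.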

The DC's PVR chip is just a simple fixed-function raster engine; nothing is programmable, hence no need to overcomplicate things with DX11.

The Hitachi SH4 CPU in the Dreamcast contains special low-level vector and matrix instructions for geometry computation (read: T&L), so it needs to be emulated at the CPU instruction level and can't really be separated out. The PVR graphics chip is sent screen-space primitives (i.e. already lit, transformed, perspective-projected, and clipped) as display lists and in turn generates tile bins in VRAM. That's it. Any target platform with "DrawPrimitive" functionality is all that's needed to emulate the DC GPU. It's the SH4 that does all the work at the CPU instruction level.
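To make that concrete: FTRV is the SH4 instruction that transforms a 4-component vector by the 4x4 matrix held in the FPU's back register bank, i.e. a full T&L-style transform in one opcode. A sketch of emulating it inline (column-major layout assumed for illustration; this is not a cycle-accurate model):

```cpp
#include <cassert>

// Emulate the SH4 FTRV instruction: multiply the vector v by the 4x4
// matrix m (from the FPU back bank), in place. The emulator just does
// the arithmetic inline as part of executing the instruction -- this
// is why the T&L work can't be split off from CPU emulation.
void emulate_ftrv(const float m[16], float v[4]) {
    float r[4];
    for (int row = 0; row < 4; ++row)
        r[row] = m[row]     * v[0] + m[row + 4]  * v[1]
               + m[row + 8] * v[2] + m[row + 12] * v[3];
    for (int i = 0; i < 4; ++i) v[i] = r[i];
}
```

Every vertex a game transforms goes through instructions like this on the emulated SH4, long before anything resembling a draw call exists.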

Also, the PowerVR itself is a strange beast; it's a unique depth-buffer-less deferred tile-rendering architecture that isn't really ideologically compatible with traditional rendering methods. Any low-level interaction that takes advantage of this unique hardware rendering method, such as scene post-processing, shadows, etc., would be very difficult to emulate or translate at the algorithm level to a traditional triangle-based architecture or API. So for all I know, even the GPU may have to be software emulated for that reason alone.

Tile-rendering constructs never did map well to the "z-buffered triangles" DX/GL APIs. It was bad enough as a PC add-on card trying to accommodate the unique architecture under the umbrella of the traditional DX/GL API. It would be next to impossible to map to current APIs coming from a machine whose programmers had direct access to the frame buffer and tile bins for whatever bit fiddling their imagination allowed.

Short answer: the Dreamcast pretty much WAS just a simple CPU-only machine. The graphics chip was a Voodoo/Voodoo2/PowerVR-era simple fixed-function texture-mapped polygon blitter, so there is really nothing else to do BUT emulate the SH4 at the instruction level.
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
exdeath said: (quoted above)
I was thinking the reason DX11 was better than DX9/10 was precisely that they would run into fewer depth buffer issues. However, as you pointed out, it would be best if they emulated it all completely via the CPU or completely via the GPU's shaders. I don't see any reason to try to use the ROPs.

Thanks for the reply:)
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
Yeah, better all or nothing. The SH4 is nothing compared to modern CPUs. There would be 1000x the driver and API overhead trying to send some things to the GPU selectively; it's faster to just carry out the instruction on the CPU, especially with SSE, etc. having equivalent instructions.

Even if the host CPU had to emulate a complex SIMD instruction with several native instructions, it would still take just as many if not more instructions to set up an API call to the GPU and repack the data, plus far more wasted time on a thread context switch, waiting on the OS, driver, and GPU, etc., and the added overhead in the emulator core of stalling to keep things in order. Highly inefficient.
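As an example of how cheap the inline route is: FIPR is the SH4's 4-component inner product instruction, and a recompiler can lower it to a handful of SSE instructions executed right in the translated block, with no API call and no driver round-trip. A sketch (the lowering shown is one possible mapping, not taken from any particular emulator):

```cpp
#include <cassert>
#include <immintrin.h>

// Possible SSE lowering of the SH4 FIPR instruction (4-component dot
// product). A few inline vector ops replace the guest instruction --
// compare that to packing data, crossing into the driver, and waiting
// on the GPU for the same 4 multiplies and 3 adds.
float fipr_sse(const float a[4], const float b[4]) {
    __m128 m = _mm_mul_ps(_mm_loadu_ps(a), _mm_loadu_ps(b));
    __m128 s = _mm_add_ps(m, _mm_movehl_ps(m, m));  // [m0+m2, m1+m3, ..]
    s = _mm_add_ss(s, _mm_shuffle_ps(s, s, 0x55));  // + (m1+m3)
    return _mm_cvtss_f32(s);
}
```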

Now, for a system like the N64 or PS2, with programmable GPU microcode and stand-alone vector units, cross-assembling the microcode and using the GPU's shaders would be perfectly suited to the task. VU1 on the PS2 is very much like DX11 (it can generate geometry, do real-time tessellation and subdivision, branching and looping, recursion, custom skinning/boning, VIF packing/unpacking of vector data, etc.). These things weren't possible to GPU-accelerate on the PC until DX11, due to API limitations alone.

As it is, emulation of old consoles is so fast that the emulator has to be intentionally throttled to 30 fps anyway. There would be no benefit to higher performance unless you were trying to minimize requirements to target older or slower host platforms, which won't support DX11 and GPGPU anyway.
 