Next thing you know, someone will actually implement x86 on a Turing machine...
Seriously though, the N64 is a particularly difficult console to emulate for a few reasons:
- It's way more complex than anything that came before it (as in, the previous console generation)
- If you were to emulate the hardware exactly instead of just interpreting what the software does, you'd have a slow-as-hell emulator. The only realistic way to render the graphics is to use a GPU, but DirectX/OpenGL work quite differently from the Silicon Graphics chip the N64 used. So what emulators usually do is, instead of emulating the GPU, interpret the stuff that has to be drawn and issue equivalent DirectX/OpenGL commands to draw it.
First example that comes to mind: the SGI chip used a somewhat different clipping algorithm than DirectX does, which leads to geometry that should still be visible in-game not being drawn.
- The GPU needed microcode. Two basic versions produced by SGI (one extra-accurate but painfully slow, one fast but painfully inaccurate) were used by most developers; the really good ones (essentially Rare and Factor 5), however, developed their own microcode, optimized for their games. That's painfully hard to emulate (it must be about as bad as writing it was): the microcode tools were very poorly documented, had no debugger, and were generally crap. Good luck finding the chip's design team or reverse-engineering it.
- This is mostly a non-issue if you keep the whole game image in RAM, but many games depend on the cartridge's extra-fast access times to stream textures into RAM on the fly. A number of tricks for getting around the 4 KB texture limit were also tied to the way memory was managed.
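As a back-of-the-envelope illustration of that 4 KB limit: a texture that doesn't fit in the chip's texture memory has to be uploaded in pieces, so games (and emulators) end up doing tiling math like the sketch below. The constants and the helper are my own illustration, not any real emulator's code:

```c
/* The texture cache on the chip is 4 KB; larger textures must be
   streamed through it tile by tile. This estimates how many
   full-row uploads a width x height texture needs, assuming each
   upload must fit entirely in the cache (a simplification). */
#define TMEM_BYTES 4096u

static unsigned tiles_needed(unsigned width, unsigned height,
                             unsigned bytes_per_pixel) {
    unsigned row_bytes = width * bytes_per_pixel;
    if (row_bytes == 0 || row_bytes > TMEM_BYTES)
        return 0;                             /* one row alone overflows */
    unsigned rows_per_tile = TMEM_BYTES / row_bytes;
    return (height + rows_per_tile - 1) / rows_per_tile; /* round up */
}
```

For example, a 64x64 texture at 16 bits per pixel is 8 KB, so it needs two uploads; a 32x32 one fits in a single upload. Games that streamed tiles like this from the cartridge every frame are exactly the ones that get hurt when an emulator's memory timing is off.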
That's what comes to mind at first. So my guesstimate is that true accuracy is impossible at the moment, at least for all games. It may be possible for the ones that use the standard microcode.
As for speed, we may be at a point where common desktop processors can do the rendering and game logic without help from a GPU, but I wouldn't expect 60fps. Something like 10-30 fps.