I would imagine anything software-based is going to sync automatically since it's interacting with the drivers etc., and maybe something hardware-based would as well (ex: an HDMI splitter going to an HDMI capture device).
If it's a video camera pointed at the screen, I feel it would still sync OK as long as the screen refreshes all at once, but now I'm kinda stumped; I'm honestly not sure that's how modern screens work. Do they update all the pixels at once, or do they scan like CRTs do?
Either way, I could see the recording ending up with two frames blended together if it's not synced.
Take a look at this graph:
Code:
Cam: 000011111111222222223333
Mon: 000000001111111122222222
The numbers represent frame numbers over time. If they're not synced (and syncing them is kinda hard to do), and the camera is exposing for the full 1/60th of a second per frame, it would capture part of frame 0 and part of frame 1, and so on. But I think the way cameras work is they expose only as long as they need and then wait before the next capture, so since you're aiming at a screen that's producing light, chances are the camera's shutter speed will be fast enough that it has less chance of capturing two monitor frames at once. But it really depends on how it ends up syncing up. If it ends up like this:
Code:
Cam: 000000011111111222222223
Mon: 000000001111111122222222
You could maybe still get partial frames blending together, like capturing one frame for 90% of the exposure and the next frame for the other 10% or something.
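If you want to play with the math, here's a quick Python sketch of that idea. It's a toy simulation, not anything from a real capture pipeline: the `blend` function and its `offset` knob are just made up for illustration, and it assumes both the camera and the monitor run at exactly 60 Hz. It works out what fraction of each camera exposure lands on each monitor frame.

```python
# Toy model: camera exposures vs. monitor refreshes, both at exactly 60 Hz.
# 'offset' is the phase difference between the shutter and the refresh.

FPS = 60.0
FRAME = 1.0 / FPS

def blend(exposure, offset, n_frames=4):
    """For each camera frame, return {monitor_frame: fraction_of_exposure}."""
    results = []
    for n in range(n_frames):
        start = n * FRAME + offset          # shutter opens
        end = start + exposure              # shutter closes
        mix = {}
        k = int(start // FRAME)             # first monitor frame overlapped
        while k * FRAME < end:
            lo = max(start, k * FRAME)      # overlap of exposure window
            hi = min(end, (k + 1) * FRAME)  # with monitor frame k
            if hi > lo:
                mix[k] = (hi - lo) / exposure
            k += 1
        results.append(mix)
    return results

# Full 1/60 s exposure, half a frame out of phase:
# every capture is a 50/50 blend of two monitor frames.
print(blend(exposure=FRAME, offset=FRAME / 2))

# Short 1/240 s exposure, same offset:
# each capture comes entirely from a single monitor frame.
print(blend(exposure=FRAME / 4, offset=FRAME / 2))
```

The second call is the "fast shutter" case from above: with a short enough exposure, the worst a bad phase offset can do is shrink, not eliminate, the window where the capture sees only one frame.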
At least this is my theory... I don't really know a lot about this stuff. Now I'm kinda curious to experiment with it. I imagine if you record a 60Hz screen at 60fps, stopping and restarting, some recordings will end up clearer than others depending on how they synced up. This would be easier with a high-end camera that lets you adjust things like exposure, so you can try to match the exposure time to the frame rate.