I could create dummy jobs to test, but I presume it will work fine locally; it's NFS and remote storage it seems to have trouble with. Some of the jobs also rsync directly over SSH with a key pair, e.g. using user@host for the source or destination. The issues are very sporadic, which makes them harder to troubleshoot. I know it's not network congestion: when I watch the traffic graph during the delays, it drops from full saturation to zero for the whole delay period, not to mention that it happens locally too, and there's no way I'm actually congesting my gigabit LAN. The server's load average is also not that high during these pauses, so it's as if rsync is genuinely waiting for something. Waiting for what, I don't know.
All my data is on my NAS, so the local jobs are to or from the NAS, while the internet jobs go over SSH. I don't tend to notice how well the backup jobs run since they happen overnight, but when running them manually I've noticed these delays as well. If I try without checksum, then at random it will decide it wants to re-transfer entire folders and take longer than it needs to, so even with the random stalls, checksum ends up being faster overall.
Without very expensive temperature-compensated, GPS-based clocking equipment it's impossible to keep every system clock 100% in sync, so I have a feeling slight time variations could be at play here too when using time-based syncing instead of checksums. Is there perhaps an option to allow a few seconds of variation? Ideally it would also compare the source and destination system clocks and compensate for the difference. For example, if the destination clock is 1 second behind, then every timestamp written there will appear 1 second older, so when comparing timestamps it should add that 1 second back; otherwise files get flagged for update when they really shouldn't be.
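To show what I'm picturing, here's a rough Python sketch of that comparison, not how rsync actually does it internally, just the idea: compare mtimes with a tolerance window and a known clock offset, assuming both paths are reachable locally (e.g. the NAS mounted over NFS). The function name and defaults are made up.

```python
import os

def needs_update(src_path, dst_path, tolerance=2.0, dst_clock_offset=0.0):
    """Return True if dst_path looks out of date relative to src_path.

    tolerance        -- seconds of mtime difference to ignore
    dst_clock_offset -- seconds the destination clock runs behind the
                        source clock (negative if it runs ahead)
    """
    if not os.path.exists(dst_path):
        return True

    src_mtime = os.path.getmtime(src_path)
    dst_mtime = os.path.getmtime(dst_path)

    # Shift the destination mtime back onto the source's timeline, e.g.
    # if the destination clock is 1 second behind, add that second back.
    dst_mtime_corrected = dst_mtime + dst_clock_offset

    # Only flag the file if it differs by more than the tolerance window.
    return abs(src_mtime - dst_mtime_corrected) > tolerance
```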
Worst comes to worst, I might just write my own app; I don't imagine it would be that hard to do, really.
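If I did go that route, estimating the destination's clock offset could be as simple as asking it for the time over the same SSH key-pair access the existing jobs already use. Another rough sketch; "backup.example.com" and the user name are placeholders:

```python
import subprocess
import time

def estimate_clock_offset(host, user="backup"):
    """Return approximate seconds the remote clock is behind the local one."""
    t0 = time.time()
    out = subprocess.run(
        ["ssh", f"{user}@{host}", "date", "+%s"],
        capture_output=True, text=True, check=True,
    )
    t1 = time.time()

    # date +%s only has one-second resolution, but that's enough to spot
    # a skew of a second or more.
    remote_time = float(out.stdout.strip())
    local_midpoint = (t0 + t1) / 2  # rough correction for network latency

    # Positive result means the remote clock lags the local clock.
    return local_midpoint - remote_time

if __name__ == "__main__":
    # "backup.example.com" is a stand-in for one of my offsite hosts.
    offset = estimate_clock_offset("backup.example.com")
    print(f"destination clock offset: {offset:+.1f} s")
```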