This has never been asked before. Regarding gaming, mostly.

havecold

Junior Member
Sep 26, 2013
7
0
0
Regarding performance based on fps, resolution, and hardware. Of course I could figure this out myself by buying two or three more PCs, or by going somewhere like Best Buy and asking if they would let me install programs on four different machines, but why go to the trouble when I could ask you super nice, loveable people. (Wanna take you home with me and hug you and kiss you and bleh. Oh, how I could never say that to somebody in real life.) Anyway.

Things I know to be fact already:


  • A lower resolution gives you better performance and responsiveness, even with the same graphical settings and the same constant fps at both resolutions.

  • Better hardware gives you faster load times.

  • A high polling rate for your mouse, on a machine that can handle it, gives you faster responsiveness.

Okay, so here's what I'm trying to figure out. Let's say you want a constant 58 fps in your games with zero dips. Load times don't matter to you. You have three or four machines that can all give you a constant 58 fps with zero dips, and each machine is progressively better than the last in every hardware aspect (RAM, hard drive, CPU, GPU, power supply, motherboard).

These are my 2 questions:
1. Does the fastest machine respond and/or perform better than the slowest machine with the same graphical settings (including resolution) and the same peripherals (mouse, keyboard, monitor), regardless of load times? If yes, please go into the greatest detail you can. I'll research what I don't understand.

2. If the answer to the above is yes: does the fastest machine at a higher resolution, with the same graphical settings and peripherals, perform the same as the slowest machine at a lower resolution, regardless of load times?
 
Last edited:

sm625

Diamond Member
May 6, 2011
8,172
137
106
If you truly have three or four different machines that all run at 58 fps and never dip, then yes, they are all going to run and play exactly the same. If one machine has an expensive SATA 3 SSD and the other has a 5400 rpm HDD, there could be moments in the game where you see dips in fps as it loads more textures. The same goes if you run up against a RAM barrier, say one machine has 4 GB and another has 8 GB: at some point you might drop frames as Windows starts swapping memory out. But then again, you said "58 fps constant with zero dip," so I guess you already answered your own question.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
By the definition of the question, the machines are 100% equivalent in terms of fps.

They could still differ in ways not covered by that metric, however. One could have a lot more latency, making the gaming experience less pleasant and ultimately causing motion sickness. You could also have quality-of-rendering differences despite the same settings: one of them using a different colour bias, or choosing not to run certain shader programs, or running optimised versions of them. You could also have one of them stuttering, where the fps count is fine but on a frame-by-frame basis there is a problem (microstutter).
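
A quick sketch of that last point (Python, with made-up frame times; the numbers are illustrative only): two machines can report the same average fps while one of them paces its frames far less evenly.

```python
# Illustrative only: two frame-time traces (in ms) with the same average
# fps; one is evenly paced, the other alternates short and long frames
# (microstutter).
smooth  = [16.9] * 60        # even pacing
stutter = [8.0, 25.8] * 30   # same average frame time, uneven pacing

def stats(frame_times_ms):
    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    worst   = max(frame_times_ms)   # a crude stutter indicator
    return round(avg_fps, 1), worst

print(stats(smooth))   # (59.2, 16.9)
print(stats(stutter))  # (59.2, 25.8) -- same fps counter, very different feel
```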

But I think you somehow meant to ask a different question, about whether SSDs are worth having; I am not really sure I understand what it is you are looking for.
 

havecold

Junior Member
Sep 26, 2013
7
0
0
BrightCandle, your answer was what I was looking for regarding performance. Thanks. So 58 fps with zero dips could still mean a GTX 680's shaders look better than an older card's at the same graphical settings.

Hmm, sorry, lol. I had to reread your post a third time, but you've answered my question. There is a possibility that all the older machines do not cause microstutter. Correct?
 
Last edited:

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
58 fps is a very odd number to choose; I know of no monitors that use that refresh rate, and I don't know why you are choosing it. 60 fps with no dips is far more reasonable, as it matches the common refresh rate. If you cap at 58 on a 60 Hz monitor, you are going to see issues, as two frames in every 60 will be displayed for twice as long.
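
To put numbers on that claim (a trivial sketch; 60 Hz and the 58 fps cap are the figures from this thread):

```python
# With the frame rate capped below the refresh rate, some refreshes get
# no new frame and must repeat the previous one.
refresh_hz = 60
cap_fps    = 58

repeats_per_second = refresh_hz - cap_fps
print(repeats_per_second)  # 2 -> two frames each second are shown twice,
                           # i.e. held for ~33 ms instead of ~17 ms
```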

As far as I know, modern cards and drivers are all better than their predecessors for microstutter. It's a topic that has finally been explored, explained, and measured appropriately, and both vendors have taken steps to fix it as best they can.
 

havecold

Junior Member
Sep 26, 2013
7
0
0
I do use vsync, yes. Capping at 58 fps with Dxtory (on an AMD card) removes, as far as I can tell, the input lag normally associated with vsync. Try it.

My monitor is 60 Hz and I've never noticed these 2 frames per 60 being stretched out. Even if I did, I would think it far better than the alternative. Tearing is extremely noticeable, and it affects what I should be seeing but don't, because a screen refresh is occurring while I'm turning. With vsync on I can see it all very clearly.

@Ben90: I didn't create the topic for vsync, no. It was just something that popped into my head. I haven't yet determined whether vsync on without input lag refreshes the screen later than vsync on at 60 fps, or whether it makes any real-world difference in game (i.e. being seen before you see your opponent, at the same ping). Screen tearing used to really, really get on my nerves; I could count the refreshes. It sends people to the psych ward.

I believe your scenarios involve changing the graphical settings per PC, with the I/O polling rate otherwise the same?

@BrightCandle: Are you saying all cards and drivers produce microstutter, even with a game made for that specific card, that generation of cards, or future cards?
 
Last edited:

havecold

Junior Member
Sep 26, 2013
7
0
0
And this right here is what convinced me to leave this shithole forum forever.

He was high on apple juice? And you're drunk or under the influence of something? Not defending him (although I don't think there's a reason to). Just saying you could have left out the negative comment.
 

Murloc

Diamond Member
Jun 24, 2008
5,382
65
91
omg, what is that movie thing.
Pretty funny; you've got a wild imagination going.
I don't think vsync is useless though: I use it as an FPS limiter, because otherwise the menus heat up my GPU like crazy.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
And this right here is what convinced me to leave this shithole forum forever.

What is really wrong with this forum is people with crazy-ass ideas that are obviously garbage and based on nothing scientific or technical at all, spouting all sorts of rubbish (58 fps caps removing vsync latency!!!). It's full of garbage like this, which we are expected to respond sensibly to.

This is highly technical? It's not highly technical; it's just another attempt to push a clearly false (proven so many times) idea again. Some things are just not worth responding to: the ideas and the people holding them can't be saved, and it's not worth our effort to try.
 

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
What is really wrong with this forum is people with crazy-ass ideas that are obviously garbage and based on nothing scientific or technical at all, spouting all sorts of rubbish (58 fps caps removing vsync latency!!!). It's full of garbage like this, which we are expected to respond sensibly to.

This is highly technical? It's not highly technical; it's just another attempt to push a clearly false (proven so many times) idea again. Some things are just not worth responding to: the ideas and the people holding them can't be saved, and it's not worth our effort to try.

Agreed. Offtopic in HT forum, if you ask me.
 

havecold

Junior Member
Sep 26, 2013
7
0
0
What is really wrong with this forum is people with crazy-ass ideas that are obviously garbage and based on nothing scientific or technical at all, spouting all sorts of rubbish (58 fps caps removing vsync latency!!!). It's full of garbage like this, which we are expected to respond sensibly to.

Maybe I said it wrong? It removes the mouse-lag input latency associated with it? It was just a response to Ben90's statement, which he redacted. That part is off topic.

The difference between 58 fps capped via Dxtory and 60 fps is too noticeable not to acknowledge that it removes the mouse lag, if not fully then almost fully. I guess you were talking about the refreshing of the frames and not mouse lag. I have yet to read up on how vsync capped to 58 fps via an external program still gives you the same frame-refresh latency as 60 fps.

I was more interested in my original post, which I hoped would be considered highly technical. Sorry to cause mass panic.

Based on BrightCandle's responses, I'm guessing the only differences would be load times, microstutter, and very slight (if not unnoticeable) changes in graphics. So real-world performance would be almost completely, if not completely, unaffected. Somebody correct me if I'm missing something: 58 fps with zero dips on the fastest machine is the same as 58 fps with zero dips on the slowest machine with the same peripherals, minus load times and a tad more microstutter, with no real-world difference?
 
Last edited:

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
Maybe I said it wrong? It removes the mouse-lag input latency associated with it? It was just a response to Ben90's statement, which he redacted. That part is off topic.

The difference between 58 fps capped via Dxtory and 60 fps is too noticeable not to acknowledge that it removes the mouse lag, if not fully then almost fully. I guess you were talking about the refreshing of the frames and not mouse lag. I have yet to read up on how vsync capped to 58 fps via an external program still gives you the same frame-refresh latency as 60 fps.

I was more interested in my original post, which I hoped would be considered highly technical. Sorry to cause mass panic.

Based on BrightCandle's responses, I'm guessing the only differences would be load times, microstutter, and very slight (if not unnoticeable) changes in graphics. So real-world performance would be almost completely, if not completely, unaffected. Somebody correct me if I'm missing something: 58 fps with zero dips on the fastest machine is the same as 58 fps with zero dips on the slowest machine with the same peripherals, minus load times and a tad more microstutter, with no real-world difference?

It strikes me that your use of the words "slowest" and "fastest" is completely without context.

You have just said that they're running at exactly the same speed. Are you presuming that one of them somehow processes inputs at a rate faster than the framerate, by some fraction that you can reasonably observe?

How is one machine faster than the other when they are getting the exact same framerate? Or are you assuming vsync capped at 58 fps?

Your mouse polling and IRQ polling and bus speeds and so much other crap go into determining *how quickly* a system responds to a given input, but I'm almost certain that the responsiveness of any system that can maintain nearly 60 fps without dips will not vary perceptibly.
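
To put rough numbers on just the polling part of that chain (a sketch with common USB mouse polling rates; the worst case is waiting one full polling interval):

```python
# Worst-case delay before the OS even sees a click, per polling rate.
for polling_hz in (125, 500, 1000):
    worst_case_ms = 1000.0 / polling_hz
    print(f"{polling_hz:>4} Hz -> up to {worst_case_ms:.0f} ms")
# 125 Hz -> 8 ms, 500 Hz -> 2 ms, 1000 Hz -> 1 ms: real, but small
# next to a ~17 ms refresh interval.
```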

But I'm still not even sure what you're asking, honestly.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
The succinct version of this question, IMO: are there any other measures of performance besides frames per second such that, given two machines both able to achieve vsync 60, the faster machine still has benefits, and what are those benefits?
 

Murloc

Diamond Member
Jun 24, 2008
5,382
65
91
The succinct version of this question, IMO: are there any other measures of performance besides frames per second such that, given two machines both able to achieve vsync 60, the faster machine still has benefits, and what are those benefits?
If both have the same features (shaders, DirectX version, etc.) and work as intended (no stuttering, no obvious problems), and we ignore load times, then the graphics will look the same, and between two modern machines I think any other performance difference during gaming will be negligible.

There's a reason fps and fps consistency are what reviews usually measure when testing gaming performance: I think it's because the other differences are impossible to measure and/or to perceive.

Price, power consumption, and fan noise come into the picture at this point.

Also, there are games that will push any reasonably priced computer to the limit, so I think it's a pointless discussion: there are always better graphics if you have more money, whether that's higher resolution (past full HD, think 4K, where buying the monitor is an investment on its own) or higher AA and anisotropic filtering (which at high resolutions will affect performance greatly).
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
The succinct version of this question, IMO: are there any other measures of performance besides frames per second such that, given two machines both able to achieve vsync 60, the faster machine still has benefits, and what are those benefits?
*returns very cautiously to this thread*
While I would no doubt choose the faster computer when presented with the option, in a theoretical situation where neither computer drops a frame and this application is the only thing being run, the slower computer should actually have a latency advantage.

Let's take two computers: one that is infinitely fast, and one that finishes its frame right on the buffer flip. Double-buffered vsync is on. The main loop takes 25% of the frame time, physics 25%, and rendering 50%.

Fast computer:
The frame is done at t = 0 ms, input polled at 0 ms. We now just wait the next 17 ms for the buffers to swap so we can start again. The mouse is clicked at t = 2 ms, but that click has to wait until the next frame to get rendered (and then wait again for the buffers to flip for that new frame). The frame with the mouse click included doesn't get sent to the monitor until t = 34 ms.

Slow computer:
Since the main loop is slower and takes until t = 4 ms to complete, we catch the mouse click at 2 ms. The physics and renderer use this input and just barely manage to get the frame out before the buffer flips. The frame with the mouse click included gets sent to the monitor at t = 17 ms.
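
If it helps, here is a toy model of those two timelines (Python; a sketch under the assumptions above: double-buffered vsync with the refresh rounded to 17 ms, input sampled while the main loop runs, and all numbers illustrative):

```python
REFRESH_MS = 17  # rounded 60 Hz refresh interval, as in the post

def click_to_screen(main_loop_ms, click_at_ms):
    """Return when a click made at click_at_ms first appears on screen."""
    t = 0  # each frame's work starts on a buffer flip: t = 0, 17, 34, ...
    while True:
        # the click is seen if it happens before this frame's input
        # sampling completes at t + main_loop_ms
        if click_at_ms < t + main_loop_ms:
            return t + REFRESH_MS  # that frame is shown at the next flip
        t += REFRESH_MS            # click missed; try the next frame

print(click_to_screen(main_loop_ms=0, click_at_ms=2))  # fast machine: 34
print(click_to_screen(main_loop_ms=4, click_at_ms=2))  # slow machine: 17
```

The point is just that the slower main loop is still gathering input when the click arrives, so the click makes it into the frame already being built instead of waiting a whole extra frame.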
 

havecold

Junior Member
Sep 26, 2013
7
0
0
Ben90, you just lost me. Why does the frame with the mouse click have to wait for the buffer to flip on the fast machine but not on the slow machine? I'm not understanding the first parts of your statements: 0 ms and 4 ms for the main loop (rendering the frame and sending it to the monitor?). Also, can't the faster machine take less than 17 ms waiting for the buffers to swap?
 
Last edited:
Jul 18, 2009
122
0
0
I suppose what I said earlier was a little bit rash. (I'm not going to apologize for it too much, though.)

Ben90 you just lost me. Why does the frame with the mouseclick have to wait for the buffer to flip on the fast machine and not on the slow machine? I'm not understanding the first parts of your statements. 0ms and 4ms for the main loop(rendering the frame and being sent to the monitor?). Also can't the faster machine take less time than 17ms to wait for the buffers to swap?

That's just how double-buffering works. The video card draws one frame ahead of time, but can't get started on the next frame until the previous one has been sent to the display. Triple-buffering solves this.
 
Jul 18, 2009
122
0
0
Basically, here are the steps from I/O poll to display output:

1) I/O poll event.
2) The game renders the scene and writes it to the framebuffer.
3) If vsync is disabled, the framebuffer is immediately sent to the display. If vsync is enabled, it cannot be sent to the display until the next refresh begins.
3a) With double-buffering, the render loop then has to pause until the end of the current refresh (so that there is a free buffer to draw into). With triple-buffering there is always another buffer available, so the next frame can be started immediately. (This is why triple-buffering is faster than double-buffering.)
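
A minimal sketch of steps 3 and 3a (Python; assumes a 60 Hz display for the 16.7 ms interval, and the 5 ms render time is made up):

```python
import math

REFRESH_MS = 16.7  # one refresh interval on a 60 Hz display

def display_time(render_done_ms, vsync=True):
    """Toy model: when a frame finished at render_done_ms reaches the screen."""
    if not vsync:
        return render_done_ms  # step 3, vsync off: sent immediately (may tear)
    # step 3, vsync on: held back until the next buffer flip
    return math.ceil(render_done_ms / REFRESH_MS) * REFRESH_MS

print(display_time(5.0, vsync=False))  # 5.0
print(display_time(5.0, vsync=True))   # 16.7
# Step 3a: with double-buffering the renderer now stalls until that flip
# frees a buffer; with triple-buffering a third buffer lets it start the
# next frame at t = 5.0 instead.
```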