96Firebird
You guys are making mountains out of molehills about the whole Nvidia/PCPer transparency...
:biggrin: :biggrin:
From Article #1, page 1, last paragraph:
:whistle: Originally Posted by pcper
NVIDIA's Involvement
You may notice that there is a lot of “my” and “our” in this story while also seeing similar results from other websites being released today. While we have done more than a year’s worth of the testing and development on our own tools to help expedite a lot of this time consuming testing, some of the code base and applications were developed with NVIDIA and thus were distributed to other editors recently.
NVIDIA was responsible for developing the color overlay that sits between the game and DirectX (in the same location of the pipeline as FRAPS essentially) as well as the software extractor that reads the captured video file to generate raw information about the lengths of those bars in an XLS file. Obviously, NVIDIA has a lot to gain from this particular testing methodology: its SLI technology looks much better than AMD’s CrossFire when viewed in this light, highlighting the advantages that SLI’s hardware frame metering bring to the table.
The next question from our readers should then be: are there questions about the programs used for this purpose? After having access to the source code and applications for more than 12 months I can only say that I have parsed through it all innumerable times and I have found nothing that NVIDIA has done that is disingenuous. Even better, we are going to be sharing all of our code from the Perl-based parsing scripts (that generate the data in the graphs you’ll see later from the source XLS file) as well as a couple of examples of the output XLS files.
Not only do we NEED to have these tools vetted by other editors, but we also depend on the community to keep us on our toes as well. When we originally talked with NVIDIA about this project the mindset from the beginning was merely to get the ball rolling and let the open source community and enthusiast gamers look at every aspect of the performance measurement. That is still the goal – with only one minor exception: NVIDIA doesn’t want the source code of the overlay to leak out simply because of some potential patent/liability concerns. Instead, we are hoping to have ANOTHER application built to act as the overlay; it may be something that Beepa and the FRAPS team can help us with.
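For readers curious what that parsing stage amounts to, here is a minimal sketch in Python of the same idea: take the per-frame bar lengths the extractor spits out and turn them into frame times. PCPer's real scripts are Perl and read XLS files; the CSV input, the "bar_height" column name, and the 60 Hz / 1080-line capture parameters below are purely illustrative assumptions.

```python
# Illustrative sketch only: the column name and capture parameters are
# assumptions, not taken from PCPer's actual Perl scripts or XLS output.
import csv

CAPTURE_HZ = 60           # assumed refresh rate of the captured output
LINES_PER_REFRESH = 1080  # assumed vertical resolution of the capture

def frame_times_ms(path):
    """Convert per-frame overlay bar heights (scanlines) into frame times."""
    times = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            lines = int(row["bar_height"])        # scanlines this frame occupied
            fraction = lines / LINES_PER_REFRESH  # share of one refresh interval
            times.append(fraction * 1000.0 / CAPTURE_HZ)
    return times

if __name__ == "__main__":
    ft = frame_times_ms("capture_extract.csv")
    print(f"{len(ft)} frames, average {sum(ft) / len(ft):.2f} ms")
```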
From Article #1, page 1, last paragraph:
:whistle:
Can anyone else spot the problems with this picture?
[image]
That is not how frames are drawn on the screen at all.
Article #1 was back on Jan 3; no word about NV involvement.
Article #2 was back on Jan 16; no word about NV involvement.
Article #3 was back on Feb 22; no word about NV involvement.
Also, I'm not saying there's anything phony about NV and PCPer. PCPer is just giving themselves more credit than they deserve.
It does sound as though they got the ball rolling, that they gave birth to the idea. That's the big part for me.
But hey, if you want to give all the credit to Nvidia, that's fine.
This is true. Using RadeonPro I can eliminate or vastly reduce CrossFire microstutter/frametime problems. Vsync, triple buffering, flip queue sizes, and FPS caps can all be used to work around the problem. Even if you prefer to use no vsync or FPS caps, it is possible to work around the problem using flip queue adjustments.
It does mean CF can be a case of trial and error to get working well, but it can be done. I would still recommend SLI if you were starting from scratch, but if you have an AMD 7xx0 series card and are prepared to tinker, going CF does still give a massive performance boost.
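To make the FPS-cap part of that concrete, here is a rough Python sketch of the idea behind a frame limiter: pad every iteration out to a fixed target interval so frames arrive at a steadier cadence instead of whenever the GPUs happen to finish. This is only an illustration of the concept; RadeonPro and the driver do this at a very different level.

```python
import time

TARGET_FPS = 60
TARGET_DT = 1.0 / TARGET_FPS  # target time budget per frame, in seconds

def capped_loop(render_one_frame, seconds=5.0):
    """Run render_one_frame repeatedly, padding short frames out to the cap."""
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        start = time.perf_counter()
        render_one_frame()                    # simulate + draw one frame
        elapsed = time.perf_counter() - start
        if elapsed < TARGET_DT:
            time.sleep(TARGET_DT - elapsed)   # wait out the rest of the interval

if __name__ == "__main__":
    # Stand-in workload: even a no-op "frame" gets paced to roughly 60 FPS.
    capped_loop(lambda: None, seconds=1.0)
```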
I'm not so sure about this: 1) are the FPS measured by FRAPS accurate, then, if half the frames are runts, and 2) if you need to limit the frames from 100 down to 60... what is the point of the 2nd GPU?
I can answer this - a crap ton more AA and IQ tweaks.
1x7970:
WoW Ultra @ 1440p stuck using 8xMSAA.
60 FPS 90% of the time; dips into the 40-50 FPS range during raids/traveling
2x7970s:
WoW Ultra @ 1440p able to switch up to 4xSSAA+EQ + HQ-AFx16/HQ Bitmapping
60 FPS 90% of the time; dips into the 40-50 FPS range during raids/traveling
1x7970:
Bioshock: Infinite Ultra + DDoF @ 1440p == 40-50FPS (>90% GPU utilization)
2x7970:
Bioshock: Infinite Ultra + DDoF @ 1440p == 60FPS (with overhead, as the GPUs are pegged at about 60-70% usage, but not many IQ options to force, unfortunately.)
1x7970:
Tomb Raider Ultra+TressFX+FXAA @ 1440p == 40-50FPS
or
Tomb Raider Ultra+TressFX+2xSSAA @ 1440p == ~30FPS
2x7970:
Tomb Raider Ultra+TressFX+FXAA @ 1440p == 60FPS
or
Tomb Raider Ultra+TressFX+2xSSAA @ 1440p == >50FPS
For the record, I've always used v-sync since I can't stand tearing. So far only WoW stutters like a bastard, and that seems to happen only on the world map; in dungeons/raids it runs unbelievably smoothly.
I've only had my CFX for about 4 days, and after getting over the RadeonPro learning curve, I've decided to keep the second card. I'm applying more AA and IQ tweaks to my games without performance issues or stuttering (again, WoW being the only exception), which to me is a huge perk.
Of course everyone is different, which is why PC gaming is still the best - options!
I'm not so sure about this: 1) are the FPS measured by FRAPS accurate, then, if half the frames are runts, and 2) if you need to limit the frames from 100 down to 60... what is the point of the 2nd GPU?
You realise that 100 FPS with 40 runts is better than 60 FPS with 0 runts. The reason is that all the simulation is still tied to the renderer, so you get the simulation run more often.
AMD have stated that they think this is "better" than pacing frames so they come out at an even interval, but in future drivers (June/July) they are going to provide a slider in CCC to change the behaviour.
That is debatable: if the simulation step runs too soon after the previous one, almost nothing has happened in between. Though it may not hurt either.
Ultimately, it would be better to space the frames out somewhat, which also gives more evenly spaced input sampling points.
You realise that 100 FPS with 40 runts is better than 60 FPS with 0 runts. The reason is that all the simulation is still tied to the renderer, so you get the simulation run more often.
AMD have stated that they think this is "better" than pacing frames so they come out at an even interval, but in future drivers (June/July) they are going to provide a slider in CCC to change the behaviour.
It does sound as though they got the ball rolling, that they gave birth to the idea. That's the big part for me.
But hey, if you want to give all the credit to Nvidia, that's fine.
I guess if that is the only option -- but what if one can receive 90-95 FPS with zero runts?
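As a rough back-of-the-envelope illustration of this exchange (not AMD's or Nvidia's actual accounting), the sketch below compares the raw frame rate with the rate you get if runts are discarded. The 50%-of-average runt cutoff is an assumption chosen just for this example.

```python
def fps_summary(frame_times_ms, runt_factor=0.5):
    """Return (raw FPS, FPS excluding runts, runt count) for a frame-time list."""
    avg = sum(frame_times_ms) / len(frame_times_ms)
    runts = [t for t in frame_times_ms if t < runt_factor * avg]  # assumed cutoff
    total_s = sum(frame_times_ms) / 1000.0
    raw_fps = len(frame_times_ms) / total_s
    shown_fps = (len(frame_times_ms) - len(runts)) / total_s      # ignore runts
    return raw_fps, shown_fps, len(runts)

# Alternating full/runt frames report a high raw FPS, but the runt-excluded
# rate lands back near 60; evenly paced 60 FPS has nothing to throw away.
uneven = [15.0, 2.0] * 50   # 100 frames, half of them tiny
even = [16.7] * 60          # 60 evenly paced frames
print(fps_summary(uneven))  # roughly (118, 59, 50)
print(fps_summary(even))    # roughly (60, 60, 0)
```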
Hmm :sneaky:
Originally Posted by pcper
NVIDIA's Involvement
You may notice that there is a lot of “my” and “our” in this story while also seeing similar results from other websites being released today. While we have done more than a year’s worth of the testing and development on our own tools to help expedite a lot of this time consuming testing, some of the code base and applications were developed with NVIDIA and thus were distributed to other editors recently.
NVIDIA was responsible for developing the color overlay that sits between the game and DirectX (in the same location of the pipeline as FRAPS essentially) as well as the software extractor that reads the captured video file to generate raw information about the lengths of those bars in an XLS file. Obviously, NVIDIA has a lot to gain from this particular testing methodology: its SLI technology looks much better than AMD’s CrossFire when viewed in this light, highlighting the advantages that SLI’s hardware frame metering bring to the table.
The next question from our readers should then be: are there questions about the programs used for this purpose? After having access to the source code and applications for more than 12 months I can only say that I have parsed through it all innumerable times and I have found nothing that NVIDIA has done that is disingenuous. Even better, we are going to be sharing all of our code from the Perl-based parsing scripts (that generate the data in the graphs you’ll see later from the source XLS file) as well as a couple of examples of the output XLS files.
Not only do we NEED to have these tools vetted by other editors, but we also depend on the community to keep us on our toes as well. When we originally talked with NVIDIA about this project the mindset from the beginning was merely to get the ball rolling and let the open source community and enthusiast gamers look at every aspect of the performance measurement. That is still the goal – with only one minor exception: NVIDIA doesn’t want the source code of the overlay to leak out simply because of some potential patent/liability concerns. Instead, we are hoping to have ANOTHER application built to act as the overlay; it may be something that Beepa and the FRAPS team can help us with.
Were these in single or dual GPU configs? I can't seem to find any data like this for older cards such as my GeForce 480.

More like given a ball and told to go play with it in the street where everyone could see. Once the better sites have finished all their testing, it will be interesting to see how their results reflect on each other. Of note, the other sites with the tool have released some results, but reserved final judgements or conclusions, unlike PCPer, who was rushing to judgement months ago and was so eager to judge unreleased hardware that they created a fantasy review of it.
Credit has to go to Nvidia for what they have done here: finding a new metric that they do better at and pushing it out, trying to get it used. I've seen some of the FRAPS frametime test results done on GTX 4XX and 5XX cards, and the results were terrible. This big push may be the result of them having been aware of that problem with the Fermi cards, improving on it hugely with the 6XX Kepler cards, and then wanting recognition of that.
I'm waiting for the full reviews from TR, Tom's and, most of all, here at AT. The whole notion of how much each rendered frame is worth, of pushing frames out as fast as possible versus adding latency to even out the times at which frames are sent to your monitor, and what this means for the experience, seems to be where the subjective/objective (or lack thereof) views come into play.
We need to hear conclusions from more than just one site, a site echoing some of what may possibly be Nvidia mouthpiece work.
The thing is, the problem was discovered before Nvidia involvement.
Just using FCAT, they were able to record the runt frames and show stuttering as a result. http://www.pcper.com/reviews/Graphi...art-3-First-Results-New-GPU-Performance-Tools
That article was done with Nvidia involvement; that involvement appears to have been there right from the beginning, without disclosure.
http://www.tomshardware.com/reviews/graphics-card-benchmarking-frame-rate,3466-2.html
dropped frames and what Nvidia is introducing to us as runt frames.
Again incorrect. That colour overlay software was from Nvidia; also, the term 'runt frames' is Nvidia terminology, coming directly from them to describe a shorter frame.
http://www.tomshardware.com/reviews/graphics-card-benchmarking-frame-rate,3466-2.html
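For reference, here is a minimal sketch of how the capture-side classification works in principle: each rendered frame shows up as a coloured band of scanlines in the recorded video, and bands below a minimum height get counted as runts (zero-height frames are drops). The 21-scanline cutoff is the figure commonly cited for the FCAT tooling; treat it here as an assumption.

```python
RUNT_SCANLINES = 21  # commonly cited FCAT default; treated as an assumption here

def classify(bar_heights):
    """bar_heights: scanlines each rendered frame occupied on screen."""
    report = {"full": 0, "runt": 0, "dropped": 0}
    for h in bar_heights:
        if h == 0:
            report["dropped"] += 1   # frame never made it to the display
        elif h < RUNT_SCANLINES:
            report["runt"] += 1      # visible, but only a sliver of the screen
        else:
            report["full"] += 1
    return report

print(classify([540, 400, 12, 600, 0, 530]))  # {'full': 4, 'runt': 1, 'dropped': 1}
```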