AMD Framepacing Driver 13.8 review


BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Been stringing my i5 out on BF3 to test it out...

I was using Windows 8.1 when 13.8 came out, but I never really needed two cards. I tried it yesterday with 13.8b2 and it was still stuttering for me, so I reverted back to Windows 8 and installed these, and I can tell there is a huge difference in smoothness running my 7950s between 85 and 180 fps in BF3; it's quite smooth.

So even though the frame-pacing option shows up with the drag-and-drop install method on 8.1, the driver doesn't actually do any pacing there; just thought I'd share that. Also, thanks AMD, I finally feel OK with two 7950s :D
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I've noticed pretty much 99% load on my slave card in all the games I've tried the last few days.

I guess that is AMD's software solution for the microstutter. Keep one card going, possibly with a frame ready to go at all times? Haha.

For anyone playing FFXIV: on one card it's a stuttering mess for me at Maximum settings @ 1440p. In areas where the FPS meter hovers above 50, the game feels like it's in the 20-30s.

Turning on CFX using 1x1 Optimize, I get 60 FPS and the game feels night and day smoother.
 

Black Octagon

Golden Member
Dec 10, 2012
1,410
2
81
I've noticed pretty much 99% load on my slave card in all the games I've tried the last few days.

I guess that is AMD's software solution for the microstutter. Keep one card going, possibly with a frame ready to go at all times? Haha.

Either that or it's a bug to be smoothed out. Have you seen the release notes for 13.8 Beta 2?

Known Issues of The AMD Catalyst 13.8 Beta2 Driver for Windows:

CrossFire configurations (when used in combination with Overdrive) can result in the secondary GPU usage running at 99%

Does it still happen for you if you disable Overdrive?
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Either that or it's a bug to be smoothed out. Have you seen the release notes for 13.8 Beta 2?



Does it still happen for you if you disable Overdrive?

I just saw Balla mention 13.8b2 but I hadn't headed over to AMD.com to check it out.

I make sure to disable Overdrive since I was running into a weird issue: if I turned on Overdrive, my second card wouldn't be able to go into ZeroCore idle. It would idle higher than my primary card. I'd normally OC through Overdrive, but I've switched to keeping it disabled and just using MSI AB.

I will try these drivers out when I get home. Thanks for the heads up.
 

Black Octagon

Golden Member
Dec 10, 2012
1,410
2
81
^Well ok np, but my (real) point was that disabling Overdrive might even solve the GPU spike issue on 'Beta 1'...any chance you could try this for us before upgrading to Beta 2?
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I only get 99% usage on the slave card when something isn't working right.

This is my usage in BF3 on Caspian Border (CPU limited).

usage_zps270979f6.png
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
^Well ok np, but my (real) point was that disabling Overdrive might even solve the GPU spike issue on 'Beta 1'...any chance you could try this for us before upgrading to Beta 2?

Do you mean returning the card to stock? Because I mentioned I don't use Overdrive; I keep it disabled due to it introducing another issue back in 13.5/13.6.

If you are asking about stock, it also happens with cards at stock.
 

Mark Rejhon

Senior member
Dec 13, 2012
273
1
71
I guess that is AMD's software solution for the microstutter. Keep one card going, possibly with a frame ready to go at all times? Haha.
Actually, the way I understand frame pacing is that you want all the frametimes to be evenly spaced apart.


1. The Easy Coles Notes

In a micro-stuttery SLI/Crossfire GPU setup, you can have inconsistent frametimes or frame delivery times (e.g. both GPUs can render nearly identical frames from the same gameworld time, or different gameworld times aren't delivered to the monitor's display on a properly relative basis). Basically, it's a page-flipping timing problem: flipping a back buffer to the front buffer isn't occurring at a regularly spaced interval.

Think of frame pacing as GPU load balancing while keeping gametimes synchronized (on a relative time basis) with the delivery of frames to the display. Different GPUs can be rendering full game frames concurrently, in parallel, but with slight time offsets relative to each other. Metaphorically, frame pacing is like an orchestra conductor controlling the pace of each musician (GPU). As a hugely simplified example, at 60 frames per second on two GPUs:

Frame pacing is the art of making sure:
GPU #1 is rendering gametime 0.0/60 (to be displayed at T+0ms)
GPU #2 is rendering gametime 1.0/60 (to be displayed at T+16.7ms)
GPU #1 is rendering gametime 2.0/60 (to be displayed at T+33.3ms)
GPU #2 is rendering gametime 3.0/60 (to be displayed at T+50.0ms)
(good, good: Consistent frame rendertimes, consistent gametimes, consistent frame delivery)

Instead of this incorrectly microstuttery schedule:
GPU #1 is rendering gametime 0.0/60 (to be displayed at T+0ms)
GPU #2 is rendering gametime 0.1/60 (to be displayed at T+16.7ms)
GPU #1 is rendering gametime 2.0/60 (to be displayed at T+33.3ms)
GPU #2 is rendering gametime 2.3/60 (to be displayed at T+50.0ms)
(bad, bad: Incorrect gametimes being rendered, out of sync with visual presentation)

Or this incorrectly microstuttery schedule:
GPU #1 is rendering gametime 0.0/60 (to be displayed at T+0ms)
GPU #2 is rendering gametime 1.0/60 (to be displayed at T+2ms)
GPU #1 is rendering gametime 2.0/60 (to be displayed at T+35.3ms)
GPU #2 is rendering gametime 3.0/60 (to be displayed at T+37.5ms)
(bad, bad: Incorrect timing of GPU back buffer flipping to front buffer for monitor)

Obviously, the above is a simplified, dumbed-down explanation of why frame pacing is necessary in multi-GPU setups. nVidia has to do it too; they just did a better job of it than AMD in the past. AMD is now catching up to a rough equivalent of what nVidia is already doing (even though they do it in a different way). But the frame-pacing mathematics are identical, from a human-vision microstutter-detection perspective.
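Since I'm a programmer, here's a tiny C++ sketch (purely my own illustration, not any driver's actual code) that quantifies the difference between the schedules above: it measures the worst-case deviation of each frame-to-frame delivery interval from the ideal 16.7ms.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Each entry is a frame delivery time in milliseconds (when the flip reaches
// the display), matching the example schedules above.
static double pacingError(const std::vector<double>& deliveryMs, double idealMs) {
    double worst = 0.0;
    for (std::size_t i = 1; i < deliveryMs.size(); ++i) {
        double interval = deliveryMs[i] - deliveryMs[i - 1];
        worst = std::max(worst, std::fabs(interval - idealMs));
    }
    return worst;  // worst-case deviation from the ideal frame interval
}

int main() {
    const double ideal = 1000.0 / 60.0;                  // 16.7ms at 60fps
    std::vector<double> good = {0.0, 16.7, 33.3, 50.0};  // evenly paced flips
    std::vector<double> bad  = {0.0, 2.0, 35.3, 37.5};   // bunched flips
    std::printf("good schedule: %.1f ms worst deviation\n", pacingError(good, ideal));
    std::printf("bad schedule:  %.1f ms worst deviation\n", pacingError(bad, ideal));
    // The bad schedule delivers frames in pairs ~2ms apart followed by a
    // ~33ms gap: the classic AFR microstutter signature that pacing targets.
}
```

The same metric works on real data, too; frametime-capture tools effectively plot these per-frame delivery intervals so you can see the pacing (or lack of it) directly.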


2. The Complex Technical Stuff

Now the below goes more technical than the above. Frame pacing during SLI is the art of delivering frames to the human eye (display) in a very consistent, evenly paced manner. Some of it is the video game's responsibility, and some of it is the driver's responsibility. One of the many things a driver has to handle is that for SLI/Crossfire, it's often the driver's responsibility to make sure the Direct3D Present() call returns in a consistent amount of time. Behind the scenes, it needs to deliver the frame to the display in a manner that stays synchronized with gametime (even during VSYNC OFF). If it blocks only sometimes and returns quickly at other times because of SLI frame-pacing problems, it can throw timings out of whack; the game can't render accurate gametimes delivered at accurate times to the monitor, and can't render each frame at an evenly spaced interval. All this divergence creates visible microstutters.
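As a rough sketch of what that driver-side pacing could look like behind the scenes (my own simplification, definitely not AMD's actual algorithm): a pacing layer can hold back flips that finish too early, so presentation intervals stay near a running estimate of the true frame interval.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Toy pacing filter: frames that render "too early" relative to the expected
// frame interval are held back, so flips reach the display evenly spaced.
int main() {
    // Bunched AFR completion times (ms): pairs ~2ms apart, then a long gap.
    std::vector<double> renderDoneMs = {0.0, 2.0, 33.3, 35.5, 66.7, 68.9};
    const double estIntervalMs = 16.7;  // running estimate of the frame interval
    double lastFlipMs = -1e9;
    for (double done : renderDoneMs) {
        // Flip when rendering finishes or when the pacing slot opens,
        // whichever comes later.
        double flip = std::max(done, lastFlipMs + estIntervalMs);
        std::printf("rendered %.1f ms -> displayed %.1f ms\n", done, flip);
        lastFlipMs = flip;
    }
}
```

The output flips land at 0, 16.7, 33.4, 50.1, 66.8 and 83.5ms: evenly spaced, at the cost of a little added latency on the early frames. That latency-versus-smoothness trade-off is part of why pacing is hard to get right.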

A microstutter problem can affect VSYNC OFF, VSYNC ON, or both. During VSYNC OFF, inconsistent frame pacing gives you some small slices of image combined with larger slices, rather than the evenly spaced tearlines you get with consistent frame rendering/delivery. This feels like increased microstutter during VSYNC OFF: each scanline in a refresh represents a constant amount of time, so different-sized slices (more randomly timed tearlines) mean inconsistency on a time basis from frame to frame.

People who understand how VSYNC OFF and tearing work know that when running at 300fps @ 60Hz (~5x the refresh rate), there is an average of 5 frame slices per refresh, and an ideal configuration has all frame slices 1/5th the height of the screen, for consistency. (In real life, the tearlines ideally move about a bit randomly rather than sit annoyingly stationary, because the heights of the frame slices vary slightly and randomly, say between 1/4.985th and 1/5.312th of the screen height, as an example.)

Conceptually, think of the frames being delivered to the monitor as a reel of refreshes, with tearlines tantamount to film-reel splices, and the black gaps between frames as the vertical blanking interval between refreshes (approximately 5-10% of the height of a visible frame). Those who remember analog TVs with VHOLD adjustments (where you could see the horizontal black bar between refreshes when VHOLD was badly adjusted) will visualize this concept more easily than the average person. Occasionally a tearline can splice across the blanking interval, too, so some frames have 4 onscreen tearlines and others have 5, because one of the tearlines landed inside the blanking interval. What's important is that the splices (seen as tearlines during VSYNC OFF) are made at consistent intervals along the "film reel" of refreshes, including through the blanking intervals.

Now, with bad GPU pacing during VSYNC OFF, you can have some frame slices that are 1/3rd the height of the screen and others that are 1/20th. Inconsistent slice heights during VSYNC OFF create the feel of microstutter, due to the divergence of game time away from frame delivery times. Since scanlines (single rows of pixels) are delivered to the monitor at a constant speed (the graphics card's current dotclock), you really want each slice height to be approximately the same on a frame-to-frame basis, even for slices that overlap the blanking interval (e.g. the bottom 1/10th of the previous refresh plus the first 1/10th of the next refresh). Incorrect VSYNC OFF handling of blanking intervals can also be a theoretical cause of microstutters.

The metaphorical film reel of refreshes is fed into the display at constant speed, one row of pixels at a time, at the current horizontal scanrate. VSYNC OFF is simply the splicing-in of the next frame mid-refresh while the current refresh is still being delivered. All VSYNC OFF slices must be consistent, even across the "black bars" in the metaphorical film reel (the blanking interval between frames). If the splices are inconsistently spaced anywhere along this film reel, that creates the feel of microstutter, because of the divergence of gametimes away from delivery times (since the rows of pixels are delivered at a constant rate).
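To illustrate the slice arithmetic (again my own toy model, not driver code): since the display consumes rows of pixels at a constant rate, the height of each VSYNC OFF slice is just the time between flips multiplied by the scanout speed.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Minimal VSYNC OFF scanout model: the display consumes rows at a constant
// rate, and each new frame splices in at whatever row is being scanned out
// when the flip happens. Slice height = rows scanned between two flips.
int main() {
    const double refreshHz = 60.0;
    const double totalRows = 1125.0;  // visible + blanking rows, a 1080p-class timing
    const double rowsPerMs = refreshHz * totalRows / 1000.0;  // constant scanout speed

    // Flip times in ms for ~300fps with good pacing (slight natural jitter).
    std::vector<double> flipsMs = {3.3, 6.7, 10.0, 13.4, 16.6, 20.0};
    for (std::size_t i = 1; i < flipsMs.size(); ++i) {
        double sliceRows = (flipsMs[i] - flipsMs[i - 1]) * rowsPerMs;
        // Consistent slice heights mean evenly spaced tearlines; with bad
        // pacing the same math would yield wildly different slice heights.
        std::printf("slice %zu: ~%.0f rows tall\n", i, sliceRows);
    }
}
```

At 300fps @ 60Hz this works out to roughly 225-row slices, about 1/5th of the 1125-row "film reel" per slice, exactly the consistency described above.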

Also, I'm a computer programmer, so I can give you some insight into causes of microstutter in SLI setups from a programmer's perspective. It's bad for the Direct3D Present() API to fluctuate hugely in how long it takes to return. It can't randomly return instantaneously sometimes and take a long time at other times due to random pacing problems on SLI/Crossfire setups; this can create microstutter if the driver doesn't provide a bit of SLI/Crossfire pacing help. If the next gametime to render depends on the timing of Present(), and incorrect gametimes are being rendered because some Present() calls returned quickly and others returned very slowly, that creates microstutter. It might not be the culprit in any given case; it can be another part of the frame delivery chain. This is just one of the many things that can go wrong in the art of trying to keep gametimes synchronized with presentation to the monitor. Trying to make multiple GPUs look like one GPU capable of rendering sequential frames on a consistent time basis is a major software challenge, especially since some frames take longer to render than others, which complicates GPU load balancing.
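Here's the pattern I'm describing, as a minimal generic game loop (the function names are hypothetical stand-ins, not any real engine's API): the next gametime step is derived from how long the present call took, so jitter in its return time feeds directly into which gametimes get rendered.

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical stand-ins for an engine's internals, named for illustration only.
void simulate(double gameTimeSec) { (void)gameTimeSec; /* advance world to gameTimeSec */ }
void render() { /* issue draw calls for the simulated state */ }
void present() { /* e.g. a Direct3D swap-chain Present(); may block unevenly in SLI/CFX */ }

int main() {
    using clock = std::chrono::steady_clock;
    double gameTime = 0.0;
    auto prev = clock::now();
    for (int frame = 0; frame < 600; ++frame) {
        simulate(gameTime);
        render();
        present();  // with bad pacing: blocks ~0ms one frame, ~30ms the next
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - prev).count();
        prev = now;
        // The next gametime is derived from the measured frame interval, so
        // uneven present() returns produce uneven gametime steps, and thus
        // visible microstutter, even at a healthy average framerate.
        gameTime += dt;
    }
    std::printf("final gametime: %.2f s\n", gameTime);
}
```

If the driver paces present() consistently, dt stays stable, the gametimes stay evenly spaced, and the whole chain stays in sync with the display.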

In the old days (like 3dfx), you rendered different parts of the same frame on separate GPUs (Scan-Line Interleave). There are also split-frame rendering modes, often used by nVidia SLI for fill-rate-limited applications, and alternate-frame rendering modes. Today, things like complex shaders (which interact with different parts of the same frame) have made it much simpler to delegate entire frames to specific GPUs. In split-frame rendering, shaders trying to read the other GPU's framebuffer for computations can really slow things down, and some shader effects can show "seams" along the frame split. Split-frame can also have very uneven loads (e.g. low-detail sky at the top, complex detailed ground at the bottom), so it often isn't good for shader-heavy graphics with big detail differences across the image. That's why most modern shader-heavy games use alternate-frame rendering in multi-GPU setups. All these different modes of parallel-GPU operation have very different performance impacts and frame-pacing considerations, and dramatic differences (and potential fluctuations) in frame rendertimes that can interfere with motion fluidity. Alternate-frame rendering is much simpler: the next frame is started on the other GPU while the current frame is still only about halfway through rendering on the first.
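A toy sketch of that alternate-frame-rendering idea (purely illustrative; real drivers do this below the API): frames are dealt round-robin to the GPUs, and each GPU starts its frame before the other one finishes, which is exactly why the finished frames then need pacing on the way out.

```cpp
#include <cstdio>

int main() {
    const int gpuCount = 2;        // two-way AFR, as in the examples above
    const double renderMs = 25.0;  // assume each frame takes ~25ms on one GPU
    for (int frame = 0; frame < 6; ++frame) {
        int gpu = frame % gpuCount;  // round-robin: frame N goes to GPU N mod 2
        // Each GPU starts a new frame every gpuCount-th slot, so two 25ms
        // GPUs can sustain ~12.5ms/frame (~80fps) combined; frame pacing is
        // what makes those completions reach the display evenly spaced.
        double startMs = frame * (renderMs / gpuCount);
        std::printf("frame %d: GPU #%d, starts ~%.1f ms, done ~%.1f ms\n",
                    frame, gpu, startMs, startMs + renderMs);
    }
}
```

In practice frame rendertimes fluctuate, so one GPU "falls behind" and the tidy 12.5ms cadence above breaks down; that's the hole frame pacing fills.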

Now, both the game makers and the driver makers are responsible for proper frame pacing, but good graphics drivers can reduce the amount of work a game maker has to do for frame pacing. If the drivers are really good, very little further work ends up being required of the game programmer to make things work well with multi-GPU setups.

There are many places in the pipeline where microstutters can be created, and it gets horrendously complex in an SLI/Crossfire setup. You will go bald tomorrow trying to make a multibillion-transistor chip (otherwise known as a GPU) co-operate harmoniously, especially when you let people (game makers) do really unforeseen things with that piece of complex silicon.

While I do software development, I am not a low-level graphics driver programmer. However, I know the importance of consistent frame rendertimes, and consistent frame delivery times, for maximum motion fluidity, and I can appreciate how horrendously complex this becomes for multi-GPU setups. I have come to appreciate the challenges graphics vendors face in making motion more fluid.

Now, a picture is worth one thousand words. See AnandTech's old diagrams.

afrmode.png


splitmode.png

supertiling.png
As you can see, AnandTech's old diagrams are worth a thousand words. :)
The approximate concept is still the same today for Radeon and nVidia, but with different variations (e.g. internal bridges, mix-and-match cards, any card able to become master or slave, and the prevailing use of alternate-frame rendering modes). You can see that trying to accurately load-balance between graphics cards is horrendously, horrendously complex. Sometimes one card does more complex work (e.g. one edge of the screen has more graphics), and that can make frame pacing challenging, as one GPU "falls behind" the other, and you need some frame-pacing help to fix the microstutters.


3. References

Useful references about parallel GPUs (scan-line interleave, split-frame rendering, and alternate-frame rendering):
http://www.anandtech.com/show/1698/5
http://www.nvidia.com/object/quadro_sli_rendering.html
http://en.wikipedia.org/wiki/Scalable_Link_Interface
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Welps, it seems 13.8 has pretty much rendered CFX useless for WoW on my setup.

Before, I could use AFR mode and get 80% scaling. Now, regardless of which mode I use, even with Frame Pacing turned off, I get ~100% negative scaling in certain parts.

Same spot that I'd normally test:
1x card == ~50 FPS
2x card, regardless of CFX mode == ~27 FPS
Before 13.8, 2x card with AFR mode == ~80 FPS

I've tried disabling Frame Pacing, and just like in other games, even on 13.8b2 my second card hits 99% load while my main card barely breaks 30%. Also, before I updated to 13.8b2, launching WoW with Frame Pacing on would BSOD me: 3 out of 3 times, until I decided to try disabling Frame Pacing. :/

Kudos AMD, you guys sure do know how to make a man hate you.
 

Elfear

Diamond Member
May 30, 2004
7,165
824
126
Welps, it seems 13.8 has pretty much rendered CFX useless for WoW on my setup.

Before, I could use AFR mode and get 80% scaling. Now, regardless of which mode I use, even with Frame Pacing turned off, I get ~100% negative scaling in certain parts.

Same spot that I'd normally test:
1x card == ~50 FPS
2x card, regardless of CFX mode == ~27 FPS
Before 13.8, 2x card with AFR mode == ~80 FPS

I've tried disabling Frame Pacing, and just like in other games, even on 13.8b2 my second card hits 99% load while my main card barely breaks 30%. Also, before I updated to 13.8b2, launching WoW with Frame Pacing on would BSOD me: 3 out of 3 times, until I decided to try disabling Frame Pacing. :/

Kudos AMD, you guys sure do know how to make a man hate you.

That sucks. You should submit a trouble ticket to their driver team so they can get it sorted, since WoW has a pretty big following.
 

UaVaj

Golden Member
Nov 16, 2012
1,546
0
76
Kudos AMD, you guys sure do know how to make a man hate you.

Time to go Green. :rolleyes:

Sell one of those 7970s from "me", then give that 7970 to "her", then move the 680 from "her" to "me", then buy another 680 for "me".

Guarantee you will be a happy man.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
That sucks. You should submit a trouble ticket to their driver team so they can get it sorted, since WoW has a pretty big following.

Will be doing that, but testing 13.6b2 before I assume it's the driver and not just me "forgetting things."

Time to go Green. :rolleyes:

Of course, I'm going to smash this Hulk style. :p

To your edit:
Sell one of those 7970s from "me", then give that 7970 to "her", then move the 680 from "her" to "me", then buy another 680 for "me".

Guarantee you will be a happy man.

Please don't do this stupid stuff assuming I'm going to jump ship or whatever because I'm being vocal about an issue. If you don't like reading negativity about AMD/Radeon just ignore it. You do nothing by promoting the "fanboy" mantra. KTHXBAI.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
MSI sent me a custom GOP/UEFI for my 8+6 pin card today, so I loaded up the 6+6 and then the 6+8, and proceeded to enjoy Windows 8's fast boot. Nothing insane, just a nice additional benefit in a world of small incremental increases.

Then the fun began: MSI AB can't unlock voltage, and MSI AB causes the system to freeze once a game is loaded. Is it my CPU, my RAM, is my GPU overclock unstable? I dunno; it tells me nothing, no BSOD or anything to go on... Enter blind-man troubleshooting!

So then there was TriXX, but 4.6 doesn't have vram voltage control. Not a problem, right? Just use the 4.4 mod, Balla. But then there is that small problem of the 4.4 mod not being able to adjust VID!


Lulz, no joke.

So now I run the 4.4 mod and set my vram voltage, close it, then run 4.6 to set my clocks and voltages, then close that for fear it will crash me.


This is just my personal experience with AMD: there is a lot of tinkering, troubleshooting, quirky *fixes*, and general WTFUDDERY.

It's not terrible, and I'm more laughing than complaining, but if I said it's been all smooth sailing and a totally flawless experience, I'd be lying >.<


Edit: When I read my post back I had a strong sense of deja vu; seems like I've been down this road before!
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I can definitely confirm it is the driver.

Same spot, even cranked up 8xAA, CFX in AFR Mode - 67 FPS.

My GPU load was normal too; both cards showed a more even load curve (60% each).

Welps, now to report this to AMD. Since my WoW guild is starting back up, I'm stuck choosing between:
13.8 - single card
or
13.6 - CFX

With FFXIV around the corner, I'd really like to see what the performance boost in 13.8b2 was.

You'd figure disabling Frame Pacing would revert to the normal CFX mode found in previous drivers. Whatever they did, it seems the on/off switch doesn't work.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Ticket sent.

Hey Balla, you don't get the 99% load issue on your slave card?

Using MSI AB, Win8, Overdrive Off, my slave card is always pegged at 99% with 13.8.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
No, but since I updated the BIOS the second card idles at 500MHz. 99% usage has always been an issue with ULPS for me personally.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
No, but since I updated the BIOS the second card idles at 500MHz. 99% usage has always been an issue with ULPS for me personally.

Good ol' AMD, haha.

Gonna try 13.8Beta1 and see if disabling Frame Pacing helps. Hopefully it doesn't BSOD me like it did yesterday.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Anyone using something similar to:

Catalyst 13.8 Beta2
Win 8 64bit

Can you check your GPU load while running GPU-Z?

I found this rather... amazing. All I did was open GPU-Z so I could monitor my slave card.

On Desktop, idle only Chrome and GPU-Z is open:
Master Card:
rx.png

Slave:
6dx.png


The moment I turn off GPU-Z the card returns to idle.

Either something is wrong with my system, or the 13.8 betas weren't ready for prime time. I might stick to 13.6 Beta 2 just to be safe. It also reports my card as PCIe 3.0 x1; yes, I know that's a known issue, but it's still weird.

Not a big fan of my slave card being pegged at 99%, even in games/scenes that shouldn't push it too hard.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
wut_zps8533c118.png


Weird, now I'm idling properly at 300MHz...

The saga continues...


Edit: Have you tried a stand-alone program to disable ULPS?
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
wut_zps8533c118.png


Weird, now I'm idling properly at 300MHz...

The saga continues...


Edit: Have you tried a stand-alone program to disable ULPS?

I haven't tried disabling ULPS. My last adventure with disabling ULPS caused my slave card to not go into ZeroCore idle; it would idle at 500/300 or something like that, while my primary would idle at 300/150.

Here is the same GPU-Z/Chrome scenario but with Cata 13.6 Beta2:
Master:
bsv.png

Slave:
6xs.png


Unless it's something else I'm running (hardware-wise), I'm out of ideas. Too tired to troubleshoot any more tonight; I'll tinker tomorrow.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
It's a bug with CF and ULPS.

It's been hit or miss for me. I played all of Sleeping Dogs with it enabled and overclocked, as well as a few weeks of various games in borderless windowed mode; since then, however, it's been more trouble than it was worth to keep it on.

Sometimes it works, sometimes it doesn't *shrug*.


Are you overclocking at all?

I think I was doing it with 13.6; didn't you mention rolling back and it working properly for you?
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I dunno if our issues are related or not, but this seems to be my personal fix.


When I load up Windows I need to check GPU-Z; if the second card is idling at 500MHz, I need to disable CF and then re-enable it.

After I do that it returns to the proper 300MHz idle.

If I don't do that I get driver crashes, even system freezes.

This is what BF3 spits at me if it's idling at 500MHz and I open the game.

derp_zps17fa4668.png
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Hey Balla, so you have two MSI Twin Frozr 7950s, one with a 6+6 and another with an 8+6 PCIe connector? That's really weird.