I guess that is AMD's software solution for the microstutter. Keep one card going, possibly with a frame ready to go at all times? Haha.
Actually, the way I understand frame pacing is that you want all the frametimes to be evenly spaced apart.
1. The Easy Coles Notes
In a micro-stuttery SLI/Crossfire GPU setup, you can have inconsistent frametimes or frame delivery times (e.g. both GPUs can render nearly identical frames from the same gameworld time, or different gameworld times aren't delivered on a properly relative basis to the monitor's display). Basically, it's a page-flipping timing problem, e.g. flipping a back buffer to the front buffer isn't occurring at a regularly spaced interval.
Think of frame pacing as GPU load balancing, while keeping gametimes synchronized (on a relative time basis) to the delivery of frames to the display. Different GPUs can be rendering full game frames concurrently, in parallel, but with slight time offsets relative to each other. Metaphorically, frame pacing is like a music orchestra conductor controlling the pace of each musician (GPU). As a hugely simplified example, during 60 frames per second on two GPUs:
Frame pacing is the art of making sure:
GPU #1 is rendering gametime 0.0/60 (to be displayed at T+0ms)
GPU #2 is rendering gametime 1.0/60 (to be displayed at T+16.7ms)
GPU #1 is rendering gametime 2.0/60 (to be displayed at T+33.3ms)
GPU #2 is rendering gametime 3.0/60 (to be displayed at T+50.0ms)
(good, good: Consistent frame rendertimes, consistent gametimes, consistent frame delivery)
INSTEAD of incorrectly microstuttery:
GPU #1 is rendering gametime 0.0/60 (to be displayed at T+0ms)
GPU #2 is rendering gametime 0.1/60 (to be displayed at T+16.7ms)
GPU #1 is rendering gametime 2.0/60 (to be displayed at T+33.3ms)
GPU #2 is rendering gametime 2.3/60 (to be displayed at T+50.0ms)
(bad, bad: Incorrect gametimes being rendered, out of sync with visual presentation)
OR THIS incorrectly microstuttery:
GPU #1 is rendering gametime 0.0/60 (to be displayed at T+0ms)
GPU #2 is rendering gametime 1.0/60 (to be displayed at T+2ms)
GPU #1 is rendering gametime 2.0/60 (to be displayed at T+35.3ms)
GPU #2 is rendering gametime 3.0/60 (to be displayed at T+37.5ms)
(bad, bad: Incorrect timing of GPU back buffer flipping to front buffer for monitor)
Obviously, the above is a simplified, dumbed-down explanation of why frame pacing is necessary in multiple-GPU setups. nVidia has to do it too; they just did a better job of it than AMD in the past. AMD is just catching up to a rough equivalent of what nVidia is already doing (even though they do it in a different way). But the frame pacing mathematics are still identical, from a human-vision microstutter-detection perspective.
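If you like thinking in code, here's a tiny sketch of the "good" case above (plain standalone C++, no real graphics API; the 60fps / two-GPU numbers are just the simplified example from this section):

```cpp
// A toy model of correct alternate-frame-rendering (AFR) pacing: each frame
// gets an evenly spaced gametime and an evenly spaced target display time,
// and the frames simply alternate between two GPUs.
#include <cstdio>

int main() {
    const double frame_interval_ms = 1000.0 / 60.0;  // 60 fps -> frames ~16.7 ms apart
    const int gpu_count = 2;

    for (int frame = 0; frame < 4; ++frame) {
        int gpu = frame % gpu_count;                          // GPU #1, #2, #1, #2, ...
        double gametime_s = frame / 60.0;                     // gametime n/60, evenly spaced
        double display_time_ms = frame * frame_interval_ms;   // T+0, T+16.7, T+33.3, ...

        std::printf("GPU #%d renders gametime %d/60 (%.4f s), displayed at T+%.1f ms\n",
                    gpu + 1, frame, gametime_s, display_time_ms);
    }
    // Microstutter is what happens when either column drifts: wrong gametimes
    // (the first bad example above) or wrong display times (the second bad example).
    return 0;
}
```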
2. The Complex Technical Stuff
Now the below goes more technical than the above. Frame pacing during SLI is the art of delivering frames to the human eye (display) in a very consistent, evenly-paced manner. Some of it is the videogame's responsibility, and some of it is the driver's responsibility. One of many examples: for SLI/Crossfire, it's often the driver's responsibility to make sure Direct3D Present() returns in a consistent amount of time. Behind the scenes, it needs to deliver the frame to the display in a consistent manner that is synchronized with gametime (even during VSYNC OFF). If Present() blocks only sometimes and returns quickly at other times, because of SLI frame pacing problems, then it can throw timings out of whack: the game can't render accurate gametimes delivered at accurate times to the monitor, and can't render each frame at an evenly spaced interval. All this divergence creates visible microstutters.
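To picture the driver-side idea, here's a minimal sketch, assuming a completely made-up present_to_display() stand-in (no real driver or Direct3D code): frame pacing treated as deliberately holding back frames that finish too early, so delivery stays evenly spaced even when the GPUs finish at uneven moments:

```cpp
// A minimal sketch of driver-side frame pacing, assuming a made-up
// present_to_display() stand-in: frames that finish rendering too early
// are held back so that delivery to the display stays evenly spaced.
#include <chrono>
#include <thread>
#include <cstdio>

using Clock = std::chrono::steady_clock;

// Hypothetical stand-in for the real presentation path (what a driver does
// behind Direct3D Present()); here it only logs when the frame went out.
void present_to_display(int frame, long long offset_us) {
    std::printf("frame %d delivered at +%lld us\n", frame, offset_us);
}

int main() {
    const auto target_interval = std::chrono::microseconds(16667);  // ~60 Hz pacing target
    const auto start = Clock::now();
    auto next_deadline = start;

    for (int frame = 0; frame < 4; ++frame) {
        // Pretend the two GPUs finish their frames at uneven moments
        // (the raw source of microstutter in a badly paced AFR setup).
        std::this_thread::sleep_for(std::chrono::milliseconds(frame % 2 == 0 ? 2 : 14));

        // Frame pacing: never deliver earlier than the next evenly spaced deadline.
        next_deadline += target_interval;
        std::this_thread::sleep_until(next_deadline);

        auto offset = std::chrono::duration_cast<std::chrono::microseconds>(
            Clock::now() - start);
        present_to_display(frame, (long long)offset.count());
    }
    return 0;
}
```

Despite the uneven render times, the deliveries land roughly 16.7 ms apart. Real drivers do something far more sophisticated than a sleep, but the goal (evenly spaced delivery) is the same.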
A microstutter problem can still affect either VSYNC OFF or VSYNC ON, or both. During VSYNC OFF, you can have some small slices of image combined with larger slices of image (due to inconsistency in frame pacing), rather than very evenly-spaced-apart tearlines from more consistent frame rendering/delivery. This creates a feel of increased microstutters during VSYNC OFF, since each scanline in a frame is a constant amount of time, and having different-sized slices of image (more random timing of tearlines) during VSYNC OFF means inconsistency on a time basis from frame to frame. People who understand how VSYNC OFF and tearing work know that when running at 300fps @ 60Hz (~5x refresh rate), there is an average of 5 frame slices per refresh, and an ideal config has all frame slices 1/5th the height of the screen, for consistency. (In real life, the tearlines ideally move about a bit randomly rather than being annoyingly stationary, perhaps because the height of the frame slices varies slightly and randomly, say between 1/4.985th and 1/5.312th the height of the screen, as an example.)

Conceptually, you need to think of the frames being delivered to the monitor like a reel of refreshes, with tearlines being tantamount to film reel splices, and the black gaps between frames being the vertical blanking interval between refreshes (approximately 5-10% of the height of a visible frame). Those who remember analog TVs with VHOLD adjustments (and the horizontal black bar boundary between refreshes, visible when VHOLD is badly adjusted) will visualize this concept better than the average person. Occasionally tearlines can splice across the blanking interval, too, so sometimes frames have 4 onscreen tearlines and sometimes 5 onscreen tearlines, because some of the tearlines splice across the blanking interval. What's important is that splices (seen as tearlines during VSYNC OFF) are made at consistent intervals along the "film reel" of refreshes, including through the blanking intervals.

Now with bad GPU pacing during VSYNC OFF, you can have some frame slices that are 1/3rd the height of the screen, and other frame slices that are 1/20th the height of the screen. Inconsistent height of frame slices during VSYNC OFF leads to a feel of microstutters, due to the divergence of game time away from frame delivery times. Since scanlines (single rows of pixels) are delivered to the computer monitor at a constant speed (at the current graphics card dotclock), you really want each slice height to be approximately the same on a frame-to-frame basis, even for slices that overlap the blanking interval (e.g. bottom 1/10th of the previous refresh and first 1/10th of the next refresh). Incorrect VSYNC OFF handling of blanking intervals can also be a theoretical cause of microstutters. The metaphorical filmreel of refreshes is fed into the display at a constant speed -- one row of pixels at a time, at the current "horizontal scanrate". VSYNC OFF is simply the splicing-in of the next frame mid-refresh, while still delivering the current refresh. All VSYNC OFF slices must be consistent (even across the "black bars" in the metaphorical filmreel of refreshes -- the blanking interval between frames). If the splices are inconsistently spaced anywhere along this metaphorical filmreel, this creates the feel of microstutters due to the divergence of gametimes away from delivery times (since the delivery of rows of pixels is at a constant rate).
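The slice arithmetic above is easy to play with. This is just back-of-the-envelope math (the 1125-line total including blanking is an assumed illustrative number, not any particular display timing): consistent ~3.33ms frametimes at 60Hz give ~1/5-refresh slices, while jittery frametimes give wildly different slice heights:

```cpp
// Back-of-the-envelope VSYNC OFF slice math: scanout moves at a constant
// rate, so each frame's slice height is just (frametime / refresh period),
// and the tearline lands wherever the scanout happens to be at splice time.
#include <cstdio>
#include <vector>

int main() {
    const double refresh_ms = 1000.0 / 60.0;  // 60 Hz refresh period
    const int total_lines = 1125;             // assumed total lines incl. blanking (illustrative)

    auto show_slices = [&](const char* label, const std::vector<double>& frametimes_ms) {
        std::printf("%s\n", label);
        double t = 0.0;
        for (double ft : frametimes_ms) {
            // Fractional position within the current refresh = where the tearline lands.
            double frac = t / refresh_ms - (int)(t / refresh_ms);
            int line = (int)(frac * total_lines);
            double slice_fraction = ft / refresh_ms;  // slice height as fraction of a refresh
            std::printf("  splice at line %4d, slice covers %.2f of a refresh\n",
                        line, slice_fraction);
            t += ft;
        }
    };

    // ~300 fps at 60 Hz, evenly paced -> every slice is ~1/5 of a refresh.
    show_slices("Consistent pacing:", {3.33, 3.33, 3.33, 3.33, 3.33});
    // Badly paced -> some slices ~1/3 of the screen, others ~1/20 (microstutter feel).
    show_slices("Inconsistent pacing:", {5.6, 0.8, 5.6, 0.8, 3.9});
    return 0;
}
```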
Also, I'm a computer programmer, so I can give you some insight into some causes of microstutters in SLI setups, from a programmer's perspective. It's bad for the Direct3D Present() API to have huge fluctuations in how long it takes to return. It can't randomly return instantaneously sometimes and take a long time to return at other times, due to random pacing problems on SLI/Crossfire setups. This can create microstutters if the driver doesn't provide a little bit of SLI/Crossfire pacing help. If the next gametime rendered depends on the timing of Present(), and incorrect gametimes are being rendered because some Present() calls returned quickly and others returned very slowly, this can create microstutters. It might not be this; it can be another part of the frame delivery chain. This is just one of the many things that can "go wrong" with microstutters, in the art of trying to keep gametimes synchronized with presentation to the monitor. Trying to make multiple GPUs look like one GPU capable of rendering sequential frames on a consistent time basis is a major software challenge, especially since some frames take longer to render than others, creating challenges for GPU load balancing.
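Here's a tiny sketch of that programmer-side failure mode, assuming a made-up fake_present() whose blocking time fluctuates the way a badly paced SLI Present() might: the game loop steps gametime by the measured frame time, so erratic Present() timing turns directly into erratic gametime steps:

```cpp
// Sketch of how erratic Present() timing corrupts gametime stepping in a
// game loop that advances gametime by the measured frame time.
#include <chrono>
#include <thread>
#include <cstdio>
#include <random>

using Clock = std::chrono::steady_clock;

// Made-up stand-in for a Present() call whose blocking time fluctuates
// because of bad multi-GPU pacing: sometimes it returns almost instantly,
// sometimes it blocks for most of a frame.
void fake_present(std::mt19937& rng) {
    std::uniform_int_distribution<int> block_ms(0, 15);
    std::this_thread::sleep_for(std::chrono::milliseconds(block_ms(rng)));
}

int main() {
    std::mt19937 rng(42);
    double gametime = 0.0;
    auto last = Clock::now();

    for (int frame = 0; frame < 5; ++frame) {
        // ...render the frame for the current gametime here...
        fake_present(rng);

        // Gametime advances by the measured frame time. With erratic present
        // timing, the gametime steps become erratic too, so the motion being
        // rendered no longer matches evenly spaced display times.
        auto now = Clock::now();
        double dt = std::chrono::duration<double>(now - last).count();
        last = now;
        gametime += dt;
        std::printf("frame %d: stepped gametime by %.1f ms (total %.3f s)\n",
                    frame, dt * 1000.0, gametime);
    }
    return 0;
}
```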
In the old days (like 3dfx), you rendered different parts of the same frame on separate GPUs (Scan-Line Interleave). You also have split-frame rendering modes, often used by nVidia SLI for fill-rate-limited applications. You've also got alternate-frame rendering modes. Today, things like complex shaders (which interact with different parts of the same frame) have made it much easier/simpler to delegate entire frames to specific GPUs. For split-frame rendering, shaders trying to read the other GPU's framebuffer for computations can really slow things down, and some shader effects can have "seams" along the split frame. Split-frame can also have very uneven loads (e.g. low-detail sky at top, complex detailed ground at bottom). So split-frame rendering often isn't good for shader-heavy graphics with lots of detail differences throughout the image, and most modern shader-heavy games tend to use alternate-frame rendering in multi-GPU setups. All these crazy different modes of parallel-GPU operation can have very different performance impacts and frame-pacing considerations, and dramatic differences (and potential fluctuations) in frame rendertimes, which can interfere with motion fluidity. Alternate frame rendering is much simpler: the next frame is started on the other GPU while the current frame is still only about halfway finished rendering on the first GPU.
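To see why alternate-frame rendering helps even when a single frame takes longer than one display interval, here's a trivial timeline sketch (the 25ms per-frame render time is just an assumed number): frames are started at evenly spaced times on alternating GPUs, so each GPU is still busy with its previous frame when the other one starts the next:

```cpp
// A trivial alternate-frame-rendering timeline: frames start at evenly spaced
// times on alternating GPUs, and each render takes longer than one display
// interval, so the two GPUs are always overlapping their work.
#include <cstdio>

int main() {
    const double interval_ms = 1000.0 / 60.0;  // one new frame started every ~16.7 ms
    const double render_ms = 25.0;             // assumed per-frame GPU render time

    for (int frame = 0; frame < 6; ++frame) {
        int gpu = frame % 2;
        double start = frame * interval_ms;
        double finish = start + render_ms;
        std::printf("frame %d on GPU #%d: starts %.1f ms, finishes %.1f ms\n",
                    frame, gpu + 1, start, finish);
    }
    // Frame n+1 starts on the other GPU while frame n is still rendering, yet a
    // finished frame still becomes available roughly every 16.7 ms.
    return 0;
}
```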
Now, both the game makers and the driver makers are responsible for proper frame pacing, but good graphics drivers can reduce the amount of work that a game maker has to do for frame pacing. If the drivers are really good, very little further work ends up being required of the game programmer to make things work great with multiple-GPU setups.
There are many places in the pipeline where microstutters can be created, and it's quite horrendously complex in an SLI/Crossfire setup. You will go bald tomorrow trying to make a multibillion-transistor chip (otherwise known as a GPU) co-operate harmoniously, especially when you let people (game makers) do really unforeseen things with that piece of complex silicon.
While I do software development, I am not a low-level graphics driver programmer. However, I know the importance of consistent frame rendertimes and consistent frame delivery times for maximum motion fluidity. And I can appreciate how horrendously complex this can become for multi-GPU setups. I have come to appreciate the challenges that graphics vendors face in making motion more fluid.
Now, a picture is worth a thousand words. See AnandTech's old diagrams (first link in the references below).
As you can see, AnandTech's old diagrams are worth a thousand words.

The approximate concept is still the same today for Radeon and nVidia, but with different variations (e.g. internal bridges, mix-and-match cards, any card able to become master or slave, prevailing use of alternate-frame rendering modes). You can see that the concept of trying to accurately load-balance between graphics cards is horrendously, horrendously, horrendously complex. Sometimes one card does more complex stuff (e.g. one edge of the screen has more graphics), and that can make frame pacing challenging, as one GPU "falls behind" the other, and you need some frame pacing help to fix the microstutters.
3. References
Useful references about parallel GPUs (scan-line interleave, split-frame rendering, and alternate-frame rendering):
http://www.anandtech.com/show/1698/5
http://www.nvidia.com/object/quadro_sli_rendering.html
http://en.wikipedia.org/wiki/Scalable_Link_Interface