I have an ATI Radeon 9550 128MB (128-bit memory) AGP that I'm testing with WMV-HD clips. The R9550 is a die shrink of the R9600 GPU with a lower core clock (250 MHz vs. 325 MHz), but otherwise identical. I have the latest Catalyst 6.2 driver + Catalyst Control Center with WMV acceleration enabled in Catalyst Video Properties, using WMP10 on XP Home SP2 with all available updates.
ATI SmartGART and DXDIAG report everything is kosher with my graphics subsystem. AGP mode is running at the maximum supported by my motherboard (AGP 4x). Following ATI's instructions for enabling DXVA WMV acceleration, I installed the DXVA patch for WMP10, then selected 'Use High Quality Mode' in WMP10 video acceleration properties.
Playing a 720p clip with the R9550, CPU utilization varies between 80% and 90% whether high quality or overlay mode is selected. I even deselected WMV acceleration in Catalyst Control Center, and it makes no difference at all. There are no dropped frames and the video looks correct, but this is exactly the same performance I get using a Radeon 8500 128MB AGP!
1080p clips drop frames like mad and play back choppily, but no more so than with the Radeon 8500. CPU utilization for both the R9550 and R8500 when playing 1080p clips runs between 95% and 100%.
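For anyone wanting to reproduce my numbers: the percentages above are eyeballed from Task Manager, but a short script can log an average over the playback window instead. This is just a sketch, assuming Python with the third-party psutil package installed; the function name `sample_cpu` and the durations are my own choices, not anything from ATI or WMP.

```python
# Sketch: log average total CPU utilization while a clip plays.
# Assumes the third-party psutil package is installed (pip install psutil).
import psutil


def sample_cpu(duration_s: float = 10.0, interval_s: float = 1.0) -> float:
    """Sample system-wide CPU utilization for roughly duration_s seconds
    and return the average percentage (0.0 - 100.0)."""
    psutil.cpu_percent(interval=None)  # prime the counter; first call is meaningless
    samples = []
    for _ in range(max(1, int(duration_s / interval_s))):
        # Each call blocks for interval_s and returns utilization over that window.
        samples.append(psutil.cpu_percent(interval=interval_s))
    return sum(samples) / len(samples)


if __name__ == "__main__":
    # Start the clip in WMP, then run this for a 10-second sample.
    avg = sample_cpu(duration_s=10.0)
    print(f"Average CPU utilization: {avg:.1f}%")
```

Running it once with overlay mode and once with 'Use High Quality Mode' selected would show whether DXVA is actually lowering the load or not.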
In short, I am getting absolutely no better results in terms of decoding performance or CPU utilization with the R9550 than with the R8500.
I realize the R9550 isn't an X1800 or anything, but shouldn't this GPU be doing more hardware DXVA and WMV acceleration than an R8500?
TIA