Well, on one hand, the algorithms for SETI could actually be a very good fit for a GPU. The main algorithm is the fast Fourier transform (FFT), the same algorithm used by the various large-prime-finding projects, and within each stage of an FFT every butterfly is independent of the others, which is exactly the kind of parallelism a GPU eats up.
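To make that concrete, here is a minimal sketch in plain C of one radix-2, decimation-in-time FFT stage (assuming bit-reversed input and a precomputed twiddle table; the function name and layout are mine for illustration, not any project's actual code). Every butterfly in a stage reads and writes its own pair of elements, so a GPU can run them all at once:

    #include <complex.h>
    #include <stddef.h>

    /* One radix-2 FFT stage over n complex samples. `half` is the
     * butterfly span for this stage (1, 2, 4, ..., n/2); `twiddle`
     * holds W_n^j = exp(-2*pi*i*j/n) for j = 0 .. n/2 - 1. */
    static void fft_stage(float complex *x, size_t n, size_t half,
                          const float complex *twiddle)
    {
        for (size_t block = 0; block < n; block += 2 * half) {
            for (size_t k = 0; k < half; k++) {
                float complex a = x[block + k];
                float complex b = twiddle[k * (n / (2 * half))]
                                  * x[block + k + half];
                x[block + k]        = a + b;  /* each butterfly touches a   */
                x[block + k + half] = a - b;  /* disjoint pair of elements, */
            }                                 /* so all can run in parallel */
        }
    }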
The problem is that graphics cards use single-precision math, while most of these projects require double precision. You probably don't care if a pixel in a game is rendered a thousandth of a pixel off; but if the math in SETI is off by the same amount on every operation, those tiny errors compound over the millions of operations in a long FFT and cause big differences in the results.
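A toy demonstration of how quickly that gap shows up (ordinary C, nothing GPU-specific): accumulate the same slowly growing sum in single and in double precision and compare.

    #include <stdio.h>

    /* Not SETI's actual math, just an illustration that tiny
     * per-operation rounding errors add up over long computations. */
    int main(void)
    {
        float  sf = 0.0f;
        double sd = 0.0;
        for (int i = 1; i <= 10000000; i++) {
            sf += 1.0f / (float)i;   /* rounds on every addition */
            sd += 1.0  / (double)i;
        }
        printf("single: %.7f\ndouble: %.7f\n", sf, sd);
        return 0;
    }

On typical hardware the single-precision sum stalls somewhere around 15.4 (once the terms drop below its rounding threshold they contribute nothing), while the double-precision sum keeps climbing to about 16.7. That's a disagreement in the second significant digit after only ten million additions.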
There are probably ways around this (I know there are for integer math; for floating point, the usual trick is to emulate double precision with pairs of single-precision floats, as sketched below), but the extra operations might be so slow that the card would go from 40x faster than the CPU to something like 40x slower. (At best, I'd expect performance no better than the CPU itself.) We may have to wait a generation or two (or maybe ten) before GPUs can do SETI effectively.
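For the curious, the best-known floating-point workaround is "double-single" (sometimes called float-float) arithmetic: carry each value as an unevaluated sum of two floats and use error-free transformations to recover the bits a plain add would throw away. A minimal sketch follows (the type and function names are mine for illustration); notice that one logical addition costs several hardware additions, which is exactly where the slowdown comes from.

    #include <stdio.h>

    typedef struct { float hi, lo; } ff;  /* value = hi + lo */

    /* Knuth's two-sum: returns a + b exactly as a (sum, error) pair.
     * Needs strict IEEE semantics; flags like -ffast-math break it. */
    static ff two_sum(float a, float b)
    {
        float s  = a + b;
        float bb = s - a;
        float e  = (a - (s - bb)) + (b - bb);
        return (ff){ s, e };
    }

    /* Add two double-single values: ~8 real adds per logical add. */
    static ff ff_add(ff x, ff y)
    {
        ff s     = two_sum(x.hi, y.hi);
        float lo = s.lo + x.lo + y.lo;
        return two_sum(s.hi, lo);  /* renormalize into (hi, lo) */
    }

    int main(void)
    {
        ff acc = { 0.0f, 0.0f };
        for (int i = 1; i <= 10000000; i++)
            acc = ff_add(acc, (ff){ 1.0f / (float)i, 0.0f });
        printf("float-float: %.7f\n", (double)acc.hi + (double)acc.lo);
        return 0;
    }

Run against the earlier toy sum, this recovers most of the double-precision answer, but each addition is now a whole function's worth of work (and multiplication is worse still without an exact fused multiply-add), so the "no better than the CPU" guess isn't far-fetched.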