The quality of the video will probably be about the same with any of those cards. The GPU on the card has nothing to do with video capture.
The "avi" format isn't technically a single format. There's raw AVI, which is just the raw data for every frame, with no compression, but AVI is also a "wrapper" which can contain many other formats. Most Divx and MPEG4 files you see will have an AVI wrapper, even though it uses compression, not raw AVI. Your TV card outputs a raw data stream to the host CPU, which then performs any compression, and that's only dependent on the software you have. The WinTV-Go should allow at least MPEG2 format compression even if it's saved as an AVI. Sometimes it's not named MPEG2, the software may only give you "quality" settings. Lower quality means higher compression. You can also download free software, or purchase software, which allows you to capture in many formats and with various other editing features (cropping during the capture, editing after the capture, et cetera).
When you capture video from, say, a coax input, the TV tuner on the card converts the signal to one the capture chip on the card can work with. If you use RCA inputs or S-Video, the signal goes directly to the chip, since no "tuning" is needed. The chip converts it to digital data and sends it to the processor. The CPU then performs whatever compression you've set in software (Divx, MPEG2, MPEG4, anything you want, depending on the software). After it's compressed, or immediately if you're saving to raw AVI format without compression, the CPU sends the data to the hard drive or whatever other storage you're using. The data sits in system memory while it "streams" and the CPU works with it from there; it never holds all the data at once, and the data for each frame is discarded once the CPU is done with it.
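To make that flow concrete, here's a toy version of the loop. Everything card-related is a stub (no real capture API is being called), and zlib stands in for a real codec like Divx or MPEG2:

```python
import zlib

# Toy version of the capture flow. read_frame_from_card() is a stub for
# the capture chip handing digitized frames to the CPU.

FRAME_BYTES = 640 * 480 * 2              # one YUV 4:2:2 frame at 640x480

def read_frame_from_card():
    return bytes(FRAME_BYTES)            # fake, all-zero frame data

def capture(n_frames, compress=True, out_path="capture.bin"):
    with open(out_path, "wb") as disk:
        for _ in range(n_frames):
            frame = read_frame_from_card()
            # The CPU applies whatever compression the software is set
            # to; with raw AVI this step is skipped entirely.
            data = zlib.compress(frame) if compress else frame
            disk.write(data)
            # Only the current frame lives in memory; it's thrown away
            # as soon as it's written, before the next one arrives.

capture(30)                              # one second's worth at 30fps
```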
If you had a hardware MPEG2 encoder, the CPU would offload the compression to the encoder, which lets a slower CPU handle high compression ratios. But MPEG2 is somewhat old-fashioned now. Divx and MPEG4 offer significantly better compression (smaller file sizes) at the same quality, and most systems with moderately fast CPUs can handle video capture just fine, even at higher resolutions and compression rates. Current video cards have onboard DEcompression routines to offload varying amounts of CPU work when watching videos or DVDs. However, that only applies to MPEG2 (ATi is adding Divx decompression in their next cards), and it does no work when you're ENcoding a video.
Whether the system you have will be able to do what you want depends on three things: the hard drive speed, the CPU speed, and the resolution of the video you want to capture. The type and rate of compression you can choose depend on those three things.
If you capture at a large resolution, like 640x480, that's a lot of data per frame; raw AVI video takes something like 35GB per hour. Your choices are to capture the best quality you can with little or no compression, or to reduce the amount of data to be stored by using compression.
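If you want to see where numbers like that come from, it's simple arithmetic: pixels per frame, times bytes per pixel, times frames per second. Assuming 2 bytes per pixel (YUV 4:2:2, typical for capture cards), it works out like this:

```python
# Back-of-envelope raw capture data rate, assuming 2 bytes per pixel
# (YUV 4:2:2). Uncompressed RGB would be 3 bytes per pixel instead.
width, height, bytes_per_pixel = 640, 480, 2

for fps in (15, 30):
    rate = width * height * bytes_per_pixel * fps    # bytes per second
    print(f"{fps}fps: {rate / 1e6:.1f} MB/s, {rate * 3600 / 1e9:.0f} GB/hour")

# Prints roughly:
#   15fps: 9.2 MB/s, 33 GB/hour
#   30fps: 18.4 MB/s, 66 GB/hour
```

So the 35GB/hour ballpark lines up with 15fps at that resolution; at 30fps, or with 3-byte RGB pixels, it's considerably more.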
If you try for best quality, with no compression, the hard drive has to be pretty fast: a single fast drive can possibly just barely do it without frame loss, though probably not at 30fps; a striped IDE RAID set with two fast drives can do it with no loss of frames at 30fps. Anytime the drive can't store the data and be ready for the next bits fast enough, the processor simply gets rid of the data so it can deal with the next frame, meaning you lose frames.
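As a crude model of that behavior (the drive speeds below are made-up illustrations, and real systems buffer writes, so don't take the exact percentages seriously):

```python
# Crude model: if the drive can't keep up with the raw data rate,
# frames that arrive while it's still busy get dropped. The drive
# speeds below are made-up illustrations.
data_rate = 640 * 480 * 2 * 30 / 1e6     # ~18.4 MB/s of raw 30fps video

for drive_mb_s in (12, 18, 36):          # slow drive, fast drive, 2-drive stripe
    kept = min(1.0, drive_mb_s / data_rate)
    print(f"{drive_mb_s} MB/s drive: keeps {kept:.0%} of frames "
          f"({kept * 30:.0f} of 30 fps)")
```

The point is just that a drive only slightly too slow still costs you frames, while a striped pair has headroom to spare.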
If you opt for compression, you get a slight loss of quality, but since you're capturing from (I assume) a VHS camera, and since it's not vital for these projects that the video be great, that's not a big deal. Even a decent amount of compression with Divx gives you video that's almost as good as a DVD (assuming a good source video). Compression requires a fast CPU, though, so it can finish each frame and send it to the hard drive (which no longer needs to be quite as fast, since the compressed frame has less data to store). If the CPU isn't done with the current frame, the capture card simply discards the incoming frame and tries again with the next one, over and over until the CPU is ready. A very slow CPU could lose dozens of frames for every frame it does capture. I found that with moderate compression, my 1GHz Athlon managed to capture with perhaps a 2% loss of frames. That's essentially unnoticeable, and it was at a medium resolution at 30fps. 15fps would have been a cakewalk, and is usually sufficient for school-project-type things.