FP data is converted to an internal 80-bit format, but the FP data you feed in and get back is still 64-bit. You'd lose some of that extra precision when results are rounded back down, but the question is whether that changes the end results to any large degree. Hmm... I guess it would depend on whether the value stays in a register or gets written out to memory.
For many purposes the loss of precision in going from 80-bit to 64-bit is unimportant. There are, however, a few applications that need as much precision as possible. These usually involve number crunching where there is some sort of feedback structure (part of the output feeds back into the input), allowing small rounding errors to gradually build up. I should point out that not all algorithms of this type need super high precision to remain stable; it's only a small minority.
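As a minimal sketch of what that build-up looks like, here's a toy C recurrence (my own example, not from this thread) whose exact fixed point is 1/3 but which multiplies any rounding error by 4 every iteration. Comparing plain double against long double (80-bit extended on most x86 compilers; note MSVC maps long double to 64-bit double, so there you'd see no difference) shows how the extra mantissa bits delay the blow-up:

#include <stdio.h>

int main(void)
{
    double      d  = 1.0 / 3.0;
    long double ld = 1.0L / 3.0L;

    /* x <- 4*x - 1 leaves 1/3 unchanged exactly, but any rounding
       error in x is amplified by a factor of 4 each iteration. */
    for (int i = 0; i < 30; i++) {
        d  = 4.0  * d  - 1.0;
        ld = 4.0L * ld - 1.0L;
    }

    printf("double      : %.17g\n",  d);   /* far from 1/3 */
    printf("long double : %.21Lg\n", ld);  /* much closer  */
    return 0;
}

After 30 iterations the double result has drifted completely away from 1/3, while the 80-bit result is still in the right neighbourhood, which is exactly the kind of case where the extra precision matters.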
Most x86 compilers give you an optional 80-bit FP data type (I haven't used it for a long time; I think it's called "extended", but I'm not sure). Obviously it's less efficient to do so, but those extra mantissa bits (the 80-bit format carries a 64-bit mantissa versus 53 bits for a double) can be saved to memory if the compiler is so instructed.
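In C that type is usually exposed as long double, and float.h will tell you whether it really is the x87 80-bit format on your compiler. A quick check (output varies by compiler and target; again, MSVC just aliases it to double):

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("sizeof(double)      = %zu, DBL_MANT_DIG  = %d\n",
           sizeof(double), DBL_MANT_DIG);        /* 53-bit mantissa      */
    printf("sizeof(long double) = %zu, LDBL_MANT_DIG = %d\n",
           sizeof(long double), LDBL_MANT_DIG);  /* 64 if 80-bit x87 fmt */
    return 0;
}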
As mentioned earlier, SSE2 is potentially much faster if vectorization is possible, but its reduced instruction richness may make it unsuitable for certain tasks, particularly some scientific applications. Also, depending on the implementation, SSE2 may still be faster for certain tasks even if vectorization is not possible. (I don't know this for certain, but I've seen some benchmarks suggesting the Opteron/A64 etc. might still benefit from SSE2 even in the absence of vectorization.)
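To make the vectorization point concrete, here's a minimal sketch using SSE2 intrinsics (the array names and size are made up for illustration): each __m128d register holds two 64-bit doubles, so one add instruction does two additions.

#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdio.h>

#define N 8  /* assumed to be a multiple of 2 */

int main(void)
{
    double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {  /* some test data */
        a[i] = i;
        b[i] = 10.0 * i;
    }

    /* Two doubles added per loop iteration instead of one. */
    for (int i = 0; i < N; i += 2) {
        __m128d va = _mm_loadu_pd(&a[i]);
        __m128d vb = _mm_loadu_pd(&b[i]);
        _mm_storeu_pd(&c[i], _mm_add_pd(va, vb));
    }

    for (int i = 0; i < N; i++)
        printf("%g ", c[i]);
    printf("\n");
    return 0;
}

Even without packing two values per register, compilers can do scalar FP math through SSE2 instead of the x87 stack (e.g. GCC's -msse2 -mfpmath=sse), which is presumably what those no-vectorization benchmark gains were measuring.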