I have seen a few edge cases where third-party defraggers work better than Windows defrag / Contig, but it was rare.
Example: a Lotus Notes server with 100+ mail files of 2–30 GB each, where the admin had file growth set to "File system default" (i.e. NTFS 4k), compounded by a mail file cleanup that compacted the store weekly. After months, this left some of these files with 750k+ fragments. I have a screenshot of that somewhere. I had to use a third-party defrag utility (one that still used the built-in NTFS defrag API) to open gaps on the disk and defrag it, mostly because the system couldn't be taken down long enough for a full move operation to naturally defragment the files. We also changed the compact schedule to monthly, set it to "compact and allocate an additional 256MB of space per store," and set the stores to expand by 64MB at a time. That defragger ran at idle priority for about 9 days before it got the mail stores down to 64MB fragments.
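A back-of-the-envelope sketch of why the growth setting matters so much. Assuming (worst case, purely hypothetical) that every growth increment lands in a separate extent, a 30 GiB store grown 4 KiB at a time versus 64 MiB at a time looks like this:

```python
# Worst-case fragment counts for a 30 GiB mail store, assuming each
# growth increment is allocated as its own extent (a deliberate
# worst-case assumption, not a claim about actual NTFS behavior).
GIB = 1024 ** 3
MIB = 1024 ** 2

store_size = 30 * GIB

frags_4k = store_size // (4 * 1024)   # "File system default" 4 KiB growth
frags_64m = store_size // (64 * MIB)  # 64 MiB growth increments

print(frags_4k)   # → 7864320
print(frags_64m)  # → 480
```

Real allocation behavior is better than this worst case (NTFS will extend an existing extent when free space allows), but it shows why a weekly compact plus 4k growth can march a busy store toward six- or seven-digit fragment counts, while 64MB increments keep it in the hundreds.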
Fragmentation with fragments of 64MB+ barely registers on performance. But when people search email on stores approaching 1 million+ file fragments, they do notice the hit, even on a decent SAN.