Personally, I would just stick with the default 64K and be done with it. Moving down to 16K will slightly increase the read transfer rate on small files, but small-file performance will still be limited primarily by IDE's slower access times, so the difference should be very small. You might as well keep the better performance on the large files you do have to deal with.
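If it helps to see why stripe size cuts both ways, here's a minimal sketch of how RAID-0 striping maps a file to disks. The 2-disk array and the file sizes are just assumptions for illustration, not anything from the Anand tests: a file smaller than one stripe sits on a single drive no matter what, while a file larger than the stripe size gets split across drives and can be read in parallel.

```python
# Minimal sketch of RAID-0 striping with a hypothetical 2-disk array.
# Shows why stripe size matters: a file smaller than one stripe is
# served by a single disk, while a file spanning several stripes can
# be read from both disks in parallel.

NUM_DISKS = 2  # assumed 2-drive RAID-0 array, purely for illustration

def disks_touched(file_size: int, stripe_size: int) -> int:
    """How many distinct disks a file starting at a stripe boundary spans."""
    stripes = -(-file_size // stripe_size)  # ceiling division
    return min(stripes, NUM_DISKS)

for size in (8 * 1024, 32 * 1024, 256 * 1024):
    for stripe in (16 * 1024, 64 * 1024):
        print(f"{size // 1024:>4}K file, {stripe // 1024:>2}K stripe: "
              f"{disks_touched(size, stripe)} disk(s) involved")
```

With 16K stripes a 32K file spans both drives; with 64K stripes it sits on one. That's the whole small-file transfer-rate argument in a nutshell, and it's also why the effect gets swamped by access time on IDE drives.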
In fact, the Anandtech article mentioned above found 64K to be the best, by a narrow margin, for typical workstation use on the Promise controller. But... those tests were conducted using StorageReview's Testbed 2 methodology, which SR now states is misleading for desktop usage.

All of this leads to the very tricky problem of benchmarking storage. The only characteristic of an array you can reliably benchmark yourself is streaming transfer rate (using Winbench or Iometer), but that accounts for only a small part of an array's overall performance, especially on a typical Windows desktop. Other factors, especially firmware quality and access times, matter more. SR now measures these with a new, very expensive commercial benchmark called IPEAK.

About the best you could do yourself would be to run something like Winstone Content Creation, though that doesn't necessarily reflect your personal usage very well. Content Creation was also used in the Anand article and supported their Iometer results, so you wouldn't really be learning anything new. Practically, I don't see much you can do other than trust the Anand results, imperfect though they are.
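For what it's worth, if you just want a rough feel for what "streaming transfer rate" actually measures, a crude version is easy to script yourself. This is only a sketch, not a stand-in for Iometer or Winbench; the file path is a made-up assumption, and real tools also control queue depth, caching, and access patterns:

```python
# Rough sketch of a streaming (sequential) read throughput measurement.
# TEST_FILE is a hypothetical large file on the array under test --
# it should be several times bigger than RAM to limit cache effects.

import time

TEST_FILE = "testfile.bin"   # assumed path, create your own large file here
CHUNK = 1024 * 1024          # 1 MB reads to approximate a streaming workload

def streaming_read_mb_per_s(path: str) -> float:
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:  # unbuffered raw reads
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    return total / elapsed / (1024 * 1024)

print(f"~{streaming_read_mb_per_s(TEST_FILE):.1f} MB/s sequential read")
```

Run it twice and you'll likely see the OS cache inflate the second number, which is exactly the kind of pitfall the commercial tools are built to avoid, and it tells you nothing about access times or firmware behavior, which is the point above.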
The SR article on Testbed 3 and their improved methodology is an *extremely* informative read if you're interested in storage. You can find it here.