No, what I mean is that if your memory is low enough that you're actively using the page file, you're also going to be paging from binaries, shared libraries, data files, etc. on other volumes too, so putting the page file on its own partition is just going to cause more seeking and make things worse.
Consider the case where the page file sits at the start of the partition, followed by user data (let's say 30% of the disk is full), and the system is allowed to enlarge the page file. When a heavy usage pattern is detected, the new chunk of the page file will be placed after the end of the user data on the partition. As this heavy usage pattern continues, the paging algorithm tells the OS that all existing chunks of the page file are still needed, so the system will not remove them during the short periods when it is idle and/or under only light load.
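To make the growth step concrete, here's a toy free-space model in Python. The sizes, the first-fit strategy, and the `first_fit` helper are all invented for illustration; real NTFS allocation is more involved, but the effect is the same: once user data sits behind the original page file, any new page-file chunk can only land after it.

```python
# Toy free-space model: the disk is a list of (start, length) extents.
# All numbers are illustrative GB figures, not real NTFS behavior.

DISK_SIZE = 100

def first_fit(free_extents, size):
    """Allocate `size` from the first free extent that fits.
    Returns the allocation's start offset and the updated free list."""
    for i, (start, length) in enumerate(free_extents):
        if length >= size:
            updated = list(free_extents)
            if length == size:
                updated.pop(i)
            else:
                updated[i] = (start + size, length - size)
            return start, updated
    raise MemoryError("disk full")

free = [(0, DISK_SIZE)]

# Initial page file at the very start of the partition.
pf1, free = first_fit(free, 4)    # lands at offset 0

# User data fills ~30% of the disk, directly after the page file.
_, free = first_fit(free, 30)

# Heavy load makes the page file grow: the new chunk can only be
# placed *after* the user data, creating the second fragment.
pf2, free = first_fit(free, 4)

print(pf1, pf2)
```

Running this shows the second chunk allocated well past the first, which is exactly the fragmentation the scenario describes.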
As time goes by the user adds new applications to his machine, filling 2/3 of the disk, and continues with the same heavy usage pattern. When he reaches a new peak of memory demand, the paging algorithm will try to guess the best size for a new page-file chunk to allocate.
Now we have three page-file chunks scattered across the disk: one at the beginning of the disk, one in the middle of the user data, and one at the end of the data. The paging algorithm will try to estimate which chunks of code/data are best stored on which page-file fragment, but this is only a prediction, and at times a simple switch from one active window to another will cause the system to read from ALL the page-file fragments, in a non-optimal sequence (middle chunk first, first chunk second, last chunk third, and then an additional read from the first chunk), causing a noticeable delay.
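A rough back-of-the-envelope model of the seek cost: the sketch below (chunk offsets and the `total_seek_distance` helper are made up for illustration, not anything Windows exposes) sums the head movement for the read sequence just described versus the same four reads done in ascending order.

```python
# Toy model of head movement (seek distance) for the fragmented
# page-file scenario. Offsets are made-up GB positions on the platter.

CHUNKS = {
    "first":  0,    # original page file at the start of the disk
    "middle": 30,   # chunk added after the first batch of user data
    "last":   66,   # chunk added once the disk was ~2/3 full
}

def total_seek_distance(read_order):
    """Sum of absolute head movements for a sequence of chunk reads,
    with the head starting at offset 0."""
    pos, total = 0, 0
    for chunk in read_order:
        target = CHUNKS[chunk]
        total += abs(target - pos)
        pos = target
    return total

# The non-optimized sequence from the example above:
# middle, first, last, then back to the first chunk again.
worst = total_seek_distance(["middle", "first", "last", "first"])

# The same four reads, reordered to minimize back-tracking.
better = total_seek_distance(["first", "first", "middle", "last"])

print(worst, better)  # 192 66
```

Even in this tiny example the out-of-order sequence moves the head roughly three times as far, which is where the noticeable delay comes from.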
The technique described earlier would have prevented this
specific delay.
Now remember that this is only a small idealized example, with only three large page-file chunks; in the real world I've seen far worse cases.
/edit
I'm not saying this optimization should be perceived as a 'best practice' or even a good 'tweak', but it does have its merits, at least in some very specific scenarios.
(probably highly debatable but wth) Also, you cannot take other unrelated read/write operations (user or system) into account when you design/deploy paging algorithms, or any other system service for that matter. In theory, system services should be designed to work on their own terms (at higher priority) and preferably in their own secluded background microcosms (memory and other resources like disks).