ZFS offers several beneficial features in a RAID:
2. Everything is checksummed and verified. This costs CPU, RAM, and disk I/O, but it detects corruption and data loss on the drive far better than most common file systems do.
3. With RAID 1 (mirroring), an error on one drive can be recovered using data from the other drive; with RAIDZ, it can be recovered using parity data. A traditional RAID 1 can only do this if the drive reports an error, not for any other form of corruption. With RAID 5, data corruption in one stripe means the whole stripe set is bad. RAIDZ2 offers RAID 6-like protection, but better, since it can also handle a drive read error or silent corruption on top of another failed disk (for instance, a read error while rebuilding or running degraded, after a disk drops out or a new one is installed).
4. With copy on write, it can get around the RAID 5 write hole, since the old data is still on the drive. If it loses power, it can simply roll back. Copy on write does tend to cause fragmentation over time, though.
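To make point 1 concrete, here's a toy sketch (not ZFS code; the function names and the per-block SHA-256 are my own stand-ins, though ZFS does offer sha256 as one of its checksum algorithms) of what per-block checksumming buys you over a file system that just trusts the drive:

```python
import hashlib

# Hypothetical block store: each block is saved alongside its checksum.
def write_block(store, key, data):
    store[key] = (data, hashlib.sha256(data).hexdigest())

def read_block(store, key):
    data, checksum = store[key]
    # Verify on every read, like ZFS does; a plain FS would skip this.
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError("checksum mismatch: silent corruption detected")
    return data

store = {}
write_block(store, "block0", b"important data")
# Simulate a drive silently flipping a bit WITHOUT reporting any error:
store["block0"] = (b"imp0rtant data", store["block0"][1])
try:
    read_block(store, "block0")
except IOError as e:
    print(e)  # a non-checksumming FS would have handed back the bad data
```

The point is that the corruption is caught on read even though the drive itself never reported a problem.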
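The parity recovery in point 2 is, at its core, the same XOR trick RAID 5 and RAIDZ1 use: any single lost block can be rebuilt from the rest. A minimal demo (toy block sizes, not real on-disk layout):

```python
# XOR a list of equal-length blocks together.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# Three data "drives" plus one parity "drive".
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])

# Drive 1 dies, or returns garbage that the checksum flagged:
rebuilt_d1 = xor_blocks([d0, d2, parity])
print(rebuilt_d1 == d1)  # True
```

The ZFS twist is that the checksum tells it *which* copy is bad, so it can repair silent corruption too, not just a drive that admits to an error.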
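And the copy-on-write trick from point 3, sketched in a few lines (hugely simplified; `blocks`, `root`, and `cow_update` are made-up names, standing in for ZFS's block tree and uberblock):

```python
# Toy copy-on-write store: new data goes to a fresh block, and the root
# pointer flips only after the write completes.
blocks = {0: b"old stripe"}
root = 0                      # points at the live version

def cow_update(data):
    global root
    new_id = max(blocks) + 1
    blocks[new_id] = data     # write the new copy first; old block untouched
    # --- a power cut here leaves root -> old data, still consistent ---
    root = new_id             # atomic pointer flip commits the write

cow_update(b"new stripe")
print(blocks[root])           # b'new stripe'; b'old stripe' is still on "disk"
```

Since the old stripe is never overwritten in place, there's no window where a power loss leaves you with a half-written stripe and stale parity, which is exactly the RAID 5 write hole.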
TANSTAAFL. Getting the same kind of performance from ZFS as from traditional HW RAID will cost you as much as traditional HW RAID, if not more, plus the time spent tweaking it. OTOH, it will also give you far more flexibility, error checking, and error correction than normal HW or SW RAID.
200MBps from a NAS is simply not happening without spending more than you want to. It will happen someday, but not today. 10GbE costs too much (2 NICs alone would eat about half your current budget), and most affordable consumer alternatives have poor FreeBSD or Linux support. At best, you'll be looking at 80-100MBps from a wired NAS. In theory you can gang the Ethernet adapters together, but good luck getting that working across OSes and all.
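The back-of-the-envelope math makes the GbE ceiling obvious (the 6% overhead figure is a rough assumption for Ethernet framing plus TCP/IP; real SMB/NFS numbers land lower still):

```python
# Why 200MBps over gigabit Ethernet isn't happening, in three lines.
line_rate = 1_000_000_000 / 8 / 1_000_000  # GbE raw: 125 MB/s
overhead = 0.06                            # assumed framing + TCP/IP overhead
wire_max = line_rate * (1 - overhead)      # best case on the wire, ~117 MB/s
print(f"{wire_max:.0f} MB/s theoretical best")
```

So even a perfect NAS tops out well under 125MBps on a single gigabit link, before the file-sharing protocol takes its cut.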
Also, write speeds could become an issue over time with RAIDZ (read speeds, for a file server, should never become an issue with healthy hardware). Write-back caching can sometimes be enough to mitigate it, but sometimes it needs a real cache and/or log device, which means forking over more money for a couple of SSDs. OTOH, RAID 10 is always an option. Without dedupe, it might not end up becoming an issue, though (YMMV).
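A rough model of why wide RAIDZ can lag on writes while RAID 10 doesn't: a RAIDZ vdev delivers roughly the random-write IOPS of a single disk, while mirror pairs stripe their IOPS together. The per-disk number below is an assumption (typical 7200rpm HDD), so treat this as a sketch, not a benchmark:

```python
# Assumed small-random-write IOPS for one 7200rpm HDD.
disk_iops = 150
disks = 6

# One 6-disk RAIDZ vdev: every write touches the whole stripe,
# so the vdev behaves like ~one disk for random writes.
raidz_iops = disk_iops * 1

# Same 6 disks as three mirrored pairs (RAID 10): vdevs stripe,
# so random-write IOPS add up per pair.
raid10_iops = disk_iops * (disks // 2)

print(raidz_iops, raid10_iops)  # 150 450
```

Sequential streaming writes are much kinder to RAIDZ than this; it's the small random writes (databases, lots of small files) where the gap shows up.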
With a home-built server, you might end up needing Intel NICs to saturate GbE, due to driver support (with FreeBSD, it's going to vary by which NAS distro you use and what your mobo has, so try what's in there first), but that's not too expensive (20-30GBP, probably).
Also, does anyone know if ZFS likes big CPU caches? I wonder if an AMD FX-4300/970 build might be slightly better.
If you want the high performance, you'll need a fast RAID enclosure. Or, if you don't need the network sharing, get a big enough case for your PC plus a RAID controller, and run RAID 10 in it. Neither would give you network access, though.
Now, I'm not saying you should stay away from a ZFS server, just don't have sky-high expectations, especially over the network, which looks like it would be your main bottleneck. If keeping your data correct, and letting you know when it's not, is priority #1, then ZFS is one of the best options out there, and it is cost-effective, as long as you don't mind putting the time in (us weirdos around here actually like spending time tinkering with this stuff 🙂).