The Intel SSDs are connected to the onboard chipset SATA ports, but it's software RAID that does the work. The disks operate in AHCI mode with NCQ, which is required to use SSDs to their full potential.
The RAID engine is geom_stripe (the generic RAID0 on FreeBSD). The stripe size is 1MiB. My tests suggest that the higher the stripe size, the more random IOPS you get.
Generally I found large stripe sizes to work well for random I/O, and small stripe sizes to work well for sequential transfers. With a 1MiB stripe size, you need to read ahead a lot to keep the other disks busy during sequential I/O.
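For reference, creating such a stripe set looks roughly like this on FreeBSD (a minimal sketch; the device names ada1 through ada4 and the label 'ssdstripe' are placeholders, and -s takes the stripe size in bytes):

    kldload geom_stripe
    gstripe label -v -s 1048576 ssdstripe /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4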
FreeBSD (and also Linux) does not require partitions on disks; you can put the filesystem directly on the bare device node (/dev/sda, for example). This has the advantage of always being perfectly aligned. The downside is that booting into Windows can quickly overwrite portions of the disk, because Windows asks to 'initialize' unpartitioned disks. This is only a concern if you ever connect these disks to another OS like Windows.
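So on the server the filesystem can go straight onto the stripe device without any partition table, something like this (a sketch, assuming UFS; a ZFS pool could be created directly on the device in the same way):

    newfs -U /dev/stripe/ssdstripe
    mkdir -p /mnt/ssd
    mount /dev/stripe/ssdstripe /mnt/ssd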
The Intel X25-Vs read 245MB/s, and random read is only slightly below that. So RAID0 in my case appears to scale perfectly; I couldn't be happier with its performance. Each SSD writes at 40MB/s, so 160MB/s with four or 200MB/s with five SSDs in the RAID.
Some background info:
I'm using the SSDs to power all five of my Ubuntu Linux workstations, which do not have a system disk or any other local disk. Each workstation boots over the network and has its system drive on the central server instead, accessed via iSCSI and NFS. This has the advantage that all my workstations effectively run on the SSDs in the central server. The downside is that gigabit limits my throughput considerably, which is why I'm looking at 10GBase-T and at teaming several cheap gigabit NICs together into a faster (2Gbps) network interface.
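Each system disk is just an image on the server. With ZFS, one way to do that is a zvol per workstation, roughly like this (a sketch; the pool name 'tank', the dataset name and the 20G size are made up, and the actual iSCSI export depends on which target daemon you run):

    zfs create -V 20G tank/ws1-sysdisk
    # the target daemon then exports tank/ws1-sysdisk over iSCSI,
    # and the workstation netboots and uses it as its root disk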
One other sleek feature is snapshots of my system disks. Whenever I do an update, I snapshot first. If anything goes wrong with the update, which happened once during an upgrade to a beta version, I can simply roll back to the state before the update and my system disk 'goes back in time', so to speak.
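In ZFS terms that is just two commands (using the hypothetical dataset name from above):

    zfs snapshot tank/ws1-sysdisk@pre-update
    # ... run the update on the workstation; if it breaks:
    zfs rollback tank/ws1-sysdisk@pre-update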
So I really love ZFS+SSD.
Right now, I'm using the SSDs as:
- iSCSI images of system disks
- NAS central storage accessed with NFS
- ZFS L2ARC cache device for a huge multi-vdev RAID-Z array storing mass data (see the sketch below)
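Attaching an SSD as L2ARC cache to an existing pool is a one-liner (a sketch; 'bigpool' and the device name are placeholders):

    zpool add bigpool cache /dev/ada5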
I'm still looking into multi-NIC setups to increase my network bandwidth.
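On the FreeBSD side, teaming would be done with lagg(4), roughly like this (a sketch assuming two em(4) NICs and an LACP-capable switch; note that a single iSCSI or NFS stream still tops out at one link's speed, the gain is in aggregate throughput across clients):

    ifconfig em0 up
    ifconfig em1 up
    ifconfig lagg0 create
    ifconfig lagg0 up laggproto lacp laggport em0 laggport em1 192.168.1.10 netmask 255.255.255.0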