This is the first time I've installed any flavor of ZFS, and I decided to go with a bare-metal setup using Nexenta CE.
I wanted a storage system that supports VAAI, and Nexenta CE fit the bill perfectly. It will be used as a shared iSCSI target for 4 ESXi hosts on a C6100 platform. The system is built on a Supermicro 836 chassis with a built-in SAS2 expander and the following hardware:
X8DTL-iF, 24GB RAM
Intel L5606 CPU
OneConnect 2-port 10Gb CNA in Ethernet mode
16 x Hitachi 450GB 15k drives
LSI 9211 in IT mode
1 x WD Raptor 150GB -- syspool
The zpool is created with 8 mirror vdevs and no L2ARC or SLOG (I was planning to use the other 9211 port to connect a few SSDs for that later), compression=on, dedup=off, sync=standard. I've created 3 x 1TB iSCSI zvols and 3 iSCSI targets with default settings and connected them to the ESXi hosts.
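In case it matters, here's roughly how the pool and one of the zvols were set up. This is a reconstruction from memory, not a paste of my session: the pool name and device IDs are placeholders, and the LU export uses the stock COMSTAR tools that ship with Nexenta.

```shell
# Pool of 8 mirror vdevs (device names c0t*d0 are placeholders)
zpool create tank \
  mirror c0t0d0 c0t1d0   mirror c0t2d0 c0t3d0 \
  mirror c0t4d0 c0t5d0   mirror c0t6d0 c0t7d0 \
  mirror c0t8d0 c0t9d0   mirror c0t10d0 c0t11d0 \
  mirror c0t12d0 c0t13d0 mirror c0t14d0 c0t15d0

# Pool-wide properties as described above
zfs set compression=on tank
zfs set dedup=off tank

# One of the three 1TB zvols, exported as a COMSTAR logical unit
zfs create -V 1T tank/esx-lun0
sbdadm create-lu /dev/zvol/rdsk/tank/esx-lun0
```

The three targets were then created with default settings in the Nexenta UI and bound to the ESXi initiators.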
Initial in-VM performance showed promising results: my ATTO bench numbers were great, and IOMeter also looked very good.
However, as I moved more VMs off my old storage, I noticed that during svMotion the transfer throughput was not steady; in fact it was very bursty. The network performance monitor on the other storage (running Windows) showed erratic throughput: it bursts to 120MB/s for a few seconds, drops to 30MB/s, then climbs back up, repeating until the transfer completes.
A simple 10GB file transfer from my desktop to a VM shows a similar pattern, both in the iSCSI utilization and in my desktop's network throughput.
Looking at iostat during a transfer shows that ZFS flushes its cache every 5 seconds for about 3 seconds and then stops, and it is during that write flush that my transfer performance tanks.
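For anyone who wants to reproduce the observation, this is the kind of invocation I was watching (pool name "tank" is a placeholder); 1-second samples make the roughly-5-second write bursts easy to see in the write bandwidth column:

```shell
# Per-vdev I/O stats for the pool, sampled every second
zpool iostat -v tank 1
```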
Just curious whether this is normal ZFS behavior, or if there's any tuning that can be done to help with this issue.
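The pattern looks like the ZFS transaction-group (txg) commit cycle to me, so these are the /etc/system tunables I was thinking of experimenting with. Both are Illumos-era ZFS tunables; whether they exist under those names on my Nexenta CE build is an assumption on my part, so I'd appreciate confirmation before I try them.

```shell
# Shorten the txg commit interval from the 5s default so each flush
# moves less data (tunable name assumed from Illumos-era ZFS):
echo 'set zfs:zfs_txg_timeout = 1' >> /etc/system

# Cap the dirty data a single txg may accumulate, in bytes
# (1 GiB here; again an assumed Illumos-era tunable):
echo 'set zfs:zfs_write_limit_override = 0x40000000' >> /etc/system

# /etc/system changes take effect after a reboot.
```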
Thanks