iSCSI over wireless N/Gbit switch-- bandwidth

Scarpozzi

Lifer
Jun 13, 2000
26,391
1,780
126
I have a RAID1 NAS that has a 10/100/1000 NIC. I was using my WRT54G until last month when it finally died. I burned a $125 Dell gift card to get some new patch cables and a D-Link DIR-655.

I just reconfigured my NAS network settings for the new subnet and copied files from an SD card.... All I can say is, WOW...it made a total difference. The real limitation was always wireless G only providing 20-30 megabits. I am extremely happy with the performance of N & an iSCSI mapped drive.
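For anyone curious about the rough math behind that "WOW" - the link speeds below are typical real-world numbers I'm assuming, not measurements:

Code:
# Rough back-of-envelope: time to move 10 GB at typical *effective* link speeds.
# The Mbit/s figures are assumptions for illustration, not benchmarks.
links_mbps = {
    "wireless G (real world)": 25,
    "wireless N (real world)": 150,
    "gigabit Ethernet (wired)": 940,
}

payload_gb = 10  # e.g. a big SD card dump

for name, mbps in links_mbps.items():
    mbytes_per_s = mbps / 8                    # megabits/s -> megabytes/s
    minutes = payload_gb * 1000 / mbytes_per_s / 60
    print(f"{name:26s} ~{mbytes_per_s:5.1f} MB/s -> about {minutes:5.1f} min for {payload_gb} GB")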

Does anyone else use iSCSI this way? I've got all my systems set up to auto-mount using iSCSI initiators & CHAP. It's funny because at work, I avoid iSCSI like the plague and only use FC. :p
 

azazel1024

Senior member
Jan 6, 2014
901
2
76
No, because in general I've noticed SMB tends to be a little faster than iSCSI in most of the benchmarks/transfer methods that I care about. That and iSCSI is limited to your max single connection speed.

I am running Windows 8 on my server and 8.1 on my desktop with a couple of GbE links to the switch on each...which means I am limited to 2Gb/s with SMB 3.0 transfers (thank you, SMB Multichannel). iSCSI would limit me to 1Gb/s max transfers.
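Back-of-the-napkin version of that ceiling (the ~94% usable-per-link figure is just my assumption for a well-behaved GbE link):

Code:
# Illustrative only: multiple SMB 3.0 connections can spread across both NICs,
# while a single iSCSI session (one TCP connection, no MPIO) rides one link.
usable_per_gbe = 1000 / 8 * 0.94                # ~117 MB/s usable per GbE link (assumed)

smb_multichannel_ceiling = 2 * usable_per_gbe   # two links in play
single_iscsi_ceiling = 1 * usable_per_gbe       # pinned to one link

print(f"SMB 3.0 Multichannel over 2x GbE: ~{smb_multichannel_ceiling:.0f} MB/s ceiling")
print(f"Single iSCSI session over 1x GbE: ~{single_iscsi_ceiling:.0f} MB/s ceiling")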
 

kevnich2

Platinum Member
Apr 10, 2004
2,465
8
76
You can use multipathing with iSCSI, and your bandwidth goes up with the number of channels/NIC paths you have configured for multipath.

As for the OP - I wouldn't recommend iSCSI in that fashion anyway, but that's up to you. I would strictly use SMB. iSCSI is block-based storage and is used more in server applications.
 

heymrdj

Diamond Member
May 28, 2007
3,999
63
91
No, because in general I've noticed SMB tends to be a little faster than iSCSI in most of the benchmarks/transfer methods that I care about. That and iSCSI is limited to your max single connection speed.

I am running Windows 8 on my server and 8.1 on my desktop with a couple of GbE links to the switch on each...which means I am limited to 2Gb/s with SMB 3.0 transfers (thank you, SMB Multichannel). iSCSI would limit me to 1Gb/s max transfers.

Multi-path allows for some pretty insane speeds. I run dual 10GbE NICs between server/SANs just for the iSCSI network.
 

kevnich2

Platinum Member
Apr 10, 2004
2,465
8
76
This is for the OP - depending on your application and how you intend to use it, you're going to find limitations in how you're using iSCSI. Your computer will see an iSCSI drive almost like a locally attached drive; it's just using TCP/IP to access it. You cannot access an iSCSI LUN from multiple computers simultaneously or you will cause data corruption. In contrast, SMB is made for multiple connections from different systems.
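If it helps to see the "two uncoordinated writers" problem in miniature, here's a toy sketch. It's only lost updates on a plain file; a real non-clustered filesystem on a shared LUN breaks far worse:

Code:
# Toy illustration only: two uncoordinated "clients" read-modify-write the same
# storage with no locking, and updates get lost. A real non-clustered filesystem
# on a shared iSCSI LUN fails far worse, since each client also caches metadata
# it assumes it owns exclusively.
import multiprocessing
import os
import tempfile

def client(path, iterations):
    for _ in range(iterations):
        with open(path, "r+") as f:
            value = int(f.read().strip() or "0")   # read...
            f.seek(0)
            f.write(str(value + 1))                # ...modify-write, no coordination
            f.truncate()

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "counter")
    with open(path, "w") as f:
        f.write("0")

    workers = [multiprocessing.Process(target=client, args=(path, 2000)) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

    with open(path) as f:
        print(f"expected 4000, got {f.read()}")    # almost always less: lost updates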

This is why it's used more in server environments, where a server accesses a centrally stored iSCSI LUN and clients then access that data from the server over SMB.

I can't really see any situation where using iSCSI instead of SMB in the fashion you are is better, but that's ultimately for you to determine yourself.
 

Scarpozzi

Lifer
Jun 13, 2000
26,391
1,780
126
Multi-path allows for some pretty insane speeds. I run dual 10GbE NICs between server/SANs just for the iSCSI network.
10G copper is definitely the way to go these days. It's finally giving FC a run for its money with the speed vs price.

My NAS is split. I'm running half of my storage on SMB/CIFS and the other half on iSCSI. I was really doing iSCSI in this fashion as a test and haven't had any performance issues with it on the wired network. I was primarily doing iSCSI because I was using the NAS to back up a desktop system and wanted to map a permanent drive. Microsoft's iSCSI initiator works alright for this and speeds are great over Ethernet.

These days, I have more laptops than desktops, and I typically offload our camera and video footage from SD card to a laptop, then to the iSCSI drive over WiFi... so the transfer is a little slow when moving multiple GB of data. Other than that, I just store data there and will eventually get another NAS or external drive and mirror data using rsync, if I don't just pay a cloud backup company a yearly fee...
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I wouldn't run iSCSI over wireless N because it is so intolerant of the packet loss, latency and half duplex nature of wireless.
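Rough math on why latency, not just bandwidth, is the killer over wireless (the RTT numbers are assumptions for illustration):

Code:
# Queue-depth-1 view: one synchronous I/O outstanding at a time means IOPS is
# capped at roughly 1 / round-trip time. RTT figures are assumed for illustration.
rtts_ms = {
    "wired GbE LAN": 0.3,
    "decent wireless N": 3.0,
    "congested / retrying wireless": 20.0,
}

for name, rtt_ms in rtts_ms.items():
    iops_ceiling = 1000 / rtt_ms
    print(f"{name:30s} ~{iops_ceiling:6.0f} IOPS ceiling per outstanding I/O")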

Quick glance at one SAN shows I am pushing 1800 IOPS over iSCSI this minute. FC is practically "dead." Very little reason to pay for the up charge for it unless there is a very specific reason to use it like ultra low latency for a high load database server.

Hell even lowly "1 gig" is pretty powerful 8 path multipathed to dual active/active controllers. Granted I would like 10gig for that box because of the wiring mess behind those servers.
 

heymrdj

Diamond Member
May 28, 2007
3,999
63
91
I wouldn't run iSCSI over wireless N because it is so intolerant of the packet loss, latency and half duplex nature of wireless.

Quick glance at one SAN shows I am pushing 1800 IOPS over iSCSI this minute. FC is practically "dead." Very little reason to pay for the up charge for it unless there is a very specific reason to use it like ultra low latency for a high load database server.

Hell even lowly "1 gig" is pretty powerful 8 path multipathed to dual active/active controllers. Granted I would like 10gig for that box because of the wiring mess behind those servers.

That's why I've moved to 10GbE. It's at the point where, cost-wise, a 2x10Gb module is not much more than two 4x1Gb modules. Less space taken in my boxes (dual GPU cards in them, so PCIe space is at a premium), and I go from 8 Ethernet ports to 2 (with greater speed on top of that!). It's pretty much a no-brainer for VM workloads.
 

RadiclDreamer

Diamond Member
Aug 8, 2004
8,622
40
91
I wouldn't run iSCSI over wireless N because it is so intolerant of the packet loss, latency and half duplex nature of wireless.

Quick glance at one SAN shows I am pushing 1800 IOPS over iSCSI this minute. FC is practically "dead." Very little reason to pay for the up charge for it unless there is a very specific reason to use it like ultra low latency for a high load database server.

Hell even lowly "1 gig" is pretty powerful 8 path multipathed to dual active/active controllers. Granted I would like 10gig for that box because of the wiring mess behind those servers.

Where I stand, FC is FAR from dead; the newer 16Gb optics still outperform anything that iSCSI can do.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Where I stand, FC is FAR from dead; the newer 16Gb optics still outperform anything that iSCSI can do.

It is still practically dead. Like I mentioned, if you still need FC, you likely have a special case.

It may bounce back as the 32Gb stuff comes out and 128Gb is in the pipe, but 40Gb Ethernet isn't far off and 100Gb Ethernet lab devices are starting to appear.

From my experience, the two of them are just transport mediums. Once you get to real-world use, the techs are neck and neck. 1 gigabit of FC vs 1 gigabit of iSCSI generally delivers the same throughput, with FC edging out iSCSI on latency. FCoE generally eliminates the latency of the IP stack.
 

RadiclDreamer

Diamond Member
Aug 8, 2004
8,622
40
91
It is still practically dead. Like I mentioned, if you still need FC, you likely have a special case.

It may bounce back as the 32Gb stuff comes out and 128Gb is in the pipe, but 40Gb Ethernet isn't far off and 100Gb Ethernet lab devices are starting to appear.

From my experience, the two of them are just transport mediums. Once you get to real-world use, the techs are neck and neck. 1 gigabit of FC vs 1 gigabit of iSCSI generally delivers the same throughput, with FC edging out iSCSI on latency. FCoE generally eliminates the latency of the IP stack.

FC also has less protocol overhead, which means slightly better performance.
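For what it's worth, the per-frame overhead difference is real but small. Rough framing math (header sizes from the specs, everything else simplified):

Code:
# Simplified per-frame wire efficiency. Ignores iSCSI PDU headers (the 48-byte
# BHS is amortized over many frames) and TCP options, so treat it as ballpark.
def iscsi_efficiency(mtu):
    # On the wire: preamble+SFD 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12 = 38 bytes
    wire_bytes = mtu + 38
    payload = mtu - 20 - 20          # minus IPv4 and TCP headers
    return payload / wire_bytes

def fc_efficiency():
    # FC frame: 2112-byte data field; SOF 4 + header 24 + CRC 4 + EOF 4 = 36 bytes of framing
    return 2112 / (2112 + 36)

print(f"iSCSI, 1500 MTU  : {iscsi_efficiency(1500):.1%}")
print(f"iSCSI, 9000 jumbo: {iscsi_efficiency(9000):.1%}")
print(f"Fibre Channel    : {fc_efficiency():.1%}")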
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
FC also has less protocol overhead, which means slightly better performance.

Agreed - hence why FCoE beats iSCSI. In reality this really only shows up in very latency-sensitive applications, like when finance decides to do a query * on * outer join * on * (very high database load), or when you are very near the limit of the physical layer.

Most of the time, when you need it, you will know you need it. However, just hosting 40-50 file servers on VMware, no one is likely to notice which physical layer you picked.
 
Feb 25, 2011
16,983
1,616
126
I use an iSCSI LUN on my NAS at home as well - mostly because I can. (I'm not doing anything that wouldn't work fine with mapped CIFS shares.)

But it's hard-wired. I've had enough performance issues with WiFi that this thread title makes me wince a little every time I read it.
 

RadiclDreamer

Diamond Member
Aug 8, 2004
8,622
40
91
Agreed - hence why FCoE beats iSCSI. In reality this really only shows up in very latency-sensitive applications, like when finance decides to do a query * on * outer join * on * (very high database load), or when you are very near the limit of the physical layer.

Most of the time, when you need it, you will know you need it. However, just hosting 40-50 file servers on VMware, no one is likely to notice which physical layer you picked.

I'm not overly familiar with the FCoE tech that's out there, but wouldn't it still have IP header info that would make it ever so slightly less efficient than straight-up FC?
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I'm not overly familiar with the FCoE tech that's out there, but wouldn't it still have IP header info that would make it ever so slightly less efficient than straight-up FC?

FCoE doesn't utilize the IP stack. Think IPX/SPX over Ethernet. (FCoE is nothing like IPX - I just mean that, like IPX, it can share an Ethernet network with IP traffic, but neither one utilizes or operates within the other.)
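Concretely, FCoE is a sibling of IP inside the Ethernet frame, not a passenger on top of it - the EtherType field is what tells the switch which one a frame carries. Quick sketch (the frame build is illustrative; the EtherType values are the registered ones):

Code:
# The 14-byte Ethernet header is dst MAC, src MAC, EtherType. FCoE frames carry
# EtherType 0x8906 (FIP uses 0x8914); IPv4 is 0x0800. So FCoE rides directly on
# Ethernet next to IP traffic, with no IP or TCP header involved.
import struct

ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_FCOE = 0x8906

def ethernet_header(dst_mac: bytes, src_mac: bytes, ethertype: int) -> bytes:
    return struct.pack("!6s6sH", dst_mac, src_mac, ethertype)

hdr = ethernet_header(b"\xff" * 6, b"\x00\x11\x22\x33\x44\x55", ETHERTYPE_FCOE)
print(hdr.hex(":"))   # last two bytes 89:06 -> the switch treats this as FCoE, not IP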
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Infiniband. /done

Cheaper and faster than 10GbE too.

I wouldn't say cheaper. I would say they have reached parity. The main issue with IB is that it states the raw signaling rate, so 10Gb Ethernet vs 10Gb IB, the IB is slower. 56Gb IB vs 40Gb Ethernet is barely faster, at about a 44Gb data rate.

Also seems that most of the arrays that support IB are automatically much more expensive...

--edit--

IBM paper on 40GbE vs IB. Generally minimally different or leaning in favor of GbE.

http://www.chelsio.com/wp-content/u...t-A-Competitive-Alternative-to-InfiniBand.pdf
 

RadiclDreamer

Diamond Member
Aug 8, 2004
8,622
40
91
FCoE doesn't utilize the IP stack. Think IPX/SPX over Ethernet. (FCoE is nothing like IPX - I just mean that, like IPX, it can share an Ethernet network with IP traffic, but neither one utilizes or operates within the other.)

So when it says Ethernet, it mainly means just the cabling/link type, correct? I mean, it's not routable or anything? How do you identify storage - a WWN of some sort, since IP addressing isn't there?
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
It puts the FC frames into Ethernet frames. It operates on a standard Ethernet switch, even though there are switches that have FCoE-specific features. You generate a WWN just like with Fibre Channel. I see a lot of them use the Ethernet MAC as part of the WWN.
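For example, one convention I've seen (not universal, just to illustrate the "MAC inside the WWN" idea):

Code:
# Illustrative only: one common (not universal) convention is an NAA type-2 WWN
# whose low six bytes are the adapter's Ethernet MAC.
def wwn_from_mac(mac: str, prefix: str = "20:00") -> str:
    octets = mac.lower().replace("-", ":").split(":")
    assert len(octets) == 6, "expected a 6-octet MAC address"
    return ":".join(prefix.split(":") + octets)

print(wwn_from_mac("00-25-B5-1A-2B-3C"))   # -> 20:00:00:25:b5:1a:2b:3c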
 

RadiclDreamer

Diamond Member
Aug 8, 2004
8,622
40
91
So, on to my next question (too lazy to Google, I guess). What is the benefit of these types of storage connections? Can I use the same adapter for storage and for the network? What if they are separate VLANs (storage/IP), 802.1Q trunked down to the port, etc.?
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
So, on to my next question (too lazy to Google, I guess). What is the benefit of these types of storage connections? Can I use the same adapter for storage and for the network? What if they are separate VLANs (storage/IP), 802.1Q trunked down to the port, etc.?

Some people do. A trunked pair of 10-gig connections is often enough to handle all the I/O for that server - even VMs. If you need more, go up to 3 or 4. A little bit of QoS helps tune it if needed.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
Is this of any use to us that run a home NAS?

What are the reasons I would want to use iSCSI rather than SMB for sharing files, etc.?

Or is this intended for diskless workstations?
Will any off-the-shelf PC / NIC be able to boot off of a remote iSCSI LUN hosted on a NAS?
Is Windows slow running over the network, with no local storage?
How does an iSCSI initiator work, compared to PXE booting?
Are there any wireless USB NICs that support PXE booting? I guess I can continue to run a wireless ethernet bridge, and use the desktop PC's built-in NIC, and PXE booting if supported in the BIOS / UEFI.

Basically, can I boot my PC using remote storage (an iSCSI LUN on a NAS, FreeNAS or some such, maybe QNAP?), and have that remote storage set up with TrueCrypt?

Would performance be acceptable for running Windows 7 on the desktop, over wireless N (30Mbit/sec)?
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
iSCSI is block storage, so for a home NAS it would depend. iSCSI would let you mount a remote LUN like a local disk. That local disk is just block storage, so you can do whatever you want with it. If you have an iSCSI HBA that supports booting, you could boot from it.
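If the "like a local disk" part sounds abstract, this is all block storage means from the client's side. The device path below is hypothetical (Linux, needs root); on Windows the iSCSI disk just shows up in Disk Management:

Code:
# To the OS, the iSCSI LUN is just a block device: an array of fixed-size sectors
# addressed by offset. The filesystem on top of it lives entirely on the client.
# The device path is hypothetical (Linux) and reading it needs root.
import os

DEVICE = "/dev/sdb"    # hypothetical: the disk the iSCSI initiator presented
SECTOR = 512

fd = os.open(DEVICE, os.O_RDONLY)
try:
    first_sector = os.pread(fd, SECTOR, 0)   # read sector 0 (MBR/GPT lives here)
    print(f"read {len(first_sector)} bytes, starting with {first_sector[:8].hex()}")
finally:
    os.close(fd)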

iSCSI and wireless are basically incompatible. It will work, but very badly.