Boot Time: 2.5" vs GbE Diskless to SSD?

dealcorn

Senior member
May 28, 2011
A conventional 2.5" drive like a WD Blue does not deliver a speedy boot process compared to a middle-of-the-road SSD. How does a diskless workstation compare to a 2.5" drive when you are using GbE and your boot drive is an SSD on the server?

I ask this question in the context of a home media consumption station with light Internet use. If the box is consuming media from a hard-wired server with reasonable resources, is local storage necessary?
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
A regular SATA 3 HD is probably faster than gigabit network traffic.


:cool:
 

jumpncrash

Senior member
Feb 11, 2010
A regular SATA 3 HD is probably faster than gigabit network traffic.


:cool:


Not by much, though, and it will depend on the drive; a Western Digital Blue or Green might not be faster.

However, there is no point in running an SSD, because most mid-range drives will already be faster than what gigabit can do.
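
A quick back-of-envelope sketch (Python; the drive speeds are rough assumptions, not benchmarks) shows why:

```python
# Back-of-envelope: gigabit vs. typical drive sequential throughput.
# The per-drive figures are rough assumptions, not benchmarks.

GBE_MB_S = 1_000_000_000 / 8 / 1_000_000   # 1 Gb/s line rate = 125 MB/s
GBE_USABLE = GBE_MB_S * 0.94               # minus TCP/IP framing overhead

drives = {
    '2.5" WD Blue (assumed)': 100,         # MB/s, 5400 RPM laptop drive
    "WD Green (assumed)": 110,
    "mid-range SATA SSD (assumed)": 350,
}

for name, seq in drives.items():
    limit = "network" if seq > GBE_USABLE else "drive"
    print(f"{name}: {seq} MB/s sequential -> bottleneck over GbE: {limit}")
```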
 

thecoolnessrune

Diamond Member
Jun 8, 2005
Your question is a little vague, but I think what you are asking is this:

Given the speeds of a local gigabit network, can you use a solid state drive locally for booting, and access media over the network on a remote storage system?

The answer to that is:

1. Yes
2. That is the best way to do it, as your media is easily accessed by other devices AND you don't have to try to stuff a decent, backed-up storage array into an HTPC chassis.
3. No need for your HTPC chassis to be on all the time to serve media, and you can have a cooler-running chassis.

By all means, put an SSD into your HTPC and access the media over the network from another computer. Gigabit is enough to serve all current media needs unless you have some need to work with raw video.
 

dealcorn

Senior member
May 28, 2011
Your question is a little vague, but I think what you are asking is this:

Given the speeds of a local gigabit network, can you use a solid state drive locally for booting, and access media over the network on a remote storage system?

The answer to that is...

Half right. That is not what I want to do. I know I am OK with boot speed using a 2.5" WD Blue. I want to set up an Intel NUC as a diskless media consumption station: there is no local boot drive. Instead I will boot off the network, and my OS will be located on a networked Intel SSD.

Intel's NUC with Ethernet is set up for either mSATA boot, USB boot, or network boot: there is no plain SATA option. Generally, flash drives boot faster than rotating disks due to faster random I/O (a benefit). Network boot is slow due to latency (a detriment). My question was how to weigh the benefit against the detriment to conclude which would be faster when you boot off a networked SSD.
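
To make the tradeoff concrete, here is a minimal toy model (Python; the read count and every latency figure are assumptions chosen for illustration, not measurements):

```python
# Toy model: a boot is ~N small, mostly serialized random reads,
# each paying a per-read latency. All numbers below are assumptions.

BOOT_READS = 5_000   # assumed count of small random reads during boot

latency_ms = {
    'local 2.5" HDD': 12.0,            # assumed seek + rotational delay
    "local mSATA SSD": 0.1,            # assumed flash read latency
    "networked SSD (GbE)": 0.1 + 0.5,  # flash latency + assumed network RTT
}

for name, per_read in latency_ms.items():
    print(f"{name}: ~{BOOT_READS * per_read / 1000:.1f} s waiting on I/O")
```

Under these made-up numbers the network RTT dominates the SSD's own latency, yet the total still lands far below the spinning disk, which is the outcome I am hoping for.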

From the replies posted above, my first sigh of relief comes from the fact that no one said "You idiot, haven't you seen this testing that conclusively shows that is a bad idea?" My second point of optimism comes from the prudent selection of weasel words like "probably", which suggests this is not a slam-dunk issue.

My intended use is solely media consumption and light Internet browsing, and I typically boot once or twice a day. If skipping the local mSATA drive costs me an additional 15 seconds of boot time, I can live with that for this machine.

My conclusion is that this merits further investigation, so I ordered an Intel Next Unit of Computing Kit DC3217IYE, a power cord, and 2 GB of 1.35 V RAM from Amazon for under $325. If it is a horrible experience, I can add an mSATA drive later.

I live on an island with pricey electrons, so the NUC's efficiency has great appeal. At $325, it has strong appeal as a Linux box. It has almost all the benefits Atom was supposed to have, without the detriment of low cost. More important, Intel's Linux support of its mainstream graphics (like HD 4000) is pretty good. I think I like it.
 

imagoon

Diamond Member
Feb 19, 2003
For your home, booting diskless via GigE would work. Just know that anything disk-intensive is going to have higher latency due to the network. If you can disable or massively limit swap, the performance should be OK overall once the machine is booted. The main place SSDs win is random I/O, so if you were booting 25 machines you wouldn't end up thrashing the way a spinning disk would. Heavy random I/O is what kills spinning-disk performance. The SSD, on the other hand, wouldn't even blink at heavy random I/O and likely could still saturate the network, making the network the slow link rather than the disk.
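
If you want to eyeball the swap side on Linux first, here is a minimal sketch using only stock /proc files (the actual limiting, e.g. `sysctl vm.swappiness=1`, is a separate step):

```python
# Report the current swappiness setting and swap usage on a Linux box,
# reading only stock /proc interfaces.

def read_first(path):
    with open(path) as f:
        return f.readline().strip()

print("vm.swappiness =", read_first("/proc/sys/vm/swappiness"))

with open("/proc/meminfo") as f:
    info = dict(line.split(":", 1) for line in f)

for key in ("SwapTotal", "SwapFree"):
    print(f"{key}: {info[key].strip()}")
```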
 

thecoolnessrune

Diamond Member
Jun 8, 2005
Half right. That is not what I want to do. I know I am OK with boot speed using a 2.5" WD Blue. I want to set up an Intel NUC as a diskless media consumption station: there is no local boot drive. Instead I will boot off the network, and my OS will be located on a networked Intel SSD.

Intel's NUC with Ethernet is set up for either mSATA boot, USB boot, or network boot: there is no plain SATA option. Generally, flash drives boot faster than rotating disks due to faster random I/O (a benefit). Network boot is slow due to latency (a detriment). My question was how to weigh the benefit against the detriment to conclude which would be faster when you boot off a networked SSD.

From the replies posted above, my first sigh of relief comes from the fact that no one said "You idiot, haven't you seen this testing that conclusively shows that is a bad idea?" My second point of optimism comes from the prudent selection of weasel words like "probably", which suggests this is not a slam-dunk issue.

My intended use is solely media consumption and light Internet browsing, and I typically boot once or twice a day. If skipping the local mSATA drive costs me an additional 15 seconds of boot time, I can live with that for this machine.

My conclusion is that this merits further investigation, so I ordered an Intel Next Unit of Computing Kit DC3217IYE, a power cord, and 2 GB of 1.35 V RAM from Amazon for under $325. If it is a horrible experience, I can add an mSATA drive later.

I live on an island with pricey electrons, so the NUC's efficiency has great appeal. At $325, it has strong appeal as a Linux box. It has almost all the benefits Atom was supposed to have, without the detriment of low cost. More important, Intel's Linux support of its mainstream graphics (like HD 4000) is pretty good. I think I like it.

Ah, in that case it's more dependent on how you set up your network. If you don't have a method set up to do jumbo frames over your network for iSCSI, you'll probably be disappointed.
 

imagoon

Diamond Member
Feb 19, 2003
Ah, in that case it's more dependent on how you set up your network. If you don't have a method set up to do jumbo frames over your network for iSCSI, you'll probably be disappointed.

Jumbo frames only help with CPU usage and a very tiny amount of latency. There are entire companies that boot thin clients via 1500-MTU gigabit without issue. Jumbo frames are very niche and mostly only seen in storage.
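
The per-frame arithmetic backs that up; a quick sketch (standard IPv4/TCP header sizes, and the 100 MB transfer size is an arbitrary example):

```python
# How many frames does a 100 MB transfer take at 1500 vs 9000 MTU?
# Fewer frames mostly means fewer interrupts and less CPU work,
# not meaningfully less time on the wire.

HEADERS = 40                    # IPv4 (20) + TCP (20) bytes per frame
TRANSFER = 100 * 1024 * 1024    # arbitrary 100 MB example

for mtu in (1500, 9000):
    payload = mtu - HEADERS
    frames = TRANSFER / payload
    overhead = HEADERS / mtu * 100
    print(f"MTU {mtu}: {frames:,.0f} frames, {overhead:.1f}% header overhead")
```

The jump to jumbo frames cuts the frame count about six-fold but only shaves a couple of percent of header overhead, which is why the win is CPU, not latency.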
 

thecoolnessrune

Diamond Member
Jun 8, 2005
Jumbo frames only help with CPU usage and a very tiny amount of latency. There are entire companies that boot thin clients via 1500-MTU gigabit without issue. Jumbo frames are very niche and mostly only seen in storage.

I understand this, but it sounds as though the OP is not happy with the speed of a 2.5" platter-based hard drive. If that is the case, unless he does some heavy optimization, remote-booting an SSD won't be much better, especially if he is running that same traffic alongside general Ethernet traffic. This isn't like a storage OS or VMware that barely touches the boot drive outside of logs; Windows doesn't work like that. It needs HDD access, and it needs it often, and all these transactions will have at least 7 ms of lag added (nearly double the seek time of your average platter-based HDD).

Thin clients often work in the form of an RDP shell. There is no storage data being transacted at all, just an RDP session. Very different data needs.
 

imagoon

Diamond Member
Feb 19, 2003
I understand this, but it sounds as though the OP is not happy with the speed of a 2.5" platter-based hard drive. If that is the case, unless he does some heavy optimization, remote-booting an SSD won't be much better, especially if he is running that same traffic alongside general Ethernet traffic. This isn't like a storage OS or VMware that barely touches the boot drive outside of logs; Windows doesn't work like that. It needs HDD access, and it needs it often, and all these transactions will have at least 7 ms of lag added (nearly double the seek time of your average platter-based HDD).

Thin clients often work in the form of an RDP shell. There is no storage data being transacted at all, just an RDP session. Very different data needs.

True, but I have worked with network-booted OSes before and noted that jumbo frames really didn't add anything in the testing environments. Windows doesn't handle network booting well, via PXE at least. iSCSI via a NIC with native iSCSI boot worked pretty well. Can you tell me where you got 7 ms from? All of my lab servers had sub-1 ms times when using iSCSI.
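
For anyone who wants to sanity-check their own numbers, a minimal sketch that times application-level TCP connects to the storage box (HOST and PORT are placeholders for your own server; this measures raw network round trips, not iSCSI protocol latency):

```python
import socket
import time

HOST, PORT = "192.168.1.10", 3260   # placeholders: your server, iSCSI port
N = 100

# Time N TCP connects as a rough stand-in for network round-trip latency.
samples = []
for _ in range(N):
    t0 = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=2):
        pass
    samples.append((time.perf_counter() - t0) * 1000)

samples.sort()
print(f"median: {samples[N // 2]:.3f} ms, worst: {samples[-1]:.3f} ms")
```

On a quiet gigabit LAN the median should sit well under a millisecond; anything in the multi-millisecond range points at the network setup rather than the disk.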
 

thecoolnessrune

Diamond Member
Jun 8, 2005
True, but I have worked with network-booted OSes before and noted that jumbo frames really didn't add anything in the testing environments. Windows doesn't handle network booting well, via PXE at least. iSCSI via a NIC with native iSCSI boot worked pretty well. Can you tell me where you got 7 ms from? All of my lab servers had sub-1 ms times when using iSCSI.

7-20 ms is a typical latency that ESXi folks in the VM community see when trying to tweak iSCSI on their network (as it is rarely ideal when first turned on). The 7 ms will be heavily dependent on a number of factors.

Primarily, what other traffic is going over the Ethernet link? The NUC only has a single Ethernet port (unless the OP goes with a USB gigabit adapter for general traffic). The more general traffic the switch has to send down the same port, the more the iSCSI traffic suffers (especially if we are talking about a basic unmanaged switch). The NUC's NIC is also unlikely to offer offload, which, while not necessarily putting an undue strain on the CPU, does tend to increase network latency.

My point really isn't whether or not it would work, just that the latency introduced by a network boot doesn't really make it worth putting an SSD in another system. Just use a basic platter drive. If you want the benefits of an SSD, just put an mSATA SSD in the NUC. The 128 GB Crucial M4 costs only about $10-15 more in mSATA form than in 2.5" SATA form. Why not get something that will work in his NUC if he's looking to get one anyway?
 

imagoon

Diamond Member
Feb 19, 2003
7-20 ms is a typical latency that ESXi folks in the VM community see when trying to tweak iSCSI on their network (as it is rarely ideal when first turned on). The 7 ms will be heavily dependent on a number of factors.

Primarily, what other traffic is going over the Ethernet link? The NUC only has a single Ethernet port (unless the OP goes with a USB gigabit adapter for general traffic). The more general traffic the switch has to send down the same port, the more the iSCSI traffic suffers (especially if we are talking about a basic unmanaged switch). The NUC's NIC is also unlikely to offer offload, which, while not necessarily putting an undue strain on the CPU, does tend to increase network latency.

My point really isn't whether or not it would work, just that the latency introduced by a network boot doesn't really make it worth putting an SSD in another system. Just use a basic platter drive. If you want the benefits of an SSD, just put an mSATA SSD in the NUC. The 128 GB Crucial M4 costs only about $10-15 more in mSATA form than in 2.5" SATA form. Why not get something that will work in his NUC if he's looking to get one anyway?

Hmm. That still doesn't mimic what I am seeing in my test environment. DAS vs. SAN disk latency is 'identical' but +1 ms; actually, at this exact moment it is +498 microseconds. VMware is reporting it to me in microseconds because milliseconds aren't accurate enough.

---

And yes, the amount of other traffic will greatly affect the running OS. That is why, when you do a PXE-booted thin terminal, it is normally a small OS that runs Terminal Services / Citrix / whatever. It is loaded directly into a dedicated block of RAM. This keeps the visual traffic from competing with the OS.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
And yes, the amount of other traffic will greatly affect the running OS. That is why, when you do a PXE-booted thin terminal, it is normally a small OS that runs Terminal Services / Citrix / whatever. It is loaded directly into a dedicated block of RAM. This keeps the visual traffic from competing with the OS.

I agree with all of this, but the OP is not doing any of it. He wants to run his OS directly from a remote drive. There isn't a point to an SSD when implementing it like that.
 

KentState

Diamond Member
Oct 19, 2001
I'm not sure what your network setup is like, but I currently run ESXi at home on my desktop and use a NAS for NFS storage. I really don't have any issues running multiple instances of Windows 2008 and various applications. I have one management network/NIC and a second switch/NIC for VM traffic to segment things, and the NAS uses teaming.
 

dealcorn

Senior member
May 28, 2011
As I read the responses, my impression is that this merits further investigation. As my client is hardwired directly to the server with no intervening switch or competing network traffic, my network environment is ideal for this experiment. By way of repeated clarification, I identified a WD Blue 2.5" drive as a comparison point because I consider that known-OK performance. Rough comparability in boot performance is my definition of success.

The point of comparison is whether I will be OK with using existing SSD capacity on the server versus the fresh purchase of an mSATA drive ($0.00 versus $70-100). Given that this is a $325 build, I prefer not to increase the cost by 20% or more absent some benefit from the improved performance. If you read the old Atom marketing literature, I intend to run workloads for which Atom was designed. As long as boot time is not a killer, the NUC will feel like King Kong in this context.

I am disappointed that no one called me out on ACPI. Historically, Windows support of ACPI was excellent while Linux was more in the "Huh?" category. However, Intel tasked one of its open source gurus to get that fixed, and it should work on the NUC with a recent kernel. My original question was about "boot" time, but if ACPI works correctly, the timing at issue is not "boot" but "resume from G1/S4 sleep". I suspect that resume speed may be fast enough to make the whole issue moot (assuming OK SSD capacity on the server). Am I correct that network latency is less of an issue on resume than it is on boot? The SSD benefit may dominate even if it is on a server rather than local.
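
As a first check once the box arrives, here is a minimal sketch (Python; /sys/power/state is the stock sysfs file, but which entries show up depends on the BIOS and kernel version) for asking the kernel which sleep states it exposes:

```python
# List the ACPI sleep states the running kernel says it supports.
# Entries like "mem" (S3 suspend) and "disk" (S4 hibernate) only
# appear if the platform and kernel both support them.

from pathlib import Path

state_file = Path("/sys/power/state")
if state_file.exists():
    states = state_file.read_text().split()
    print("supported sleep states:", ", ".join(states))
    print("S4 hibernate available:", "disk" in states)
else:
    print("no /sys/power/state; kernel built without sleep support?")
```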
 

imagoon

Diamond Member
Feb 19, 2003
It has been a while since I tried ACPI sleep modes on a network-booted OS. Last time I tried, sleep mode resulted in the NIC going to sleep deeply enough to cause both Windows and Linux to BSOD / kernel panic on the way back awake.
 

dealcorn

Senior member
May 28, 2011
It has been a while since I tried ACPI sleep modes on a network-booted OS. Last time I tried, sleep mode resulted in the NIC going to sleep deeply enough to cause both Windows and Linux to BSOD / kernel panic on the way back awake.

Your concern about ACPI functionality is terribly well grounded. My sole source of optimism that it may work with the Linux 3.5 kernel comes from a sketchy mention on Phoronix: http://www.phoronix.com/scan.php?page=news_item&px=MTExMjM. Until I get reports that it is a no-go, it appears worth a look-see, even if an upgrade to the 3.5 kernel is required. Intel kind of needs ACPI to work to bolster the positioning of Ultrabooks as viable competitors in the marketplace. The NUC is just a cheap Ultrabook where they do not bother to include a keyboard, monitor, battery, or most of the plastic required to build an Ultrabook case.

There are many excellent reasons to purchase an SSD, especially in commercial settings. On the home front, outside of gaming and video transcoding, the principal benefit of an SSD is a faster boot time. If it is possible to use ACPI resume to mimic the perceived fast-boot benefit of an SSD in a diskless environment, it is a big win.