
Using firehose

My friend pointed me at this site:

http://heroinewarrior.com/firehose.php3

Basically, you can combine multiple NICs in a *nix box to increase the total maximum bandwidth (e.g. two 1000 Mbit cards = a 2000 Mbit connection). Has anyone played around with this yet? I haven't been able to find much info on the web when googling, and it doesn't appear to have been discussed in these forums. Basically, I'd like to have this setup on my Linux server and desktop so that I can move large files between them rapidly.
 
Load balancing is nothing new. However, I've never heard of Firehose, and I don't know much about Linux. It might be worth a shot.
 
Never heard of it.

Normally bonding NICs is done at the driver level and requires a switch that supports it.

Also - huge servers have a hard time filling a single GigE NIC, let alone two. Bonding is normally done for redundancy purposes: one NIC goes to one access switch and the other to another access switch.
 
Originally posted by: pak9rabid
I'd like to have this setup on my Linux server and desktop so that I can move large files between them rapidly.

Interesting, and it probably can do the above. However, it's greatly limited by being just a toolkit with some dedicated applications, not something integrated into the OS / networking stack. This means that you can transfer files using the pre-built utility, but can't do much else. Transferring files back and forth is good and all, but a better use of a fast file server is direct access through applications.

I'd also think that getting greater-than-gigabit speeds during file transfers would require very fast drive arrays, so you'd typically see no benefit over basic gigabit. But you don't know all the cases and variables until you try, and this could be a useful tool for testing just that (though probably confirming the negative).

You'd also get to see some other bottlenecks in addition to the drive bottleneck. CPU bottleneck perhaps. PCI bottleneck (if you're going through standard PCI, it'd be a killer).
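To put rough numbers on the drive-bottleneck point: the link rates below are exact conversions, but the ~60 MB/s sustained disk rate is just an assumed figure for a typical single drive of the era, not a measured value.

```python
# Back-of-the-envelope check: can a disk keep up with bonded gigabit?

def link_mb_per_s(mbit: int) -> float:
    """Convert a link rate in Mbit/s to MB/s (1 byte = 8 bits)."""
    return mbit / 8

single_gige = link_mb_per_s(1000)   # 125.0 MB/s
bonded_pair = link_mb_per_s(2000)   # 250.0 MB/s
assumed_disk = 60.0                 # MB/s -- assumed single-drive figure

# The transfer runs at the slowest stage of the pipeline.
effective = min(bonded_pair, assumed_disk)
print(f"single GigE: {single_gige} MB/s, bonded pair: {bonded_pair} MB/s")
print(f"effective rate with the assumed disk: {effective} MB/s")
```

With those assumptions, even a single gigabit link already outruns the disk, which is the "no benefit over basic gigabit" case.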
 
From what I read on the site listed above, you can use this with anything that uses TCP/IP. But yeah, I didn't think about the hard drive bottleneck... I'm sure that would be the limiting factor.
 
I think that 802.3ad is the official way to do this sort of thing; but it's nice to see an app that'll do it, albeit only for certain applications, over just about any TCP/IP link.
 
Actually, when I went to recompile a Linux kernel, I noticed the "Bonding driver support" option (2.6 kernel). This looks like exactly the same thing that Firehose does; it's basically 802.3ad support in Linux. For those of you interested, you can find it in the kernel config here:

-> Device Drivers
  -> Network device support
    -> Network device support
      -> Bonding driver support
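Once that driver is built, the setup is roughly the following sketch. The interface names (eth0/eth1) and the IP address are assumptions for illustration, and mode=4 (802.3ad) needs a switch with a matching aggregation group configured; other modes exist that don't require switch support.

```shell
# Load the bonding driver in 802.3ad mode, checking link state every 100 ms.
modprobe bonding mode=4 miimon=100

# Bring up the bond interface (address/netmask are assumed example values).
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up

# Enslave both physical NICs to bond0.
ifenslave bond0 eth0 eth1

# Verify aggregation status.
cat /proc/net/bonding/bond0
```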
 
OpenBSD's trunk(4) seems interesting, although I don't know how much it'll help performance-wise. One of the ways the devs said they would use it appears in the second example: a dev using wireless on his laptop most of the time, but switching to wired for bigger transfers.
 
Originally posted by: n0cmonkey
OpenBSD's trunk(4) seems interesting, although I don't know how much it'll help performance-wise. One of the ways the devs said they would use it appears in the second example: a dev using wireless on his laptop most of the time, but switching to wired for bigger transfers.

Of the specified protocols for trunk, none of them seem to address performance improvement. Round Robin sounds more like load balancing, not the concurrent transfers that would be implied for performance improvement.

Broadcom and others have some utilities that provide such features. The Broadcom Advanced Control Suite (2) provides a couple of versions of their own "Smart Load Balancing" and a couple of versions of 802.3ad; 802.3ad requires a supporting switch. I've tried the SLB, but not properly -- with non-Broadcom NICs on one end. In some cases/configurations, it alternates the connections in a round-robin. In others, it seems to do link aggregation as desired, but gives me no performance improvement.

Has anyone had more success than me with this / tried it in a kosher environment?
 
Originally posted by: Madwand1
Originally posted by: n0cmonkey
OpenBSD's trunk(4) seems interesting, although I don't know how much it'll help performance-wise. One of the ways the devs said they would use it appears in the second example: a dev using wireless on his laptop most of the time, but switching to wired for bigger transfers.

Of the specified protocols for trunk, none of them seem to address performance improvement. Round Robin sounds more like load balancing, not the concurrent transfers that would be implied for performance improvement.

More connections utilizing more bandwidth. Sounds like a performance improvement to me. 😉

No, it isn't going to speed up the average download, but it's more useful. 🙂
 
Originally posted by: n0cmonkey
More connections utilizing more bandwidth. Sounds like a performance improvement to me. 😉

A lot of things sound like performance improvements until you try them. The Broadcom SLB, for example, uses both connections and gives me no improvement. Has anyone actually confirmed a performance improvement from trunk or any of the others (even just for a benchmark app)?
 
Originally posted by: Madwand1
Originally posted by: n0cmonkey
More connections utilizing more bandwidth. Sounds like a performance improvement to me. 😉

A lot of things sound like performance improvements until you try them. The Broadcom SLB, for example, uses both connections and gives me no improvement. Has anyone actually confirmed a performance improvement from trunk or any of the others (even just for a benchmark app)?

I haven't seen anything, but I haven't been paying too close attention. The timing at which this trunk(4) interface was added makes me believe that it was more for failover than anything else. I'm guessing it's being used with CARP and whatnot to aid in automatic failover when stuff happens.

EDIT: What kind of benchmark do you want? If I feel so inclined I may try it out.
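As one answer to "what kind of benchmark": the simplest test is timing how long it takes to push N bytes through a TCP connection. Here is a minimal sketch of such a micro-benchmark; it runs against loopback to sanity-check the harness, but you'd point the client at the trunked host. The transfer size and chunk size are arbitrary choices.

```python
import socket
import threading
import time

def serve(sock: socket.socket, total: int) -> None:
    """Accept one connection and drain `total` bytes from it."""
    conn, _ = sock.accept()
    with conn:
        remaining = total
        while remaining:
            chunk = conn.recv(min(65536, remaining))
            if not chunk:
                break
            remaining -= len(chunk)

def benchmark(host: str, port: int, total: int) -> float:
    """Send `total` bytes to host:port and return the MB/s achieved."""
    payload = b"\x00" * 65536
    start = time.monotonic()
    with socket.create_connection((host, port)) as c:
        sent = 0
        while sent < total:
            sent += c.send(payload[: total - sent])
    elapsed = time.monotonic() - start
    return (total / 1_000_000) / elapsed

if __name__ == "__main__":
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))       # ephemeral port on loopback
    srv.listen(1)
    port = srv.getsockname()[1]
    total = 50_000_000               # 50 MB test transfer
    t = threading.Thread(target=serve, args=(srv, total))
    t.start()
    rate = benchmark("127.0.0.1", port, total)
    t.join()
    srv.close()
    print(f"{rate:.1f} MB/s over loopback")
```

A single-stream test like this is exactly the case where link aggregation is expected to show no gain; running several clients in parallel against the trunked box would be the interesting comparison.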
 
Originally posted by: Madwand1
Originally posted by: n0cmonkey
More connections utilizing more bandwidth. Sounds like a performance improvement to me. 😉

A lot of things sound like performance improvement until you try them. The Broadcom SLB for example uses both connections and gives me no improvement. Has anyone actually confirmed a performance improvement from trunk or any of the others (even just for a benchmark app?)

Yes - we see steady 60-80% utilization on 4 NICs using Intel's bonding with Cisco EtherChannel (very similar to link aggregation).

The thing is, the distribution algorithms for "which NIC do I send this packet out of?" are based on layer-2 or layer-3 addresses. So a computer-to-computer transfer won't see any improvement, because it will be following one link/path. But for servers that have thousands to tens of thousands of connections, the load will balance out and you'll see higher overall throughput from the server.

-edit- But not many servers are set up that way anymore. They just have two gig cards in them, each going to a separate switch running in failover mode instead of bonded/channeled. Large backup servers will have 4 - two bonded to one switch, two bonded to another, for failover.
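The address-hash behavior described above can be sketched in a few lines. This mirrors a common src-dst-IP XOR policy for illustration only; real drivers and switches each have their own hash variants.

```python
import ipaddress

def pick_link(src: str, dst: str, n_links: int) -> int:
    """Hash a src/dst IP pair onto one of n_links egress ports."""
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) % n_links

# One computer-to-computer transfer: every packet hashes to the same link,
# so a single flow can never exceed one NIC's bandwidth.
print(pick_link("192.168.1.10", "192.168.1.20", 4))  # → 2
print(pick_link("192.168.1.10", "192.168.1.20", 4))  # → 2 (same link again)

# Many clients: flows spread across the links, so aggregate throughput rises.
links = {pick_link(f"10.0.0.{i}", "192.168.1.10", 4) for i in range(1, 50)}
print(sorted(links))  # → [0, 1, 2, 3]
```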
 