Solaris NIC bonding

SoulAssassin

Diamond Member
Feb 1, 2001
Just wondering if anyone knew of a good overview/guide for implementing NIC bonding on Solaris 10. The server is a Fujitsu 450 (not my idea to buy Fujitsu), currently with either a single NIC or dual NICs in failover (doesn't matter, I can buy new). I'm looking for something that goes into the switch requirements and how to implement it on the server itself, the end goal being at least 2Gb, if not 4Gb, of receive capacity. Also open to suggestions on which NICs to use. Slots are at a premium, so dual-headed cards are probably the right direction. The server SAs are open to it, the telecom guys aren't, so I need to make it as easy as possible for them, or at least sell it to them as being easy.

TIA.
 

spidey07

No Lifer
Aug 4, 2000
You'll need a switch that supports either 802.3ad link aggregation or Cisco EtherChannel; this is what allows the "bonding". Most any decent switch should support 802.3ad. There is configuration involved on the switch side as well, to set up the channel and tell it what kind of algorithm to use to balance traffic.
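
For reference, the switch side usually boils down to just a few lines. A rough sketch on a Cisco IOS switch (the interface names, channel-group number, and VLAN below are made up, adjust for your hardware):

    ! hypothetical Catalyst config - LACP (802.3ad) channel across two gig ports
    configure terminal
     port-channel load-balance src-dst-ip         ! pick the traffic-balancing hash
     interface range GigabitEthernet1/0/1 - 2
      channel-group 1 mode active                 ! "active" = LACP; use "on" for static EtherChannel
     interface Port-channel1
      switchport mode access
      switchport access vlan 100
     end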

Then see here for more info on the solaris side.
http://www.sun.com/products/ne...et/suntrunking/faq.xml
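
If you end up on plain Solaris 10 rather than the SunTrunking package, link aggregation is also built into dladm as long as the NIC drivers are GLDv3 (e1000g, bge, nxge and friends). A minimal sketch, with interface names and addressing that are just examples:

    # build aggregation key 1 over two GLDv3 interfaces (interfaces must be unplumbed first)
    dladm create-aggr -d e1000g0 -d e1000g1 1
    # optionally run LACP to match an 802.3ad channel on the switch
    dladm modify-aggr -l active 1
    # plumb and address it like any other interface
    ifconfig aggr1 plumb 192.168.1.10 netmask 255.255.255.0 up
    # make it persistent across reboots
    echo "192.168.1.10 netmask 255.255.255.0 up" > /etc/hostname.aggr1
    # verify
    dladm show-aggr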

Just be aware that bonding multiple interfaces DOES NOT GIVE YOU more bandwidth for a single conversation like a file transfer. You'll find that for a particular conversation, only a single nic will be used.
 

SoulAssassin

Diamond Member
Feb 1, 2001
Originally posted by: spidey07
Just be aware that bonding multiple interfaces DOES NOT GIVE YOU more bandwidth for a single conversation like a file transfer. You'll find that for a particular conversation, only a single nic will be used.

That's fine as this is for a backup server and there will be multiple inbound connections. Also we probably don't have anything that could saturate more than one nic anyways. :)

 

SoulAssassin

Diamond Member
Feb 1, 2001
Originally posted by: spidey07
10 gig is already getting cheap if you want a high bandwidth connection.

Exactly, this is the debate we need to have with the telecom guys. And finance actually. It may be simpler to throw money at it and purchase a couple 10gb blades/cards than to go through all the testing/hassle of bonding.
 

spidey07

No Lifer
Aug 4, 2000
Originally posted by: SoulAssassin
Originally posted by: spidey07
10 gig is already getting cheap if you want a high bandwidth connection.

Exactly, this is the debate we need to have with the telecom guys. And finance actually. It may be simpler to throw money at it and purchase a couple 10gb blades/cards than to go through all the testing/hassle of bonding.

The financial impact of 10g and proper design can be a bigger hit than you think. It is absolutely something you should be looking at for a backup solution. The bigger question is "why are you doing backups across the LAN? That's what the SAN is for".

Food for thought.

But I'd have to work directly with your server/SAN/network teams to really be of value. Just throwing out another option, but your backups shouldn't be going across the LAN; that's what the SAN is for.

You have a design decision (keep money out of this) on what direction you want to head. You need an experienced consultant. ;)
 

SoulAssassin

Diamond Member
Feb 1, 2001
Originally posted by: spidey07
The financial impact of 10g and proper design can be a bigger hit than you think. It is absolutely something you should be looking at for a backup solution. The bigger question is "why are you doing backups across the LAN? That's what the SAN is for".

Food for thought.

But I'd have to work directly with your server/SAN/network teams to really be of value. Just throwing out another option, but your backups shouldn't be going across the LAN; that's what the SAN is for.

You have a design decision (keep money out of this) on what direction you want to head. You need an experienced consultant. ;)



I'm very well versed in the world of SAN backups, just not NIC bonding in Solaris. ;) Unfortunately my predecessor wasn't good at either (this is a new gig I started last week). I already have the ball rolling on converting an 8TB SAS box and a 4.7TB Oracle box over to SAN media servers (why in god's name you would even try to back up 8TB over the network I dunno), as well as the addition of a dedicated incremental media server.

The problem is that the NIC on the master/media servers is still a bottleneck for LAN clients. The bus, DSSUs, and tape drives can push more, but the NIC can't keep up. Converting everything to a SAN media server sounds nice but is obviously not realistic. NBU starts to crap out and do weird stuff when you get to about 200-250 tape drive instances, and there's the management overhead of, say, swapping a drive and having to update serial #'s on a large number of media servers. Plus we're doing BCVs and other stuff like that. A dedicated backup network is often a better option for "mid-sized" clients. Point being, you can't and don't want to make -everything- a media server.

Unfortunately, money comes into play as I can only work with the budget allocated. A sad fact in the business world, but it is what it is. I did NetBackup consulting for about a year and a half, including one of the world's largest NBU environments (3rd largest, actually), until taking this gig. I've worked with it doing enterprise backups since about 2001.
 

spidey07

No Lifer
Aug 4, 2000
Well you could always do a few media servers with a 4 gig channel to each (although I think you'll max out around 2 gig depending on server architecture; possibly do 2x2 gig channels, one primary and the other failing over to another switch). Would work well as long as their SAN connections could handle it.

If it's for backup you should get a decent load distribution if you use src/dst TCP ports as the link selection criterion (hash).
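
The hash policy ends up being a one-liner on each end. Roughly (the aggregation key is an example, and not every Catalyst exposes an L4 hash, so check your platform):

    # Solaris: hash on transport-layer (TCP/UDP) ports for aggregation key 1
    dladm modify-aggr -P L4 1
    # Cisco side, where the platform supports it (many access switches only offer MAC/IP hashes)
    port-channel load-balance src-dst-port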
 

SoulAssassin

Diamond Member
Feb 1, 2001
Originally posted by: spidey07
Well you could always do a few media servers with a 4 gig channel to each (although I think you'll max out around 2 gig depending on server architecture; possibly do 2x2 gig channels, one primary and the other failing over to another switch). Would work well as long as their SAN connections could handle it.

If it's for backup you should get a decent load distribution if you use src/dst TCP ports as the link selection criterion (hash).

That's a good scenario; the limiting factor here is slots in some cases. They (well, I should say we now) have a DL380 G5 media server running Red Hat... a 380 is a piss-poor media server because of its lack of slots; why they chose that I dunno. They also like to mix tape and disk on the same HBA, but I won't even go down that road.

Thanks again for the link on Sun's site...that was exactly what I needed.