Two NICs on same subnet?


freegeeks

Diamond Member
spidey07 said:
Wow. Not sure if serious. That is never done nor recommended on even the largest VoIP installations, and I've done plenty.

Are you serious? Are you telling me that you are deploying servers in professional environments with only one IP interface? That you don't split your traffic, meaning that signalling, RTP, OAM, API calls, and the billing interface all sit on the same interface / IP address? I mean, if that is the case, I need to call Sun and tell them they are doing it wrong, because they make all these servers with 4 to 6 network interfaces :)
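
To give an idea of what I mean, here is a rough sketch (Linux/iproute2; the interface names and subnets are made up for illustration):

Code:
    # one interface and subnet per traffic type
    ip addr add 10.10.1.5/24 dev eth0   # signalling
    ip addr add 10.10.2.5/24 dev eth1   # RTP / media
    ip addr add 10.10.3.5/24 dev eth2   # OAM / management
    ip addr add 10.10.4.5/24 dev eth3   # billing / API calls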
 
Last edited:

spidey07

No Lifer
freegeeks said:
Are you serious? Are you telling me that you are deploying servers in professional environments with only one IP interface? That you don't split your traffic, meaning that signalling, RTP, mgmt, API calls, and the billing interface all sit on the same interface / IP address? I mean, if that is the case, I need to call Sun and tell them they are doing it wrong, because they make all these servers with 4 to 6 network interfaces

Notice this thread is about two NICs in the SAME subnet, not different ones. And the normal practice for multiple NICs is to use teaming/link aggregation (LAG) or Sun multipathing (same thing).

Yes, I'm dead serious. Professional deployments only have a single IP interface, with up to 8 physical 1-gig NICs into two switches for high throughput and redundancy. In large VMware clusters built for performance and redundancy you'll use two of the NICs for service console/vMotion and the other two as trunks for the various guest OS VLANs. On different switches for redundancy, even better if you can do cross-chassis link aggregation (VSS or Nexus vPCs). But again, the SC/vMotion NICs don't have a default gateway and are all one large layer-2 network.

For large database clusters you use those extra NICs for heartbeat/replication, but still on DIFFERENT subnets, without a default gateway.

But you don't put two NICs into the same layer-2 broadcast domain (subnet). It is just never done by a competent professional, because of the problems and complexity it causes.
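
If it helps, this is roughly what teaming looks like on a Linux box (a sketch only; the interface names, bond mode, and address are examples, and your switch has to be configured for LACP to match):

Code:
    # one IP address over multiple physical NICs via 802.3ad/LACP bonding
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set eth0 down
    ip link set eth1 down
    ip link set eth0 master bond0
    ip link set eth1 master bond0
    ip addr add 10.20.0.10/24 dev bond0
    ip link set bond0 up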
 

freegeeks

Diamond Member
spidey07 said:
Notice this thread is about two NICs in the SAME subnet, not different ones. And the normal practice for multiple NICs is to use teaming/link aggregation (LAG) or Sun multipathing (same thing).

Yes, I'm dead serious. Professional deployments only have a single IP interface, with up to 8 physical 1-gig NICs into two switches for high throughput and redundancy. In large VMware clusters built for performance and redundancy you'll use two of the NICs for service console/vMotion and the other two as trunks for the various guest OS VLANs. On different switches for redundancy, even better if you can do cross-chassis link aggregation (VSS or Nexus vPCs). But again, the SC/vMotion NICs don't have a default gateway and are all one large layer-2 network.

For large database clusters you use those extra NICs for heartbeat/replication, but still on DIFFERENT subnets, without a default gateway.

But you don't put two NICs into the same layer-2 broadcast domain (subnet). It is just never done by a competent professional, because of the problems and complexity it causes.

We are mixing things up here. Putting two NICs into the same layer-2 broadcast domain is indeed bad, but you made the statement that if you need a route on a host then you are doing something wrong. When you say one IP interface, do you really mean one subnet (no matter whether it's a bonded interface or not)? Just want to know that before making other statements :)

Edit: just to be sure: you install a server for whatever application. Backup, the management interface, and external systems connecting in all go to the same IP on this server (the server only has one IP)?
 
Last edited:

spidey07

No Lifer
There are very few sane reasons to need static routes on a host. Out-of-band management or provisioning networks are about the only ones.
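
And even then it amounts to a single line on the host (a sketch; the management prefix and gateway address here are invented):

Code:
    # point the OOB management prefix at the management gateway
    ip route add 10.99.0.0/16 via 192.168.100.1 dev eth1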
 

freegeeks

Diamond Member
spidey07 said:
There are very few sane reasons to need static routes on a host. Out-of-band management or provisioning networks are about the only ones.

I respect your posts in the networking forum, but that is not true at all. I have done a lot of designs (MySQL clusters, Oracle RAC, RADIUS, ...) and implemented a lot of stuff that other architects designed, and hosts with different IP interfaces and static routes are VERY common and good practice. When I implement a server it has at least 2 IP interfaces (1 OAM and at least 1 for the application itself). A simple example is an Apache server: when I need to implement SNMP monitoring, my management station will poll this server on the management IP and not on the interface where all the HTTP traffic comes in. Typically I will use a bonded interface with 802.1q tagging on top to separate my different IP interfaces.
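
Roughly like this, to sketch it out (Linux/iproute2; the VLAN IDs, addresses, and the management-station prefix are just examples):

Code:
    # LACP bond with 802.1q VLANs on top, one IP interface per function
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set eth0 down
    ip link set eth1 down
    ip link set eth0 master bond0
    ip link set eth1 master bond0
    ip link add link bond0 name bond0.10 type vlan id 10   # OAM
    ip link add link bond0 name bond0.20 type vlan id 20   # application
    ip addr add 192.168.10.5/24 dev bond0.10
    ip addr add 192.168.20.5/24 dev bond0.20
    ip link set bond0 up
    # static route so SNMP polls from the management station stay on the OAM side
    ip route add 10.50.0.0/16 via 192.168.10.1 dev bond0.10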

Or there must be a big difference in design between Europe and the US :)
 

seepy83

Platinum Member
If you put all of your management traffic on its own subnet, then you don't need to go mucking around with the routing table on the host...
 

spidey07

No Lifer
seepy83 said:
If you put all of your management traffic on its own subnet, then you don't need to go mucking around with the routing table on the host...

What's normally done for OOB or management networks is to have that interface/network NAT'd on the router. That way no static routes are needed, since it's always the management station talking to the management IP.

Since the management station is NAT'd, the host's management IP sees the traffic as coming from its own subnet, so no routing is needed on the host.
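
On a Linux box acting as the router that would look something like this (a sketch; the addresses are invented, and on Cisco gear you'd use the equivalent NAT statements instead):

Code:
    # source-NAT the management station to the router's address on the OOB subnet,
    # so the managed host sees the poll as local and needs no route back
    iptables -t nat -A POSTROUTING -s 10.1.1.50 -o eth2 -j SNAT --to-source 192.168.100.1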