Two DHCP Servers

Rogue

Banned
Jan 28, 2000
5,774
0
0
We have two DHCP servers on our network handing out IP addresses. They sit in our two most populous areas, not that physical distance matters much in this situation. Anyway, each is configured to lease out roughly half of a range, with no ranges overlapping.

DHCP Server #1: 10.2.3.30 to 10.2.3.127 available for lease (14 day lease time)
DHCP Server #2: 10.2.3.128 to 10.2.3.254 available for lease (14 day lease time)

The ultimate goal is to have them serve in a more or less redundant manner; however, one server is handing out 99% of the IP leases and the other is handing out the remaining 1%. One server is running Windows 2000 Server and the other is still on NT4 Server. After much research, I have come to the conclusion that DHCP works by the client broadcasting a request, and the first DHCP server to respond becomes the one that handles that request. Is this correct?

Working on that assumption, I tested the theory by running a trace to each server from different locations, and I've seen mixed results. Some locations are 4 hops to Server #1 and 5 hops to Server #2, yet Server #2 is leasing the IP, not #1, which is the server they *should* reach first based on my traces and the hop counts.

My question is, has anyone here managed to make a situation like this work? Two servers, two different operating systems, two different scopes split down the middle for each subnet?

Any help is appreciated. If you need more info, ask and you shall receive. Also keep in mind that I didn't set this up; it was here when I started this job, but I've been tasked with fixing it. My recommendation has been to configure the two servers identically, keep them both running, but keep one disconnected from the network. If one goes down, bring the other online. Hell, it's a 14 day lease after all!
 

Santa

Golden Member
Oct 11, 1999
1,168
0
0
Is this across the WAN or on LAN segments?

Do you use DHCP Relays?

You can always test your theory of whether server 1 would take over if server 2 goes down by simply turning off DHCP on server 2.

One server handing out the majority of the leases is not bad. It could actually be a good thing, since only one server is then responsible for the IP table.

I wouldn't worry about equal distribution so much as whether the failover will work and whether all the clients in question will still get a lease if one server goes down.
 

Rogue

Banned
Jan 28, 2000
5,774
0
0
Both servers are on the same LAN segment.

I guess I didn't really say it, but the main issue we have is that the "primary" DHCP server runs out of IP addresses, and machines on that subnet never look beyond that server to the other one.

i.e., DHCP Server #1 leases out all the IPs for the requesting subnet (10.1.1.30 to 10.1.1.127 are all leased), but when one more machine from that subnet comes onto the network, it doesn't look to the other server at all for 10.1.1.128, where the next lease scope begins. Should that happen? I have simply been widening the scope on Server #1 and shrinking it on Server #2 to compensate, but in the event that #1 goes down, I will have to add to the scope on #2 to compensate (that is, unless I get #1 back up in less than 14 days ;)). Does that make sense?
 

Garion

Platinum Member
Apr 23, 2001
2,331
7
81
Unless MS has something that I don't know about, two DHCP servers on the same subnet are, in general, bad karma. Getting real redundancy out of them is pretty tough.

A client DHCP request is a very, very low-level network request which is broadcast out to the entire subnet, meaning that both DHCP servers will see it. Unless you have some very smart DHCP servers, it is likely that both will respond to the request, with the fastest-responding server "winning" and the workstation taking its address. This can result in two addresses being assigned (one from each DHCP server) and the machine possibly being registered in WINS/AD/DDNS with two different addresses.
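If it helps to picture that race, here's a tiny Python sketch (purely illustrative, no real DHCP traffic; the delays and pool addresses are made up): two "servers" hear the same broadcast and the client simply takes whichever offer arrives first.

# Illustrative only: simulate two DHCP servers racing to answer one DISCOVER.
import queue
import threading
import time

offers = queue.Queue()

def dhcp_server(name, pool_start, response_delay):
    # Pretend server: waits its typical response latency, then sends an OFFER.
    time.sleep(response_delay)
    offers.put((name, pool_start))

# Server #1 answers in ~5 ms, Server #2 in ~2 ms (faster box, lighter load, etc.)
threading.Thread(target=dhcp_server, args=("Server #1", "10.2.3.30", 0.005)).start()
threading.Thread(target=dhcp_server, args=("Server #2", "10.2.3.128", 0.002)).start()

winner, offered_ip = offers.get()   # the client accepts the first OFFER it sees
print("Lease taken from %s: %s" % (winner, offered_ip))
# The slower OFFER is simply ignored, which is why one server can end up
# handing out nearly all of the leases.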

There are also issues with redundant DHCP servers when one fails. If you expand the scope on the surviving server, you're going to get problems as it tries to assign addresses which are already in use - given out by the dead server but not in the surviving server's database. This can result in those IPs getting blacklisted or a duplicate IP on the network.

If you have a network built of *mostly* PCs and only a few laptops, DHCP doesn't usually need to be redundant. If a lease expires and can't be renewed, the PC just keeps on using the same address. I've seen times when a DHCP server has gone down for 2+ weeks and nobody noticed. If you have a lot of laptops or machines going on/off the network, it's more crucial.

What I'd do is set one machine as the primary for DHCP services. Keep the second one configured and ready to go, but with the service stopped, and document what to do if you're not around when the primary crashes or goes down. That gives you a quick way to roll to another server if it does fail and avoids the hassles of having redundant DHCP servers on your network.
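The documented procedure can be as simple as this (assuming the standby is the Win2k box and you're using the standard Microsoft DHCP Server service, whose service name is dhcpserver):

REM On the standby server (scope already configured, service left stopped):
REM confirm the primary is really dead, then bring the spare online.
net start dhcpserver

REM Once the primary is repaired, stop the spare again BEFORE restarting
REM DHCP on the primary, so only one server is ever answering requests.
net stop dhcpserver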

- G
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
I guess I didn't really say it, but the main issue we have is that the "primary" DHCP server runs out of IP addresses, and machines on that subnet never look beyond that server to the other one.

The DHCP clients don't look for any specific server. They broadcast "gimme an IP", and if the primary responds first and says "no, I'm all out", the client goes without a usable address. If the secondary server responds first, all is well, but it's a really unreliable timing thing.

I believe Novell has addressed pretty much all of your redundancy issues by integrating their DHCP/DNS servers with NDS so the database can be shared between all the servers, but that's probably not a good solution for you at this point =)
 

Santa

Golden Member
Oct 11, 1999
1,168
0
0
To run dual DHCP servers for redundancy, you have to design it so that each server has a scope large enough to handle the entire population of PCs as if it were the only DHCP server.

There is no communication between the servers, and because of the way the addresses are assigned you cannot hope to distribute the assignments evenly.

You will need to expand your scope if you are running out on one server.

Beyond that, there is no way to really ensure that a particular machine gets its address from one pool instead of the other, unless you statically reserve an IP for each machine, only hand out addresses to machines with reservations, and ignore everything else.
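To put rough numbers on it (just illustrative Python, using the ranges from the first post and a made-up client count):

# How big is each half of the split scope?
server1_pool = range(30, 128)    # 10.2.3.30 - 10.2.3.127  ->  98 addresses
server2_pool = range(128, 255)   # 10.2.3.128 - 10.2.3.254 -> 127 addresses
print(len(server1_pool), len(server2_pool))   # 98 127

# If, say, 150 clients all happen to hit Server #1 first, its pool runs dry
# long before the subnet as a whole does. For real redundancy each server
# would need a pool big enough for every client by itself.
clients = 150
print(clients <= len(server1_pool))           # False - Server #1 runs out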
 

gaidin123

Senior member
May 5, 2000
962
1
0
I believe the ISC dhcpd server under Linux/Unix supports load balancing/failover. If you migrated one or both of those servers (depending on what you're running now) to boxes running it, you could take advantage of those features. The lease database is replicated between both servers.
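For reference, a bare-bones sketch of that failover setup in dhcpd.conf (ISC dhcpd 3.x syntax; the server addresses, timers, and lease times here are placeholders, not anything from your network):

# Primary server's dhcpd.conf (the secondary mirrors this, declaring
# "secondary;" and swapping the address / peer address values).
failover peer "dhcp-pair" {
    primary;
    address 10.2.3.10;              # this server (placeholder)
    peer address 10.2.3.11;         # the other server (placeholder)
    port 647;
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    mclt 3600;
    split 128;                      # primary handles half the client hash space
    load balance max seconds 3;
}

subnet 10.2.3.0 netmask 255.255.255.0 {
    default-lease-time 1209600;     # 14 days, like the current setup
    pool {
        failover peer "dhcp-pair";
        range 10.2.3.30 10.2.3.254; # one shared pool; the lease db is replicated
    }
}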

The other option is just to keep a DHCP server with its scope turned off until you realize the main server went down. Turn it on if need be, but of course it won't have the main server's lease database, so you'll get some hiccups...

Gaidin
 

Rogue

Banned
Jan 28, 2000
5,774
0
0
I thought this might be the situation based on what I've read, but I wanted a second opinion of sorts to convince my peers. They insist that it worked fine with two DHCP servers before, but I think it was just coincidental. They're convinced that the move to Win2k Server on one of them "broke" everything. I'll go forward with my recommendation for a "hot spare" on the network with the service disabled. Thanks for all your recommendations and knowledge.
 

Santa

Golden Member
Oct 11, 1999
1,168
0
0
We run dual DHCP servers in every location. It should not matter, because the DHCP protocol uses a multi-step handshake before an address is assigned. It's not a single request that gets the lease; it's the back-and-forth between server and client (discover, offer, request, acknowledge) that establishes the IP and the lease.

So having two servers answering DHCP requests isn't a problem; the server that begins the conversation with the requesting client first is the one that will assign the lease.

As long as there are enough IP addresses in each pool to handle the entire network and there is no overlap in the scopes, you should be fine. We run this setup on all of our networks for redundancy.