How would one go about setting up a redundant DNS server?

Red Squirrel

No Lifer
May 24, 2003
70,125
13,558
126
www.anyf.ca
I'm dealing with a huge server issue: for some reason my UPS did not trip fast enough when the power went out, and now I'm having tons of DNS issues, which makes it even harder to troubleshoot what's going on. I thought I had covered that by having two DNS servers, but if one of the DNS servers is down, clients just try to connect to the first one and don't bother trying the secondary.

How do I make it fail over more gracefully? I'm wondering if I should just ditch the virtualized DNS server and stick with a single physical box. It's kind of hard to troubleshoot VM server issues when the VM that does DNS is also down.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Elaborate on what's going on. If DNS is down on your primary server the clients would use the second one. If DNS is responding but giving bad data, that's a different story. I'm also not seeing how DNS being down is impacting the ability to troubleshoot a VM.
 

Red Squirrel

No Lifer
May 24, 2003
70,125
13,558
126
www.anyf.ca
That's what I thought would happen, but instead clients try the first one every single time and then have to wait for it to time out. This slows down everything on the network, because everything is trying to connect to it, failing, and hanging. NFS seems to be the worst, since it relies on DNS to check whether a particular hostname can access a share, so shares keep timing out and randomly dropping. At one point, while troubleshooting why the ESXi LUNs were not coming back online, I restarted the NFS server. That took over half an hour, presumably because it was doing DNS lookups in the background and kept trying the server that was down (a VM).

Every now and then clients will try the 2nd server, but most of the time they just try the 1st, time out, and give up, and then try it again on the next query instead of remembering that it's down. I was doing nslookup as a test on different servers: sometimes it failed, sometimes it hung for about a minute and then finally worked because it decided to try the other server, and other times it worked right away. It seemed really sporadic.

I ended up losing all my LUNs because the ESXi server could not resolve the name, despite having a secondary DNS server set up. I just removed the primary from ESXi altogether, since there's no point in having it there when it's a VM on the same server. Overall it was a huge nightmare, because a lot of stuff could not connect and I was getting all sorts of weird issues. Even SSH logins are super slow when DNS is down or partially down. In fact I've noticed in the past that if DNS is down completely you can't even SSH anywhere. I think it does a reverse lookup of the client IP when you connect, so if DNS is down it just hangs forever with no error.

So what I'm thinking is that I probably need a setup with two physical servers that have the same IP but where only one is online at a time; if one fails, the other turns on. I could probably do that with two Raspberry Pis: a script monitors the "A" side continuously, and if it detects that the DNS service is not running or the server itself is down, it flips to the B side. I could probably use GPIO pins for them to talk to each other, and a script would just turn the interface on/off.

Though I'm thinking there must be an actual industry-standard solution for this. Basically, I need a way for clients to KNOW that the primary is down, so they stop trying it.
 

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,103
126
The following is how I understand it. Anyone who knows better please correct me.

===

If the client is a Windows PC, Windows will keep a DNS cache of query results, based on the TTL (Time To Live) set by the DNS server. So if the PC makes the same DNS query again and the cache still has the record, Windows will not go out and ask DNS server 1; it will take the result from the cache. It will only send a new DNS query out (whether to DNS 1 or DNS 2) once the record has expired.

===

So if you maintain your own DNS servers, maybe you should set TTL to a very short time.

But I have no idea how a Linux PC client really handles this.
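
On the server side, if it's BIND, I believe the default TTL is set at the top of each zone file with a $TTL directive - roughly like this (names and addresses made up):

Code:
; hypothetical zone file for example.loc
$TTL 300    ; default TTL in seconds (5 minutes) -- this is what clients
            ; and caching resolvers honor when deciding how long to cache
@        IN  SOA  ns1.example.loc. admin.example.loc. (
                  2017103001 3600 900 604800 300 )
         IN  NS   ns1.example.loc.
host01   IN  A    10.1.1.10
host02   IN  A    10.1.1.11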
 

Red Squirrel

No Lifer
May 24, 2003
70,125
13,558
126
www.anyf.ca
You know what, I totally forgot about TTL. It may very well be set too low; I think I originally set it to something like 5 minutes, but I should set it to a day or so. I just copy a random record when creating a new one, so they are probably mostly all the same. I'll have to go through and increase them.

That might also prevent the ESXi server from losing all the LUNs if DNS goes down. Though I think I will still make the physical DNS server the primary. Just makes more sense really.

A future project I actually want to do is an RPi blade server; at that point I may even start offloading VMs to Raspberry Pis, at least for the more important stuff. VMs are great and all, but it's also a more fragile system when there is no redundancy, like mine.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
How do you have your storage set up such that losing DNS took your storage down? That's a bad storage setup and should be fixed. VMs are only fragile if you're doing something wrong. You should really give us more info about your network setup, because most of the details you're giving make zero sense. I've got no shortage of clients that run all their VMs on a single host, including their domain controllers. You don't need multiple physical boxes to have a stable environment, although you obviously will be in for a bad time if that physical box fails.
 

Red Squirrel

No Lifer
May 24, 2003
70,125
13,558
126
www.anyf.ca
Well, if the LUN hostnames can't be resolved, or the NFS server can't resolve the name of the VM server (to determine whether it should have access), then the LUNs drop, and then everything else drops. My setup is rather basic: a single VM server, a single storage server, LUNs set up via NFS, plus a couple of physical servers for other stuff.

Though I realized something interesting: I falsely thought my environmental server (which also does backup DNS) went down, since when I started investigating I could not SSH into it and my alarm display web page was timing out. It turns out that when there are DNS issues, SSH does not work well (no idea why - does it try to resolve client IPs on login and then fail?). I keep an alarm display up on a Raspberry Pi, and since the browser on the RPi closes on its own after a while, I run Firefox via an X11 session on the environmental server. If that server HAD gone down I would have lost that X11 session. I never realized this until now, recollecting the situation; I was in pure panic mode and not thinking straight. So that server probably never actually went down and I rebooted it for nothing - and it's the backup DNS. The primary DNS would have been down at that point since the LUNs were down (I did not know this at the time, since I could not log in to the ESX server due to DNS being down).

I just checked the uptime of my storage server, and it turns out it DID drop. That would have brought down all the VMs, including primary DNS. I originally thought it hadn't, because I would not expect that box to come back up cleanly if it had, but it actually did reboot gracefully, and the file shares on my workstation were working. Honestly, knowing now that it actually did go down, I'm surprised I did not sustain more damage.


So what I figure happened:

- AC power fail, with a very dirty drop rather than a clean cut
- UPS failed to trip fast enough
- File server dropped, other servers not affected (as far as I know) - I guess the storage server PSUs are more sensitive to dips in the AC cycle
- All VMs dropped/stopped responding, including primary DNS
- Even though the file server came back up, the VM server could not resolve the LUN hostnames, so it could not reconnect
- The storage server also could not resolve any other hostnames, so even if the VM server had been able to resolve the LUNs when it tried to connect, the storage server could not grant access, since it could not resolve the hostnames in its allow list

The problem still remains though that my DNS is not failing over properly. It should not have taken this much effort to get the LUNs to show up again.

Also, connecting to anything was very hit and miss, as I was getting a lot of DNS resolution failures. SSH also seems to get really flaky if DNS is down; it takes like 5-10 minutes to actually connect to anything. Even when the name of the server I'm connecting to resolves successfully, it still takes a long time for the password prompt to come up.

I think my easiest bet might be to just go to a single physical DNS server and then have a failover box that is offline but has the same IP. I don't get it, though: I should be able to just have two DNS servers and put both IPs in each client, but for some reason it just does not fail over properly. The clients keep trying the primary even when it's down, and that hangs everything up.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Why are you mounting your NFS volumes using DNS names? Ditto for restricting access by hostname. That accomplishes nothing and is just asking for trouble. If I recall correctly, you've got managed switches and have VLANs set up. If it isn't already, put your NFS traffic on its own VLAN and allow that entire subnet in your exports file. Then do the mounts on your hosts using the IP.
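
Something along these lines - the subnet, paths, and datastore name below are just placeholders for whatever your storage VLAN actually uses:

Code:
# /etc/exports on the storage server: allow the whole storage/NFS subnet
# instead of individual hostnames, so no DNS lookup is needed for access checks
/exports/vmstore  10.10.50.0/24(rw,sync,no_root_squash,no_subtree_check)

# then re-export with "exportfs -ra", and on the ESXi host mount the
# datastore by IP instead of hostname, e.g.
#   esxcli storage nfs add -H 10.10.50.5 -s /exports/vmstore -v vmstore01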

SSH has nothing to do with DNS. That would make SSHing into infrastructure devices to fix issues all but impossible. There's something else going on here or you're leaving out part of the puzzle.
 

Red Squirrel

No Lifer
May 24, 2003
70,125
13,558
126
www.anyf.ca
I suppose I could mount the NFS shares by IP, but that just seems kind of dirty. I guess the odds that I'll decide to change my IP numbering scheme are fairly slim; it was well planned from the get-go. It would solve the issue by removing the reliance on DNS, though.

As for SSH, it seems like some bad bug or something; I'm not the only one who has experienced this: https://www.turnkeylinux.org/blog/slow-ssh
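
If it is the reverse-lookup thing, I gather the usual workaround is to tell sshd on the servers not to bother with DNS - something like this (I haven't actually tried it yet):

Code:
# /etc/ssh/sshd_config on the servers I SSH into.
# Supposedly the slow logins come from sshd trying a reverse lookup of the
# client IP (and GSSAPI trying to resolve names), so turn both off:
UseDNS no
GSSAPIAuthentication no

# then restart sshd to apply, e.g.
#   systemctl restart sshd     (or "service sshd restart" on older boxes)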

Though I'm still not sure why DNS was not failing over properly to the backup, since it turned out the backup was not actually down (until I rebooted that server for nothing, but that server boots up very fast).
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
I suppose I could mount the NFS shares by IP, but that just seems kind of dirty. I guess the odds that I'll decide to change my IP numbering scheme are fairly slim; it was well planned from the get-go. It would solve the issue by removing the reliance on DNS, though.

There's nothing dirty about it. I'd consider mounting storage by hostname in an enterprise environment with full redundancy. Maybe. Possibly. Probably not. I don't know anyone who just goes around re-IPing their storage, and if they're building new storage, then it's moot because it's going to be a new IP and hostname anyway. Using hostnames for storage mounts is a solution looking for a problem.

As for SSH, it seems like some bad bug or something; I'm not the only one who has experienced this: https://www.turnkeylinux.org/blog/slow-ssh

Though I'm still not sure why DNS was not failing over properly to the backup, since it turned out the backup was not actually down (until I rebooted that server for nothing, but that server boots up very fast).

I find it amusing that that link is timing out, considering the topic at hand. But assuming the link is about slow SSH on Linux, I'd say that sounds like a bug in either the Linux kernel or whatever client you're using. I've SSH'd into oodles of things without functional DNS in the environment.

Without knowing the specifics of how your DNS is set up, it's hard to say on the failover question. I'd love to help you figure this out, but I really need a better picture of your network. What server is hosting your DNS? It's implied it's a VM. What about the second server you mention? Are you using FQDNs or just short hostnames? What are your DNS servers using for lookups? Was your internet up and functioning while this was going on?
 

Red Squirrel

No Lifer
May 24, 2003
70,125
13,558
126
www.anyf.ca
Using FQDNs (.loc, if it matters), and they are both also configured to resolve internet queries (I can't recall exactly; I think they just use the root DNS servers directly via the hints file). During the outage, resolving internet hostnames was also sporadic but did work - sometimes it worked, sometimes it did not, depending on which server got used for lookups, since a client only seems to try one and then give up if resolution fails. Using named/BIND for DNS.

The way the backup DNS is set up is that I have a script that just rsyncs the zone files over to the backup and reloads it. I had originally looked into zone transfers, which would be the proper way, but it seemed like you have to configure that on a PER RECORD basis, which is absurd and ridiculously tedious, so I found it easier to just rsync everything over. Since they are both technically set up as primaries, could that be an issue? What would be a better way of doing it without having to configure zone transfers for every single record? You would think they'd implement a way to just mirror the entire server.
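
For reference, the sync script is basically just something like this (paths and the backup hostname are approximate, from memory):

Code:
#!/bin/bash
# Push the zone files and the include file to the backup DNS box, then
# reload named there so it picks up any new or changed zones.
# Paths and the backup hostname are approximate.

ZONEDIR="/var/named/zones"
BACKUP="backupdns.loc"

rsync -av --delete "$ZONEDIR/" "root@$BACKUP:$ZONEDIR/"
rsync -av /etc/named/zones.conf "root@$BACKUP:/etc/named/zones.conf"

ssh "root@$BACKUP" "rndc reload"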
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
So your DNS is not configured correctly, which would explain the behavior you were seeing. You don't configure zone transfers for every DNS record; you configure/specify the zones the secondary server is going to handle as the backup/slave. In your case there should only be one forward zone and one reverse zone, unless there are other details you've left out.

http://www.elinuxbook.com/how-to-configure-secondary-dns-server-with-bind-in-linux/

That looks like it should get you most of the way there. Except it doesn't seem to have you configure a second A record in the forward lookup zone for the root domain.
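
Roughly, the named.conf ends up looking something like this on each side - the zone name and IPs here are just placeholders:

Code:
// on the primary (master): allow the secondary to pull the zone via AXFR
zone "redsquirrel.loc" IN {
    type master;
    file "zones/redsquirrel.loc";
    allow-transfer { 10.1.1.54; };   // secondary's IP
    also-notify { 10.1.1.54; };      // send a NOTIFY so it re-transfers on changes
};

// on the secondary (slave): it mirrors the whole zone, every record in it
zone "redsquirrel.loc" IN {
    type slave;
    file "slaves/redsquirrel.loc";
    masters { 10.1.1.53; };          // primary's IP
};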
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,543
421
126
It's hard to believe that the UPS failure specifically just created a DNS problem, and that it can be solved by another DNS server.

There is probably much more damage to the hardware and/or the server/client OS/software.


:cool:
 

Red Squirrel

No Lifer
May 24, 2003
70,125
13,558
126
www.anyf.ca
I could very well have some hardware damage too, as having machines go down hard is obviously a bad thing. So far everything "appears" OK, but I won't be surprised if I start getting some drive failures soon. I've been wanting to build out a telecom-style -48V system for a while, but it's a lot of money; I may expedite that. If I lose my file server and have to rebuild it from scratch, that is basically the cost of a proper redundant -48V system. The downside is that it's really hard to buy telecom gear - I have not found many sources other than eBay/AliExpress. Consumer UPSes are all standby designs and don't support adding hundreds of amp-hours of battery, so a standard UPS is not enough. Not only do I need a more seamless backup, I want good run time too. I could maybe try to modify the existing UPS with a faster relay as a temporary fix. Really, I don't know why they don't use solid state relays; those are more than likely much faster.

As for that zone transfer stuff, is there actually a way to make it transfer ALL the zones, without having to configure it for each zone? When I read into it, I had to configure it on a per-zone basis. That is insane; there is no way I'm going to do that when I have hundreds of zones. I suppose I could find a way to script it.
 

MrBill10

Member
Apr 28, 2016
44
0
6
As a temporary fix you may want to plug your UPS into a second UPS. I had good success with this arrangement but the ugliest transients will still get through.

I bit the bullet and upgraded to a 5000VA APC rackmount unit which does flow-through power conditioning and is immune to incoming power warpage. With 2 servers, 10 hard drives, a 48-port PoE switch and a handful of smaller loads, it gives me roughly 50 minutes of run time. A bonus is that the APC software is configured to start shutting things down based on time remaining, and to restart once the batteries are back above a minimum threshold.
 

Red Squirrel

No Lifer
May 24, 2003
70,125
13,558
126
www.anyf.ca
My current UPS has 4 hours of run time and is expandable to as much as I want, so I don't want to lose that by going with a more "proprietary" solution. But I'm thinking maybe if I can find a dual-conversion pure sine unit I can plug my UPS into that; plugging it into another standard square wave unit would probably not be good. Though I'm kind of reluctant to play with the power setup now, given it's the second time this has happened, so I think I'm just going to expedite my purchase of a 48V dual-conversion system. Since it's hard to buy that stuff I may need to just build my own. I work for a telco, so I'll ask if by chance I can buy it through work. Basically I need about 2 kW worth of rectifiers and two 2 kW inverters (one for each PDU, for redundancy) and I should be set. I can add more rectifiers as I need them.
 

MrBill10

Member
Apr 28, 2016
44
0
6
Sounds like you have it covered. We less smart guys are restricted to adding matching battery packs for more run time.

My primary DNS is a CentOS VM and the secondary DNS is an RPi 3; this way only one should ever go offline at a time. No issues so far. My configuration is pretty simple, but I'm going to try the setup XavierMace suggested; it looks pretty cool.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
How/why do you have hundreds of zones? Again, either you're leaving out relevant info or there's confusion as to how this works.

Ditto for the UPS. The reason data centers still have UPSes in their racks is to cover the gap between power going out and generators coming on. If your UPS didn't switch fast enough, that's a bad UPS, or at least one not suited for your use case. I've got an SMX1500 + 1 ERM at the house which gives me about 4.5 hours of run time with my normal VM load running. I can shut down non-critical VMs to extend that out another 2 hours or so. That UPS has never failed to kick in fast enough to keep the entire rack running when everything else in the house went down, regardless of whether it was a brownout or a blackout.
 

Red Squirrel

No Lifer
May 24, 2003
70,125
13,558
126
www.anyf.ca
I have one zone for each server/device, so that ends up being a lot of zones. Some of them are also local/dev versions of my websites and other projects, etc. I don't know what other info you need or what you think I'm leaving out; it's a fairly basic setup, just a DNS server with a bunch of zones, one for each domain on my network. The backup DNS is mirrored via rsync. When I looked at how to do it properly, it required configuring each individual zone, and that's just crazy. I just want to mirror the entire DNS server so that if I add or remove a zone the change just replicates to the other DNS.

Data centre UPSes are dual conversion (that is what I want to build - buying one with the run time I want would be too expensive), but home UPSes have a relay, so they take a certain amount of time to switch to battery, in some cases missing almost an entire AC cycle. Unfortunately, for some weird reason, some of my servers randomly seem to take a hit if the power failure is not clean enough. A clean power failure, such as turning a breaker off, is not an issue - I can turn breakers off and on all day and the UPS will always trip fast enough - but the really unclean ones caused by faults on the grid seem to cause issues.

I don't recall which UPS this capture is from - I think it's the server one - but this is a clean cut (turning the breaker off).

[Scope capture: UPS output during a clean power cut, showing the transfer gap]

But even then, look how much time it spends at 0 volts. A data center UPS would not have this issue, as everything runs on the inverter all the time. That is my goal for the future. I'd be curious to know what the waveform looks like when the servers do fail, but it's hard to predict when that happens, so I would not have my scope hooked up. I don't want to create a fault on purpose, but I did once by accident when I threw the main breaker for the whole house and did not do it fast enough, so it arced and the server went down. The file server seems to be the most susceptible. I recall reading something about active PFC power supplies not doing well with square wave UPSes (aka any UPS under a grand), so I wonder if the PSUs are active PFC.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Why on earth do you have a separate zone for each device? That's by no means a normal/basic setup.

So you're using a home UPS to power what's effectively enterprise equipment. Therefore you're using a product that doesn't fit your needs. You can get surplus enterprise UPSes cheap. Even factoring in buying new batteries, it's much cheaper than buying new.
 

Red Squirrel

No Lifer
May 24, 2003
70,125
13,558
126
www.anyf.ca
Isn't a zone per domain/device how it's supposed to be? I suppose I could shove everything into one zone, but I always figured you're supposed to create a zone for each domain. How would you even do more than one domain per zone, unless you do one zone for the local TLD? Like instead of having a zone for server01.loc, server02.loc, etc., I could just do a zone for .loc and then put everything in it. That seems dirty. I think that's actually how I did it before and decided it was probably wrong.

Enterprise UPSes are super expensive even used, because of shipping and customs (I found some on eBay and shipping is sometimes more than the unit itself), and it's also a gamble whether they'll work with 100 Ah deep-cycle batteries like the ones I'm using, since they're not designed for that. The point of my UPS is extended run time and expandability; I can just keep adding batteries to my heart's content and expand capacity fairly cheaply. The issue is it seems to be failing me now, as this is the 2nd time I've had an incident where it did not trip fast enough. It worked fine for all the hundreds of other power flickers, outages, etc. it's gone through since I put in that system.

But the real issue is figuring out why my DNS is not failing over properly. I think I might just go with the Raspberry Pi idea. I'll have two, an A and a B side; they'll both have the same IP, but only one will be active at a time. Should the A side fail, it will swact (switch activity) over to the B side. I can either use a USB dongle for management ports, or maybe communicate via the I/O pins; I could probably have a separate Arduino act as a controller. Essentially it will just be a self-contained device in a 1U chassis with a couple of fancy LEDs and maybe an LCD on it. Though really, simply making my physical box the primary will probably solve my issue for the time being; then the VM can just be the secondary. I'll just have to go through all my servers/clients to switch the order they're listed in and also change it in DHCP (some systems are hard-coded while others use DHCP).
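
For the A/B idea, the monitor script on the B side would be roughly this - a completely untested concept sketch, and all the IPs, the interface name, and the test hostname are just placeholders:

Code:
#!/bin/bash
# Rough concept for the B-side monitor: if the A side stops answering DNS
# queries, bring the shared service IP up here; release it when A comes back.
# Addresses, interface name, and test hostname are placeholders.

SHARED_IP="10.1.1.53/24"        # the IP clients have configured for DNS
A_SIDE="10.1.1.151"             # A side's own management address
IFACE="eth0"
FAILS=0

while true; do
    if dig +time=2 +tries=1 @"$A_SIDE" server01.loc > /dev/null 2>&1; then
        FAILS=0
        # A side is answering, so make sure B is NOT holding the shared IP
        ip addr del "$SHARED_IP" dev "$IFACE" 2>/dev/null
    else
        FAILS=$((FAILS + 1))
        if [ "$FAILS" -ge 3 ]; then
            # A side looks dead: take over the shared IP and announce it
            ip addr add "$SHARED_IP" dev "$IFACE" 2>/dev/null
            arping -U -c 2 -I "$IFACE" "${SHARED_IP%/*}" 2>/dev/null   # gratuitous ARP (iputils arping)
        fi
    fi
    sleep 5
done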
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Isn't a zone per domain/device how it's supposed to be? I suppose I could shove everything into one zone, but I always figured you're supposed to create a zone for each domain. How would you even do more than one domain per zone, unless you do one zone for the local TLD? Like instead of having a zone for server01.loc, server02.loc, etc., I could just do a zone for .loc and then put everything in it. That seems dirty. I think that's actually how I did it before and decided it was probably wrong.

You normally have one zone per domain. You normally have multiple devices per domain. You seem to have left out a piece of that puzzle and are using your hostnames as domains. This is obviously Windows, but the principle is the same, and a picture makes more sense.

[Screenshot: Windows DNS Manager showing the forward lookup zone for xm.local]


That's my local DNS server. xm.local is my internal domain, so my primary ESX box is xm-esx01.xm.local. You can see my two domain controllers are the nameservers near the top. I've effectively got 1 forward zone, plus the built-in zone for AD. So in your case, you should have a zone for redsquirrel.loc or whatever you want to call it, and then you'd have records for all your hosts. Then your secondary DNS server would be the slave for redsquirrel.loc and would keep all the host records in sync. If you have a separate development environment or whatnot, create a separate zone for that: squirreldev.loc or whatever.
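
In BIND terms, that one forward zone file would look something like this (hostnames and IPs made up):

Code:
; zones/redsquirrel.loc -- ONE forward zone with a record per host,
; instead of one zone per device. Names and addresses are examples.
$TTL 86400
@          IN  SOA  ns1.redsquirrel.loc. hostmaster.redsquirrel.loc. (
                    2017103001  ; serial
                    3600        ; refresh
                    900         ; retry
                    604800      ; expire
                    86400 )     ; negative-caching TTL
           IN  NS   ns1.redsquirrel.loc.
           IN  NS   ns2.redsquirrel.loc.
ns1        IN  A    10.1.1.53
ns2        IN  A    10.1.1.54
server01   IN  A    10.1.1.21
server02   IN  A    10.1.1.22
storage01  IN  A    10.1.1.30

The secondary then slaves that single zone and automatically keeps every host record in sync.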

Technically, best practice would be to use a subdomain of your public-facing domain, but that's unnecessary for a home lab. If you have internet-facing stuff that needs to be accessible via DNS, though, you should consider it. See: https://www.pluralsight.com/blog/software-development/choose-internal-top-level-domain-name
 

Red Squirrel

No Lifer
May 24, 2003
70,125
13,558
126
www.anyf.ca
Windows must do it differently - what is considered a zone here? In Linux, each of those entries would be a zone, as it's a different domain.

Ex:

Code:
[root@rohan zones]# dir
total 204
4 -rw-r--r-- 1 named named  392 Jun 25  2015 0.10.10.in-addr.arpa
4 -rw-r--r-- 1 named named  432 Jun 25  2015 10.10.10.in-addr.arpa
4 -rw-r--r-- 1 named named  465 May 30 06:05 10.11.10.in-addr.arpa
4 -rw-r--r-- 1 named named  478 Jun 25  2015 1.10.10.in-addr.arpa
4 -rw-r--r-- 1 named named 1419 Dec  1  2015 1.10.in-addr.arpa
4 -rw-r--r-- 1 named named  439 Jun 25  2015 11.11.10.in-addr.arpa
4 -rw-r--r-- 1 named named  424 Jun 25  2015 127.0.0
4 -rw-r--r-- 1 named named  459 Jun 25  2015 1.5.10.in-addr.arpa
4 -rw-r--r-- 1 named named  471 Jun 25  2015 2.168.192.in-addr.arpa
4 -rw-r--r-- 1 named named  428 Jun 25  2015 2.2.10.in-addr.arpa
4 -rw-r--r-- 1 named named  512 Jul 21  2015 7.7.10.in-addr.arpa
4 -rw-r--r-- 1 named named  459 May 22 01:37 8.8.10.in-addr.arpa
4 -rw-r--r-- 1 named named  442 Dec  6  2016 adsb.loc
4 -rw-r--r-- 1 named named  423 Jun 25  2015 aovdb.loc
4 -rw-r--r-- 1 named named  426 Jun 25  2015 aovdev1.loc
4 -rw-r--r-- 1 named named  426 Jun 25  2015 aovprod.loc
4 -rw-r--r-- 1 named named  445 Jun 25  2015 aovtc1.loc
4 -rw-r--r-- 1 named named  527 Jun 25  2015 appdev.loc
(snip)

I have a zones.conf file that just has entries to include those files. That file and all the zone files are then rsynced to the backup DNS nightly, so any changes or additions are reflected on the backup DNS.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
They aren't separate zones because you're running Linux; they're separate zones because of how you've done your setup. If you don't want to use a full domain name, then just create a single zone for loc. As I said, what you've effectively done is use your hostnames as individual domains.
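
In other words, instead of a zones.conf with hundreds of per-device entries, you'd be down to something like this (the secondary's IP is a placeholder):

Code:
// before: one zone per device
//   zone "server01.loc" IN { type master; file "zones/server01.loc"; };
//   zone "server02.loc" IN { type master; file "zones/server02.loc"; };
//   ...hundreds more...

// after: one zone that covers every host as a record inside it
zone "loc" IN {
    type master;
    file "zones/loc";
    allow-transfer { 10.1.1.54; };   // your secondary pulls everything from here
};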