
Why is df outputting all this named stuff?

Red Squirrel

No Lifer
On a new server I'm migrating my stuff to, I noticed that df outputs lots of garbage:

Code:
[root@server03 migration]# df -hl
Filesystem      Size  Used Avail Use% Mounted on
rootfs          1.9T  246G  1.5T  15% /
/dev/root       1.9T  246G  1.5T  15% /
devtmpfs         16G  264K   16G   1% /dev
tmpfs            16G     0   16G   0% /dev/shm
/dev/root       1.9T  246G  1.5T  15% /var/named/chroot/etc/named
/dev/root       1.9T  246G  1.5T  15% /var/named/chroot/var/named
/dev/root       1.9T  246G  1.5T  15% /var/named/chroot/etc/named.conf
/dev/root       1.9T  246G  1.5T  15% /var/named/chroot/etc/named.rfc1912.zones
/dev/root       1.9T  246G  1.5T  15% /var/named/chroot/etc/rndc.key
/dev/root       1.9T  246G  1.5T  15% /var/named/chroot/usr/lib64/bind
/dev/root       1.9T  246G  1.5T  15% /var/named/chroot/etc/named.iscdlv.key
/dev/root       1.9T  246G  1.5T  15% /var/named/chroot/etc/named.root.key

Why all that stuff? Is there a way to get rid of it? It's just dirty. I just want to see disk usage for actual volumes. Come to think of it, most of this output makes no sense: what's rootfs, or /dev/root? Where is /dev/sda and such? This output is going to mess up my monitoring program, which usually looks for the line that has /dev/sdX.
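For what it's worth, a quick way to cut the noise is to filter the output on the device column — a minimal sketch, assuming you only care about real /dev/* filesystems and want one line per device:

```shell
# Keep the header plus only /dev/* filesystems, printing the first
# mount seen per device (drops the repeated /dev/root entries).
df -hl | awk 'NR==1 || ($1 ~ /^\/dev\// && !seen[$1]++)'
```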
 
Are you just looking for the partition info, like size, etc.?
If so, try $ fdisk -l
Results look like:
Code:
Disk /dev/sda: 298.1 GiB, 320072933376 bytes, 625142448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd0f4738c

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1  *         2046 105752575 105750530  50.4G  5 Extended
/dev/sda3       105752576 625139711 519387136 247.7G  7 HPFS/NTFS/exFAT
/dev/sda5            2048 101369855 101367808  48.3G 83 Linux
/dev/sda6       101371904 105752575   4380672   2.1G 82 Linux swap / Solaris

Partition table entries are not in disk order.
 
I've always used df for disk space checking though, and it's what my monitoring software uses as well, though I guess I could change that if worse comes to worst. Why am I suddenly getting all that stuff in there? It never did that before.
 
Not sure. I just ran a df -hl and here is my output:
Code:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda5        48G  9.9G   36G  22% /
udev             10M     0   10M   0% /dev
tmpfs           791M  9.2M  782M   2% /run
tmpfs           2.0G   76K  2.0G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs           396M  8.0K  396M   1% /run/user/1000
*Note: my Debian is all on one partition except for swap.
 
Yeah, it's the first time I've seen this; I really don't know why it's doing that. In fact, this whole version of named/bind is really weird. I really don't get why it's doing this.
 
New version of something. Or different version of something. I noticed Debian doing this, whereas Ubuntu just has the old /dev/sdX out of the box. But I accepted it and moved on.

It should be fairly simple to modify your scripts to compute disk space based on mount point instead. (df -k /[path]) Which is what we do. It assumes your servers' configs are pretty consistent and static though, as far as mountpoints/fstab config. (Otherwise it's a nightmare to automate, since your script has to keep track of different configs for each machine. Maybe something that parsed fstab first...?)
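A minimal sketch of that mount-point approach — the mount point and threshold here are example values, not anything from your setup:

```shell
# Check Use% for one mount point rather than grepping df for /dev/sdX.
# MOUNT and THRESHOLD are placeholders; adjust per host.
MOUNT=/
THRESHOLD=90
# -P forces POSIX one-line-per-filesystem output so awk fields line up.
usage=$(df -kP "$MOUNT" | awk 'NR==2 {gsub(/%/, ""); print $5}')
if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARNING: $MOUNT is at ${usage}%"
fi
```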

There's a writeup on this here which may be helpful too:

http://blog.mbentley.net/2013/09/df-reports-root-fs-as-by-uuid-symlink-instead-of-actual-device/

Personally, I'd probably want to get used to the new way - it's too easy for /dev/sdX values to change when you move system bits around. You replace a RAID card and suddenly your system is Mr. McGregg.
 
Does not explain all the weird stuff named is adding in there though. In fact when I try to restart named half the time it tries to unmount and remount something and fails. It's a pretty big pain.
 
Who set this system up?

Please post fstab

Also, why are you running named in a chroot environment? That's what VMs are for. 😛
 
The OS install was done by the server provider (OVH). I "set it up" by typing "yum install named".

Named is always run in a chroot environment, if I ran it in a VM, it would still be in a chroot environment.

fstab is as follows:

Code:
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/md2        /       ext3    errors=remount-ro       0       1
/dev/sda3       swap    swap    defaults        0       0
/dev/sdb3       swap    swap    defaults        0       0
proc            /proc   proc    defaults                0       0
sysfs           /sys    sysfs   defaults                0       0
tmpfs           /dev/shm        tmpfs   defaults        0       0
devpts          /dev/pts        devpts  defaults        0       0
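Along the lines of the parse-fstab-first idea mentioned above, a sketch that pulls the real mount points out of fstab (skipping swap and pseudo-filesystems) for a monitoring script to walk:

```shell
# Print mount points for /dev/* entries in fstab; the /dev/ match
# also skips comments, proc, sysfs, tmpfs, and devpts lines.
awk '$1 ~ /^\/dev\// && $2 != "swap" {print $2}' /etc/fstab
```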
 
Do you know how they are provisioning your server? That really looks like some sort of bug/flaw in their systems.
 
It's all automated, not really sure how but I'm going to guess it uses a kickstart file or something. I can actually invoke a reload from the control panel. Considering how fast it happens I don't think there's any human intervention at all. I'm guessing they have some kind of special management card in the server.

I googled and did not find much on this subject, but I did find one post about someone asking how to hide all that stuff. It's probably specific to CentOS 6.5. I'll have to install it in a VM and see.
 
Pretty much the same stuff:

Code:
rootfs / rootfs rw 0 0
/dev/root / ext3 rw,relatime,errors=remount-ro,user_xattr,acl,barrier=1,data=writeback 0 0
devtmpfs /dev devtmpfs rw,relatime,size=16466284k,nr_inodes=4116571,mode=755 0 0
none /proc proc rw,relatime 0 0
none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
devpts /dev/pts devpts rw,relatime,mode=600 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
/dev/root /var/named/chroot/var/named ext3 rw,relatime,errors=remount-ro,user_xattr,acl,barrier=1,data=writeback 0 0
/dev/root /var/named/chroot/etc/named ext3 rw,relatime,errors=remount-ro,user_xattr,acl,barrier=1,data=writeback 0 0
/dev/root /var/named/chroot/etc/named.rfc1912.zones ext3 rw,relatime,errors=remount-ro,user_xattr,acl,barrier=1,data=writeback 0 0
/dev/root /var/named/chroot/etc/rndc.key ext3 rw,relatime,errors=remount-ro,user_xattr,acl,barrier=1,data=writeback 0 0
/dev/root /var/named/chroot/usr/lib64/bind ext3 rw,relatime,errors=remount-ro,user_xattr,acl,barrier=1,data=writeback 0 0
/dev/root /var/named/chroot/etc/named.iscdlv.key ext3 rw,relatime,errors=remount-ro,user_xattr,acl,barrier=1,data=writeback 0 0
/dev/root /var/named/chroot/etc/named.root.key ext3 rw,relatime,errors=remount-ro,user_xattr,acl,barrier=1,data=writeback 0 0
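For what it's worth, those repeated /dev/root entries look like bind mounts — on CentOS, the bind-chroot setup bind-mounts the real config files into /var/named/chroot when named starts, and a bind mount of a single file shows up in df/mounts as the parent filesystem's device. You can list just those mounts from /proc/mounts:

```shell
# Show every mount under the named chroot; bind mounts of single
# files report the underlying device (/dev/root here), not a new one.
grep /var/named/chroot /proc/mounts
```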
 