How do you extend an LVM partition?

Red Squirrel

No Lifer
May 24, 2003
68,024
12,414
126
www.anyf.ca
I have a VM whose / partition I want to grow, so I increased the disk size in VMware and ran echo 1 > /sys/class/scsi_disk/2:0:0:0/device/rescan, and fdisk now sees the bigger disk. Normally I'd just run resize2fs or similar, but it's LVM. I tried lvresize but it does not do anything. How do I go about making the partition bigger?

Here's some info that might help, I'm not really familiar enough with LVM to really know what's going on.


#fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): p

Disk /dev/sda: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a551a

Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 10444 83373056 8e Linux LVM

Command (m for help):



lvm> pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name vg_appdev
PV Size 79.51 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 20354
Free PE 0
Allocated PE 20354
PV UUID 1UgSso-qhOR-EHnJ-1t9i-keC5-auzX-WCN1hw

lvm> lvdisplay
--- Logical volume ---
LV Path /dev/vg_appdev/lv_root
LV Name lv_root
VG Name vg_appdev
LV UUID hjGOAy-MNnn-q72Y-Kmrw-IdvO-dihj-HY3zpd
LV Write Access read/write
LV Creation host, time appdev.loc, 2014-06-29 22:20:32 -0400
LV Status available
# open 1
LV Size 77.54 GiB
Current LE 19850
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/vg_appdev/lv_swap
LV Name lv_swap
VG Name vg_appdev
LV UUID jkpedx-5kSB-bET0-VqUa-ctQw-sGvS-2KK2NQ
LV Write Access read/write
LV Creation host, time appdev.loc, 2014-06-29 22:20:50 -0400
LV Status available
# open 1
LV Size 1.97 GiB
Current LE 504
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

lvm> vgdisplay
--- Volume group ---
VG Name vg_appdev
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 10
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 79.51 GiB
PE Size 4.00 MiB
Total PE 20354
Alloc PE / Size 20354 / 79.51 GiB
Free PE / Size 0 / 0
VG UUID P9yuQR-wcEL-RXDl-awZh-GndX-McfC-BIIcXh

lvm> vgs
VG #PV #LV #SN Attr VSize VFree
vg_appdev 1 2 0 wz--n- 79.51g 0
lvm> lvscan
ACTIVE '/dev/vg_appdev/lv_root' [77.54 GiB] inherit
ACTIVE '/dev/vg_appdev/lv_swap' [1.97 GiB] inherit
lvm>



It seems it's stuck at around 80GB even though there's extra space. I tried pvresize /dev/sda2; it says it succeeded, but nothing actually changes. I tried parted to resize /dev/sda2 as well and it just errors out with no details as to why. What else can I try?

Worst case scenario I boot into a GParted CD, as this is not a production machine, but I figured the whole point of LVM is that you can do this stuff live?
 

Red Squirrel

Ended up just adding a second virtual disk with a normal file system on it. Much easier. Next time I want to extend it I can just use the regular resize tools.

When I tried to boot with GParted it did not recognize the VMware drives, so that was out of the question too.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
580
126
This for me is one of the most frustrating parts of Linux. As you saw, you essentially have to take the filesystem offline to extend the underlying partitions, using GParted or a similar tool. Windows, AIX, etc. do not have this limitation, so it's a bit rough to suddenly see that for LVM on Linux you have to do this. That said, as you also found out, you can simply add drives to the system to get the desired effect. We have some clients who do this because they want to maintain uptime, so they always just add more disks to the system.
 

sourceninja

Diamond Member
Mar 8, 2005
8,805
65
91
lvextend -l +100%FREE /dev/myvg/mylv
Then grow the filesystem inside as normal.
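With the OP's volume names, the whole grow step might look like this. A sketch only: the vgs output above shows 0 free extents, so the PV (and the partition under it) has to be grown first before lvextend has anything to hand out, and I'm assuming lv_root is ext4, as on a default install.

```shell
# Sketch: assumes vg_appdev has free extents (i.e., the PV was grown first)
# and that lv_root carries an ext3/ext4 filesystem.
lvextend -l +100%FREE /dev/vg_appdev/lv_root   # grow the LV into all free space
resize2fs /dev/vg_appdev/lv_root               # grow the filesystem online to match
df -h /                                        # confirm the new size
```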
 

thecoolnessrune

Right, I think Red Squirrel gets that; that process is easy enough to google if needed. The issue is how you grow the layers under the logical volume.

You can add a new Partition.
You can add a new Disk.
You can't (easily) extend an existing partition to then grow the PV and associated VG and LV from.

Both of those options eventually run into growth issues, and they are exactly the issues LVM was trying to solve. In the "modern era" of virtualized infrastructure, I personally don't see much point in running LVM. It adds a layer of abstraction that doesn't contribute anything significant on top of the abstraction already provided by your virtualization layer. You no longer need to create spanned disks with multiple PVs to grow your storage. All you need to do is extend your virtual disk, get Linux to see the change, and grow your partition.

I personally don't run LVM in my own environment any longer, for the exact issues the OP ran into.

You can technically do an in-place deletion and re-creation of the partition at the extended size, and then extend the PV, but I've had mixed success with this depending on the distro version and whether the root filesystem was involved.
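The in-place delete/re-create just described might look like the following. This is a rough sketch only, and it is dangerous if done wrong: treat every value as an assumption, and back up the partition table first.

```shell
# DANGER: sketch only. The partition must be re-created with the SAME start
# sector, and /dev/sda2 is assumed to be the last partition on the disk.
sfdisk -d /dev/sda > sda-partition-table.bak   # save the current layout
fdisk /dev/sda
#   d, 2          delete /dev/sda2 (the table entry only; data stays put)
#   n, p, 2       re-create it with the same start sector and a larger end
#   t, 2, 8e      set the type back to Linux LVM
#   w             write and exit
partprobe /dev/sda      # or reboot if the kernel won't re-read the table
pvresize /dev/sda2      # now the PV actually picks up the new space
```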
 

Red Squirrel

The funny thing is that you'd think it would be easier in a virtualized environment, but on a physical server I would live-add a new drive in an empty bay (VMware requires you to shut down the VM to add a new drive) and then simply --add it to the RAID array, extend the file system, and call it a day. :p

I normally don't set up LVM myself, but this was a default install where it got set up, and I figured I'd try to make it work and consider learning it. Now I'm having trouble figuring out the point of it, if you end up having to mess around with rebooting anyway. Easier to just use mdadm RAID. Mind you, I normally wouldn't want to do that in a VM, as it probably adds extra overhead.
 

mv2devnull

Golden Member
Apr 13, 2010
1,506
146
106
VMware requires you to shut down the VM to add a new drive
In other words, the problem is VMware?

IME, a VM reboots way faster than a physical server (whose POST can take minutes), and reboots are inevitable due to kernel security updates.

LVM can do:
1. Plug in a larger volume
2. Add it as a PV
3. Move LVs to the new volume's extents (live)
4. Remove the old PV
5. Unplug the old volume
6. Expand the LVs (and the filesystems therein)
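The steps above can be sketched as commands. Here /dev/sdb is a hypothetical new, larger disk, and the VG/LV names are the OP's:

```shell
# Sketch of the live migration above; /dev/sdb is a hypothetical new disk.
pvcreate /dev/sdb                      # step 2: label the new disk as a PV
vgextend vg_appdev /dev/sdb            #         and add it to the VG
pvmove /dev/sda2 /dev/sdb              # step 3: migrate all extents, live
vgreduce vg_appdev /dev/sda2           # step 4: drop the old PV from the VG
pvremove /dev/sda2                     #         wipe its LVM label
lvextend -r -l +100%FREE /dev/vg_appdev/lv_root  # step 6: grow LV + filesystem
```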
 

thecoolnessrune

The funny thing is that you'd think it would be easier in a virtualized environment, but on a physical server I would live-add a new drive in an empty bay (VMware requires you to shut down the VM to add a new drive) and then simply --add it to the RAID array, extend the file system, and call it a day. :p

In other words, the problem is VMware?

VMware has supported hot-adding disks to VMs for over a decade; it definitely does not require a reboot. The same host scan that the OP performed is also useful for getting the new disk to show up in Linux after it has been added at the hypervisor level.
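For a newly hot-added disk (as opposed to a grown one), the rescan target is the SCSI host rather than the existing disk. A sketch that probes every host, since the right host number varies by setup:

```shell
# Ask every SCSI host to look for new devices; harmless if there are none.
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done
lsblk   # a hot-added disk should now show up, e.g. as /dev/sdb
```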
 

Red Squirrel

VMware has supported hot-adding disks to VMs for over a decade; it definitely does not require a reboot. The same host scan that the OP performed is also useful for getting the new disk to show up in Linux after it has been added at the hypervisor level.

I'm using the free version, so it's quite limited. I eventually want to figure out KVM/QEMU and go that route; it's just not as turnkey as VMware, and I'd probably want to write some kind of front end for it. When I read that adding new VLANs requires restarting the network service (you never want to do that on a production machine) I kind of steered away from it. But maybe things are better now.
 

thecoolnessrune

The free version of ESXi certainly has limitations, but not on adding a disk. You can definitely hot-add a disk or increase its size even on the free version of vSphere, unless something is in the way, such as a snapshot. :)
 

Red Squirrel

Yeah, increasing the size I was able to do live, but the option to add a disk was grayed out. I originally didn't want to add one anyway, I just wanted to make the existing one bigger. I ended up adding a separate drive though and made a separate data partition on it, non-LVM, so if I want to extend it in the future it should be easier.
 

Fallen Kell

Diamond Member
Oct 9, 1999
6,071
441
126
Red Squirrel, the command you were originally looking for was pvresize. It lets you change the size of the physical LVM container on a device, so you can grow it to fit a LUN (or, in your case, a virtual disk) that was increased in size, or shrink it to make room for a new physical partition. Afterwards you can use lvresize (with --resizefs) to change the size of the logical volumes and the underlying filesystem (I'm assuming this was to add more space).

If you were shrinking everything, you can't use lvresize without first fscking the filesystem and then following the approved shrinking procedure (Google is your friend here and will tell you what to do).
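Put together for the OP's layout, the grow direction would be as follows. A sketch only: it still assumes the /dev/sda2 partition itself was already enlarged, which is the step that was missing here, so pvresize had nothing new to claim.

```shell
# Grow direction only; shrinking requires resizing the filesystem first.
pvresize /dev/sda2                     # grow the PV to fill the (enlarged) partition
lvresize --resizefs -l +100%FREE /dev/vg_appdev/lv_root   # grow LV and filesystem
```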
 

Fallen Kell

When I read that adding new VLANs requires restarting the network service (you never want to do that on a production machine) I kind of steered away from it.
If you are using RHEL 6.x or newer (or CentOS, Oracle Linux, or Scientific Linux 6.x or newer), your network devices and service restart all the time without you even knowing it: I believe every 2 hours on 7.x, and every time link status changes or other network-related settings (IP, netmask, etc.) change, unless you specifically hand-edited the underlying /etc/sysconfig/network-scripts/ifcfg-<network device> config files to stop NetworkManager from controlling the devices.
 

Red Squirrel

Red Squirrel, the command you were originally looking for was pvresize. It lets you change the size of the physical LVM container on a device, so you can grow it to fit a LUN (or, in your case, a virtual disk) that was increased in size, or shrink it to make room for a new physical partition. Afterwards you can use lvresize (with --resizefs) to change the size of the logical volumes and the underlying filesystem (I'm assuming this was to add more space).

If you were shrinking everything, you can't use lvresize without first fscking the filesystem and then following the approved shrinking procedure (Google is your friend here and will tell you what to do).

I think that's the one I tried, but it had no effect (no errors, it just didn't change size).