Channel: Debian User Forums

LVM volume group offline after reboot

I have a small server in my closet that runs four Debian 12 virtual machines under KVM/libvirt. The virtual machines have been running fine for months. They have unattended-upgrades enabled, and I generally leave them alone; I only reboot them periodically so that the latest kernel upgrades take effect.

All the machines use LVM. In general there is a "debian-vg" volume group on "/dev/vda" for the operating system, set up automatically by the installer, and a "vgdata" volume group on "/dev/vdb" for everything else. All file systems are plain ext4, so nothing fancy. (*)
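
In case it matters, "vgdata" was created with plain LVM commands, roughly along these lines (reconstructed from memory; the sizes match the lsblk output further down, and the fstab entries may differ slightly in their mount options):

Code:

# pvcreate /dev/vdb                      # whole disk as a PV, no partition table
# vgcreate vgdata /dev/vdb
# lvcreate -n lvbinaries -L 20G vgdata
# lvcreate -n lvdata -l 100%FREE vgdata
# mkfs.ext4 /dev/vgdata/lvbinaries
# mkfs.ext4 /dev/vgdata/lvdata
# grep vgdata /etc/fstab
/dev/vgdata/lvbinaries  /binaries  ext4  defaults  0  2
/dev/vgdata/lvdata      /data      ext4  defaults  0  2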

A couple of days ago, one of the virtual machines didn't come up after a routine reboot and dumped me into a maintenance shell. It complained that it couldn't mount the file systems on "vgdata". First I tried simply rebooting the machine, but it kept dropping me into maintenance. Investigating a bit deeper, I noticed that the block device "/dev/vdb" and the volume group "vgdata" were detected, but the volume group was inactive and none of its logical volumes were found. I ran "vgchange -a y vgdata", which brought the volume group back online, and I was able to see and mount the logical volumes. After several test reboots the problem didn't recur, so it seemed to be fixed.
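
Concretely, this is roughly what I ran from the maintenance shell (reconstructed from memory, so the exact output is approximate):

Code:

# pvs                            # /dev/vdb is present and assigned to vgdata
# vgs vgdata                     # VG metadata looks fine, 2 LVs reported
# lvs -o lv_name,lv_attr vgdata
  LV         Attr
  lvbinaries -wi-------          # no 'a' in the attr field: LV exists but is not activated
  lvdata     -wi-------
# vgchange -a y vgdata           # activate the VG by hand
  2 logical volume(s) in volume group "vgdata" now active
# mount /binaries && mount /data
# exit                           # leave the maintenance shell and continue booting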

I was willing to write it off as a glitch, but a day later I rebooted one of the *other* virtual machines, and it also dumped me into maintenance with the same error on its own "vgdata". Again, running "vgchange -a y vgdata" fixed the problem. The same error twice in two days on different virtual machines doesn't look like a coincidence to me, so something is going on here, but I can't figure out what.

I looked at the host logs but didn't find anything suspicious, such as signs of a hardware error. I should also mention that the virtual disks of the two machines live on entirely different physical disks: VM1's image is on an HDD and VM2's on an SSD.
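
The host-side checks were along these lines (the smartctl device names are just examples for the HDD and SSD that hold the VM images):

Code:

# journalctl -k -p warning --since "2 days ago"    # kernel warnings/errors on the host
# journalctl -u libvirtd --since "2 days ago"      # anything odd from libvirt
# smartctl -a /dev/sda                             # HDD holding VM1's image (example name)
# smartctl -a /dev/nvme0n1                         # SSD holding VM2's image (example name)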

I also checked whether these VMs had ever been running kernel 6.1.64-1 (the one with the recent ext4 corruption bug), but that does not appear to be the case.
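
I checked that roughly like this on each guest (journal boots plus the dpkg log; paths may differ if rsyslog is also installed):

Code:

# journalctl --list-boots                          # which previous boots the journal still has
# journalctl -k -b -3 | grep 'Linux version'       # kernel used in a given older boot
# zgrep 'install linux-image' /var/log/dpkg.log*   # when which kernel package was installed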

Below is an excerpt from the systemd journal of the failed boot of the second VM, with what I think are the relevant parts. What puzzles me is that lvm reports the VG as complete at 14:40:35, yet ninety seconds later systemd times out waiting for the logical volume devices, as if autoactivation never happened. A full pastebin of the log can be found here.

Code:

Dec 16 14:40:35 omega lvm[307]: PV /dev/vdb online, VG vgdata is complete.
Dec 16 14:40:35 omega lvm[307]: VG vgdata finished...
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvbinaries.device: Job dev-vgdata-lvbinaries.device/start timed out.
Dec 16 14:42:05 omega systemd[1]: Timed out waiting for device dev-vgdata-lvbinaries.device - /dev/vgdata/lvbinaries.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for binaries.mount - /binaries.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for local-fs.target - Local File Systems.
Dec 16 14:42:05 omega systemd[1]: local-fs.target: Job local-fs.target/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: local-fs.target: Triggering OnFailure= dependencies.
Dec 16 14:42:05 omega systemd[1]: binaries.mount: Job binaries.mount/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvbinaries.device: Job dev-vgdata-lvbinaries.device/start failed with result 'timeout'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvdata.device: Job dev-vgdata-lvdata.device/start timed out.
Dec 16 14:42:05 omega systemd[1]: Timed out waiting for device dev-vgdata-lvdata.device - /dev/vgdata/lvdata.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for data.mount - /data.
Dec 16 14:42:05 omega systemd[1]: data.mount: Job data.mount/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvdata.device: Job dev-vgdata-lvdata.device/start failed with result 'timeout'.
(*) For reference, the disk layout on the affected machine is as follows:

Code:

# lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
vda                   254:0    0   20G  0 disk
├─vda1                254:1    0  487M  0 part /boot
├─vda2                254:2    0    1K  0 part
└─vda5                254:5    0 19.5G  0 part
  ├─debian--vg-root   253:2    0 18.6G  0 lvm  /
  └─debian--vg-swap_1 253:3    0  980M  0 lvm  [SWAP]
vdb                   254:16   0   50G  0 disk
├─vgdata-lvbinaries   253:0    0   20G  0 lvm  /binaries
└─vgdata-lvdata       253:1    0   30G  0 lvm  /data
# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  debian-vg   1   2   0 wz--n- <19.52g    0
  vgdata      1   2   0 wz--n- <50.00g    0
# pvs
  PV         VG        Fmt  Attr PSize   PFree
  /dev/vda5  debian-vg lvm2 a--  <19.52g    0
  /dev/vdb   vgdata    lvm2 a--  <50.00g    0
# lvs
  LV         VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root       debian-vg -wi-ao----  18.56g
  swap_1     debian-vg -wi-ao---- 980.00m
  lvbinaries vgdata    -wi-ao----  20.00g
  lvdata     vgdata    -wi-ao---- <30.00g

Statistics: Posted by debuser9876 — 2023-12-18 17:50 — Replies 0 — Views 309


