Transplanting a whole set of LVM disks into another server


First things first: on the server that will be receiving the disks, emerge the whole LVM stack.

# emerge -av lvm2

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild  N     ] dev-libs/libaio-0.3.110::gentoo  USE="-static-libs {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-util/boost-build-1.62.0-r1::gentoo  USE="-examples -python {-test}" PYTHON_TARGETS="python2_7" 82533 KiB
[ebuild  N     ] dev-libs/boost-1.62.0-r1:0/1.62.0::gentoo  USE="nls threads -context -debug -doc -icu -mpi -python -static-libs -tools" ABI_X86="(64) -32 (-x32)" PYTHON_TARGETS="python2_7 python3_4 (-python3_5)" 0 KiB
[ebuild  N     ] sys-block/thin-provisioning-tools-0.4.1::gentoo  USE="{-test}" 0 KiB
[ebuild  N     ] sys-fs/lvm2-2.02.145-r2::gentoo  USE="readline systemd thin udev (-clvm) (-cman) -corosync -device-mapper-only -lvm1 -lvm2create_initrd -openais (-selinux) (-static) (-static-libs)" 0 KiB

Total: 5 packages (5 new), Size of downloads: 82533 KiB

Would you like to merge these packages? [Yes/No] y

No USE flags that look worrying, so just accept everything without changes.

 * Messages for package sys-fs/lvm2-2.02.145-r2:

 * Notice that "use_lvmetad" setting is enabled with USE="udev" in
 * /etc/lvm/lvm.conf, which will require restart of udev, lvm, and lvmetad
 * if it was previously disabled.
 * Make sure the "lvm" init script is in the runlevels:
 * # rc-update add lvm boot
 * Make sure to enable lvmetad in /etc/lvm/lvm.conf if you want
 * to enable lvm autoactivation and metadata caching.

This server runs systemd, though, heh.

# systemctl list-unit-files | grep lvm
lvm2-lvmetad.service                   disabled
lvm2-monitor.service                   disabled
lvm2-pvscan@.service                   static
lvm2-lvmetad.socket                    disabled

I suppose starting lvm2-lvmetad and lvm2-monitor or so should do the trick?

# systemctl start lvm2-lvmetad
# systemctl status -l lvm2-lvmetad
* lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib64/systemd/system/lvm2-lvmetad.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2017-04-10 12:37:34 JST; 8s ago
     Docs: man:lvmetad(8)
 Main PID: 1298 (lvmetad)
   CGroup: /system.slice/lvm2-lvmetad.service
           `-1298 /sbin/lvmetad -f


# systemctl start lvm2-monitor
# systemctl status -l lvm2-monitor
* lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
   Loaded: loaded (/usr/lib64/systemd/system/lvm2-monitor.service; disabled; vendor preset: disabled)
   Active: active (exited) since Mon 2017-04-10 12:37:56 JST; 9s ago
     Docs: man:dmeventd(8)
  Process: 1302 ExecStart=/sbin/lvm vgchange --monitor y --ignoreskippedcluster (code=exited, status=0/SUCCESS)
 Main PID: 1302 (code=exited, status=0/SUCCESS)

This one went down right after starting... but the unit has [Service] Type=oneshot, so presumably that's just how it works ("active (exited)" is the normal state for a oneshot that ran successfully). For now, setting only lvm2-lvmetad to start automatically should be enough.

# systemctl enable lvm2-lvmetad
Created symlink from /etc/systemd/system/ to /usr/lib64/systemd/system/lvm2-lvmetad.service.

It's not like system areas such as / or /usr will live on LVM here, so nothing has to be built into the kernel for early boot; device-mapper support itself is still needed to use LVM at all, though, so into menuconfig we go...

# cd /usr/src/linux
# make menuconfig
--- Multiple devices driver support (RAID and LVM)
< >   RAID support (NEW)
< >   Block device as cache (NEW)
<*>   Device mapper support
[ ]     request-based DM: use blk-mq I/O path by default (NEW)
[ ]     Device mapper debugging support (NEW)
[ ]     Keep stack trace of persistent data block lock holders (NEW)
< >     Crypt target support (NEW)
<M>     Snapshot target
<M>     Thin provisioning target
< >     Cache target (EXPERIMENTAL) (NEW)
< >     Era target (EXPERIMENTAL) (NEW)
< >     Mirror target (NEW)
< >     RAID 1/4/5/6/10 target (NEW)
< >     Zero target (NEW)
< >     Multipath target (NEW)
< >     I/O delaying target (NEW)
[*]     DM uevents
< >     Flakey target (NEW)
< >     Verity target support (NEW)
< >     Switch target support (EXPERIMENTAL) (NEW)
< >     Log writes target support (NEW)
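
For reference, the selections above should correspond to these .config symbols (mapping written from memory against mainline 4.9; menuconfig's help screen shows the authoritative name for each entry):

```
CONFIG_MD=y                     # Multiple devices driver support (RAID and LVM)
CONFIG_BLK_DEV_DM=y             # Device mapper support
CONFIG_DM_SNAPSHOT=m            # Snapshot target
CONFIG_DM_THIN_PROVISIONING=m   # Thin provisioning target
CONFIG_DM_UEVENT=y              # DM uevents
```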


# make -j3 && make modules_install
# mount /boot
# cp /usr/src/linux/arch/x86_64/boot/bzImage /boot/kernel-4.9.6-gentoo-r1-fs01.20170410.01
# grub-mkconfig -o /boot/grub/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/kernel-4.9.6-gentoo-r1.20170205.01
Found linux image: /boot/kernel-4.9.6-gentoo-r1-fs01.20170410.01
Found linux image: /boot/kernel-4.4.39-gentoo.20170204.04
# umount /boot
# systemctl reboot

As for /etc/lvm/lvm.conf and friends, there's no sign anything was ever changed from the defaults on the original host either... so leaving it alone should be fine, I guess?
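
For reference, the setting the ebuild message was talking about lives in the global section of /etc/lvm/lvm.conf; with USE="udev" the Gentoo package reportedly ships it enabled, so the stock file should already look roughly like this (excerpt; the value is assumed from the ebuild message above, not from inspecting the file):

```
# /etc/lvm/lvm.conf (excerpt)
global {
    # let the lvmetad daemon cache metadata and autoactivate VGs
    use_lvmetad = 1
}
```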


virsh # attach-disk --domain fs01 --source /dev/sdc --target vdc --targetbus virtio --config
Disk attached successfully

virsh # attach-disk --domain fs01 --source /dev/sdd --target vdd --targetbus virtio --config
Disk attached successfully

virsh # attach-disk --domain fs01 --source /dev/sde --target vde --targetbus virtio --config
Disk attached successfully

virsh # attach-disk --domain fs01 --source /dev/sdf --target vdf --targetbus virtio --config
Disk attached successfully

virsh # attach-disk --domain fs01 --source /dev/sdg --target vdg --targetbus virtio --config
Disk attached successfully

virsh # attach-disk --domain fs01 --source /dev/sdh --target vdh --targetbus virtio --config
Disk attached successfully
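
The six attach-disk invocations above differ only in the trailing drive letter, so they can be generated with a small loop. This is a sketch with echo left in as a dry run: review the printed commands first, then remove the echo to execute them.

```shell
# Dry run: print the virsh commands for sdc..sdh -> vdc..vdh.
# Drop the leading "echo" to actually attach the disks.
for d in c d e f g h; do
  echo virsh attach-disk --domain fs01 --source "/dev/sd$d" \
       --target "vd$d" --targetbus virtio --config
done
```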

virsh # domblklist fs01
Target     Source
vda        /dev/vg001/lv101
vdb        /dev/vg001/lv102
vdc        /dev/sdc
vdd        /dev/sdd
vde        /dev/sde
vdf        /dev/sdf
vdg        /dev/sdg
vdh        /dev/sdh

So KVM lets you attach a single volume resource to multiple domains at the same time, huh... I wonder what would happen if both were started at once. Too scared to try it, though. Then again, the second one would probably just fail to start with an error about not being able to grab the device.
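
For what it's worth, sharing one disk between domains on purpose is something libvirt does support: the disk element in the domain XML can be marked <shareable/>, which tells libvirt not to treat concurrent use as a conflict. A hypothetical fragment for one of the disks above (without clustering on top, two guests writing to the same ext4 volume would simply corrupt it):

```xml
<disk type='block' device='disk'>
  <source dev='/dev/sdc'/>
  <target dev='vdc' bus='virtio'/>
  <shareable/>
</disk>
```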


virsh # start fs01
# pvs
  PV         VG         Fmt  Attr PSize PFree
  /dev/vdc1  fs01_vg01  lvm2 a--  1.82t      0
  /dev/vdd1  fs01_vg01  lvm2 a--  1.82t 142.31g
  /dev/vde1  fs01_vg11  lvm2 a--  1.82t      0
  /dev/vdf1  fs01_vg11  lvm2 a--  1.82t 158.25g
  /dev/vdg1  fs01_vg01  lvm2 a--  1.82t 354.69g
  /dev/vdh1  fs01_vg11  lvm2 a--  1.82t 354.75g

# vgs
  VG         #PV #LV #SN Attr   VSize VFree
  fs01_vg01    3   4   0 wz--n- 5.46t 497.00g
  fs01_vg11    3   3   0 wz--n- 5.46t 513.00g

# lvs
  LV         VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  fs01_lv51  fs01_vg01  -wi-a-----   4.79t                                      
  fs01_lv52  fs01_vg01  -wi-a----- 160.00g                                      
  fs01_lv53  fs01_vg01  -wi-a-----   8.00g                                      
  fs01_lv99  fs01_vg01  -wi-a-----  16.00g                                      
  fs01_lv61  fs01_vg11  -wi-a-----   4.79t                                      
  fs01_lv62  fs01_vg11  -wi-a----- 160.00g                                      
  fs01_lv63  fs01_vg11  -wi-a-----   8.00g


# mount /dev/fs01_vg01/fs01_lv51 /mnt/test
# df -hT /mnt/test
Filesystem                        Type  Size  Used Avail Use% Mounted on
/dev/mapper/fs01_vg01-fs01_lv51   ext4  4.8T  4.5T   74G  99% /mnt/test
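
To survive a reboot, the mount also wants a line in /etc/fstab; something like this, using the device path from above (mount point and options here are just placeholders for illustration):

```
# /etc/fstab (excerpt)
/dev/fs01_vg01/fs01_lv51   /mnt/test   ext4   defaults,noatime   0 2
```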


  • wiki/linux/move_lvm_disk_to_new_server
  • Last updated: 2019/02/17 15:59