Analysis of the Relationship Between LVM and Udev

Overview

  • Newer Linux distributions (roughly the last decade) have widely adopted the Udev mechanism to support device hotplug. (Udev has several variants, but they all work in essentially the same way.)

  • Udev keeps the device nodes under /dev consistent with the hardware devices actually present in the kernel, which makes management much easier; early Linux systems, by contrast, carried a pile of useless static device nodes. It also moves work such as device naming into user space, where it can be changed through configuration files instead of being hard-coded into the kernel, a strict application of the "mechanism, not policy" principle.

  • When a device changes in the kernel, the kernel broadcasts a uevent message to user space over a NetLink socket. The systemd-udevd process in user space listens for uevent messages and carries out the corresponding actions according to its rules files (see the monitoring example after this list).

  • LVM makes heavy use of the Udev mechanism to implement metadata updates (multipath uses the same feature to update path states), auto-activation, and related functions.
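
To watch these uevents in flight, udevadm can print both the raw kernel uevents and the events as re-broadcast by systemd-udevd after rule processing. A minimal sketch (vg0/lv_home is a hypothetical VG/LV; any activation on your system will do):

# Terminal 1: show kernel uevents, post-rule udev events, and their
# properties (DM_NAME, DM_UUID, ... for device-mapper devices):
$ udevadm monitor --kernel --udev --property

# Terminal 2: trigger an LV activation, which makes the kernel create a
# dm block device and emit the corresponding uevents:
$ lvchange -ay vg0/lv_home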

LVM's Udev Rules

LVM ships two main Udev rules files:

  • 11-dm-lvm.rules handles the creation and removal of an LV's device node and its symlinks when the LV is activated or deactivated. An LV activation command calls the libdevmapper library to create the corresponding virtual block device in the kernel, and the kernel then broadcasts a uevent to user space announcing that a new block device has been added. When systemd-udevd picks this up, it creates the device node and symlinks for that block device according to this rules file, completing the activation (a splitname example follows the listing below).
$ cat /usr/lib/udev/rules.d/11-dm-lvm.rules
# Copyright (C) 2009 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.

# Udev rules for LVM.
#
# These rules create symlinks for LVM logical volumes in
# /dev/VG directory (VG is an actual VG name). Some udev
# environment variables are set (they can be used in later
# rules as well):
#   DM_LV_NAME - logical volume name
#   DM_VG_NAME - volume group name
#   DM_LV_LAYER - logical volume layer (blank if not set)

# "add" event is processed on coldplug only!
ACTION!="add|change", GOTO="lvm_end"
ENV{DM_UDEV_RULES_VSN}!="?*", GOTO="lvm_end"
ENV{DM_UUID}!="LVM-?*", GOTO="lvm_end"

# Use DM name and split it up into its VG/LV/layer constituents.
IMPORT{program}="/usr/sbin/dmsetup splitname --nameprefixes --noheadings --rows $env{DM_NAME}"

# DM_SUBSYSTEM_UDEV_FLAG0 is the 'NOSCAN' flag for LVM subsystem.
# This flag is used to temporarily disable selected rules to prevent any
# processing or scanning done on the LVM volume before LVM has any chance
# to zero any stale metadata found within the LV data area. Such stale
# metadata could cause false claim of the LV device, keeping it open etc.
#
# If the NOSCAN flag is present, backup selected existing flags used to
# disable rules, then set them firmly so those selected rules are surely skipped.
# Restore these flags once the NOSCAN flag is dropped (which is normally any
# uevent that follows for this LV, even an artificially generated one).
ENV{DM_SUBSYSTEM_UDEV_FLAG0}=="1", ENV{DM_NOSCAN}="1", ENV{DM_DISABLE_OTHER_RULES_FLAG_OLD}="$env{DM_UDEV_DISABLE_OTHER_RULES_FLAG}", ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}="1"
ENV{DM_SUBSYSTEM_UDEV_FLAG0}!="1", IMPORT{db}="DM_NOSCAN", IMPORT{db}="DM_DISABLE_OTHER_RULES_FLAG_OLD"
ENV{DM_SUBSYSTEM_UDEV_FLAG0}!="1", ENV{DM_NOSCAN}=="1", ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}="$env{DM_DISABLE_OTHER_RULES_FLAG_OLD}", \
				   ENV{DM_DISABLE_OTHER_RULES_FLAG_OLD}="", ENV{DM_NOSCAN}=""

ENV{DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG}=="1", GOTO="lvm_end"

OPTIONS+="event_timeout=180"

# Do not create symlinks for inappropriate subdevices.
ENV{DM_LV_NAME}=="pvmove?*|?*_vorigin", GOTO="lvm_disable"
ENV{DM_LV_LAYER}=="?*", GOTO="lvm_disable"

# Create symlinks for top-level devices only.
ENV{DM_VG_NAME}=="?*", ENV{DM_LV_NAME}=="?*", SYMLINK+="$env{DM_VG_NAME}/$env{DM_LV_NAME}", GOTO="lvm_end"

LABEL="lvm_disable"
ENV{DM_UDEV_DISABLE_DISK_RULES_FLAG}="1"
ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}="1"
OPTIONS:="nowatch"

LABEL="lvm_end"
  • 69-dm-lvm-metad.rules takes care of metadata updates. When a block device is added, it pulls in the "lvm2-pvscan@$major:$minor.service" unit via SYSTEMD_WANTS; when a PV label disappears from a live device, it runs "/usr/bin/systemd-run /usr/sbin/lvm pvscan --cache $major:$minor" directly. (On plain device removal, the pvscan service bound to the device is stopped instead, and its ExecStop performs the same cache update, as the audit session below confirms. A dry-run example also follows the listing.)
$ cat /usr/lib/udev/rules.d/69-dm-lvm-metad.rules
# Copyright (C) 2012 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.

# Udev rules for LVM.
#
# Scan all block devices having a PV label for LVM metadata.
# Store this information in LVMetaD (the LVM metadata daemon) and maintain LVM
# metadata state for improved performance by avoiding further scans while
# running subsequent LVM commands or while using lvm2app library.
# Also, notify LVMetaD about any relevant block device removal.
#
# This rule is essential for having the information in LVMetaD up-to-date.
# It also requires blkid to be called on block devices before so only devices
# used as LVM PVs are processed (ID_FS_TYPE="LVM2_member" or "LVM1_member").

SUBSYSTEM!="block", GOTO="lvm_end"


ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}=="1", GOTO="lvm_end"

# If the PV label got lost, inform lvmetad immediately.
# Detect the lost PV label by comparing previous ID_FS_TYPE value with current one.
ENV{.ID_FS_TYPE_NEW}="$env{ID_FS_TYPE}"
IMPORT{db}="ID_FS_TYPE"
ENV{ID_FS_TYPE}=="LVM2_member|LVM1_member", ENV{.ID_FS_TYPE_NEW}!="LVM2_member|LVM1_member", ENV{LVM_PV_GONE}="1"
ENV{ID_FS_TYPE}="$env{.ID_FS_TYPE_NEW}"
ENV{LVM_PV_GONE}=="1", GOTO="lvm_scan"

# Only process devices already marked as a PV - this requires blkid to be called before.
ENV{ID_FS_TYPE}!="LVM2_member|LVM1_member", GOTO="lvm_end"
ENV{DM_MULTIPATH_DEVICE_PATH}=="1", GOTO="lvm_end"

# Inform lvmetad about any PV that is gone.
ACTION=="remove", GOTO="lvm_scan"

# Create /dev/disk/by-id/lvm-pv-uuid-<PV_UUID> symlink for each PV
ENV{ID_FS_UUID_ENC}=="?*", SYMLINK+="disk/by-id/lvm-pv-uuid-$env{ID_FS_UUID_ENC}"

# If the PV is a special device listed below, scan only if the device is
# properly activated. These devices are not usable after an ADD event,
# but they require an extra setup and they are ready after a CHANGE event.
# Also support coldplugging with ADD event but only if the device is already
# properly activated.
# This logic should be eventually moved to rules where those particular
# devices are processed primarily (MD and loop).

# DM device:
KERNEL!="dm-[0-9]*", GOTO="next"
ENV{DM_UDEV_PRIMARY_SOURCE_FLAG}=="1", ENV{DM_ACTIVATION}=="1", GOTO="lvm_scan"
GOTO="lvm_end"

# MD device:
LABEL="next"
KERNEL!="md[0-9]*", GOTO="next"
IMPORT{db}="LVM_MD_PV_ACTIVATED"
ACTION=="add", ENV{LVM_MD_PV_ACTIVATED}=="1", GOTO="lvm_scan"
ACTION=="change", ENV{LVM_MD_PV_ACTIVATED}!="1", TEST=="md/array_state", ENV{LVM_MD_PV_ACTIVATED}="1", GOTO="lvm_scan"
ACTION=="add", KERNEL=="md[0-9]*p[0-9]*", GOTO="lvm_scan"
ENV{LVM_MD_PV_ACTIVATED}!="1", ENV{SYSTEMD_READY}="0"
GOTO="lvm_end"

# Loop device:
LABEL="next"
KERNEL!="loop[0-9]*", GOTO="next"
ACTION=="add", ENV{LVM_LOOP_PV_ACTIVATED}=="1", GOTO="lvm_scan"
ACTION=="change", ENV{LVM_LOOP_PV_ACTIVATED}!="1", TEST=="loop/backing_file", ENV{LVM_LOOP_PV_ACTIVATED}="1", GOTO="lvm_scan"
ENV{LVM_LOOP_PV_ACTIVATED}!="1", ENV{SYSTEMD_READY}="0"
GOTO="lvm_end"

# If the PV is not a special device listed above, scan only after device addition (ADD event)
LABEL="next"
ACTION!="add", GOTO="lvm_end"

LABEL="lvm_scan"

# The table below summarises the situations in which we reach the LABEL="lvm_scan".
# Marked by X, X* means only if the special dev is properly set up.
# The artificial ADD is supported for coldplugging. We avoid running the pvscan
# on artificial CHANGE so there's no unexpected autoactivation when WATCH rule fires.
# N.B. MD and loop never actually  reaches lvm_scan on REMOVE as the PV label is gone
# within a CHANGE event (these are caught by the "LVM_PV_GONE" rule at the beginning).
#
#        | real ADD | real CHANGE | artificial ADD | artificial CHANGE | REMOVE
# =============================================================================
#  DM    |          |      X      |       X*       |                   |   X
#  MD    |          |      X      |       X*       |                   |
#  loop  |          |      X      |       X*       |                   |
#  other |    X     |             |       X        |                   |   X
ENV{SYSTEMD_READY}="1"
ACTION!="remove", ENV{LVM_PV_GONE}=="1", RUN+="/usr/bin/systemd-run /usr/sbin/lvm pvscan --cache $major:$minor", GOTO="lvm_end"
ENV{SYSTEMD_ALIAS}="/dev/block/$major:$minor"
ENV{ID_MODEL}="LVM PV $env{ID_FS_UUID_ENC} on /dev/$name"
ENV{SYSTEMD_WANTS}+="lvm2-pvscan@$major:$minor.service"

LABEL="lvm_end"

LVM's systemd Services

List LVM's systemd service units:

$ ls /usr/lib/systemd/system/*lvm*
/usr/lib/systemd/system/docker-lvm-plugin.service  /usr/lib/systemd/system/lvm2-lvmlocking.service
/usr/lib/systemd/system/docker-lvm-plugin.socket   /usr/lib/systemd/system/lvm2-lvmpolld.service
/usr/lib/systemd/system/lvm2-lvmetad.service       /usr/lib/systemd/system/lvm2-lvmpolld.socket
/usr/lib/systemd/system/lvm2-lvmetad.socket        /usr/lib/systemd/system/lvm2-monitor.service
/usr/lib/systemd/system/lvm2-lvmlockd.service      /usr/lib/systemd/system/lvm2-pvscan@.service

Inspect the LVM pvscan service:

$ cat /usr/lib/systemd/system/lvm2-pvscan@.service
[Unit]
Description=LVM2 PV scan on device %i
Documentation=man:pvscan(8)
DefaultDependencies=no
BindsTo=dev-block-%i.device
Requires=lvm2-lvmetad.socket
After=lvm2-lvmetad.socket lvm2-lvmetad.service
Before=shutdown.target
Conflicts=shutdown.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/lvm pvscan --cache --activate ay %i
ExecStop=/usr/sbin/lvm pvscan --cache %i

$ systemctl list-units | grep lvm2-pvscan
lvm2-pvscan@253:2.service            loaded active exited    LVM2 PV scan on device 253:2

$ systemctl status lvm2-pvscan@253:2.service
● lvm2-pvscan@253:2.service - LVM2 PV scan on device 253:2
   Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor preset: disabled)
   Active: active (exited) since 五 2017-09-01 15:19:07 CST; 1h 56min ago
     Docs: man:pvscan(8)
  Process: 1588 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay %i (code=exited, status=0/SUCCESS)
 Main PID: 1588 (code=exited, status=0/SUCCESS)

9月 01 15:19:07 Develop systemd[1]: Starting LVM2 PV scan on device 253:2...
9月 01 15:19:07 Develop systemd[1]: Started LVM2 PV scan on device 253:2.

As you can see, this is a oneshot service. Reading it together with the corresponding Udev rules file: when a block device is added, the command "/usr/sbin/lvm pvscan --cache --activate ay %i" is executed, which rescans the metadata and auto-activates the LVs on that device (subject to the LVM configuration file). A manual equivalent is sketched below.
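
What the unit does can also be reproduced by hand. A sketch, using the 253:2 major:minor pair from the example above (find your own in the MAJ:MIN column of lsblk):

# Rescan the device into the metadata cache and auto-activate any VG that
# becomes complete as a result (mirrors ExecStart):
$ lvm pvscan --cache --activate ay 253:2

# Drop the device from the cache again (mirrors ExecStop on removal):
$ lvm pvscan --cache 253:2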

Auditing lvm Execution with the audit Program

The test system has several multipath devices; LVM's Udev rules are triggered by adding and removing multipath devices with the multipath command (the lvm metadata service has been stopped beforehand).

  1. Create an audit watch on the lvm binary:
$ /sbin/auditctl -w /usr/sbin/lvm -p warx -k lvm-run
  2. Add the multipath devices; this executes "/usr/sbin/lvm pvscan --cache --activate ay 253:2":
$ multipath -v2

$ /sbin/ausearch -f /usr/sbin/lvm
...
time->Fri Sep  1 11:06:45 2017
type=PATH msg=audit(1504235205.362:285): item=1 name="/lib64/ld-linux-x86-64.so.2" inode=920925 dev=08:02 mode=0100755 ouid=0 ogid=0 rdev=00:00 objtype=NORMAL
type=PATH msg=audit(1504235205.362:285): item=0 name="/usr/sbin/lvm" inode=938899 dev=08:02 mode=0100555 ouid=0 ogid=0 rdev=00:00 objtype=NORMAL
type=CWD msg=audit(1504235205.362:285):  cwd="/"
type=EXECVE msg=audit(1504235205.362:285): argc=6 a0="/usr/sbin/lvm" a1="pvscan" a2="--cache" a3="--activate" a4="ay" a5="253:2"
type=SYSCALL msg=audit(1504235205.362:285): arch=c000003e syscall=59 success=yes exit=0 a0=7f55f51dbd40 a1=7f55f5198c00 a2=7f55f5185160 a3=7f55f4d5d978 items=2 ppid=1 pid=19638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="lvm" exe="/usr/sbin/lvm" key="lvm-run"
  3. Remove the multipath devices; this executes "/usr/sbin/lvm pvscan --cache 253:2":
$ multipath -F

$ /sbin/ausearch -f /usr/sbin/lvm
...
time->Fri Sep  1 11:07:17 2017
type=PATH msg=audit(1504235237.988:287): item=1 name="/lib64/ld-linux-x86-64.so.2" inode=920925 dev=08:02 mode=0100755 ouid=0 ogid=0 rdev=00:00 objtype=NORMAL
type=PATH msg=audit(1504235237.988:287): item=0 name="/usr/sbin/lvm" inode=938899 dev=08:02 mode=0100555 ouid=0 ogid=0 rdev=00:00 objtype=NORMAL
type=CWD msg=audit(1504235237.988:287):  cwd="/"
type=EXECVE msg=audit(1504235237.988:287): argc=4 a0="/usr/sbin/lvm" a1="pvscan" a2="--cache" a3="253:2"
type=SYSCALL msg=audit(1504235237.988:287): arch=c000003e syscall=59 success=yes exit=0 a0=7f55f50c9ae0 a1=7f55f51e0380 a2=7f55f51dda70 a3=7f55f4d5d978 items=2 ppid=1 pid=19742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="lvm" exe="/usr/sbin/lvm" key="lvm-run"
  4. Move the rules file out of the way and repeat the test; no lvm command runs any more:
$ mv /usr/lib/udev/rules.d/69-dm-lvm-metad.rules ~/

$ multipath -v2
$ /sbin/ausearch -f /usr/sbin/lvm
# Nothing

$ multipath -F
$ /sbin/ausearch -f /usr/sbin/lvm
# Nothing

$ mv ~/69-dm-lvm-metad.rules /usr/lib/udev/rules.d/

Therefore, to disable LVM auto-activation and automatic pvscan completely, stopping the metadata service is not enough: the Udev rules shown here must be removed as well, and on top of that the "activation/auto_activation_volume_list" option must be set in the LVM configuration file (followed by rebuilding the initramfs with dracut so the setting also applies at early boot). A sketch follows.
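
A minimal sketch of that last configuration step (stock lvm.conf path; an empty list means nothing is ever auto-activated):

# /etc/lvm/lvm.conf
activation {
    # No LV matches an empty list, so "pvscan --cache --activate ay" and
    # "vgchange -a ay" will activate nothing automatically:
    auto_activation_volume_list = []
}

# Rebuild the initramfs so early boot sees the same setting:
$ dracut -f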

Tags: LVM udev uevent