
An Analysis of the Relationship Between LVM and Udev

LastRitter
Published on 2017/09/12 14:32

Overview

  • Relatively recent Linux distributions (roughly the last decade) have widely adopted the Udev mechanism (Udev has many variants, but they are basically similar) to support device hotplug.

  • Udev keeps the device nodes under /dev consistent with the hardware devices known to the kernel, which makes them easier to manage, unlike early Linux systems that carried a pile of unused device nodes. It also moves work such as device naming into user space, where it can be changed via configuration files rather than hard-coded into the kernel, strictly embodying the principle of providing mechanism, not policy.

  • When a device in the kernel changes, the kernel broadcasts a uevent message to user space over a NetLink socket. The user-space systemd-udevd process listens for uevent messages and performs the corresponding actions according to its rules files.

  • LVM makes full use of the Udev mechanism to implement metadata updates (multipath likewise uses this feature to update path states), auto-activation, and other functionality.
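
The uevent messages carried over the NetLink socket are NUL-separated KEY=VALUE payloads prefixed with an "action@devpath" header. As a rough, simplified sketch (an illustration of the wire format, not how systemd-udevd actually parses events), the following Python snippet decodes one such payload:

```python
def parse_uevent(payload: bytes) -> dict:
    """Parse a kernel uevent payload of the form
    b"add@/devices/...\\0ACTION=add\\0DEVPATH=...\\0SUBSYSTEM=block\\0..."
    into a dict of its KEY=VALUE properties."""
    fields = payload.split(b"\0")
    props = {}
    for field in fields[1:]:          # fields[0] is the "action@devpath" header
        if b"=" in field:
            key, _, value = field.partition(b"=")
            props[key.decode()] = value.decode()
    return props

# Example payload, modeled after a block-device "add" event:
sample = (b"add@/devices/virtual/block/dm-2\0"
          b"ACTION=add\0DEVPATH=/devices/virtual/block/dm-2\0"
          b"SUBSYSTEM=block\0MAJOR=253\0MINOR=2\0")
props = parse_uevent(sample)
print(props["ACTION"], props["MAJOR"], props["MINOR"])  # add 253 2
```

The MAJOR/MINOR properties are exactly what the LVM rules below use to form the $major:$minor device identifier.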

LVM's Udev Rules

LVM has two main Udev rules files:

  • 11-dm-lvm.rules handles creating and removing an LV's device node and its symlinks when the LV is activated or deactivated. When an LV activation command is executed, it calls the libdevmapper library to create the corresponding virtual block device in the kernel, which then broadcasts a uevent to user space announcing that a new block device has been added. Once the user-space systemd-udevd process receives it, it creates the device node and symlinks for that block device according to this rules file, and the activation is complete.
$ cat /usr/lib/udev/rules.d/11-dm-lvm.rules
# Copyright (C) 2009 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.

# Udev rules for LVM.
#
# These rules create symlinks for LVM logical volumes in
# /dev/VG directory (VG is an actual VG name). Some udev
# environment variables are set (they can be used in later
# rules as well):
#   DM_LV_NAME - logical volume name
#   DM_VG_NAME - volume group name
#   DM_LV_LAYER - logical volume layer (blank if not set)

# "add" event is processed on coldplug only!
ACTION!="add|change", GOTO="lvm_end"
ENV{DM_UDEV_RULES_VSN}!="?*", GOTO="lvm_end"
ENV{DM_UUID}!="LVM-?*", GOTO="lvm_end"

# Use DM name and split it up into its VG/LV/layer constituents.
IMPORT{program}="/usr/sbin/dmsetup splitname --nameprefixes --noheadings --rows $env{DM_NAME}"

# DM_SUBSYSTEM_UDEV_FLAG0 is the 'NOSCAN' flag for LVM subsystem.
# This flag is used to temporarily disable selected rules to prevent any
# processing or scanning done on the LVM volume before LVM has any chance
# to zero any stale metadata found within the LV data area. Such stale
# metadata could cause false claim of the LV device, keeping it open etc.
#
# If the NOSCAN flag is present, backup selected existing flags used to
# disable rules, then set them firmly so those selected rules are surely skipped.
# Restore these flags once the NOSCAN flag is dropped (which is normally any
# uevent that follows for this LV, even an artificially generated one).
ENV{DM_SUBSYSTEM_UDEV_FLAG0}=="1", ENV{DM_NOSCAN}="1", ENV{DM_DISABLE_OTHER_RULES_FLAG_OLD}="$env{DM_UDEV_DISABLE_OTHER_RULES_FLAG}", ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}="1"
ENV{DM_SUBSYSTEM_UDEV_FLAG0}!="1", IMPORT{db}="DM_NOSCAN", IMPORT{db}="DM_DISABLE_OTHER_RULES_FLAG_OLD"
ENV{DM_SUBSYSTEM_UDEV_FLAG0}!="1", ENV{DM_NOSCAN}=="1", ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}="$env{DM_DISABLE_OTHER_RULES_FLAG_OLD}", \
				   ENV{DM_DISABLE_OTHER_RULES_FLAG_OLD}="", ENV{DM_NOSCAN}=""

ENV{DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG}=="1", GOTO="lvm_end"

OPTIONS+="event_timeout=180"

# Do not create symlinks for inappropriate subdevices.
ENV{DM_LV_NAME}=="pvmove?*|?*_vorigin", GOTO="lvm_disable"
ENV{DM_LV_LAYER}=="?*", GOTO="lvm_disable"

# Create symlinks for top-level devices only.
ENV{DM_VG_NAME}=="?*", ENV{DM_LV_NAME}=="?*", SYMLINK+="$env{DM_VG_NAME}/$env{DM_LV_NAME}", GOTO="lvm_end"

LABEL="lvm_disable"
ENV{DM_UDEV_DISABLE_DISK_RULES_FLAG}="1"
ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}="1"
OPTIONS:="nowatch"

LABEL="lvm_end"
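
The `dmsetup splitname` call above derives DM_VG_NAME/DM_LV_NAME/DM_LV_LAYER from the DM device name, in which a literal hyphen inside a VG or LV name is escaped as `--`. A minimal Python sketch of that splitting logic (an illustration only, not the real dmsetup implementation):

```python
def split_dm_name(dm_name: str):
    """Split a device-mapper name like 'vg--data-lv--home' into its
    VG/LV (and optional layer) parts.  In DM names the separator is a
    single '-', while '--' encodes a literal '-' inside a name."""
    sentinel = "\0"
    parts = [p.replace(sentinel, "-")
             for p in dm_name.replace("--", sentinel).split("-")]
    vg, lv = parts[0], parts[1]
    layer = parts[2] if len(parts) > 2 else ""
    return vg, lv, layer

print(split_dm_name("vg--data-lv--home"))  # ('vg-data', 'lv-home', '')
print(split_dm_name("myvg-mylv-real"))     # ('myvg', 'mylv', 'real')
```

The second example shows why the rules skip names with a non-blank DM_LV_LAYER: layered subdevices such as `-real`/`-cow` volumes are internal and should not get /dev/VG/LV symlinks.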
  • 69-dm-lvm-metad.rules handles metadata updates. When a block device is added, it pulls in the "lvm2-pvscan@$major:$minor.service" unit; when a block device is removed, the "/usr/bin/systemd-run /usr/sbin/lvm pvscan --cache $major:$minor" command is executed.
$ cat /usr/lib/udev/rules.d/69-dm-lvm-metad.rules
# Copyright (C) 2012 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.

# Udev rules for LVM.
#
# Scan all block devices having a PV label for LVM metadata.
# Store this information in LVMetaD (the LVM metadata daemon) and maintain LVM
# metadata state for improved performance by avoiding further scans while
# running subsequent LVM commands or while using lvm2app library.
# Also, notify LVMetaD about any relevant block device removal.
#
# This rule is essential for having the information in LVMetaD up-to-date.
# It also requires blkid to be called on block devices before so only devices
# used as LVM PVs are processed (ID_FS_TYPE="LVM2_member" or "LVM1_member").

SUBSYSTEM!="block", GOTO="lvm_end"


ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}=="1", GOTO="lvm_end"

# If the PV label got lost, inform lvmetad immediately.
# Detect the lost PV label by comparing previous ID_FS_TYPE value with current one.
ENV{.ID_FS_TYPE_NEW}="$env{ID_FS_TYPE}"
IMPORT{db}="ID_FS_TYPE"
ENV{ID_FS_TYPE}=="LVM2_member|LVM1_member", ENV{.ID_FS_TYPE_NEW}!="LVM2_member|LVM1_member", ENV{LVM_PV_GONE}="1"
ENV{ID_FS_TYPE}="$env{.ID_FS_TYPE_NEW}"
ENV{LVM_PV_GONE}=="1", GOTO="lvm_scan"

# Only process devices already marked as a PV - this requires blkid to be called before.
ENV{ID_FS_TYPE}!="LVM2_member|LVM1_member", GOTO="lvm_end"
ENV{DM_MULTIPATH_DEVICE_PATH}=="1", GOTO="lvm_end"

# Inform lvmetad about any PV that is gone.
ACTION=="remove", GOTO="lvm_scan"

# Create /dev/disk/by-id/lvm-pv-uuid-<PV_UUID> symlink for each PV
ENV{ID_FS_UUID_ENC}=="?*", SYMLINK+="disk/by-id/lvm-pv-uuid-$env{ID_FS_UUID_ENC}"

# If the PV is a special device listed below, scan only if the device is
# properly activated. These devices are not usable after an ADD event,
# but they require an extra setup and they are ready after a CHANGE event.
# Also support coldplugging with ADD event but only if the device is already
# properly activated.
# This logic should be eventually moved to rules where those particular
# devices are processed primarily (MD and loop).

# DM device:
KERNEL!="dm-[0-9]*", GOTO="next"
ENV{DM_UDEV_PRIMARY_SOURCE_FLAG}=="1", ENV{DM_ACTIVATION}=="1", GOTO="lvm_scan"
GOTO="lvm_end"

# MD device:
LABEL="next"
KERNEL!="md[0-9]*", GOTO="next"
IMPORT{db}="LVM_MD_PV_ACTIVATED"
ACTION=="add", ENV{LVM_MD_PV_ACTIVATED}=="1", GOTO="lvm_scan"
ACTION=="change", ENV{LVM_MD_PV_ACTIVATED}!="1", TEST=="md/array_state", ENV{LVM_MD_PV_ACTIVATED}="1", GOTO="lvm_scan"
ACTION=="add", KERNEL=="md[0-9]*p[0-9]*", GOTO="lvm_scan"
ENV{LVM_MD_PV_ACTIVATED}!="1", ENV{SYSTEMD_READY}="0"
GOTO="lvm_end"

# Loop device:
LABEL="next"
KERNEL!="loop[0-9]*", GOTO="next"
ACTION=="add", ENV{LVM_LOOP_PV_ACTIVATED}=="1", GOTO="lvm_scan"
ACTION=="change", ENV{LVM_LOOP_PV_ACTIVATED}!="1", TEST=="loop/backing_file", ENV{LVM_LOOP_PV_ACTIVATED}="1", GOTO="lvm_scan"
ENV{LVM_LOOP_PV_ACTIVATED}!="1", ENV{SYSTEMD_READY}="0"
GOTO="lvm_end"

# If the PV is not a special device listed above, scan only after device addition (ADD event)
LABEL="next"
ACTION!="add", GOTO="lvm_end"

LABEL="lvm_scan"

# The table below summarises the situations in which we reach the LABEL="lvm_scan".
# Marked by X, X* means only if the special dev is properly set up.
# The artificial ADD is supported for coldplugging. We avoid running the pvscan
# on artificial CHANGE so there's no unexpected autoactivation when WATCH rule fires.
# N.B. MD and loop never actually  reaches lvm_scan on REMOVE as the PV label is gone
# within a CHANGE event (these are caught by the "LVM_PV_GONE" rule at the beginning).
#
#        | real ADD | real CHANGE | artificial ADD | artificial CHANGE | REMOVE
# =============================================================================
#  DM    |          |      X      |       X*       |                   |   X
#  MD    |          |      X      |       X*       |                   |
#  loop  |          |      X      |       X*       |                   |
#  other |    X     |             |       X        |                   |   X
ENV{SYSTEMD_READY}="1"
ACTION!="remove", ENV{LVM_PV_GONE}=="1", RUN+="/usr/bin/systemd-run /usr/sbin/lvm pvscan --cache $major:$minor", GOTO="lvm_end"
ENV{SYSTEMD_ALIAS}="/dev/block/$major:$minor"
ENV{ID_MODEL}="LVM PV $env{ID_FS_UUID_ENC} on /dev/$name"
ENV{SYSTEMD_WANTS}+="lvm2-pvscan@$major:$minor.service"

LABEL="lvm_end"
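
The decision table in the comments above can be restated as a small lookup function. This is only a condensed model of that table for readability, not the actual rule evaluation performed by udev:

```python
# Which (device class, event) combinations reach LABEL="lvm_scan",
# per the summary table in 69-dm-lvm-metad.rules.  For DM/MD/loop an
# artificial ADD only scans if the device is already properly set up
# (the X* cells in the table).
SCAN_TABLE = {
    "dm":    {"real_change", "artificial_add", "remove"},
    "md":    {"real_change", "artificial_add"},
    "loop":  {"real_change", "artificial_add"},
    "other": {"real_add", "artificial_add", "remove"},
}

def reaches_lvm_scan(dev_class: str, event: str, set_up: bool = True) -> bool:
    if event == "artificial_add" and dev_class != "other" and not set_up:
        return False  # special device not yet properly set up
    return event in SCAN_TABLE[dev_class]

print(reaches_lvm_scan("other", "real_add"))                   # True
print(reaches_lvm_scan("dm", "remove"))                        # True
print(reaches_lvm_scan("md", "remove"))                        # False
print(reaches_lvm_scan("md", "artificial_add", set_up=False))  # False
```

Note that no device class scans on an artificial CHANGE, which is exactly how the rules avoid unexpected auto-activation when the WATCH rule fires.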

LVM's System Services

List LVM's systemd service units:

$ ls /usr/lib/systemd/system/*lvm*
/usr/lib/systemd/system/docker-lvm-plugin.service  /usr/lib/systemd/system/lvm2-lvmlocking.service
/usr/lib/systemd/system/docker-lvm-plugin.socket   /usr/lib/systemd/system/lvm2-lvmpolld.service
/usr/lib/systemd/system/lvm2-lvmetad.service       /usr/lib/systemd/system/lvm2-lvmpolld.socket
/usr/lib/systemd/system/lvm2-lvmetad.socket        /usr/lib/systemd/system/lvm2-monitor.service
/usr/lib/systemd/system/lvm2-lvmlockd.service      /usr/lib/systemd/system/lvm2-pvscan@.service

View LVM's pvscan service:

$ cat /usr/lib/systemd/system/lvm2-pvscan@.service
[Unit]
Description=LVM2 PV scan on device %i
Documentation=man:pvscan(8)
DefaultDependencies=no
BindsTo=dev-block-%i.device
Requires=lvm2-lvmetad.socket
After=lvm2-lvmetad.socket lvm2-lvmetad.service
Before=shutdown.target
Conflicts=shutdown.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/lvm pvscan --cache --activate ay %i
ExecStop=/usr/sbin/lvm pvscan --cache %i

$ systemctl list-units | grep lvm2-pvscan
lvm2-pvscan@253:2.service            loaded active exited    LVM2 PV scan on device 253:2

$ systemctl status lvm2-pvscan@253:2.service
● lvm2-pvscan@253:2.service - LVM2 PV scan on device 253:2
   Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor preset: disabled)
   Active: active (exited) since Fri 2017-09-01 15:19:07 CST; 1h 56min ago
     Docs: man:pvscan(8)
  Process: 1588 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay %i (code=exited, status=0/SUCCESS)
 Main PID: 1588 (code=exited, status=0/SUCCESS)

Sep 01 15:19:07 Develop systemd[1]: Starting LVM2 PV scan on device 253:2...
Sep 01 15:19:07 Develop systemd[1]: Started LVM2 PV scan on device 253:2.

As you can see, this is a oneshot service. Combining it with the corresponding Udev rules file, we can see that when a block device is added, the "/usr/sbin/lvm pvscan --cache --activate ay %i" command is executed, which rescans the metadata and auto-activates the LVs on that device (subject to the LVM configuration file).

Auditing lvm Invocations with the audit Subsystem

The system has several multipath devices; we trigger LVM's Udev rules by adding and removing multipath devices with the multipath command (the lvm metadata service has been disabled).

  1. Create an audit watch on the lvm command:
$ /sbin/auditctl -w /usr/sbin/lvm -p warx -k lvm-run
  2. Add the multipath devices; this executes the "/usr/sbin/lvm pvscan --cache --activate ay 253:2" command.
$ multipath -v2

$ /sbin/ausearch -f /usr/sbin/lvm
...
time->Fri Sep  1 11:06:45 2017
type=PATH msg=audit(1504235205.362:285): item=1 name="/lib64/ld-linux-x86-64.so.2" inode=920925 dev=08:02 mode=0100755 ouid=0 ogid=0 rdev=00:00 objtype=NORMAL
type=PATH msg=audit(1504235205.362:285): item=0 name="/usr/sbin/lvm" inode=938899 dev=08:02 mode=0100555 ouid=0 ogid=0 rdev=00:00 objtype=NORMAL
type=CWD msg=audit(1504235205.362:285):  cwd="/"
type=EXECVE msg=audit(1504235205.362:285): argc=6 a0="/usr/sbin/lvm" a1="pvscan" a2="--cache" a3="--activate" a4="ay" a5="253:2"
type=SYSCALL msg=audit(1504235205.362:285): arch=c000003e syscall=59 success=yes exit=0 a0=7f55f51dbd40 a1=7f55f5198c00 a2=7f55f5185160 a3=7f55f4d5d978 items=2 ppid=1 pid=19638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="lvm" exe="/usr/sbin/lvm" key="lvm-run"
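Each EXECVE record above encodes the command line as an argc count plus numbered a0..aN fields. As an illustration, a small hypothetical helper can reassemble argv from such a record (real auditd tooling such as `ausearch -i` does this properly):

```python
import re

def execve_argv(record: str):
    """Reassemble argv from an audit EXECVE record such as
    'type=EXECVE msg=... argc=4 a0="/usr/sbin/lvm" a1="pvscan" ...'."""
    argc = int(re.search(r"\bargc=(\d+)", record).group(1))
    args = dict(re.findall(r"\ba(\d+)=\"([^\"]*)\"", record))
    return [args[str(i)] for i in range(argc)]

rec = ('type=EXECVE msg=audit(1504235205.362:285): argc=6 '
       'a0="/usr/sbin/lvm" a1="pvscan" a2="--cache" '
       'a3="--activate" a4="ay" a5="253:2"')
print(" ".join(execve_argv(rec)))
# /usr/sbin/lvm pvscan --cache --activate ay 253:2
```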
  3. Remove the multipath devices; this executes the "/usr/sbin/lvm pvscan --cache 253:2" command.
$ multipath -F

$ /sbin/ausearch -f /usr/sbin/lvm
...
time->Fri Sep  1 11:07:17 2017
type=PATH msg=audit(1504235237.988:287): item=1 name="/lib64/ld-linux-x86-64.so.2" inode=920925 dev=08:02 mode=0100755 ouid=0 ogid=0 rdev=00:00 objtype=NORMAL
type=PATH msg=audit(1504235237.988:287): item=0 name="/usr/sbin/lvm" inode=938899 dev=08:02 mode=0100555 ouid=0 ogid=0 rdev=00:00 objtype=NORMAL
type=CWD msg=audit(1504235237.988:287):  cwd="/"
type=EXECVE msg=audit(1504235237.988:287): argc=4 a0="/usr/sbin/lvm" a1="pvscan" a2="--cache" a3="253:2"
type=SYSCALL msg=audit(1504235237.988:287): arch=c000003e syscall=59 success=yes exit=0 a0=7f55f50c9ae0 a1=7f55f51e0380 a2=7f55f51dda70 a3=7f55f4d5d978 items=2 ppid=1 pid=19742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="lvm" exe="/usr/sbin/lvm" key="lvm-run"
  4. Remove the rules file and repeat the tests above; no lvm command is executed anymore.
$ mv /usr/lib/udev/rules.d/69-dm-lvm-metad.rules ~/

$ multipath -v2
$ /sbin/ausearch -f /usr/sbin/lvm
# Nothing

$ multipath -F
$ /sbin/ausearch -f /usr/sbin/lvm
# Nothing

$ mv ~/69-dm-lvm-metad.rules /usr/lib/udev/rules.d/

Therefore, to completely disable LVM auto-activation and automatic pvscan, in addition to disabling the metadata service you also need to remove these Udev rules, and further configure the "activation/auto_activation_volume_list" option in the LVM configuration file (and then rebuild the initramfs with dracut afterwards).
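
For example, an empty auto-activation list in /etc/lvm/lvm.conf prevents any LV from being auto-activated. This is shown only as an illustrative fragment; check lvm.conf(5) on your distribution for the exact syntax:

```
# /etc/lvm/lvm.conf (fragment)
activation {
    # An empty list means no LV is auto-activated by
    # "pvscan --cache --activate ay" / "vgchange -aay".
    auto_activation_volume_list = [ ]
}
```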

© All rights reserved by the author
