LVM for Linux Test Report
1. Test Environment
Platform: Red Hat Linux Advanced Server 2.1
Kernel version: 2.4.18
Server: DELL 6300
LVM kernel driver version: 1.0.1
LVM tools version: 1.0.1
Filesystem: reiserfs
2. Installation Steps
a) Kernel compilation
make mrproper
make menuconfig
Kernel configuration:
[*] Multiple devices driver support (RAID and LVM)
<*> Logical volume manager (LVM) support
Since this was a freshly built kernel, support for the Adaptec AIC7xxx SCSI driver and for the reiserfs and ext3 filesystems was added at the same time:
SCSI support --->
SCSI low-level drivers --->
<*> Adaptec AIC7xxx support
(253) Maximum number of TCQ commands per device
(15000) Initial bus reset delay in milli-seconds
File systems --->
<*> Reiserfs support
[*] Have reiserfs do extra internal checking
[*] Stats in /proc/fs/reiserfs
<*> Ext3 journalling file system support (EXPERIMENTAL)
b) Install the LVM tools
cd /root
tar zxvf lvm_1.0.1.tar.gz
cd LVM/1.0.1
./configure
make
make install
echo "/sbin/vgscan" >> /etc/rc.d/rc.local
echo "/sbin/vgchange -a y" >> /etc/rc.d/rc.local
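Before rebooting, the same two commands can be run by hand to confirm that the kernel driver and tools work; a minimal check, assuming the tools were installed to /sbin as above:

vgscan           # scan all disks for volume groups and rebuild /etc/lvmtab
vgchange -a y    # activate every volume group that vgscan found

Note that on this distribution the rc.sysinit boot script also sets up LVM at boot, as the boot log in step m) shows, so the rc.local entries are a harmless redundancy.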
3. Tests
a) Create partitions and initialize them as physical volumes
Use fdisk to create the partitions /dev/sda7, /dev/sda8, /dev/sdb1, /dev/sdb2, /dev/sdb3, /dev/sdc1, /dev/sdc2, /dev/sdc3 and /dev/sdc4, setting the partition type to 8E (Linux LVM); a sketch of the fdisk dialogue follows.
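A minimal interactive fdisk sequence for one such partition (keystrokes on the left, purpose on the right; the exact prompts vary with the fdisk version, and the size is only a placeholder):

fdisk /dev/sdb
n          (create a new partition)
p          (primary)
1          (partition number)
<Enter>    (accept the default first cylinder)
+4096M     (example size)
t          (change the partition type)
8e         (Linux LVM)
w          (write the table and exit)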
Initialize them as physical volumes:
pvcreate /dev/sda7
pvcreate /dev/sda8
pvcreate /dev/sdb1
pvcreate /dev/sdb2
pvcreate /dev/sdb3
pvcreate /dev/sdc1
pvcreate /dev/sdc2
pvcreate /dev/sdc3
pvcreate /dev/sdc4
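Whether all nine physical volumes were registered can be checked with pvscan, which lists every PV it finds (the exact output format depends on the LVM tool version):

pvscan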
b) Create an LVM volume group spanning different physical disks
vgcreate lvmtest /dev/sda7 /dev/sdb1
The volume group lvmtest was created successfully.
c) Remove a physical volume from the volume group
vgreduce lvmtest /dev/sdb1
Success: /dev/sdb1 had not been allocated to any logical volume (LV), so it could be removed. A physical volume whose space has already been handed to an LV cannot be removed this way; see the sketch below.
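If a physical volume does hold allocated extents, they can first be migrated onto the remaining PVs of the group with pvmove; a sketch, assuming enough free extents exist elsewhere in the group (pvmove ships with the LVM 1.0.x tools, but treat this as illustrative):

pvdisplay /dev/sdb1          # check how many extents are allocated
pvmove /dev/sdb1             # migrate all extents off /dev/sdb1
vgreduce lvmtest /dev/sdb1   # the now-empty PV can be removed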
d) Add a physical volume
Re-add the physical volume just removed from the volume group:
vgextend lvmtest /dev/sdb1
Success.
e) Create logical volumes
lvcreate -L 3G -n lvm1 lvmtest
A logical volume named lvm1 with a size of 3 GB was created successfully.
lvcreate -L 3G -n lvm2 lvmtest
A logical volume named lvm2 with a size of 3 GB was created successfully.
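The new volumes can be verified right away with the display tools, for example:

lvdisplay /dev/lvmtest/lvm1
vgdisplay -v lvmtest    # the -v form also lists the group's LVs and extent allocation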
f) Format the logical volumes
mkreiserfs /dev/lvmtest/lvm1
mkreiserfs /dev/lvmtest/lvm2
Success.
g) Mount the logical volumes on test directories
mkdir /mnt/lvm1
mkdir /mnt/lvm2
mount /dev/lvmtest/lvm1 /mnt/lvm1
mount /dev/lvmtest/lvm2 /mnt/lvm2
Success.
h) Data read/write test
cp -rf /var /mnt/lvm1
cp -rf /usr /mnt/lvm1
cp -rf /var /mnt/lvm2
cp -rf /usr /mnt/lvm2
About 1.2 GB in total was written to each volume; all writes completed normally.
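As a simple integrity spot check on the copied data (illustrative; diff -r walks both trees and reports any file that differs):

diff -r /usr /mnt/lvm1/usr && echo "copy verified"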
i) Automatic mount at boot
Modify /etc/fstab, adding the following two lines:
/dev/lvmtest/lvm1 /mnt/lvm1 reiserfs defaults 1 2
/dev/lvmtest/lvm2 /mnt/lvm2 reiserfs defaults 1 2
After rebooting the machine, both /mnt/lvm1 and /mnt/lvm2 were mounted normally:
[root@lvm root]# df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 1.9G 298M 1.5G 16% /
none 0 0 0 - /proc
none 0 0 0 - /dev/pts
/dev/sda5 1.4G 20k 1.3G 1% /home
none 1006M 0 1006M 0% /dev/shm
/dev/sda3 1.4G 1.3G 154M 89% /usr
/dev/sda6 1.4G 27M 1.3G 2% /var
/dev/lvmtest/lvm1 3.0G 1.2G 1.8G 38% /mnt/lvm1
/dev/lvmtest/lvm2 3.0G 1.2G 1.8G 38% /mnt/lvm2
j) Extend a logical volume
lvextend -L+2G /dev/lvmtest/lvm2
resize_reiserfs -f /dev/lvmtest/lvm2
df -ah confirms that /mnt/lvm2 grew by 2 GB and file reads and writes remained normal. Note that both steps ran with the filesystem still mounted, i.e. the extension was performed online.
[root@lvm root]# lvextend -L+2G /dev/lvmtest/lvm2
lvextend -- extending logical volume "/dev/lvmtest/lvm2" to 5.00 GB
lvextend -- doing automatic backup of volume group "lvmtest"
lvextend -- logical volume "/dev/lvmtest/lvm2" successfully extended
[root@lvm root]# resize_reiserfs -f /dev/lvmtest/lvm2
<-------------resize_reiserfs, 2001------------->
reiserfsprogs 3.x.0j
[root@lvm root]# df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 1.9G 298M 1.5G 16% /
none 0 0 0 - /proc
none 0 0 0 - /dev/pts
/dev/sda5 1.4G 20k 1.3G 1% /home
none 1006M 0 1006M 0% /dev/shm
/dev/sda3 1.4G 1.3G 154M 89% /usr
/dev/sda6 1.4G 27M 1.3G 2% /var
/dev/lvmtest/lvm1 3.0G 1.2G 1.8G 38% /mnt/lvm1
/dev/lvmtest/lvm2 5.0G 1.2G 3.8G 23% /mnt/lvm2
k) Shrink a logical volume
lvreduce -L-2G /dev/lvmtest/lvm2
umount /mnt/lvm2
resize_reiserfs -f /dev/lvmtest/lvm2
mount /dev/lvmtest/lvm2 /mnt/lvm2
df -ah shows /mnt/lvm2 successfully shrank by 2 GB, and read/write tests were normal.
[root@lvm root]# lvreduce -L-2G /dev/lvmtest/lvm2
lvreduce -- WARNING: reducing active and open logical volume to 3.00 GB
lvreduce -- THIS MAY DESTROY YOUR DATA (filesystem etc.)
lvreduce -- do you really want to reduce "/dev/lvmtest/lvm2"? [y/n]: y
lvreduce -- doing automatic backup of volume group "lvmtest"
lvreduce -- logical volume "/dev/lvmtest/lvm2" successfully reduced
[root@lvm root]# umount /mnt/lvm2
[root@lvm root]# resize_reiserfs /dev/lvmtest/lvm2
<-------------resize_reiserfs, 2001------------->
reiserfsprogs 3.x.0j
reiserfs_open: bread failed reading bitmap #24 (786432)
reiserfs_open: bread failed reading bitmap #25 (819200)
reiserfs_open: bread failed reading bitmap #26 (851968)
reiserfs_open: bread failed reading bitmap #27 (884736)
reiserfs_open: bread failed reading bitmap #28 (917504)
reiserfs_open: bread failed reading bitmap #29 (950272)
reiserfs_open: bread failed reading bitmap #30 (983040)
reiserfs_open: bread failed reading bitmap #31 (1015808)
reiserfs_open: bread failed reading bitmap #32 (1048576)
reiserfs_open: bread failed reading bitmap #33 (1081344)
reiserfs_open: bread failed reading bitmap #34 (1114112)
reiserfs_open: bread failed reading bitmap #35 (1146880)
reiserfs_open: bread failed reading bitmap #36 (1179648)
reiserfs_open: bread failed reading bitmap #37 (1212416)
reiserfs_open: bread failed reading bitmap #38 (1245184)
reiserfs_open: bread failed reading bitmap #39 (1277952)
You are running BETA version of reiserfs shrinker.
This version is only for testing or VERY CAREFUL use.
Backup of you data is recommended.
Do you want to continue? [y/N]:y
Fetching on-disk bitmap..done
Processing the tree: 0%....20%....40%....60%....80%....100% left 0, 5153 /sec
nodes processed (moved):
int 119 (0),
leaves 18732 (0),
unfm 269739 (0),
total 288590 (0).
ReiserFS report:
blocksize 4096
block count 786432 (1310720)
free blocks 489608 (1013880)
bitmap block count 24 (40)
Syncing..done
Shrinking a volume takes comparatively long, with the time depending on how much space is being removed. It cannot be done online: the mount point backed by the LV must be unmounted first, and the operation carries real risk. Note that in the sequence above the LV was reduced before the filesystem was shrunk, which is why resize_reiserfs reported the "bread failed" bitmap errors; the conventional, safer order is sketched below.
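A safer shrink sequence resizes the filesystem first and the LV second (a sketch; resize_reiserfs accepts a relative size via -s, but verify the flag against your reiserfsprogs version and keep a backup):

umount /mnt/lvm2
resize_reiserfs -s -2G /dev/lvmtest/lvm2   # shrink the filesystem first
lvreduce -L-2G /dev/lvmtest/lvm2           # then shrink the logical volume
mount /dev/lvmtest/lvm2 /mnt/lvm2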
l) Create another VG (volume group) across different physical disks
Create the new VG:
vgcreate -s 32M lvmtest2 /dev/sda8 /dev/sdb2 /dev/sdc2
The -s 32M option sets the physical extent (PE) size. In LVM 1.x the maximum LV size scales with the PE size, which is why vgcreate reports a 1.00 TB limit here (1 TB / 32 MB = 32768 extents per LV).
[root@lvm log]# vgcreate -s 32M lvmtest2 /dev/sda8 /dev/sdb2 /dev/sdc2
vgcreate -- INFO: maximum logical volume size is 1.00 Terabyte
vgcreate -- doing automatic backup of volume group "lvmtest2"
vgcreate -- volume group "lvmtest2" successfully created and activated
[root@lvm log]# vgdisplay lvmtest2
--- Volume group ---
VG Name lvmtest2
VG Access read/write
VG Status available/resizable
VG # 1
MAX LV 255
Cur LV 0
Open LV 0
MAX LV Size 1.00 TB
Max PV 255
Cur PV 3
Act PV 3
VG Size 13.91 GB
PE Size 32.00 MB
Total PE 445
Alloc PE / Size 0 / 0
Free PE / Size 445 / 13.91 GB
VG UUID bluSUx-TcM3-Yj0o-PubJ-q0YP-ErH6-iaruug
Create logical volumes on the new VG (volume group):
mkdir /mnt/lvm3
mkdir /mnt/lvm4
lvcreate -L 7G -n lvm3 lvmtest2
lvcreate -L 5G -n lvm4 lvmtest2
mkreiserfs /dev/lvmtest2/lvm3
mkreiserfs /dev/lvmtest2/lvm4
mount /dev/lvmtest2/lvm3 /mnt/lvm3
mount /dev/lvmtest2/lvm4 /mnt/lvm4
Check with df -ah:
[root@lvm log]# df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 1.9G 299M 1.5G 16% /
none 0 0 0 - /proc
none 0 0 0 - /dev/pts
/dev/sda5 1.4G 20k 1.3G 1% /home
none 1006M 0 1006M 0% /dev/shm
/dev/sda3 1.4G 1.3G 154M 89% /usr
/dev/sda6 1.4G 27M 1.3G 2% /var
/dev/lvmtest/lvm1 3.0G 1.2G 1.8G 38% /mnt/lvm1
/dev/lvmtest/lvm2 3.0G 1.2G 1.8G 38% /mnt/lvm2
/dev/lvmtest2/lvm3 7.0G 33M 6.9G 1% /mnt/lvm3
/dev/lvmtest2/lvm4 5.0G 33M 4.9G 1% /mnt/lvm4
Read/write operations on the new volumes showed no anomalies.
m) Unexpected power-loss test
The machine's power was cut forcibly. On restart, the LVM volumes passed the filesystem checks and mounted normally. Boot log:
Jun 13 11:51:13 lvm syslog: syslogd startup succeeded
Jun 13 11:51:13 lvm syslog: klogd startup succeeded
Jun 13 11:51:13 lvm portmap: portmap startup succeeded
Jun 13 11:51:13 lvm nfslock: rpc.statd startup succeeded
Jun 13 11:51:14 lvm keytable: Loading keymap: succeeded
Jun 13 11:51:14 lvm keytable: Loading system font: succeeded
Jun 13 11:51:14 lvm random: Initializing random number generator: succeeded
Jun 13 11:51:15 lvm netfs: Mounting other filesystems: succeeded
Jun 13 11:51:15 lvm autofs: automount startup succeeded
Jun 13 11:51:15 lvm sshd: Starting sshd:
Jun 13 11:50:12 lvm rc.sysinit: Mounting proc filesystem: succeeded
Jun 13 11:50:12 lvm sysctl: net.ipv4.ip_forward = 0
Jun 13 11:50:12 lvm sysctl: net.ipv4.conf.default.rp_filter = 1
Jun 13 11:50:12 lvm rc.sysinit: Configuring kernel parameters: succeeded
Jun 13 11:50:12 lvm date: Fri Jun 13 11:49:53 CST 2003
Jun 13 11:50:12 lvm rc.sysinit: Setting clock (localtime): Fri Jun 13 11:49:53CST 2003 succeeded
Jun 13 11:50:12 lvm rc.sysinit: Loading default keymap succeeded
Jun 13 11:51:16 lvm sshd: succeeded
Jun 13 11:50:12 lvm rc.sysinit: Setting default font (lat0-sun16): succeeded
Jun 13 11:51:16 lvm sshd: ^[[60G
Jun 13 11:50:12 lvm rc.sysinit: Activating swap partitions: succeeded
Jun 13 11:50:12 lvm rc.sysinit: Setting hostname lvm: succeeded
Jun 13 11:51:16 lvm sshd:
Jun 13 11:50:12 lvm fsck: / was not cleanly unmounted, check forced.
Jun 13 11:51:16 lvm rc: Starting sshd: succeeded
Jun 13 11:50:12 lvm fsck: /: 21195/256512 files (1.8% non-contiguous), 84227/512064 blocks
Jun 13 11:50:12 lvm rc.sysinit: Checking root filesystem succeeded
Jun 13 11:50:12 lvm rc.sysinit: Remounting root filesystem in read-write mode:succeeded
Jun 13 11:50:12 lvm vgscan: vgscan -- reading all physical volumes (this may take a while...)
Jun 13 11:50:12 lvm vgscan: vgscan -- found inactive volume group lvmtest
Jun 13 11:50:12 lvm vgscan: vgscan -- /etc/lvmtab and /etc/lvmtab.d successfully created
Jun 13 11:50:12 lvm vgscan: vgscan -- WARNING: This program does not do a VGDA backup of your volume group
Jun 13 11:50:12 lvm rc.sysinit: Setting up Logical Volume Management: succeeded
Jun 13 11:50:12 lvm rc.sysinit: Finding module dependencies: succeeded
Jun 13 11:50:13 lvm fsck: /home
Jun 13 11:50:13 lvm fsck: was not cleanly unmounted, check forced.
Jun 13 11:50:14 lvm fsck: /home: 11/192000 files (0.0% non-contiguous), 6041/383544 blocks
Jun 13 11:50:14 lvm fsck: /usr
Jun 13 11:50:14 lvm fsck: was not cleanly unmounted, check forced.
Jun 13 11:50:52 lvm fsck: /usr: 70569/192000 files (2.6% non-contiguous), 324794/383551 blocks
Jun 13 11:50:52 lvm fsck: /var
Jun 13 11:50:52 lvm fsck: was not cleanly unmounted, check forced.
Jun 13 11:50:53 lvm fsck: Inode 96086, i_blocks is 104, should be
Jun 13 11:50:53 lvm fsck: 48. FIXED.
Jun 13 11:50:53 lvm fsck: /var: Inode 96087, i_blocks is 64, should be 56. FIXED.
Jun 13 11:50:53 lvm fsck: /var: Inode 96101, i_blocks is 64, should be 8. FIXED.
Jun 13 11:50:53 lvm fsck: /var: Inode 96102, i_blocks is 64, should be 8. FIXED.
Jun 13 11:50:53 lvm fsck: /var:
Jun 13 11:50:53 lvm fsck: Inode 96009, i_blocks is 352, should be 336. FIXED.
Jun 13 11:50:53 lvm fsck: /var: 564/192000 files (4.3% non-contiguous), 12700/383544 blocks
Jun 13 11:50:54 lvm fsck: <-------------reiserfsck, 2001------------->
Jun 13 11:50:54 lvm fsck: reiserfsprogs 3.x.0j
Jun 13 11:50:54 lvm rc.sysinit: Checking filesystems succeeded
Jun 13 11:50:54 lvm fsck: <-------------reiserfsck, 2001------------->
Jun 13 11:50:54 lvm fsck: reiserfsprogs 3.x.0j
Jun 13 11:50:57 lvm rc.sysinit: Mounting local filesystems: succeeded
Jun 13 11:50:57 lvm rc.sysinit: Enabling local filesystem quotas: succeeded
Jun 13 11:50:58 lvm rc.sysinit: Enabling swap space: succeeded
Jun 13 11:51:00 lvm kudzu: Updating /etc/fstab succeeded
Jun 13 11:51:10 lvm kudzu: succeeded
Jun 13 11:51:10 lvm sysctl: net.ipv4.ip_forward = 0
Jun 13 11:51:10 lvm sysctl: net.ipv4.conf.default.rp_filter = 1
Jun 13 11:51:10 lvm network: Setting network parameters: succeeded
Jun 13 11:51:10 lvm network: Bringing up interface lo: succeeded
Jun 13 11:51:13 lvm network: Bringing up interface eth0: succeeded
Jun 13 11:51:19 lvm xinetd: xinetd startup succeeded
Jun 13 11:51:21 lvm sendmail: sendmail startup succeeded
Jun 13 11:51:21 lvm gpm: gpm startup succeeded
Jun 13 11:51:21 lvm crond: crond startup succeeded
Jun 13 11:51:22 lvm xfs: xfs startup succeeded
Jun 13 11:51:22 lvm anacron: anacron startup succeeded
Jun 13 11:51:22 lvm atd: atd startup succeeded
4. Summary
Linear growth with LVM under Linux is stable and reliable. With multiple LVs and VGs, online growth of partitions spanning different physical disks can be achieved quickly, and read/write tests on every LV showed no anomalies. After an unexpected system crash, the reiserfs journaling filesystem allows a fast fsck.
LVM cannot currently shrink disk space online: an LV can only be reduced while it is unmounted, the operation carries some risk, and it takes somewhat longer, with the duration depending on the size of the LV.
Original source: http://www.ltesting.net