I. What is RAID
A brief description:
RAID (Redundant Array of Independent Disks): a disk array combines multiple disks into a single array that is used as one logical disk. Data is stored on the member disks in segments or stripes (striping); when data is accessed, the relevant disks in the array work together, which greatly reduces access time and also gives better space utilization. The different techniques a disk array can use are called RAID levels; different levels target different systems and applications, mainly to address data safety. In short, RAID combines several disks into one logical unit, so the operating system simply sees it as a single disk.
II. Advantages and Disadvantages of RAID
Advantages:
1. Higher transfer rates. By storing and reading data on several disks at the same time, RAID greatly increases the storage system's throughput. In a RAID set, many disk drives transfer data simultaneously while logically appearing as a single drive, so RAID can reach several times, tens of times, or even hundreds of times the transfer rate of a single drive. This was the problem RAID originally set out to solve: CPU speed was growing quickly while disk transfer rates could not keep up, so a way was needed to bridge the gap, and RAID succeeded.
2. Fault tolerance through data checking. An ordinary disk drive offers no fault tolerance beyond the CRC (cyclic redundancy check) codes written on the disk itself. RAID fault tolerance is built on top of each drive's own hardware fault tolerance, so it provides a higher level of safety. Many RAID levels include fairly complete mutual verification/recovery mechanisms, or even direct mirroring, which greatly improves the fault tolerance and redundancy of the whole system.
Disadvantages:
1. Depending on the RAID level chosen, some levels have low disk utilization and are therefore expensive.
2. RAID 0 has no redundancy: if one (physical) disk fails, all of the data becomes unusable.
3. RAID 1 has a disk utilization of only 50%, the lowest of all RAID levels.
4. RAID 5 can be seen as a compromise between RAID 0 and RAID 1. It provides data protection, though to a lesser degree than RAID 1, while offering better disk-space utilization than RAID 1.
III. RAID Implementations
External disk array enclosure: most often used with large servers; supports hot swap, but these products are expensive.
Internal disk array (RAID) card: inexpensive, but installation requires more skill, so it is best suited to technical staff.
Software emulation: drags down the machine's performance, so it is not suitable for servers with heavy data traffic.
IV. RAID Levels
RAID 0, data striping --- needs only 2 or more disks; low cost; improves the performance and throughput of the whole set
striping mode: at least two disks are required, and the RAID partitions should ideally be the same size (to exploit the parallelism fully). Data is spread across the different disks, so reads and writes can run concurrently, which gives the best relative read/write performance; but there is no fault tolerance, and the failure of any single disk destroys all the data.
RAID 1, disk mirroring --- while data is written to one disk, a mirror copy is written to a second disk
mirroring: at least two disks are required; the array size equals the smaller of the two RAID partitions (so the partitions are best made the same size), and a hot spare can be added for extra protection. The data is redundant: it is written to both disks at the same time, which provides a built-in backup. Write performance drops somewhat, but reads can run concurrently, so read performance is almost the same as RAID 0.
RAID 2 (similar to RAID 3), Hamming-code check disks --- errors in the data are corrected so that the output is guaranteed to be correct
RAID 3, parallel transfer with parity --- can only detect errors, not correct them
RAID 4, independent disks with block-level parity --- data is accessed in blocks, i.e. per disk; RAID 3 works one stripe (row) at a time, while RAID 4 works one block (column) at a time
RAID 5, independent disks with distributed parity
Needs three or more disks, and a hot spare can be provided for failure recovery. It uses parity, so reliability is good: data is lost completely only when two disks fail at the same time. When a single disk fails, the system rebuilds the data from the stored parity and keeps serving in the meantime; if a hot spare is present, the system also rebuilds the failed disk's data onto the spare automatically.
RAID 6, independent disks with two independent distributed parity schemes
RAID 7, optimized high-speed data transfer --- with its own cache
This is a newer RAID design that carries its own intelligent real-time operating system and storage-management software; it can run completely independently of the host and does not consume host CPU resources. RAID 7 can be viewed as a kind of Storage Computer and is clearly different from the other RAID levels.
RAID 1+0, high reliability combined with high performance
RAID 0+1, high efficiency combined with high performance
The difference between RAID 1+0 and RAID 0+1:
RAID 1+0 mirrors first and then stripes: the disks are grouped into pairs, each pair runs as a minimal RAID 1, and the mirrored pairs are then combined as RAID 0. RAID 0+1 does the opposite: it stripes first and then mirrors, splitting all the disks into two groups, running each group as a minimal RAID 0 and mirroring the two groups as RAID 1. In performance terms RAID 0+1 is often described as slightly faster for reads and writes than RAID 1+0. In reliability terms, when one disk of a RAID 1+0 fails, the remaining three disks keep working; in RAID 0+1, as soon as one disk fails, the other disk in the same RAID 0 group also drops out, leaving only two working disks, so reliability is lower. For this reason RAID 10 is far more common than RAID 01; most retail motherboards support RAID 0/1/5/10 but not RAID 01.
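To make the difference in nesting order concrete, here is a minimal sketch with mdadm (the partitions /dev/sd[b-e]1 and the md device names are placeholders, not the test environment used later in this article; mdadm can also build RAID 10 natively with --level=10):

# RAID 1+0: mirror first, then stripe across the mirrors
mdadm -C /dev/md11 -l1 -n2 /dev/sdb1 /dev/sdc1     # mirror pair 1
mdadm -C /dev/md12 -l1 -n2 /dev/sdd1 /dev/sde1     # mirror pair 2
mdadm -C /dev/md13 -l0 -n2 /dev/md11 /dev/md12     # stripe over the two mirrors

# RAID 0+1: stripe first, then mirror the two stripes
mdadm -C /dev/md21 -l0 -n2 /dev/sdb1 /dev/sdc1     # stripe group 1
mdadm -C /dev/md22 -l0 -n2 /dev/sdd1 /dev/sde1     # stripe group 2
mdadm -C /dev/md23 -l1 -n2 /dev/md21 /dev/md22     # mirror the two stripes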
V. Summary of Common RAID Levels
RAID Level | Performance gain | Redundancy | Space utilization | Minimum disks |
RAID 0 | reads and writes improved | none | 100% | at least 2 |
RAID 1 | reads improved, writes degraded | yes | 50% | at least 2 |
RAID 5 | reads and writes improved | yes | (n-1)/n | at least 3 |
RAID 1+0 | reads and writes improved | yes | 50% | at least 4 |
RAID 0+1 | reads and writes improved | yes | 50% | at least 4 |
RAID 5+0 | reads and writes improved | yes | (n-2)/n | at least 6 |
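As a quick worked example of the space-utilization column (the 10G member size below is only an illustrative assumption):

# assume 4 member partitions of 10G each
# RAID 0   : 4 x 10G        = 40G usable (no redundancy)
# RAID 1   : 10G usable     (every member holds a full copy of the data)
# RAID 5   : (4-1) x 10G    = 30G usable (one member's worth of parity)
# RAID 1+0 : (4/2) x 10G    = 20G usable (half of the raw capacity)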
VI. The mdadm Tool
Overview: mdadm (multiple devices admin) is the standard software RAID management tool on Linux. It is a modal tool (it operates in different modes); the program runs in user space and gives the user a RAID interface to the kernel module, through which the various functions are carried out.
[root@ZhongH100 ~]# lsb_release -a LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch Distributor ID: CentOS Description: CentOS release 6.6 (Final) Release: 6.6 Codename: Final [root@ZhongH100 ~]# rpm -qa | grep mdadm mdadm-3.3-6.el6_6.1.x86_64 [root@ZhongH100 ~]#
Basic syntax of the mdadm command
mdadm [mode] <raid-device> [options] <component-devices>
Currently supported levels
LINEAR (linear mode), RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH and FAULTY
LINEAR: linear mode. This is not a standard RAID level; its main purpose is to combine several small disks into one large disk. Data is written until one disk is full and then continues on the next, while the upper layers simply see one large disk.
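For example, a minimal sketch of a linear array (the partition names are placeholders):

mdadm -C /dev/md0 -a yes --level=linear --raid-devices=2 /dev/sdb1 /dev/sdc1   # sdc1 is appended after sdb1; the capacity is the sum of the two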
Major modes of operation
- Assemble: assemble mode: join together a previously defined array, either one that is in use or one moved from another host
- Build: build an array without per-device superblocks
- Create: create a new array with a superblock on every device
- Follow or Monitor: monitor the state of the RAID; generally only used for levels with redundancy such as RAID 1/4/5/6/10
- Grow: (grow or shrink) change the capacity of the RAID or the number of devices in the array; shrinking generally refers to shrinking or rebuilding the data
- Manage: manage the array (for example add a spare disk or remove a failed disk)
- Incremental Assembly: add a single device to an appropriate array
- Misc: operate on an individual device in an array (for example erase its superblock, or stop the array)
- Auto-detect: this mode does not act on a specific device or array; it asks the Linux kernel to start any auto-detected arrays
Main options (options for selecting a mode):
-A, --assemble: assemble and start a previously defined array
-B, --build: build a legacy array without superblocks
-C, --create: create a new array (each device gets a superblock)
-F, --follow, --monitor: select Monitor mode
-G, --grow: change the size or shape of an active array
-I, --incremental: add a single device to an appropriate array, possibly starting the array
--auto-detect: request the kernel to start any auto-detected arrays
Create mode
-C, --create: create a new array
Dedicated options:
-l #: RAID level
-n #: number of active devices
-a {yes|no}: whether to create the device file automatically
-c #: chunk size, a power of 2, default 64K
-x #: number of spare (idle) disks
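For example (placeholder partitions), the dedicated options above combine as follows to build a RAID 5 with three active members, one spare and a 64K chunk:

mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 -c 64 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1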
Monitor mode
-F, --follow, --monitor: select Monitor mode
-m, --mail: mail address to notify when an alert is raised; the address can also be written into the config file, where it takes effect when the array is started
-p, --program, --alert: run the given program when an event is detected
-y, --syslog: record all events in syslog
-t, --test: generate a test alert for every array found at startup; the alert goes to the mail address or alert program (use this to check that alerts are delivered correctly)
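A typical invocation looks like the following (the mail address and alert script are placeholders; a full run appears in section X below):

mdadm --monitor --mail=root@localhost --program=/root/md.sh --syslog --delay=300 /dev/md0 --daemonise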
Grow mode
-G, --grow: change the size or shape of an active array
-n, --raid-devices=: number of active devices in the array, not counting spares; this number can only be changed with --grow
-x, --spare-devices=: number of spare devices in the initial array
-c, --chunk=: chunk size in kibibytes, default 64; the chunk size is an important parameter that determines how much data is written to each disk of the array at a time
-z, --size=: amount of space to use from each device when building RAID 1/4/5/6; it must be a multiple of the chunk size, and at least 128KB must be left at the end of each device for the RAID superblock
--rounding=: rounding factor for a linear array (== chunk size)
-l, --level=: set the RAID level. With --create: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, multipath, mp. With --build: linear, raid0, 0, stripe.
-p, --layout=: set the parity layout for RAID 5 and RAID 10, and control the failure modes of the faulty personality; for RAID 5 the parity layout can be left-asymmetric, left-symmetric, right-asymmetric, right-symmetric, la, ra, ls, rs; default left-symmetric
--parity: same as --layout=
--assume-clean: currently only used with --build
-R, --run: normally mdadm asks for confirmation when part of the array appears to belong to another array or contain a filesystem; this option skips the confirmation
-f, --force: normally mdadm refuses to create an array with only one device, and when creating a RAID 5 it uses one device as a missing drive; this option overrides that behaviour
-N, --name=: set a name for the array
# Chunk: the size of each data segment the RAID writes per disk (commonly 32/64/128 and so on). Choosing a sensible chunk size matters:
# if the chunk is too large, a single disk's stripe may satisfy most I/O requests, confining reads and writes to one disk and losing RAID's parallelism;
# if the chunk is too small, even tiny I/O requests trigger large numbers of disk operations, parallelism again suffers and controller/bus bandwidth is wasted,
# hurting overall array performance. So choose the stripe (chunk) size according to the actual workload.
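For example, a hedged sketch of growing a RAID 5 from 3 to 4 active members after plugging in a new disk (/dev/sdf1 and the backup-file path are placeholders; the reshape runs in the background and can be watched in /proc/mdstat):

mdadm -a /dev/md0 /dev/sdf1                            # the new member first joins as a spare
mdadm -G /dev/md0 -n 4 --backup-file=/root/md0.bak     # grow onto it; the backup file protects the critical section of the reshape
cat /proc/mdstat                                       # watch the reshape progress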
Assemble mode
-A, --assemble: assemble and start a previously defined array
Misc mode options
-Q, --query: examine a device and report whether it is an md device or a component of an md array
-D, --detail: print the details of one or more md devices
-E, --examine: print the contents of the md superblock on a device
View the details of a RAID array
mdadm -D /dev/md#      # -D is the short form of --detail
Stop a RAID array
mdadm -S /dev/md#      # -S is the short form of --stop
Start (assemble) a RAID array
mdadm -A /dev/md#      # -A (--assemble) starts a previously defined array
Other options
-c, --config=: specify the configuration file; default /etc/mdadm.conf
-s, --scan: scan the configuration file or /proc/mdstat for missing information; default configuration file: /etc/mdadm.conf
-h, --help: help; placed after one of the options above, shows help for that option
-v, --verbose: show more detail; generally used together with --detail or --examine
-b, --brief: less detail; used with --detail and --examine
--help-options: show more detailed help
-V, --version: version information
-q, --quiet: quiet mode; suppresses purely informational messages unless they are important
VII. Creating RAID Arrays
Test environment: CentOS release 6.6 (Final) x86_64; mdadm version: mdadm-3.3-6.el6_6.1.x86_64
Goal: create RAID 0, 1, 5, 6, 10 and a nested RAID 1+0 (two-layer) array
To keep the experiment simple I do not use several physical disks; instead I use multiple disk partitions in a virtual machine. Below I create 4 partitions of 10G each:
[root@ZhongH100 ~]# fdisk /dev/sdb WARNING: DOS-compatible mode is deprecated. It's strongly recommended to' switch off the mode (command 'c') and change display units to sectors (command 'u'). Command (m for help): p Disk /dev/sdb: 64.4 GB, 64424509440 bytes 255 heads, 63 sectors/track, 7832 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xfb1f25cf Device Boot Start End Blocks Id System Command (m for help): n Command action e extended p primary partition (1-4) p Partition number (1-4): 1 First cylinder (1-7832, default 1): Using default value 1 Last cylinder, +cylinders or +size{K,M,G} (1-7832, default 7832): +10G Command (m for help): n Command action e extended p primary partition (1-4) p Partition number (1-4): 2 First cylinder (1307-7832, default 1307): Using default value 1307 Last cylinder, +cylinders or +size{K,M,G} (1307-7832, default 7832): +10G Command (m for help): n Command action e extended p primary partition (1-4) p Partition number (1-4): 3 First cylinder (2613-7832, default 2613): Using default value 2613 Last cylinder, +cylinders or +size{K,M,G} (2613-7832, default 7832): +10G Command (m for help): n Command action e extended p primary partition (1-4) p Selected partition 4 First cylinder (3919-7832, default 3919): Using default value 3919 Last cylinder, +cylinders or +size{K,M,G} (3919-7832, default 7832): +10G Command (m for help): p Disk /dev/sdb: 64.4 GB, 64424509440 bytes 255 heads, 63 sectors/track, 7832 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xfb1f25cf Device Boot Start End Blocks Id System /dev/sdb1 1 1306 10490413+ 83 Linux /dev/sdb2 1307 2612 10490445 83 Linux /dev/sdb3 2613 3918 10490445 83 Linux /dev/sdb4 3919 5224 10490445 83 Linux Command (m for help): w The partition table has been altered! Calling ioctl() to re-read partition table. WARNING: If you have created or modified any DOS 6.x partitions, please see the fdisk manual page for additional information. Syncing disks.
1. Create a RAID 0 device, using /dev/sdb1 /dev/sdb2 /dev/sdb3:
[root@ZhongH100 ~]# mdadm --create /dev/md0 --level=0 --chunk=32 --raid-devices=3 /dev/sdb1 /dev/sdb2 /dev/sdb3   # build /dev/sdb1 sdb2 sdb3 into the RAID 0 array md0
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@ZhongH100 ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdb3[2] sdb2[1] sdb1[0]
      31446688 blocks super 1.2 32k chunks
unused devices: <none>
[root@ZhongH100 ~]# mdadm --stop /dev/md0   # stop the RAID
mdadm: stopped /dev/md0
[root@ZhongH100 ~]# mdadm --misc --zero-superblock /dev/sdb1   # release sdb1
[root@ZhongH100 ~]# mdadm --misc --zero-superblock /dev/sdb2   # release sdb2
[root@ZhongH100 ~]# mdadm --misc --zero-superblock /dev/sdb3   # release sdb3
[root@ZhongH100 ~]# cat /proc/mdstat   # check the RAID arrays on the system
Personalities : [raid0]
unused devices: <none>
[root@ZhongH100 ~]# mdadm -C /dev/md0 -l0 -c32 -n3 /dev/sdb{1,2,3}   # short-form equivalent of the command above
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@ZhongH100 ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdb3[2] sdb2[1] sdb1[0]
      31446688 blocks super 1.2 32k chunks
unused devices: <none>
2. Create a RAID 1 device, using three partitions: /dev/sdb1 /dev/sdb2 /dev/sdb3
[root@ZhongH100 ~]# mdadm --create /dev/md0 --level=1 --chunk=128 --raid-devices=2 --spare-devices=1 /dev/sdb1 /dev/sdb2 /dev/sdb3 mdadm: Note: this array has metadata at the start and may not be suitable as a boot device. If you plan to store '/boot' on this device please ensure that your boot-loader understands md/v1.x metadata, or use --metadata=0.90 Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md0 started. [root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] md0 : active raid1 sdb3[2](S) sdb2[1] sdb1[0] 10482176 blocks super 1.2 [2/2] [UU] [=>...................] resync = 9.5% (1000064/10482176) finish=0.6min speed=250016K/sec unused devices: <none> [root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] md0 : active raid1 sdb3[2](S) sdb2[1] sdb1[0] 10482176 blocks super 1.2 [2/2] [UU] [==========>..........] resync = 53.4% (5600000/10482176) finish=0.3min speed=207407K/sec unused devices: <none> [root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] md0 : active raid1 sdb3[2](S) sdb2[1] sdb1[0] 10482176 blocks super 1.2 [2/2] [UU] unused devices: <none> [root@ZhongH100 ~]# mdadm -C /dev/md0 -l1 -c128 -n2 -x1 /dev/sdb{1,2,3} mdadm: Note: this array has metadata at the start and may not be suitable as a boot device. If you plan to store '/boot' on this device please ensure that your boot-loader understands md/v1.x metadata, or use --metadata=0.90 Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md0 started. [root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] md0 : active raid1 sdb3[2](S) sdb2[1] sdb1[0] 10482176 blocks super 1.2 [2/2] [UU] [>....................] resync = 0.1% (19648/10482176) finish=17.7min speed=9824K/sec unused devices: <none>
3. Create a RAID 5 device, using five partitions: /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdc1
[root@ZhongH100 ~]# fdisk -l /dev/sd{b,c} Disk /dev/sdb: 64.4 GB, 64424509440 bytes 255 heads, 63 sectors/track, 7832 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xfb1f25cf Device Boot Start End Blocks Id System /dev/sdb1 1 1306 10490413+ c W95 FAT32 (LBA) /dev/sdb2 1307 2612 10490445 83 Linux /dev/sdb3 2613 3918 10490445 83 Linux /dev/sdb4 3919 5224 10490445 83 Linux Disk /dev/sdc: 21.5 GB, 21474836480 bytes 255 heads, 63 sectors/track, 2610 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x3e911db5 Device Boot Start End Blocks Id System /dev/sdc1 1 1306 10490413+ 83 Linux /dev/sdc2 1307 2610 10474380 83 Linux
[root@ZhongH100 ~]# mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4 --spare-devices=1 /dev/sdc1 mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md0 started. [root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] md0 : active raid5 sdb4[5] sdc1[4](S) sdb3[2] sdb2[1] sdb1[0] 31446528 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_] [=>...................] recovery = 5.8% (608936/10482176) finish=1.3min speed=121787K/sec unused devices: <none> [root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] md0 : active raid5 sdb4[5] sdc1[4](S) sdb3[2] sdb2[1] sdb1[0] 31446528 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU] unused devices: <none>
[Note] When creating RAID 5 or RAID 6, the array still has to perform an initial check/sync of the disks, and how long this takes depends on the disk sizes. After entering the mdadm command you must look at /proc/mdstat and make sure the progress shown there has finished; in the example above it had only reached 5.8%, and you should wait until the sync is complete before doing anything further.
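Instead of polling /proc/mdstat by hand, you can also let mdadm block until the sync finishes:

mdadm --wait /dev/md0     # -W for short; returns once any resync/recovery/reshape on md0 has completed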
4. Create a RAID 6 device, using partitions /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdc1 /dev/sdc2
[root@ZhongH100 ~]# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4 --spare-devices=2 /dev/sdc1 /dev/sdc2 mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md0 started. [root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] md0 : active raid6 sdc2[5](S) sdc1[4](S) sdb4[3] sdb3[2] sdb2[1] sdb1[0] 20931584 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU] [>....................] resync = 4.7% (494108/10465792) finish=2.6min speed=61763K/sec unused devices: <none> [root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] md0 : active raid6 sdc2[5](S) sdc1[4](S) sdb4[3] sdb3[2] sdb2[1] sdb1[0] 20931584 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU] [==>..................] resync = 14.3% (1502564/10465792) finish=2.4min speed=60102K/sec unused devices: <none>
5. Create a RAID 10 device, using partitions /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdc1
[root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] unused devices: <none> [root@ZhongH100 ~]# mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4 --spare-devices=1 /dev/sdc1 mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md0 started. [root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid10 sdc1[4](S) sdb4[3] sdb3[2] sdb2[1] sdb1[0] 20964352 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU] [>....................] resync = 2.8% (594112/20964352) finish=1.7min speed=198037K/sec unused devices: <none> [root@ZhongH100 ~]#
6. Create a RAID 1+0 device (two-layer nesting), using partitions /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdc1 /dev/sdc2
[root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] unused devices: <none> [root@ZhongH100 ~]# mdadm --create /dev/md0 --level=1 --chunk=128 --raid-devices=2 /dev/sdb1 /dev/sdb2 mdadm: Note: this array has metadata at the start and may not be suitable as a boot device. If you plan to store '/boot' on this device please ensure that your boot-loader understands md/v1.x metadata, or use --metadata=0.90 Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md0 started. [root@ZhongH100 ~]# mdadm --create /dev/md1 --level=1 --chunk=128 --raid-devices=2 /dev/sdb3 /dev/sdb4 mdadm: Note: this array has metadata at the start and may not be suitable as a boot device. If you plan to store '/boot' on this device please ensure that your boot-loader understands md/v1.x metadata, or use --metadata=0.90 Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md1 started. [root@ZhongH100 ~]# mdadm --create /dev/md2 --level=1 --chunk=128 --raid-devices=2 /dev/sdc1 /dev/sdc2 mdadm: Note: this array has metadata at the start and may not be suitable as a boot device. If you plan to store '/boot' on this device please ensure that your boot-loader understands md/v1.x metadata, or use --metadata=0.90 Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md2 started. [root@ZhongH100 ~]# mdadm --create /dev/md2 --level=0 --chunk=128 --raid-devices=3 /dev/md0 /dev/md1 /dev/md2 mdadm: /dev/md2 is already in use. [root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md2 : active raid1 sdc2[1] sdc1[0] 10466176 blocks super 1.2 [2/2] [UU] [====>................] resync = 21.0% (2200064/10466176) finish=0.6min speed=220006K/sec md1 : active raid1 sdb4[1] sdb3[0] 10482240 blocks super 1.2 [2/2] [UU] resync=DELAYED md0 : active raid1 sdb2[1] sdb1[0] 10482176 blocks super 1.2 [2/2] [UU] [==========>..........] resync = 53.4% (5600000/10482176) finish=0.3min speed=207407K/sec unused devices: <none> [root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md3 : active raid0 md2[2] md1[1] md0[0] 31405952 blocks super 1.2 128k chunks md2 : active raid1 sdc2[1] sdc1[0] 10466176 blocks super 1.2 [2/2] [UU] md1 : active raid1 sdb4[1] sdb3[0] 10482240 blocks super 1.2 [2/2] [UU] md0 : active raid1 sdb2[1] sdb1[0] 10482176 blocks super 1.2 [2/2] [UU] unused devices: <none>
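The command that actually created the striping layer is missing from the capture above (note the "/dev/md2 is already in use" error); presumably it was simply re-run against the free device name /dev/md3, along these lines:

mdadm --create /dev/md3 --level=0 --chunk=128 --raid-devices=3 /dev/md0 /dev/md1 /dev/md2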
[root@ZhongH100 ~]# mdadm -D /dev/md0 /dev/md0: Version : 1.2 Creation Time : Fri May 22 14:49:18 2015 Raid Level : raid1 Array Size : 10482176 (10.00 GiB 10.73 GB) Used Dev Size : 10482176 (10.00 GiB 10.73 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Fri May 22 14:51:33 2015 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Name : ZhongH100.wxjr.com.cn:0 (local to host ZhongH100.wxjr.com.cn) UUID : 98e5333b:6ac94997:6916ad42:02cedaad Events : 17 Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 8 18 1 active sync /dev/sdb2 [root@ZhongH100 ~]# mdadm -D /dev/md1 /dev/md1: Version : 1.2 Creation Time : Fri May 22 14:49:27 2015 Raid Level : raid1 Array Size : 10482240 (10.00 GiB 10.73 GB) Used Dev Size : 10482240 (10.00 GiB 10.73 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Fri May 22 14:51:33 2015 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Name : ZhongH100.wxjr.com.cn:1 (local to host ZhongH100.wxjr.com.cn) UUID : f5cadd85:08c02702:aaf14690:aba5c22c Events : 17 Number Major Minor RaidDevice State 0 8 19 0 active sync /dev/sdb3 1 8 20 1 active sync /dev/sdb4 [root@ZhongH100 ~]# mdadm -D /dev/md2 /dev/md2: Version : 1.2 Creation Time : Fri May 22 14:49:35 2015 Raid Level : raid1 Array Size : 10466176 (9.98 GiB 10.72 GB) Used Dev Size : 10466176 (9.98 GiB 10.72 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Fri May 22 14:51:33 2015 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Name : ZhongH100.wxjr.com.cn:2 (local to host ZhongH100.wxjr.com.cn) UUID : 018486a4:9aabc4bc:7c0cb466:d1568080 Events : 17 Number Major Minor RaidDevice State 0 8 33 0 active sync /dev/sdc1 1 8 34 1 active sync /dev/sdc2 [root@ZhongH100 ~]# mdadm -D /dev/md3 /dev/md3: Version : 1.2 Creation Time : Fri May 22 14:51:32 2015 Raid Level : raid0 Array Size : 31405952 (29.95 GiB 32.16 GB) Raid Devices : 3 Total Devices : 3 Persistence : Superblock is persistent Update Time : Fri May 22 14:51:32 2015 State : clean Active Devices : 3 Working Devices : 3 Failed Devices : 0 Spare Devices : 0 Chunk Size : 128K Name : ZhongH100.wxjr.com.cn:3 (local to host ZhongH100.wxjr.com.cn) UUID : cba2da8d:066f4344:e9415254:aad1240a Events : 0 Number Major Minor RaidDevice State 0 9 0 0 active sync /dev/md0 1 9 1 1 active sync /dev/md1 2 9 2 2 active sync /dev/md2
[root@ZhongH100 ~]# mke2fs -t ext4 -b 4096 -L RAID1+0 /dev/md3
mke2fs 1.41.12 (17-May-2010)
Filesystem label=RAID1+0
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=32 blocks, Stripe width=96 blocks
1966080 inodes, 7851488 blocks
392574 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
240 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
# -t specifies the filesystem type
# -b sets the block size; the three possible values are 1024/2048/4096
# -L sets the volume label
[root@ZhongH100 ~]# mkdir /RAID1+0 [root@ZhongH100 ~]# ls / bin data etc lib lost+found media mnt opt RAID1+0 sbin srv tmp var boot dev home lib64 LVtest1 misc net proc root selinux sys usr [root@ZhongH100 ~]# mount /dev/md3 /RAID1+0 [root@ZhongH100 ~]# df -hP Filesystem Size Used Avail Use% Mounted on /dev/mapper/vgzhongH-root 30G 3.3G 25G 12% / tmpfs 932M 0 932M 0% /dev/shm /dev/sda1 477M 34M 418M 8% /boot /dev/mapper/vgzhongH-data 4.8G 10M 4.6G 1% /data /dev/md3 30G 44M 28G 1% /RAID1+0
[root@ZhongH100 ~]# echo -e "$(blkid /dev/md3 | awk '{for(i=1;i<=NF;i++)if($i~/UUID/){print $i}}')\t/RAID1+0\text4\tdefaults\t0 0" >> /etc/fstab [root@ZhongH100 ~]# cat /etc/fstab # # /etc/fstab # Created by anaconda on Thu May 21 17:38:09 2015 # # Accessible filesystems, by reference, are maintained under '/dev/disk' # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info # /dev/mapper/vgzhongH-root / ext4 defaults,acl 1 1 UUID=90e719a1-0cd5-4358-b1cf-4fecfd1a61fd /boot ext4 defaults 1 2 /dev/mapper/vgzhongH-data /data ext4 defaults,acl 1 2 /dev/mapper/vgzhongH-swap swap swap defaults,acl 0 0 tmpfs /dev/shm tmpfs defaults 0 0 devpts /dev/pts devpts gid=5,mode=620 0 0 sysfs /sys sysfs defaults 0 0 proc /proc proc defaults 0 0 UUID="946527de-d2d5-4d91-a465-ec97b97d0b87" /RAID1+0 ext4 defaults 0 0 [root@ZhongH100 ~]#
Here the UUID of md3 is used so that md3 is mounted on the /RAID1+0 directory automatically at boot.
8. Generate the mdadm configuration file
/etc/mdadm.conf is the default configuration file. Its main purpose is to make the software RAID configuration easy to track, and in particular to configure the monitoring and event-reporting options. The Assemble command can also take --config (or its short form -c) to specify a configuration file. The file can usually be created with the commands below.
When arrays are started using the configuration file, mdadm looks up the devices and array definitions in the file and then starts every array it is able to run; if an array device name is given, only that array is started.
[root@ZhongH100 ~]# echo DEVICE /dev/sdb{1,2,3,4} /dev/sdc{1,2} > /etc/mdadm.conf [root@ZhongH100 ~]# mdadm -Ds >>/etc/mdadm.conf [root@ZhongH100 ~]# cat /etc/mdadm.conf DEVICE /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdc1 /dev/sdc2 ARRAY /dev/md0 metadata=1.2 name=ZhongH100.wxjr.com.cn:0 UUID=98e5333b:6ac94997:6916ad42:02cedaad ARRAY /dev/md1 metadata=1.2 name=ZhongH100.wxjr.com.cn:1 UUID=f5cadd85:08c02702:aaf14690:aba5c22c ARRAY /dev/md2 metadata=1.2 name=ZhongH100.wxjr.com.cn:2 UUID=018486a4:9aabc4bc:7c0cb466:d1568080 ARRAY /dev/md3 metadata=1.2 name=ZhongH100.wxjr.com.cn:3 UUID=cba2da8d:066f4344:e9415254:aad1240a
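With this configuration file in place, stopped arrays can later be reassembled from it, for example:

mdadm -A -s          # assemble and start every array listed in /etc/mdadm.conf
mdadm -A /dev/md0    # or assemble just one of them by name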
VIII. Managing RAID Arrays
1. Add a new spare disk to a RAID 5 array (add a disk to the array to act as a hot spare)
First tear down the RAID 1+0 built earlier:
[root@ZhongH100 ~]# mdadm -S /dev/md3 mdadm: stopped /dev/md3 [root@ZhongH100 ~]# mdadm -S /dev/md2 mdadm: stopped /dev/md2 [root@ZhongH100 ~]# mdadm -S /dev/md1 mdadm: stopped /dev/md1 [root@ZhongH100 ~]# mdadm -S /dev/md0 mdadm: stopped /dev/md0 [root@ZhongH100 ~]# mdadm --zero-superblock /dev/sdb1 [root@ZhongH100 ~]# mdadm --zero-superblock /dev/sdb2 [root@ZhongH100 ~]# mdadm --zero-superblock /dev/sdb3 [root@ZhongH100 ~]# mdadm --zero-superblock /dev/sdb4 [root@ZhongH100 ~]# mdadm --zero-superblock /dev/sdc1 [root@ZhongH100 ~]# mdadm --zero-superblock /dev/sdc2
At this point the partitions /dev/sdb{1,2,3,4} and /dev/sdc{1,2} are all free again.
Now create a RAID 5:
[root@ZhongH100 ~]# mdadm --create /dev/md0 -a yes --level=5 --raid-devices=3 /dev/sdb1 /dev/sdb2 /dev/sdb3 --spare-devices=1 /dev/sdc1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@ZhongH100 ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb3[4] sdc1[3](S) sdb2[1] sdb1[0]
      20964352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [========>............]  recovery = 42.0% (4407112/10482176) finish=1.0min speed=93486K/sec
unused devices: <none>
[root@ZhongH100 ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb3[4] sdc1[3](S) sdb2[1] sdb1[0]
      20964352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [================>....]  recovery = 81.7% (8571024/10482176) finish=0.3min speed=94896K/sec
unused devices: <none>
[root@ZhongH100 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri May 22 16:18:14 2015
     Raid Level : raid5
     Array Size : 20964352 (19.99 GiB 21.47 GB)
  Used Dev Size : 10482176 (10.00 GiB 10.73 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Fri May 22 16:20:09 2015
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 512K
           Name : ZhongH100.wxjr.com.cn:0  (local to host ZhongH100.wxjr.com.cn)
           UUID : 887e7116:4a923af2:1f0d24be:362fc61f
         Events : 19
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       4       8       19        2      active sync   /dev/sdb3
       3       8       33        -      spare   /dev/sdc1
[root@ZhongH100 ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb3[4] sdc1[3](S) sdb2[1] sdb1[0]
      20964352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
[root@ZhongH100 ~]# mdadm -a /dev/md0 /dev/sdb4
mdadm: added /dev/sdb4
[root@ZhongH100 ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb4[5](S) sdb3[4] sdc1[3](S) sdb2[1] sdb1[0]
      20964352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
[root@ZhongH100 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri May 22 16:18:14 2015
     Raid Level : raid5
     Array Size : 20964352 (19.99 GiB 21.47 GB)
  Used Dev Size : 10482176 (10.00 GiB 10.73 GB)
   Raid Devices : 3
  Total Devices : 5
    Persistence : Superblock is persistent
    Update Time : Fri May 22 16:25:44 2015
          State : clean
 Active Devices : 3
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 2
         Layout : left-symmetric
     Chunk Size : 512K
           Name : ZhongH100.wxjr.com.cn:0  (local to host ZhongH100.wxjr.com.cn)
           UUID : 887e7116:4a923af2:1f0d24be:362fc61f
         Events : 20
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       4       8       19        2      active sync   /dev/sdb3
       3       8       33        -      spare   /dev/sdc1   # spare disk
       5       8       20        -      spare   /dev/sdb4   # the spare that was just added
Simulate a disk failure: mark /dev/sdb1 as faulty; a spare takes over and a rebuild starts automatically.
[root@ZhongH100 ~]# mdadm -f /dev/md0 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
[root@ZhongH100 ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb4[5] sdb3[4] sdc1[3](S) sdb2[1] sdb1[0](F)
      20964352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      [=>...................]  recovery = 6.5% (688400/10482176) finish=1.8min speed=86050K/sec
unused devices: <none>
[root@ZhongH100 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri May 22 16:28:50 2015
     Raid Level : raid5
     Array Size : 20964352 (19.99 GiB 21.47 GB)
  Used Dev Size : 10482176 (10.00 GiB 10.73 GB)
   Raid Devices : 3
  Total Devices : 5
    Persistence : Superblock is persistent
    Update Time : Fri May 22 16:36:36 2015
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 2
         Layout : left-symmetric
     Chunk Size : 512K
 Rebuild Status : 32% complete
           Name : ZhongH100.wxjr.com.cn:0  (local to host ZhongH100.wxjr.com.cn)
           UUID : 08cc226e:cb0ab8a5:ffea4c14:57ae6a60
         Events : 26
    Number   Major   Minor   RaidDevice State
       5       8       20        0      spare rebuilding   /dev/sdb4   # the RAID 5 is being rebuilt
       1       8       18        1      active sync   /dev/sdb2
       4       8       19        2      active sync   /dev/sdb3
       0       8       17        -      faulty   /dev/sdb1
       3       8       33        -      spare   /dev/sdc1
[root@ZhongH100 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri May 22 16:28:50 2015
     Raid Level : raid5
     Array Size : 20964352 (19.99 GiB 21.47 GB)
  Used Dev Size : 10482176 (10.00 GiB 10.73 GB)
   Raid Devices : 3
  Total Devices : 5
    Persistence : Superblock is persistent
    Update Time : Fri May 22 16:37:56 2015
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 512K
           Name : ZhongH100.wxjr.com.cn:0  (local to host ZhongH100.wxjr.com.cn)
           UUID : 08cc226e:cb0ab8a5:ffea4c14:57ae6a60
         Events : 38
    Number   Major   Minor   RaidDevice State
       5       8       20        0      active sync   /dev/sdb4   # sync finished
       1       8       18        1      active sync   /dev/sdb2
       4       8       19        2      active sync   /dev/sdb3
       0       8       17        -      faulty   /dev/sdb1   # the failed disk
       3       8       33        -      spare   /dev/sdc1   # spare disk
[root@ZhongH100 ~]# mdadm -r /dev/md0 /dev/sdb1 mdadm: hot removed /dev/sdb1 from /dev/md0 [root@ZhongH100 ~]# mdadm -D /dev/md0 /dev/md0: Version : 1.2 Creation Time : Fri May 22 16:28:50 2015 Raid Level : raid5 Array Size : 20964352 (19.99 GiB 21.47 GB) Used Dev Size : 10482176 (10.00 GiB 10.73 GB) Raid Devices : 3 Total Devices : 4 Persistence : Superblock is persistent Update Time : Fri May 22 16:45:52 2015 State : clean Active Devices : 3 Working Devices : 4 Failed Devices : 0 Spare Devices : 1 Layout : left-symmetric Chunk Size : 512K Name : ZhongH100.wxjr.com.cn:0 (local to host ZhongH100.wxjr.com.cn) UUID : 08cc226e:cb0ab8a5:ffea4c14:57ae6a60 Events : 39 Number Major Minor RaidDevice State 5 8 20 0 active sync /dev/sdb4 1 8 18 1 active sync /dev/sdb2 4 8 19 2 active sync /dev/sdb3 3 8 33 - spare /dev/sdc1
[root@ZhongH100 ~]# mdadm -E /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 2950f301:0e7ed3a0:4d0ad552:75ed3235
           Name : ZhongH100.wxjr.com.cn:0  (local to host ZhongH100.wxjr.com.cn)
  Creation Time : Fri May 22 16:50:29 2015
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 20964443 (10.00 GiB 10.73 GB)
     Array Size : 20964352 (19.99 GiB 21.47 GB)
  Used Dev Size : 20964352 (10.00 GiB 10.73 GB)
    Data Offset : 16384 sectors
   Super Offset : 8 sectors
   Unused Space : before=16296 sectors, after=91 sectors
          State : clean
    Device UUID : 6c93f0d8:cd0fc86c:00b453c7:f30dc7a2
    Update Time : Fri May 22 16:52:31 2015
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : e7a5c94 - correct
         Events : 18
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : spare
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
[root@ZhongH100 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri May 22 16:50:29 2015
     Raid Level : raid5
     Array Size : 20964352 (19.99 GiB 21.47 GB)
  Used Dev Size : 10482176 (10.00 GiB 10.73 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Fri May 22 16:52:31 2015
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 512K
           Name : ZhongH100.wxjr.com.cn:0  (local to host ZhongH100.wxjr.com.cn)
           UUID : 2950f301:0e7ed3a0:4d0ad552:75ed3235
         Events : 18
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       4       8       19        2      active sync   /dev/sdb3
       3       8       33        -      spare   /dev/sdc1
[root@ZhongH100 ~]# mkdir /RAID5
[root@ZhongH100 ~]# mount /dev/md0 /RAID5
mount: unknown filesystem type 'linux_raid_member'   # the new RAID 5 had not been formatted yet, hence this error
[root@ZhongH100 ~]# mke2fs -t ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
1310720 inodes, 5241088 blocks
262054 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@ZhongH100 ~]# mount /dev/md0 /RAID5
[root@ZhongH100 ~]# df -hP
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vgzhongH-root   30G  3.3G   25G  12% /
tmpfs                      932M     0  932M   0% /dev/shm
/dev/sda1                  477M   34M  418M   8% /boot
/dev/mapper/vgzhongH-data  4.8G   10M  4.6G   1% /data
/dev/md0                    20G   44M   19G   1% /RAID5
5. Stop a RAID array
[root@ZhongH100 ~]# mdadm -S /dev/md0 mdadm: stopped /dev/md0 [root@ZhongH100 ~]# mdadm --zero-superblock /dev/sdb{1,2,3,4} [root@ZhongH100 ~]# mdadm --zero-superblock /dev/sdc{1,2} mdadm: Unrecognised md component device - /dev/sdc2
6. Delete a RAID array
[root@ZhongH100 ~]# umount /dev/md0
[root@ZhongH100 ~]# df -hP
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vgzhongH-root   30G  3.3G   25G  12% /
tmpfs                      932M     0  932M   0% /dev/shm
/dev/sda1                  477M   34M  418M   8% /boot
/dev/mapper/vgzhongH-data  4.8G   10M  4.6G   1% /data
[root@ZhongH100 ~]# mdadm -Ss /dev/md0
mdadm: stopped /dev/md0
[root@ZhongH100 ~]# mdadm --zero-superblock /dev/sd{b{1,2,3},c1}
# --zero-superblock checks whether the device contains a valid array superblock,
# and if it does, wipes the array information stored in that superblock.
[root@ZhongH100 ~]# rm -rf /etc/mdadm.conf   # delete the RAID configuration file
You are going to execute "/bin/rm -rf /etc/mdadm.conf",please confirm (yes or no):yes
[root@ZhongH100 ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
[root@ZhongH100 ~]#
IX. RAID Optimization
Choosing a good stripe value reduces the block-calculation overhead when data is written later on, and thereby improves RAID performance.
mke2fs -j -b 4096 -E stride=16 /dev/md0   # the stride value is passed through the -E extended-options flag
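The stride and stripe-width values follow directly from the chunk size and the number of data-bearing disks; a sketch of the calculation with example numbers (64K chunk, 4K filesystem blocks, RAID 5 on 4 disks, i.e. 3 data disks):

# stride       = chunk size / filesystem block size  = 64K / 4K   = 16
# stripe_width = stride x number of data disks       = 16 x (4-1) = 48
mke2fs -t ext4 -b 4096 -E stride=16,stripe_width=48 /dev/md0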
X. RAID Monitoring
Configure the mdadm monitor process to poll the MD device every 300 seconds. When an array error occurs, it sends mail to the specified user, runs the event-handling program and logs the reported event to the system log. The --daemonise parameter keeps the program running in the background. Sending mail requires a running sendmail; if the mail address is an external one, first test whether mail can actually be delivered.
[root@ZhongH100 ~]# mdadm --create /dev/md0 -a yes --level=5 --raid-devices=3 /dev/sdb1 /dev/sdb2 /dev/sdb3 --spare-devices=1 /dev/sdc1 mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md0 started. [root@ZhongH100 ~]# cat /proc/mdstat Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdb3[4] sdc1[3](S) sdb2[1] sdb1[0] 20964352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] unused devices: <none> [root@ZhongH100 ~]# mdadm --monitor --mail=root@localhost --program=/root/md.sh --syslog --delay=300 /dev/md0 --daemonise 8975 [root@ZhongH100 ~]# mdadm -f /dev/md0 /dev/sdb1 mdadm: set /dev/sdb1 faulty in /dev/md0 [root@ZhongH100 ~]# mdadm -D /dev/md0 /dev/md0: Version : 1.2 Creation Time : Fri May 22 17:01:54 2015 Raid Level : raid5 Array Size : 20964352 (19.99 GiB 21.47 GB) Used Dev Size : 10482176 (10.00 GiB 10.73 GB) Raid Devices : 3 Total Devices : 4 Persistence : Superblock is persistent Update Time : Fri May 22 17:11:18 2015 State : clean, degraded, recovering Active Devices : 2 Working Devices : 3 Failed Devices : 1 Spare Devices : 1 Layout : left-symmetric Chunk Size : 512K Rebuild Status : 61% complete Name : ZhongH100.wxjr.com.cn:0 (local to host ZhongH100.wxjr.com.cn) UUID : 77398670:7b977386:cd786e3e:10f1c91c Events : 29 Number Major Minor RaidDevice State 3 8 33 0 spare rebuilding /dev/sdc1 1 8 18 1 active sync /dev/sdb2 4 8 19 2 active sync /dev/sdb3 0 8 17 - faulty /dev/sdb1 [root@ZhongH100 ~]# tail .-f /var/log/messages May 22 17:10:48 ZhongH100 kernel: md/raid:md0: Operation continuing on 2 devices. May 22 17:10:48 ZhongH100 kernel: md: recovery of RAID array md0 May 22 17:10:48 ZhongH100 kernel: md: minimum _guaranteed_ speed: 1000 KB/sec/disk. May 22 17:10:48 ZhongH100 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery. May 22 17:10:48 ZhongH100 kernel: md: using 128k window, over a total of 10482176k. May 22 17:10:49 ZhongH100 mdadm[8975]: Fail event detected on md device /dev/md0, component device /dev/sdb1 May 22 17:10:49 ZhongH100 mdadm[8975]: RebuildStarted event detected on md device /dev/md0 May 22 17:11:41 ZhongH100 kernel: md: md0: recovery done. May 22 17:11:41 ZhongH100 mdadm[8975]: RebuildFinished event detected on md device /dev/md0 May 22 17:11:41 ZhongH100 mdadm[8975]: SpareActive event detected on md device /dev/md0, component device /dev/sdc1 ^C [root@ZhongH100 ~]# mail Heirloom Mail version 12.4 7/29/08. Type ? for help. "/var/spool/mail/root": 1 message 1 new >N 1 mdadm monitoring Fri May 22 17:10 30/1137 "Fail event on /dev/md0:ZhongH100.wxjr.com.cn" & 1 Message 1: From root@ZhongH100.wxjr.com.cn Fri May 22 17:10:49 2015 Return-Path: <root@ZhongH100.wxjr.com.cn> X-Original-To: root@localhost Delivered-To: root@localhost.wxjr.com.cn From: mdadm monitoring <root@ZhongH100.wxjr.com.cn> To: root@localhost.wxjr.com.cn Subject: Fail event on /dev/md0:ZhongH100.wxjr.com.cn Date: Fri, 22 May 2015 17:10:49 +0800 (CST) Status: R This is an automatically generated mail message from mdadm running on ZhongH100.wxjr.com.cn A Fail event had been detected on md device /dev/md0. It could be related to component device /dev/sdb1. Faithfully yours, etc. P.S. The /proc/mdstat file currently contains the following: Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdb3[4] sdc1[3] sdb2[1] sdb1[0](F) 20964352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU] [>....................] 
recovery = 0.0% (0/10482176) finish=10918.9min speed=0K/sec unused devices: <none> & q Held 1 message in /var/spool/mail/root
XI. Growing a RAID Array
If you do not want to use an entire block device when creating an array, you can specify how much of each block device the array should use. Later, when the array needs to grow, use grow mode (--grow, or its short form -G) together with the --size option (short form -z) and a suitable value to enlarge the amount of space the array uses on each block device.
[root@ZhongH100 ~]# mdadm --create /dev/md0 -a yes --level=5 --raid-devices=3 /dev/sdb1 /dev/sdb2 /dev/sdb3 --spare-devices=1 /dev/sdc1 --size=1024000^C [root@ZhongH100 ~]# mdadm -C /dev/md0 -a yes -l5 -n3 /dev/sdb{1,2,3} -x1 /dev/sdc1 --size=1024000 mdadm: largest drive (/dev/sdb1) exceeds size (1024000K) by more than 1% Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md0 started. [root@ZhongH100 ~]# mdadm -D /dev/md0 /dev/md0: Version : 1.2 Creation Time : Fri May 22 17:21:37 2015 Raid Level : raid5 Array Size : 2048000 (2000.34 MiB 2097.15 MB) Used Dev Size : 1024000 (1000.17 MiB 1048.58 MB) Raid Devices : 3 Total Devices : 4 Persistence : Superblock is persistent Update Time : Fri May 22 17:21:50 2015 State : clean, degraded, recovering Active Devices : 2 Working Devices : 4 Failed Devices : 0 Spare Devices : 2 Layout : left-symmetric Chunk Size : 512K Rebuild Status : 91% complete Name : ZhongH100.wxjr.com.cn:0 (local to host ZhongH100.wxjr.com.cn) UUID : 7010df04:db91f301:f82dc99b:9f52b389 Events : 15 Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 8 18 1 active sync /dev/sdb2 4 8 19 2 spare rebuilding /dev/sdb3 3 8 33 - spare /dev/sdc1 [root@ZhongH100 ~]# mdadm --grow /dev/md0 --size=2048000 mdadm: component size of /dev/md0 has been set to 2048000K [root@ZhongH100 ~]# mdadm -D /dev/md0 /dev/md0: Version : 1.2 Creation Time : Fri May 22 17:21:37 2015 Raid Level : raid5 Array Size : 4096000 (3.91 GiB 4.19 GB) Used Dev Size : 2048000 (2000.34 MiB 2097.15 MB) Raid Devices : 3 Total Devices : 4 Persistence : Superblock is persistent Update Time : Fri May 22 17:22:21 2015 State : clean Active Devices : 3 Working Devices : 4 Failed Devices : 0 Spare Devices : 1 Layout : left-symmetric Chunk Size : 512K Name : ZhongH100.wxjr.com.cn:0 (local to host ZhongH100.wxjr.com.cn) UUID : 7010df04:db91f301:f82dc99b:9f52b389 Events : 28 Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 8 18 1 active sync /dev/sdb2 4 8 19 2 active sync /dev/sdb3 3 8 33 - spare /dev/sdc1 [root@ZhongH100 ~]#
Finally, a more elaborate exercise: wipe /dev/sdb, carve it into sixteen small GPT partitions and nest them into RAID 1 pairs, two RAID 6 groups and a RAID 0 on top (a RAID "1+6+0"):
dd if=/dev/zero of=/dev/sdb bs=514 count=1
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart raid1 0G 3G
parted -s /dev/sdb mkpart raid1 3G 6G
parted -s /dev/sdb mkpart raid1 6G 9G
parted -s /dev/sdb mkpart raid1 9G 12G
parted -s /dev/sdb mkpart raid1 12G 15G
parted -s /dev/sdb mkpart raid1 15G 18G
parted -s /dev/sdb mkpart raid1 18G 21G
parted -s /dev/sdb mkpart raid1 21G 24G
parted -s /dev/sdb mkpart raid1 24G 27G
parted -s /dev/sdb mkpart raid1 27G 30G
parted -s /dev/sdb mkpart raid1 30G 33G
parted -s /dev/sdb mkpart raid1 33G 36G
parted -s /dev/sdb mkpart raid1 36G 39G
parted -s /dev/sdb mkpart raid1 39G 42G
parted -s /dev/sdb mkpart raid1 42G 45G
parted -s /dev/sdb mkpart raid1 45G 48G
mdadm -C /dev/md1 -a yes -l1 -c128 -n2 /dev/sdb{1,2}
mdadm -C /dev/md2 -a yes -l1 -c128 -n2 /dev/sdb{3,4}
mdadm -C /dev/md3 -a yes -l1 -c128 -n2 /dev/sdb{5,6}
mdadm -C /dev/md4 -a yes -l1 -c128 -n2 /dev/sdb{7,8}
mdadm -C /dev/md5 -a yes -l1 -c128 -n2 /dev/sdb{9,10}
mdadm -C /dev/md6 -a yes -l1 -c128 -n2 /dev/sdb{11,12}
mdadm -C /dev/md7 -a yes -l1 -c128 -n2 /dev/sdb{13,14}
mdadm -C /dev/md8 -a yes -l1 -c128 -n2 /dev/sdb{15,16}
mdadm -C /dev/md9 -l6 -n4 /dev/md[1-4]
mdadm -C /dev/md10 -l6 -n4 /dev/md[5-8]
mdadm -C /dev/md11 -l0 -n2 /dev/md{9,10}
[root@ZhongH100 ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md11 : active raid0 md10[1] md9[0]
      11692032 blocks super 1.2 512k chunks
md10 : active raid6 md8[3] md7[2] md6[1] md5[0]
      5851136 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
md9 : active raid6 md4[3] md3[2] md2[1] md1[0]
      5849088 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
md8 : active raid1 sdb16[1] sdb15[0]
      2927616 blocks super 1.2 [2/2] [UU]
md7 : active raid1 sdb14[1] sdb13[0]
      2927616 blocks super 1.2 [2/2] [UU]
md6 : active raid1 sdb12[1] sdb11[0]
      2927616 blocks super 1.2 [2/2] [UU]
md5 : active raid1 sdb10[1] sdb9[0]
      2927616 blocks super 1.2 [2/2] [UU]
md4 : active raid1 sdb8[1] sdb7[0]
      2927616 blocks super 1.2 [2/2] [UU]
md3 : active raid1 sdb6[1] sdb5[0]
      2927616 blocks super 1.2 [2/2] [UU]
md2 : active raid1 sdb4[1] sdb3[0]
      2927616 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdb2[1] sdb1[0]
      2926592 blocks super 1.2 [2/2] [UU]
unused devices: <none>
Enough showing off; now some questions:
How much usable space does this RAID 1+6+0 provide once mounted?
At most how many partitions can this RAID 1+6+0 lose at the same time and keep working?
This is the end of the article.
I look forward to your answers to the questions above; if you have the answer, please leave it in the comments.