Configuring NFS high availability with keepalived + Ceph RBD

The idea is fairly simple: export NFS on top of a Ceph RBD image, then configure keepalived to float a VIP between hosts that have the same RBD image mapped.

1. Create and map the RBD block device

The test machines are the osd1 and osd2 hosts in the Ceph cluster, with IPs 192.168.32.3 and 192.168.32.4 respectively; the VIP is 192.168.32.5.

First create an RBD image, then map that same image on both machines:

[root@osd1 keepalived]# rbd ls test
test.img
[root@osd1 keepalived]# rbd showmapped
id pool image    snap device    
0  test test.img -    /dev/rbd0 
[root@osd2 keepalived]# rbd showmapped
id pool image    snap device    
0  test test.img -    /dev/rbd0
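
For reference, the image shown above can be created and mapped with commands roughly like the following (a minimal sketch; the 100 GB size is an assumption based on the df output later in the post):

rbd create test/test.img --size 102400    # size in MB; 100 GB is an assumption
rbd map test/test.img                     # run the map on both osd1 and osd2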

Then format /dev/rbd0 and mount it, for example on /mnt.
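
A minimal sketch of that step, assuming an ext4 filesystem (the post does not say which filesystem is used):

mkfs.ext4 /dev/rbd0    # format once, on one node only
mkdir -p /mnt
mount /dev/rbd0 /mnt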

2. Configure keepalived

Download the latest keepalived 1.2.15 from http://www.keepalived.org/download.html. Installation is straightforward: just follow the INSTALL file for a default source build (a rough sketch of the build is shown after the commands below). To make the keepalived service easier to manage, do the following:

cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
mkdir /etc/keepalived
cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/sbin/keepalived /usr/sbin/
chkconfig --add keepalived
chkconfig keepalived on
chkconfig --list keepalived
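For reference, the source build that precedes the copies above looks roughly like this (the tarball name is inferred from the version number and may differ):

tar xzf keepalived-1.2.15.tar.gz
cd keepalived-1.2.15
./configure
make
make install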
Edit /etc/keepalived/keepalived.conf. The configuration on osd1 is as follows:
[root@osd1 keepalived]# cat keepalived.conf
global_defs
{
    notification_email
    {
    }
    router_id osd1
}

vrrp_instance VI_1 {
    state MASTER
    interface em1
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.32.5/24
    }
}
The configuration on osd2 is as follows:
[root@osd2 keepalived]# cat keepalived.conf 
global_defs
{
    notification_email
    {
    #    admin@example.com
    }
    #notification_email_from admin@example.com
    #smtp_server 127.0.0.1
    #smtp_connect_timeout 30
    router_id osd2
}

vrrp_instance VI_1 {
    state BACKUP
    interface em1
    virtual_router_id 100
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    notify_master "/etc/keepalived/ChangeToMaster.sh"
    notify_backup "/etc/keepalived/ChangeToBackup.sh"
    virtual_ipaddress {
        192.168.32.5/24
    }
}
Two notify scripts were written on osd2; they run when osd2's keepalived state changes. ChangeToMaster.sh:
[root@osd2 keepalived]# cat ChangeToMaster.sh 
#!/bin/bash
# osd2 becomes MASTER: the RBD image is already mapped and mounted locally,
# so just start NFS here and re-point the client (lm) at the VIP
service nfs start
ssh lm "umount -f /mnt"
ssh lm "mount -t nfs 192.168.32.5:/mnt /mnt"
ChangeToBackup.sh:
[root@osd2 keepalived]# cat ChangeToBackup.sh 
#!/bin/bash
# osd2 falls back to BACKUP: hand the NFS service back to osd1.
# Detach the client, restart NFS on osd1 with a freshly mapped RBD image,
# re-point the client at the VIP, then refresh the local mapping on osd2.
ssh lm "umount -f /mnt"
ssh osd1 "service nfs stop"
ssh osd1 "umount /mnt"
ssh osd1 "rbd unmap /dev/rbd0"
ssh osd1 "rbd map test/test.img"
ssh osd1 "mount /dev/rbd0 /mnt"
ssh osd1 "service nfs start"
ssh lm "mount -t nfs 192.168.32.5:/mnt /mnt"
# refresh the local mapping and mount on osd2 as well
service nfs stop
umount /mnt
rbd unmap /dev/rbd0
rbd map test/test.img
mount /dev/rbd0 /mnt
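
With the configuration files and notify scripts in place, start keepalived on both nodes (the original post does not show this step, but with the init script installed above it is simply):

service keepalived start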

3. Configure NFS

On one Ceph node, map a block device with rbd map, then format it and mount it on a directory such as /mnt (as done in step 1). Install the NFS packages on that node:

yum -y install nfs-utils

Configure the exported directory:

cat /etc/exports 
/mnt 192.168.101.157(rw,async,no_subtree_check,no_root_squash)
/mnt 192.168.108.4(rw,async,no_subtree_check,no_root_squash)

Start the service and export the shares:

service nfs start
chkconfig nfs on
exportfs -r

Check from the client:

showmount -e mon0
Export list for mon0:
/mnt 192.168.108.4,192.168.101.157
Then mount it:
mount -t nfs mon0:/mnt /mnt

Note that NFS uses the UDP protocol by default; if the network is unstable, simply switch to TCP:

mount -t nfs mon0:/mnt /mnt -o proto=tcp -o nolock

4. Testing

Shut down interface em1 on osd1 and check the result:

[root@osd1 keepalived]# ifdown em1
[root@osd1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether c8:1f:66:de:5e:65 brd ff:ff:ff:ff:ff:ff
Check the interface on osd2:
[root@osd2 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether c8:1f:66:f7:61:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.4/24 brd 192.168.32.255 scope global em1
       valid_lft forever preferred_lft forever
    inet 192.168.32.5/24 scope global secondary em1
       valid_lft forever preferred_lft forever
    inet6 fe80::ca1f:66ff:fef7:615d/64 scope link 
       valid_lft forever preferred_lft forever
The VIP has floated to osd2. Check the mount status on the client:
[root@lm /]# df -hT
Filesystem        Type   Size  Used Avail Use% Mounted on
/dev/sda1         ext4   454G   79G  353G  19% /
tmpfs             tmpfs  1.7G  4.6M  1.7G   1% /dev/shm
192.168.32.5:/mnt nfs    100G   21G   80G  21% /mnt
Bring em1 on osd1 back up:
[root@osd1 keepalived]# ifup em1
Determining if ip address 192.168.32.3 is already in use for device em1...
[root@osd1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether c8:1f:66:de:5e:65 brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.3/24 brd 192.168.32.255 scope global em1
       valid_lft forever preferred_lft forever
    inet 192.168.32.5/24 scope global secondary em1
       valid_lft forever preferred_lft forever
    inet6 fe80::ca1f:66ff:fede:5e65/64 scope link 
       valid_lft forever preferred_lft forever
em1 on osd2:
[root@osd2 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether c8:1f:66:f7:61:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.4/24 brd 192.168.32.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::ca1f:66ff:fef7:615d/64 scope link 
       valid_lft forever preferred_lft forever
The client now:
[root@lm /]# df -hT
Filesystem        Type   Size  Used Avail Use% Mounted on
/dev/sda1         ext4   454G   79G  353G  19% /
tmpfs             tmpfs  1.7G  4.6M  1.7G   1% /dev/shm
192.168.32.5:/mnt nfs    100G   21G   80G  21% /mnt
[root@lm /]# ls /mnt
31.txt  a.txt  b.txt  c.txt  etc  linux-3.17.4  linux-3.17.4.tar  m2.txt  test.img  test.img2

A VIP can also be configured for iSCSI on top of Ceph RBD; that will be written up later.
