
Configuring a Single-Node Ceph Cluster over IPv6

柠檬lemon · published 2017/07/14 09:16

Introduction

Why set up a Ceph environment on IPv6 all of a sudden? Pure coincidence. A project called for building a file system on IPv6, and unfortunately Hadoop does not support it (I had always thought of Hadoop as fairly powerful). After a good deal of fiddling, Ceph gave me hope. Enough small talk; straight to the point.

Test Environment

  1. Linux distribution: CentOS Linux release 7.2.1511 (Core)
  2. Ceph version: 0.94.9 (hammer)
    1. I originally picked the latest jewel release. The environment configured fine, but when using Ceph's object storage it could not be accessed over IPv6, failing with an error like the one below. Some digging showed this to be a bug in jewel that was still being fixed. A general piece of advice: in production, try not to run the very latest release.

      set_ports_option:[::]8888:invalid port sport spec

Preflight Checks

Network Configuration

Refer to my earlier article, "CentOS7 设置静态IPv6/IPv4地址" (setting static IPv6/IPv4 addresses on CentOS 7), to complete the network configuration.
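That article is not reproduced here; for orientation, the gist on CentOS 7 is an ifcfg file along these lines (a sketch: the interface name eno1 is a placeholder and the /64 prefix is assumed, neither taken from the original post):

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eno1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eno1
ONBOOT=yes
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6ADDR=2001:250:4402:2001:20c:29ff:fe25:8888/64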

Set the Hostname

 


[root@localhost ~]# hostnamectl set-hostname ceph001   # ceph001 is whatever name you want
[root@localhost ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
2001:250:4402:2001:20c:29ff:fe25:8888 ceph001   # new entry; this is ceph001's static IPv6 address
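It is worth confirming that the name actually resolves to the IPv6 address before moving on. A quick check (the output shown is what you should expect, not captured from the original run):

[root@ceph001 ~]# getent hosts ceph001
2001:250:4402:2001:20c:29ff:fe25:8888 ceph001
[root@ceph001 ~]# ping6 -c 2 ceph001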

Switch the yum Repositories

For various reasons the official yum mirrors can be slow to download from, so we switch to the Aliyun mirrors.

 


[root@localhost ~]# yum clean all   # clear out the existing yum metadata
[root@localhost ~]# rm -rf /etc/yum.repos.d/*.repo
[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo   # fetch the Aliyun base repo
[root@localhost ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo   # fetch the Aliyun EPEL repo
[root@localhost ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
[root@localhost ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
[root@localhost ~]# sed -i 's/$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo
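Optionally, a quick check that the replacement repositories are active before moving on (an illustrative step, not in the original):

[root@localhost ~]# yum repolist enabled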

 

Add the Ceph Repository

 


[root@localhost ~]# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/x86_64/   # pick the release you want to install here
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/noarch/   # pick the release you want to install here
gpgcheck=0
[root@localhost ~]# yum makecache

Install ceph and ceph-deploy

 


[root@localhost ~]# yum install ceph ceph-deploy
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ceph.x86_64 1:0.94.9-0.el7 will be installed
--> Processing Dependency: librbd1 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-rbd = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-cephfs = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: libcephfs1 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: librados2 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-rados = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: ceph-common = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-requests for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: python-flask for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: redhat-lsb-core for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: hdparm for package: 1:ceph-0.94.9-0.el7.x86_64
--> Processing Dependency: libcephfs.so.1()(64bit) for package: 1:ceph-0.94.9-0.el7.x86_64
.......
Dependencies Resolved
=======================================================================================
 Package                 Arch     Version          Repository    Size
=======================================================================================
Installing:
 ceph                    x86_64   1:0.94.9-0.el7   ceph          20 M
 ceph-deploy             noarch   1.5.36-0         ceph-noarch   283 k
Installing for dependencies:
 boost-program-options   x86_64   1.53.0-25.el7    base          155 k
 ceph-common             x86_64   1:0.94.9-0.el7   ceph          7.2 M
...
Transaction Summary
=======================================================================================
Install 2 Packages (+24 Dependent packages)
Upgrade ( 2 Dependent packages)
Total download size: 37 M
Is this ok [y/d/N]: y
Downloading packages:
No Presto metadata available for ceph
warning: /var/cache/yum/x86_64/7/base/packages/boost-program-options-1.53.0-25.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for boost-program-options-1.53.0-25.el7.x86_64.rpm is not installed
(1/28): boost-program-options-1.53.0-25.el7.x86_64.rpm | 155 kB 00:00:00
(2/28): hdparm-9.43-5.el7.x86_64.rpm | 83 kB 00:00:00
(3/28): ceph-deploy-1.5.36-0.noarch.rpm | 283 kB 00:00:00
(4/28): leveldb-1.12.0-11.el7.x86_64.rpm | 161 kB 00:00:00
...
---------------------------------------------------------------------------------------
Total 718 kB/s | 37 MB 00:53
Retrieving key from http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 From : http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
Is this ok [y/N]: y
...
Complete!

Verify the Installed Versions

 


[root@localhost ~]# ceph-deploy --version
1.5.36
[root@localhost ~]# ceph -v
ceph version 0.94.9 (fe6d859066244b97b24f09d46552afc2071e6f90)

Install NTP (in a multi-node setup you would also configure an NTP server and clients), and configure selinux and firewalld

 


[root@localhost ~]# yum install ntp
[root@localhost ~]# sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
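The transcript above only installs NTP. Presumably the daemon should also be enabled and started so clock drift does not cause monitor warnings later; a sketch of what that looks like on CentOS 7 with the ntp package:

[root@localhost ~]# systemctl enable ntpd
[root@localhost ~]# systemctl start ntpd
[root@localhost ~]# ntpq -p   # confirm time sources are reachable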

Create the Ceph Cluster

On the admin node (ceph001):

[root@ceph001 ~]# mkdir cluster
[root@ceph001 ~]# cd cluster/
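ceph-deploy drives every node over SSH, even in a single-node setup where the admin node deploys to itself; the next command checks for this (note "making sure passwordless SSH succeeds" in the log below). If root cannot yet ssh to ceph001 without a password, a sketch of the setup:

[root@ceph001 ~]# ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
[root@ceph001 ~]# ssh-copy-id root@ceph001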

Create the cluster

 


[root@ceph001 cluster]# ceph-deploy new ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy new ceph001
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0xfe0668>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x104c680>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['ceph001']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO ] Running command: /usr/sbin/ip link show
[ceph001][INFO ] Running command: /usr/sbin/ip addr show
[ceph001][DEBUG ] IP addresses found: [u'192.168.122.1', u'49.123.105.124']
[ceph_deploy.new][DEBUG ] Resolving host ceph001
[ceph_deploy.new][DEBUG ] Monitor ceph001 at 2001:250:4402:2001:20c:29ff:fe25:8888
[ceph_deploy.new][INFO ] Monitors are IPv6, binding Messenger traffic on IPv6
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph001']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['[2001:250:4402:2001:20c:29ff:fe25:8888]']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@ceph001 cluster]# ll
total 12
-rw-r--r--. 1 root root 244 Nov 6 21:54 ceph.conf
-rw-r--r--. 1 root root 3106 Nov 6 21:54 ceph-deploy-ceph.log
-rw-------. 1 root root 73 Nov 6 21:54 ceph.mon.keyring
[root@ceph001 cluster]# cat ceph.conf
[global]
fsid = 865e6b01-b0ea-44da-87a5-26a4980aa7a8
ms_bind_ipv6 = true
mon_initial_members = ceph001
mon_host = [2001:250:4402:2001:20c:29ff:fe25:8888]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
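Two generated lines are what make this an IPv6 cluster: ms_bind_ipv6 = true tells the Ceph messenger to bind IPv6 sockets, and mon_host carries the monitor address in brackets. If the cluster later grows beyond one node, it may also be worth pinning the public network explicitly; Ceph accepts an IPv6 CIDR here (a sketch only, with the /64 prefix assumed from the address above):

[global]
public_network = 2001:250:4402:2001::/64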

Since this is a single-node deployment, change the default replica count from 3 down to 1.

 


[root@ceph001 cluster]# echo "osd_pool_default_size = 1" >> ceph.conf
[root@ceph001 cluster]# ceph-deploy --overwrite-conf config push ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy --overwrite-conf config push ceph001
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : push
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x14f9710>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['ceph001']
[ceph_deploy.cli][INFO ] func : <function config at 0x14d42a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.config][DEBUG ] Pushing config to ceph001
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

Create the Monitor Node

Use ceph001 as the monitor node.

 


[root@ceph001 cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x23865a8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x237e578>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph001
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph001 ...
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core
[ceph001][DEBUG ] determining if provided host has same hostname in remote
[ceph001][DEBUG ] get remote short hostname
[ceph001][DEBUG ] deploying mon to ceph001
[ceph001][DEBUG ] get remote short hostname
[ceph001][DEBUG ] remote hostname: ceph001
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph001][DEBUG ] create the mon path if it does not exist
[ceph001][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph001/done
[ceph001][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph001/done
[ceph001][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph001.mon.keyring
[ceph001][DEBUG ] create the monitor keyring file
[ceph001][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i ceph001 --keyring /var/lib/ceph/tmp/ceph-ceph001.mon.keyring
[ceph001][DEBUG ] ceph-mon: mon.noname-a [2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0 is local, renaming to mon.ceph001
[ceph001][DEBUG ] ceph-mon: set fsid to 865e6b01-b0ea-44da-87a5-26a4980aa7a8
[ceph001][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph001 for mon.ceph001
[ceph001][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph001.mon.keyring
[ceph001][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph001][DEBUG ] create the init path if it does not exist
[ceph001][DEBUG ] locating the `service` executable...
[ceph001][INFO ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.ceph001
[ceph001][DEBUG ] === mon.ceph001 ===
[ceph001][DEBUG ] Starting Ceph mon.ceph001 on ceph001...
[ceph001][WARNIN] Running as unit ceph-mon.ceph001.1478441156.735105300.service.
[ceph001][DEBUG ] Starting ceph-create-keys on ceph001...
[ceph001][INFO ] Running command: systemctl enable ceph
[ceph001][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[ceph001][WARNIN] Executing /sbin/chkconfig ceph on
[ceph001][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status
[ceph001][DEBUG ] ********************************************************************************
[ceph001][DEBUG ] status for monitor: mon.ceph001
[ceph001][DEBUG ] {
[ceph001][DEBUG ]   "election_epoch": 2,
[ceph001][DEBUG ]   "extra_probe_peers": [],
[ceph001][DEBUG ]   "monmap": {
[ceph001][DEBUG ]     "created": "0.000000",
[ceph001][DEBUG ]     "epoch": 1,
[ceph001][DEBUG ]     "fsid": "865e6b01-b0ea-44da-87a5-26a4980aa7a8",
[ceph001][DEBUG ]     "modified": "0.000000",
[ceph001][DEBUG ]     "mons": [
[ceph001][DEBUG ]       {
[ceph001][DEBUG ]         "addr": "[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0",
[ceph001][DEBUG ]         "name": "ceph001",
[ceph001][DEBUG ]         "rank": 0
[ceph001][DEBUG ]       }
[ceph001][DEBUG ]     ]
[ceph001][DEBUG ]   },
[ceph001][DEBUG ]   "name": "ceph001",
[ceph001][DEBUG ]   "outside_quorum": [],
[ceph001][DEBUG ]   "quorum": [
[ceph001][DEBUG ]     0
[ceph001][DEBUG ]   ],
[ceph001][DEBUG ]   "rank": 0,
[ceph001][DEBUG ]   "state": "leader",
[ceph001][DEBUG ]   "sync_provider": []
[ceph001][DEBUG ] }
[ceph001][DEBUG ] ********************************************************************************
[ceph001][INFO ] monitor: mon.ceph001 is running
[ceph001][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.ceph001
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph001 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpgY2IT7
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] get remote short hostname
[ceph001][DEBUG ] fetch remote file
[ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph001.asok mon_status
[ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.admin
[ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-mds
[ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-osd
[ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpgY2IT7

Check the cluster status

 


[root@ceph001 cluster]# ceph -s
    cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8
     health HEALTH_ERR
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 2, quorum 0 ceph001
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
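HEALTH_ERR is expected at this point: the default rbd pool's 64 placement groups have nowhere to go until at least one OSD is up, which is exactly what the next section adds.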

 

Add OSDs

List the available disks

 


[root@ceph001 cluster]# ceph-deploy disk list ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy disk list ceph001
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1c79bd8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function disk at 0x1c70e60>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('ceph001', None, None)]
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph001...
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO ] Running command: /usr/sbin/ceph-disk list
[ceph001][DEBUG ] /dev/sda :
[ceph001][DEBUG ] /dev/sda1 other, xfs, mounted on /boot
[ceph001][DEBUG ] /dev/sda2 other, LVM2_member
[ceph001][DEBUG ] /dev/sdb other, unknown
[ceph001][DEBUG ] /dev/sdc other, unknown
[ceph001][DEBUG ] /dev/sdd other, unknown
[ceph001][DEBUG ] /dev/sr0 other, iso9660

Add the first OSD (/dev/sdb)

 


[root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy disk zap ceph001:/dev/sdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1b14bd8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function disk at 0x1b0be60>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('ceph001', '/dev/sdb', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph001
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph001][DEBUG ] zeroing last few blocks of device
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/sdb
[ceph001][DEBUG ] Creating new GPT entries.
[ceph001][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph001][DEBUG ] other utilities.
[ceph001][DEBUG ] Creating new GPT entries.
[ceph001][DEBUG ] The operation has completed successfully.
[ceph001][WARNIN] partx: specified range <1:0> does not make sense
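The trailing partx complaint is expected noise from ceph-disk re-reading the partition table of a freshly zapped disk; since the zap itself reports success, it can generally be ignored.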

 

 


[root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy osd create ceph001:/dev/sdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('ceph001', '/dev/sdb', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x19b6680>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : <function osd at 0x19aade8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdb:
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdb journal None activate True
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[ceph001][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph001][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:ae307314-3a81-4da2-974b-b21c24d9bba1 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[ceph001][DEBUG ] The operation has completed successfully.
[ceph001][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb
[ceph001][WARNIN] partx: /dev/sdb: error adding partition 2
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[ceph001][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1
[ceph001][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1
[ceph001][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:16a6298d-59bb-4190-867a-10a5b519e7c0 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdb
[ceph001][DEBUG ] The operation has completed successfully.
[ceph001][WARNIN] INFO:ceph-disk:calling partx on created device /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb
[ceph001][WARNIN] partx: /dev/sdb: error adding partitions 1-2
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[ceph001][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[ceph001][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[ceph001][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[ceph001][DEBUG ] = crc=0 finobt=0
[ceph001][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[ceph001][DEBUG ] = sunit=0 swidth=0 blks
[ceph001][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=0
[ceph001][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[ceph001][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[ceph001][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[ceph001][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.2SMGIk with options noatime,inode64
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.2SMGIk
[ceph001][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.2SMGIk
[ceph001][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.2SMGIk/journal -> /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1
[ceph001][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.2SMGIk
[ceph001][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.2SMGIk
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[ceph001][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph001][DEBUG ] The new table will be used at the next reboot.
[ceph001][DEBUG ] The operation has completed successfully.
[ceph001][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb
[ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb
[ceph001][WARNIN] partx: /dev/sdb: error adding partitions 1-2
[ceph001][INFO ] Running command: systemctl enable ceph
[ceph001][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[ceph001][WARNIN] Executing /sbin/chkconfig ceph on
[ceph001][INFO ] checking OSD status...
[ceph001][DEBUG ] find the location of an executable
[ceph001][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph001][WARNIN] there is 1 OSD down
[ceph001][WARNIN] there is 1 OSD out
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
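The closing "1 OSD down / 1 OSD out" warnings are normal at this stage: the OSD has been prepared but not yet activated, which matches the osdmap in the next status check still showing 0 up, 0 in.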

Check the cluster status

 


[root@ceph001 cluster]# ceph -s
    cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8
     health HEALTH_WARN
            64 pgs stuck inactive
            64 pgs stuck unclean
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e3: 1 osds: 0 up, 0 in
      pgmap v4: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

 

继续添加其他OSD

 


[root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdc
[root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdd
[root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdc
[root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdd
[root@ceph001 cluster]# ceph -s
    cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8
     health HEALTH_WARN
            64 pgs stuck inactive
            64 pgs stuck unclean
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e7: 3 osds: 0 up, 0 in
      pgmap v8: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
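A reboot is how this walkthrough gets the prepared OSDs activated. On hammer's sysvinit scripts you could presumably achieve the same without rebooting, e.g. with ceph-disk activate-all or service ceph start, though that was not done here.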

 

Reboot the machine, then check the cluster status

 


[root@ceph001 ~]# ceph -s
    cluster 2818c750-8724-4a70-bb26-f01af7f6067f
     health HEALTH_WARN
            too few PGs per OSD (21 < min 30)
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e9: 3 osds: 3 up, 3 in
      pgmap v11: 64 pgs, 1 pools, 0 bytes data, 0 objects
            102196 kB used, 284 GB / 284 GB avail
                  64 active+clean

Troubleshooting

As you can see, the cluster is currently in HEALTH_WARN, with the following warning:

 


too few PGs per OSD (21 < min 30)

 

Increase the pg count of the rbd pool (too few PGs per OSD (21 < min 30))

 


[root@ceph001 cluster]# ceph osd pool set rbd pg_num 128
set pool 0 pg_num to 128
[root@ceph001 cluster]# ceph osd pool set rbd pgp_num 128
set pool 0 pgp_num to 128
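The arithmetic behind both the warning and the fix is simple: PGs per OSD = pg_num × replica count ÷ OSD count. With the numbers from this cluster:

64 × 1 ÷ 3 ≈ 21    → below the minimum of 30, hence the warning
128 × 1 ÷ 3 ≈ 42   → clears the threshold

The usual rule of thumb, total PGs ≈ (OSDs × 100) ÷ pool size rounded to a power of two, would likewise suggest something in the low hundreds here, so 128 is a reasonable choice for this small cluster.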

 

Check the cluster status

 


[root@ceph001 ~]# ceph -s
    cluster 2818c750-8724-4a70-bb26-f01af7f6067f
     health HEALTH_OK
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e13: 3 osds: 3 up, 3 in
      pgmap v17: 128 pgs, 1 pools, 0 bytes data, 0 objects
            101544 kB used, 284 GB / 284 GB avail
                  128 active+clean

Summary

  1. This walkthrough only builds a simple single-node Ceph environment; switching to multiple nodes is easy, and the steps are much the same.
  2. Configuring Ceph over IPv6 differs very little, in my experience, from the IPv4 procedure; only two points need attention:
    1. Configure static IPv6 addresses.
    2. Set the hostnames and add name resolution entries mapping each hostname to its static IPv6 address.
