
Setting Up a Ceph Cluster

Preparation:

1. First, install the dependency on all nodes

yum -y install python-setuptools

2. Configure the hostname and hostname resolution (every node needs the same bindings)

# hostnamectl set-hostname --static node1
[root@node1 ceph]# cat /etc/hosts
192.168.126.131 client
192.168.126.133 node1
192.168.126.135 node2
192.168.126.137 node3
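
Each node runs hostnamectl with its own name (node2, node3, client), and /etc/hosts must be identical on all four machines. One way to copy it out from node1 (you will be prompted for passwords until SSH keys are set up below):

[root@node1 ~]# for h in node2 node3 client; do scp /etc/hosts $h:/etc/hosts; done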

3. Disable the firewall and SELinux (and flush any leftover rules with iptables -F); example commands below
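
The usual commands for this step look like the following (a sketch for CentOS 7 with firewalld; run on every node):

systemctl stop firewalld && systemctl disable firewalld
iptables -F                # flush any leftover rules
setenforce 0               # SELinux permissive for the current boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # persist across reboots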

4. Synchronize time on all nodes; a chrony example follows
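
For example, with chrony (a minimal sketch assuming the default CentOS pool servers; any common NTP setup works):

yum -y install chrony
systemctl enable --now chronyd
chronyc sources            # verify that a time source is reachable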

Cluster Deployment

1. Configure passwordless SSH on node1

[root@node1 ~]# ssh-keygen 
[root@node1 ~]# ssh-copy-id -i node1
[root@node1 ~]# ssh-copy-id -i node2
[root@node1 ~]# ssh-copy-id -i node3
[root@node1 ~]# ssh-copy-id -i client
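
A quick loop to confirm that passwordless login works on every target:

[root@node1 ~]# for h in node1 node2 node3 client; do ssh $h hostname; done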

2. Install the deployment tool on node1

[root@node1 ~]# yum install ceph-deploy -y
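
This assumes a Ceph yum repository is already configured. If not, a repo file along these lines works (shown for the Nautilus release on EL7; adjust the release name for your environment):

[root@node1 ~]# cat /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc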

3. Create the cluster on node1

Create a cluster configuration directory. Note: most of the later operations will be run from this directory.
[root@node1 ~]# mkdir /etc/ceph
[root@node1 ~]# cd /etc/ceph

Create a ceph cluster:
[root@node1 ceph]# ceph-deploy new node1
[root@node1 ceph]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

4. Install ceph on all cluster nodes

yum install ceph ceph-radosgw -y
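
Alternatively, ceph-deploy can drive the installation from node1 (it configures upstream repos on each target and installs the same packages, so each node still needs network access):

[root@node1 ceph]# ceph-deploy install node1 node2 node3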

5. Install ceph-common on the client

[root@client ~]# yum install ceph-common -y

6. Create the mon (monitor) on node1

Add the public network (used by the monitors) to the configuration:

[root@node1 ceph]# cat /etc/ceph/ceph.conf
[global]
fsid = 1f1a7c3d-6fdb-4637-b108-0dc71c0637be
mon_initial_members = node1
mon_host = 192.168.126.133
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.126.0/24
mon clock drift allowed = 2
mon clock drift warn backoff = 30
mon_allow_pool_delete = true

Initialize the monitor node:
[root@node1 ceph]# ceph-deploy mon create-initial
[root@node1 ceph]# ceph health          
HEALTH_OK

Sync the configuration files to the remaining nodes (node1 already has them):
[root@node1 ceph]# ceph-deploy admin node2 node3
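
If ceph.conf is edited again later, the updated copy can be re-pushed before restarting the daemons:

[root@node1 ceph]# ceph-deploy --overwrite-conf config push node1 node2 node3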

Restart ceph-mon on all nodes:
# systemctl restart ceph-mon.target

Add more mon nodes (an odd number is recommended, because of quorum voting):
[root@node1 ceph]# ceph-deploy mon add node2    
[root@node1 ceph]# ceph-deploy mon add node3
[root@node1 ceph]# ceph -s
  cluster:
    id:     c05c1f28-ea78-41b7-b674-a069d90553ac
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
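
Quorum membership can also be inspected directly, which is useful when a monitor fails to join:

[root@node1 ceph]# ceph quorum_status --format json-pretty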

7. Create the mgr (manager)

Create one mgr:
[root@node1 ceph]# ceph-deploy mgr create node1

Adding more mgr daemons provides HA (active/standby):
[root@node1 ceph]# ceph-deploy mgr create node2
[root@node1 ceph]# ceph-deploy mgr create node3
[root@node1 ceph]# ceph -s
  cluster:
    id:     c05c1f28-ea78-41b7-b674-a069d90553ac
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active), standbys: node2, node3      # node1 is the active mgr
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
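
Optionally, the mgr dashboard module can be enabled at this point (a sketch; on recent releases the dashboard additionally needs a certificate and a user account before it serves pages):

[root@node1 ceph]# ceph mgr module enable dashboard
[root@node1 ceph]# ceph mgr services          # prints the dashboard URL once active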

8. Create the osds (storage disks)

List the disks on each node:
[root@node1 ceph]# ceph-deploy disk list node1
[root@node1 ceph]# ceph-deploy disk list node2
[root@node1 ceph]# ceph-deploy disk list node3

Zap the disks (this wipes any existing data, similar to formatting):
[root@node1 ceph]# ceph-deploy disk zap node1 /dev/sdb
[root@node1 ceph]# ceph-deploy disk zap node2 /dev/sdb
[root@node1 ceph]# ceph-deploy disk zap node3 /dev/sdb

Create an osd on each disk:
[root@node1 ceph]# ceph-deploy osd create --data /dev/sdb node1
[root@node1 ceph]# ceph-deploy osd create --data /dev/sdb node2
[root@node1 ceph]# ceph-deploy osd create --data /dev/sdb node3
[root@node1 ceph]# ceph -s
  cluster:
    id:     c05c1f28-ea78-41b7-b674-a069d90553ac
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active), standbys: node2, node3
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   41 MiB used, 2.9 GiB / 3.0 GiB avail
    pgs:
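
To verify the OSD layout and exercise the cluster, list the CRUSH tree and create a small test pool (the pool name and PG count here are arbitrary; the delete works because mon_allow_pool_delete = true was set in ceph.conf above):

[root@node1 ceph]# ceph osd tree
[root@node1 ceph]# ceph osd pool create testpool 64
[root@node1 ceph]# ceph osd pool delete testpool testpool --yes-i-really-really-mean-it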
