Setting up a GlusterFS cluster

GlusterFS has no dedicated management node: every GlusterFS server acts as both a management node and a storage node, so there is no single point of failure. The GlusterFS distributed file system provides file-type shared storage services.

Part 1: Preparation

1. Server plan

192.168.168.147 storage1
192.168.168.146 storage2
192.168.168.144 storage3
192.168.168.143 storage4
192.168.168.142 client
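
The commands below refer to the nodes by hostname, so every machine (including the client) needs name resolution for the plan above. A minimal sketch, appending the addresses from the plan to /etc/hosts on each node:

```shell
# Run on every node (storage1..storage4 and client), assuming no DNS is available:
cat >> /etc/hosts <<'EOF'
192.168.168.147 storage1
192.168.168.146 storage2
192.168.168.144 storage3
192.168.168.143 storage4
192.168.168.142 client
EOF
```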

2. Set up time synchronization

storage1 acts as the NTP time server:

[root@localhost ~]# grep -Ev "^[[:space:]]*(#|$)" /etc/chrony.conf
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
local stratum 10
logdir /var/log/chrony
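
Note that this server config contains no allow directive, so by default chrony will refuse NTP queries from other hosts. A hedged sketch of the addition usually needed on storage1 (subnet taken from the server plan above):

```shell
# Assumed addition to /etc/chrony.conf on storage1 so the other nodes may sync from it:
#   allow 192.168.168.0/24
# After editing, restart chronyd so the change takes effect:
systemctl restart chronyd
```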

The other servers act as NTP clients:

[root@localhost gv0]# grep -Ev "^[[:space:]]*(#|$)" /etc/chrony.conf
server 192.168.168.147 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony

Verify:

[root@localhost ~]# timedatectl
      Local time: Tue 2024-12-17 15:19:25 CST
  Universal time: Tue 2024-12-17 07:19:25 UTC
        RTC time: Tue 2024-12-17 07:19:26
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a
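
On the client nodes you can additionally confirm that storage1 is the selected time source, using the chronyc tool that ships with chrony (a verification sketch; it requires chronyd to be running):

```shell
# The line for 192.168.168.147 should be marked '^*' (current synchronized source):
chronyc sources -v
```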

Part 2: Install glusterfs-server

1. On all storage servers (not the client), install the glusterfs-server package and start the service:

# yum install glusterfs-server
# systemctl start glusterd
# systemctl enable glusterd
# systemctl status glusterd
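
If firewalld is running on the storage nodes, the GlusterFS ports must be opened before the peers can probe each other. A hedged sketch (recent firewalld versions ship a predefined glusterfs service; if yours does not, open TCP 24007-24008 for glusterd plus the brick port range instead):

```shell
# Assumes the predefined 'glusterfs' firewalld service exists on this release:
firewall-cmd --permanent --add-service=glusterfs
firewall-cmd --reload
```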

2. Form the cluster from all storage servers (probe the others from one node):

[root@storage1 ~]# gluster peer probe storage2
[root@storage1 ~]# gluster peer probe storage3
[root@storage1 ~]# gluster peer probe storage4

Check the peer status:
[root@localhost ~]# gluster peer status
Number of Peers: 3

Hostname: storage2
Uuid: 6967d3bf-5ecd-4fc8-822f-93b2d30427e0
State: Peer in Cluster (Connected)

Hostname: storage3
Uuid: 8040be49-39a0-4990-b49b-8213f3ffef24
State: Peer in Cluster (Connected)

Hostname: storage4
Uuid: 13c69026-2527-4af0-b1a6-35ca26c68011
State: Peer in Cluster (Connected)

3. Create a volume (run on any one storage server)
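
Before creating the volume, the brick directory must exist on every storage node; gluster will not create it for you. A minimal sketch using the /data/gv0 path from the commands below:

```shell
# Run on each of storage1..storage4 before 'gluster volume create':
mkdir -p /data/gv0
```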

gluster volume create gv1 replica 4 storage1:/data/gv0/ \
  storage2:/data/gv0/ storage3:/data/gv0/ storage4:/data/gv0/ force

Supported volume types: [stripe <COUNT>] [replica <COUNT>] [disperse [<COUNT>]] [redundancy <COUNT>]
Because the bricks live on the root partition, the force flag is required; bricks on a dedicated partition do not need it.

Check the volume info:
[root@localhost ~]# gluster volume info gv1

Volume Name: gv1
Type: Replicate
Volume ID: 3e7a0134-387e-4ffe-b45a-7cff0660ae36
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: storage1:/data/gv0
Brick2: storage2:/data/gv0
Brick3: storage3:/data/gv0
Brick4: storage4:/data/gv0
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
nfs.disable: on

4. Start the volume

[root@localhost ~]# gluster volume start gv1
volume start: gv1: success

Part 3: Client installation and use

Install the client packages and mount the volume:

[root@localhost ~]# yum -y install glusterfs-4.1.9-1.el7.x86_64 glusterfs-fuse-4.1.9-1.el7.x86_64

[root@localhost ~]# mount -t glusterfs storage1:gv1 /mnt
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
storage1:gv1              17G  1.9G   16G  11% /mnt

Here the client mounts via storage1, but it could equally mount via storage2, storage3, or storage4. In other words, all four storage servers serve requests. This is a distinguishing feature of GlusterFS; most other distributed storage systems have a dedicated management server.
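
That said, the server named in the mount command is still the one the client fetches the volume layout from at mount time, so it is a single point of failure for mounting. The FUSE client can be given fallbacks; a hedged sketch using the backup-volfile-servers mount option:

```shell
# If storage1 is down at mount time, the client tries storage2, then storage3, then storage4:
mount -t glusterfs -o backup-volfile-servers=storage2:storage3:storage4 storage1:gv1 /mnt
```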

Part 4: Deleting a GlusterFS volume

1. First, umount the mounted directory on the client.

2. On any one storage server, stop and then delete gv1 with the following commands:

[root@localhost ~]# gluster volume stop gv1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gv1: success

[root@localhost ~]# gluster volume delete gv1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gv1: success
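
Deleting the volume removes only its definition; the files and gluster's extended attributes remain in each brick directory. To reuse the same directories for a new volume, they are usually cleaned out first. A sketch, run on every storage node:

```shell
# Wipe the old brick contents and metadata, then recreate an empty brick directory:
rm -rf /data/gv0
mkdir -p /data/gv0
```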

Part 5: Online shrinking and expansion

Not every volume type supports online shrinking. The example below uses a distributed volume to demonstrate both shrinking and expansion. First recreate gv1 as a distributed volume:

[root@localhost ~]# gluster volume create gv1 storage1:/data/gv0 storage2:/data/gv0 storage3:/data/gv0 storage4:/data/gv0 force
[root@localhost ~]# gluster volume start gv1

[root@localhost ~]# gluster volume remove-brick gv1 storage4:/data/gv0 force
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success
[root@localhost ~]# gluster volume info gv1

Volume Name: gv1
Type: Distribute
Volume ID: 3e7a0134-387e-4ffe-b45a-7cff0660ae36
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: storage1:/data/gv0
Brick2: storage2:/data/gv0
Brick3: storage3:/data/gv0
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
nfs.disable: on

[root@localhost ~]# gluster volume add-brick gv1 storage4:/data/gv0 force
volume add-brick: success
[root@localhost ~]# gluster volume info gv1

Volume Name: gv1
Type: Distribute
Volume ID: 3e7a0134-387e-4ffe-b45a-7cff0660ae36
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: storage1:/data/gv0
Brick2: storage2:/data/gv0
Brick3: storage3:/data/gv0
Brick4: storage4:/data/gv0
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
nfs.disable: on
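
After add-brick, existing files stay on their old bricks; on a distributed volume a rebalance is usually run so data spreads onto the new brick. A sketch, run on any storage server:

```shell
# Kick off a rebalance across all bricks of gv1, then check its progress:
gluster volume rebalance gv1 start
gluster volume rebalance gv1 status
```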

Notes:

1. Volume creation, deletion, and similar operations do not need to be performed on several cluster servers at once; running them on any single server is sufficient.

2. Data in a GlusterFS file system is not encrypted, so file contents can be read directly on any server in the cluster.
