
Installing GlusterFS - Quick Start

openthings
Published 2018/10/30 15:42

About this document

This document is intended to provide a step-by-step guide to setting up GlusterFS for the first time. For the purposes of this guide, Fedora 26 (or higher) virtual machine instances are required.

After you deploy GlusterFS by following these steps, we recommend that you read the GlusterFS Admin Guide to learn how to administer GlusterFS and how to select a volume type that fits your needs. Read the GlusterFS Install Guide for a more detailed explanation of the steps we took here. We want you to be successful in as short a time as possible.

If you would like a more detailed walkthrough with instructions for installing using different methods (in local virtual machines, EC2 and baremetal) and different distributions, then have a look at the Install guide.

Deploying and managing GlusterFS with Ansible

If you are already an Ansible user, and are more comfortable setting up distributed systems with Ansible, we recommend skipping these steps and moving over to the gluster-ansible repository, which provides most of the details needed to get the systems running faster.

Deploying GlusterFS (GlusterD2), the next-generation Gluster management interface

While the GlusterD2 project continues to be under active development, contributors can start by setting up a cluster to understand the basics of peer and volume management. Please refer to the GD2 quick start guide here. Feedback on the new CLI and the ReST APIs is welcome at gluster-users@gluster.org and gluster-devel@gluster.org.

Automated deployment of GlusterFS with Puppet-Gluster+Vagrant

To deploy GlusterFS using scripted methods, please read this article.

Step 1 – At least three nodes

  • Fedora 26 (or later) on 3 nodes named "server1", "server2" and "server3"
  • A working network connection
  • At least two virtual disks, one for the OS installation, and one to be used to serve GlusterFS storage (sdb), on each of these VMs. This will emulate a real-world deployment, where you would want to separate GlusterFS storage from the OS install.
  • Set up NTP on each of these servers so that the many applications running on top of the filesystem function properly.

Note: GlusterFS stores its dynamically generated configuration files at /var/lib/glusterd. If at any point in time GlusterFS is unable to write to these files (for example, when the backing filesystem is full), it will at minimum cause erratic behavior for your system; or worse, take your system offline completely. It is recommended to create separate partitions for directories such as /var/log to reduce the chances of this happening.
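
On Fedora, the NTP requirement above is commonly met with chrony. As a minimal sketch (the pool hostname and tuning options below are assumptions, not taken from this guide), /etc/chrony.conf could look like:

```
# /etc/chrony.conf - minimal sketch; the pool host is a placeholder
pool 2.fedora.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
```

Enable the chronyd service afterwards and verify synchronization with chronyc tracking.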

Step 2 - Allocate and format the storage bricks

Perform this step on all the nodes, "server{1,2,3}"

Note: We are going to use the XFS filesystem for the backend bricks, but Gluster is designed to work on top of any filesystem that supports extended attributes.

The following examples assume that the brick will be residing on /dev/sdb1.

    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /data/brick1
    echo '/dev/sdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
    mount -a && mount

You should now see sdb1 mounted at /data/brick1

Step 3 - Installing GlusterFS

Install the software

    yum install glusterfs-server

Start the GlusterFS management daemon (on systemd-based Fedora releases, systemctl start glusterd is equivalent):

    service glusterd start
    service glusterd status
    glusterd.service - LSB: glusterfs server
           Loaded: loaded (/etc/rc.d/init.d/glusterd)
       Active: active (running) since Mon, 13 Aug 2012 13:02:11 -0700; 2s ago
      Process: 19254 ExecStart=/etc/rc.d/init.d/glusterd start (code=exited, status=0/SUCCESS)
       CGroup: name=systemd:/system/glusterd.service
           ├ 19260 /usr/sbin/glusterd -p /run/glusterd.pid
           ├ 19304 /usr/sbin/glusterfsd --xlator-option georep-server.listen-port=24009 -s localhost...
           └ 19309 /usr/sbin/glusterfs -f /var/lib/glusterd/nfs/nfs-server.vol -p /var/lib/glusterd/...

Step 4 - Configure the firewall

The gluster processes on the nodes need to be able to communicate with each other. To simplify this setup, configure the firewall on each node to accept all traffic from the other nodes.

    iptables -I INPUT -p all -s <ip-address> -j ACCEPT

where ip-address is the address of each of the other nodes.
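
Since the same rule is needed once per peer, it can be generated from a small loop. This is a sketch only: the addresses below are hypothetical placeholders for your real node addresses, and the printed commands must still be run as root on each node.

```shell
# Print one ACCEPT rule per peer; the addresses are placeholders
# for the real addresses of the other two nodes.
peers="192.168.10.2 192.168.10.3"
for ip in $peers; do
    echo "iptables -I INPUT -p all -s $ip -j ACCEPT"
done
```

Note that current Fedora releases use firewalld as the default firewall front end, so you may prefer to express the same policy as a firewall-cmd rich rule instead of raw iptables.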

Step 5 - Configure the trusted storage pool

From "server1"

    gluster peer probe server2
    gluster peer probe server3

Note: When using hostnames, the first server needs to be probed from one other server to set its hostname.

From "server2"

    gluster peer probe server1

Note: Once this pool has been established, only trusted members may probe new servers into the pool. A new server cannot probe the pool, it must be probed from the pool.

Check the peer status on server1

    gluster peer status

You should see something like this (the UUID will differ)

            Number of Peers: 2

            Hostname: server2
            Uuid: f0e7b138-4874-4bc0-ab91-54f20c7068b4
            State: Peer in Cluster (Connected)

            Hostname: server3
            Uuid: f0e7b138-4532-4bc0-ab91-54f20c701241
            State: Peer in Cluster (Connected)

Step 6 - Set up a GlusterFS volume

On all servers:

    mkdir -p /data/brick1/gv0

From any single server:

    gluster volume create gv0 replica 3 server1:/data/brick1/gv0 server2:/data/brick1/gv0 server3:/data/brick1/gv0
    gluster volume start gv0
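
The brick arguments all follow the same server:/path pattern, so for larger replica sets the command line can be assembled in a loop rather than typed out. A small sketch, assuming the same server names and brick path as above, that prints the command instead of running it:

```shell
# Build the brick list for servers 1..3 and print the resulting
# volume-create command for inspection.
bricks=""
for i in 1 2 3; do
    bricks="$bricks server$i:/data/brick1/gv0"
done
echo "gluster volume create gv0 replica 3$bricks"
```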

Confirm that the volume shows "Started":

    gluster volume info

You should see something like this (the Volume ID will differ):

            Volume Name: gv0
            Type: Replicate
            Volume ID: f25cc3d8-631f-41bd-96e1-3e22a4c6f71f
            Status: Started
            Snapshot Count: 0
            Number of Bricks: 1 x 3 = 3
            Transport-type: tcp
            Bricks:
            Brick1: server1:/data/brick1/gv0
            Brick2: server2:/data/brick1/gv0
            Brick3: server3:/data/brick1/gv0
            Options Reconfigured:
            transport.address-family: inet

Note: If the volume does not show "Started", check the log file /var/log/glusterfs/glusterd.log to debug and diagnose the situation. The logs can be inspected on one or all of the configured servers.

Step 7 - Testing the GlusterFS volume

For this step, we will use one of the servers to mount the volume. Typically, you would do this from an external machine known as a "client". Since that method would require additional packages to be installed on the client machine, we will instead use one of the servers as a simple place to test first, as if it were that "client".

    mount -t glusterfs server1:/gv0 /mnt
    for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done

First, check the client mount point:

    ls -lA /mnt/copy* | wc -l

You should see 100 files returned. Next, check the GlusterFS brick mount points on each server:

    ls -lA /data/brick1/gv0/copy*

You should see 100 files on each server using the method listed here. Without replication, in a distribute-only volume (not detailed here), you would see about 33 files on each one.
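
The copy loop itself has nothing Gluster-specific, so you can sanity-check it against a scratch directory first. A local sketch that does not touch the volume at all:

```shell
# Run the same 100-file copy loop against a temporary directory
# and count the results.
src=$(mktemp)
dst=$(mktemp -d)
echo "test data" > "$src"
for i in $(seq -w 1 100); do
    cp -p "$src" "$dst/copy-test-$i"
done
count=$(ls "$dst"/copy-test-* | wc -l)
echo "$count"    # prints 100
rm -rf "$src" "$dst"
```

The -w flag pads the sequence numbers to equal width (001 through 100), so the file names sort in order.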

Reposted from: https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
