
Deploying a Ceph Storage Cluster and Testing Block Devices

2019-06-17 | Author: Anonymous | Source: Linux Community

Cluster environment

The cluster consists of three nodes, ceph01 (10.1.10.201), ceph02 (10.1.10.202), and ceph03 (10.1.10.203), each with three data disks: /dev/sdb, /dev/sdc, and /dev/sdd.

Configure the base environment

Add ceph.repo

wget -O /etc/yum.repos.d/ceph.repo https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/ceph.repo
yum makecache
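
For reference, a minimal sketch of what such a ceph.repo typically looks like; the mirror URL and the luminous release here are assumptions, and the wget above fetches the actual file:

cat >/etc/yum.repos.d/ceph.repo<<EOF
[ceph]
name=Ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
gpgcheck=0
enabled=1
EOF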

Configure NTP

yum -y install ntpdate ntp
ntpdate cn.ntp.org.cn
systemctl restart ntpd ntpdate;systemctl enable ntpd ntpdate
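
To confirm the nodes are actually synchronizing, a quick check with ntpq (shipped with the ntp package); the peer currently being synced against is marked with an asterisk:

ntpq -p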

Create a user and set up passwordless SSH

useradd ceph-admin
echo "ceph-admin"|passwd --stdin ceph-admin
echo "ceph-admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-admin
sudo chmod 0440 /etc/sudoers.d/ceph-admin

Configure host resolution

cat >>/etc/hosts<<EOF
10.1.10.201 ceph01
10.1.10.202 ceph02
10.1.10.203 ceph03
EOF

Configure sudo to not require a tty

sed -i 's/Defaults requiretty/#Defaults requiretty/' /etc/sudoers

Deploy the cluster with ceph-deploy

Configure passwordless login

su - ceph-admin
ssh-keygen
ssh-copy-id ceph-admin@ceph01
ssh-copy-id ceph-admin@ceph02
ssh-copy-id ceph-admin@ceph03

Install ceph-deploy

sudo yum install -y ceph-deploy python-pip

Deploy the nodes

mkdir my-cluster;cd my-cluster
ceph-deploy new ceph01 ceph02 ceph03
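
ceph-deploy new writes the initial cluster definition into the working directory; the file names below are standard ceph-deploy output:

ls
# ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring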

Edit the ceph.conf configuration file

cat >>/home/ceph-admin/my-cluster/ceph.conf<<EOF
public network = 10.1.10.0/24
cluster network = 10.1.10.0/24
EOF
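
It is worth confirming that ceph.conf now carries both the identity generated by ceph-deploy new and the networks just appended; the fsid below is a placeholder, and the other fields follow standard ceph-deploy output for this cluster:

cat ceph.conf
# [global]
# fsid = <generated-uuid>
# mon_initial_members = ceph01, ceph02, ceph03
# mon_host = 10.1.10.201,10.1.10.202,10.1.10.203
# public network = 10.1.10.0/24
# cluster network = 10.1.10.0/24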

Install the Ceph packages (this replaces ceph-deploy install node1 node2; the following commands must be run on every node)

sudo wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
sudo yum install -y ceph ceph-radosgw

Configure the initial monitor(s) and gather all keys

ceph-deploy mon create-initial
ls -l *.keyring

Copy the configuration to each node

ceph-deploy admin ceph01 ceph02 ceph03

Configure the OSDs

su - ceph-admin
cd /home/ceph-admin/my-cluster
for dev in /dev/sdb /dev/sdc /dev/sdd
do
  ceph-deploy disk zap ceph01 $dev
  ceph-deploy osd create ceph01 --data $dev
  ceph-deploy disk zap ceph02 $dev
  ceph-deploy osd create ceph02 --data $dev
  ceph-deploy disk zap ceph03 $dev
  ceph-deploy osd create ceph03 --data $dev
done
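
With all nine OSDs created (three disks on each of the three nodes), the result can be verified with the standard status commands:

ceph -s        # cluster health; should head toward HEALTH_OK
ceph osd tree  # expect one host bucket per node with three OSDs under each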

Deploy mgr (only required from the Luminous release onward)

ceph-deploy mgr create ceph01 ceph02 ceph03

Enable the dashboard module

sudo chown -R ceph-admin /etc/ceph/
ceph mgr module enable dashboard
netstat -lntup|grep 7000

The dashboard is then available at http://10.1.10.201:7000
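
By default the Luminous dashboard listens on port 7000 on all addresses. If it must bind to a specific address or port, the config-key settings below are the usual Luminous mechanism; the address, port, and mgr instance name here are assumptions for this cluster:

ceph config-key set mgr/dashboard/server_addr 10.1.10.201
ceph config-key set mgr/dashboard/server_port 7000
sudo systemctl restart ceph-mgr@ceph01   # restart the active mgr to apply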

Configure Ceph block storage

Check that the environment meets the block device requirements

uname -r
modprobe rbd
echo $?

Create a pool and a block device

ceph osd lspools
ceph osd pool create rbd 128

Choosing a pg_num value is mandatory because it cannot be calculated automatically. A few commonly used values are listed below; a rule-of-thumb calculation follows after this list.

Fewer than 5 OSDs: set pg_num to 128
5 to 10 OSDs: set pg_num to 512
10 to 50 OSDs: set pg_num to 4096
More than 50 OSDs: understand the trade-offs and calculate pg_num yourself
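
For clusters that fall outside the table above, the rule of thumb from the Ceph documentation is (number of OSDs x 100) / replica count, rounded up to the next power of two. A minimal sketch for this cluster; the OSD count and replica size are assumptions based on the layout above:

num_osds=9      # 3 nodes x 3 data disks (assumption from this setup)
replicas=3      # default replicated pool size (assumption)
target=$(( num_osds * 100 / replicas ))
pg_num=1
while [ "$pg_num" -lt "$target" ]; do pg_num=$(( pg_num * 2 )); done
echo $pg_num    # prints 512 for these inputs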

Create a block device on the client

rbd create rbd1 --size 1G --image-feature layering --name client.admin

Map the block device

rbd map --image rbd1 --name client.admin
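
The kernel chooses the device name, so confirm the mapping before formatting; rbd showmapped lists every mapped image:

rbd showmapped   # rbd1 should appear as /dev/rbd0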

Create a filesystem and mount it

fdisk -l /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir /mnt/ceph-disk1
mount /dev/rbd0 /mnt/ceph-disk1
df -h /mnt/ceph-disk1

Write data as a test

dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M
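
A plain dd like the one above largely measures the page cache. For a figure closer to the device itself, direct I/O is the usual variant; this writes a second, hypothetical test file with the same block size and count:

dd if=/dev/zero of=/mnt/ceph-disk1/file2 bs=1M count=100 oflag=direct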

Stress testing with fio

Install the fio benchmarking tool

yum install zlib-devel -y
yum install ceph-devel -y
git clone git://git.kernel.dk/fio.git
cd fio/
./configure
make;make install

Test disk performance (note that the write runs below target the raw /dev/rbd0 device and will destroy the XFS filesystem created above)

fio -direct=1 -iodepth=1 -rw=read -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=readiops
fio -direct=1 -iodepth=1 -rw=write -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=writeiops
fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=randreadiops
fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=randwriteiops
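
When testing is finished, the block device can be released cleanly with the standard unmount and unmap commands:

umount /mnt/ceph-disk1
rbd unmap /dev/rbd0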


Permanent link to this article: https://www.linuxidc.com/Linux/2019-08/159915.htm
