Installing a CentOS cluster (RHCS) under redhat6.2as
Thursday, October 17, 2013
3:51 PM
- 1. Configure the yum repository
- 2. Install the cluster suite
RHCS (Red Hat Cluster Suite) is a collection of cluster tools that provides high availability, high reliability, load balancing, and shared storage at a low cost.

LUCI: a web-based way to configure the cluster; with luci you can easily build a powerful cluster system.
CLVM: clustered logical volume management, an extension of LVM that lets the machines in a cluster use LVM to manage shared storage.
CMAN: the distributed cluster manager.

Lab plan: two nodes and one management host.
Node 1: 192.168.0.54 (desktop54.example.com)
Node 2: 192.168.0.85 (desktop85.example.com)
Management host: 192.168.0.22 (desktop22.example.com)

I. [Preparation]

1. Add name resolution for all three machines to each machine's hosts file, like this:

192.168.0.54 desktop54.example.com
192.168.0.85 desktop85.example.com
192.168.0.22 desktop22.example.com
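One way to push these entries out is a simple append (a minimal sketch; run the same thing on all three machines, shown here on node 1):

[root@desktop54 node1]# cat >> /etc/hosts <<'EOF'
192.168.0.54 desktop54.example.com
192.168.0.85 desktop85.example.com
192.168.0.22 desktop22.example.com
EOF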
2. On both cluster nodes, disable selinux, iptables, and NetworkManager:

[root@desktop54 node1]# iptables -F
[root@desktop54 node1]# service iptables save
[root@desktop54 node1]# grep ^SELINUX= /etc/selinux/config
SELINUX=enforcing
[root@desktop54 node1]# sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
[root@desktop54 node1]# yum remove NetworkManager NetworkManager-glib NetworkManager-gnome NetworkManager-devel NetworkManager-glib-devel
[root@desktop54 node1]# reboot

(Disable selinux, iptables, and NetworkManager on both node1 and node2.)
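After the reboot it is worth confirming the changes stuck (a quick sanity check; both commands are standard RHEL 6 tools):

[root@desktop54 node1]# getenforce
Disabled
[root@desktop54 node1]# iptables -L -n    # chains should be empty after the flush and save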
II. [Management host]

Install luci:

[root@desktop22 manager]# yum install luci -y
[root@desktop22 manager]# chkconfig luci on
[root@desktop22 manager]# service luci start
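luci listens on TCP port 8084 by default; before opening the browser you can confirm it is up (netstat comes from net-tools, present on RHEL 6):

[root@desktop22 manager]# netstat -tlnp | grep 8084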
III. [Node configuration]

Install ricci, rgmanager, and cman, and enable these services (in no particular order). cman will fail to start at this point; you can leave it alone for now.

[root@desktop54 node1]# yum install ricci rgmanager cman -y
[root@desktop54 node1]# chkconfig ricci on
[root@desktop54 node1]# chkconfig rgmanager on
[root@desktop54 node1]# chkconfig cman on

Set a password for the ricci user and start the services. The password you enter when adding a node in the web interface is this ricci password.

[root@desktop54 node1]# service ricci start
Starting ricci: [ OK ]
[root@desktop54 node1]# service rgmanager start
Starting Cluster Service Manager: [ OK ]
[root@desktop85 node2]# service cman start
Starting cluster:
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... xmlconfig cannot find /etc/cluster/cluster.conf
[FAILED]

cman fails because the node has not joined a cluster yet, so the configuration file /etc/cluster/cluster.conf does not exist. All the other services above must be in the running state.
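The text sets the ricci user's password but never shows the command; it is the ordinary passwd command, run on every node ('redhat' below is a placeholder, pick your own):

[root@desktop54 node1]# passwd ricci
[root@desktop54 node1]# echo 'redhat' | passwd --stdin ricci    # non-interactive alternative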
IV. [Management interface configuration]
Log in at https://desktop22.example.com:8084; the username and password are the system root account and password.
1. Add the cluster:

Manage Clusters -> Create {Enter a cluster name; avoid spaces and special characters in the name, or adding the GFS name later will fail. Choose whether the nodes use the same password, enter each hostname and its ricci user password, and leave the port unchanged. Here, select downloading the required packages online, allow nodes to reboot before joining, and enable shared storage support.} -> Create Cluster
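After Create Cluster, luci generates /etc/cluster/cluster.conf and pushes it to every node. For a two-node cluster like this one the file looks roughly like the sketch below (illustrative only; the generated file will differ in details such as config_version):

<?xml version="1.0"?>
<cluster config_version="1" name="kevin_cluster">
  <clusternodes>
    <clusternode name="desktop54.example.com" nodeid="1"/>
    <clusternode name="desktop85.example.com" nodeid="2"/>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
</cluster>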
2. Configure the cluster (click into the cluster you created):

** Create nodes: once the cluster is created, the nodes you added appear in the list, but they all show red, because the cman service the nodes use to communicate with each other is not running yet. Start cman manually:

[root@desktop54 node1]# service cman start    (node 1)
Starting cluster:
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... [ OK ]

[root@desktop85 node2]# service cman start    (node 2; output identical to node 1)

Refresh the management page and both nodes now show as healthy.

** Add a fence device: Fence Devices -> Add {Fence virt (Multicast Mode) (the fence type then becomes fence_xvm); name: kevin_virt_fence} -> Submit
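Either node can now confirm membership and quorum from the shell (both tools ship with the cluster packages):

[root@desktop54 node1]# cman_tool status    # check the Nodes: and Quorum: lines
[root@desktop54 node1]# cman_tool nodes     # both members should show status M (member)
[root@desktop54 node1]# clustat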
Bind this fence device on each node:

Nodes -> node1 -> Add Fence Method (same name) -> Add Fence Instance (same name); repeat for node2.
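Assuming fence_virtd is configured on the KVM host to answer the multicast requests, the binding can be tested from either member with fence_node (from the cman package). The named node should be forcibly power-cycled, so only try this on a test cluster:

[root@desktop54 node1]# fence_node desktop85.example.com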
** Add a failover domain. The options mean:

Prioritized: on failover, prefer the member with the higher priority.
Restricted: the service may run only on the specified nodes.
No Failback: when a failed node comes back up, do not move the service back to it.

Failover Domains -> Add {name: kevin_failover; check Prioritized and No Failback (set these to suit your own situation); tick both lab nodes and assign their priorities.} -> Create

** Add resources (add whatever your setup needs; this lab uses an apache service):

Resources -> Add -> IP Address {IP address: 192.168.0.234 (a virtual IP used for client access; make sure it is not already in use); Netmask bits (optional): 24 (the mask length); Monitor link: checked; Number of seconds to sleep after removing an IP address: default} -> Submit

Resources -> Add -> Script {Name: httpd; script path: /etc/init.d/httpd on RHEL 6} -> Submit

** Add a service (again, adapt to your setup):

Services -> Add {Service name: apache; Automatically start this service: checked; Run exclusive: as needed; Failover domain: the kevin_failover domain just created; Recovery policy: relocate (move the service to another member on failure)} -> Add a resource -> the virtual IP added above -> Add a resource -> the httpd script added above -> Submit
Use ip addr show to check where the virtual IP currently lives.
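Alongside the web UI, rgmanager's command-line tools are handy for watching and moving the service (a sketch using the names configured above):

[root@desktop54 node1]# clustat                                       # shows which member owns service:apache
[root@desktop54 node1]# clusvcadm -r apache -m desktop85.example.com  # manually relocate the service to node 2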
V. [Testing]

Install and start httpd on both nodes:

[root@desktop54 node1]# yum install httpd -y    (node 1)
[root@desktop54 node1]# service httpd start
[root@desktop54 node1]# echo `hostname` > /var/www/html/index.html
[root@desktop85 node2]# yum install httpd -y    (node 2)
[root@desktop85 node2]# service httpd start
[root@desktop85 node2]# echo `hostname` > /var/www/html/index.html

No screenshots here, so test with a text browser:

[root@desktop22 server]# elinks -dump 192.168.0.54    (management host)
desktop54.example.com
[root@desktop22 server]# elinks -dump 192.168.0.85
desktop85.example.com

OK, both nodes answer.

[root@desktop22 server]# elinks -dump 192.168.0.234    (virtual IP)
desktop54.example.com    (desktop54 has the higher priority)

Now simulate a failure on node1 and see whether the service keeps running (refreshing in a web browser is more illustrative):

[root@desktop54 node1]# echo b > /proc/sysrq-trigger
[root@desktop22 server]# elinks -dump 192.168.0.234    (virtual IP)
desktop85.example.com

And after node1 has booted back up and its services have started:

[root@desktop22 server]# elinks -dump 192.168.0.234    (virtual IP)
desktop85.example.com

The service stays on node2: because we checked No Failback, the service is not switched back even after the original node recovers.

[iSCSI + GFS for network storage]

1. Check the iSCSI session state:

[root@desktop54 node1]# /etc/init.d/iscsi status    (node 1)
iSCSI Transport Class version 2.0-870
version 2.0-872
Target: iqn.2012-03.com.example:kevin
Current Portal: 192.168.0.24:3260,1
Persistent Portal: 192.168.0.24:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:86d532367ca0
Iface IPaddress: 192.168.0.54
Iface HWaddress:
Iface Netdev:
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 8192
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 3 State: running
scsi3 Channel 00 Id 0 Lun: 0
scsi3 Channel 00 Id 0 Lun: 1
Attached scsi disk sdb    (the LUN shows up as sdb here) State: running

[root@desktop85 node2]# service iscsi status    (node 2; same session output as node 1, except:)
Iface Initiatorname: iqn.1994-05.com.redhat:12546582ea96
Iface IPaddress: 192.168.0.85
Attached scsi disk sda    (here it shows up as sda; the backing store on the target is vda) State: running

2. Configure node1 and node2:

[root@desktop54 node1]# lvmconf --enable-cluster    (turn on CLVM's integrated cluster locking)
[root@desktop54 node1]# chkconfig clvmd on
[root@desktop54 node1]# service clvmd start    (clvmd makes LVM cluster-aware)
Activating VG(s): No volume groups found
[ OK ]

3. Now, on any one node, partition the discovered disk to carve out sdb1, then format it with the clustered file system GFS2.

[root@desktop54 node1]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created
[root@desktop54 node1]# vgcreate vg1 /dev/sdb1
Clustered volume group "vg1" successfully created
[root@desktop54 node1]# lvcreate -L 1G -n lv1 vg1
Error locking on node desktop85.example.com: Volume group for uuid not found: e1CQKruwtLzT6dRc9wysYIDq1Df78V0hZDs9a1sf3duPexOyv115ETnOiM9C4P36
Aborting. Failed to activate new LV to wipe the start of it.

(There is a problem: the LV cannot be created. Go to node2 to synchronize the metadata.)

[root@desktop85 node2]# pvcreate /dev/sda1
Can't initialize physical volume "/dev/sda1" of volume group "vg1" without -ff

(That message is fine; node2 now sees the PV. Back on node1:)

[root@desktop54 node1]# lvcreate -L 1G -n lv1 vg1
Logical volume "lv1" created    (the LV can be created now)
[root@desktop54 node1]# /etc/init.d/clvmd start
Activating VG(s): 1 logical volume(s) in volume group "vg1" now active
[ OK ]

4. Create the GFS2 file system:

[root@desktop54 node1]# mkfs.gfs2 -p lock_dlm -t kevin_cluster:gfs2 -j 3 /dev/vg1/lv1
This will destroy any data on /dev/vg1/lv1.
It appears to contain: symbolic link to `../dm-0'
Are you sure you want to proceed? [y/n] y
Device: /dev/vg1/lv1
Blocksize: 4096
Device Size 1.00 GB (262144 blocks)
Filesystem Size: 1.00 GB (262142 blocks)
Journals: 3
Resource Groups: 4
Locking Protocol: "lock_dlm"
Lock Table: "kevin_cluster:gfs2"
UUID: 0E8AC404-767B-8C1A-5ADF-8B18AB157CC3

In the lock table kevin_cluster:gfs2, kevin_cluster is the cluster name (it must match the real cluster name, otherwise the file system cannot be mounted), and gfs2 is an arbitrary identifier, something like a label. -j sets the number of journals, i.e. how many hosts can mount this file system; if unspecified it defaults to 1, just the management node. This lab has two nodes plus the management host, hence 3.
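If you are ever unsure which lock table a GFS2 volume carries (for instance after renaming the cluster), it can be read back from the superblock; on RHEL 6 gfs2_tool from gfs2-utils does this (a quick check, assuming the package is installed):

[root@desktop54 node1]# gfs2_tool sb /dev/vg1/lv1 table
current lock table name = "kevin_cluster:gfs2"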
5. Mount the GFS2 file system. Before mounting, stop the apache service in RHCS: disable it under Services.

[root@desktop54 node1]# mount.gfs2 /dev/vg1/lv1 /var/www/html/

If you hit an error like "fs is for a different cluster / error mounting lockproto lock_dlm", check the log with tail -n1 /var/log/messages: it reports the current cluster name as current="kevin_cluster". Redo the format in the previous step with the corrected cluster name and the mount will work.

6. Test.

** node1:
[root@desktop54 node1]# echo node1 > /var/www/html/index.html
[root@desktop54 node1]# service httpd start
[root@desktop24 ~]# elinks -dump 192.168.0.54    (management host)
node1

** node2:
[root@desktop85 node2]# mount.gfs2 /dev/vg1/lv1 /var/www/html/
[root@desktop85 node2]# service httpd start
[root@desktop24 ~]# elinks -dump 192.168.0.85    (management host)
node1

Once node2 mounts the volume, it sees the data that was just created on node1: shared storage works. Now fold the GFS2 file system and the apache service back into the RHCS cluster suite for central management:

** Resources -> Add -> GFS2 {Name: lv1; Mount point: /var/www/html; Device, FS label, or UUID: /dev/vg1/lv1; Mount options: _netdev; Force unmount: yes} -> Submit
** Services -> apache -> Add a resource -> lv1 -> Submit

Then start the apache service and browse to http://192.168.0.234:

[root@desktop24 ~]# elinks -dump 192.168.0.234
client1

Configuration complete. You can repeat the earlier node-failure simulation and see the same failover behavior.
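If you ever want the GFS2 mount to come up at boot outside of RHCS control, the usual RHEL 6 approach is an /etc/fstab entry with _netdev so the mount waits for the network and cluster services (a sketch; only needed when the cluster does not manage the mount itself):

/dev/vg1/lv1  /var/www/html  gfs2  defaults,_netdev  0 0

Then chkconfig gfs2 on, so the gfs2 init script mounts the fstab entries at boot.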