Deploying NFS Master-Slave Synchronous Replication
1. Environment Planning and Preparation
1.1 Server information:
Service | Virtual IP (VIP) | IP Address | OS |
---|---|---|---|
NFS master | 192.168.250.195 | 192.168.250.193 | CentOS 7.8 |
NFS slave | 192.168.250.195 | 192.168.250.194 | CentOS 7.8 |
1.2 Create the shared directory on both the master and the slave
[root@mynfs01 /]# mkdir -p /mynfsdata
[root@mynfs02 /]# mkdir -p /mynfsdata
[root@mynfs02 /]#
1.3 Disable the firewall and SELinux
[root@mynfs01 /]# systemctl stop firewalld
[root@mynfs01 /]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@mynfs01 /]#
[root@mynfs01 /]# setenforce 0
[root@mynfs01 /]# vim /etc/selinux/config
[root@mynfs01 /]#
Set the following in /etc/selinux/config so the change persists across reboots:
SELINUX=disabled
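The persistent SELinux change can also be made non-interactively with sed instead of vim. A minimal sketch, demonstrated against a scratch copy so the real config stays untouched (the /tmp path is illustrative):

```shell
# Scratch copy standing in for /etc/selinux/config.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config-demo
# Rewrite the SELINUX= line in place, exactly as you would for the real file.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /tmp/selinux-config-demo
grep '^SELINUX=' /tmp/selinux-config-demo   # SELINUX=disabled
```

On the real host you would point sed at /etc/selinux/config; the `setenforce 0` above still applies for the current boot.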
2. Install and Configure the NFS Server
2.1 Install the NFS and RPC services
Install on both the master and the slave; the slave's installation is identical and omitted here.
[root@mynfs01 /]# yum -y install nfs-utils rpcbind
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Package rpcbind-0.2.0-49.el7.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.x86_64 1:1.3.0-0.68.el7.2 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==================================================================================================
Package Arch Version Repository Size
==================================================================================================
Installing:
nfs-utils x86_64 1:1.3.0-0.68.el7.2 updates 413 k
Transaction Summary
==================================================================================================
Install 1 Package
Total download size: 413 k
Installed size: 1.1 M
Downloading packages:
nfs-utils-1.3.0-0.68.el7.2.x86_64.rpm | 413 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:nfs-utils-1.3.0-0.68.el7.2.x86_64 1/1
Verifying : 1:nfs-utils-1.3.0-0.68.el7.2.x86_64 1/1
Installed:
nfs-utils.x86_64 1:1.3.0-0.68.el7.2
Complete!
[root@mynfs01 /]#
2.2 Configure the NFS shared directory
Configure the export on both the master and the slave:
[root@mynfs01 /]# echo '/mynfsdata *(rw,sync,all_squash)' >> /etc/exports
[root@mynfs01 /]#
[root@mynfs02 /]# echo '/mynfsdata *(rw,sync,all_squash)' >> /etc/exports
[root@mynfs02 /]#
Note: all_squash maps every client user accessing the share to the anonymous user (nfsnobody).
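For reference, each /etc/exports line takes the form `directory client(options)`. Some illustrative variants (the hosts and options below are examples, not part of this deployment):

```
/mynfsdata *(rw,sync,all_squash)                   # any client, all users squashed
/mynfsdata 192.168.250.0/24(rw,sync,root_squash)   # one subnet, only root squashed
/mynfsdata node1(rw,sync) node2(ro,sync)           # per-host options
```

After editing, `exportfs -rv` re-reads the file without restarting the NFS service.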
2.3 Enable the services at boot
[root@mynfs01 /]# systemctl start nfs && systemctl start rpcbind
[root@mynfs01 /]# systemctl enable nfs && systemctl enable rpcbind
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@mynfs01 /]# systemctl status nfs && systemctl status rpcbind
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Sat 2022-03-05 08:10:07 CST; 2 days ago
Main PID: 17660 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
Mar 05 08:10:06 mynfs01 systemd[1]: Starting NFS server and services...
Mar 05 08:10:07 mynfs01 systemd[1]: Started NFS server and services.
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-03-04 08:28:47 CST; 3 days ago
Main PID: 738 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─738 /sbin/rpcbind -w
Mar 04 08:28:47 mynfs01 systemd[1]: Starting RPC bind service...
Mar 04 08:28:47 mynfs01 systemd[1]: Started RPC bind service.
[root@mynfs01 /]#
3. Configure File Synchronization
3.1 Configure the rsync service on the slave
This receives the data pushed over from the master.
1) Install rsync
[root@mynfs02 /]# yum -y install rsync.x86_64
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Package rsync-3.1.2-10.el7.x86_64 already installed and latest version
Nothing to do
[root@mynfs02 /]#
2) Edit /etc/rsyncd.conf
vim /etc/rsyncd.conf
uid = root
gid = root
port = 873
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log
lock file = /var/run/rsyncd.lock
use chroot = no
max connections = 200
read only = false
list = false
fake super = yes
ignore errors
[data]
path = /mynfsdata
auth users = rsyncuser
secrets file = /etc/rsync_salve.pass
hosts allow = 192.168.250.193 # master server IP
3) Create the authentication file
[root@mynfs02 /]# echo 'rsyncuser:rsyncuser12#' > /etc/rsync_salve.pass
[root@mynfs02 /]# chmod 600 /etc/rsync_salve.pass
[root@mynfs02 /]#
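The rsync daemon refuses a secrets file that other users can read, which is why the chmod 600 step matters. A sketch of the same steps against a scratch path (/tmp/rsync_demo.pass is illustrative):

```shell
# Demo secrets file in the daemon's "user:password" format.
printf 'rsyncuser:rsyncuser12#\n' > /tmp/rsync_demo.pass
# Restrict it to the owner; rsyncd rejects world-readable secrets files.
chmod 600 /tmp/rsync_demo.pass
stat -c '%a' /tmp/rsync_demo.pass   # 600
```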
4) Set directory ownership
[root@mynfs02 /]# chown -R root:root /mynfsdata
5) Start the rsync daemon
[root@mynfs02 /]# rsync --daemon --config=/etc/rsyncd.conf
[root@mynfs02 /]#
3.2 Test from the master
1) Install the sync tool
[root@mynfs01 /]# yum -y install rsync.x86_64
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Package rsync-3.1.2-10.el7.x86_64 already installed and latest version
Nothing to do
[root@mynfs01 /]#
[root@mynfs01 /]# chown -R root:root /mynfsdata
[root@mynfs01 /]# echo "rsyncuser12#" > /etc/rsync.pass
[root@mynfs01 /]# chmod 600 /etc/rsync.pass
[root@mynfs01 /]#
2) Create a test file and push it (run on the master)
[root@mynfs01 /]# rsync -arv /mynfsdata/ rsyncuser@192.168.250.194::data --password-file=/etc/rsync.pass
sending incremental file list
./
aaa.txt
sent 119 bytes received 38 bytes 314.00 bytes/sec
total size is 4 speedup is 0.03
[root@mynfs01 /]#
3) Check on the slave
[root@mynfs02 /]# ls /mynfsdata
aaa.txt
[root@mynfs02 /]#
The file has been synced over.
3.3 Configure automatic synchronization on the master
Download sersync2.5.4_64bit_binary_stable_final.tar.gz, upload it to the server, move it to the directory below, and install it.
[root@mynfs01 ~]# mv sersync2.5.4_64bit_binary_stable_final.tar.gz /usr/local/
[root@mynfs01 local]# tar xvf sersync2.5.4_64bit_binary_stable_final.tar.gz
GNU-Linux-x86/
GNU-Linux-x86/sersync2
GNU-Linux-x86/confxml.xml
[root@mynfs01 local]# ls
bin etc games GNU-Linux-x86 include lib lib64 libexec sbin sersync2.5.4_64bit_binary_stable_final.tar.gz share src
[root@mynfs01 local]# mv GNU-Linux-x86/ sersync
[root@mynfs01 local]# cd sersync/
[root@mynfs01 sersync]#
Edit the configuration file confxml.xml:
[root@mynfs01 sersync]# sed -ri 's#<delete start="true"/>#<delete start="false"/>#g' confxml.xml
[root@mynfs01 sersync]# sed -ri '24s#<localpath watch="/opt/tongbu">#<localpath watch="/mynfsdata">#g' confxml.xml
[root@mynfs01 sersync]# sed -ri '25s#<remote ip="127.0.0.1" name="tongbu1"/>#<remote ip="192.168.250.194" name="data"/>#g' confxml.xml
[root@mynfs01 sersync]# sed -ri '30s#<commonParams params="-artuz"/>#<commonParams params="-az"/>#g' confxml.xml
[root@mynfs01 sersync]# sed -ri '31s#<auth start="false" users="root" passwordfile="/etc/rsync.pas"/>#<auth start="true" users="rsyncuser" passwordfile="/etc/rsync.pass"/>#g' confxml.xml
[root@mynfs01 sersync]# sed -ri '33s#<timeout start="false" time="100"/><!-- timeout=100 -->#<timeout start="true" time="100"/><!-- timeout=100 -->#g' confxml.xml
[root@mynfs01 sersync]# pwd
/usr/local/sersync
[root@mynfs01 sersync]#
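Since sersync has no config validator, it is worth grepping confxml.xml afterwards to confirm the substitutions landed. A sketch of the same sed pattern, exercised on a made-up scratch fragment (not the real confxml.xml):

```shell
# Scratch stand-in for the lines the sed commands above target.
cat > /tmp/confxml-demo.xml <<'EOF'
<localpath watch="/opt/tongbu">
<remote ip="127.0.0.1" name="tongbu1"/>
EOF
# Same substitution style as above, applied to the scratch copy.
sed -ri 's#<localpath watch="/opt/tongbu">#<localpath watch="/mynfsdata">#g' /tmp/confxml-demo.xml
sed -ri 's#<remote ip="127.0.0.1" name="tongbu1"/>#<remote ip="192.168.250.194" name="data"/>#g' /tmp/confxml-demo.xml
# Verify both edits took effect.
grep -E 'localpath|remote' /tmp/confxml-demo.xml
```

On the real file, `grep -n 'mynfsdata\|192.168.250.194\|rsyncuser' confxml.xml` gives the same kind of confirmation.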
Start the sersync service:
[root@mynfs01 ~]# /usr/local/sersync/sersync2 -dro /usr/local/sersync/confxml.xml
set the system param
execute:echo 50000000 > /proc/sys/fs/inotify/max_user_watches
execute:echo 327679 > /proc/sys/fs/inotify/max_queued_events
parse the command param
option: -d run as a daemon
option: -r rsync all the local files to the remote servers before the sersync work
option: -o config xml name: /usr/local/sersync/confxml.xml
daemon thread num: 10
parse xml config file
host ip : localhost host port: 8008
will ignore the inotify delete event
daemon start,sersync run behind the console
use rsync password-file :
user is rsyncuser
passwordfile is /etc/rsync.pass
config xml parse success
please set /etc/rsyncd.conf max connections=0 Manually
sersync working thread 12 = 1(primary thread) + 1(fail retry thread) + 10(daemon sub threads)
Max threads numbers is: 22 = 12(Thread pool nums) + 10(Sub threads)
please according your cpu ,use -n param to adjust the cpu rate
------------------------------------------
rsync the directory recursivly to the remote servers once
working please wait...
execute command: cd /mynfsdata && rsync -az -R ./ --timeout=100 rsyncuser@192.168.250.194::data --password-file=/etc/rsync.pass >/dev/null 2>&1
run the sersync:
watch path is: /mynfsdata
[root@mynfs01 ~]#
3.4 Test automatic synchronization
# Create a file in /mynfsdata on the master
[root@mynfs01 mynfsdata]# echo "111" > mytest2.txt
Then check whether the file appears in /mynfsdata on the slave:
[root@mynfs02 mynfsdata]# ls
aaa.txt mytest2.txt
[root@mynfs02 mynfsdata]# cat mytest2.txt
111
This completes slave-from-master synchronization. However, if the master goes down and later recovers, it has no way to pick up files written on the slave in the meantime, so the reverse direction (master receiving the slave's data) must be configured as well.
3.5 Configure the rsync service on the master to receive the slave's data
1) Edit /etc/rsyncd.conf
Set hosts allow to the slave IP (192.168.250.194).
[root@mynfs01 ~]# vim /etc/rsyncd.conf
[root@mynfs01 ~]#
uid = root
gid = root
port = 873
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log
lock file = /var/run/rsyncd.lock
use chroot = no
max connections = 200
read only = false
list = false
fake super = yes
ignore errors
[data]
path = /mynfsdata
auth users = rsyncuser
secrets file = /etc/rsync_master.pass
hosts allow = 192.168.250.194
2) Create the authentication file
[root@mynfs01 ~]# echo 'rsyncuser:rsyncuser12#' > /etc/rsync_master.pass
[root@mynfs01 ~]# chmod 600 /etc/rsync_master.pass
[root@mynfs01 ~]#
3) Set directory ownership
[root@mynfs01 ~]# chown -R root:root /mynfsdata
[root@mynfs01 ~]#
4) Start the rsync daemon
[root@mynfs01 ~]# rsync --daemon --config=/etc/rsyncd.conf
[root@mynfs01 ~]#
3.6 Test from the slave
[root@mynfs02 ~]# echo "rsyncuser12#" > /etc/rsync.pass
[root@mynfs02 ~]# chmod 600 /etc/rsync.pass
[root@mynfs02 ~]#
1) Create a test file and push it
[root@mynfs02 ~]# cd /mynfsdata
[root@mynfs02 mynfsdata]# echo "hello world" > myfile.txt
[root@mynfs02 mynfsdata]#
[root@mynfs02 mynfsdata]# rsync -arv /mynfsdata rsyncuser@192.168.250.193::data --password-file=/etc/rsync.pass
sending incremental file list
mynfsdata/
mynfsdata/aaa.txt
mynfsdata/myfile.txt
mynfsdata/mytest2.txt
sent 289 bytes received 77 bytes 732.00 bytes/sec
total size is 20 speedup is 0.05
[root@mynfs02 mynfsdata]#
2) Check on the master
[root@mynfs01 mynfsdata]# cd mynfsdata/
[root@mynfs01 mynfsdata]# ls
aaa.txt myfile.txt mytest2.txt
[root@mynfs01 mynfsdata]# pwd
/mynfsdata/mynfsdata
[root@mynfs01 mynfsdata]# cat myfile.txt
hello world
[root@mynfs01 mynfsdata]#
Problem: the files landed in the nested directory /mynfsdata/mynfsdata because the push above used the source path /mynfsdata without a trailing slash, so rsync copied the directory itself rather than its contents. Use /mynfsdata/ (with the trailing slash), as in the earlier master-side test.
3.7 Configure automatic synchronization on the slave
Upload the sersync tarball from your local machine, or download it from Baidu Netdisk:
Link: https://pan.baidu.com/s/1t25V9jab2s2-r5P31JQJDw
Extraction code: kqcx
Upload and install:
[root@mynfs02 /]# cd /usr/local
[root@mynfs02 local]# rz
[root@mynfs02 local]# ls
bin etc games include lib lib64 libexec sbin sersync2.5.4_64bit_binary_stable_final.tar.gz share src
[root@mynfs02 local]#
[root@mynfs02 local]# tar xvf sersync2.5.4_64bit_binary_stable_final.tar.gz
GNU-Linux-x86/
GNU-Linux-x86/sersync2
GNU-Linux-x86/confxml.xml
[root@mynfs02 local]# ls
bin etc games GNU-Linux-x86 include lib lib64 libexec sbin sersync2.5.4_64bit_binary_stable_final.tar.gz share src
[root@mynfs02 local]#
[root@mynfs02 local]# mv GNU-Linux-x86/ sersync
[root@mynfs02 local]# cd sersync/
[root@mynfs02 sersync]#
Edit the configuration file:
[root@mynfs02 sersync]# sed -ri 's#<delete start="true"/>#<delete start="false"/>#g' confxml.xml
[root@mynfs02 sersync]# sed -ri '24s#<localpath watch="/opt/tongbu">#<localpath watch="/mynfsdata">#g' confxml.xml
[root@mynfs02 sersync]# sed -ri '25s#<remote ip="127.0.0.1" name="tongbu1"/>#<remote ip="192.168.250.193" name="data"/>#g' confxml.xml
[root@mynfs02 sersync]# sed -ri '30s#<commonParams params="-artuz"/>#<commonParams params="-az"/>#g' confxml.xml
[root@mynfs02 sersync]# sed -ri '31s#<auth start="false" users="root" passwordfile="/etc/rsync.pas"/>#<auth start="true" users="rsyncuser" passwordfile="/etc/rsync.pass"/>#g' confxml.xml
[root@mynfs02 sersync]# sed -ri '33s#<timeout start="false" time="100"/><!-- timeout=100 -->#<timeout start="true" time="100"/><!-- timeout=100 -->#g' confxml.xml
[root@mynfs02 sersync]#
Start sersync:
[root@mynfs02 sersync]# /usr/local/sersync/sersync2 -dro /usr/local/sersync/confxml.xml
set the system param
execute:echo 50000000 > /proc/sys/fs/inotify/max_user_watches
execute:echo 327679 > /proc/sys/fs/inotify/max_queued_events
parse the command param
option: -d run as a daemon
option: -r rsync all the local files to the remote servers before the sersync work
option: -o config xml name: /usr/local/sersync/confxml.xml
daemon thread num: 10
parse xml config file
host ip : localhost host port: 8008
will ignore the inotify delete event
daemon start,sersync run behind the console
use rsync password-file :
user is rsyncuser
passwordfile is /etc/rsync.pass
config xml parse success
please set /etc/rsyncd.conf max connections=0 Manually
sersync working thread 12 = 1(primary thread) + 1(fail retry thread) + 10(daemon sub threads)
Max threads numbers is: 22 = 12(Thread pool nums) + 10(Sub threads)
please according your cpu ,use -n param to adjust the cpu rate
------------------------------------------
rsync the directory recursivly to the remote servers once
working please wait...
execute command: cd /mynfsdata && rsync -az -R ./ --timeout=100 rsyncuser@192.168.250.193::data --password-file=/etc/rsync.pass >/dev/null 2>&1
run the sersync:
watch path is: /mynfsdata
[root@mynfs02 sersync]#
With this, bidirectional master-slave synchronization is in place.
4. Install and Configure keepalived
4.1 On the master server
1) Install keepalived
[root@mynfs01 /]# yum -y install keepalived
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
2) Configure keepalived.conf
[root@mynfs01 /]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.ori.bak.20220307
[root@mynfs01 /]# vim /etc/keepalived/keepalived.conf
[root@mynfs01 /]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id NFS-Master
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 150
advert_int 1
authentication {
auth_type PASS
auth_pass keepalived12#
}
virtual_ipaddress {
192.168.250.195
}
}
[root@mynfs01 /]#
3) Start the service
[root@mynfs01 /]# systemctl start keepalived.service && systemctl enable keepalived.service
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@mynfs01 /]#
4.2 On the slave server
1) Install keepalived
[root@mynfs02 /]# yum -y install keepalived
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
2) Configure keepalived.conf
[root@mynfs02 /]# vim /etc/keepalived/keepalived.conf
[root@mynfs02 /]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id NFS-Slave
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass keepalived12#
}
virtual_ipaddress {
192.168.250.195
}
}
[root@mynfs02 /]#
3) Start the service
[root@mynfs02 /]# systemctl start keepalived.service && systemctl enable keepalived.service
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@mynfs02 /]#
4.3 Verify the virtual IP
Run the following on the master to check the VIP:
[root@mynfs01 /]# ip a | grep 192.168.250.195
inet 192.168.250.195/32 scope global eth0
4.4 Failover test
Mount the share from an NFS client; here one of the MySQL cluster hosts from the previous section serves as the client.
[root@mysql01 ~]# mount -t nfs 192.168.250.195:/mynfsdata /mnt
mount.nfs: access denied by server while mounting 192.168.250.195:/mynfsdata
[root@mysql01 ~]#
Fix: restrict the export to the subnet form and re-export (apply the same change on both servers so their exports stay identical):
[root@mynfs02 /]# vim /etc/exports
/mynfsdata 192.168.250.0/24(rw,sync,all_squash)
[root@mynfs02 /]# exportfs -rv
exporting 192.168.250.0/24:/mynfsdata
[root@mynfs02 /]#
After this change, the mount succeeds.
[root@mysql01 ~]# mount -t nfs 192.168.250.195:/mynfsdata /mnt
[root@mysql01 ~]#
Now simulate a master NFS outage and verify that the VIP fails over.
[root@mynfs01 /]# systemctl stop keepalived.service
[root@mynfs01 /]# ip a | grep 192.168.250.195
[root@mynfs01 /]#
No VIP is present on the master; check the slave:
[root@mynfs02 /]# ip a | grep 192.168.250.195
inet 192.168.250.195/32 scope global eth0
[root@mynfs02 /]#
The VIP has failed over to the slave.
5. Add a keepalived Health-Check Script for NFS
A script is needed that checks whether NFS is alive and triggers a failover when it is not.
[root@mynfs01 /]# vim /root/check_nfs.sh
[root@mynfs01 /]# cat /root/check_nfs.sh
#!/bin/bash
# If keepalived itself is not running, this node is not a failover
# candidate, so there is nothing to monitor.
if ! /usr/bin/systemctl status keepalived &>/dev/null; then
    echo "keepalived is not running; nothing to monitor."
    exit 0
fi
# If NFS is down, try one restart; if it still fails, stop keepalived
# so the VIP fails over to the other node.
if ! /usr/bin/systemctl status nfs &>/dev/null; then
    /usr/bin/systemctl restart nfs
    if ! /usr/bin/systemctl status nfs &>/dev/null; then
        /usr/bin/systemctl stop keepalived
    fi
fi
[root@mynfs01 /]#
Add it as a cron job that runs every minute:
[root@mynfs01 /]# crontab -l
*/1 * * * * /bin/sh /root/check_nfs.sh &> /root/check_nfs.log
[root@mynfs01 /]#
Repeat the same configuration on the slave.