
NFS Service Setup Process


NFS service

【1】、nfs configuration

Function: provides shared storage so that multiple servers see the same data, which addresses data-consistency issues.

The configuration file for the NFS service is /etc/exports. Each entry must strictly follow the format "path of the shared directory    NFS clients allowed to access(share permission parameters)" to define which directory is shared and with what permissions, as shown in the following figure.

[Figure: /etc/exports entry format]
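Since the figure is not reproduced here, the following is a minimal annotated sketch of an /etc/exports entry, using the same values as the configuration below:

# /etc/exports format: <shared directory>  <clients allowed to access>(<share permission parameters>)
# Example: share /data with the 172.16.1.0/24 network, read-write, synchronous writes, squash all users
/data 172.16.1.0/24(rw,sync,all_squash)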

# Install the service
[root@nfs ~]# yum install -y nfs-utils

# Configuration files
[root@nfs ~]# cat /etc/exports
/data 172.16.1.0/24(rw,sync,all_squash)
[root@nfs ~]# mkdir /data/

# Start the service
[root@nfs ~]# systemctl enable nfs --now
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.

# Check which anonymous user NFS maps clients to. Here the anonuid and anongid are 65534 (nobody), so we need to set the owner and group of /data accordingly.
[root@nfs ~]# cat /var/lib/nfs/etab
/data 172.16.1.0/24(rw,sync,wdelay,hide,nocrossmnt,secure,root_squash,all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,root_squash,all_squash)
[root@nfs ~]# chown nobody:nobody /data
# Mount on the client
# The client also needs nfs-utils installed, but the nfs service does not need to be started on it
[root@web01 ~]# yum install -y nfs-utils
# showmount -e to see what shared directories are available
[root@web01 ~]# showmount -e 172.16.1.31
Export list for 172.16.1.31:
/data 172.16.1.0/24

# Create a local mount-point directory and mount the share
[root@web01 ~]# mkdir /img
[root@web01 ~]# mount -t nfs 172.16.1.31:/data /img
[root@web01 ~]# df -Th|grep nfs
172.16.1.31:/data nfs4 48G 3.8G 45G 8% /img
[root@web01 ~]# echo hahah > /img/
# On the NFS server you can see it
[root@nfs ~]# cat /data/
hahah

# Mounted on the backup server
[root@backup ~]# mkdir /img
[root@backup ~]# mount -t nfs 172.16.1.31:/data /img
[root@backup ~]# ll /img
total 4
-rw-r--r-- 1 nobody nobody 6 Dec 3 17:25
[root@backup ~]# touch /img/{haha,xixi}
[root@backup ~]# ll /img
total 4
-rw-r--r-- 1 nobody nobody 6 Dec 3 17:25
-rw-r--r-- 1 nobody nobody 0 Dec 3 17:26 haha
-rw-r--r-- 1 nobody nobody 0 Dec 3 17:26 xixi
# I can see it on all the other machines
[root@web01 ~]# ll /img
total 4
-rw-r--r-- 1 nobody nobody 6 Dec 3 17:25
-rw-r--r-- 1 nobody nobody 0 Dec 3 17:26 haha
-rw-r--r-- 1 nobody nobody 0 Dec 3 17:26 xixi

# If you delete the contents under /img on web01, they are gone on backup and on the NFS server as well
[root@web01 ~]# rm -f /img/*
[root@web01 ~]# ll /img/
total 0
[root@backup ~]# ll /img
total 0
[root@nfs ~]# ll /data/
total 0
# Make the mount persistent on the client by adding it to /etc/fstab
172.16.1.31:/data /img nfs defaults 0 0
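To verify the /etc/fstab entry without rebooting, the share can be unmounted and then remounted through fstab; a small sketch (these commands are not part of the original steps):

[root@web01 ~]# umount /img
# Mount everything listed in /etc/fstab; /img should come back
[root@web01 ~]# mount -a
[root@web01 ~]# df -Th | grep nfs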

【2】、nfs parameters

NFS share parameter    Role
rw*                    read-write access
ro                     read-only access
root_squash            when the NFS client accesses as the root administrator, map it to the anonymous user of the NFS server (not commonly used)
no_root_squash         when the NFS client accesses as the root administrator, map it to the root administrator of the NFS server (not commonly used)
all_squash             map to the anonymous user on the NFS server regardless of the account the NFS client uses (commonly used)
no_all_squash          no squashing, regardless of the account the NFS client uses
sync*                  write data to memory and the hard disk at the same time, ensuring no data loss
async                  save data to memory first and write to the hard disk later; more efficient, but data may be lost
anonuid*               used together with all_squash; specifies the anonymous user UID for NFS, which must exist on the system
anongid*               used together with all_squash; specifies the anonymous group GID for NFS, which must exist on the system
# ro for client read-only access
/data 172.16.1.0/24(ro,sync,all_squash)

# all_squash squashes the client user's permissions; if anonuid/anongid are not specified, the anonymous user defaults to nobody.

# Specify uid and gid, no longer use the default
/data 172.16.1.0/24(rw,sync,all_squash,anonuid=666,anongid=666)

[root@nfs ~]# systemctl restart nfs
[root@nfs ~]# cat /var/lib/nfs/etab
/data 172.16.1.0/24(rw,sync,wdelay,hide,nocrossmnt,secure,root_squash,all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=666,anongid=666,sec=sys,rw,secure,root_squash,all_squash)
# On the client, since there is no user with uid 666, the numeric uid and gid are shown in place of the owner and group names
[root@backup ~]# echo haha > /img/
[root@backup ~]# ll /img
total 4
-rw-r--r-- 1 666 666 5 Dec 3 18:59
[root@backup ~]#
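Instead of restarting the whole NFS service after editing /etc/exports, the exportfs tool can re-export the shares and show the options actually in effect; a small sketch (these commands are not part of the original steps):

# Re-export everything in /etc/exports without restarting the service
[root@nfs ~]# exportfs -rav
# Show the currently exported directories with their effective options
[root@nfs ~]# exportfs -v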

【3】、nfs principle

[Figure: NFS request flow between client and server]

1. The user process accesses the NFS client, which uses different functions to process the data.
2. The NFS client passes the request to the NFS server via TCP/IP.
3. When the NFS server receives the request, it first calls the portmap (rpcbind) process for port mapping.
4. The nfsd process determines whether the NFS client is allowed to connect to the NFS server.
5. The mountd process checks whether the client has the corresponding permissions (authentication).
6. The idmapd process performs the user mapping and squashing.
7. Finally, the NFS server converts the function in the request into a command the local system recognizes and passes it to the kernel, which drives the hardware.

Note: RPC stands for remote procedure call; an RPC service (rpcbind) must be running in order to use NFS.
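To confirm that the RPC services NFS depends on are registered, rpcinfo can query rpcbind; a quick check might look like the sketch below (assuming rpcbind is running on the NFS server 172.16.1.31):

# List the programs registered with rpcbind; portmapper, mountd and nfs should all show up
[root@web01 ~]# rpcinfo -p 172.16.1.31 | egrep 'portmapper|mountd|nfs'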

【4】、Structure for solving nfs single point of failure

[Figure: cluster architecture for NFS redundancy with a backup server]

Reason for failure:

Since we only have one NFS server, if it goes down, none of the servers that mount its shared directory can access the data any longer.

Solution:

A backup server exists in our cluster architecture and we will utilize the backup server for a kind of nfs redundancy.

The concrete implementation is to deploy the lsyncd service on the NFS server to synchronize the data in the shared directory to the backup server in real time, so that the data is not lost if the NFS server goes down. We also set up the NFS service on the backup server, so that the other hosts can mount the shared directory from the backup server instead.

Throughout this process we need to make sure the user (uid/gid) stays consistent on every machine.
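A quick way to verify that consistency is to compare the account on every machine; a minimal sketch, assuming the shared account is www with uid/gid 666 as created in the steps below:

# Run on nfs, backup and every web server; the output should be identical everywhere
id www
# uid=666(www) gid=666(www) groups=666(www)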

1、Build the nfs service on the nfs server

[root@nfs ~]# yum install -y nfs-utils
[root@nfs ~]# cat /etc/exports
/data 172.16.1.0/24(rw,all_squash,sync,anonuid=666,anongid=666)
[root@nfs ~]# groupadd -g 666 www
[root@nfs ~]# useradd -g 666 -u 666 -M -s /sbin/nologin www
[root@nfs ~]# mkdir -p /data/
[root@nfs ~]# chown www:www /data/
[root@nfs ~]# systemctl enable nfs --now
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.
[root@nfs ~]# cat /var/lib/nfs/etab
/data 172.16.1.0/24(rw,sync,wdelay,hide,nocrossmnt,secure,root_squash,all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=666,anongid=666,sec=sys,rw,secure,root_squash,all_squash)

# On web01
[root@web01 ~]# mkdir /img
[root@web01 ~]# yum install -y nfs-utils
[root@web01 ~]# showmount -e 172.16.1.31
Export list for 172.16.1.31:
/data 172.16.1.0/24
[root@web01 ~]# mount -t nfs 172.16.1.31:/data /img
[root@web01 ~]# df -Th | grep nfs
172.16.1.31:/data nfs4 48G 3.7G 45G 8% /img

# Test
[root@web01 ~]# touch /img/aaa
[root@nfs ~]# ll /data/
total 0
-rw-r--r-- 1 www www 0 Dec 4 19:43 aaa

2、Build the rsync daemon on the backup server

[root@backup ~]# yum install -y rsync
[root@backup ~]# vim /etc/rsyncd.conf
uid = www # must be consistent with the nfs user
gid = www
auth users = rsync_backup
secrets file = /etc/
log file = /var/log/
fake super = yes
use chroot = no
max connections = 200
time out = 600
ignore errors
read only = false
port = 873
list = false
[backup]
path=/backup
[nfs]
path=/nfs
[root@backup ~]# groupadd -g 666 www
[root@backup ~]# useradd -g 666 -u 666 -M -s /sbin/nologin www
[root@backup ~]# echo "rsync_backup:123" > /etc/
[root@backup ~]# chmod 600 /etc/
[root@backup ~]# mkdir /backup /nfs
[root@backup ~]# chown www:www /backup/
[root@backup ~]# chown www:www /nfs
[root@backup ~]# systemctl enable rsyncd --now
Created symlink /etc/systemd/system/multi-user.target.wants/rsyncd.service → /usr/lib/systemd/system/rsyncd.service.

# Test from web01 and from the nfs server
[root@web01 ~]# rsync -avz /etc/passwd rsync_backup@172.16.1.41::backup
Password:
sending incremental file list
passwd

sent 829 bytes received 43 bytes 158.55 bytes/sec
total size is 1,805 speedup is 2.07
[root@nfs ~]# rsync -avz /etc/hosts rsync_backup@172.16.1.41::nfs
Password:
sending incremental file list
hosts

sent 140 bytes received 43 bytes 11.09 bytes/sec
total size is 158 speedup is 0.86
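The tests above prompt for the password interactively; for scripts (and for lsyncd below) rsync can read the password from a file via --password-file. A small sketch, using a hypothetical client-side password file /etc/rsync.pass (the file name in the original is truncated):

# Client-side password file: contains only the password, must be mode 600
[root@nfs ~]# echo 123 > /etc/rsync.pass
[root@nfs ~]# chmod 600 /etc/rsync.pass
[root@nfs ~]# rsync -avz --password-file=/etc/rsync.pass /etc/hosts rsync_backup@172.16.1.41::nfs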

3、Build lsyncd on the nfs server

[root@nfs ~]# yum install -y lsyncd
[root@nfs ~]# cat /etc/lsyncd.conf
settings {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status",
    maxProcesses = 2,
    nodaemon = false,
}
sync {
    default.rsync,
    source = "/data",
    target = "rsync_backup@172.16.1.41::nfs",
    delete = true,
    delay = 1,
    rsync = {
        binary = "/usr/bin/rsync",
        password_file = "/etc/",
        archive = true,
        compress = true,
    }
}
[root@nfs ~]# echo 123 > /etc/
[root@nfs ~]# chmod 600 /etc/
# When lsyncd starts, it will automatically run the rsync command
# At this point the /nfs directory on the backup server has no data yet
[root@backup ~]# ll /nfs
total 0
[root@nfs ~]# systemctl enable lsyncd --now
[root@nfs ~]# systemctl status lsyncd
● lsyncd.service - Live Syncing (Mirror) Daemon
   Loaded: loaded (/usr/lib/systemd/system/lsyncd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2024-12-04 20:06:19 CST; 5s ago
# Now the /nfs shared directory on the backup server has the content
[root@backup ~]# ll /nfs
total 0
-rw-r--r-- 1 www www 0 Dec 4 19:43 aaa
# Test: writing to a shared directory on web01 will automatically sync to backup
[root@web01 ~]# touch /img/{1..3}.log
[root@backup ~]# ll /nfs
total 0
-rw-r--r-- 1 www www 0 Dec 4 20:11 1.log
-rw-r--r-- 1 www www 0 Dec 4 20:11 2.log
-rw-r--r-- 1 www www 0 Dec 4 20:11 3.log
-rw-r--r-- 1 www www 0 Dec 4 19:43 aaa
# Now simulate the nfs server going down, then switch web01's mount over to the backup server
[root@nfs ~]# ifdown ens36
WARN : [ifdown] You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.
WARN : [ifdown] 'network-scripts' will be removed from distribution in near future.
WARN : [ifdown] It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Device 'ens36' successfully disconnected.

# Check which shared directory is mounted
[root@web01 ~]# cat /proc/mounts
172.16.1.31:/data /img nfs4 rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys, clientaddr=172.16.1.7,local_lock=none,addr=172.16.1.31 0 0
[root@web01 ~]# umount -f /img
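When the NFS server is unreachable, a normal umount can hang; besides -f, a lazy unmount is another option (a sketch, not part of the original steps):

# Detach the filesystem immediately and clean up references once it is no longer busy
[root@web01 ~]# umount -l /img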

# Build nfs on backup
[root@backup ~]# vim /etc/exports
/nfs 172.16.1.0/24(rw,sync,all_squash,anonuid=666,anongid=666)
[root@backup ~]# systemctl enable nfs --now
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.

# remount on web01
[root@web01 ~]# showmount -e 172.16.1.41
Export list for 172.16.1.41:
/nfs 172.16.1.0/24
[root@web01 ~]# mount -t nfs 172.16.1.41:/nfs /img
# And the data is back.
[root@web01 ~]# ll /img/
total 0
-rw-r--r-- 1 666 666 0 Dec 4 20:11 1.log
-rw-r--r-- 1 666 666 0 Dec 4 20:11 2.log
-rw-r--r-- 1 666 666 0 Dec 4 20:11 3.log
-rw-r--r-- 1 666 666 0 Dec 4 19:43 aaa
[root@web01 ~]# touch /img/
[root@backup ~]# ll /nfs
total 0
-rw-r--r-- 1 www www 0 Dec 4 20:11 1.log
-rw-r--r-- 1 www www 0 Dec 4 20:11 2.log
-rw-r--r-- 1 www www 0 Dec 4 20:11 3.log
-rw-r--r-- 1 www www 0 Dec 4 20:19
-rw-r--r-- 1 www www 0 Dec 4 19:43 aaa


# At this point the nfs server is back up and running, so we remount the share from it
# While the nfs server was down, the data written by web01 only went into the /nfs shared directory on backup, so after remounting web01 back to the nfs server that data is missing there. We also need to restart lsyncd after remounting, and since lsyncd synchronizes with the delete option enabled, we must first run an rsync from backup back to the nfs server; otherwise the data written during the outage would be lost.
[root@web01 ~]# umount /img
[root@web01 ~]# mount -t nfs 172.16.1.31:/data /img
[root@web01 ~]# ll /img/
total 0
-rw-r--r-- 1 666 666 0 Dec 4 20:11 1.log
-rw-r--r-- 1 666 666 0 Dec 4 20:11 2.log
-rw-r--r-- 1 666 666 0 Dec 4 20:11 3.log
-rw-r--r-- 1 666 666 0 Dec 4 19:43 aaa
[root@nfs ~]# systemctl restart lsyncd
[root@nfs ~]# ll /data
total 0
-rw-r--r-- 1 www www 0 Dec 4 20:11 1.log
-rw-r--r-- 1 www www 0 Dec 4 20:11 2.log
-rw-r--r-- 1 www www 0 Dec 4 20:11 3.log
-rw-r--r-- 1 www www 0 Dec 4 19:43 aaa
[root@backup ~]# ll /nfs
total 0
-rw-r--r-- 1 www www 0 Dec 4 20:11 1.log
-rw-r--r-- 1 www www 0 Dec 4 20:11 2.log
-rw-r--r-- 1 www www 0 Dec 4 20:11 3.log
-rw-r--r-- 1 www www 0 Dec 4 19:43 aaa
# To fix this data problem, we perform an rsync from backup to synchronize the data back to the nfs server before switching back to it
[root@backup ~]# rsync -avz /nfs/ 192.168.121.31:/data

Authorized users only. All activities may be monitored and reported.
root@192.168.121.31's password:
sending incremental file list
./


sent 187 bytes received 38 bytes 64.29 bytes/sec
total size is 0 speedup is 0.00
[root@nfs ~]# ll /data
total 0
-rw-r--r-- 1 www www 0 Dec 4 20:11 1.log
-rw-r--r-- 1 www www 0 Dec 4 20:11 2.log
-rw-r--r-- 1 www www 0 Dec 4 20:11 3.log
-rw-r--r-- 1 www www 0 Dec 4 20:30
-rw-r--r-- 1 www www 0 Dec 4 19:43 aaa

[root@web01 ~]# mount -t nfs 172.16.1.31:/data /img
[root@nfs ~]# systemctl restart lsyncd

4、Using scripts to monitor the nfs server to achieve automatic switching

[root@web01 ~]# cat  
#!/bin/bash

# Which NFS server is the share currently mounted from (172.16.1.31 or 172.16.1.41)?
ip=`df -Th | grep nfs | awk -F: '{print $1}'`

# Check whether the primary NFS server is reachable
ping -c1 -W1 172.16.1.31 > /dev/null 2>&1
if [ $? -ne 0 ];then
    # Primary NFS server is down: switch the mount to the backup server
    umount -f /img &> /dev/null &
    sleep 2
    mount -t nfs 172.16.1.41:/nfs /img
else
    # Primary NFS server is reachable: if we are still mounted on the backup, switch back
    if [[ $ip =~ "172.16.1.41" ]];then
         umount -f /img &> /dev/null &
         sleep 2
         mount -t nfs 172.16.1.31:/data /img
    fi
fi
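To make the switchover automatic, the monitoring script can be run from cron; a sketch, assuming the script is saved as /root/nfs_check.sh (the file name in the original is truncated):

# Add with crontab -e: run the check every minute
* * * * * /bin/bash /root/nfs_check.sh &> /dev/null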
# Scheduled job: package the data into a specified directory and push it to the backup server
#!/bin/bash

mkdir -p /backup

# First IP address of this host, used in the archive name
IP=`hostname -I | awk -F" " '{print $1}'`
path=/backup/web01_${IP}_`date +%F`
tar -zcvf $path /etc/
rsync -avz $path rsync_backup@backup::backup
# Remove local archives older than 7 days
find /backup -mtime +7 -exec rm -f {} \;
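Similarly, the packaging script is meant to run on a schedule; a sketch, assuming it is saved as /root/backup.sh (hypothetical name):

# Add with crontab -e: package and push the backup every day at 01:00
0 1 * * * /bin/bash /root/backup.sh &> /dev/null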