The Role of Distributed Storage
The purpose of adding distributed storage is mainly to protect the data: if one server's disk is damaged, the data is not lost and services can continue to run normally.
Reference article: /post/proxmox-ve-%E9%83%A8%E7%BD%B2%E5%8F%8C%E8%8A%82%E7%82%B9%E9%9B%86%E7%BE%A4%E5%8F%8Aglusterfs%E5%88%86%E5%B8%83%E5%BC%8F%E6%96%87%E4%BB%B6%E7%B3%BB%E7%BB%9F/
Both PVE nodes need to run the same version.
With two Proxmox VE nodes, the goal is high availability: automatic migration of VMs and LXC containers.
1. Modify the hosts file
In /etc/hosts on both PVE nodes, add the following entries:
root@pve1:~# cat /etc/hosts
192.168.1.50 pve1
192.168.1.60 pve2
192.168.1.50 gluster1
192.168.1.60 gluster2
2. Modify the server name
In /etc/hostname on the two PVE nodes, set the following:
root@pve1:~# cat /etc/hostname
pve1
root@pve2:~# cat /etc/hostname
pve2
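If you prefer not to edit the file by hand, the same result can be achieved with hostnamectl; run it once per node with that node's own name:
root@pve1:~# hostnamectl set-hostname pve1
root@pve2:~# hostnamectl set-hostname pve2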
3. Install glusterfs
The following operations are done on both machines, in this case pve1, pve2.
wget -O - https:///pub/gluster/glusterfs/9/ | apt-key add -
DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"')
DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+')
DEBARCH=$(dpkg --print-architecture)
echo deb https:///pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main > /etc/apt//
apt update
[Note: this step can be left out once you have settled on a specific package source.]
apt install -y glusterfs-server
3.1 Both nodes must keep the same GlusterFS version
gluster --version
3.2 On pve1 edit: nano /etc/glusterfs/glusterd.vol, and below the line option transport.socket.listen-port 24007 add:
option transport.socket.bind-address gluster1
option transport.tcp.bind-address gluster1
option transport.rdma.bind-address gluster1
3.3 On pve2 edit: nano /etc/glusterfs/glusterd.vol, and below the line option transport.socket.listen-port 24007 add:
option transport.socket.bind-address gluster2
option transport.tcp.bind-address gluster2
option transport.rdma.bind-address gluster2
3.4 Turning on services
systemctl enable glusterd
systemctl start glusterd
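To confirm the daemon is running and enabled at boot, a quick check on each node (this assumes the service name glusterd, which is what the glusterfs-server package installs):
systemctl is-enabled glusterd
systemctl status glusterd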
3.5 Important: you need to run the following command on pve2 to join the cluster.
gluster peer probe gluster1
If it displays peer probe: success, the node has joined successfully.
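You can double-check the peering from either node; for example, the following should show the other node in the connected state:
gluster peer status
gluster pool list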
3.6 Create the volume
Prerequisite: each server needs its own data disk, mounted at the /data directory.
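If the data disk has not been prepared yet, a minimal sketch of formatting and mounting it could look like this (the device name /dev/sdb and the xfs filesystem are assumptions, adjust them to your hardware):
mkfs.xfs /dev/sdb
mkdir -p /data
echo '/dev/sdb /data xfs defaults 0 0' >> /etc/fstab
mount /data
mkdir -p /data/s
The /data/s subdirectory is the brick directory used by the volume created below.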
gluster volume create VMS replica 2 gluster1:/data/s gluster2:/data/s
gluster vol start VMS
Command analysis
* gluster volume create: the GlusterFS command used to create a new volume.
* VMS: the name you assign to the new volume. In this example, the volume is named VMS.
* replica 2: specifies the volume type and the replication factor. replica indicates that this is a replicated volume, and 2 indicates that the data will be replicated across two nodes, so there are two copies of the data, one on each node.
* gluster1:/data/s: the first storage path. gluster1 is the name or IP address of a node in the GlusterFS cluster, and /data/s is the directory on that node used to store the GlusterFS volume data.
* gluster2:/data/s: the second storage path. Similar to the first path, but it specifies the second node and its storage directory. In this example, the data will be replicated between the /data/s directories on the two nodes gluster1 and gluster2.
When this command is executed, GlusterFS creates a replicated volume called VMS on the two nodes gluster1 and gluster2 and keeps the data in their /data/s directories replicated. Doing so improves the reliability and availability of the data, because if one of the nodes fails, the copy of the data on the other node is still available. However, as mentioned in the error message you may have encountered, creating replicated volumes (especially with only two copies) carries a split-brain risk. Split-brain occurs when two or more nodes each consider themselves the master and both accept write operations, so the data may become inconsistent. To avoid this, an arbiter node can be used, or the replication factor can be increased to 3 or more. But in many cases, a simple two-node replicated volume is sufficient for most applications.
-------------------------------
Analyzing the gluster vol start VMS command
1. Start volume services: this command starts services for the volume named VMS in the GlusterFS cluster, enabling clients to begin accessing data on that volume.
2. Ensuring data availability: when a volume is started, GlusterFS ensures that the data is available between the nodes in the cluster and will manage and replicate it based on the type of volume (e.g., distributed replicated volume).
3. Checking node status: before starting a volume, GlusterFS checks the status of all nodes in the cluster that participate in the volume to make sure they are available and correctly configured.
4. Handling client requests: once a volume is successfully started, clients can access the data stored on it by mounting the volume. GlusterFS handles read and write requests from clients and ensures that the data is consistent across the cluster.
5. Load balancing: for distributed volumes and distributed replicated volumes, GlusterFS automatically performs load balancing at startup to ensure that data is evenly distributed across nodes, improving overall performance and reliability.
6. Monitoring and logging: after a volume is started, GlusterFS continuously monitors its status and performance and records relevant log information. This information is useful for subsequent troubleshooting and performance tuning.
In summary, the gluster volume start VMS command is used to start the volume named VMS in a GlusterFS cluster, ensuring data availability, consistency, and performance, and allowing it to handle read and write requests from clients.
Before executing this command, you need to ensure that all nodes in the GlusterFS cluster are properly configured and can communicate with each other.
3.7. Check status
gluster vol info VMS
gluster vol status VMS
3.8 Adding mounts
Do this on both PVE nodes:
mkdir /vms
Modify pve1's /etc/fstab by adding
gluster1:VMS /vms glusterfs defaults,_netdev,backupvolfile-server=gluster2 0 0
Modify pve2's /etc/fstab by adding
gluster2:VMS /vms glusterfs defaults,_netdev,backupvolfile-server=gluster1 0 0
Reboot both PVE nodes and /vms will be mounted automatically.
To mount on both PVE nodes without rebooting:
mount /vms
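To confirm that /vms is really served by GlusterFS and not just the local directory, a quick check:
findmnt /vms
df -h /vms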
3.9 Solving the split-brain problem
When the two gluster nodes end up in a split-brain situation, that is, both nodes have the same number of votes and neither will defer to the other, the following settings resolve it:
gluster vol set VMS cluster.heal-timeout 5
gluster volume heal VMS enable
gluster vol set VMS cluster.quorum-reads false
gluster vol set VMS cluster.quorum-count 1
gluster vol set VMS network.ping-timeout 2
gluster volume set VMS cluster.favorite-child-policy mtime
gluster volume heal VMS granular-entry-heal enable
gluster volume set VMS cluster.data-self-heal-algorithm full
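If you suspect a split-brain has already happened, the heal status can be inspected before and after applying the settings above:
gluster volume heal VMS info
gluster volume heal VMS info split-brain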
4.0 PVE dual-node cluster setup
The first node creates the cluster and the second node joins it; there is not much more to it.
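For reference, a minimal command-line sketch (the cluster name mycluster is an arbitrary example):
root@pve1:~# pvecm create mycluster
root@pve2:~# pvecm add pve1
root@pve1:~# pvecm status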
5.0 Creating a shared directory
In Datacenter > Storage, click Add > Directory, fill in /vms as the directory, and check Shared.
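The same storage can also be added from the shell with pvesm; a sketch, assuming the storage ID gluster-vms (the ID is arbitrary) and that you want it for VM disks and containers:
pvesm add dir gluster-vms --path /vms --shared 1 --content images,rootdir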
6.0 HA setup
Modify /etc/pve/corosync.conf so that the quorum section looks like this:
quorum {
  provider: corosync_votequorum
  expected_votes: 1
  two_node: 1
}
Here, expected_votes indicates the expected number of votes, and two_node: 1 tells corosync that the cluster has only two nodes. NOTES: enabling two_node: 1 automatically enables wait_for_all. It is still possible to override wait_for_all by explicitly setting it to 0. If more than 2 nodes join the cluster, the two_node option is automatically disabled.
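After editing the file, the effective quorum settings can be checked with, for example:
pvecm status
corosync-quorumtool -s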
6.1 Configuring automatic failover with HA
With HA configured, if the pve2 node goes down, its VMs will automatically migrate to the other node, pve1.
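To place a guest under HA management from the command line, a minimal sketch (the VM ID 100 is only an example, use your own guest IDs):
ha-manager add vm:100 --state started
ha-manager status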