
Understanding LVM (Logical Volume Manager) in One Article

Published: 2024-07-23

The previous two articles have talked about disk partitioning and disk arrays:

Understanding Linux Disk Partitioning in One Article

Understanding in one article: Redundant Array of Independent Disks (RAID)

But once a disk has been partitioned, expanding or shrinking it later is troublesome, and often simply impossible. This is where a hard disk resource management technology that is very commonly used on Linux comes in: LVM.

LVM stands for "Logical Volume Manager"; its main purpose is to solve the problem of dynamically expanding or shrinking disk capacity.

In general, in a production environment it is impossible to accurately predict at the outset how each hard disk partition will be used, so the space originally allocated to a partition may prove insufficient. For example, the database directory storing transaction records grows with business volume, and the log directory grows as user behavior is analyzed and recorded; either will eventually stretch the original partition to its limit. And once a hard disk has been partitioned or deployed as a RAID array, changing the partition size is not easy. LVM, a very popular hard disk resource management technology, lets users adjust disk resources dynamically and thus solves these problems.

I. Basic Concepts of Logical Volume Manager

[Diagram: physical volumes form a volume group, and logical volumes are allocated from it in units of physical extents]

There are several concepts involved in the above diagram:

PV (Physical Volume): Physical Volume, as the name suggests, it may be a hard disk or a RAID array.

VG (Volume Group): Volume group, multiple physical volumes (PVs) form a volume group.

LV (Logical Volume): Logical Volume, that is, the core of our discussion, it is based on the volume group allocation management of disk resources.

PE (Physical Extent): the basic unit, the smallest unit of space LVM can allocate; every piece of space LVM allocates must be an integer multiple of the PE size.

In other words, physical volumes (PVs) make up volume groups (VGs), and logical volumes (LVs) are carved out of volume groups (VGs) in units of physical extents (PEs).

Logical Volume A in the figure above combines PEs drawn from several hard disks, so it can be presented to the outside world as a single device, with no need to care how many hard disks actually sit beneath it.

II. Logical volume management in practice

When LVM is deployed, you need to configure physical volumes, volume groups, and logical volumes one by one, and the commonly used deployment commands are shown below:

Function    Physical volume management    Volume group management    Logical volume management
Scan        pvscan                        vgscan                     lvscan
Create      pvcreate                      vgcreate                   lvcreate
Display     pvdisplay                     vgdisplay                  lvdisplay
Remove      pvremove                      vgremove                   lvremove
Extend      (none)                        vgextend                   lvextend
Reduce      (none)                        vgreduce                   lvreduce

One might ask why physical volume management has no commands for expansion and shrinkage.

A: A physical volume corresponds directly to an underlying hard disk or RAID array; if those could simply be expanded or shrunk at will, there would be no need for LVM in the first place.

The following operations were performed in a VMware virtual machine.

1. Physical volume and volume group creation

First, add two new 20G hard disks to the VM and boot it up. The two new disks, sdb and sdc, are now visible to the system.

Step one: Enable LVM support on the two hard disks

pvcreate /dev/sdb /dev/sdc

Step two: Add the two hard disk devices to a volume group named storage

vgcreate storage /dev/sdb /dev/sdc

Step Three: Viewing Volume Group Status

vgdisplay

As the vgdisplay output shows, once the volume group is created the PE parameters are fixed: the PE size is 4MB, and the 40G of space is divided into 10238 PEs in total (each PV gives up a little space to LVM metadata), waiting for LVM to carve them into logical volumes.
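The Total PE figure can be sanity-checked with a little shell arithmetic. The assumption that each PV loses exactly one 4MB extent to LVM metadata is mine, not something stated in the vgdisplay output:

```shell
# Each disk is 20G; the PE size is 4MB (from vgdisplay).
pe_mb=4
disk_mb=$((20 * 1024))
# Assumption: one extent per PV is consumed by LVM metadata.
total=$(( 2 * (disk_mb / pe_mb - 1) ))
echo "$total"   # matches the Total PE value reported by vgdisplay
```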

2. Logical Volume Creation

Step one: Creating Logical Volumes

We now carve a 201M logical volume device out of the storage volume group.

Note that there are two ways to specify its size:

  1. By capacity, using the -L parameter. For example, -L 150M cuts out a logical volume 150MB in size;

  2. By number of basic units (physical extents), using the -l parameter. Each basic unit is 4MB by default, so for example -l 37 produces a logical volume of 37 × 4MB = 148MB.

The simpler option is certainly to specify the capacity directly:

lvcreate -n kdyzm_lv -L 201 storage

The creation succeeded, but notice the message: Rounding up size to full physical extent 204.00 MiB. Why did our requested 201MB become 204MB? Because the size of a logical volume must be an integer multiple of the PE (Physical Extent, the basic unit); with a PE size of 4MB, 201MB is rounded up to 204MB.
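The rounding that lvcreate performs can be reproduced with plain shell arithmetic; this is just integer math mirroring the rule that sizes are whole multiples of the PE size:

```shell
pe_mb=4                                          # PE size in MB
requested=201                                    # what we asked for with -L 201
extents=$(( (requested + pe_mb - 1) / pe_mb ))   # round up to whole extents
rounded=$(( extents * pe_mb ))
echo "${extents} extents -> ${rounded}MB"        # 51 extents -> 204MB
# The -l form works the other way round: -l 37 gives 37 x 4MB:
mb37=$((37 * pe_mb))
echo "${mb37}MB"                                 # 148MB
```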

Linux stores the LVM logical volume devices in the /dev directory (these are actually symbolic links), and also creates a directory there named after the volume group, which holds the device mapping file for each logical volume (i.e. /dev/volume group name/logical volume name).

Step two: Formatting and mounting

When using the logical volume manager, the xfs file system is not recommended here: an xfs file system can be grown but not shrunk, so it cannot follow a logical volume through the shrink operations shown later. So we format the volume with ext4 instead.

Format the logical volume as ext4, then create a mount point and mount it:

mkfs.ext4 /dev/storage/kdyzm_lv
mkdir /kdyzm_lv
mount /dev/storage/kdyzm_lv /kdyzm_lv

Of course, for the mount to survive a reboot, the mount information needs to be written to the /etc/fstab file, which will not be repeated here.
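For reference, the /etc/fstab entry would look roughly like this; the mount point /kdyzm_lv and the default mount options are taken from this walkthrough, the rest is the standard six-field fstab layout:

```
/dev/storage/kdyzm_lv  /kdyzm_lv  ext4  defaults  0  0
```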

3. Logical volume expansion

The biggest advantage of logical volumes is dynamic expansion. A volume group is made up of one or more hard disks, and users of the storage device never perceive the underlying architecture or layout, let alone how many hard disks it comprises; as long as the volume group has enough free resources, a logical volume can always be expanded.

Steps for logical volume expansion: unmount -> expand the logical volume -> check file system integrity -> resize the file system -> remount the device
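The five steps can be strung together as follows; the commands are only echoed here, since the real ones need root and the devices created earlier in this article:

```shell
# Dry run of the expansion sequence: print each command instead of executing it.
n=0
for cmd in \
  "umount /kdyzm_lv" \
  "lvextend -L 300M /dev/storage/kdyzm_lv" \
  "e2fsck -f /dev/storage/kdyzm_lv" \
  "resize2fs /dev/storage/kdyzm_lv" \
  "mount /dev/storage/kdyzm_lv /kdyzm_lv"
do
  echo "would run: $cmd"
  n=$((n+1))
done
```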

Step one: Unmount

Always remember to detach the device from its mount point before expanding: umount /kdyzm_lv

Step two: Logical Volume Expansion

Expand logical volume /dev/storage/kdyzm_lv from 204.00 MB to 300 MB

lvextend -L 300M /dev/storage/kdyzm_lv 

The output reads "Size of logical volume storage/kdyzm_lv changed from 204.00 MiB (51 extents) to 300.00 MiB (75 extents)": the logical volume has grown from 204M (51 basic units) to 300M (75 basic units).

Step Three: Checking File System Integrity

Confirm that the directory structure and file contents are intact. Normally no errors are reported and everything checks out.

 e2fsck -f /dev/storage/kdyzm_lv 

Step Four: Resize the File System

The LV (Logical Volume) device has just been expanded, but the ext4 file system on it is still its old size; resize2fs grows the file system to fill the newly added space.

resize2fs /dev/storage/kdyzm_lv 

Step Five: Remount the hard disk

mount /dev/storage/kdyzm_lv /kdyzm_lv

It should be noted that the capacity shown here is 287M rather than the 300M we expanded to. The missing space has not been lost: the ext4 file system itself occupies part of the volume for metadata such as the journal, inode tables and reserved blocks, so the usable capacity reported after mounting is always somewhat smaller than the size of the underlying logical volume.

4. Shrinking the logical volume

The risk of data loss during a shrink operation can be high, so Linux systems specify that the integrity of the file system must be checked before performing a shrink operation on an LVM logical volume in order to ensure data security. The complete steps for shrinking are as follows:

unmount -> check file system integrity -> shrink the file system -> shrink the logical volume
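Notice that, unlike expansion, the file system is shrunk before the logical volume. Echoing the sequence (rather than executing it, since the real commands need root and the devices built earlier):

```shell
# Dry run of the shrink sequence; order matters: resize2fs before lvreduce.
n=0
for cmd in \
  "umount /kdyzm_lv" \
  "e2fsck -f /dev/storage/kdyzm_lv" \
  "resize2fs /dev/storage/kdyzm_lv 100M" \
  "lvreduce -L 100M /dev/storage/kdyzm_lv"
do
  echo "would run: $cmd"
  n=$((n+1))
done
```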

Step one: Unmount

 umount /kdyzm_lv

Step two: Check the integrity of the file system

As mentioned above, checking file system integrity before shrinking is mandatory; if you skip this step, resize2fs refuses to proceed and insists that e2fsck -f /dev/storage/kdyzm_lv be run first. So, as requested, run:

e2fsck -f /dev/storage/kdyzm_lv

Step Three: Shrink the file system to 100M

resize2fs /dev/storage/kdyzm_lv 100M

If this command completes without errors, the file system has been successfully shrunk to 100M, and it is now safe to shrink the logical volume itself.

Step Four: Shrink the Logical Volume

Reduce the logical volume to 100M with the lvreduce command:

lvreduce -L 100M /dev/storage/kdyzm_lv 

After running the command, it warns that the operation is risky and asks you to confirm; type y to proceed.

Step Five: Remount the device

mount /dev/storage/kdyzm_lv /kdyzm_lv

This completes the shrink operation.

5. Delete logical volumes

To tear down LVM, delete the logical volume, then the volume group, then the physical volume devices, in that order; the order cannot be reversed.

Step one: Unmount

Cancel the mount association:

umount /kdyzm_lv

If mount information had been written to /etc/fstab, it would need to be deleted as well; since we never wrote it in this walkthrough, there is nothing to remove there.

Step two: Deleting Logical Volumes

lvremove /dev/storage/kdyzm_lv 

Note that a secondary confirmation is required here.

Step Three: Deleting Volume Groups

vgremove storage

Only the volume group name is needed here, because that is the name we gave it with vgcreate at the beginning.


Step Four: Deleting Physical Volumes

pvremove /dev/sdb /dev/sdc 

Finally, feel free to follow my blog.