VMware vSphere 8.0 Update 3 introduced a very interesting feature called Memory Tiering, which increases the amount of memory available to ESXi hosts by using PCIe-based NVMe devices as a second (secondary) tier of memory. In essence, memory tiering uses less expensive NVMe devices as physical memory, increasing both the total memory capacity and the amount available to workloads while reducing the total cost of ownership (TCO).
Why would you need memory tiering? Is there a significant performance penalty? There are several motivations: memory is expensive, and today's demand for memory capacity and performance has grown out of balance with the demand for CPU, so many environments end up constrained by memory rather than compute. There is a performance impact, but it is steadily being reduced as the technology matures. Memory tiering is completely transparent to applications and can be used with all kinds of workloads. Because the hypervisor is responsible for memory management, it knows which pages are hot and which are cold, so it can decide which pages belong on which tier while maintaining performance.
For current memory tiering configurations, VMware recommends a 1:4 ratio of NVMe storage to physical memory, i.e. 25%. This means that if a host has 100 GB of physical memory, the recommended NVMe capacity for memory tiering is 25 GB, giving a total usable memory capacity of 125 GB; this increases memory capacity while keeping the performance impact of tiering small. This ratio is only the official recommendation and the default: the value can be changed, and the percentage of NVMe storage relative to physical memory can be set anywhere from 1 to 400. For more information and details, see the Memory Tiering technical guide document attached at the bottom of VMware KB 95944.
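As a rough illustration of what those percentages mean (example numbers only, not VMware sizing guidance): on a host with 256 GB of DRAM, a value of 25 allows up to 64 GB of NVMe tier memory (about 320 GB total), 100 allows up to 256 GB (about 512 GB total), and 400 allows up to 1 TB of NVMe tier (about 1.25 TB total), assuming the NVMe tier device is large enough to back it.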
At present, Memory Tiering is available only as a tech preview and should be evaluated in a lab or test environment. It can only be configured from the command line via ESXCLI or PowerCLI, although future releases may allow it to be configured directly from the UI. Here is a look at the configuration process.
First, the ESXi host must be running version 8.0 U3 or later to support the memory tiering feature. Note the host's current physical memory capacity before you begin.
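If you want to check this from the ESXi shell rather than the vSphere Client, the standard hardware query below reports the host's physical memory; the exact output format may vary by build.
esxcli hardware memory get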
A Samsung 970 EVO 250 GB NVMe drive is used for this test. Note the device's path and capacity.
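If you are unsure of the device path, the general-purpose commands below list the host's storage devices and the device nodes under /vmfs/devices/disks/, where you can identify the NVMe drive and note the path used in the later steps.
esxcli storage core device list
ls /vmfs/devices/disks/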
1. Log in to the ESXi host via SSH.
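For example (the address below is a placeholder for your own host):
ssh root@<ESXi_host_IP>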
2. Run the ESXCLI command to enable the memory tiering feature.
esxcli system settings kernel set -s MemoryTiering -v TRUE
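As an optional sanity check, the kernel setting can be listed back to confirm the configured value is now TRUE (it takes effect after the reboot in the final step):
esxcli system settings kernel list -o MemoryTiering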
3. Create a tier device on the NVMe device to be used for memory tiering.
esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____Samsung_SSD_970_EV0_250GB_______________5C71B5815A382500
4. View the NVMe devices used for memory tiering.
esxcli system tierdevice list
5. Configure the percentage of NVMe capacity used for memory tiering relative to physical memory.
esxcli system settings advanced set -o /Mem/TierNvmePct -i 100
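As another optional check, the advanced option can be read back to verify the configured percentage:
esxcli system settings advanced list -o /Mem/TierNvmePct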
6. This completes the host memory tiering feature configuration process.
7. After completing the configuration, reboot the ESXi host for it to take effect, then check the host's memory again. The current memory capacity should now equal the physical memory capacity plus the NVMe tier capacity.
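From the shell, the same hardware memory query used earlier can be run again after the reboot; the host's Summary page in the vSphere Client is another place to confirm the new total, and exactly how the tiered capacity is broken out there may vary by build.
esxcli hardware memory get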
Isn't that impressive? Who says your next generation of memory has to be DRAM? Why not try it out in your lab? Memory tiering is here; can memory pooling be far behind?