A VMware Cloud Foundation management domain deployment requires at least four ESXi hosts as its minimum compute unit, and a consolidated deployment (where the management and VI workload domains are merged) requires additional ESXi compute hosts as appropriate. For this test study, however, we do not need to prepare that many physical hosts; a nested virtualization deployment is sufficient for our experimental purposes.
While nested virtual machines do make it easy to deploy the environments we need, deploying a complete VMware Cloud Foundation solution demands a great deal of configuration and performance from the physical host on which the nested VMs run, which raises the barrier to entry to some extent. So, if you don't currently have the means, you might want to try VMware Hands-on Labs, the official simulation environment.
I. Preparing a Nested ESXi Host Environment
1) Physical ESXi host information
The configuration of the physical host used to deploy the nested VCF lab environment is shown in the following figure. In practice, a VCF deployment mainly demands a large amount of memory: deploying all of the management domain components consumes roughly 200 GB, while CPU and storage can be allocated as appropriate to your actual situation.
2) Physical ESXi host network configuration information
VCF deployment requires different types of network traffic to be isolated on different VLANs. This is easy to configure in environments with a physical switch; if you don't have one, you can achieve the same effect by creating a standard switch with no NIC attached on the ESXi host, and then creating port groups on top of that standard switch.
As shown below, a standard switch vSwitch2 was created on the physical host with no physical adapters (uplinks) attached. Two main port groups, vcf-mgmt-vmotion and vcf-nsx-vsan, were created under vSwitch2. As described in the previous article (VMware Cloud Foundation Part 03: Preparing the Excel Parameter Sheet), VCF assigns the NICs used for VCF network traffic on each ESXi host according to the profile selected in the Excel parameter sheet; the Profile-2 profile, for example, uses the host's vmnic0 and vmnic1 NICs for the management network and the vMotion network, and the vmnic2 and vmnic3 NICs for the NSX Overlay network and the vSAN network. The two port groups created here provide exactly that segregation, while the other four port groups are used for the virtual gateways of these network types.
For this vSwitch2 standard switch, set promiscuous mode, MAC address changes, and forged transmits to "Accept" under the "Security" configuration.
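If you prefer to script this, the following is a minimal PowerCLI sketch of the same configuration; it assumes PowerCLI is already connected (Connect-VIServer) to the physical host or to the vCenter Server that manages it.
$vmhost = Get-VMHost    # the single physical lab host
# Create the standard switch with no physical uplinks (the -Nic parameter is simply omitted)
$vs = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch2"
# Set promiscuous mode, MAC address changes, and forged transmits to "Accept"
$vs | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true -MacChanges $true -ForgedTransmits $true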
The network port group vcf-mgmt-vmotion carries traffic for the VCF management network and the vMotion network. Besides being assigned to the nested ESXi hosts, the Cloud Builder appliance (vcf-builder) used to deploy the VCF management domain connects to this network, as does the DNS and NTP server (a single virtual machine, vcf-dns) that provides external services for the VCF management components. Finally, a jump host (vcf-win11) is also connected to this port group: since the network on this switch has no external access, you need a jump host on the same network to reach the Cloud Builder tool, deploy the VCF management domain, and perform subsequent management.
For the vcf-mgmt-vmotion port group, be sure to configure the VLAN ID as "All (4095)" and set promiscuous mode, MAC address changes, and forged transmits to "Accept" under the "Security" configuration.
The network port group vcf-nsx-vsan is used for VCF's NSX Overlay network and vSAN network and is primarily assigned to the nested ESXi hosts.
For the vcf-nsx-vsan port group, be sure to configure the VLAN ID as "All (4095)" and set promiscuous mode, MAC address changes, and forged transmits to "Accept" under the "Security" configuration.
The other VMkernel port groups serve as virtual gateways for the various VCF network types. They are optional: the absence of a gateway only triggers a warning and does not affect deployment of the VCF management domain.
For these VMkernel port groups, configure a different VLAN ID for each network type, and set promiscuous mode, MAC address changes, and forged transmits to "Accept" under the "Security" configuration.
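Continuing the PowerCLI sketch above, the port groups could be created as follows. Note that the gateway port group's name and VLAN ID below are placeholders; substitute the values from your own network plan.
# Trunk (VLAN 4095) port groups for the nested ESXi hosts
foreach ($name in "vcf-mgmt-vmotion", "vcf-nsx-vsan") {
    $pg = New-VirtualPortGroup -VirtualSwitch $vs -Name $name -VLanId 4095
    # Security overrides at the port group level, matching the switch settings
    $pg | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true -MacChanges $true -ForgedTransmits $true
}
# Example gateway port group (hypothetical name and VLAN ID)
New-VirtualPortGroup -VirtualSwitch $vs -Name "vcf-mgmt-gateway" -VLanId 10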
3) Virtual machine summary information
A vcf virtual machine folder was created, and underneath it the mgmt workload domain and vi workload domain folders were created to hold the nested ESXi hosts for the different workload domains, as shown in the following figure. Directly under the vcf folder are the Cloud Builder virtual machine (vcf-builder), the DNS and NTP server (vcf-dns), and the jump host (vcf-win11) used to deploy the VCF management domain. The nested ESXi virtual machines (vcf-mgmt01-esxi01~vcf-mgmt01-esxi04) for the VCF management domain were created under the mgmt workload domain folder, and the vi workload domain folder may later hold nested ESXi virtual machines for deploying a VI workload domain.
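As a side note, VM folders are a vCenter Server construct, so the folder structure above can be reproduced with PowerCLI roughly as follows (a sketch assuming the physical host is managed by a vCenter Server):
$root = Get-Folder -Name vm                    # the root "vm" folder of the datacenter
$vcf  = New-Folder -Name "vcf" -Location $root
New-Folder -Name "mgmt workload domain" -Location $vcf
New-Folder -Name "vi workload domain" -Location $vcf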
4) Nested ESXi VM configuration information
The configuration used to create the nested ESXi virtual machines (vcf-mgmt01-esxi01~vcf-mgmt01-esxi04) for the VCF management domain is shown below.
Each nested ESXi virtual machine is allocated 16 CPUs, with hardware-assisted virtualization exposed to the guest (required for nested ESXi).
Each nested ESXi virtual machine is allocated 96 GB of memory. VCF places heavy demands on ESXi host memory; deployment should still succeed with a lower allocation, but a higher one is recommended if you plan to deploy a VI workload domain later and run its management-related VMs on these hosts.
Each nested ESXi virtual machine is allocated a 60 GB hard disk for the ESXi system installation, thick provisioned (lazy zeroed), and configured with its own NVMe controller.
Each nested ESXi virtual machine is allocated two 500 GB hard disks for vSAN ESA storage, thin provisioned, configured on a separate NVMe controller.
Each nested ESXi virtual machine is thus assigned two NVMe controllers, one for the ESXi system disk and one for the vSAN storage disks; keeping the two types of storage on separate controllers is recommended.
Four NICs are assigned to each nested ESXi VM: the first two are used for the management and vMotion networks, and the last two for the NSX Overlay and vSAN networks.
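To illustrate, here is a hedged PowerCLI sketch that creates one nested ESXi VM with this configuration. The guest OS type vmkernel8Guest assumes ESXi 8.x, and the NVMe controllers are not created here (in this lab they were assigned through the vSphere Client); adjust names and the target datastore to your environment.
$vmhost = Get-VMHost   # the physical lab host
$params = @{
    VMHost            = $vmhost
    Name              = "vcf-mgmt01-esxi01"
    NumCpu            = 16
    MemoryGB          = 96
    DiskGB            = 60
    DiskStorageFormat = "Thick"            # system disk: thick provisioned, lazy zeroed
    NetworkName       = "vcf-mgmt-vmotion", "vcf-mgmt-vmotion", "vcf-nsx-vsan", "vcf-nsx-vsan"
    GuestId           = "vmkernel8Guest"   # ESXi 8.x guest OS type (assumption)
}
$vm = New-VM @params
# Use VMXNET3 adapters for all four NICs
Get-NetworkAdapter -VM $vm | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false
# Two thin-provisioned 500 GB disks for vSAN ESA
1..2 | ForEach-Object { New-HardDisk -VM $vm -CapacityGB 500 -StorageFormat Thin | Out-Null }
# Expose hardware-assisted virtualization to the guest (required for nested ESXi)
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)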
II. Planning Nested ESXi Host Addresses
Plan the management network addresses of the nested ESXi hosts, which all use the same server for DNS and NTP, as shown in the table below. Be sure to configure forward and reverse DNS records for the nested ESXi hosts on the DNS server in advance.
Hostname | IP address | Subnet mask | Gateway | DNS/NTP server
--- | --- | --- | --- | ---
vcf-mgmt01-esxi01 | 192.168.32.61 | 255.255.255.0 | 192.168.32.254 | 192.168.32.3
vcf-mgmt01-esxi02 | 192.168.32.62 | 255.255.255.0 | 192.168.32.254 | 192.168.32.3
vcf-mgmt01-esxi03 | 192.168.32.63 | 255.255.255.0 | 192.168.32.254 | 192.168.32.3
vcf-mgmt01-esxi04 | 192.168.32.64 | 255.255.255.0 | 192.168.32.254 | 192.168.32.3
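If vcf-dns happens to be a Windows Server running the DnsServer role, the records could be created with a script like the sketch below; the zone name lab.local is a placeholder for your own domain, and the matching reverse lookup zone must already exist.
$zone  = "lab.local"   # placeholder zone name
$hosts = [ordered]@{
    "vcf-mgmt01-esxi01" = "192.168.32.61"
    "vcf-mgmt01-esxi02" = "192.168.32.62"
    "vcf-mgmt01-esxi03" = "192.168.32.63"
    "vcf-mgmt01-esxi04" = "192.168.32.64"
}
foreach ($h in $hosts.GetEnumerator()) {
    # -CreatePtr also adds the matching PTR record for reverse resolution
    Add-DnsServerResourceRecordA -ZoneName $zone -Name $h.Key -IPv4Address $h.Value -CreatePtr
}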
III. Configuring Nested ESXi Host Information
The installation process for the nested ESXi hosts is skipped here; you can check the official documentation if needed. Starting with the host configuration, the following uses one host as an example; please perform the same steps on the other hosts, which will not be repeated here.
1) Configure the ESXi address
After installing the ESXi system, configure the host's address information through the DCUI. Press F2 and enter the root password set during installation to reach the configuration interface.
Select the Configure Management Network option to configure the management network.
The NIC and VLAN defaults are sufficient. Select the IPv4 Configuration option to configure the ESXi host management address.
Select Static IPv4 Address and enter the previously planned ESXi host IP address, subnet mask, and gateway.
Select the DNS Configuration option to configure the DNS servers and hostnames for the ESXi host.
Select the Custom DNS Suffixes option to configure the DNS suffixes for the ESXi host.
Press ESC to exit and select YES to save the configuration.
2) Configure NTP service
Access and log in to the ESXi Host Client management interface through the ESXi host's management address.
Navigate to Host->Management->System->Time and Date, click Edit NTP Settings, enter the address of the NTP server and select Start and Stop with Host.
Navigate to Host->Management->Services, find the ntpd service, and click Start the service.
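The same NTP settings can also be applied with PowerCLI by connecting directly to each nested host. A rough sketch, reusing the addresses planned above and the root password from the script later in this section:
Connect-VIServer -Server 192.168.32.61 -User root -Password 'Vcf5@password'
$vmhost = Get-VMHost -Name 192.168.32.61
Add-VMHostNtpServer -VMHost $vmhost -NtpServer "192.168.32.3"
# Policy "On" corresponds to "Start and stop with host" in the Host Client
Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq "ntpd" } | Set-VMHostService -Policy On
Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq "ntpd" } | Start-VMHostService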
3) Configure SSL certificate
After the ESXi system is installed, navigate to Host->Management->Security and Users->Certificates. You can see that the host SSL certificate is issued to the default name; it needs to be regenerated so that it matches the hostname we configured.
Navigate to Host->Management->Services, locate the SSH service, and click Start the service.
Log in to the ESXi host's command line as root via SSH and run the following commands to regenerate the SSL certificate.
/sbin/generate-certificates
/etc/init.d/hostd restart && /etc/init.d/vpxa restart
If you don't want to SSH into each ESXi host one by one, you can also run the following PowerShell script (as administrator) to regenerate the SSL certificates.
# Install the Posh-SSH module if not already installed
Install-Module -Name Posh-SSH -Force
# Define the ESXi host details
$esxiHost = "192.168.32.62"
$username = "root"
$password = "Vcf5@password"
# Convert the password to a secure string
$securePassword = ConvertTo-SecureString $password -AsPlainText -Force
# Create the credential object
$credential = New-Object System.Management.Automation.PSCredential ($username, $securePassword)
# Establish the SSH session
$session = New-SSHSession -ComputerName $esxiHost -Credential $credential
# Run the command to regenerate certificates
Invoke-SSHCommand -SessionId $session.SessionId -Command '/sbin/generate-certificates'
# Restart the hostd and vpxa services
Invoke-SSHCommand -SessionId $session.SessionId -Command '/etc/init.d/hostd restart && /etc/init.d/vpxa restart'
# Close the SSH session
Remove-SSHSession -SessionId $session.SessionId
After performing the above steps, log back into the ESXi Host Client and navigate to Host->Management->Security and Users->Certificates; you can see that the host SSL certificate is now issued to the hostname.