A standard VMware Cloud Foundation instance contains two types of workload domains: the management domain and VI workload domains. Each instance has exactly one management domain, while one or more VI workload domains can be created. Every workload domain can contain multiple vSphere clusters, and every cluster can contain multiple hosts, subject to limits on how many clusters and hosts a workload domain supports. For those limits, see VMware Configuration Maximums. After a workload domain has been deployed, you may need to scale its resources, such as adding ESXi hosts or new vSphere clusters, to gain better resiliency in the event of a host failure and to support more workloads.
The process of scaling resources is essentially the same for management domains and VI workload domains. Because resources in my environment are limited, the following uses the management domain as the example to show how to scale a workload domain, which mainly involves adding and removing hosts and clusters.
I. Preparing the environment
To prepare ESXi hosts for joining a workload domain, again using nested VMs, refer to the method in the article VMware Cloud Foundation Part 04: Preparing an ESXi Host. Note that the ESXi version should match the version used in the workload domain, and you should add at least two hard disks to each ESXi host for vSAN storage. In a real-world environment, keep the configuration of the new ESXi hosts as consistent as possible with the hosts already deployed.
Before adding hosts or clusters to a workload domain, make sure there are available IP addresses in the network pool used for ESXi hosts. You can edit the network pool to add a range of available IP addresses for hosts joining an existing cluster, or create a new network pool for new clusters added to the workload domain.
Before adding hosts to a workload domain, also ensure that the TEP IP address pool in the NSX Manager of that workload domain has available IP addresses. If you are adding a new cluster, you can either create the TEP IP address pool ahead of time or create it manually within the Add Cluster workflow.
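If you prefer to check these prerequisites from the command line, the SDDC Manager public API can list the configured network pools. Below is a minimal sketch in Python, assuming a reachable SDDC Manager; the FQDN and credentials are placeholders for this lab, and while POST /v1/tokens and GET /v1/network-pools are documented VCF API endpoints, verify the payloads against the API reference for your version.

```python
import requests

SDDC = "https://vcf-mgmt01-sddc01.example.com"  # placeholder SDDC Manager FQDN
USER = "administrator@vsphere.local"            # placeholder credentials
PASS = "********"

# Request an API access token (POST /v1/tokens).
resp = requests.post(f"{SDDC}/v1/tokens",
                     json={"username": USER, "password": PASS},
                     verify=False)  # lab only: self-signed certificates
resp.raise_for_status()
headers = {"Authorization": f"Bearer {resp.json()['accessToken']}"}

# List the network pools so you can confirm there is free IP capacity
# before commissioning hosts (GET /v1/network-pools).
pools = requests.get(f"{SDDC}/v1/network-pools",
                     headers=headers, verify=False).json()
for pool in pools.get("elements", []):
    print(pool["name"], pool["id"])
```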
II. Commissioning hosts
Regardless of which workload domain (management or VI) you need to add hosts or clusters to, you must first commission the ESXi hosts into the SDDC Manager host inventory. Navigate to SDDC Manager->Inventory->Hosts and click Commission Hosts.
Check all items in the checklist to confirm that the ESXi hosts to be added meet the commissioning criteria, and click Continue.
Enter the FQDN of the ESXi host and select the storage type that matches the storage used by the workload domain. Here vSAN is selected and vSAN ESA is checked. In the current environment the vSAN type can only be "local vSAN", that is, vSAN HCI (ESA); in VCF 5.2, hosts can also be commissioned for vSAN Max disaggregated storage based on the ESA architecture. For Network Pool Name, be sure to select the network pool used by the current workload domain, or a network pool dedicated to that type of workload domain. Finally, fill in the username and password of the ESXi host for authentication.
Check the added hosts and click Validate All.
When validation succeeds, click Next.
Review the hosts and click Commission.
If all went well, you can see the ESXi hosts you just added under Unassigned Hosts.
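The same commissioning workflow can be driven through the SDDC Manager API: validate a host commission spec, then submit it. This is a hedged sketch that reuses the SDDC and headers variables from the earlier snippet; the host FQDN, credentials, and network pool name are placeholders, and the field names follow the HostCommissionSpec documented for recent VCF releases (check the storageType values supported by your version).

```python
# Commission spec for one host; all values are placeholders for this lab.
host_spec = [{
    "fqdn": "vcf-mgmt01-esxi05.example.com",
    "username": "root",
    "password": "********",
    "storageType": "VSAN_ESA",        # matches the vSAN ESA choice in the UI
    "networkPoolName": "vcf-mgmt01-np01",
}]

# Validate the spec first (equivalent to "Validate All" in the UI).
v = requests.post(f"{SDDC}/v1/hosts/validations",
                  json=host_spec, headers=headers, verify=False)
v.raise_for_status()

# Commission the hosts; the call returns a task that can be polled.
r = requests.post(f"{SDDC}/v1/hosts",
                  json=host_spec, headers=headers, verify=False)
print(r.json().get("id"))  # task id
```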
III. Adding Hosts to a Workload Domain
Before adding hosts to a cluster in a workload domain, ensure that the ESXi hosts have been commissioned and are in the Unassigned state.
Navigate to SDDC Manager->Inventory->Workload Domains and click on the workload domain to which you need to add hosts.
Click on the Clusters tab of the current workload domain and click on the cluster to which you want to add hosts.
Click "Actions" in the current cluster and select "Add Host".
If there are no available hosts in the SDDC Manager host inventory, the workflow reports an error and asks you to commission hosts first; if hosts are available, the Add Hosts workflow continues normally. Select the hosts to add to the current cluster and click Next.
Configure the virtual switches for the ESXi hosts. The current cluster has two distributed switches with two uplinks each, so the added ESXi hosts must have four NICs; this varies by environment. Select the NICs on the ESXi hosts that map to the network types of the different switches.
Configure the license for the ESXi host.
Check all configurations and click Finish.
If all goes well, you can see that the host has been added to the workload domain.
The added hosts are now visible in the vCenter Server UI of the workload domain.
The host has been added to the vSAN fault domain.
The host has been configured as an NSX transport node.
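For reference, adding a host to an existing cluster maps to a cluster update call in the SDDC Manager API. The sketch below reuses the earlier session setup; the cluster id, host id, and VDS names are placeholders, and the payload is a trimmed-down version of the documented ClusterUpdateSpec/ClusterExpansionSpec, so check your version's API reference for required fields such as license keys.

```python
cluster_id = "..."  # from GET /v1/clusters
host_id = "..."     # an unassigned host id from GET /v1/hosts

# Map the four host NICs onto the two distributed switches, mirroring
# the UI step above; switch names are placeholders.
expansion = {
    "clusterExpansionSpec": {
        "hostSpecs": [{
            "id": host_id,
            "hostNetworkSpec": {
                "vmNics": [
                    {"id": "vmnic0", "vdsName": "vcf-mgmt01-vds01"},
                    {"id": "vmnic1", "vdsName": "vcf-mgmt01-vds01"},
                    {"id": "vmnic2", "vdsName": "vcf-mgmt01-vds02"},
                    {"id": "vmnic3", "vdsName": "vcf-mgmt01-vds02"},
                ]
            },
        }]
    }
}

# PATCH /v1/clusters/{id} starts the Add Host workflow and returns a task.
r = requests.patch(f"{SDDC}/v1/clusters/{cluster_id}",
                   json=expansion, headers=headers, verify=False)
print(r.json().get("id"))
```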
IV. Deleting hosts in a workload domain
The process of removing unwanted hosts from a workload domain is simpler than adding them. First, make sure no workloads are running on the host you want to remove. Because this is a vSAN cluster, VMs may become non-compliant after a host is removed, depending on the storage policies they use. After adjusting the hosts in the cluster, modify the storage policies of the VMs so that they comply with what the cluster can still provide.
Navigate in SDDC Manager to the workload domain and cluster that the hosts belong to, switch to the Hosts tab, check the hosts you want to remove, and click Remove Selected Hosts.
Confirm the removal.
Removal in progress... If all goes well, the host is removed from the cluster.
Navigate to SDDC Manager->Inventory->Hosts->Unassigned Hosts; the removed host now appears among the unassigned hosts. You cannot directly reuse this host in a workload domain: the configuration left on it must first be cleaned up. Before the cleanup can run, you need to decommission the host from the inventory; once the cleanup is complete, you can recommission the host and use it in any workload domain.
Click Confirm to decommission the host.
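Host removal and decommissioning follow the same pattern in the API: a cluster compaction via PATCH /v1/clusters/{id}, then DELETE /v1/hosts to decommission. A hedged sketch, with ids carried over from the previous snippet and specs abbreviated from the documented ClusterCompactionSpec and host decommission payloads; verify both against your version's API reference.

```python
# Remove a host from the cluster (Remove Selected Hosts in the UI).
compaction = {"clusterCompactionSpec": {"hosts": [{"id": host_id}]}}
r = requests.patch(f"{SDDC}/v1/clusters/{cluster_id}",
                   json=compaction, headers=headers, verify=False)
r.raise_for_status()

# Once the host is back under Unassigned Hosts, decommission it so it
# can be cleaned up and later recommissioned.
decommission = [{"fqdn": "vcf-mgmt01-esxi05.example.com"}]  # placeholder
r = requests.delete(f"{SDDC}/v1/hosts",
                    json=decommission, headers=headers, verify=False)
print(r.json().get("id"))  # task id
```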
V. Adding clusters to a workload domain
In addition to adding hosts to existing clusters, you can add new vSphere clusters to a workload domain. For management domains, vSphere clusters can only be initially built with vSAN as the principal storage; for VI workload domains, vSphere clusters can be initially built with any supported storage type, not just vSAN, as the principal storage. In both cases, once a cluster is built you can add other types of storage (such as NFS, vVols, and so on) as supplementary storage for the cluster.
Adding a new vSphere cluster to a workload domain requires a minimum of three ESXi hosts when using the vSAN storage type; with other storage types, only two ESXi hosts are required. As with adding hosts to a workload domain, you need to prepare the ESXi hosts for the new vSphere cluster, ensure that the network pool and the NSX TEP IP pool have available addresses, and then use the Commission Hosts workflow to add these hosts to the SDDC Manager host inventory, as shown in the following figure.
Navigate to SDDC Manager->Inventory->Workload Domains and click on the workload domain to which you need to add a new vSphere cluster.
Click Actions in the current workload domain and select Add Cluster.
Select the type of storage to be used for the new cluster and click Start.
Set the name of the new cluster and select the LCM image for that cluster.
The default storage policy for vSAN ESA is automatically adjusted based on the number of hosts in the cluster.
Check the hosts selected for the new cluster. A vSAN cluster requires a minimum of three ESXi hosts.
Configure the VDS for the new cluster's hosts. As during the initial build of the workload domain, three preconfigured profiles are provided by default, along with the option to create a custom switch configuration. Because the preconfigured profiles assign the TEP IP addresses used by hosts on the NSX overlay network via DHCP by default, the VLAN where the TEP IPs reside would need an available DHCP server, which the current environment does not have. So I chose the custom switch configuration here and manually created a static TEP IP address pool.
After clicking Customize Switch Configuration, you can choose to copy the settings from one of the preconfigured profiles so that you don't have to configure everything one by one.
I dedicated the second switch, VDS 02, to NSX networking, but some settings still need to be made for this distributed switch, so click Edit Distributed Switch.
Scroll down to the bottom and click EDIT.
In the overlay network configuration for this switch, configure the VLAN and the TEP IP address pool. Select Static IP Pool and create a new IP pool. Note that the NSX Manager of my other VI domain was shut down at this point; when configuring the IP pool here, the system checks the existing TEP IP pools in the NSX Managers of all workload domains to prevent IP conflicts, so make sure the pool you configure does not conflict with any of them, then click Acknowledge. If you have already created TEP IP pools in the workload domain's NSX Manager, you can instead select Re-use Existing Pools and pick an existing TEP IP pool.
Scroll to the bottom and click Save Configuration.
Scroll down to the bottom and click Save Distributed Switch.
Once the switches for the new cluster's hosts are configured, click Next.
License configuration for the new cluster host.
Review all configurations and click Finish.
If all goes well, the new cluster will show up in the list.
New cluster vcf-mgmt01-cluster02 Summary Information.
New cluster vcf-mgmt01-cluster02 host list.
New cluster vcf-mgmt01-cluster02 network configuration.
View the new cluster in the workload domain vCenter Server.
The vSAN storage type for the new cluster.
vSAN fault domain for the new cluster.
vSAN storage for the new cluster.
VDS distributed switches for the new cluster.
New cluster transport node configuration in NSX.
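The whole Add Cluster workflow can likewise be driven by a single API call (POST /v1/clusters with a ClusterCreationSpec covering the hosts, storage, VDS, and NSX settings); that spec is too long to reproduce usefully here, but every one of these workflows returns a task id, which can be polled the same way. A small sketch, reusing the earlier session setup; the status values shown are the ones I would expect from the VCF API reference, so confirm them for your version.

```python
import time

def wait_for_task(task_id: str, interval: int = 30) -> str:
    """Poll GET /v1/tasks/{id} until the SDDC Manager workflow finishes."""
    while True:
        t = requests.get(f"{SDDC}/v1/tasks/{task_id}",
                         headers=headers, verify=False).json()
        status = t.get("status")
        print(t.get("name"), status)
        # Treat anything other than a pending/in-progress state as terminal
        # (e.g. SUCCESSFUL or FAILED).
        if status not in ("PENDING", "IN_PROGRESS"):
            return status
        time.sleep(interval)

# Usage: wait_for_task(r.json()["id"]) after any of the earlier calls.
```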
VI. Deleting clusters in a workload domain
Deleting a vSphere cluster from a workload domain is also very simple, but there are several caveats: no workloads should be running in the cluster; if vSAN storage is used, the vSAN datastore will be deleted, and any other type of storage will be unmounted; and if a remote vSAN datastore is mounted in the cluster, it must be unmounted first. The initial cluster of a workload domain cannot be deleted individually; it can only be removed by deleting the entire workload domain.
Navigate to SDDC Manager->Inventory->Workload Domains, open the workload domain whose cluster needs to be removed, and click Clusters to see the list of all clusters.
Click Delete Cluster next to the cluster you want to delete.
Enter the name of the cluster to be deleted and click Delete Cluster. If all goes well, the cluster is removed from the workload domain.
Similarly, the hosts from the deleted cluster are moved to the Unassigned Hosts list; as with removing hosts, they must be decommissioned before they can be cleaned up and reused.
Click Confirm to decommission the hosts.
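For completeness, cluster deletion through the API is a two-step operation: mark the cluster for deletion, then delete it. A hedged sketch with the placeholder cluster id from earlier; both calls are part of the documented VCF API, but verify the flow for your release.

```python
# Step 1: mark the cluster for deletion (PATCH /v1/clusters/{id}).
r = requests.patch(f"{SDDC}/v1/clusters/{cluster_id}",
                   json={"markForDeletion": True},
                   headers=headers, verify=False)
r.raise_for_status()

# Step 2: delete the cluster; this returns a task that can be polled
# with the wait_for_task() helper shown earlier.
r = requests.delete(f"{SDDC}/v1/clusters/{cluster_id}",
                    headers=headers, verify=False)
print(r.json().get("id"))
```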