
[VMware VCF] VMware Cloud Foundation Part 06: Deploying VI Workload Domains.


In the VMware Cloud Foundation standard architecture, the management domain, which is deployed as part of the initial bring-up, is the only one of its kind, and is used exclusively to host the management-related component VMs, is kept separate from the VI workload domains that host the actual workloads. The previous article (VMware Cloud Foundation Part 05: Deploying an SDDC Management Domain) covered the deployment of the management domain; now we need to deploy a VI workload domain for the actual production business systems.

I. VI Domain Deployment Architecture

1) Topology Architecture

  • VI Workload Domains (Local Multiple)

A VCF instance can contain multiple VI workload domains, and each VI workload domain can contain multiple vSphere clusters. By default, the management components of each VI domain consist of a vCenter Server and a cluster of three NSX Manager nodes, and all of these management component virtual machines run in the management domain.

  • VI Workload domains (remote clusters)

In a regular VCF instance, both the management domain and the VI workload domains are located in the same data center, but VCF also supports VI workload domains located at remote sites. This is somewhat similar to a headquarters-and-branch-office relationship. The VCF instance at the headquarters contains the management domain and the local VI workload domains, and you can add VI workload domains located in branch offices to the headquarters management domain. These remote VI workload domains are lifecycle-managed centrally by the headquarters management domain, and their management components also run in the headquarters management domain; only the physical hosts of those VI domains are located in the branch offices. The remote branch VI workload domains are typically connected to the headquarters management domain over a dedicated Wide Area Network (WAN) line, and there are many requirements and details for using this topology; see the VCF Product Documentation.

  • VI workload domain (shared NSX cluster)

In a regular VCF instance, the management components of a VI domain are deployed in the management domain when the VI workload domain is deployed. When the first VI domain is deployed, an NSX management component consisting of a cluster of three nodes must be deployed; when deploying the second and subsequent VI domains, you can optionally join them to the NSX management component of an existing VI domain instead of deploying a new one.

  • VI Workload Domain (Single SSO Domain)

In a regular VCF instance, the management components of a VI domain are deployed in the management domain when the VI workload domain is deployed, and a separate vCenter Server component is always deployed with each VI domain. At deployment time, the vCenter Server of a VI domain can be joined to the SSO domain of the management domain, and all deployed VI domains can be added to this single SSO domain. This is called Enhanced Linked Mode (ELM); if you don't already know about this feature, check out this previous article (Use the cmsso-util command to link, delete, and modify multiple vCenter Server (VCSA) SSO domains.).

  • VI Workload Domains (Hybrid SSO Domains)

You can add the vCenter Server of one VI domain to the SSO domain of the existing management domain, while configuring vCenter Server with a separate SSO domain when deploying other VI domains, resulting in hybrid SSO domains.

  • VI Workload Domains (Standalone SSO Domains)

All SSO domains can be deployed independently, with the management domain and each VI workload domain belonging to a different SSO domain.

2) Network Architecture

The network architecture of a VI workload domain is actually almost the same as the network topology of the management domain, with a few differences. For example, a VI workload domain does not have a separate VM Management network of its own: the only management component virtual machines of a VI domain are its vCenter Server and NSX Manager, and these virtual machines are deployed directly into the management domain, where they use the VM Management network already assigned to the management domain components. In other words, the VI domain management components and the management domain management components use the same network and the same port group. The other networks, such as the vMotion network, the NSX network, and the vSAN network (if vSAN storage is used), are the same as in the management domain.

Of course, as with the management domain, if you want to realize the full SDDC architecture, you also need to separately deploy NSX Edge nodes and configure the Edge network for north-south routing and networking.

3) Storage Architecture

When deploying a management domain, you can only use vSAN storage as the principal storage for the initial bring-up; after the management domain is deployed, you can log in to the management domain vCenter Server to add other supplemental storage, as shown in the following figure. VI workload domains differ from management domains in terms of storage usage. When initially building a VI workload domain, vSAN is no longer the only option for the principal storage; storage types such as FC, NFS, and vVols can also be used as the principal storage for the VI domain, and after the VI domain is deployed you can log in to its vCenter Server to add supplemental storage, as shown in the following figure. The type of principal storage chosen also affects the initial hosts that make up the VI domain. For example, if you choose vSAN as the principal storage, you need to prepare at least three ESXi hosts to build the VI domain (four for the management domain), whereas with FC shared storage you can build the VI domain with only two ESXi hosts. There are different considerations for using different types of storage; see the VCF Product Documentation.

II. VI domain deployment preparation

Like deploying a management domain, deploying a VI workload domain also requires some preparation, such as installing the ESXi hosts used to build the VI domain and creating the network pools assigned to the VI domain hosts. A real environment would also involve network planning, configuration, and so on; because I am testing in a nested environment, there may be many things I have overlooked, so please adapt to your actual situation.

First of all, please note that the previous environments and this test environment are all based on VMware Cloud Foundation version 5.1.0. Please strictly follow the BOM (bill of materials) in the release notes of this version to prepare the environment and follow the manuals in the product documentation when conducting the relevant tests, and please evaluate and bear the risks that may result from any operation.

1) Prepare the ESXi host

As with the deployment of the management domain, I still chose the vSAN ESA architecture as the principal storage type for building the VI workload domain, so I prepared 3 nested ESXi hosts (vcf-vi01-esxi01~vcf-vi01-esxi03) as shown below.

The VM configuration of these 3 nested ESXi hosts is shown below. It differs a little from the nested ESXi VM configuration for the management domain; for example, I set the memory to 60 GB, because the VI domain has a relatively lower memory footprint since its management components run in the management domain. However, if you are going to deploy NSX Edge nodes or other solutions later, make sure to increase the memory. All other configurations are the same as for the management domain VMs, so please adjust as appropriate.

Note that the installation and configuration of the nested ESXi hosts for the VI domain follows the same process as for the nested ESXi hosts of the management domain, which I won't repeat here; check out the previous article (VMware Cloud Foundation Part 04: Preparing an ESXi Host.).

2) Prepare the vLCM image

If, like me, you are deploying a VI workload domain using the vSAN ESA architecture, you will need to prepare a vLCM image for cluster image-based lifecycle management. VCF uses vLCM in vSphere for the lifecycle management of the relevant management components. With vSAN ESA you can only use the image-based approach, while with vSAN OSA you can also use the baseline-based approach; however, baseline-based lifecycle management is deprecated and will be removed in a future release, so the image-based approach is recommended. If you don't know what vLCM is, check out this previous article (Use vSphere Lifecycle Manager (vLCM) to manage the lifecycle of standalone hosts and clusters.).

Navigate to SDDC Manager->Lifecycle Management->Image Management. Under the Available Images tab you can see all the available images. As shown in the following figure, there is already one available image (Management-Domain-Personality), which was automatically extracted from the management domain cluster when the management domain was deployed. You can see that it contains only the base ESXi image, with no other vendor add-ons or components, because the ESXi host system of the management domain I deployed was installed from the official standard ISO image. The ESXi hosts of the VI domain that I installed later use the same installation image, so the vLCM image for the VI domain cluster is the same as the one for the management domain, and I can use this management domain image directly when deploying the VI domain.

If the ESXi hosts for the VI domain use a different installation image, you can manually extract or import a cluster image for the VI domain in the Import Image tab. This may be common in real-world environments: for example, if the ESXi hosts for the VI domain use a different brand of physical server than the management domain, the corresponding ESXi installation image will be different and you will need to set up a separate cluster image. VCF provides the following two options for extracting or importing a different cluster image. Option 1 is better suited for workload domains that already have servers of the same make or model, so you can extract the image directly from a cluster in that workload domain. Option 2 is more suitable for importing an image locally when initially building a workload domain, and supports JSON configuration files as well as ZIP and ISO format image files. These two options can be used in conjunction with each other depending on the situation.

3) Prepare the VI domain network pool

When deploying the management domain, we prepared an Excel parameter sheet containing a "Hosts and Networks" worksheet that defines the address segments for the vMotion and vSAN networks; these addresses are automatically assigned to the hosts' VMkernel NICs for transporting the corresponding traffic types. You can view this article (VMware Cloud Foundation Part 03: Preparing the Excel Parameter Sheet.) for more details.

Navigating to SDDC Manager->Administration->Network Settings, you can see the network pools configured for management domain hosts in the Network Pools tab, as shown below. You can click Edit Network Pool to add IP address segments to the network types in the network pool so that they can be used when you need to add clusters or expand cluster hosts to the management domain.

When an IP address is added to a network pool, the corresponding number of available IPs is also increased. Note that this can only be increased, not decreased, and if you want to decrease it, you can only delete the network pool and then add it again.

The network pool created above is already used by the management domain, so it can only be used by management domain hosts and cannot be used for a VI workload domain. If you want to add a VI domain, you need to create a separate network pool for the VI domain or reuse a network pool already used by a VI domain that still has available IPs. Click "Create Network Pool" to create a network pool for the VI domain, set the name of the network pool to vcf-vi01-np01, and select the network types that need IP addresses assigned. When a host uses this network pool, it automatically gets addresses from the pool for the corresponding services that are enabled. In my case I only need the vMotion and vSAN networks, so I check these two network types and configure the VLAN and address information for both below. Note that each network type in each network pool can only belong to one network and cannot overlap with networks in other network pools. For example, the vMotion network type in the management domain's network pool above uses VLAN 40 and the 192.168.40.0/24 segment, so the same network cannot be used in the newly created pool; I therefore used the 50 segment (VLAN 50, 192.168.50.0/24) instead. You can set your own IP address range as needed; for the default gateway, I configured a virtual VMkernel gateway on the standard switch of the physical ESXi host, as described in the previous article.

After saving, you can see that the network pool for the VI domain has been created.
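As a side note, network pools can also be created through the SDDC Manager public API instead of the UI. The sketch below is only illustrative: the sddc-manager.lab.local hostname, the gateway, and the IP range are placeholders, and while the /v1/tokens and /v1/network-pools endpoints exist in the VCF API, the exact JSON field names should be verified against the API reference for your release.

# request an API access token from SDDC Manager
curl -k -X POST https://sddc-manager.lab.local/v1/tokens \
  -H "Content-Type: application/json" \
  -d '{"username": "administrator@vsphere.local", "password": "******"}'

# create the VI domain network pool using the returned accessToken
curl -k -X POST https://sddc-manager.lab.local/v1/network-pools \
  -H "Authorization: Bearer <accessToken>" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "vcf-vi01-np01",
        "networks": [
          {
            "type": "VMOTION",
            "vlanId": 50,
            "mtu": 1500,
            "subnet": "192.168.50.0",
            "mask": "255.255.255.0",
            "gateway": "192.168.50.254",
            "ipPools": [ { "start": "192.168.50.11", "end": "192.168.50.30" } ]
          }
        ]
      }'

A second entry of type VSAN would be added to the networks array in the same way for the vSAN network.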

4) Preparing the VI Domain Component License

By default, the relevant component licenses were already assigned when the management domain components were deployed, but in a real environment these licenses may only be sufficient for the management domain components. When deploying a VI domain, you need to add the licenses for the VI domain components separately: before deploying the VI domain, add these licenses to the list of licenses that SDDC Manager can provision, as they will be needed during the VI domain deployment and configuration process.

Navigate to SDDC Manager->Administration->Licenses and click on License Keys to add all licenses for the VI Domain Management component to SDDC Manager.

III. VI domain deployment techniques

Before formally deploying a VI workload domain, there are some tips that might be helpful, especially for an environment like mine where physical hardware resources are not very plentiful; it is really quite difficult to fully deploy a VCF environment because it eats up so many resources! For those who are new to VMware Cloud Foundation, it is recommended to use the guided setup in SDDC Manager to learn about VCF and the related configuration considerations, which is very helpful for learning and managing VCF.

1) Shut down a management domain host

Since my physical ESXi hosts have limited resources and cannot run all the components of the management domain and the VI domain at the same time, I shut down one of the ESXi hosts in the management domain and was barely able to deploy a VI workload domain. Do this only after the VI domain deployment task has officially started: power the host off after the "Verify that the management workload domain has enough resources to deploy NSX" subtask completes, or before the vCenter Server component is deployed. Of course, if you have enough resources, you can simply ignore this.

2)vSAN ESA HCL

Since a VI workload domain with the vSAN ESA architecture will be deployed, the vSAN HCL compatibility of the ESXi hosts is also verified during the deployment process. Because nested ESXi hosts are used, they are definitely not on the official compatibility list, so when we deployed the management domain we used a script to generate a custom vSAN HCL file for the nested ESXi hosts. I verified this when deploying the VI domain: since that custom vSAN HCL file had already been synchronized to SDDC Manager during the management domain deployment, I did not encounter any compatibility issues. If you do hit this problem, the "Perform vSAN ESA Auto Disk Claim" subtask fails; the solution is to re-upload the previously generated custom vSAN HCL file to the SDDC Manager VM, replace the file under /nfs/vmware/vcf/nfs-mount/vsan-hcl/, and then set the file ownership to vcf_lcm:vcf.
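A minimal sketch of that recovery is shown below, assuming the custom HCL file is named all.json and SDDC Manager is reachable as sddc-manager.lab.local (both names are placeholders; the exact file name under /nfs/vmware/vcf/nfs-mount/vsan-hcl/ is not spelled out above, so check what already exists in that directory first):

# copy the previously generated custom vSAN HCL file to SDDC Manager
scp all.json vcf@sddc-manager.lab.local:/tmp/all.json

# then, on SDDC Manager as root: replace the existing file and fix its ownership
cp /tmp/all.json /nfs/vmware/vcf/nfs-mount/vsan-hcl/all.json
chown vcf_lcm:vcf /nfs/vmware/vcf/nfs-mount/vsan-hcl/all.json

After that, the failed task should be retriable from the SDDC Manager task panel.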

3)NSX Manager

By default, an NSX cluster of 3 NSX Manager nodes is created when deploying the NSX management component for the VI domain, the same as when deploying a management domain. Since this is a test and learning environment without enough physical resources, there is a trick to deploy only 1 NSX Manager node; in a live environment with sufficient resources, keep the default. Below is the default configuration for the NSX Manager deployment sizes.

NSX Manager Deployment Size  RAM  vCPU  Disk  Memory Reservation (default)  VM Hardware Version
Extra Small  8 GB  2  300 GB  All  10 or later
Small  16 GB  4  300 GB  All  10 or later
Medium  24 GB  6  300 GB  All  10 or later
Large  48 GB  12  300 GB  All  10 or later

Log in to SDDC Manager as the vcf user and switch to the root user, then modify the following file.

vim /etc/vmware/vcf/domainmanager/

Add the following custom parameters: an NSX Manager deployment count of 1 and a deployment size of small.

=1
=small
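The two property key names were lost from the snippet above, and the file name under /etc/vmware/vcf/domainmanager/ was truncated as well. In community write-ups of this trick for VCF 5.x the file is usually given as application-prod.properties and the keys as the pair below; both the file name and the key names are assumptions here, so verify them for your release before relying on them:

nsxt.manager.count=1
nsxt.manager.formfactor=small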

After saving the configuration, restart the Domain Manager service and ensure that the service is running properly.

systemctl restart domainmanager
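To confirm the service came back up cleanly, something like the following can be used:

systemctl status domainmanager    # should report active (running)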

For the NSX Manager VM memory reservation, it is possible to adjust the predefined configuration in the NSX Manager OVA template to no reservation before deployment, but that is a bit of a hassle; if resources are not particularly tight, you can also log in to vCenter Server after the NSX Manager VM has been deployed and change the reservation there.

IV. VI domain deployment process

1) VI domain component address

For the current environment, plan the IP address information for the management components of the VI workload domain, and be sure to configure forward and reverse domain name resolution for them on the DNS servers in advance (a quick way to check this is shown after the table below).

Component  Hostname  IP Address  Subnet Mask  Gateway  DNS/NTP Server
ESXi    192.168.32.71  255.255.255.0  192.168.32.254  192.168.32.3
    192.168.32.72  255.255.255.0  192.168.32.254  192.168.32.3
    192.168.32.73  255.255.255.0  192.168.32.254  192.168.32.3
vCenter Server    192.168.32.75  255.255.255.0  192.168.32.254  192.168.32.3
NSX Manager    192.168.32.76  255.255.255.0  192.168.32.254  192.168.32.3
    192.168.32.77  255.255.255.0  192.168.32.254  192.168.32.3
    192.168.32.78  255.255.255.0  192.168.32.254  192.168.32.3
    192.168.32.79  255.255.255.0  192.168.32.254  192.168.32.3
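A quick way to verify the forward and reverse DNS records before starting the deployment is to query the DNS server (192.168.32.3 above) directly. A minimal sketch, with a placeholder FQDN since the host names are not listed in the table:

nslookup vcf-vi01-esxi01.your-domain.local 192.168.32.3   # forward lookup, expected to return the host's IP (192.168.32.71 in my plan)
nslookup 192.168.32.71 192.168.32.3                       # reverse lookup, expected to return the same FQDN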

2) Commission hosts

Before deploying a VI workload domain, you first need to commission the ESXi hosts used for the workload domain into the SDDC Manager inventory. Likewise, whenever any type of workload domain (management domain or VI domain) needs additional hosts, the hosts must be commissioned first.

Navigate to SDDC Manager->Inventory->Hosts, where all the assigned and unassigned hosts are shown. We need to click "Commission Hosts" to start the workflow that adds the ESXi hosts for the VI domain to the inventory.

Check all, make sure you are aware of the conditions and requirements for adding, and click Continue.

There are two ways to add hosts: one is to select "Add" to add hosts one by one, and the other is to select "Import" to add hosts in batch from a JSON template file.

Select "Add", enter the FQDN of the ESXi host, and select the storage type. The VI domain supports several types of body storage as shown in the following figure, in fact, when you add a host here, you have already decided in advance the storage type of the host that will be used in the deployment of the VI domain later. When you choose a storage type, you can only use the storage type you choose later. For example, when you select vSAN, this host is labeled as a vSAN host, and can only be used for vSAN when deploying VI domains or expanding VI domain hosts in the future, which cannot be modified after you select it, and you can only remove the serving hosts and re-add serving hosts. Since we need to deploy vSAN ESA architecture, we need to check the "vSAN ESA" box, and you can see that vSAN ESA can only be managed by vLCM image-based lifecycle method. vSAN ESA vSAN type can only be selected as "Local". The vSAN type of the vSAN ESA can only be selected as "Local". The Network Pool Name here must be the network pool that we created for the VI domain (vcf-vi01-np01). Finally, enter the username and password of the ESXi host and click Add.

If you don't check vSAN ESA, i.e. you use vSAN OSA, then the vSAN type can be either "local vSAN" or "vSAN compute cluster". Hosts configured as a vSAN compute cluster can have no local storage and instead remotely mount the vSAN storage of other clusters through the vSAN Mesh feature, which is now called vSAN HCI with datastore sharing. If you don't know what vSAN HCI with datastore sharing means, you can check out this article (Create a vSAN Max cluster and configure to mount a remote datastore.).

Once configured, click "Add" and you can see that the host has been added to the Pending Authentication list.

If you select the "Import" option, we can download the JSON template by clicking on it, and then fill in the JSON file with the information from our environment, so that we can add hosts in bulk, as shown below.

{
    "hostsSpec": [
        {
            "hostfqdn": "",
            "username": "root",
            "storageType": "VSAN_ESA",
            "password": "Vcf5@password",
            "networkPoolName": "vcf-vi01-np01",
            "vVolStorageProtocolType": "-- VMFS_FC/ISCSI/NFS --"
        },
        {
            "hostfqdn": "",
            "username": "root",
            "storageType": "VSAN_ESA",
            "password": "Vcf5@password",
            "networkPoolName": "vcf-vi01-np01",
            "vVolStorageProtocolType": "-- VMFS_FC/ISCSI/NFS --"
        }
    ]
}

Select "Upload" to upload the JSON template file and the hosts configured in the JSON file will be automatically added to the list to be verified.

At this point, make sure the configuration information is correct, and click "Verify All" to ensure that all hosts can be verified.

Validation was successful. If any host fails validation, remove it, correct the configuration, and add it back for validation.

Check that all hosts are configured correctly and click "Commission" to start the commissioning task.

Click on the "Tasks" window at the bottom to view the task status.

Click on the task to go to the sub-task view, where you can see the status of the specific task being performed.

If all goes well, navigate to SDDC Manager->Inventory->Hosts->Unassigned Hosts and you should see the hosts you just commissioned.
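If you prefer the command line, the commissioned hosts can also be listed through the SDDC Manager API using the same bearer token as earlier; a minimal sketch (the hostname is a placeholder, and the unassigned hosts can be picked out of the JSON response manually):

curl -k -X GET https://sddc-manager.lab.local/v1/hosts \
  -H "Authorization: Bearer <accessToken>"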

3) Deploying a VI workload domain

If everything is ready above, you can now formally execute the VI Workload Domain Deployment workflow. Navigate to SDDC Manager->Inventory->Workload Domains and click on "VI - Workload Domains".

Select "vSAN" and check "Enable vSAN ESA", click Start.

Set the name and organization name of the VI workload domain and choose how the VI domain will join an SSO domain. I've already described the difference above, so here I choose the "Create new SSO domain" option. Personally, I think using the ELM approach may increase the complexity of component lifecycle management, and keeping them separate may be more flexible.

Note: Starting with VMware Cloud Foundation 5.0, when deploying a new VI workload domain you can only choose to join the management domain's SSO domain or create a new SSO domain.

Configure a separate SSO domain name and a password for the SSO administrator; because vSAN ESA is selected, lifecycle management can only be based on the vLCM image and cannot be changed. Click Next.

Set the name of the default cluster in the VI domain and select the image for the cluster, in this case the image extracted from the management domain.

Configure the address information for the vCenter Server component, enter the FQDN to automatically populate the IP address information, and set the root password for the vCenter Server. Click Next.

Configure the address information for the NSX Manager component: enter the FQDNs and the IP address information will be automatically populated, and set the administrator (admin) password for NSX Manager. Even though the configuration file was changed earlier to deploy only 1 node, the information here is still required, so fill it all in; only one NSX Manager node will actually be deployed later. Click Next.

Enabling the vSAN ESA architecture will use automatic policy management by default. vSAN will automatically configure the default storage policy based on the number of hosts in the cluster.

Select the ESXi hosts that will be used to deploy the VI workload domain. You can see that at least 3 hosts are required to deploy the vSAN ESA architecture.

Next is the distributed switch configuration for the default cluster of the VI workload domain. Three preconfigured profiles are provided by default: Default, Storage Traffic Separation, and NSX Traffic Separation. The Default profile runs traffic for all network types (Management, vMotion, vSAN, NSX) on the same distributed switch; the Storage Traffic Separation profile runs vSAN traffic on a separate distributed switch and traffic for the other network types on another distributed switch; the NSX Traffic Separation profile runs NSX traffic on one distributed switch and traffic for the other network types on another distributed switch. These correspond exactly to the Profile-1, Profile-2, and Profile-3 profiles in the Excel parameter sheet used when deploying the management domain! You can also select "Create Custom Switch Configuration" to configure the network type assignment of the NICs yourself, so I chose Custom and created the same assignment as in the management domain.

We can manually create a distributed switch and configure the network type by clicking on "Create Custom Switch Configuration".

Create your first distributed switch and set the switch name, MTU, and uplink port. To prepare this switch for management and vMotion networking, click Configure Network Traffic.

Select "Management" network traffic, configure the name of the distributed port group, set the binding and failover load balancing methods, and other defaults.

Select "vMotion" network traffic, configure the name of the distributed port group, set the binding and failover load balancing methods, and other defaults.

Once configured, click "Create Distributed Switch".

Now that the configuration of one distributed switch has been completed, another distributed switch needs to be created.

Create a second distributed switch and set the switch name, MTU, and uplink port. Prepare this switch for vSAN and NSX networking by clicking Configure Network Traffic.

Select "vSAN" network traffic, configure the name of the distributed port group, set the binding and failover load balancing methods, and other defaults.

Select "NSX" network traffic, select Standard for the mode of operation or Enhanced Data Path if there is a DPU device; the transmission area type includes Overlay network and VLAN network; set the name of the overlay network transmission area.

Set the VLANs for the NSX Overlay network and the TEP address pool.

Set the name of the NSX VLAN transport zone. The number of uplinks in the transport node profile is 2, which depends on the number of NICs the ESXi host assigns to the NSX network.

Set the NSX transport node profile name, select "Load Balance Source" for the teaming policy, leave the other defaults, and click Save Configuration.

When all configurations are complete, click "Create Distributed Switch".

The creation of all distributed switches and the assignment of network traffic types has been completed, confirm that there are no problems and click Next.

Select the licenses for the VI workload domain components.

Check all the configuration information, click Finish and start deploying the VI domain.

Click on the bottom task window to view the task status.

Click on the task to view the subtask execution.

The entire deployment took 2-3 hours.

All tasks of the deployment process.

V. VI domain-related information

1)SDDC Manager

  • Workload Domains in SDDC Manager

  • VI Domain (vcf-vi01) Summary Information

  • VI Domain (vcf-vi01) Service Component

  • VI domain (vcf-vi01) service hosts

  • VI domain (vcf-vi01) default cluster

  • VI Domain (vcf-vi01) Component Certificate

  • VI Domain (vcf-vi01) Cluster (vcf-vi01-cluster01) Summary

  • VI domain (vcf-vi01) cluster (vcf-vi01-cluster01) hosts

  • VI domain (vcf-vi01) cluster (vcf-vi01-cluster01) network

  • Network Pools in SDDC Manager

  • All hosts in SDDC Manager

  • Releases in SDDC Manager

2)NSX Manager

  • NSX System Configuration Overview

  • NSX Manager appliance node

  • NSX host transport nodes

  • NSX transport node profile

  • NSX Uplink Profile

  • NSX transport zones

  • NSX TEP IP Address Pool

3)vCenter Server

  • VI domain components in the management domain vCenter Server

  • VI Domain vCenter Server vSAN ESA Cluster

  • Virtual Machines in a VI Domain vCenter Server

  • VI Domain vSAN ESA Storage Pool

  • VI Domain VDS Distributed Switch Configuration

  • VI Domain ESXi Host Switch Configuration