
[VMware VCF] VCF 5.2: Configuring a vSAN Stretched Cluster for the Management Domain.


VMware vSAN solutions fall into three cluster configuration types: the vSAN standard cluster, the vSAN stretched cluster, and the two-node cluster (a special case of the stretched cluster). The most common is the vSAN standard cluster, also known as a vSAN HCI (hyperconverged) cluster, which consists of at least three ESXi hosts in the same data center whose local disks are aggregated and presented to workloads.

Why use a vSAN stretched cluster? A vSAN standard cluster provides redundancy for virtual machines within a single data center site, so a host or rack failure does not affect the workloads. That is not always enough, because failures can also occur at the data center level. The vSAN stretched cluster is designed to address this and protect virtual machines across data centers. The two-node cluster is a special case of the stretched cluster architecture for smaller scenarios such as ROBO sites, and is called a two-node (dual-host) cluster because only two hosts are required in the cluster.

In vSAN, protection for virtual machines relies on Storage Policy-Based Management (SPBM). SPBM configures a virtual machine's storage policy based on the number of fault domains in the vSAN cluster: the more fault domains, the higher the protection level that can be configured for the virtual machine. The first concept to understand is the Fault Domain (FD). A fault domain is a logical grouping of hosts, for example hosts in the same rack or in the same data center; it represents an area where we assume failures can happen together, so the hosts in that area are grouped into one fault domain. In a standard vSAN cluster, if the hosts are not explicitly grouped, each host is its own fault domain. If you do group them, you can create fault domains and assign hosts to them, but the final number of fault domains (counting both fault domains and standalone hosts) must not fall below the vSAN cluster minimum of three. In a vSAN stretched cluster, each data center is a fault domain, and the preferred and secondary data center sites belong to different fault domains, giving two fault domains. To avoid split-brain between the two active data centers, a separate witness node is deployed at a third site and forms its own fault domain. The result is a vSAN stretched cluster made up of three fault domains.

In SPBM, the protection level of a virtual machine is defined by Failures To Tolerate (FTT). There are two kinds of FTT: PFTT and SFTT. PFTT is the "primary" level of failures to tolerate and corresponds to "Site disaster tolerance" in the SPBM configuration, where the available options cover standard clusters, stretched clusters, and two-node clusters. SFTT is the "secondary" level of failures to tolerate and corresponds to "Failures to tolerate" in the SPBM configuration, with options such as no data redundancy (RAID 0), RAID 1, RAID 5, and RAID 6. When we say FTT without qualification, we usually mean SFTT, because the SFTT protection level determines the number of hosts (fault domains) the vSAN cluster needs. For a standard vSAN cluster, PFTT defaults to "Standard cluster", and the SFTT protection level is then set according to the number of hosts in the cluster. For a vSAN stretched cluster, PFTT offers more options and defaults to "Stretched cluster", and the SFTT protection level can then be set according to the number of hosts at the preferred or secondary site.

Of course, whether the cluster is a standard or stretched vSAN cluster, the more hosts it contains, the higher the protection level available for the VMs, at the cost of consuming more storage space.

The vSAN cluster built by default for the management workload domain in a VMware Cloud Foundation deployment is a standard vSAN HCI cluster. Log in to the management domain vCenter Server (vSphere Client) and navigate to the cluster (vcf-mgmt01-cluster01)->Configure->vSAN->Fault Domains; you can see that the current vSAN cluster configuration type is a single-site cluster. The cluster has four ESXi hosts, and in a standard vSAN cluster each host is its own fault domain. RAID-1 mirroring requires 2 x FTT + 1 fault domains, so with four hosts the currently supported number of fault domain (host) failures is FTT = 1; the fourth host means that after losing one fault domain (host) there are still enough fault domains to rebuild the data and restore compliance with the VM storage policy (RAID 1). To achieve FTT = 2, the vSAN cluster needs at least five hosts, and six are recommended so the specified resiliency level can be restored after a host failure; to achieve FTT = 3, the cluster needs at least seven hosts, and eight are recommended for the same reason.
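As a quick sanity check, those host counts follow directly from the 2 x FTT + 1 rule for RAID-1 mirroring, plus one spare host recommended for rebuilds. A minimal Python sketch, for illustration only:

# Minimal sketch: hosts (fault domains) needed for RAID-1 mirroring at a given FTT.
# Minimum = 2 * FTT + 1; one extra host is recommended so vSAN can rebuild the
# affected components and return to policy compliance after a host failure.
def hosts_for_raid1(ftt):
    minimum = 2 * ftt + 1
    recommended = minimum + 1
    return minimum, recommended

for ftt in (1, 2, 3):
    minimum, recommended = hosts_for_raid1(ftt)
    print(f"FTT={ftt}: minimum {minimum} hosts, recommended {recommended}")
# FTT=1: minimum 3 hosts, recommended 4
# FTT=2: minimum 5 hosts, recommended 6
# FTT=3: minimum 7 hosts, recommended 8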

In production environments you may need to convert a standard vSAN cluster into a vSAN stretched cluster to give workloads a higher level of protection. VMware Cloud Foundation supports stretching the vSAN clusters of its workload domains, but there are requirements and caveats; for example, the default cluster of the management workload domain must be stretched before any vSAN cluster in a VI workload domain can be stretched. You also cannot stretch a vSAN cluster in a VCF instance the usual way; the stretch must be performed through the API Explorer in SDDC Manager. The configuration process is described below.

 

I. Adding a Witness Host

A vSAN stretched cluster consists of two data sites, the preferred and the secondary, plus a witness host that is typically located at a third site. The vSAN witness host can be connected to the two data sites over low-bandwidth/high-latency links because it holds only the witness components (metadata) of the virtual machine objects and does not take part in vSAN datastore operations. The vSAN witness host is a dedicated virtual appliance (a nested ESXi host), and its version should match the release defined in the VCF BOM (Bill of Materials). Note that the vSAN OSA and vSAN ESA architectures have their own witness appliances, so deploy the vSAN witness host that matches your cluster.

The witness host deployment process is omitted here. Since the current VCF setup is a nested environment, the vSAN witness appliance can be deployed on the physical host that runs the nested environment; the following describes how to add it. Log in to the management domain vCenter Server (vSphere Client), navigate to the data center (vcf-mgmt01-datacenter01), right-click and select "Add Host", enter the FQDN of the witness host, and click Next.

Enter the user name and password of the witness host.

Verify the certificate fingerprint of the witness host.

View summary information about the witness host.

Set the lifecycle management method for the witness host; you can leave it as is for now. Starting with vSphere 8 U2, witness hosts can be lifecycle-managed directly through vLCM using images.

Assign a license to the witness host; the default is sufficient.

Set the lockdown mode of the witness host.

Select the location of the Witness Host VM.

Check all the information and click Finish.

The witness host has now been added.

Note that the VMkernel NIC used for the witness host's vSAN service must be able to communicate with the vSAN network of the hosts in the management domain vSAN cluster. If they are on different subnets across L3 routing, configure the corresponding static routes on the witness host and the vSAN hosts so they can reach each other. Because this is a test environment, the witness host's vSAN VMkernel address is on the same subnet as the management domain hosts, so it is enough to put the witness host's vSAN VMkernel port group on the same VLAN.

 

II. Commissioning Hosts

A vSAN stretched cluster has two data sites, the preferred and the secondary. The initial build of the VCF management domain requires a vSAN cluster of at least four ESXi hosts, so when the cluster is stretched these four hosts become the preferred-site hosts, and additional ESXi hosts must be added for the secondary site. Those hosts first need to be commissioned into SDDC Manager. Note that the newly added hosts should match the hosts in the current management domain as closely as possible, in both configuration and count.

Navigate to SDDC Manager->Inventory->Hosts, click Commission Hosts, then select Import and use the JSON template file to add the ESXi hosts in bulk, as shown below:

{
    "hostsSpec": [
        {
            "hostfqdn": "",
            "username": "root",
            "storageType": "VSAN_ESA",
            "password": "Vcf5@password",
            "networkPoolName": "vcf-mgmt01-np01",
            "vvolStorageProtocolType": ""
        },
        {
            "hostfqdn": "",
            "username": "root",
            "storageType": "VSAN_ESA",
            "password": "Vcf5@password",
            "networkPoolName": "vcf-mgmt01-np01",
            "vvolStorageProtocolType": ""
        },
        {
            "hostfqdn": "",
            "username": "root",
            "storageType": "VSAN_ESA",
            "password": "Vcf5@password",
            "networkPoolName": "vcf-mgmt01-np01",
            "vvolStorageProtocolType": ""
        },
        {
            "hostfqdn": "",
            "username": "root",
            "storageType": "VSAN_ESA",
            "password": "Vcf5@password",
            "networkPoolName": "vcf-mgmt01-np01",
            "vvolStorageProtocolType": ""
        }
    ]
}
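If you have several hosts to commission, the import file above can also be generated programmatically rather than edited by hand. A minimal Python sketch, where the FQDNs are hypothetical placeholders and the password, storage type, and network pool name simply mirror the template above:

# Minimal sketch: build the bulk-import JSON for the secondary-site hosts.
# The FQDNs are hypothetical; adjust password, storageType and networkPoolName
# to match your environment before importing the file in SDDC Manager.
import json

host_fqdns = [
    "vcf-mgmt01-esxi05.lab.local",
    "vcf-mgmt01-esxi06.lab.local",
    "vcf-mgmt01-esxi07.lab.local",
    "vcf-mgmt01-esxi08.lab.local",
]

spec = {
    "hostsSpec": [
        {
            "hostfqdn": fqdn,
            "username": "root",
            "storageType": "VSAN_ESA",
            "password": "Vcf5@password",
            "networkPoolName": "vcf-mgmt01-np01",
            "vvolStorageProtocolType": "",
        }
        for fqdn in host_fqdns
    ]
}

with open("commission-hosts.json", "w") as f:
    json.dump(spec, f, indent=4)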

After the host commissioning workflow completes, all of the hosts appear in the Unassigned Hosts list, as shown in the following figure.

 

III. Configuring the Stretched Cluster

Stretching a vSAN cluster in a VMware Cloud Foundation deployment differs from the usual vSAN stretched cluster workflow: you cannot simply add hosts and configure the stretched cluster directly in the workload domain vCenter Server (vSphere Client). Because VCF is an SDDC solution that combines vSphere, vSAN, and NSX (and possibly NSX Edge clusters), VMware provides an orchestrated workflow specifically for stretching vSAN clusters in a VCF instance, and it is driven through the API Explorer in SDDC Manager.

Navigate to SDDC Manager->Developer Center and complete the stretched cluster configuration through the API Explorer there. A reference for the APIs can be found in the VMware Cloud Foundation API Reference Guide.

Stretching a vSAN cluster with the API Explorer requires some preparation: the vSAN witness host must be deployed and added to the VCF management domain, and the ESXi hosts for the secondary site must be commissioned into SDDC Manager; both are prerequisites for the stretch. You can also create a separate network pool for the secondary-site ESXi hosts and use it to assign IP addresses for the vSAN and vMotion services, but since this is a test environment I simply reused the network pool of the current management domain. In addition, stretching the cluster through the API Explorer requires a JSON configuration file that predefines the stretch configuration, including the secondary-site ESXi hosts (hostnames, IDs, license keys, and so on), the NSX settings (TEP IP address pool, uplink profile, transport node profile, and so on), and the witness host. Once the JSON file is prepared, you can validate it first, and if it passes you can use it to stretch the vSAN cluster through the API Explorer.
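Everything done in the API Explorer below can also be scripted against the SDDC Manager public API. The sketch below shows how an access token could be obtained via POST /v1/tokens; the FQDN and credentials are placeholders, and the snippet is an assumption-laden illustration rather than the official procedure.

# Minimal sketch: request an API access token from SDDC Manager so the calls made
# in API Explorer can be reproduced from a script. FQDN/credentials are placeholders.
import requests

SDDC_MANAGER = "https://vcf-mgmt01-sddc01.lab.local"  # hypothetical FQDN

resp = requests.post(
    f"{SDDC_MANAGER}/v1/tokens",
    json={"username": "administrator@vsphere.local", "password": "********"},
    verify=False,  # lab environment with self-signed certificates
)
resp.raise_for_status()
access_token = resp.json()["accessToken"]
headers = {"Authorization": f"Bearer {access_token}"}
print("access token acquired")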

1) JSON template configuration file.

The JSON template for stretching a vSAN cluster is shown below. Note that this template only applies to scenarios where the VCF workload domain uses a single VDS distributed switch, for example a VCF management domain built with the Profile-1 profile, which requires each ESXi host to have two NICs on one VDS that carries all traffic for the management, vMotion, vSAN, and NSX overlay networks. In real-world environments, ESXi hosts often have multiple NICs and multiple distributed switches; in the current test environment, for example, each host has four NICs and two VDS switches have been created, so the template below cannot be used as-is and needs some adjustments. Some of the information in the JSON configuration file also has to be gathered in advance, such as the ESXi host IDs, so let's look at how to obtain that information and prepare the configuration file.

{
 "clusterStretchSpec": {
  "hostSpecs": [
   {
    "hostname": "",
    "hostNetworkSpec": {
     "networkProfileName": "sfo-w01-az2-nsx-np01",
     "vmNics": [
      {
       "id": "vmnic0",
       "uplink": "uplink1",
       "vdsName": "sfo-w01-cl01-vds01"
      },
      {
       "id": "vmnic1",
       "uplink": "uplink2",
       "vdsName": "sfo-w01-cl01-vds01"
      }
     ]
    },
    "id": "<ESXi host 1 ID>",
    "licenseKey": "<license key>"
   },
   {
    "hostname": "",
    "hostNetworkSpec": {
     "networkProfileName": "sfo-w01-az2-nsx-np01",
     "vmNics": [
      {
       "id": "vmnic0",
       "uplink": "uplink1",
       "vdsName": "sfo-w01-cl01-vds01"
      },
      {
       "id": "vmnic1",
       "uplink": "uplink2",
       "vdsName": "sfo-w01-cl01-vds01"
      }
     ]
    },
    "id": "<ESXi host 2 ID>",
    "licenseKey": "<license key>"
   },
   {
    "hostname": "",
    "hostNetworkSpec": {
     "networkProfileName": "sfo-w01-az2-nsx-np01",
     "vmNics": [
      {
       "id": "vmnic0",
       "uplink": "uplink1",
       "vdsName": "sfo-w01-cl01-vds01"
      },
      {
       "id": "vmnic1",
       "uplink": "uplink2",
       "vdsName": "sfo-w01-cl01-vds01"
      }
     ]
    },
    "id": "<ESXi host 3 ID>",
    "licenseKey": "<license key>"
   },
   {
    "hostname": "",
    "hostNetworkSpec": {
     "networkProfileName": "sfo-w01-az2-nsx-np01",
     "vmNics": [
      {
       "id": "vmnic0",
       "uplink": "uplink1",
       "vdsName": "sfo-w01-cl01-vds01"
      },
      {
       "id": "vmnic1",
       "uplink": "uplink2",
       "vdsName": "sfo-w01-cl01-vds01"
      }
     ]
    },
    "id": "<ESXi host 4 ID>",
    "licenseKey": "<license key>"
   }
  ],
  "isEdgeClusterConfiguredForMultiAZ": <true, if the cluster hosts an NSX Edge cluster; false, if the cluster does not host an NSX Edge cluster>,
  "networkSpec": {
   "networkProfiles": [
    {
     "isDefault": false,
     "name": "sfo-w01-az2-nsx-np01",
     "nsxtHostSwitchConfigs": [
      {
       "ipAddressPoolName": "sfo-w01-az2-host-ip-pool01",
       "uplinkProfileName": "sfo-w01-az2-host-uplink-profile01",
       "vdsName": "sfo-w01-cl01-vds01",
       "vdsUplinkToNsxUplink": [
        {
         "nsxUplinkName": "uplink-1",
         "vdsUplinkName": "uplink1"
        },
        {
         "nsxUplinkName": "uplink-2",
         "vdsUplinkName": "uplink2"
        }
       ]
      }
     ]
    }
   ],
   "nsxClusterSpec": {
    "ipAddressPoolsSpec": [
     {
      "description": "WLD01 AZ2 Host TEP Pool",
      "name": "sfo-w01-az2-host-ip-pool01",
      "subnets": [
       {
        "cidr": "172.16.44.0/24",
        "gateway": "172.16.44.253",
        "ipAddressPoolRanges": [
         {
          "end": "172.16.44.200",
          "start": "172.16.44.10"
         }
        ]
       }
      ]
     }
    ],
    "uplinkProfiles": [
     {
      "name": "sfo-w01-az2-host-uplink-profile01",
      "teamings": [
       {
        "activeUplinks": [
         "uplink-1",
         "uplink-2"
        ],
        "name": "DEFAULT",
        "policy": "LOADBALANCE_SRCID",
        "standByUplinks": []
       }
      ],
      "transportVlan": 1644
     }
    ]
   }
  },
  "witnessSpec": {
   "fqdn": "",
   "vsanCidr": "172.17.11.0/24",
   "vsanIp": "172.17.11.219"
  },
  "witnessTrafficSharedWithVsanTraffic": false
 }
}

2) Obtain the ESXi host IDs.

The JSON configuration file needs the IDs of the ESXi hosts used for the secondary site, which is why those hosts had to be commissioned into SDDC Manager ahead of time; that is how their IDs are obtained. Navigate to SDDC Manager->Developer Center->API Explorer, enter "Hosts" at callout ③ to search the API categories, and find "Hosts" at callout ④.

Expand "Hosts", find serial number ① "GET /v1/hosts" and expand it, fill in serial number ② "status" with "UNASSIGNED_USEABLE", then click "Execute".

After execution, you can see the Response below.

Expand "PageOfHost" at callout ①, then expand one of the "Host......" entries at callout ②. Confirm the FQDN of the host at callout ③ and read the ID of the host at callout ④.

Expand all of the hosts in the same way to collect the IDs of all four ESXi hosts; the result is shown in the following table:

Host ID
9147f1ce-7b65-40e4-a8e7-f4fe5ce98295
373f5d1d-d074-4da6-b5ef-765880fa90a5
e1c85352-d168-493e-a2a3-be4772e51a93
c282a60d-bc00-404c-a65a-8244886b7b9d

In fact, the easiest and most direct way to get the ID of an ESXi host is to click into the host as shown in the figure below; the host ID then appears in your browser's address bar.

/ui/sddc-manager/inventory/hosts/host/9147f1ce-7b65-40e4-a8e7-f4fe5ce98295/summary(monitoring-panel:monitoring/tasks)
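For reference, the same lookup can be scripted against GET /v1/hosts with the UNASSIGNED_USEABLE filter used above. This is a sketch only: it assumes the paged response exposes an elements list (as the PageOfHost object suggests) and that you already have an access token from the earlier snippet.

# Minimal sketch: list unassigned, usable hosts and print their FQDNs and IDs.
import requests

SDDC_MANAGER = "https://vcf-mgmt01-sddc01.lab.local"           # hypothetical FQDN
headers = {"Authorization": "Bearer <access token from POST /v1/tokens>"}

resp = requests.get(
    f"{SDDC_MANAGER}/v1/hosts",
    params={"status": "UNASSIGNED_USEABLE"},
    headers=headers,
    verify=False,
)
resp.raise_for_status()
for host in resp.json().get("elements", []):
    print(host["fqdn"], host["id"])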

3) Prepare the JSON configuration file.

With the IDs of all the ESXi hosts in hand, you can prepare the JSON configuration file shown below, which is for a vSAN stretched cluster environment with two distributed switches. The licenseKey of each ESXi host has been masked; note that these licenses must already be present in the SDDC Manager license list. The NICs of each ESXi host must correspond to the NICs assigned to the VDS distributed switches by the hosts in the existing vSAN cluster, and the switch names must match as well. Since the current vSAN cluster has no NSX Edge cluster configured, isEdgeClusterConfiguredForMultiAZ is set to false. The secondary-site ESXi hosts need a new TEP IP address pool, a new uplink profile, and a new transport node profile known as a sub-transport node profile (sub-TNP); in the end the secondary site is configured as a sub-cluster. "az2" stands for availability zone 2, and the preferred site is az1 (availability zone 1). Finally, the configuration file is populated with the vSAN witness host information: vsanIp is the address of the VMkernel NIC the witness host uses for vSAN traffic. Setting witnessTrafficSharedWithVsanTraffic to true means the ESXi hosts' witness traffic shares the same NIC as their vSAN traffic; setting it to false means the witness traffic is separated from the vSAN traffic and another VMkernel NIC (the management NIC by default) is used to communicate with the witness host's vSAN network.

{
 "clusterStretchSpec": {
  "hostSpecs": [
   {
    "hostname": "",
    "hostNetworkSpec": {
     "networkProfileName": "vcf-mgmt01-az2-network-profile",
     "vmNics": [
      {
       "id": "vmnic0",
       "uplink": "uplink1",
       "vdsName": "vcf-mgmt01-vds01"
      },
      {
       "id": "vmnic1",
       "uplink": "uplink2",
       "vdsName": "vcf-mgmt01-vds01"
      },
	  {
       "id": "vmnic2",
       "uplink": "uplink1",
       "vdsName": "vcf-mgmt01-vds02"
      },
      {
       "id": "vmnic3",
       "uplink": "uplink2",
       "vdsName": "vcf-mgmt01-vds02"
      }
     ]
    },
    "id": "9147f1ce-7b65-40e4-a8e7-f4fe5ce98295",
    "licenseKey": "00000-00000-00000-00000-00000"
   },
   {
    "hostname": "",
    "hostNetworkSpec": {
     "networkProfileName": "vcf-mgmt01-az2-network-profile",
     "vmNics": [
      {
       "id": "vmnic0",
       "uplink": "uplink1",
       "vdsName": "vcf-mgmt01-vds01"
      },
      {
       "id": "vmnic1",
       "uplink": "uplink2",
       "vdsName": "vcf-mgmt01-vds01"
      },
	  {
       "id": "vmnic2",
       "uplink": "uplink1",
       "vdsName": "vcf-mgmt01-vds02"
      },
      {
       "id": "vmnic3",
       "uplink": "uplink2",
       "vdsName": "vcf-mgmt01-vds02"
      }
     ]
    },
    "id": "373f5d1d-d074-4da6-b5ef-765880fa90a5",
    "licenseKey": "00000-00000-00000-00000-00000"
   },
   {
    "hostname": "",
    "hostNetworkSpec": {
     "networkProfileName": "vcf-mgmt01-az2-network-profile",
     "vmNics": [
      {
       "id": "vmnic0",
       "uplink": "uplink1",
       "vdsName": "vcf-mgmt01-vds01"
      },
      {
       "id": "vmnic1",
       "uplink": "uplink2",
       "vdsName": "vcf-mgmt01-vds01"
      },
	  {
       "id": "vmnic2",
       "uplink": "uplink1",
       "vdsName": "vcf-mgmt01-vds02"
      },
      {
       "id": "vmnic3",
       "uplink": "uplink2",
       "vdsName": "vcf-mgmt01-vds02"
      }
     ]
    },
    "id": "e1c85352-d168-493e-a2a3-be4772e51a93",
    "licenseKey": "00000-00000-00000-00000-00000"
   },
   {
    "hostname": "",
    "hostNetworkSpec": {
     "networkProfileName": "vcf-mgmt01-az2-network-profile",
     "vmNics": [
      {
       "id": "vmnic0",
       "uplink": "uplink1",
       "vdsName": "vcf-mgmt01-vds01"
      },
      {
       "id": "vmnic1",
       "uplink": "uplink2",
       "vdsName": "vcf-mgmt01-vds01"
      },
	  {
       "id": "vmnic2",
       "uplink": "uplink1",
       "vdsName": "vcf-mgmt01-vds02"
      },
      {
       "id": "vmnic3",
       "uplink": "uplink2",
       "vdsName": "vcf-mgmt01-vds02"
      }
     ]
    },
    "id": "c282a60d-bc00-404c-a65a-8244886b7b9d",
    "licenseKey": "00000-00000-00000-00000-00000"
   }
  ],
  "isEdgeClusterConfiguredForMultiAZ": false,
  "networkSpec": {
   "networkProfiles": [
    {
     "isDefault": false,
     "name": "vcf-mgmt01-az2-network-profile",
     "nsxtHostSwitchConfigs": [
      {
       "uplinkProfileName": "vcf-mgmt01-vds01-az2-uplink-profile",
       "vdsName": "vcf-mgmt01-vds01",
       "vdsUplinkToNsxUplink": [
        {
         "nsxUplinkName": "uplink-1",
         "vdsUplinkName": "uplink1"
        },
        {
         "nsxUplinkName": "uplink-2",
         "vdsUplinkName": "uplink2"
        }
       ]
      },
	  {
       "ipAddressPoolName": "vcf01-mgmt01-tep02-az2",
       "uplinkProfileName": "vcf-mgmt01-vds02-az2-uplink-profile",
       "vdsName": "vcf-mgmt01-vds02",
       "vdsUplinkToNsxUplink": [
        {
         "nsxUplinkName": "uplink-1",
         "vdsUplinkName": "uplink1"
        },
        {
         "nsxUplinkName": "uplink-2",
         "vdsUplinkName": "uplink2"
        }
       ]
      }
     ]
    }
   ],
   "nsxClusterSpec": {
    "ipAddressPoolsSpec": [
     {
      "description": "AZ2 Host TEP Pool",
      "name": "vcf01-mgmt01-tep02-az2",
      "subnets": [
       {
        "cidr": "192.168.43.0/24",
        "gateway": "192.168.43.254",
        "ipAddressPoolRanges": [
         {
          "end": "192.168.43.50",
          "start": "192.168.43.1"
         }
        ]
       }
      ]
     }
    ],
    "uplinkProfiles": [
     {
      "name": "vcf-mgmt01-vds01-az2-uplink-profile",
      "teamings": [
       {
        "activeUplinks": [
         "uplink-1",
         "uplink-2"
        ],
        "name": "DEFAULT",
        "policy": "LOADBALANCE_SRCID",
        "standByUplinks": []
       }
      ],
      "transportVlan": 0
     },
	 {
      "name": "vcf-mgmt01-vds02-az2-uplink-profile",
      "teamings": [
       {
        "activeUplinks": [
         "uplink-1",
         "uplink-2"
        ],
        "name": "DEFAULT",
        "policy": "LOADBALANCE_SRCID",
        "standByUplinks": []
       }
      ],
      "transportVlan": 43
     }
    ]
   }
  },
  "witnessSpec": {
   "fqdn": "",
   "vsanCidr": "192.168.41.0/24",
   "vsanIp": "192.168.41.200"
  },
  "witnessTrafficSharedWithVsanTraffic": true
 }
}

4) Get the management domain cluster ID.

After preparing the JSON configuration file, we can validate it, but before doing so we need the ID of the VCF management domain cluster. Navigate to SDDC Manager->Developer Center->API Explorer, enter "Clusters" at callout ③ to search the API categories, and find "Clusters" at callout ④.

Expand serial number ① "Clusters", find serial number ② "GET /v1/clusters" and expand it, then click serial number ③ "Execute".

Expand serial number ① "PageOfCluster", then expand serial number ② "Cluster(vcf-mgmt01-cluster01)", you can get the ID of the cluster at serial number ③.

The cluster ID obtained with the method above is shown in the following table:

Cluster ID
vcf-mgmt01-cluster01 1c10ff5f-d65f-4cc4-a062-79be46b3d9d7

In fact, the easiest and most direct way to get the cluster ID is to click into the cluster as shown in the figure below; the cluster ID then appears in your browser's address bar.

/ui/sddc-manager/inventory/domains/mgmt-vi-domains/a0ef2be8-0d8c-4ac1-89aa-a6a575f3f9e0/clusters/1c10ff5f-d65f-4cc4-a062-79be46b3d9d7/summary(monitoring-panel:monitoring/tasks)
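The cluster lookup can be scripted the same way through GET /v1/clusters; again a sketch that assumes the paged response exposes an elements list with name and id fields, as the PageOfCluster object above suggests.

# Minimal sketch: find the management cluster ID by name.
import requests

SDDC_MANAGER = "https://vcf-mgmt01-sddc01.lab.local"           # hypothetical FQDN
headers = {"Authorization": "Bearer <access token from POST /v1/tokens>"}

resp = requests.get(f"{SDDC_MANAGER}/v1/clusters", headers=headers, verify=False)
resp.raise_for_status()
for cluster in resp.json().get("elements", []):
    if cluster["name"] == "vcf-mgmt01-cluster01":
        print(cluster["id"])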

5) Validate the JSON configuration file.

After preparing the JSON configuration file and obtaining the cluster ID, verify that the configuration meets the requirements for stretching the vSAN cluster. After searching the API categories by typing "Clusters" at callout ①, expand "Clusters" at callout ② and find "POST /v1/clusters/{id}/validations" at callout ③.

Expand serial number ① "POST /v1/clusters/{id}/validations", fill in the cluster ID in serial number ②, fill in the prepared JSON configuration file in serial number ③, and click "Execute" in serial number ④. in serial number ④ and click "Execute". If the validation is successful, you can see the execution status "COMPLETED" at serial number ⑤ and "SUCCEEDED" at serial number ⑥.

6) Configure the vSAN stretched cluster.

With everything ready, we can start configuring the vSAN stretched cluster. After searching the API categories by typing "Clusters" at callout ①, expand "Clusters" at callout ② and find "PATCH /v1/clusters/{id}" at callout ③.

Expand serial number ① "PATCH /v1/clusters/{id}", fill in the cluster ID in serial number ②, fill in the prepared JSON configuration file in serial number ③, click "Execute" in serial number ④, and then click "Execute" in serial number ⑤. You can see that the execution status is "IN_PROGRESS" processing.

Click on the task window to view the task status.

Go into the task to view the status of its subtasks.

The stretch vSAN cluster task completed successfully.

 

IV. Verifying Cluster Configuration

Navigate to SDDC Manager->Inventory->Workload Domains->Clusters; the vSAN configuration now shows the cluster as stretched and the stretch status as Enabled, indicating that the configuration succeeded.

Log in to the management domain vCenter Server (vSphere Client) and navigate to the cluster (vcf-mgmt01-cluster01)->Configure->vSAN->Fault Domains; the configuration type is now "Stretched Cluster".

Navigate to the cluster (vcf-mgmt01-cluster01)->Configure->Configuration->VM/Host Groups and VM/Host Rules to see the preferred-site host group, the secondary-site host group, and the VM group in the cluster, along with the affinity rule created between the VM group and the host groups.

Navigate to the Storage tab->the vSAN datastore (vcf-mgmt01-vsan-esa-datastore01)->Configure->General; the default storage policy of this datastore is a stretched RAID 5 policy.

Navigate to the Networking tab->the network folder (Management Networks)->Networks->Distributed Port Groups to view the distributed port groups configured for the secondary-site hosts.

Log in to the management domain NSX Manager (VIP); the secondary-site hosts are configured as transport nodes assigned to a sub-cluster, with the corresponding network profile applied.

Click the transport node profile configuration to see the cluster's transport node profile, which has sub-transport node profiles (sub-TNPs) configured under each distributed switch.

Click Profiles to view the uplink profiles created for the secondary site.

Note that once the vSAN cluster has been stretched, the cluster's "Add Host" feature is disabled, so any later expansion of the cluster can only be done through the API Explorer.