The previous article ("VMware Cloud Foundation Part 03: Preparing the Excel Parameter Sheet."and"VMware Cloud Foundation Part 04: Preparing an ESXi Host."), we have learned that deploying a VMware Cloud Foundation requires a lot of time and effort to prepare the deployment parameter profiles and ESXi hosts for the deployment of the management domain, but when it comes to the actual implementation, it can all be done in a few hours. However, once everything is in place, the actual implementation can be completed in a matter of hours, which is the beauty of automating and standardizing SDDC solutions with VMware Cloud Foundation.
Without further ado, let's get to the main topics.
I. Cloud Builder Usage Tips
To do a good job, you first need good tools. Here are a few tips that may be helpful when using the Cloud Builder tool.
1) View Log Files
During the deployment of the VCF management domain in Cloud Builder, you may encounter errors or failed tasks. You can check the following log files on Cloud Builder to find the cause of the specific error. Log in to Cloud Builder via SSH as the admin user, switch to the root user, and execute the following commands.
tail -f /var/log/vmware/vcf/bringup/
tail -f /var/log/vmware/vcf/bringup/
2) Enable Command History
By default, the command history feature of the Cloud Builder VM is turned off, so trying to scroll back through previously used commands will fail. If you want to enable history, you can move away the configuration file that turns it off. Log in to Cloud Builder via SSH as the admin user, switch to the root user, and execute the following command.
mv /etc// .
history
3) Reset the Postgres database
When you finish deploying a VCF management domain using Cloud Builder, you will end up with the screen shown below. If you want to continue using Cloud Builder to redeploy the VCF management domain or to deploy another VCF instance, you will always be stuck at this screen when you access Cloud Builder again.
To use Cloud Builder again, you can reset the Postgres database. Log in to Cloud Builder via SSH as the admin user, switch to the root user, and execute the following commands.
/usr/pgsql/13/bin/psql -U postgres -d bringup -h localhost
delete from execution;
delete from "Resource";
\q
II. vSAN ESA HCL Customization Files
When you deploy VMware Cloud Foundation on nested ESXi VMs, as I did, and choose the vSAN OSA architecture for the VCF management domain, you will not run into HCL compatibility issues during deployment, because the HCL JSON file is not checked. However, if you deploy the vSAN ESA architecture and use the official HCL JSON file (/service/vsan/), you will definitely encounter compatibility issues, and the ESXi host vSAN compatibility validation will fail ("Failed to verify HCL status on ESXi Host"), as shown in the following figure.
1) Generate custom HCL JSON files for nested ESXi hosts
This issue can be resolved by generating a custom HCL JSON file with a PowerCLI script created by VMware engineer William Lam; the script is shown below in its entirety. It would be interesting to see whether this approach could also be used for the hardware compatibility issues encountered when deploying vSAN ESA clusters with image-based vLCM lifecycle management in nested environments! Note that you need to install the PowerCLI environment in order to perform the following steps.
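If PowerCLI is not installed yet, a minimal way to prepare the environment could look like the following (VMware.PowerCLI is the module name in the PowerShell Gallery; the certificate setting is only needed because a nested lab host typically uses a self-signed certificate):
# Install VMware PowerCLI from the PowerShell Gallery
Install-Module -Name VMware.PowerCLI -Scope CurrentUser
# Allow connections to hosts with self-signed certificates (typical for a nested lab)
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false
With PowerCLI installed, the full script is as follows.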
# Author: William Lam
# Description: Dynamically generate custom vSAN ESA HCL JSON file connected to standalone ESXi host
$vmhost = Get-VMHost
$supportedESXiReleases = @("ESXi 8.0 U2")
Write-Host -ForegroundColor Green "`nCollecting SSD information from ESXi host ${vmhost} ... "
$imageManager = Get-View ($)
$vibs = $()
$storageDevices = $
$storageAdapters = $
$devices = $
$pciDevices = $
$ctrResults = @()
$ssdResults = @()
$seen = @{}
foreach ($storageDevice in $storageDevices) {
$targets = $
if($targets -ne $null) {
foreach ($target in $targets) {
foreach ($ScsiLun in $) {
$device = $devices | where {$_.Key -eq $ScsiLun}
$storageAdapter = $storageAdapters | where {$_.Key -eq $}
$pciDevice = $pciDevices | where {$_.Id -eq $}
# Convert from Dec to Hex
$vid = ('{0:x}' -f $).ToLower()
$did = ('{0:x}' -f $).ToLower()
$svid = ('{0:x}' -f $).ToLower()
$ssid = ('{0:x}' -f $).ToLower()
$combined = "${vid}:${did}:${svid}:${ssid}"
if($ -eq "nvme_pcie" -or $ -eq "pvscsi") {
switch ($) {
"nvme_pcie" {
$controllerType = $
$controllerDriver = ($vibs | where {$_.name -eq "nvme-pcie"}).Version
}
"pvscsi" {
$controllerType = $
$controllerDriver = ($vibs | where {$_.name -eq "pvscsi"}).Version
}
}
$ssdReleases=@{}
foreach ($supportedESXiRelease in $supportedESXiReleases) {
$tmpObj = [ordered] @{
vsanSupport = @( "All Flash:","vSANESA-SingleTier")
$controllerType = [ordered] @{
$controllerDriver = [ordered] @{
firmwares = @(
[ordered] @{
firmware = $
vsanSupport = [ordered] @{
tier = @("AF-Cache", "vSANESA-Singletier")
mode = @("vSAN", "vSAN ESA")
}
}
)
type = "inbox"
}
}
}
if(!$ssdReleases[$supportedESXiRelease]) {
$($supportedESXiRelease,$tmpObj)
}
}
if($ -eq "disk" -and !$seen[$combined]) {
$ssdTmp = [ordered] @{
id = [int]$(Get-Random -Minimum 1000 -Maximum 50000).toString()
did = $did
vid = $vid
ssid = $ssid
svid = $svid
vendor = $
model = ($).trim()
devicetype = $
partnername = $
productid = ($).trim()
partnumber = $
capacity = [Int]((($ * $) / 1048576))
vcglink = "/homelab"
releases = $ssdReleases
vsanSupport = [ordered] @{
mode = @("vSAN", "vSAN ESA")
tier = @("vSANESA-Singletier", "AF-Cache")
}
}
$controllerReleases=@{}
foreach ($supportedESXiRelease in $supportedESXiReleases) {
$tmpObj = [ordered] @{
$controllerType = [ordered] @{
$controllerDriver = [ordered] @{
type = "inbox"
queueDepth = $
firmwares = @(
[ordered] @{
firmware = $
vsanSupport = @( "Hybrid:Pass-Through","All Flash:Pass-Through","vSAN ESA")
}
)
}
}
vsanSupport = @( "Hybrid:Pass-Through","All Flash:Pass-Through")
}
if(!$controllerReleases[$supportedESXiRelease]) {
$($supportedESXiRelease,$tmpObj)
}
}
$controllerTmp = [ordered] @{
id = [int]$(Get-Random -Minimum 1000 -Maximum 50000).toString()
releases = $controllerReleases
}
$ctrResults += $controllerTmp
$ssdResults += $ssdTmp
$seen[$combined] = "yes"
}
}
}
}
}
}
# Retrieve the latest vSAN HCL jsonUpdatedTime
$results = Invoke-WebRequest -Uri '/products/v1/bundles/lastupdatedtime' -Headers @{'x-vmw-esp-clientid'='vsan-hcl-vcf-2023'}
# Parse out content between '{...}'
$pattern = '\{(.+?)\}'
$matched = ([regex]::Matches($results, $pattern)).Value
if($matched -ne $null) {
$vsanHclTime = $matched|ConvertFrom-Json
} else {
Write-Error "Unable to retrieve vSAN HCL jsonUpdatedTime, ensure you have internet connectivity when running this script"
}
$hclObject = [ordered] @{
timestamp = $
jsonUpdatedTime = $
totalCount = $($ + $)
supportedReleases = $supportedESXiReleases
eula = @{}
data = [ordered] @{
controller = @($ctrResults)
ssd = @($ssdResults)
hdd = @()
}
}
$dateTimeGenerated = Get-Date -Uformat "%m_%d_%Y_%H_%M_%S"
$outputFileName = "custom_vsan_esa_hcl_${dateTimeGenerated}.json"
Write-Host -ForegroundColor Green "Saving Custom vSAN ESA HCL to ${outputFileName}`n"
$hclObject | ConvertTo-Json -Depth 12 | Out-File -FilePath $outputFileName
Run PowerShell and use the PowerCLI cmdlet Connect-VIServer to connect to the nested ESXi host, then run the custom HCL JSON generation script.
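For example, assuming the script above was saved as generate_custom_vsan_esa_hcl.ps1 (the file name is just an example) and using one of the nested hosts from this environment, the steps could look like this:
# Connect to a nested ESXi host (host name and credentials from this lab; use the FQDN if DNS requires it)
Connect-VIServer -Server vcf-mgmt01-esxi01 -User root -Password 'Vcf5@password'
# Run the script; it writes custom_vsan_esa_hcl_<timestamp>.json to the current directory
.\generate_custom_vsan_esa_hcl.ps1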
The content of the generated custom HCL JSON file is shown below. Note that your computer needs Internet access to run the above script. If you cannot connect to the Internet, you need to download the official HCL JSON file (/service/vsan/) manually and then set the timestamp and jsonUpdatedTime fields of the generated file to the latest values from that official HCL JSON file (see the sketch after the JSON listing below).
{
"timestamp": 1721122728,
"jsonUpdatedTime": "July 16, 2024, 2:38 AM PDT",
"totalCount": 2,
"supportedReleases": [
"ESXi 8.0 U2"
],
"eula": {
},
"data": {
"controller": [
{
"id": 33729,
"releases": {
"ESXi 8.0 U2": {
"nvme_pcie": {
"1.2.4.11-1vmw.802.0.0.22380479": {
"type": "inbox",
"queueDepth": 510,
"firmwares": [
{
"firmware": "1.3",
"vsanSupport": [
"Hybrid:Pass-Through",
"All Flash:Pass-Through",
"vSAN ESA"
]
}
]
}
},
"vsanSupport": [
"Hybrid:Pass-Through",
"All Flash:Pass-Through"
]
}
}
}
],
"ssd": [
{
"id": 25674,
"did": "7f0",
"vid": "15ad",
"ssid": "7f0",
"svid": "15ad",
"vendor": "NVMe",
"model": "VMware Virtual NVMe Disk",
"devicetype": "NVMe",
"partnername": "NVMe",
"productid": "VMware Virtual NVMe Disk",
"partnumber": "f72c2cf6551ae47e000c2968afc4b0ec",
"capacity": 61440,
"vcglink": "/homelab",
"releases": {
"ESXi 8.0 U2": {
"vsanSupport": [
"All Flash:",
"vSANESA-SingleTier"
],
"nvme_pcie": {
"1.2.4.11-1vmw.802.0.0.22380479": {
"firmwares": [
{
"firmware": "1.3",
"vsanSupport": {
"tier": [
"AF-Cache",
"vSANESA-Singletier"
],
"mode": [
"vSAN",
"vSAN ESA"
]
}
}
],
"type": "inbox"
}
}
}
},
"vsanSupport": {
"mode": [
"vSAN",
"vSAN ESA"
],
"tier": [
"vSANESA-Singletier",
"AF-Cache"
]
}
}
],
"hdd": [
]
}
}
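As mentioned above, if the script has to be run without Internet access, the timestamp and jsonUpdatedTime values can be copied into the generated file from a manually downloaded official HCL JSON. A possible sketch (the file names all.json and custom_vsan_esa_hcl.json are examples):
# Copy timestamp/jsonUpdatedTime from the official HCL JSON into the custom file
$official = Get-Content -Raw .\all.json | ConvertFrom-Json
$custom = Get-Content -Raw .\custom_vsan_esa_hcl.json | ConvertFrom-Json
$custom.timestamp = $official.timestamp
$custom.jsonUpdatedTime = $official.jsonUpdatedTime
$custom | ConvertTo-Json -Depth 12 | Set-Content .\custom_vsan_esa_hcl.json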
2) Re-save the HCL JSON file
Strangely, I don't know why, but I had problems using the HCL JSON file generated above: I had to open the generated file in Notepad, copy all of its content into another Notepad window, save that as a new JSON file, and import this re-saved file into Cloud Builder before the validation would succeed. If you encounter the same problem, you can try this workaround.
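If the cause is the file encoding (Windows PowerShell's Out-File writes UTF-16 by default, while Cloud Builder presumably expects plain UTF-8), an alternative to the Notepad round-trip would be to write the file explicitly as UTF-8 without a BOM, for example by replacing the script's last Out-File line with something like:
# Write the HCL JSON as UTF-8 without a BOM instead of relying on Out-File's default encoding
$json = $hclObject | ConvertTo-Json -Depth 12
[System.IO.File]::WriteAllText((Join-Path (Get-Location) $outputFileName), $json)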
3) Upload HCL JSON file to Cloud Builder
After generating a custom HCL JSON file for the nested ESXi hosts with the script above, you need to upload it to Cloud Builder via SFTP, move it into place with the commands below, and configure the path to the HCL JSON file in the Excel parameter sheet; it will be needed when deploying the management domain. An example of the upload step itself follows the commands.
mv /home/admin/ /opt/vmware/bringup/tmp/
chmod 644 /opt/vmware/bringup/tmp/
chown vcf_bringup:vcf /opt/vmware/bringup/tmp/
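For reference, the upload step, run from the workstation where the HCL JSON file was generated and before the commands above, might look like this (the file name is an example, <cloud-builder-fqdn> is a placeholder for your Cloud Builder address, and any SFTP client works equally well):
# Copy the custom HCL JSON to the admin home directory on Cloud Builder
scp .\custom_vsan_esa_hcl.json admin@<cloud-builder-fqdn>:/home/admin/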
III. NSX Manager Deployment Tips
1) Increase wait time for NSX Manager deployment
During VCF management domain deployment, the longest time is spent on the automatic deployment and configuration of the NSX components. If the hardware performance of the deployment environment is poor, this phase can last a very long time and may eventually fail. You can increase the time Cloud Builder waits for the NSX components so that the deployment can complete before the timeout occurs. Log in to Cloud Builder via SSH as the admin user, switch to the root user, and execute the following command.
vim /opt/vmware/bringup/webapps/bringup-app/conf/
Add the following parameter:
=100 (or longer)
Restart the Cloud Builder service.
systemctl restart vcf-bringup
2) Modify the number of NSX Manager deployment nodes
By default, 3 NSX Manager nodes are deployed and a full NSX cluster is configured when the NSX Manager component is deployed. In fact, if you are just testing and learning, you can deploy only 1 NSX Manager node when the resources of the host running the VCF environment are insufficient, which greatly reduces resource consumption.
This is done by converting the Excel parameter sheet into a JSON configuration file and then finding the configuration for NSX in the JSON file, as shown below.
"nsxtSpec":
{
"nsxtManagerSize": "medium",
"nsxtManagers": [
{
"hostname": "vcf-mgmt01-nsx01a",
"ip": "192.168.32.67"
},
{
"hostname": "vcf-mgmt01-nsx01b",
"ip": "192.168.32.68"
},
{
"hostname": "vcf-mgmt01-nsx01c",
"ip": "192.168.32.69"
}
],
"rootNsxtManagerPassword": "Vcf5@password",
"nsxtAdminPassword": "Vcf5@password",
"nsxtAuditPassword": "Vcf5@password",
"vip": "192.168.32.66",
"vipFqdn": "vcf-mgmt01-nsx01",
Remove the other 2 NSX Manager nodes from the JSON file as shown below. This will allow you to deploy only 1 node.
"nsxtSpec":
{
"nsxtManagerSize": "medium",
"nsxtManagers": [
{
"hostname": "vcf-mgmt01-nsx01a",
"ip": "192.168.32.67"
}
],
"rootNsxtManagerPassword": "Vcf5@password",
"nsxtAdminPassword": "Vcf5@password",
"nsxtAuditPassword": "Vcf5@password",
"vip": "192.168.32.66",
"vipFqdn": "vcf-mgmt01-nsx01",
3) Adjust the NSX Manager default storage policy
For the same reason, when hardware performance is insufficient, you can change the vSAN default storage policy so that FTT is 0, that is, no replicas are kept; this speeds up the deployment of the NSX Manager component. After the VCF management domain has been successfully deployed, you can change the vSAN storage policy of the NSX Manager nodes back to the vSAN ESA default storage policy (RAID 5). Note that this adjustment requires logging in to the vSphere Client before Cloud Builder deploys the NSX Manager component.
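If you prefer scripting this instead of clicking through the vSphere Client, a rough PowerCLI sketch using the SPBM cmdlets might look like the following. It assumes the default policy name "vSAN Default Storage Policy" and the default vsphere.local SSO domain; adjust both to your environment:
# Connect to the management vCenter deployed by Cloud Builder
Connect-VIServer -Server vcf-mgmt01-vcsa01 -User administrator@vsphere.local -Password 'Vcf5@password'
# Set Failures To Tolerate (FTT) to 0 on the vSAN default storage policy
$policy = Get-SpbmStoragePolicy -Name "vSAN Default Storage Policy"
$ftt0 = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 0
Set-SpbmStoragePolicy -StoragePolicy $policy -AnyOfRuleSets (New-SpbmRuleSet -AllOfRules $ftt0)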
4) Modify the NSX Manager memory reservation
For the same reason, when hardware resources are insufficient, you can change the memory reservation in the memory configuration of the NSX Manager node VM to 0, so that it does not reserve all of the memory allocated to it. Of course, this can also be changed as needed by logging in to the vSphere Client after the VCF management domain has been deployed.
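The same change can also be made with PowerCLI once the NSX Manager VM exists; a small sketch (the VM name comes from the parameter sheet used in this article):
# Remove the memory reservation from the NSX Manager appliance VM
Get-VM -Name "vcf-mgmt01-nsx01a" |
    Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -MemReservationMB 0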
IV. Preparing the JSON Configuration File
1) Excel Parameter Sheet
Below are the Excel parameter sheet tabs prepared for the current environment, so you can see the values used. The license keys have been masked.
- Credentials Parameter Table
- Hosts and Networks parameter table
- Deploy Parameters
2) JSON configuration file
The configuration file in JSON format below will be imported later for the deployment. Only 1 NSX Manager node is retained, and the license keys have been masked.
{
"subscriptionLicensing": false,
"skipEsxThumbprintValidation": false,
"managementPoolName": "vcf-mgmt01-np01",
"sddcManagerSpec": {
"secondUserCredentials": {
"username": "vcf",
"password": "Vcf5@password"
},
"ipAddress": "192.168.32.70",
"hostname": "vcf-mgmt01-sddc01",
"rootUserCredentials": {
"username": "root",
"password": "Vcf5@password"
},
"localUserPassword": "Vcf5@password"
},
"sddcId": "vcf-mgmt01",
"esxLicense": "00000-00000-00000-00000-00000",
"taskName": "workflowconfig/",
"ceipEnabled": false,
"fipsEnabled": false,
"ntpServers": ["192.168.32.3"],
"dnsSpec": {
"subdomain": "",
"domain": "",
"nameserver": "192.168.32.3"
},
"networkSpecs": [
{
"networkType": "MANAGEMENT",
"subnet": "192.168.32.0/24",
"gateway": "192.168.32.254",
"vlanId": "0",
"mtu": "1500",
"portGroupKey": "vcf-mgmt01-vds01-pg-mgmt",
"standbyUplinks":[],
"activeUplinks":[
"uplink1",
"uplink2"
]
},
{
"networkType": "VMOTION",
"subnet": "192.168.40.0/24",
"gateway": "192.168.40.254",
"vlanId": "40",
"mtu": "9000",
"portGroupKey": "vcf-mgmt01-vds01-pg-vmotion",
"includeIpAddressRanges": [{"endIpAddress": "192.168.40.4", "startIpAddress": "192.168.40.1"}],
"standbyUplinks":[],
"activeUplinks":[
"uplink1",
"uplink2"
]
},
{
"networkType": "VSAN",
"subnet": "192.168.41.0/24",
"gateway": "192.168.41.254",
"vlanId": "41",
"mtu": "9000",
"portGroupKey": "vcf-mgmt01-vds02-pg-vsan",
"includeIpAddressRanges": [{"endIpAddress": "192.168.41.4", "startIpAddress": "192.168.41.1"}],
"standbyUplinks":[],
"activeUplinks":[
"uplink1",
"uplink2"
]
},
{
"networkType": "VM_MANAGEMENT",
"subnet": "192.168.32.0/24",
"gateway": "192.168.32.254",
"vlanId": "0",
"mtu": "9000",
"portGroupKey": "vcf-mgmt01-vds01-pg-vm-mgmt",
"standbyUplinks":[],
"activeUplinks":[
"uplink1",
"uplink2"
]
}
],
"nsxtSpec":
{
"nsxtManagerSize": "medium",
"nsxtManagers": [
{
"hostname": "vcf-mgmt01-nsx01a",
"ip": "192.168.32.67"
}
],
"rootNsxtManagerPassword": "Vcf5@password",
"nsxtAdminPassword": "Vcf5@password",
"nsxtAuditPassword": "Vcf5@password",
"vip": "192.168.32.66",
"vipFqdn": "vcf-mgmt01-nsx01",
"nsxtLicense": "33333-33333-33333-33333-33333",
"transportVlanId": 42,
"ipAddressPoolSpec": {
"name": "vcf-mgmt01-tep01",
"description": "ESXi Host Overlay TEP IP Pool",
"subnets":[
{
"ipAddressPoolRanges":[
{
"start": "192.168.42.1",
"end": "192.168.42.8"
}
],
"cidr": "192.168.42.0/24",
"gateway": "192.168.42.254"
}
]
}
},
"vsanSpec": {
"licenseFile": "11111-11111-11111-11111-11111",
"vsanDedup": "false",
"esaConfig": {
"enabled": true
},
"hclFile": "/opt/vmware/bringup/tmp/",
"datastoreName": "vcf-mgmt01-vsan-esa-datastore01"
},
"dvsSpecs": [
{
"dvsName": "vcf-mgmt01-vds01",
"vmnics": [
"vmnic0",
"vmnic1"
],
"mtu": 9000,
"networks":[
"MANAGEMENT",
"VMOTION",
"VM_MANAGEMENT"
],
"niocSpecs":[
{
"trafficType":"VSAN",
"value":"HIGH"
},
{
"trafficType":"VMOTION",
"value":"LOW"
},
{
"trafficType":"VDP",
"value":"LOW"
},
{
"trafficType":"VIRTUALMACHINE",
"value":"HIGH"
},
{
"trafficType":"MANAGEMENT",
"value":"NORMAL"
},
{
"trafficType":"NFS",
"value":"LOW"
},
{
"trafficType":"HBR",
"value":"LOW"
},
{
"trafficType":"FAULTTOLERANCE",
"value":"LOW"
},
{
"trafficType":"ISCSI",
"value":"LOW"
}
],
"nsxtSwitchConfig": {
"transportZones": [
{
"name": "vcf-mgmt01-tz-vlan01",
"transportType": "VLAN"
}
]
}
},
{
"dvsName": "vcf-mgmt01-vds02",
"vmnics": [
"vmnic2",
"vmnic3"
],
"mtu": 9000,
"networks":[
"VSAN"
],
"nsxtSwitchConfig": {
"transportZones": [ {
"name": "vcf-mgmt01-tz-overlay01",
"transportType": "OVERLAY"
},
{
"name": "vcf-mgmt01-tz-vlan02",
"transportType": "VLAN"
}
]
}
}
],
"clusterSpec":
{
"clusterName": "vcf-mgmt01-cluster01",
"clusterEvcMode": "intel-broadwell",
"clusterImageEnabled": true,
"vmFolders": {
"MANAGEMENT": "vcf-mgmt01-fd-mgmt",
"NETWORKING": "vcf-mgmt01-fd-nsx",
"EDGENODES": "vcf-mgmt01-fd-edge"
}
},
"pscSpecs": [
{
"adminUserSsoPassword": "Vcf5@password",
"pscSsoSpec": {
"ssoDomain": ""
}
}
],
"vcenterSpec": {
"vcenterIp": "192.168.32.65",
"vcenterHostname": "vcf-mgmt01-vcsa01",
"licenseFile": "22222-22222-22222-22222-22222",
"vmSize": "small",
"storageSize": "",
"rootVcenterPassword": "Vcf5@password"
},
"hostSpecs": [
{
"association": "vcf-mgmt01-datacenter01",
"ipAddressPrivate": {
"ipAddress": "192.168.32.61"
},
"hostname": "vcf-mgmt01-esxi01",
"credentials": {
"username": "root",
"password": "Vcf5@password"
},
"sshThumbprint": "SHA256:PYxgi8oEfK3j263pHx3InwL1xjIY1rAYN6pR607NWjc",
"sslThumbprint": "FF:A2:88:5B:C3:9A:A0:14:CE:ED:6D:F7:CE:5C:55:B6:2B:6D:35:E8:60:AE:79:79:FD:A3:A7:6C:D7:C1:5C:FA",
"vSwitch": "vSwitch0"
},
{
"association": "vcf-mgmt01-datacenter01",
"ipAddressPrivate": {
"ipAddress": "192.168.32.62"
},
"hostname": "vcf-mgmt01-esxi02",
"credentials": {
"username": "root",
"password": "Vcf5@password"
},
"sshThumbprint": "SHA256:h6HfTvQi/HJxFq48Q4SQH1TevWqNvgEQ1kWARQwpjKw",
"sslThumbprint": "70:1A:62:4F:B6:A9:A2:E2:AC:6E:4D:28:DE:E5:A8:FE:B1:F3:B0:A0:3F:26:93:86:F1:66:B3:A6:44:50:1F:AE",
"vSwitch": "vSwitch0"
},
{
"association": "vcf-mgmt01-datacenter01",
"ipAddressPrivate": {
"ipAddress": "192.168.32.63"
},
"hostname": "vcf-mgmt01-esxi03",
"credentials": {
"username": "root",
"password": "Vcf5@password"
},
"sshThumbprint": "SHA256:rniXpvC4JmiXVq7nd+FkjMrX+oTKCM+CgkvglKATgEE",
"sslThumbprint": "76:84:9E:03:BB:C5:10:FE:72:FC:D3:24:84:71:F5:85:7B:A7:0B:55:7C:7B:0F:BB:83:EA:D7:4F:66:3E:B1:8D",
"vSwitch": "vSwitch0"
},
{
"association": "vcf-mgmt01-datacenter01",
"ipAddressPrivate": {
"ipAddress": "192.168.32.64"
},
"hostname": "vcf-mgmt01-esxi04",
"credentials": {
"username": "root",
"password": "Vcf5@password"
},
"sshThumbprint": "SHA256:b5tRZdaKBbMUGmXPAph5s6XdMKQ5Mh0pjzgM0A16J/g",
"sslThumbprint": "97:83:39:DE:C0:D3:99:06:49:FF:1C:E8:BA:76:60:C6:C1:45:19:BD:C9:10:B0:C2:58:AC:71:12:C8:21:A9:BF",
"vSwitch": "vSwitch0"
}
]
}
V. Deploying the SDDC Management Domain
With all of the above preparations in place, we can now officially start the SDDC management domain deployment. Access Cloud Builder through the jump host and log in.
Select the VMware Cloud Foundation platform.
Make sure you accept and click NEXT.
The parameter sheet is ready; click NEXT.
Upload the JSON configuration file and click NEXT.
After the configuration file check completes, click NEXT.
Click OK to deploy the SDDC.
Start the SDDC Bringup build process.
You can go for a meal and a cup of coffee while the deployment finishes.
All the tasks of the deployment process (continued from the previous screenshot).
The deployment took about 2 hours; click DOWNLOAD to save the deployment report.
Click FINISH to access SDDC Manager.
Jump to vCenter Server and enter your password to log in.
View the VMware Cloud Foundation version.
VI. Information About the SDDC Management Domain
1) SDDC Manager
- SDDC Manager Dashboard
- All workload domains in the SDDC Manager inventory
- vcf-mgmt01 Management Workload Domain Summary
- Hosts in the vcf-mgmt01 Management Workload Domain
- Clusters in the vcf-mgmt01 Management Workload Domain
- Component Certificates in the vcf-mgmt01 Management Workload Domain
- All hosts in the SDDC Manager inventory
- Releases included in SDDC Manager
- Network Pools Created in SDDC Manager
- SDDC Manager Configuration Backup
- Component Password Management in SDDC Manager
2) NSX Manager
- NSX System Configuration Overview
- NSX Manager Appliances
- NSX Transport Nodes
- NSX Profiles
- NSX Transport Zones
- NSX Configuration Backup
3) vCenter Server
- Hosts and clusters in a VCF management domain
- VCF Management Domain vSAN ESA Storage Architecture
- Virtual Machines of the VCF Management Domain Components
- vSAN storage used by the VCF management domain
- Distributed Switch Configuration for VCF Management Domains
- Network Configuration for ESXi Hosts in the VCF Management Domain