
[VMware VCF] VMware Cloud Foundation Part 03: Preparing the Excel Parameter Sheet.


VMware Cloud Foundation uses the VMware Cloud Builder tool to automate and standardize deployments. In addition to preparing the ESXi hosts that will make up the management domain and run the management components, you must also fill out a predefined Excel parameter sheet. This sheet specifies the relevant information about your deployment environment, such as the ESXi hosts, the networks, and the license keys, and it plays a key role in the Bring-up process. There are two versions of the parameter sheet, one for standard VCF deployments and one for Dell VxRail; it is a bit like the difference between DIY and OEM, and in a real deployment you will mainly use the first one. Both sheets are published on the Broadcom Support Portal alongside the VMware Cloud Builder OVA file; if you are not sure where to find them, see my other article (VMware Cloud Foundation Part 02: Deploying Cloud Builder) for the download.

The reason I start with the Excel parameter sheet rather than the ESXi hosts is that I think it is important to know what needs to be prepared before deploying VCF, and how VCF is designed around those requirements, especially for those unfamiliar with VCF as a solution. Once you know this, you can go back and prepare an environment that meets the VCF deployment requirements, which also avoids many problems during the actual deployment. In a real project, this is like first understanding the requirements and doing the planning and design, and then completing the build step by step. As an aside, this article is best read together with another article (VMware Cloud Foundation Part 04: Preparing an ESXi Host) for better results. Without further ado, here is a look at what is actually in this Excel parameter sheet.

I. Introduction

Open the Excel parameter sheet ( ); the first sheet is an introduction to the workbook. The current version of the parameter sheet is v5.1.0, and it is imported by the VMware Cloud Builder tool to deploy the VCF management domain. There are three parameter tables, Credentials, Hosts and Networks, and Deploy Parameters, along with explanations of their meanings and uses. Note: if a cell in a parameter table is yellow, it holds an example value that you need to change to your deployment environment's information; if a cell turns red, that field is required and its value is empty or invalid. Also, try to avoid copying and pasting between cells, as this may cause value validation to fail during deployment; if you really must paste, choose the unformatted paste option.

II. Credentials

The first real parameter table is Credentials. It defines the user credentials for the core components of the VCF management domain, including the ESXi hosts, vCenter Server, NSX Manager, SDDC Manager, and so on; you need to fill in the passwords for the default users of these components. The ESXi hosts are the ones we prepared in advance, so the password for the default superuser root is the one configured during installation; note that the password should be the same on all ESXi hosts used for the VCF management domain. The other components are created during the management domain deployment, so you predefine the passwords for their default users here, and these passwords will later be used for administration. Fill in the red cells with the passwords for these components, and note that each password needs to meet the stated complexity requirements.
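Before filling these in, it can help to sanity-check a candidate password. Below is a minimal shell sketch assuming a representative policy (at least 12 characters with upper case, lower case, a digit, and a special character); the actual rules are listed in the sheet itself, so adjust accordingly.

# Check a candidate password against a representative complexity policy.
# The policy here (length >= 12, upper, lower, digit, special) is an
# assumption; verify it against the requirements shown in the sheet.
pw='VMware1!VMware1!'
[[ ${#pw} -ge 12 && $pw =~ [A-Z] && $pw =~ [a-z] && $pw =~ [0-9] && $pw =~ [^a-zA-Z0-9] ]] \
  && echo "password looks OK" || echo "password fails the policy"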

III. Hosts and Networks

Next is the Hosts and Networks parameter table. This is the most important table when planning and designing the VCF management domain; once you understand the network design here, the subsequent VCF deployment should be much easier. The table is divided into three main sections: Management Domain Networks, Management Domain ESXi Hosts, and NSX Host Overlay Network - DHCP/STATIC.

Before getting into the Hosts and Networks parameter table, let's look at the VCF network design, as shown in the following figure. There are four ESXi hosts in the VCF management domain, the minimum number of compute nodes that can make up the management domain. The VCF management domain divides the network into four categories based on traffic type: the management network, the vMotion network, the vSAN network, and the NSX network, with the management network further divided into a host management network and a virtual machine (VM) management network.

  • Management (Host and VM) network

The host management network is the network used for ESXi host management. When installing an ESXi host, one of the host's NICs is used for the management address by default, which you will know well if you have installed ESXi before. After installation, you can configure the NIC, IP address, and DNS used for management through the host's DCUI, and then log in to the ESXi Host Client UI. In the network view you will see that a standard switch vSwitch0 is created by default; under this switch there is a Management Network port group, and under the port group a VMkernel NIC configured with the host management IP address. This network is the host management network.

The virtual machine (VM) management network is the management network used by the VCF management domain component virtual machines (e.g., vCenter Server, NSX Manager, SDDC Manager); it can be a separate management network, or it can share the host management network.
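If you want to see this default layout on a freshly installed host, a quick check from the ESXi Shell (assuming SSH is enabled) looks like this; the output should show vSwitch0 with the Management Network port group and vmk0 carrying the management IP.

# List the default standard switch and its port groups
esxcli network vswitch standard list
# Show the VMkernel interfaces and their IPv4 configuration (vmk0 = management)
esxcli network ip interface ipv4 get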

  • vMotion Network

The VCF management domain is typically used to house the management domain component VMs and should not run other workloads. The vMotion network here is therefore not used for migrating workload VMs, but for migrating these management component VMs.

  • vSAN Networking

The only primary storage supported for building a VCF management domain is vSAN, and vSAN distributed storage requires a separate network for storage traffic between hosts. This is typically an L2 network over standard Ethernet switches rather than FC (Fibre Channel).

  • NSX Networks

The NSX network falls into two categories. The first is the TEP network, the VLAN network used to set up the NSX host tunnel endpoints; in general, a host needs as many TEP IP addresses as it has NICs used for NSX networking. Earlier NSX versions supported the standard switch (VSS), but NSX now only supports the distributed switch (VDS). The second is the Overlay network, a dedicated network for carrying NSX Overlay Geneve traffic, somewhat similar in spirit to the dedicated vSAN storage network, except that it is a tunneling technology: each network is a segment, and the Overlay allows large Layer 2 networks to interoperate. If these networks and their purposes are not yet clear to you, it may help to first familiarize yourself with NSX networking concepts.

The networks above essentially provide east-west traffic interoperability within the virtualized data center, with NSX serving as the main artery of VMware's VCF solution at the network level, carrying the connectivity between the core components and the workloads. For a complete SDDC VCF solution you will also need NSX Edge nodes, as shown in the figure below, which connect the workloads on the NSX network to the physical environment for north-south traffic, allowing external endpoints to access services inside the NSX network. This component is not set up during Bring-up; it is configured through SDDC Manager after the management domain is built.

Having understood the network architecture of VCF, we now return to the Hosts and Networks parameter table.

1) Management Domain Networks

The first part is the configuration of the Management Domain Networks parameters. The following figure defines the three networks used by VCF: the management network, the vMotion network, and the vSAN network, where the management network consists of the host management network and the VM management network shown in the figure. You can separate these two management networks with VLANs, giving them different subnets/masks and gateways, putting them in different port groups, and setting the MTU size; or you can put them together, using the same VLAN segment and the same gateway. This management network will eventually be placed under the distributed switch (described below), including the host management network (ESXi creates a vSwitch0 by default, which is eventually migrated to the distributed switch defined below and then deleted). The vMotion and vSAN networks are configured similarly to the management network, with their own VLANs and subnets. Note that if the MTU size of a port group is set to 9000, every switch on the path from the ESXi host NICs up through the physical environment must also be set to that value, otherwise the setting is ineffective.
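One practical way to confirm that MTU 9000 really works end to end is a jumbo, don't-fragment ping between hosts from the ESXi Shell. A sketch, where the VMkernel interface name and the peer vSAN address are placeholders for your environment:

# 8972 bytes payload = 9000 MTU minus 28 bytes of IP/ICMP headers;
# -d sets don't-fragment, so the ping fails if any hop has a smaller MTU.
vmkping -I vmk2 -d -s 8972 172.16.13.102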

Below that are the three profiles that VCF defines for creating VDSes based on the types of traffic carried by the NICs: Profile-1, Profile-2, and Profile-3. You can select a profile by clicking the options in the cell after "vSphere Distributed Switch Profile"; the following sections describe how each profile type is set up.

The first is Profile-1. At the bottom of the sheet there is an explanation of this profile type: it creates a single distributed switch and is usually chosen when the physical ESXi host has only two ports for VCF. That switch carries all the traffic of the management, vMotion, vSAN, and NSX Overlay networks, separated by VLANs; refer to the diagram above. You then configure the name of the distributed switch and the physical NIC names (pNICs) the ESXi host assigns to it, filled in according to the actual hosts, followed by the MTU value (Size) of the distributed switch and the Transport Zone Type, which offers the choices n/a (none), VLAN, Overlay, and Overlay/VLAN. If you select Profile-1, meaning all traffic runs on the same distributed switch, you can only choose Overlay/VLAN, because NSX needs to run on it as well.

The second is Profile-2. In most real-world environments, ESXi hosts have multiple network ports, and Profile-2 is the configuration type for using four of them. When Profile-2 is selected, two distributed switches are created, each with two ports, and the red cells indicate that you need to set the name and transport zone type of the second distributed switch. With this profile, the two distributed switches are assigned different network transport types. For example, as illustrated at the bottom of the figure below, distributed switch vds01 carries the management, vMotion, and NSX Overlay networks, with Overlay/VLAN selected as its traffic type, while the second distributed switch (not yet named) is dedicated to the vSAN network. Note that only one of the two distributed switches can carry the NSX Overlay network: if Overlay/VLAN is selected for vds01 as above, the other distributed switch cannot be set to Overlay/VLAN or Overlay, only VLAN or n/a.

This really needs to be selected according to your actual situation. For example, in the figure below, vds01 uses the VLAN traffic transport type to run the management and vMotion networks, while vds02 uses the Overlay/VLAN traffic transport type to run the vSAN and NSX Overlay networks. Note that the combinations are not arbitrary; by clicking through the options in the cells you can see which pairings are valid.
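To make the Profile-2 idea concrete, here is a rough sketch of what it maps to in the JSON format shown at the end of this article: two entries in dvsSpecs, each with its own pNICs, networks, and transport zones. The switch names, vmnic assignments, and transport zone names are illustrative, not values generated by the tool.

"dvsSpecs": [
  {
    "dvsName": "sfo-m01-cl01-vds01",
    "vmnics": ["vmnic0", "vmnic1"],
    "mtu": 9000,
    "networks": ["MANAGEMENT", "VM_MANAGEMENT", "VMOTION"],
    "nsxtSwitchConfig": {
      "transportZones": [
        { "name": "sfo-m01-tz-vlan01", "transportType": "VLAN" }
      ]
    }
  },
  {
    "dvsName": "sfo-m01-cl01-vds02",
    "vmnics": ["vmnic2", "vmnic3"],
    "mtu": 9000,
    "networks": ["VSAN"],
    "nsxtSwitchConfig": {
      "transportZones": [
        { "name": "sfo-m01-tz-overlay01", "transportType": "OVERLAY" },
        { "name": "sfo-m01-tz-vlan01", "transportType": "VLAN" }
      ]
    }
  }
]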

The third is Profile-3. This profile type is also designed for four ports, but the difference is that the distributed switches carry different network types: vds02 can only be used for the NSX Overlay network, while vds01 carries the management, vMotion, and vSAN networks.

The above is somewhat verbose, but it is still worth correctly distinguishing the different network types and how they map to the different profile types, to minimize the chance of errors.

2) Management Domain ESXi Hosts

The following section contains the parameters for the management domain ESXi hosts and the three network types. The first line takes the host names of the management domain ESXi hosts, such as sfo01-m01-esx01, sfo01-m01-esx02, sfo01-m01-esx03, and sfo01-m01-esx04, and the second line takes the IP addresses of the hosts, which need forward and reverse DNS resolution configured, otherwise the deployment process will fail. The third and fourth lines define the VMkernel addresses for the vMotion and vSAN networks, which are assigned to the hosts during installation and deployment. You typically need as many addresses as there are hosts, so each start-to-end IP range must cover at least the number of hosts.
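Since missing DNS records are one of the most common Bring-up failures, it is worth verifying forward and reverse resolution for every host before starting. A quick check from any machine that uses the same DNS server, reusing the example names and addresses above:

# Forward lookup for each management-domain host (name -> IP)
for h in sfo01-m01-esx01 sfo01-m01-esx02 sfo01-m01-esx03 sfo01-m01-esx04; do
  nslookup "$h"
done
# Reverse lookup (IP -> name); repeat for each host address
nslookup 172.16.11.101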

The following figure is where you fill in the security thumbprints of the ESXi hosts (both SSH and SSL). They are required if Yes is selected and are not validated during deployment if No is selected; they serve a security purpose. Note that if you fill in this information, the ESXi host names must correspond to the actual host names.

To obtain this information, log in to VMware Cloud Builder as admin, switch to the root user, and run the following commands, replacing the host address in each command with your own.

  • Get the ESXi host SSH RSA key fingerprint (SHA256); <esxi-host> is a placeholder for your host address
ssh-keygen -lf <(ssh-keyscan <esxi-host> 2>/dev/null)
  • Get the SSL thumbprint (SHA256)
openssl s_client -connect <esxi-host>:443 < /dev/null 2> /dev/null | openssl x509 -sha256 -fingerprint -noout -in /dev/stdin
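To avoid running the two commands once per host, you can wrap them in a small loop; this reuses the exact commands above, with this article's example host addresses substituted in.

# Collect the SSH and SSL fingerprints of all four management hosts in one pass
for host in 172.16.11.101 172.16.11.102 172.16.11.103 172.16.11.104; do
  echo "== $host =="
  ssh-keygen -lf <(ssh-keyscan "$host" 2>/dev/null)
  openssl s_client -connect "$host":443 < /dev/null 2> /dev/null \
    | openssl x509 -sha256 -fingerprint -noout -in /dev/stdin
done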

3) NSX Host Overlay Network - DHCP/STATIC

Below that are the NSX Host Overlay Network parameters, a separate section for the NSX Overlay network configuration. At the top you configure the VLAN for the NSX Overlay, just like the other network types. If you select No for "Configure NSX Host Overlay Using a Static IP Pool", a DHCP server in your environment is used to assign the TEP addresses; if you select Yes, addresses are assigned from the TEP address pool configured in the following figure. Note that every ESXi port participating in the TEP network needs an address. For example, if there are four hosts and each host uses two NICs for the NSX Overlay network (i.e., Profile-2 or Profile-3), then a total of eight ports need TEP addresses, so the NSX Overlay start-to-end IP range must cover at least eight addresses.
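The sizing rule is simple enough to state as a one-liner: the static pool must hold at least hosts × overlay pNICs addresses.

# Minimum static TEP pool size = hosts x pNICs used for the NSX Overlay network
hosts=4; overlay_pnics=2
echo "Need at least $((hosts * overlay_pnics)) IPs in the NSX Host Overlay pool"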

IV. Deploy Parameters

Finally, there is the Deploy Parameters table, which defines the configuration information for the VCF management domain components, as well as the DNS and NTP servers that provide external services to these components. There are five categories, Existing Infrastructure Details, License Keys, vSphere Infrastructure, NSX, and SDDC Manager, described in more detail below.

1) Existing Infrastructure Details

Configure the DNS and NTP servers used by the VCF management components, filling in the IP addresses from your actual environment. Be sure that all the VCF management components have forward and reverse name resolution configured on the DNS servers, fill in the DNS domain name so that all VCF management components belong to the same DNS domain, and enable or disable CEIP (Customer Experience Improvement Program) and FIPS (Federal Information Processing Standards) security mode as needed.

2) License Keys

Define the license keys for the VCF management components: select No and enter the licenses supported by the components in the VCF BOM. Note that if you have serial numbers, be sure to fill in all the product licenses here in advance; otherwise VCF will be deployed in evaluation mode, and you will not be able to add hosts in SDDC Manager or create VI workload domains later.

3) vSphere Infrastructure

Define the configuration information for the VCF management components vSphere and vSAN and for the VCF environment. During the automated deployment of the VCF management domain, a vCenter Server is created: you need to define its host name (FQDN) and IP address, which correspond to the VM management network described earlier, and set the vCenter Server deployment size, which defaults to Small; the default storage disk size is fine. The vCenter Server creates a datacenter, and under it a cluster that holds all the ESXi hosts of the VCF management domain. For vLCM image-based lifecycle management, you must select Yes if the cluster will later be configured with vSAN based on the ESA architecture. The cluster's EVC mode can be selected based on the baselines supported by the ESXi host CPUs. The VCF management domain only supports vSAN as primary storage, and the name of the vSAN datastore is defined under vSphere Datastore. Whether to turn on deduplication and compression applies only to the vSAN OSA architecture; for the vSAN ESA architecture this is specified in the vSAN VM storage policy.

Whether to enable the vSAN ESA deployment architecture is more important: only from this version (5.1.0) onward is vSAN ESA supported as the datastore for the VCF management domain. To deploy the vSAN ESA architecture, the ESXi host disks must be NVMe SSD drives, and the drives must meet the vSAN hardware compatibility requirements. If your VMware Cloud Builder has no internet access, you need to manually fill in the path of the vSAN HCL JSON file after "Path to HCL JSON File", such as /opt/vmware/bringup/tmp/; you must manually download the vSAN HCL JSON file (from the /service/vsan/ address) and upload it to the VMware Cloud Builder VM, which then verifies the compatibility of the ESXi hosts before deployment. If the hosts do not meet the compatibility requirements, you cannot proceed to the next step. I am going to use nested ESXi in this deployment to deploy VCF with the vSAN ESA architecture, and if you simply follow the steps above there will be problems; there are other ways to work around this, so stay tuned for a later article. Of course, if VMware Cloud Builder has no internet access, you can also configure a proxy server so that it can download the vSAN HCL JSON file online for verification. Finally, configure the VCF deployment architecture type, which supports standard and consolidated deployments. Standard is selected by default; if you choose consolidated, the resource pools configured below are created automatically to isolate the VCF management components.
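For the offline case, the workflow is roughly: download the HCL database on an internet-connected machine, then copy it to Cloud Builder. The sketch below assumes the classic vSAN HCL database URL and the file name all.json, which the /service/vsan/ fragment in the sheet's note suggests; verify both against current VMware/Broadcom documentation before relying on them.

# On an internet-connected machine: fetch the vSAN HCL database
# (URL and file name are assumptions; confirm them first)
curl -o all.json "https://partnerweb.vmware.com/service/vsan/all.json"
# Copy it to the Cloud Builder VM, into the directory referenced by
# "Path to HCL JSON File" (<cloud-builder-ip> is a placeholder)
scp all.json admin@<cloud-builder-ip>:/opt/vmware/bringup/tmp/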

4) NSX

Define the configuration information of the NSX components. By default, an NSX management cluster consisting of three NSX Manager nodes is created during the automated deployment of the VCF management domain, so you need to define the VIP of this NSX cluster and its corresponding host name (FQDN), the IP addresses and host names (FQDN) of the three nodes, and the deployment size of the NSX Manager virtual machines. The default is Medium; choose a deployment size that suits your environment. If your environment has limited resources and cannot host a full three-node cluster, there are some tricks for deploying only one NSX Manager node; stay tuned to a later article for details.

5) SDDC Manager

Define the configuration information for the SDDC Manager component. An SDDC Manager virtual machine is created during the automatic deployment of the VCF management domain, so you need to define the IP address and host name (FQDN) of this virtual machine, configure the name of the management domain network pool, and the name of the management domain.

V. JSON Files

In addition to the Excel parameter sheet described above, deploying a VMware Cloud Foundation management domain with VMware Cloud Builder also supports a parameter file in JSON format, which may be easier to read and modify. We can use the SOS tool to convert the Excel parameter sheet into a JSON file supported by the VCF deployment.

1) Log in to VMware Cloud Builder over SSH as the admin user and switch to the root user.

su -

2) Upload the configured Excel parameter sheet to Cloud Builder via SFTP.

cp /home/admin/ /tmp

3) Run the following command to convert the Excel parameter table into a JSON format parameter file.

/opt/vmware/sddc-support/sos --jsongenerator --jsongenerator-input /tmp/ --jsongenerator-design vcf-ems-deployment-parameter

4) The generated JSON file will be placed in the following directory by default.

cd /opt/vmware/sddc-support/cloud_admin_tools/Resources/vcf-ems-deployment-parameter/

5) You can use SFTP to download the JSON locally for backup; the final generated VCF JSON file looks like this:

{
    "subscriptionLicensing": true,
  "skipEsxThumbprintValidation": false,
  "managementPoolName": "sfo-m01-np01",
  "sddcManagerSpec": {
    "secondUserCredentials": {
      "username": "vcf",
      "password": ""
    },
    "ipAddress": "172.16.11.59",
    "hostname": "sfo-vcf01",
    "rootUserCredentials": {
      "username": "root",
      "password": ""
    },
    "localUserPassword": ""
  },
  "sddcId": "sfo-m01",
  "esxLicense": "",
  "taskName": "workflowconfig/",
  "ceipEnabled": false,
  "fipsEnabled": false,
  "ntpServers": ["172.16.11.253"],
  "dnsSpec": {
    "subdomain": "",
    "domain": "",
    "nameserver": "172.16.11.4"
  },
  "networkSpecs": [
    {
      "networkType": "MANAGEMENT",
      "subnet": "172.16.11.0/24",
      "gateway": "172.16.11.253",
      "vlanId": "1611",
      "mtu": "1500",
      "portGroupKey": "sfo01-m01-cl01-vds01-pg-mgmt",
      "standbyUplinks":[],
      "activeUplinks":[
        "uplink1",
        "uplink2"
      ]
    },
    {
      "networkType": "VMOTION",
      "subnet": "172.16.12.0/24",
      "gateway": "172.16.12.253",
      "vlanId": "1612",
      "mtu": "9000",
      "portGroupKey": "sfo01-m01-cl01-vds01-pg-vmotion",
      "includeIpAddressRanges": [{"endIpAddress": "172.16.12.104", "startIpAddress": "172.16.12.101"}],
      "standbyUplinks":[],
      "activeUplinks":[
        "uplink1",
        "uplink2"
      ]
    },
    {
      "networkType": "VSAN",
      "subnet": "172.16.13.0/24",
      "gateway": "172.16.13.253",
      "vlanId": "1613",
      "mtu": "9000",
      "portGroupKey": "sfo01-m01-cl01-vds01-pg-vsan",
      "includeIpAddressRanges": [{"endIpAddress": "172.16.13.104", "startIpAddress": "172.16.13.101"}],
      "standbyUplinks":[],
      "activeUplinks":[
        "uplink1",
        "uplink2"
      ]
    },
    {
      "networkType": "VM_MANAGEMENT",
      "subnet": "172.16.15.0/24",
      "gateway": "172.16.15.253",
      "vlanId": "1610",
      "mtu": "9000",
      "portGroupKey": "sfo01-m01-cl01-vds01-pg-vm-mgmt",
      "standbyUplinks":[],
      "activeUplinks":[
        "uplink1",
        "uplink2"
      ]
    }
  ],
  "nsxtSpec":
  {
    "nsxtManagerSize": "medium",
    "nsxtManagers": [
      {
          "hostname": "sfo-m01-nsx01a",
          "ip": "172.16.11.66"
      },
      {
          "hostname": "sfo-m01-nsx01b",
          "ip": "172.16.11.67"
      },
      {
          "hostname": "sfo-m01-nsx01c",
          "ip": "172.16.11.68"
      }
    ],
    "rootNsxtManagerPassword": "",
    "nsxtAdminPassword": "",
    "nsxtAuditPassword": "",
    "vip": "172.16.11.65",
    "vipFqdn": "sfo-m01-nsx01",
    "nsxtLicense": "",
    "transportVlanId": 1614
  },
  "vsanSpec": {
      "vsanDedup": "false",
      "esaConfig": {
        "enabled": false
      },
      "datastoreName": "sfo-m01-cl01-ds-vsan01"
  },
  "dvsSpecs": [
    {
      "dvsName": "sfo-m01-cl01-vds01",
      "vmnics": [
        "vmnic0",
        "vmnic1"
      ],
      "mtu": 9000,
      "networks":[
        "MANAGEMENT",
        "VMOTION",
        "VSAN",
        "VM_MANAGEMENT"
      ],
      "niocSpecs":[
        {
          "trafficType":"VSAN",
          "value":"HIGH"
        },
        {
          "trafficType":"VMOTION",
          "value":"LOW"
        },
        {
          "trafficType":"VDP",
          "value":"LOW"
        },
        {
          "trafficType":"VIRTUALMACHINE",
          "value":"HIGH"
        },
        {
          "trafficType":"MANAGEMENT",
          "value":"NORMAL"
        },
        {
          "trafficType":"NFS",
          "value":"LOW"
        },
        {
          "trafficType":"HBR",
          "value":"LOW"
        },
        {
          "trafficType":"FAULTTOLERANCE",
          "value":"LOW"
        },
        {
          "trafficType":"ISCSI",
          "value":"LOW"
        }
      ],
      "nsxtSwitchConfig": {
        "transportZones": [ {
          "name": "sfo-m01-tz-overlay01",
          "transportType": "OVERLAY"
        },
        {
          "name": "sfo-m01-tz-vlan01",
          "transportType": "VLAN"
        }
        ]
      }
    }
  ],
  "clusterSpec":
  {
    "clusterName": "sfo-m01-cl01",
    "clusterEvcMode": "",
    "clusterImageEnabled": true,
    "vmFolders": {
      "MANAGEMENT": "sfo-m01-fd-mgmt",
      "NETWORKING": "sfo-m01-fd-nsx",
      "EDGENODES": "sfo-m01-fd-edge"
    }
  },
  "pscSpecs": [
    {
      "adminUserSsoPassword": "",
      "pscSsoSpec": {
        "ssoDomain": ""
      }
    }
  ],
  "vcenterSpec": {
      "vcenterIp": "172.16.11.62",
      "vcenterHostname": "sfo-m01-vc01",
      "vmSize": "small",
      "storageSize": "",
      "rootVcenterPassword": ""
  },
  "hostSpecs": [
    {
      "association": "sfo-m01-dc01",
      "ipAddressPrivate": {
        "ipAddress": "172.16.11.101"
      },
      "hostname": "sfo01-m01-esx01",
      "credentials": {
        "username": "root",
        "password": ""
      },
      "sshThumbprint": "",
      "sslThumbprint": "",
      "vSwitch": "vSwitch0"
    },
    {
      "association": "sfo-m01-dc01",
      "ipAddressPrivate": {
        "ipAddress": "172.16.11.102"
      },
      "hostname": "sfo01-m01-esx02",
      "credentials": {
        "username": "root",
        "password": ""
      },
      "sshThumbprint": "",
      "sslThumbprint": "",
      "vSwitch": "vSwitch0"
    },
    {
      "association": "sfo-m01-dc01",
      "ipAddressPrivate": {
        "ipAddress": "172.16.11.103"
      },
      "hostname": "sfo01-m01-esx03",
      "credentials": {
        "username": "root",
        "password": ""
      },
      "sshThumbprint": "",
      "sslThumbprint": "",
      "vSwitch": "vSwitch0"
    },
    {
      "association": "sfo-m01-dc01",
      "ipAddressPrivate": {
        "ipAddress": "172.16.11.104"
      },
      "hostname": "sfo01-m01-esx04",
      "credentials": {
        "username": "root",
        "password": ""
      },
      "sshThumbprint": "",
      "sslThumbprint": "",
      "vSwitch": "vSwitch0"
    }
  ]
}
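As a closing step, it is a good habit to pull the generated file off the appliance and confirm it is well-formed before handing it to Cloud Builder. A sketch, where <cloud-builder-ip> and the generated file name are placeholders, and python3 on the local machine is assumed:

# Download the generated parameter file over SFTP for backup
sftp admin@<cloud-builder-ip>
# sftp> get /opt/vmware/sddc-support/cloud_admin_tools/Resources/vcf-ems-deployment-parameter/<generated-file>.json
# Then verify the file parses as valid JSON
python3 -m json.tool <generated-file>.json > /dev/null && echo "valid JSON"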

In fact, you can also use the online VCF JSON file generator page created by VMware engineer Martin (/vcf-ui-json/) to produce a JSON file for use with VCF, as shown in the following figure.