
[VMware VCF] Use the VCF Import Tool to convert an existing vSphere environment into a management domain.


VMware Cloud Foundation 5.2 introduced a new capability: with the VCF Import Tool, an existing vSphere environment can be converted directly into a management workload domain or imported into a VI workload domain. This enables customers to quickly transform existing environments into software-defined data centers (SDDC) powered by VMware Cloud Foundation without new hardware purchases or complex deployment and migration efforts, and without disrupting existing operations: the entire conversion process completes without interrupting running workloads.

The VCF Import Tool provides two ways to transform an existing vSphere environment into VMware Cloud Foundation: Convert and Import. If a customer has never used VMware Cloud Foundation before, that is, they have a plain vSphere or vSphere+vSAN environment, they can use the Convert feature to turn the existing environment directly into a management workload domain. If the customer already uses VMware Cloud Foundation, that is, a management workload domain already exists, they can use the Import feature to bring the existing environment in directly as a VI workload domain. Overall, the VMware Cloud Foundation solution now provides three ways to build workload domains: Deploy, Convert, and Import, with the VMware Cloud Builder method being used for the initial deployment of the VCF management domain (Deploy).

Note that the workflow differs between the two methods. For Convert, the first step is to deploy SDDC Manager into the existing vSphere environment, then upload the VCF Import Tool and perform the Convert process; for Import, you upload the VCF Import Tool to the already-deployed SDDC Manager and perform the Import process there. For more information and details, see the product documentation "Converting or Importing Existing vSphere Environments into VMware Cloud Foundation".

 

I. Requirements and restrictions on use

If you use the VCF Import Tool to convert an existing vSphere environment into a management workload domain or import it as a VI workload domain, there are many requirements and restrictions, and some scenarios are not supported. Note that since the VCF Import Tool is still in its initial release, more and more scenarios will be supported as the tool matures. For example, the current version does not support environments with NSX; VMware Cloud Foundation 9, announced at VMware Explore 2024, is expected to support converting or importing environments with NSX in a future release.

1) Basic Requirements

Convert to a management domain:

  • Existing vSphere environments must be running vSphere 8 U3 and later (VCF 5.2 BOM), including vCenter and ESXi hosts.
  • The vCenter Server virtual machine in the existing vSphere environment must reside in the cluster being converted; that is, vCenter Server is co-located with (and self-manages) the cluster.

Import as a VI workload domain:

  • Existing vSphere environments must be running vSphere 7 U3 and later (VCF 4.5 BOM), including vCenter and ESXi hosts.
  • The vCenter Server virtual machines in an existing vSphere environment must belong to the same cluster or run in a management domain.

2) General Requirements

  • All hosts in an existing vSphere cluster must be homogeneous. That is, all hosts in the cluster must be identical in terms of capacity, storage type, and configuration (pNIC, VDS, etc.).
  • Existing vSphere clusters must have DRS configured in fully automated mode.
  • Existing vSphere clusters can use only one of three supported storage types: vSAN, NFS, or FC SAN (VMFS).
  • Existing vSphere clusters require at least four ESXi hosts when using vSAN storage; with other storage types, at least two ESXi hosts are required, or at least three if you are deploying NSX.
  • The vCenter Server of an existing vSphere cluster cannot be configured with Enhanced Linked Mode (ELM), and after conversion or import each environment can only belong to its own SSO domain.
  • All hosts in an existing vSphere cluster must have a dedicated vMotion network, with exactly one VMkernel NIC per host configured for that traffic, using a static fixed IP address (see the verification sketch after this list).
  • All clusters in the vCenter Server inventory of an existing vSphere environment must use one or more dedicated VDS distributed switches; a VDS cannot be shared by multiple clusters, and no VSS standard switches may exist within a cluster.
  • Existing vSphere environments cannot have standalone hosts in the vCenter Server inventory. Standalone hosts are hosts located under the datacenter or a host folder that are not part of any cluster; if any exist, they must be moved into a cluster.
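As a quick way to verify the VMkernel requirements above on each host, the ESXi shell can be used; a minimal verification sketch, assuming vmk1 is the vMotion interface in your environment:

esxcli network ip interface list                # expect exactly one VMkernel NIC per traffic type
esxcli network ip interface ipv4 get            # "Address Type" should be STATIC, not DHCP
esxcli network ip interface tag get -i vmk1     # the dedicated vMotion vmk should carry the VMotion tag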

3) Support limitations

  • vSphere environments with LACP link aggregation configured are not supported.
  • vSphere environments with VDS switch sharing configured are not supported.
  • vSphere environments with vSAN stretched clusters configured are not supported.
  • vSphere environments configured with the vSAN cluster "compression only" feature are not supported.
  • NSX-configured vSphere environments are not supported.
  • vSphere environments with AVI Load Balancer configured are not supported.
  • vSphere environments with IaaS Control Plane configured are not supported.
  • VxRail environments are not supported.

It is worth noting that using the VCF Import Tool to convert an existing vSphere environment into a workload domain bypasses Cloud Builder's requirement that the management domain use vSAN as principal storage. You can also find the requirements and limitations in the article "Introduction to the VMware Cloud Foundation (VCF) Import Tool".

 

II. Existing vSphere Environment

There are various limitations and considerations when using the VCF Import Tool. With these in mind, let's review the existing vSphere environment below to confirm it meets the conversion requirements. Since I built this environment with nested virtualization, it will differ from a production environment; if you would like to set up a similar test environment, you can follow the method in the article "To make it clear once and for all how vCenter Server is deployed with the CLI", deploying a single-node vSAN ESA cluster and then adding and configuring additional ESXi hosts.

First, the vCenter Server in the existing vSphere environment must match the version in the VCF 5.2 bill of materials (BOM); if that is not the case in your environment, upgrade vCenter Server to that version (or higher).

The hosts in the cluster being converted to a management domain must likewise match the version in the VCF 5.2 BOM; if not, upgrade the ESXi hosts to that version (or higher). Since this cluster uses vSAN storage, at least four hosts are required in the cluster. Standalone hosts in datacenters or host folders are not supported; if there are standalone hosts, they must be moved into a cluster.

Each VMkernel NIC on the ESXi hosts in the cluster may be enabled for only one traffic service type; that is, there can be only one VMkernel NIC per traffic type. For example, vmk0 cannot carry both management and vMotion traffic at the same time; they must be separated. The IP addresses assigned to these VMkernel NICs must be statically configured and fixed, not obtained via DHCP.

Each ESXi host in the cluster should have NTP time synchronization configured, and so should vCenter Server.
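On an ESXi 8 host this can be spot-checked from the ESXi shell; the vCenter-side command in the comment is the appliance shell equivalent, as I understand it:

esxcli system ntp get     # shows whether NTP is enabled and which servers are configured
# on the vCenter Server appliance shell, ntp.get reports the same for vCenter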

DRS on clusters in the existing vSphere environment must be configured in fully automated mode.
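If DRS is not yet fully automated, it can be changed in the vSphere Client, or scripted with the open-source govc CLI; a sketch in which the connection details and cluster path are hypothetical:

export GOVC_URL='vcf-mgmt01-vcsa01.example.com' GOVC_USERNAME='administrator@vsphere.local' GOVC_PASSWORD='***' GOVC_INSECURE=1
govc cluster.change -drs-enabled=true -drs-mode fullyAutomated '/Datacenter/host/Cluster'   # set DRS to fully automated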

Existing vSphere clusters using vSAN can only be standard/single-site vSAN HCI clusters; vSAN stretched clusters are not currently supported. If a vSAN cluster with the OSA architecture is used, the "compression only" mode is not supported; deduplication and compression must both be enabled.

Existing vSphere clusters use a vLCM-based lifecycle management approach.

The existing vSphere cluster currently contains the vCenter Server virtual machine and the vCLS cluster service virtual machines. When converting to a management domain, the vCenter Server virtual machine must reside in this cluster and cannot be located elsewhere; if there are multiple clusters and the cluster being converted does not contain the vCenter Server virtual machine, migrate it into the management domain cluster first.

The vSAN storage used by the existing vSphere cluster.

The VDS distributed switch used by the cluster being converted to a management domain cannot coexist with a VSS standard switch; if one exists, migrate its virtual machines to the distributed switch and remove the standard switch. The VDS cannot have LACP link aggregation enabled, each ESXi host in the cluster must have at least two (10 GbE) NICs connected to the VDS uplinks, and the VDS must be dedicated to this cluster; multiple clusters cannot share the same VDS.
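The per-host view of the VDS and its LACP state can be confirmed from the ESXi shell; a verification sketch:

esxcli network vswitch dvs vmware list             # the VDS, its uplinks and MTU as seen by this host
esxcli network vswitch dvs vmware lacp status get  # should report no active link aggregation groups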

The vCenter Server of the existing vSphere cluster cannot be configured with Enhanced Linked Mode (ELM), which is not supported by the current version of the VCF Import Tool.

The vCenter Server of the existing vSphere cluster cannot be registered to any NSX deployment; converting a vCenter Server registered with NSX is not currently supported.

The existing vSphere cluster must not have workload management (IaaS Control Plane, formerly vSphere with Tanzu) enabled. In addition, the existing environment must not be a VxRail environment and must not have other solutions such as AVI Load Balancer deployed.

 

III. Preparing the VCF Import Tool

To use the VCF Import Tool to convert or import an existing vSphere environment, you need to prepare the relevant software and tools in advance, as shown in the following figure. There are three files: the first is the VCF Import Tool, the command-line tool that performs the Convert or Import; the second is VCF-SDDC-Manager-Appliance, a standalone deployment appliance for the SDDC Manager component of VCF, which must be running as a virtual machine in order to perform the Convert or Import process; the third is the VMware Software Install Bundle - NSX_T_MANAGER 4.2.0.0, the installation bundle for NSX Manager, used to deploy NSX either during the Convert or Import process or as a Day 2 operation.

If you have an account, you can log in to the Broadcom Support Portal (BSP) and download them at the location shown in the figure above; alternatively, you can save the files from the Baidu Netdisk links below.

File name                                                MD5                                 Baidu Netdisk link
VCF-SDDC-Manager-Appliance-5.2.0.                        1944511a2aaff3598d644059fbfc2c19    /s/1lUbrN0zjLUUC1oB8L7ZRAg?pwd=lvx9
vcf-brownfield-import-5.2.0.                             22e66def7acdaa60fb2f057326fec1fd
VMware Software Install Bundle - NSX_T_MANAGER 4.2.0.0   dabf98d48d9b295bced0a5911ed7ff24

 

IV. Checking the vSphere Environment

Once you have prepared the software and tools, you can use the VCF Import Tool to validate the current vSphere environment and see whether any part of it fails the tool's requirements for conversion (Convert) or import (Import). Connect to vCenter Server via SSH as root and enter the shell, run the chsh command to change root's default login shell from the appliance shell to Bash, and create a temporary directory (vcfimport) into which the VCF Import Tool can be uploaded.

chsh -s /bin/bash      # change root's default login shell from the appliance shell to Bash
mkdir /tmp/vcfimport   # temporary directory for the VCF Import Tool
ls /tmp/vcfimport
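The upload itself can be done with scp from your workstation; a sketch in which the vCenter hostname is hypothetical and the glob stands in for the full archive name:

scp vcf-brownfield-import-5.2.0.*.tar.gz root@vcf-mgmt01-vcsa01.example.com:/tmp/vcfimport/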

Go to the directory where the tool was uploaded, use the tar command to extract the files, and then go to the vcf-brownfield-toolset directory.

cd /tmp/vcfimport/
tar -xf /tmp/vcfimport/vcf-brownfield-import-5.2.0.
cd vcf-brownfield-import-5.2.0.0-24108578/vcf-brownfield-toolset/

Use the toolkit's vcf_brownfield.py script to run a precheck against the existing vSphere environment, as shown in the following command.

python3 vcf_brownfield.py precheck --vcenter  --sso-user administrator@ --sso-password Vcf520@password
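To keep a record of the precheck results for later review, the output can be piped through tee; this is plain shell redirection, not a feature of the tool, and the angle-bracket values are placeholders:

python3 vcf_brownfield.py precheck --vcenter <vcenter-fqdn> --sso-user administrator@<sso-domain> --sso-password '<password>' | tee /tmp/vcf-precheck.log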

Once the check passes, remove the VCF Import Tool from vCenter Server.

cd ~
rm -rf /tmp/vcfimport/

 

V. Deploying SDDC Manager

SDDC Manager is the core component of the VCF solution. It is deployed automatically when the management domain is built with the Cloud Builder tool; when converting with the VCF Import Tool, you must manually deploy the SDDC Manager appliance into the existing vSphere environment.

Navigate to vCenter Server (vSphere Client) -> Datacenter -> Cluster, right-click, and select "Deploy OVF Template".

Select the SDDC Manager OVA file from local storage to upload and click Next.

Configure the name of the SDDC Manager virtual machine, select where it will be stored, and click Next.

Select the compute resources used by the SDDC Manager virtual machine and click Next.

Check the summary information for the SDDC Manager OVA and click Next.

Accept the SDDC Manager license agreement and click Next.

Select the storage used by the SDDC Manager virtual machine and click Next.

Select the network port group used by the SDDC Manager virtual machine and click Next.

Configure the passwords for the various SDDC Manager users and the appliance's network addresses, then click Next.

Check all configurations, then click Finish to start the deployment.
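If you prefer a repeatable, scripted deployment over the OVF wizard, the same OVA can be deployed with the open-source govc CLI; a sketch that assumes the same GOVC_* connection variables as earlier and uses hypothetical names:

govc import.spec VCF-SDDC-Manager-Appliance-5.2.0.*.ova > sddcm-options.json    # generate an editable options template
# edit sddcm-options.json (passwords, FQDN, port group), then deploy:
govc import.ova -name vcf-mgmt01-sddcm01 -options sddcm-options.json VCF-SDDC-Manager-Appliance-5.2.0.*.ova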

After successful deployment, you can create a snapshot of the SDDC Manager virtual machine.

Right-click the SDDC Manager virtual machine and click Power On.

At this point, if you access the SDDC Manager UI you will see that it is still initializing. Don't worry about that for now; you only need to be able to reach the SDDC Manager shell via SSH.
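Access is plain SSH with the vcf user whose password you set during the OVA deployment; the hostname is hypothetical:

ssh vcf@vcf-mgmt01-sddcm01.example.com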

 

VI. Performing the Pre-conversion Check

We now need to perform the conversion of the existing vSphere environment through SDDC Manager, but before formally converting, we run another pre-check from SDDC Manager to determine whether the current vSphere environment meets the requirements for conversion to a management domain. Connect to the SDDC Manager command line via SSH as the vcf user, create a new directory (vcfimport) with the following commands, upload the VCF Import Tool file to this directory on SDDC Manager, then extract the file and enter the vcf-brownfield-toolset directory.

mkdir /home/vcf/vcfimport
ls /home/vcf/vcfimport
cd /home/vcf/vcfimport
tar -xf vcf-brownfield-import-5.2.0.
cd vcf-brownfield-import-5.2.0.0-24108578/vcf-brownfield-toolset/

After entering the toolset directory, use the following command to run the environment check from SDDC Manager. In this environment there are 98 checks in total: 97 succeed and 1 fails.

python3 vcf_brownfield.py check --vcenter  --sso-user administrator@ --sso-password Vcf520@password

The output (a JSON file and CSV files) shows what specifically failed, and the "All guardrails" CSV file lists everything that was checked.

Based on the JSON output, the one failed check is caused by an item in the current environment's vLCM configuration that does not match the default vLCM configuration in SDDC Manager. We can consult "ESX Upgrade Policy Guardrail Failure" to see SDDC Manager's default vLCM configuration, compare it with the current environment's vLCM settings, and adjust them to match. Left alone, this failed check should not actually prevent converting the existing vSphere environment to a management domain. Based on the error message, change the current environment's vLCM configuration to SDDC Manager's defaults, as shown in the following figure.

After modifying the vLCM configuration, re-run the checks; all of them now succeed.

 

VII. Preparing NSX Manager

When converting an existing vSphere environment to a management domain, we can deploy NSX at the same time; since converting an environment that already has NSX is not currently supported, this step complements the solution. Note, however, that deploying NSX during the conversion only configures security-only NSX on the ESXi hosts; if you want the full NSX Overlay networking features, such as micro-segmentation and T0/T1 gateways, you will need to configure the TEP network and other settings separately after NSX has been deployed. This step is optional: you can deploy NSX during the conversion, or at any time after the conversion has been performed.

Performing a conversion of an existing vSphere environment with the VCF Import Tool while simultaneously deploying NSX requires a JSON configuration file. This file defines the NSX Manager deployment size, the NSX cluster VIP address, and the addresses of the three NSX Manager appliances. Be sure to configure forward and reverse DNS resolution for these addresses in advance; also note the path to the NSX install bundle, which can be left at the default.

{
  "license_key": "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE",
  "form_factor": "medium",
  "admin_password": "Vcf520@password",
  "install_bundle_path": "/nfs/vmware/vcf/nfs-mount/bundle/",
  "cluster_ip": "192.168.32.66",
  "cluster_fqdn": "",
  "manager_specs": [{
    "fqdn": "",
    "name": "vcf-mgmt01-nsx01a",
    "ip_address": "192.168.32.67",
    "gateway": "192.168.32.254",
    "subnet_mask": "255.255.255.0"
  },
  {
    "fqdn": "",
    "name": "vcf-mgmt01-nsx01b",
    "ip_address": "192.168.32.68",
    "gateway": "192.168.32.254",
    "subnet_mask": "255.255.255.0"
  },
  {
    "fqdn": "",
    "name": "vcf-mgmt01-nsx01c",
    "ip_address": "192.168.32.69",
    "gateway": "192.168.32.254",
    "subnet_mask": "255.255.255.0"
  }]
}
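Before uploading, it is worth validating the JSON syntax; Python's built-in json.tool is enough for that (the filename is hypothetical):

python3 -m json.tool nsx-deployment-spec.json    # prints the parsed JSON, or an error with the line number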

Upload the JSON configuration file for the NSX deployment and the NSX installation package to SDDC Manager. You need to remember the path where you uploaded this configuration file, as you will need it later.

ls /home/vcf/vcfimport/
ls /nfs/vmware/vcf/nfs-mount/bundle/

 

VIII. Performing the Conversion Process

Log in to the SDDC Manager command line as the vcf user, change to the vcf-brownfield-toolset directory, and use the following command to perform the vSphere environment conversion. After running the command, enter the SDDC Manager admin and backup user passwords and the vCenter Server root password for authentication.

cd /home/vcf/vcfimport/vcf-brownfield-import-5.2.0.0-24108578/vcf-brownfield-toolset/
python3 vcf_brownfield.py convert --vcenter  --sso-user administrator@ --sso-password Vcf520@password --nsx-deployment-spec-path /home/vcf/vcfimport/

At this point, you can log in to the SDDC Manager UI to check the status of task execution.

After running for some time, the conversion task actually fails!

However, when viewed through the SDDC Manager UI, the task has succeeded.

You can also see that the vSphere environment has been converted to a management domain.

Examining the output, the reason is that the Deploy NSX task failed: the uploaded NSX installation bundle was not a valid ZIP file.

Re-checking the uploaded NSX installation bundle, there was indeed a problem: the file was only a bit over 3 GB, which was an oversight on my part. After deleting the file, I re-uploaded the NSX bundle with an FTP utility; this time it was about 12 GB and there was no problem. Under normal circumstances, you should not encounter this issue.
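A checksum comparison after the upload would have caught this immediately; a sketch, where the bundle filename is a placeholder and the reference value is the MD5 from the download table earlier:

md5sum /nfs/vmware/vcf/nfs-mount/bundle/<nsx-install-bundle>    # compare against the published MD5 before deploying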

Since the vSphere environment has already been converted, we cannot run the Convert command again. Because NSX was not deployed during the conversion, this is now treated as a Day 2 operation, and we use a separate command to run the standalone NSX deployment workflow, as shown below.

python3 vcf_brownfield.py deploy-nsx --vcenter  --nsx-deployment-spec-path /home/vcf/vcfimport/

Now you can see that the NSX appliance tarball extracts normally; enter yes to start the deployment of NSX Manager.

NSX deployment was successful.

Checking the task status through the SDDC Manager UI also results in success.

After the deployment succeeds, switch to the root user, restart all SDDC Manager services, and wait for the UI to re-initialize.
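SDDC Manager ships with a helper script for restarting all of its services; the path below is the one commonly referenced for VCF 5.x, so verify it on your build:

su -    # the script is run as root
sh /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh    # restart all SDDC Manager services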

 

IX. Validating the Converted Management Domain

Navigate to SDDC Manager UI->Inventory->Workload Domains and you can see that the existing vSphere environment has been converted to a management domain with the name domain-vcf-mgmt01-vcsa01.

Click into the management domain to view the workload domain's summary information. It indicates that products in the current management domain lack licenses; click "Add License" to assign license keys to the products in the domain.

In the Hosts and Clusters tab, you can see configuration information for ESXi hosts and clusters that are part of a vSphere environment.

Navigate to SDDC Manager UI->Administration->Network Settings; you can see that network pools have been created from the static IP addresses used by the ESXi hosts' VMkernel NICs for the vMotion and vSAN services.

Navigate to SDDC Manager UI->Administration->Licensing and click "License Key" to add license keys for the products in the current management domain, since the products in the domain lack licenses.

You can perform a little pre-check on the management domain to check that the various components and configurations are working properly.

Looking at the results of the check, you can see that there are some errors and warnings, which can be ignored because the configuration really hasn't been done yet in the current environment.

Logging in to vCenter Server (vSphere Client), you can see that the three NSX Manager appliances have been deployed to the management domain cluster, and VM/host anti-affinity rules have been created so that the three virtual machines must run on different hosts. This is why, when using Convert with NFS or FC SAN storage instead of vSAN, you need at least three hosts if you are deploying NSX, and only two hosts if you are not.

Log in to the NSX Manager UI (VIP) to view an overview of the NSX system configuration.

The NSX cluster configuration: an NSX management cluster consisting of three NSX Manager nodes.

The management domain vCenter Server has been added to NSX as a Compute Manager.

Hosts in the management domain cluster have had NSX configured on their distributed virtual port groups (DVPGs) in security-only mode (NSX Security only), which means you can apply NSX security features to the management component VMs connected to those port groups on the management domain vCenter Server.

Note that before creating NSX security policies, make sure you whitelist or exclude critical management VMs to avoid locking yourself out; you can also add VMs that are not management components to a separate DFW exclusion list.