
[VMware VCF] VCF 5.2: Mounting a remote vSAN datastore.


In the VMware vSAN solution, to make full use of the storage resources inside a vSAN HCI cluster, vSAN clusters can share storage resources with one another. This capability was formerly called vSAN HCI Mesh and is now known as vSAN HCI with datastore sharing. VMware vSAN clusters are divided into the Original Storage Architecture (OSA) and the Express Storage Architecture (ESA) based on how the host disks are organized, and, depending on the deployment architecture, into vSAN compute clusters (compute only), vSAN HCI clusters (compute and storage), and vSAN Max clusters (storage only).

The VMware Cloud Foundation solution lets you add hosts of different vSAN types, create vSAN clusters for different purposes, and manage and mount remote vSAN datastores directly in SDDC Manager. VMware Cloud Foundation 5.2 adds support for the vSAN Max disaggregated storage type, which allows vSAN Max cluster storage to be used as principal storage when adding VI workload domains or creating new clusters; however, VCF does not currently support the vSAN Max stretched cluster architecture. For more details on vSAN Max disaggregated storage, see the article "Create a vSAN Max cluster and configure to mount a remote datastore".

There are a number of requirements and considerations for using the remote vSAN datastore mounting feature. The cluster doing the mounting is called the client cluster, and the cluster being mounted is called the server cluster. There are limits on how many client clusters a server cluster can serve and how many server clusters a client cluster can mount, as well as limits on how much storage can be shared between them. There are also restrictions based on the deployment architecture: vSAN compute clusters can mount vSAN HCI OSA/ESA clusters and vSAN Max clusters, vSAN HCI OSA clusters can only mount vSAN HCI OSA clusters, vSAN HCI ESA clusters can only mount vSAN HCI ESA clusters and vSAN Max clusters, and so on. The exact configuration process is described below.
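To make these compatibility rules easier to reason about, here is a minimal Python sketch (not an official VMware tool; the cluster type labels are purely illustrative strings) that encodes the client/server mount matrix described above:

# A minimal sketch encoding the client/server compatibility rules above.
# The cluster type labels are illustrative, not VMware identifiers.
ALLOWED_SERVERS = {
    "vsan-compute": {"vsan-hci-osa", "vsan-hci-esa", "vsan-max"},
    "vsan-hci-osa": {"vsan-hci-osa"},
    "vsan-hci-esa": {"vsan-hci-esa", "vsan-max"},
    "vsan-max": set(),  # vSAN Max only acts as a server, never as a client
}

def can_mount(client: str, server: str) -> bool:
    """Return True if a cluster of type `client` may mount a datastore served by `server`."""
    return server in ALLOWED_SERVERS.get(client, set())

print(can_mount("vsan-compute", "vsan-max"))  # True
print(can_mount("vsan-hci-osa", "vsan-max"))  # False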

 

I. vSAN Compute Cluster

A vSAN compute cluster is essentially a standard vSphere cluster with no local or shared storage of its own, only compute resources. When it is configured as a vSAN compute cluster, a thin vSAN layer is installed on the cluster's hosts so that it can act as a client cluster; the storage resources of remote server-side clusters (vSAN HCI clusters or vSAN Max clusters) are then mounted and made available to the workloads in the cluster.

Navigate to SDDC Manager->Inventory->Hosts and click "Commission Hosts" to add ESXi hosts for the vSAN compute cluster. Select the "vSAN" storage type, choose "vSAN Compute Cluster" as the vSAN type, adjust the other options as appropriate, and then complete the host commissioning workflow.

Since this is a vSAN cluster, you still need at least three hosts for the vSAN compute cluster; add the other hosts in the same way as described above. After the hosts are commissioned, they appear in the unassigned hosts list, as shown in the following figure.
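The same inventory check can be done against the SDDC Manager REST API instead of the UI. The sketch below is only a hedged illustration: the /v1/tokens and /v1/hosts endpoints come from the VCF public API, but the payload fields, the UNASSIGNED_USEABLE status value, and of course the hostname and credentials are assumptions to verify against your API version.

# A hedged sketch: list commissioned hosts that are not yet assigned to a cluster.
# Endpoint paths, field names, and the status value are assumptions to check
# against the VCF API reference; the FQDN and credentials are placeholders.
import requests

SDDC_MANAGER = "https://vcf-mgmt01-sddc.lab.local"      # hypothetical SDDC Manager FQDN
USERNAME = "administrator@vsphere.local"
PASSWORD = "******"

# Obtain an API access token (assumed endpoint: POST /v1/tokens).
token = requests.post(f"{SDDC_MANAGER}/v1/tokens",
                      json={"username": USERNAME, "password": PASSWORD},
                      verify=False)
token.raise_for_status()
access_token = token.json()["accessToken"]

# List unassigned, usable hosts (assumed status filter: UNASSIGNED_USEABLE).
hosts = requests.get(f"{SDDC_MANAGER}/v1/hosts",
                     params={"status": "UNASSIGNED_USEABLE"},
                     headers={"Authorization": f"Bearer {access_token}"},
                     verify=False)
hosts.raise_for_status()
for host in hosts.json().get("elements", []):
    print(host.get("fqdn"))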

Navigate to SDDC Manager->Inventory->Workload Domains, click the three dots to the left of the management domain, and select "Add Cluster".

Select "vSAN" for the storage type and click Start.

Set the name of the vSAN compute cluster (vcf-mgmt01-cluster02) and click Next.

Select the vSAN cluster type as "vSAN Compute Cluster" and click Next.

Select the remote vSAN datastores that are currently available for mounting. The default cluster of the management domain is a vSAN HCI cluster, so it is available to be mounted by the vSAN compute cluster. Click Next.

Select the ESXi hosts for the vSAN compute cluster and click Next.

Configure the virtual switches for the vSAN compute cluster hosts and select "Create Custom Switch Configuration".

The detailed configuration steps are omitted here. Note that the vSAN VMkernel NIC assigned to the switch used for vSAN network traffic on these hosts must be able to communicate with the vSAN VMkernel NICs of the remote vSAN cluster being mounted. The remaining steps are the same as before.
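To verify this from the vCenter side, the following pyVmomi sketch (the vCenter FQDN and credentials are placeholders) lists the VMkernel NICs tagged for vSAN traffic on every clustered host; these are the interfaces that must be reachable between the client and server clusters.

# A minimal pyVmomi sketch: list the VMkernel NICs tagged for vSAN traffic
# on each host of each cluster. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcf-mgmt01-vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="******", sslContext=ctx)   # hypothetical vCenter and credentials
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
for cluster in view.view:
    print(f"Cluster: {cluster.name}")
    for host in cluster.host:
        net_cfg = host.configManager.virtualNicManager.QueryNetConfig("vsan")
        selected = set(net_cfg.selectedVnic or [])
        for vnic in net_cfg.candidateVnic or []:
            if vnic.key in selected:
                print(f"  {host.name}: {vnic.device} -> {vnic.spec.ip.ipAddress} (vSAN traffic)")
view.Destroy()
Disconnect(si)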

Configure the license to be used for the vSAN compute cluster. A VMware vSAN license has to be selected here even though a vSAN compute cluster does not actually require one. Click Next.

Check the configuration and click Finish.

As you can see below, only VMware vSphere licenses are used.

If all goes well, navigate to SDDC Manager->Inventory->Workload Domains->Clusters and you can see that the vSAN compute cluster has been created successfully; the status of the cluster configured to mount the remote vSAN datastore is shown under vSAN Configuration.

Click into this cluster to see the topology of the vSAN compute cluster mounting the vSAN HCI cluster storage.

The mounted client cluster topology is also visible in the server-side cluster.

View the status of a vSAN compute cluster in vCenter Server (vSphere Client).

View remote vSAN datastores mounted by a vSAN compute cluster.
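These checks can also be scripted. Below is a minimal pyVmomi sketch (same placeholder connection details as above) that lists each cluster's visible vSAN datastores and their capacity, confirming that the vSAN compute cluster sees the remote datastore it has mounted.

# A minimal pyVmomi sketch: list the vSAN datastores visible to each cluster.
# The vCenter FQDN and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcf-mgmt01-vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="******", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
for cluster in view.view:
    print(f"Cluster: {cluster.name}")
    for ds in cluster.datastore:
        if ds.summary.type == "vsan":
            capacity_tb = ds.summary.capacity / 1024 ** 4
            print(f"  vSAN datastore: {ds.summary.name} ({capacity_tb:.2f} TB)")
view.Destroy()
Disconnect(si)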

 

II. vSAN HCI Cluster

A vSAN HCI cluster is a standard vSAN hyperconverged cluster that aggregates the local disks of its hosts into a unified storage pool for workloads. The VCF management domain cluster is a standard vSAN HCI cluster. The details of creating a vSAN HCI cluster are not repeated here; see the earlier article "VMware Cloud Foundation Part 07: Managing Hosts and Clusters in a Workload Domain". The following only covers what you need to be aware of when creating a vSAN HCI cluster and the process of mounting remote datastores between vSAN HCI clusters.

When commissioning hosts for a vSAN HCI cluster, select "vSAN HCI" as the vSAN type of the hosts and choose the vSAN OSA or ESA architecture based on the remote datastore mounting requirements.

Since the default cluster of the VCF management domain uses the vSAN ESA architecture, three ESXi hosts are added here for a vSAN HCI ESA cluster.

When adding the vSAN cluster, select the vSAN ESA architecture and choose "vSAN HCI" as the vSAN cluster type.

Finally, the vSAN HCI cluster (vcf-mgmt01-cluster03) was created successfully, as shown in the following figure.

Click the three dots to the left of the cluster and select "Mount Remote Datastore".

An available remote vSAN datastore can be selected; check it and click Next.

The newly created vSAN HCI cluster currently has no remote vSAN capacity; if this remote datastore mount is performed, the new remote vSAN capacity will be as shown in the following figure.

Remote vSAN datastore mounting in progress...

The remote vSAN datastore has been successfully mounted.

Click on the cluster to view the mounts.

Viewing the mounts from the server-side cluster, there are now two client clusters mounting it.

In fact, vSAN HCI clusters support mounting remote vSAN datastores from each other. Click "Mount Remote Datastores" on the VCF management domain cluster.

Select the available remote vSAN datastore.

The remote vSAN capacity that will be added if this remote vSAN datastore is mounted is shown below.

After a successful mount, view the vSAN configuration.

Click into the cluster to view the mounts.

View the status of the vSAN HCI cluster in vCenter Server (vSphere Client).

View remote vSAN datastores mounted by a vSAN HCI cluster.
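The server-side view can be reproduced programmatically as well. The pyVmomi sketch below (placeholder connection details again) maps each vSAN datastore to the clusters whose hosts have it mounted, which mirrors the client-cluster list that SDDC Manager shows.

# A minimal pyVmomi sketch: for every vSAN datastore, list the clusters whose
# hosts have it mounted (the server cluster plus any client clusters).
# The vCenter FQDN and credentials below are placeholders.
import ssl
from collections import defaultdict
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcf-mgmt01-vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="******", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.Datastore], True)
mounts = defaultdict(set)
for ds in view.view:
    if ds.summary.type != "vsan":
        continue
    for mount in ds.host:              # DatastoreHostMount entries
        cluster = mount.key.parent     # the compute resource (cluster) of the host
        mounts[ds.summary.name].add(cluster.name)
for ds_name, cluster_names in mounts.items():
    print(f"{ds_name}: mounted by {sorted(cluster_names)}")
view.Destroy()
Disconnect(si)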

 

III. vSAN Max Cluster

vSAN Max is a new disaggregated storage architecture based on vSAN ESA. It leverages the advantages of the vSAN ESA architecture and is optimized specifically for serving data, providing remote datastores to vSAN compute clusters and vSAN HCI clusters. For more details, see the article "Create a vSAN Max cluster and configure to mount a remote datastore".

Navigate to SDDC Manager->Inventory->Hosts and click "Commission Hosts" to add ESXi hosts for the vSAN Max cluster. Select the "vSAN" storage type, choose "vSAN Max" as the vSAN type, adjust the other options as appropriate, and then complete the host commissioning workflow.

Add three hosts for the vSAN Max cluster in the same way as described above. Once commissioned, the hosts appear in the unassigned hosts list, as shown in the following figure.

Navigate to SDDC Manager->Inventory->Workload Domains, click the three dots to the left of the management domain, and select "Add Cluster".

Select "vSAN" for Storage Type and check "Enable vSAN ESA", click Start.

Set the name of the vSAN Max cluster (vcf-mgmt01-cluster04) and click Next.

Select the vSAN cluster type as "vSAN Max" and click Next.

Select the ESXi hosts to be used for the vSAN Max cluster and click Next.

Configure the virtual switches for the vSAN Max cluster hosts and select "Create Custom Switch Configuration"; the detailed steps are omitted here as before. Click Next.

Configure the license to be used for the vSAN Max cluster and click Next.

Check the configuration and click Finish.

If all went well, the vSAN Max cluster was created successfully.

First mount the datastore of the remote vSAN Max cluster on the vSAN compute cluster.

Select the datastore of the vSAN Max cluster. The datastore of the vSAN HCI cluster below is already mounted, so it is grayed out. Click Next.

Since the vSAN HCI datastore is already mounted, the current remote vSAN capacity is 3 TB; if this mount task is performed, the new remote vSAN capacity will be up to 4 TB, as shown in the following figure. Click Finish.

After a successful mount, the vSAN configuration state of the vSAN Max cluster has changed to server-side.

Click vSAN Compute Cluster to view the mounts.

Click the vSAN Max cluster to view the mounts.

Similarly, vSAN Max clusters can be mounted for use by vSAN HCI clusters.

Click the vSAN HCI cluster to view the mount.

Note that a vSAN Max cluster can only act as a server, not as a client: it can only be mounted by other types of clusters and cannot mount other remote vSAN datastores itself. Also, a cluster that has a remote vSAN datastore mounted cannot be deleted; it can only be deleted after the remote datastore has been unmounted.
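As a purely illustrative sketch (not an SDDC Manager API) of the deletion rule just described, the helper below refuses to delete a cluster while any remote vSAN datastore remains mounted; the cluster and datastore names are hypothetical.

# Illustrative only: a cluster may be deleted only after all of its remote
# vSAN datastores have been unmounted. Names are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    remote_datastores: set = field(default_factory=set)  # names of mounted remote vSAN datastores

def unmount(cluster: Cluster, datastore: str) -> None:
    cluster.remote_datastores.discard(datastore)

def can_delete(cluster: Cluster) -> bool:
    """A cluster may only be deleted once no remote vSAN datastores remain mounted."""
    return not cluster.remote_datastores

client = Cluster("vcf-mgmt01-cluster02", {"vcf-mgmt01-vsan-max-ds01"})
print(can_delete(client))                    # False: remote datastore still mounted
unmount(client, "vcf-mgmt01-vsan-max-ds01")
print(can_delete(client))                    # True: safe to delete the cluster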

View the status of a vSAN Max cluster in vCenter Server (vSphere Client).

View the vSAN client clusters mounted by the vSAN Max cluster.

If you have read the article "Create a vSAN Max cluster and configure to mount a remote datastore", you know that mounting a remote vSAN datastore supports sharing across vCenter Server instances, which in the VMware Cloud Foundation solution would mean sharing across workload domains. However, the documentation currently states that only remote datastore sharing between different clusters within the same workload domain is supported, and from what I have seen this cannot be configured in SDDC Manager. Technically, it should still be possible to configure a remote vSAN datastore mount across workload domains directly in vCenter Server (vSphere Client), and this would be limited to VI workload domains, since there is only one management domain. Of course, since VMware does not document this, it is certainly not officially supported, so it is best to follow the documentation guidelines.