VMware Cloud Foundation Home Lab – Part 7 (VCF Business Services Console Online Registration)

Blog Date: December 2025

If you’ve been following along in this home lab series, in my previous blog I finished my VCF 9.0.1 deployment to my four Minisforum MS-A2s here.

With Broadcom’s VCF 9, we are now required to configure usage reporting. This process is pretty straightforward. Click the [START REGISTRATION] button, and you’ll be redirected to the Broadcom portal for authentication.

You’ll need to enter a display name for this license file. It should be representative of the environment where it will be used.

Select the licenses needed for the environment.

Validate the selection.

Now we just need to copy the activation code.

Back in VCF Operations, we now click the [ENTER ACTIVATION CODE] button to paste in the code.

With your Activation Code ready, now you can activate.

Now your licenses will be available to apply to vCenter (PRIMARY LICENSE) and vSAN (ADD-ON LICENSE).

But wait… There’s More!

I am rather surprised that we have to apply the download token twice, considering it’s already required to download the bits for the installation, especially in a world where we are constantly striving to automate all the things. Maybe Broadcom’s VCF engineering division simply overlooked this quality-of-life automation task: copying said download token from the Cloud Installer and importing it into VCF Operations?

You’ll want to add your download token again to VCF Operations:
Fleet Management -> Lifecycle -> Depot Configuration

This looks oddly similar to creating a credential file for the VCF (Aria) Operations integration. Click the plus icon.

Well that’s… disappointing.

I fixed it for you, Broadcom.

Just like you would create a credential file, here you add your download token where you’d add your password, and click [ADD].

Now just select your Download Token and click [OK].

Now you can download your VCF Fleet (Aria) bits.

VMware Cloud Foundation Home Lab – Part 5 (Cloud Foundation Installer Password Requirements)

Blog Date: December 2025

In my previous blog VMware Cloud Foundation Home Lab – Part 4 (VCF Installer with VMUG Advantage Download Token), I walked through the process of getting the Cloud Installer connected to the Online Depot using the Download Token for VMUG Advantage Members.

In this blog, I’ll go over the basic deployment of the VMware Cloud Foundation Installer appliance (formerly VCF Cloud Builder) OVA, because there’s a new ‘feature’ that will trip you up if you’re not careful.

We’ve all installed OVAs before, but for the Cloud Foundation Installer OVA, there was something that I wanted to call out.

Specifically, when you’re deploying the OVA, be mindful of the new password requirements, as they have changed. Previous versions of VCF through 5.x did not require a 15-character password. Apparently the quality control folks over in the VCF division at Broadcom also forgot about this, because you can enter an 8-character password here, and the OVA deployment will continue as normal.
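If you want to avoid the problem entirely, generate a password of at least 15 characters before deploying the OVA. A minimal sketch, assuming openssl is available on your workstation (adjust to satisfy any complexity rules in your environment):

openssl rand -base64 16

Sixteen random bytes encode to a 24-character string, comfortably over the 15-character minimum.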

Why does this matter? Well, if we fast forward a little to Step 11 of the VMware Cloud Foundation deployment wizard, you are again asked for the Administrator Password (local user). If you did not use at least a 15-character password during the OVA deployment, the wizard will state that the Administrator password is incorrect when you click [NEXT]. It doesn’t warn you that it is too short. So after a couple of tries, you will get kicked out of the wizard and back to the Cloud Foundation Installer login page, unable to log in because the local user (admin account) is now locked… I first observed this behavior during a customer deployment of 9.0, and found that the ‘feature’ is still there in the 9.0.1 build I deployed for my home lab.

If you do find yourself in this situation, the admin@local account unlock procedure is straightforward, and Broadcom has a KB (Article ID: 403316) on how to do it:
How to unlock the admin@local account on VMware Cloud Foundation Installer 9.0

If you need to reset the password for the admin@local account, Broadcom has a KB (Article ID: 403099) for that too:
How to Reset the Admin@local password in SDDC Manager

(#-__-)

How to Update VMware Cloud Foundation SDDC Manager When NSX-T Certificate Has Been Replaced.

Blog Date: July 11, 2024

In VMware Cloud Foundation 4.5.1, managing certificates for Aria Suite Lifecycle (LCM), NSX, VxRail, and vCenter should be done via SDDC Manager, so that it trusts each component’s certificate. The official documentation on how to do it can be found here -> Manage Certificates in a VMware Cloud Foundation.

In some cases, however, certificates are replaced or updated outside of SDDC Manager, either due to a lack of understanding or in emergency situations where certificates have expired. In either of those situations, the certificate must be imported into the trusted root store on the SDDC Manager appliance to re-establish trust with those components. Otherwise, SDDC Manager will not function as intended.

Official knowledge base article can be found here -> How to add/delete Custom CA Certificates to SDDC Manager and Common Services trust stores.

The following steps can be used to update the SDDC Manager trust store with the new NSX certificate.

  1. IMPORTANT: Take a snapshot of the SDDC Manager virtual machine. **Don’t Skip This Step**
  2. Use a file transfer utility to copy the new NSX certificate file to the /tmp directory on the SDDC Manager.
  3. Establish an SSH connection to the SDDC Manager as the vcf user, and then issue the su - command to switch to the root user.
  4. Obtain the trusted certificates key by issuing the following command:

    cat /etc/vmware/vcf/commonsvcs/trusted_certificates.key

    Note: You will see output similar to the following:

    p_03ZjNI7S^B7V@8a+
  5. Next, issue a command similar to the following to import the new NSX-T certificate into the SDDC Manager trust store (a fully worked example follows this procedure):

    keytool -importcert -alias <aliasname> -file <certificate file> -keystore /etc/vmware/vcf/commonsvcs/trusted_certificates.store -storepass <trust store key>

    Notes:
    • Type yes when prompted to trust the certificate.
    • Enter something meaningful, like sddc-mgmt-nsx for the <aliasname> value.
    • Replace <certificate file> with the full path to the certificate file that was uploaded in Step 2.
    • Replace <trust store key> with the trusted certificates key value returned in Step 4.

  6. Issue a command similar to the following to import the new NSX-T certificate into the java trust store. Here the storepass is changeit:

    keytool -importcert -alias <aliasname> -file <certificate file> -keystore /etc/alternatives/jre/lib/security/cacerts -storepass changeit

    Notes:
    • Type yes when prompted to trust the certificate.
    • Replace <aliasname> with the meaningful name chosen in Step 5.
    • Replace <certificate file> with the full path to the certificate file that was uploaded in Step 2.
  7. Issue a command similar to the following to verify that the new NSX-T certificate has been added to the SDDC Manager trust store:

    keytool -list -v -keystore /etc/vmware/vcf/commonsvcs/trusted_certificates.store -storepass <trust store key>

    Note: 
    • Replace <trust store key> with the trusted certificates key value returned in Step 4.
  8. Issue the following command to restart the SDDC Manager services:

    /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh
  9. (Optional): You can utilize the SDDC manager SOS utility to check the health of the newly imported NSX-T certificate with the following command:

    /opt/vmware/sddc-support/sos --certificate-health --domain-name ALL

    Tip:
    For more information on the sos utility, check out the documentation here: -> SoS Utility Options (vmware.com)
  10. If everything checks out, remove the snapshot that was taken prior to starting this procedure.
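For reference, here is roughly what steps 5 through 7 look like end to end with example values plugged in. The certificate file name is hypothetical, the alias comes from the suggestion in step 5, and the trust store key is the example value shown in step 4; substitute your own values, and quote the key since it usually contains shell special characters. Type yes when prompted to trust the certificate.

keytool -importcert -alias sddc-mgmt-nsx -file /tmp/nsx-mgmt.pem -keystore /etc/vmware/vcf/commonsvcs/trusted_certificates.store -storepass 'p_03ZjNI7S^B7V@8a+'

keytool -importcert -alias sddc-mgmt-nsx -file /tmp/nsx-mgmt.pem -keystore /etc/alternatives/jre/lib/security/cacerts -storepass changeit

keytool -list -v -keystore /etc/vmware/vcf/commonsvcs/trusted_certificates.store -storepass 'p_03ZjNI7S^B7V@8a+' | grep -A 2 sddc-mgmt-nsx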

Update Multiple 6.7 ESXi Hosts Root Password Via Host Profile.

Blog date: September 01, 2020
Completed on vSphere version: 6.7

Today my customer needed to change the root password for roughly 36 hosts across two data centers. These are two data centers that were recently built as part of my residency with them, and they have already seen the benefits of using host profiles. Today I was able to show them one more.

VMware has KB68079 that details the process should the root password become unknown on a host. The same process can be applied to update the password on all hosts with that host profile attached. At the time of writing this article, all hosts are compliant with the current host profile, and there are no outstanding issues.

In the vSphere client, go to ‘Policies and Profiles’ and select ‘Host Profiles’ in the left column, click and select the desired host profile on the right.

  • Edit the desired host profile.
  • In the search field, type root and hit enter.
  • Select root in the left column.
  • In the right column, change the field below ‘Password’ to Fixed password Configuration.

Now you are prompted with password fields and can update the root password.

Click Save once the new password has been entered.

Now you can remediate the hosts against the updated host profile, and the root account will get updated on each host.
– Out of an abundance of caution, it is always good to spot check a handful of hosts to validate the new password.
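One quick way to spot check is to log in to a host directly with the new credentials, for example with PowerCLI. A minimal sketch, assuming the host is not in lockdown mode; the hostname and password are hypothetical:

# Connect directly to one ESXi host using the new root password (hostname is a placeholder)
Connect-VIServer -Server esx01.lab.local -User root -Password 'NewRootPassword!'
# A successful connection confirms the password took; disconnect when done
Disconnect-VIServer -Server esx01.lab.local -Confirm:$false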

I had my customer go back and edit the host profile once more and change the ‘Password’ field back to: Leave password unchanged for the default account. Click save, and then remediate the cluster again. The new password will stay.

Before I connected with the customer today, they had already researched how to update the root password on all hosts with a script, but this method is simple, automated and built into vSphere.

How To Setup NFS Server on CentOS 7 for vCloud Director v10 Appliance

Blog Date: 10/25/2019
Last Updated: 04/06/2020

For the purposes of this demonstration, I will be configuring NFS services on a CentOS 7 VM, deployed to a vSphere 6.7 U3 homelab environment.

NFS Server VM Configuration

Host Name: cb01-nfs01
IP Address: 10.0.0.35
CPU: 2
RAM: 4GB

Disk 1: 20GB – Linux installation (thin provisioned)
Disk 2: 100GB – Will be used for the vCD NFS share (thin provisioned)

Configure the vCD NFS share disk

For this demonstration, Disk 2 was added to the VM but left unconfigured. Therefore, this “how-to” assumes that a new, blank disk has been added to the VM, and that the NFS server has been powered on afterwards.

1) Open a secure shell to the NFS server. I have switched to the root account.
2) On my NFS server, the new disk will be “/dev/sdb”. If you are unsure which device is the new disk on yours, run the following command to identify it:

fdisk -l

3) We need to partition the newly added disk, in my case /dev/sdb, so run the following command:

fdisk /dev/sdb

4) Next with the fdisk utility, we need to partition the drive. I used the following sequence:
(for new partition) : n
(for primary partition) : p
(default 1) : enter
(default first sector) : enter
(default last sector) : enter

5) Before saving the partition, we need to change its type to ‘Linux LVM’ from the default ‘Linux’. We’ll first use the option ‘t’ to change the partition type, then use the hex code ‘8e’ to change it to Linux LVM like so:

Command (m for help): t
Selected partition 1

Hex code (type L to list all codes): 8e
Changed type of partition ‘Linux’ to ‘Linux LVM’.

Command (m for help): w

Once you see “Command (m for help):” type ‘w’ to save the config.
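Before moving on, it doesn’t hurt to confirm the new partition exists and carries the Linux LVM type (a quick check; output will vary by environment):

fdisk -l /dev/sdb

You should see /dev/sdb1 listed with a type of ‘Linux LVM’.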

Create a Physical Volume, Volume Group and Logical Volume

6) Now that the partition is prepared on the new disk, we can go ahead and create the physical volume with the following command:

# pvcreate /dev/sdb1

7) Now we need to create a volume group. You can name it whatever suits your naming standards. For this demonstration, I’ve created a volume group named vg_nfsshare_vcloud_director using /dev/sdb1, with the following command:

# vgcreate vg_nfsshare_vcloud_director /dev/sdb1

Creating a volume group allows us the possibility of adding other devices to expand storage capacity when needed.
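For example, if the share ever fills up, the volume group makes growing it straightforward. A rough sketch, assuming a new (hypothetical) disk /dev/sdc has been added and partitioned the same way as /dev/sdb:

# pvcreate /dev/sdc1
# vgextend vg_nfsshare_vcloud_director /dev/sdc1
# lvextend -l +100%FREE /dev/vg_nfsshare_vcloud_director/vol_nfsshare_vcloud_director
# resize2fs /dev/vg_nfsshare_vcloud_director/vol_nfsshare_vcloud_director

The resize2fs step grows the ext4 filesystem (created in step 9) to fill the extended logical volume.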

8) When it comes to creating logical volumes (LV), the distribution of space must take into consideration both current and future needs. It is considered good practice to name each logical volume according to its intended use.
– In this example I’ll create one LV named vol_nfsshare_vcloud_director using all the space.
– The -n option is used to indicate a name for the LV, whereas -l (lowercase L) is used to indicate a percentage of the remaining space in the container VG.
The full command used looks like:
# lvcreate -n vol_nfsshare_vcloud_director -l 100%FREE vg_nfsshare_vcloud_director

9) Before a logical volume can be used, we need to create a filesystem on top of it. I’ve used ext4 since it allows us both to increase and reduce the size of the LV.
The command used looks like:

# mkfs.ext4 /dev/vg_nfsshare_vcloud_director/vol_nfsshare_vcloud_director

Writing the filesystem will take some time to complete. Once successful you will be returned to the command prompt.

Mounting the Logical Volume on Boot

10) Next, create a mount point for the LV. This will be used later on for the NFS share. The command looks like:

# mkdir -p /nfsshare/vcloud_director

11) To better identify a logical volume we will need to find out what its UUID (a non-changing attribute that uniquely identifies a formatted storage device) is. The command looks like:

# blkid /dev/vg_nfsshare_vcloud_director/vol_nfsshare_vcloud_director

In this example, the UUID is: UUID=2aced5a0-226e-4d36-948a-7985c71ae9e3

12) Now edit the /etc/fstab and add the disk using the UUID obtained in the previous step.

# vi /etc/fstab

In this example, the entry would look like:

UUID=2aced5a0-226e-4d36-948a-7985c71ae9e3 /nfsshare/vcloud_director ext4 defaults 0 0

Save the change with :wq

13) Mount the new LV with the following command:

# mount -a

To see that it was successfully mounted, use a command similar to the following:

# mount | grep nfsshare

Assign Permissions to the NFS Share

14) According to the Preparing the Transfer Server Storage section of the vCloud Director 10.0 guide, you must ensure that its permissions and ownership are 750 and root:root.

Setting the permissions on the NFS share would look similar to:

# chmod 750 /nfsshare/vcloud_director

Setting the ownership would look similar to:

# chown root:root /nfsshare/vcloud_director
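A quick way to confirm both settings took effect:

# ls -ld /nfsshare/vcloud_director

The output should show drwxr-x--- (750) with root root as the owner and group.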

Install the NFS Server Utilities

15) Install the NFS server package using the yum command:

# yum install -y nfs-utils

16) Once the packages are installed, enable and start NFS services:

# systemctl enable nfs-server rpcbind

# systemctl start nfs-server rpcbind

17) Modify the /etc/exports file to make an entry for the directory /nfsshare/vcloud_director.

– According to the Preparing the Transfer Server Storage guide, the method for allowing read-write access to the shared location for two cells named vcd-cell1-IP and vcd-cell2-IP is the no_root_squash method.

# vi /etc/exports

18) For this demonstration, my vCD appliance IP on the second NIC is 10.0.0.38, so I add the following:

/nfsshare/vcloud_director 10.0.0.38(rw,sync,no_subtree_check,no_root_squash)

– There must be no space between each cell IP address and its immediate following left parenthesis in the export line. If the NFS server reboots while the cells are writing data to the shared location, the use of the sync option in the export configuration prevents data corruption in the shared location. The use of the no_subtree_check option in the export configuration improves reliability when a subdirectory of a file system is exported.
– As this is only a lab, I only have a single vCD appliance for testing. In a proper production deployment, add an entry for each appliance IP (see the example below).
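For instance, a two-cell production deployment might look like this (the second IP is hypothetical), keeping in mind the no-space rule between each IP and its parenthesis:

/nfsshare/vcloud_director 10.0.0.38(rw,sync,no_subtree_check,no_root_squash) 10.0.0.39(rw,sync,no_subtree_check,no_root_squash)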

19) Each server in the vCloud Director server group must be able to mount the NFS share, which is controlled by the export list. Run exportfs -a to export all NFS shares; to re-export after a change, use exportfs -r.

# exportfs -a

– To check the export, run the following command:

# exportfs -v

– Validate that the NFS daemons are running on the server by using rpcinfo -p localhost or systemctl status nfs-server.service.

# rpcinfo -p localhost

or

# systemctl status nfs-server.service

Configure the Firewall

20) We need to configure the firewall on the NFS server to allow NFS clients to access the NFS share. To do that, run the following commands on the NFS server:

# firewall-cmd --permanent --add-service mountd

# firewall-cmd --permanent --add-service rpc-bind
# firewall-cmd --permanent --add-service nfs
# firewall-cmd --reload

21) That’s it. Now we can deploy the vCloud Director 10.0 appliance(s).

Optional NFS Share Testing

I highly recommend testing the NFS share before continuing with the vCloud Director 10.0 appliance deployment. For my testing, I have deployed a temporary CentOS 7 VM with the same hostname and IP address as my first vCD appliance, and installed nfs-utils on it.
# yum install -y nfs-utils

OT-1) Check the NFS shares available on the NFS server by running the following command on the test VM. Change the IP here to that of your NFS server.

# showmount -e 10.0.0.35

As you can see, my NFS server is showing one export, allowed for 10.0.0.38, my only vCD appliance.

OT-2) Create a directory on the NFS test VM to mount the NFS share /nfsshare/vcloud_director which we created on the NFS server.

# mkdir -p /mnt/nfsshare/vcloud_director

OT-3) Use the command below to mount the NFS share /nfsshare/vcloud_director from NFS server 10.0.0.35 at /mnt/nfsshare/vcloud_director on the NFS test VM.

# mount 10.0.0.35:/nfsshare/vcloud_director /mnt/nfsshare/vcloud_director

OT-4) Verify the mounted share on the NFS test VM using the mount command.

# mount | grep nfsshare

You can also use the df -hT command to check the mounted NFS share.

# df -hT

OT-5) Next we’ll create a file in the mounted directory to verify read and write access on the NFS share. IMPORTANT: during the vCD appliance deployment, this directory is expected to be empty, otherwise the deployment could fail. Remember to clean up after the test.

# touch /mnt/nfsshare/vcloud_director/test

OT-6) Verify the test file exists by using the following command:

# ls -l /mnt/nfsshare/vcloud_director/

OT-7) Clean your room. Clean up the directory so that it is ready for the vCD deployment.

# rm /mnt/nfsshare/vcloud_director/test

After successfully testing the share, we now know that we can write to that directory from the vCD appliance IP address, and that we can remove files.




In my next post, I will cover deploying the vCloud Director 10.0 appliance. Stay tuned!

VMworld General Session, Day 1, Monday August 26th, 2019: VMware Tanzu, and Project Pacific.

The start of VMworld 2019 in San Francisco is underway, and Pat kicked off the general session talking about his excitement for being back in San Francisco, while poking fun at us “Vegas lovers”. Pat also talked about technology, our digital lives, and technology’s role as a force for good. He talked about charities and cancer research foundations.

Pat then talked about the Law of Unintended Consequences: how, as technology has advanced, we as a society have given up certain aspects of privacy, and how there is a need to combat disinformation at the scale it now spreads on social media platforms.

Surprisingly, according to Pat, Bitcoin is Bad and contributes to the climate crisis.

First Major Announcement with Kubernetes, as VMware has been focusing on containers

Pat then announced the creation of VMware Tanzu, the initiative to deliver a common platform that allows developers to build modern apps, run enterprise Kubernetes, and manage Kubernetes for developers and IT.

Second Major Announcement, Project Pacific. An ambitious project to unite vSphere and Kubernetes for the future of modern IT

Interestingly, Project Pacific was announced to be 30% faster than a traditional Linux VM, and 8% faster than solutions running on bare metal.

Project Pacific brings Kubernetes to the VMware Community, and will be offered by 20K+ Partner resellers, 4K+ Service providers and 1,100+ technology partners.

Tanzu also comes with Mission Control, a centralized tool allowing IT operations to manage Kubernetes for developers and IT.

Upgrading To vSphere 6.7 Update 1, and Using The vCenter Converge Tool: Part 2

In this second part of the blog series “Upgrading to vSphere 6.7 Update 1, and Using the vCenter Converge Tool”, I will go over my experience using the Convergence tool. Let’s get started.

The Basics of the vCenter Converge Tool

David Stamen (@davidstamen) put together an excellent blog on Understanding the vCenter Server Converge Tool at VMware’s official blog site, which I found very useful. Shout-out to Nigel Hickey (@vCenterNerd) for answering some questions I had.

The Convergence Tool basically takes the external PSC and embeds it into the vCenter appliance like so:

Photo credit
@davidstamen

For this customer, I had three vCSA’s and three PSC’s that I needed to converge. Most of the blogs that I found didn’t cover PSC’s that were joined to a domain, environments with multiple vCenters, or with multiple PSC’s, so I thought I would write this up in a blog.

Planning the Convergence

The first thing I had to do was take note of any services registered with the SSO domain. I utilized VMware’s KB2043509 to identify these services, and found none to worry about. VMware specifically calls out NSX and Site Recovery Manager (SRM), but since those were not in use at this customer, the only things I had to check were Horizon, vROps, vRLI and Zerto. Each of these services registered directly with the vCenters, so I had nothing to worry about there. If I had any services registered with the SSO domain, I’d simply need to re-register them once the convergence tool was run. Since this didn’t apply, I could move forward with configuring the scripts for the convergence tool.

I also needed to have an understanding of the replication topology of the existing SSO domain. VMware KB2127057 was an excellent resource I used to gather that information. Opening a putty session to a vCenter, and running the ‘vdcrepadmin’ command against each of the external PSCs, I was able to see the following:

# cd /usr/lib/vmware-vmdir/bin

./vdcrepadmin -f showpartners -h external_psc-a.domain.com -u administrator -w kjdshfsdkjfhskjdhf

ldap://external_psc-b.domain.com
ldap://external_psc-c.domain.com

-----------------------------------------------------------------

./vdcrepadmin -f showpartners -h external_psc-b.domain.com -u administrator -w kjdshfsdkjfhskjdhf

ldap://external_psc-a.domain.com
ldap://external_psc-c.domain.com

-----------------------------------------------------------------

./vdcrepadmin -f showpartners -h external_psc-c.domain.com -u administrator -w kjdshfsdkjfhskjdhf

ldap://external_psc-a.domain.com
ldap://external_psc-b.domain.com

I could see they already had a ring topology, which is the desired architecture. If I were to draw the SSO topology out, it would look something like:

Setting Up the JSON Templates for the Convergence Tool

The converge.json template that the convergence tool uses can be found on the VMware VCSA ISO that was used for the 6.7 Update 1 upgrade, under the following path: DVD Drive (#):\VMware VCSA\vcsa-converge-cli\templates\

To make my life easier, I copied the contents of the entire ISO to a folder on the root of my C drive. I then made a separate folder on the root of C called converge, and created a folder for each of the three vCenters I’d be working with: vCenter-A, vCenter-B, vCenter-C. I made a copy of the converge.json, and placed it into each folder.

Taking a look at the converge.json for vCenter-A, the template tells you what data needs to be filled in, so pay close attention. Lines 10 – 15 need entries for the ESXi host where the vCenter resides, or the managing vCenter appliance. Here I chose the option to use the managing ESXi host. All I needed to do was look in vSphere to see which host the vCSA appliance VM resided on. While there, I also set the cluster DRS settings to manual, to prevent the VMs from moving during the upgrade. Once I obtained the information needed, I completed that portion of the json. (I’ve redacted environment-specific information.)
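If you prefer to flip DRS from the command line instead of the UI, something like this works in PowerCLI (a sketch; the cluster name is hypothetical):

# Set DRS to manual so the vCSA/PSC VMs don't vMotion mid-converge ("Management-Cluster" is a placeholder name)
Set-Cluster -Cluster "Management-Cluster" -DrsAutomationLevel Manual -Confirm:$false
# Revert once the upgrade and converge work is complete
Set-Cluster -Cluster "Management-Cluster" -DrsAutomationLevel FullyAutomated -Confirm:$false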

Lines 16 – 21 need data entries for the first vCenter appliance (vCenter-A) to be converged. Here I need the FQDN for vCenter-A; for the username, the administrator@vsphere.local account and its password; and the root password of the appliance.

Lines 22 – 33 would be filled out IF the Platform Services Controller (PSC) appliance is joined to the domain. My customer’s PSCs were joined to the domain, so I needed to fill this section out. Otherwise you can remove this section from the JSON.

Now, because this is the first of three vCenters in the same SSO domain, I did not need the replication partner entry for the first convergence, because the first vCenter does not have a partner yet. It will be needed, however, on the second (vCenter-B) and third (vCenter-C) convergences.

Now I need to fill out a second and third converge.json file for the second and third convergence, saving each in its respective folder. For vCenter-B and vCenter-C, for the partner hostname on line 32, I used the FQDN of the first converged vCenter (vCenter-A), as that is the first partner of the SSO domain.

For vCenter-A, the first to be converged, the completed converge.json looks like this (take note of the commas, brackets and lines removed):

For the second convergence (vCenter-B), and third convergence (vCenter-C), the completed converge.json looks like this:

Now that we got the converge.json done for each of the vCenters, we can work on the decommission.json.

Here is the template VMware provides in the same directory:

Lines 11 – 15 require input for the managing vCenter or ESXi host of the external PSC. Again, just like with the vCenter, I used the ESXi host that the PSC is running on.

Lines 16 – 21 need data for the Platform Services Controller that will be decommissioned.

Lines 30 – 34 require information for the managing vCenter or ESXi host of the vCenter the PSC was paired with. Again, here I just used the ESXi host that the vCenter is currently running on.

Lines 35 – 39 require the information for the vCenter that the PSC is paired with.

Now that we have the decommission.json filled out for the first vCenter (vCenter-A), I have to repeat the process for the second and third vCenters (vCenter-B, vCenter-C). The full decommission.json should look like this:

Now that both the converge.json and decommission.json have been filled out for each of my environments (3), and stored in the same directory on the root of C, I can move forward with the Convergence process.

Prerequisites and Considerations Before Starting the Convergence Process

  • The converge tool only supports the VCSA and PSC 6.7 Update 1. All nodes must be on 6.7 Update 1 before converting.
  • If you are currently running a Windows vCenter Server or PSC, you must migrate to the appliance first.
  • Before converting, take backups of your VCSA(s) and PSCs in the vSphere SSO domain (VM snapshots, and DB backups).
  • Know all other solutions using the PSC for authentication in the environment. They will need to be re-registered after the convergence completes and before decommissioning.
  • A machine on a routable network which can communicate with the VCSA and PSC will be used to run the convergence and decommission process.
  • Set the DRS Automation Level to manual, and the Migration Threshold to conservative. There will be issues if the VCSA being converged is moved during the process.
  • If VCHA is enabled, it must be disabled prior to running the convergence process.
  • The converge process will handle PSC HA load balancers. Make sure you point to the VIP in the JSON template if you have them.
  • All vSphere SSO data is migrated with the exception of local OS users.
  • Best to take snapshots of the vCSA and external PSC VMs before continuing. We’ve already backed up the database, but it doesn’t hurt to have snapshots as well.

Executing the Converge Tool

Now that the converge.json template for each vCenter (vCenter-A, vCenter-B, vCenter-C) is filled out properly, we can execute. We will run the convergence tool against the first vCenter (vCenter-A). Note: we can only run the converge tool against one vCSA at a time.

In PowerShell, we can first run the following command before proceeding with the upgrade to see what options/parameters are available with the converge tool.

.\vcsa-converge-cli\win32\vcsa-util.exe converge --help 

To execute the convergence tool against the first vCenter (vCenter-A), I ran the following command:

.\vcsa-converge-cli\win32\vcsa-util.exe converge --no-ssl-certificate-verification --backup-taken C:\pathtofile.json

The output in powershell should look something like:

It will then ask you to reboot the first vCenter before continuing.

Once the first vCenter (vCenter-A) came up, I executed the convergence tool for the second vCenter (vCenter-B). Once completed I restarted the appliance.

Finally, the last vCenter (vCenter-C) is on deck. I executed the converge.json against that vCenter, and once completed, I restarted it.

Here is where you would need to re-point those systems using the old SSO domain, but since I didn’t have any, I can move forward with the decommissioning steps.

Decommissioning the Old external Platform Services Controllers (PSC)

Use the Converge Tool with the decommission option to remove the external PSC’s. Just like before, we need to do this one PSC at a time. The command looks something like this:

 .\vcsa-converge-cli\win32\vcsa-util.exe decommission --no-ssl-certificate-verification C:\pathtofile.json 

Once the process successfully completes, move onto the next PSC. Repeat the process until all PSC’s have been decommissioned.

Validate the SSO Replication Topology After the Converge Process

If you’ll remember, when I set up the converge.json, I had the replication partner for the second vCenter (vCenter-B) and third vCenter (vCenter-C) set to the first converged vCenter (vCenter-A). My replication topology currently looks like this:

I needed to close the loop between vCenter-B and vCenter-C. Using VMware’s KB2127057, I used the ‘createagreement’ parameter. I opened a putty session to vCenter-B and ran the following command:

# cd /usr/lib/vmware-vmdir/bin

./vdcrepadmin -f createagreement -2 -h vcenter-b.domain.com -H vcenter-c.domain.com -u Administrator -w VMw@re123

Now that the SSO replication agreement has been made between vCenter-B and vCenter-C, my replication topology looks like this:
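To confirm the new agreement is in place, you can re-run the showpartners check from earlier against each node. With embedded PSCs the command now runs on the vCenters themselves (hostnames follow this example; the SSO administrator password is redacted):

# cd /usr/lib/vmware-vmdir/bin

./vdcrepadmin -f showpartners -h vcenter-b.domain.com -u administrator -w <sso_admin_password>

./vdcrepadmin -f showpartners -h vcenter-c.domain.com -u administrator -w <sso_admin_password>

Each vCenter should now list the other two as ldap:// replication partners.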

I’m not going to lie, the hardest part of using the convergence tool, was just getting started. I’ve been through enough fires in my day to know how bad of a time I would have had if something went wrong, and I lost either the vCenter, or external PSC before the convergence successfully completed. Once I got myself beyond that mental hurdle, the process was actually quite easy and smooth.

I know I’ve left this customer’s environment in a lot better shape than I found it, and having embedded PSCs will make future vCenter upgrades a breeze. For a VMware PSO consultant, this was a huge value add for the customer.

Blog Date: April 16, 2019

Upgrading To vSphere 6.7 Update 1, and Using The vCenter Converge Tool: Part 1

I recently wrapped up a vSphere 6.7 U1 upgrade project, while on a VMware Professional Services (PSO) engagement, with a customer in Denver, Colorado. On this project, I had to upgrade their three VMware environments from 6.5 to 6.7 Update 1. This customer also had three external Platform Services Controllers (PSC), a configuration that is now deprecated in VMware architecture.

Check the VMware Interoperability, and Compatibility Matrices

The first thing I needed to do was to take inventory of the customer’s environment. I needed to know how many vCenters they had, whether they had external Platform Services Controllers, how many hosts and vSphere Distributed Switches (VDS) there were, and what the versions were.

  • From my investigation, this customer had three vCenters, and three external platform services controllers (PSC), all part of the same SSO domain.
  • I also made note of which vCenter was paired with which external PSC. This information is critical not only for the vSphere 6.7 U1 upgrade, but also the convergence process that I will be doing in part two of this blog series.
  • Looking at the customer’s ESXi hosts, the majority were running the same ESXi 6.5 build, but I did find a few Nutanix clusters, and six ESXi hosts still on version 5.5.
  • The customer had multiple vSphere Distributed Switches (VDS) that needed to be upgraded to 6.5 before the 6.7 upgrade.

The second thing that I needed to do was to look at the model of each ESXi host and determine whether it was supported for the vSphere 6.7 U1 upgrade. I also needed to validate the firmware and BIOS each host was using, to see if I needed to have the customer upgrade the firmware and BIOS of the hosts. We’ll plug this information into the VMware Compatibility Guide.

  • From my investigation, the six ESXi hosts running ESXi 5.5 were not compatible with 6.7 U1; however, they were compatible with the current build of ESXi 6.5 the customer was running on their other hosts. I would need to upgrade these hosts to ESXi 6.5 before starting the vSphere 6.7 U1 upgrade.
  • This customer had a mix of Dell and Cisco UCS hosts, and almost all needed to have their firmware and BIOS upgraded to be compatible with ESXi 6.7 U1.

The third thing I needed to check was what other platforms, either owned by VMware or bolt-on third parties, I needed to worry about for this upgrade.

  • The customer is using a later version of VMware’s Horizon solution, and luckily for me, it is compatible with vSphere 6.7 U1, so no worries there.
  • The customer has Zerto 6.0 deployed, and unfortunately it needed to be upgraded to a compatible version.
  • The customer has the Actifio backup solution, but that is also running a compatible version, so again no need to update it.

Let’s Get Those ESXi 5.5 Hosts Upgraded to 6.5

I needed to schedule an outage with the customer, as they had three offsite locations, with two ESXi 5.5 hosts each. These hosts were using local storage to house and run the VMs, so even though they were in a host cluster, HA was not an option, and the VMs would need to be powered off.

Once I had the outage secured, I was able to move forward with upgrading these six hosts to ESXi 6.5.
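For standalone hosts on local storage like these, one common approach is to upgrade from the command line with an offline bundle. This is a sketch only, not necessarily how it was done here; the datastore path, bundle name, and profile name are hypothetical, so list the profiles in your bundle first:

# Put the host into maintenance mode (the VMs here are powered off anyway)
esxcli system maintenanceMode set --enable true
# List the image profiles contained in the offline bundle (path and filename are examples)
esxcli software sources profile list -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update02.zip
# Upgrade the host to the chosen profile, then reboot
esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update02.zip -p ESXi-6.5.0-update02-standard
reboot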

Time to Upgrade the vSphere Distributed Switch (VDS)

For this portion of the upgrade, I only needed to upgrade the customer’s VDSs to 6.5. This portion of the upgrade was fast, and I was able to do it mid-day without the customer experiencing an outage. We did submit a formal maintenance request for visibility, and CYA. Total upgrade time to do all of their VDSs was less than 15 minutes. Each switch took roughly a minute.

Upgrade the External Platform Services Controllers Before the vCenter Appliances

Now that I had all hosts on a compatible ESXi 6.5 version, I could move forward with the upgrade. I was able to do this upgrade during the day, as the customer would only lose the ability to manage their VMs using the vCenters. I made backups of the PSC and vCSA databases, and created snapshots of the VMs just in case.

I first needed to upgrade the external PSCs (3) to 6.7 U1, so I simply attached the vCSA.iso to my jump VM, and launched the .exe. I did this process one PSC at a time until they were all upgraded to 6.7 U1.

Upgrade the vCenter Appliances to 6.7 Update 1

Now that the external platform services controllers are on 6.7 U1, it is time to upgrade the vCenters. The process is the same with the exe, so I just did one vCenter at a time. Both the external PSC’s and the vCSA’s upgraded without issue, and within a couple of hours they had all finished the vSphere 6.7 Update 1 upgrade.

Upgrade Compatible ESXi Hosts to 6.7 Update 1

I really wanted to use the now embedded VMware Update Manager (VUM), but I either faced users who re-attached ISO’s to their Horizon VMs, or had administrators who were upgrading/installing VMware Tools. In one cluster I even happened to find a host that had improper networking configured compared to its peers in the cluster. Once I got all of that out of the way, I was able to schedule VUM to work its way down through each cluster, and upgrade the ESXi hosts to 6.7 Update 1. There were still some fringe cases where VUM wouldn’t work as intended, and I needed to do one host at a time.

Conclusion for the Upgrade

In the end, upgrading the customer’s three environments, vCSA, PSC and ESXi to 6.7 Update 1 took me about a couple of weeks to do alone. Not too shabby considering I finished ahead of schedule, even with all of the issues I faced. After the upgrade, the customer started having their Cisco UCS blades purple screen at random. After opening a case with GSS, that week Cisco came out with an emergency patch for the fnic driver, on the customer’s UCS blades, for the very issue they were facing. The customer was able to quickly patch the blades. Talk about good timing.

Part 2 Incoming

Part 2 of this series will focus on using the vCenter Converge Tool. Stay tuned.

Blog Date: 4/15/2019

vRealize Operations Manager Dashboard: vSphere DRS Cluster Health. Part 2

This blog series assumes that the reader has some understanding of how to create a vRealize Operations Manager (vROps) dashboard.

vROps dashboards are made up of what are called widgets. These widgets can either be configured as “self providers”, or be populated with data by a “parent” widget. Self-provider widgets are configured to individually show specific data. In other words, one widget shows hosts, another shows datastores, and another shows virtual machines; however, the widgets do not interact, nor are they dependent on each other. Parent widgets are configured to provide data from a specific source, and then feed it into other child widgets on the page. This is useful when you want to display the same data in different formats of consumption. The dashboard I configured, called “vSphere DRS Cluster Health”, does just that. I will break the widgets down into different sections as I walk through the configuration.

Widget #1 – This widget is known as an “object list”, and will be the parent widget of this dashboard. In other words, widgets #2 through #6 rely on the data presented by widget #1. In this case I have the object list widget configured to show/list the different host clusters in my homelab.

I have given it a title, set refresh content to ON, set the mode to PARENT, and have it set to auto select the first row. In the lower left section “Select which tags to filter”, I have created an environment group in vROps called “Cluster Compute Resource” where I have specified my host clusters. In the lower right box, I have a few metrics selected which I would also like this “object list” widget to show.

This is just a single-ESXi homelab, so this won’t look as grand as it would if it were configured for a production environment. But each object in this list is selectable, and the cool thing is that selecting an object will change the other widgets.

Widgets #2 and #3 are called “health charts”. One is configured with the Cluster CPU Workload % metric, and the other with the Cluster Memory Workload % metric; the configurations are otherwise identical. I have both configured to show data for the past 24hrs.

Important: For these two widgets, under “widget interactions”, set both to the first widget: DRS Cluster Settings (Select a cluster to populate charts).

Widgets #4 and #5 are called “View widgets”. One is configured to show the current Cluster CPU Demand, and the other is configured to show the current Cluster Memory Demand. These are also configured to forecast out for 30 days, so that we can potentially see if the clusters will run short of capacity in the near future, allowing us the ability to add more compute to the cluster preemptively.

These are two custom “views” I created. I will go over how to create custom views in a future post, but for those who already know how, I have one “widget view” configured with each.

Important: For these two widgets, under “widget interactions”, set both to the first widget: DRS Cluster Settings (Select a cluster to populate charts), like we did above.

Widget #6 is another “Object List” widget, and I have this configured to show only host systems, of the selected cluster in Widget #1. Widget #6 will be used to provide data to Widgets #7 and #8.

I also have certain Host System metrics selected here so that I can get high level information of the hosts in the cluster.

Important: For this widget, under “widget interactions”, set it to the first widget: DRS Cluster Settings (Select a cluster to populate charts), like we did above.

The final two widgets, #7 and #8, are also called “health chart” widgets. One is configured using the metric host system CPU workload %, and the other is configured using the metric host system Memory workload %. I have both configured to show data for the past 24hrs.

Important: For these two widgets, under “widget interactions“, set both to widget #6, in this example: Host Workload (select a host to populate charts to the right).