My Experience Passing The VMware Certified Professional – VMware Cloud Foundation 5.2 Certification Exam.

Blog Date: December 2024

Those of us who have taken the VMware Certified Professional Data Center Virtualization exams can attest that those exams test your knowledge of and experience with vSphere, ESXi, and vSAN. We now have a new certification that tests our administration skills with VMware Cloud Foundation. Well, sort of…

What this exam got right: I do believe it was a good move to pull out questions regarding advanced deployment considerations around networking and vSAN stretched clusters, because those questions belong in a VCAP-level exam that tests our abilities around design and deployment. The exam also stayed away from questions that quiz us on deployment sizing, ports, and other factoids that, in the real world, we would just consult the documentation for. I was also happy to see significantly fewer “gotcha” questions than in previous versions.

What I believe the exam got wrong: I do not believe this exam should have questions regarding the benefits and usage of add-ons like HCX, the Aria Suite, and Tanzu. To me, those questions should be moved out to individual specialist exams that target those specific skillsets when used in conjunction with VCF. The exam also did not go deep enough into daily administration tasks like managing certificates and passwords, or resolving trust issues between the SDDC manager and VCF components like ESXi, vSAN, vCenter, and NSX. There should have been more questions on basic troubleshooting and on how to perform upgrades. These are basic administration skills that engineers should have, and they are the areas where I see engineers get themselves into trouble by coloring outside the VCF lines, especially coming from traditional vSphere environments with SAN storage.

Final thoughts: I do believe this certification is a lot better than the VMware Cloud Foundation Specialist exams that have been retired, but it lacks focus on the core skillsets necessary to administer VMware Cloud Foundation, and it feels too much like an associate/specialist level exam. I would like to see a larger focus on testing an engineer’s skills administering VCF, like which configurations should be done through the SDDC manager versus manually in the individual components. I would like to see questions that test an engineer’s basic VCF troubleshooting skills, like which log files to look at for failed tasks and upgrades. The SOS command line tool in the SDDC manager is very powerful, and VCF engineers should be aware of its basic functions. I would also like to see questions around the requirements and sequence of deploying hosts to a workload domain, decommissioning hosts, and performing host maintenance, along with the vSAN considerations engineers need to take into account for each. VMware Cloud Foundation is the modern private cloud, and although it is not feasible to have deep knowledge of each of the individual components that make up VCF, like ESXi, vSAN, vCenter, vSphere, and NSX, I do believe we need to level-set on a basic set of skills to be successful.

I would highly recommend taking the VMware Cloud Foundation Administrator: Install, Configure, Manage 5.2 course. Many of the topics in the certification exam are covered in this training course. Given the exam’s current form, you should also have a basic understanding of HCX capabilities, as well as Aria Operations, Aria Operations for Logs, and Aria Automation. The exam also touches on basic knowledge of the async patch tool and its function.

Testing VMware Cloud Foundation 4.x/5.x Depot Connections From The SDDC Manager CLI

Blog Date: September 30, 2024

While working with a customer recently, we ran into a problem testing the SDDC manager’s connectivity to the online VCF_DEPOT and the VXRAIL_DEPOT. This particular customer was running VCF on VxRail.

After doing some searching, I came across our knowledge base article entitled: Troubleshooting VCF Depot Connection Issues

SSH into the SDDC manager as the vcf user, and then su to root. To test connectivity to the VMware Cloud Foundation depot, run the following curl command:

curl -kv https://depot.vmware.com:443/PROD2/evo/vmw/index.v3 -u customer_connect_username

If you have a VCF deployment running on VxRail, there’s an additional Dell depot that contains the VxRail update packages. To test connectivity to both the VXRAIL and VCF depots, run the following command:

curl -v http://localhost/lcm/depot/statuses | json_pp

The depots can return one of a few statuses from the curl command:

"Status" : "SUCCESS" (everything is working as expected)
"Status" : "NOT_INITIALIZED" (this could indicate a connection problem with the depot)
"Status" : "USER_NOT_SET" (the depot user has not been specified)

For my customer, the VCF_DEPOT had a “SUCCESS” status, but the VXRAIL_DEPOT had a status of “USER_NOT_SET”.
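
To pull just the status lines out of the pretty-printed response, a simple grep filter should do:

curl -s http://localhost/lcm/depot/statuses | json_pp | grep -i '"status"'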

Basic pings to test:

ping depot.vmware.com
ping download.emc.com

Basic curl commands to test:

curl -v https://depot.vmware.com
curl -v https://download.emc.com
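
To sweep both endpoints in one pass, here is a minimal shell sketch that simply wraps the ping and curl checks above:

#!/bin/bash
# Wraps the basic ping and curl connectivity checks for both depot endpoints.
for host in depot.vmware.com download.emc.com; do
    echo "--- ${host} ---"
    ping -c 2 "${host}"
    # Discard the response body; we only care whether the connection succeeds.
    curl -sv "https://${host}" -o /dev/null
done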

Broadcom also offers a public list of URLs that the SDDC manager uses. That list can be found here: Public URL list for SDDC Manager

vCenter MOB No Healthy Upstream Error in VMware Cloud Foundation 4.X

Blog Date: September 25, 2024

One of my customers had a strange issue where the vCenter MOB (Managed Object Browser) wasn’t working on some of the vCenters in their VMware Cloud Foundation 4.X deployment.

The 10 vCenters were running in Enhanced Linked Mode, and of the 10, the MOB was only working on one management vCenter. All other services on the affected vCenters appeared to be working fine.

On the vCenter, we can check whether the vpxd-mob-pipe is present in the /var/run/vmware directory with the following command:

ls -la /var/run/vmware/

If we do not see vpxd-mob-pipe, then we need to look at the vpxd.cfg file, specifically at the <enableDebugBrowse> parameter. If it is set to false, the MOB will not work.
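
Before opening the file in an editor, a quick grep confirms the current value:

grep enableDebugBrowse /etc/vmware-vpx/vpxd.cfg

To change the value, open the file in vi: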

vi /etc/vmware-vpx/vpxd.cfg

Once vpxd.cfg opens, we can search the file by pressing the '/' key, typing the parameter name, and pressing Enter:

/<enableDebugBrowse>

This should take us to where we need to be. In my case, it was set to false as shown below:

<enableDebugBrowse>false</enableDebugBrowse>

Hit the ‘INSERT’ key, and change >false< to >true<.

<enableDebugBrowse>true</enableDebugBrowse>

Hit the ‘ESC’ key, and then hit the ‘ : ‘ key followed by entering ‘ wq! ‘ to save and exit the vpxd.cfg file.

:wq!

Now we need to stop and start the vmware-vpxd service with the following command:

service-control --stop vmware-vpxd && service-control --start vmware-vpxd

Once the service restarts, you should now be able to access the vCenter MOB.
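
To confirm vpxd came back up cleanly, you can also check the service status:

service-control --status vmware-vpxd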

VMware Cloud Foundation SDDC Manager Unable To Remediate Edge Admin and Audit Passwords. Part 2.

Blog Date: 08/23/2024
VMware Cloud Foundation 4.x

Continuing from my previous blog post, where the VMware Cloud Foundation 4.x SDDC manager was unable to obtain an SSH connection to the NSX Edges, we determined that at some point the edges were redeployed via NSX-T instead of through the SDDC manager, and we had to update the edge IDs in the SDDC manager database. I’d certainly recommend checking that blog out here -> VMware Cloud Foundation SDDC Manager Unable To Remediate Edge Admin and Audit Passwords. Part 1.

In this blog, I will go through the second issue, where by investigating the logs we identified that the HostKey of the edges had changed, and the process we used to fix it and restore the SDDC manager’s communication with the edges, so that they can once again be managed via the SDDC manager in VMware Cloud Foundation.

We still see a similar error message in the SDDC manager UI when we attempt to remediate the edges’ admin and audit passwords. We established an SSH session to the SDDC manager to review the operationsmanager.log located in /var/log/vmware/vcf/, and used “less operationsmanager.log” to search for the edge, in this example “qvecootwedgen01”.
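
The exact commands, for reference:

less /var/log/vmware/vcf/operationsmanager.log

Within less, press '/', type the edge FQDN (qvecootwedgen01 in this example), and press Enter to search.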

After searching the log, we can see an error in the operationsmanager.log indicating that the HostKey has changed. To resolve this issue, we can use a script called fix_known_hosts.sh. The fix_known_hosts.sh script was created by Laraib Kazi to address an issue where SSH attempts from the SDDC Manager fail with a HostKey-related error. The script removes the existing erroneous entries in the known_hosts files and replaces them with new ones. It edits 4 known_hosts files, so it is recommended to take a snapshot of the SDDC Manager before executing it. This script is useful when dealing with SSH host key mismatches, which can occur for various reasons like restoring from a backup, a manual rebuild, or manual intervention to change the HostKey. The script can be downloaded from his GitHub page here.

Upload the script to a safe spot on the SDDC manager. You can put it in the /tmp directory, but remember it will be deleted on the next reboot. The script can be run on the SDDC manager as is. HOWEVER, you will want to prep beforehand and gather the FQDN(s) and IP address(es) of the edge node(s) in a text file, as we will need those when we run the script.
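
A minimal sketch of the upload, assuming a hypothetical SDDC manager FQDN of sddc-manager.example.com:

scp fix_known_hosts.sh vcf@sddc-manager.example.com:/tmp/
ssh vcf@sddc-manager.example.com "chmod +x /tmp/fix_known_hosts.sh"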

**************************************************************************************************************
STOP… Before continuing, take an *offline* (powered off) snapshot of the SDDC manager VM, as we will be updating the edge node HostKey on the SDDC manager.
**************************************************************************************************************

Disclaimer: While this process has worked for me in different customer environments, not all environments are the same, and your mileage may vary.

Run the script:

./fix_known_hosts.sh

When prompted, enter the FQDN of the NSX-T edge being fixed, and the script will update the known_hosts entries for that edge.

Re-run this script against additional NSX-T edges as needed.

Now you’re ready to try password operations against the edge(s) in the SDDC manager again. If you elected to create a new password on the NSX-T edge, you’ll need to choose the password “remediation” option in the SDDC manager to update the database with the new password. If you set the password on the NSX-T edge back to what the SDDC manager already had, then just use the password “rotate now” function.

SDDC manager password operations should now be working as expected. If this did not resolve the issue, I would revert to the snapshot and contact support for further troubleshooting.

If this resolved your issue, don’t forget to clean your room, and delete any snapshots taken.

How to Update VMware Cloud Foundation SDDC Manager When NSX-T Certificate Has Been Replaced.

Blog Date: July 11, 2024

In VMware Cloud Foundation 4.5.1, managing the Aria Suite Lifecycle, NSX, VxRail, and vCenter certificates should be done via the SDDC manager, so that it trusts the components’ certificates. The official documentation on how to do this can be found here -> Manage Certificates in a VMware Cloud Foundation.

In some cases, however, certificates get replaced or updated outside of the SDDC manager, either due to a lack of understanding or in emergency situations where certificates have expired. In either of those situations, the certificate must be imported into the trusted root store on the SDDC manager appliance to re-establish trust with those components. Otherwise, the SDDC manager will not function as intended.

Official knowledge base article can be found here -> How to add/delete Custom CA Certificates to SDDC Manager and Common Services trust stores.

The following steps can be used to update the SDDC Manager trust store with the new NSX certificate.

  1. IMPORTANT: Take a snapshot of the SDDC Manager virtual machine. **Don’t Skip This Step**
  2. Use a file transfer utility to copy the new NSX certificate file to the /tmp directory on the SDDC Manager.
  3. Establish an SSH connection to the SDDC Manager as the vcf user, and then issue the su - command to switch to the root user.
  4. Obtain the trusted certificates key by issuing the following command:

    cat /etc/vmware/vcf/commonsvcs/trusted_certificates.key

    Note: You will see output similar to the following:

    p_03ZjNI7S^B7V@8a+
  5. Next, issue a command similar to the following to import the new NSX-T certificate into the SDDC Manager trust store (a filled-in end-to-end example follows these steps):

    keytool -importcert -alias <aliasname> -file <certificate file> -keystore /etc/vmware/vcf/commonsvcs/trusted_certificates.store -storepass <trust store key>

    Notes:
    • Type yes when prompted to trust the certificate.
    • Enter something meaningful, like sddc-mgmt-nsx for the <aliasname> value.
    • Replace <certificate file> with the full path to the certificate file that was uploaded in Step 2.
    • Replace <trust store key> with the trusted certificates key value returned in Step 4.

  6. Issue a command similar to the following to import the new NSX-T certificate into the Java trust store. Here the storepass is changeit:

    keytool -importcert -alias <aliasname> -file <certificate file> -keystore /etc/alternatives/jre/lib/security/cacerts -storepass changeit

    Notes:
    • Type yes when prompted to trust the certificate.
    • Replace <aliasname> with the meaningful name chosen in Step 5.
    • Replace <certificate file> with the full path to the certificate file that was uploaded in Step 2.
  7. Issue a command similar to the following to verify that the new NSX-T certificate has been added to the SDDC Manager trust store:

    keytool -list -v -keystore /etc/vmware/vcf/commonsvcs/trusted_certificates.store -storepass <trust store key>

    Note: 
    • Replace <trust store key> with the trusted certificates key value returned in Step 4.
  8. Issue the following command to restart the SDDC Manager services:

    /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh
  9. (Optional): You can utilize the SDDC manager SOS utility to check the health of the newly imported NSX-T certificate with the following command:

    /opt/vmware/sddc-support/sos --certificate-health --domain-name ALL

    Tip:
    For more information on the sos utility, check out the documentation here -> SoS Utility Options (vmware.com)
  10. If everything checks out, remove the snapshot that was taken prior to starting this procedure.
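
To tie steps 4 through 7 together, here is a minimal end-to-end sketch with hypothetical values filled in; the certificate path /tmp/nsx01.crt is a placeholder, not a value from a real environment:

# All values below are placeholders for illustration.
TRUST_KEY=$(cat /etc/vmware/vcf/commonsvcs/trusted_certificates.key)
keytool -importcert -alias sddc-mgmt-nsx -file /tmp/nsx01.crt -keystore /etc/vmware/vcf/commonsvcs/trusted_certificates.store -storepass "$TRUST_KEY"
keytool -importcert -alias sddc-mgmt-nsx -file /tmp/nsx01.crt -keystore /etc/alternatives/jre/lib/security/cacerts -storepass changeit
keytool -list -v -keystore /etc/vmware/vcf/commonsvcs/trusted_certificates.store -storepass "$TRUST_KEY" | grep -A 1 "sddc-mgmt-nsx"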

Migrate VMware Cloud Foundation 4.x/5.x from Legacy VUM Images to vSphere Lifecycle Manager (vLCM) Images. (Can’t do it… yet)

Blog Date: July 10, 2024

To get straight to the point: this is not supported yet. If you already have an existing VCF deployment, there is currently no supported way to migrate your workload domains to vLCM, per the support article here -> Transition from vSphere Lifecycle Manager Baselines to vSphere Lifecycle Manager Images is not supported with VMware Cloud Foundation. While you can technically use the vCenter UI / APIs to make the switch, it will break workflows in the SDDC manager, VMware support/engineering will have to get involved, and most likely the changes will have to be reverted.

If you are in the early stages of deploying a new workload domain, by default it will use vSphere Lifecycle Manager images as the update method, unless you specifically check “Manage clusters in this workload domain using baselines (deprecated)” during the workload domain deployment. However, the default image-based option requires you to have an existing vLCM image prior to the workload domain being deployed. If you don’t have a vLCM image yet, the VMware documentation suggests that you deploy the workload domain using legacy baselines (VUM), and that documentation can be found here -> Deploy a VI Workload Domain Using the SDDC Manager UI.

Doing a little research on the available options: if no vLCM image is available, and you already have identical ESXi hosts deployed in the VCF environment, you can create a new empty compute cluster in vSphere, select the option to manage the cluster with a single vLCM image, and then select an identical host already deployed in the environment to import and create the vLCM image from, including the NSX driver. Now you have a vLCM image you can use for new workload domains and clusters built on identical hosts. The new vLCM image can be imported into the SDDC manager. One might ask whether it is safe to create a new compute cluster using the vSphere UI in a VCF deployment. For this purpose, because it is temporary, the answer is yes. Technically, if you add additional compute clusters in vSphere without going through the SDDC manager, the SDDC manager will have no knowledge of them and won’t interact with them, so for our purposes it is safe to create the empty compute cluster, use it to create the new image, and then just delete the empty cluster when finished. Always remember to clean your room.

Although it will take a little work on the front end if you do not currently have a vLCM image to deploy a new workload domain with, the above process can be used to create one. Eric Gray put together an excellent blog and YouTube video on this here -> Updating VCF Workload Domains deployed using vLCM Images. It walks us through the process of creating a new vLCM image for a vLCM-enabled workload domain in order to upgrade it, but the same process can be used to create a new vLCM image for a new workload domain with identical hardware.

If you have just deployed a workload domain and selected Manage clusters in this workload domain using baselines (deprecated) (legacy VUM), there is no way to convert it to vLCM images (at the time of writing this blog). You have to REDEPLOY the workload domain. You could, however, use the method above to create a vLCM image for the workload domain, so that when you redeploy it, you’ll have a vLCM image to use. Silver lining?

Unconfirmed reports indicate that the functionality to migrate existing workload domains from legacy VUM baselines to vSphere Lifecycle Manager images is *targeted* for VMware Cloud Foundation 9.