Deploying vCloud Director v10 Appliance

Blog Date: 04/07/2020

Prior to this blog post, I blogged about and walked through the steps of creating an NFS Linux server using CentOS 7. You can find the link to that blog post here.
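
Before deploying the appliance, it is worth confirming that the NFS export from that post is reachable and mountable from the network vCD will sit on. A minimal check from any Linux box on that network might look like the following; the server name nfs01.example.com and the export path /nfs/vcdtransfer are placeholders for whatever you configured.

    # Confirm the NFS server is exporting the share (placeholder names)
    showmount -e nfs01.example.com

    # Test-mount the export, make sure it is writable, then clean up
    sudo mount -t nfs nfs01.example.com:/nfs/vcdtransfer /mnt
    sudo touch /mnt/vcd-write-test && sudo rm /mnt/vcd-write-test
    sudo umount /mnt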

The VMware Cloud Director (vCD) platform is primarily used by service providers as a cloud offering for their customers. Back when I worked for a service provider, the bulk of my experience came from the version 8.x days, when vCD was a software package installed on a Linux VM. Fast forward a few years, and I’ve started deploying vCD 9.7 and vCD 10 appliances for VMware customers as part of the Professional Services engagements I’ve been working on. Interestingly enough, neither customer was a cloud provider, but both had specific use cases that vCD addressed.

The vCD appliance deployment is certainly not as clean as that of other appliances like the vCSA and vROps, and I’ve found a few gotchas that can lead to a failed appliance deployment.

Deploying the vCD appliance

1. Like most appliance deployments, we’ll start by deploying the OVF template.

2. Name the virtual machine, and select desired deployment datacenter and VM folder location.

3. Select the desired compute location.

4. Select the size of the appliance. As this is the first, primary cell, select an option that contains “primary”. If you are creating a cluster and deploying appliance cells two and three, you would select “standby” here. The “vCD Cell Application” option would be used for the fourth appliance.
– You’ll also notice two different sizes, Small and Large; which one you need depends on your environment. VMware’s official sizing documentation can be found here.

5. Select desired storage disk format and storage where the appliance will reside.

6. We’ve arrived at the first gotcha: selecting the networks. This is the only OVF deployment I’ve seen that lists the NICs in reverse order. VMware states in their official documentation that “the source network list might be in reverse order. Verify that you are selecting the correct destination network for each source network.” I have yet to see the networks display in the proper order. VMware also states in their documentation here that eth0 and eth1 must be on separate networks. I’ve asked GSS why, but wasn’t given an answer. I haven’t run into an issue with both connections being on the same network, but for demonstration purposes we’ll do as the official documentation says. Note: at least in my lab, I have noticed that the appliance uses eth1 to connect to the NFS server.
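
Later, once the appliance is up, you can confirm which interface it actually uses to reach the NFS server by asking the routing table from the appliance console. This is just a quick sanity check; the NFS server IP below is a placeholder for your own.

    # Show the addresses assigned to each NIC on the appliance
    ip addr show eth0
    ip addr show eth1

    # Ask the kernel which interface and source address would be used to reach the NFS server
    ip route get 192.168.1.50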

7. The second gotcha: filling out the template customization page. Nothing here indicates that ALL fields are REQUIRED. Yes, even the email address is a hard requirement, even though no other appliance deployment requires it.
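
If you would rather script the deployment than click through the wizard, ovftool can pass all of those required fields as OVF properties. The sketch below is only illustrative: the OVA file name, deployment option, network names, and every property key are placeholders, so probe the OVA first (running ovftool against just the source file prints its deployment options and property names) and substitute the real values.

    # List the OVA's deployment options, source networks, and OVF properties
    ovftool VMware_Cloud_Director-10.x.ova

    # Hypothetical deployment; every --prop key and network name below is a placeholder
    ovftool --acceptAllEulas \
      --name=vcd-cell-01 \
      --datastore=Datastore01 \
      --diskMode=thin \
      --deploymentOption=primary-small \
      --net:"eth0 Network"="vCD-Frontend-PG" \
      --net:"eth1 Network"="vCD-NFS-PG" \
      --prop:"example.root_password"="VMware1!" \
      --prop:"example.ntp_server"="ntp.example.com" \
      VMware_Cloud_Director-10.x.ova \
      "vi://administrator@vsphere.local@vcenter.example.com/Datacenter/host/Cluster/"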

8. On the summary page, verify the deployment and click finish.

9. Before starting the appliance, it may be a good idea to take a snapshot: once the appliance has been started, if the configuration scripts run and fail, the appliance will need to be redeployed.
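
If you prefer the command line and have govc available, the pre-first-boot snapshot is a one-liner; the VM name and connection details below are placeholders.

    # Connection details for govc (placeholders)
    export GOVC_URL=vcenter.example.com
    export GOVC_USERNAME=administrator@vsphere.local
    export GOVC_PASSWORD='VMware1!'
    export GOVC_INSECURE=true

    # Snapshot the powered-off appliance before its first boot
    govc snapshot.create -vm vcd-cell-01 "pre-first-boot"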

10. Once you have started the appliance, watch for the “Guest OS Initialization Script”. On a successful deployment it takes a couple of minutes to run; if it runs for less than 10 seconds, there was a problem and the appliance will need to be redeployed.

10a. If the deployment failed, log into the appliance as root and look at /opt/vmware/var/log/vcd/setupvcd.log for details.
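
A quick way to inspect that log from the appliance console; the path is the one from the step above, and the grep is just a convenience.

    # Follow the setup log while the first-boot scripts run
    tail -f /opt/vmware/var/log/vcd/setupvcd.log

    # Or look for obvious failures after the fact
    grep -i 'error\|fail' /opt/vmware/var/log/vcd/setupvcd.log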

10b. On a successful run, you’d see something similar to:

11. On a successful deployment, log into the appliance management UI on port 5480, and you should see something similar to:
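
If you want to confirm from a shell that the appliance is answering before opening a browser, a couple of curl probes work; the hostname is a placeholder, and -k skips certificate validation since the appliance ships with a self-signed certificate.

    # Appliance management UI (port 5480)
    curl -k -I https://vcd-cell-01.example.com:5480/

    # vCD provider portal on the standard HTTPS port
    curl -k -I https://vcd-cell-01.example.com/provider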

12. The primary appliance has now been deployed successfully. If additional standby appliances are needed, now is the best time to deploy them.

End – That’s it. In upcoming blog posts, I’ll walk through the process of deploying additional standby appliances, and the initial configuration of vCloud Director.

Upgrading To vSphere 6.7 Update 1, and Using The vCenter Converge Tool: Part 1

I recently wrapped up a vSphere 6.7 U1 upgrade project while on a VMware Professional Services (PSO) engagement with a customer in Denver, Colorado. On this project, I had to upgrade their three VMware environments from 6.5 to 6.7 Update 1. This customer also had three external Platform Services Controllers (PSC), a configuration that is now deprecated in VMware architecture.

Check the VMware Interoperability and Compatibility Matrices

The first thing I needed to do was take inventory of the customer’s environment. I needed to know how many vCenters they had, whether they used external Platform Services Controllers, how many hosts and vSphere Distributed Switches (VDS) there were, and what versions everything was running.

  • From my investigation, this customer had three vCenters and three external Platform Services Controllers (PSC), all part of the same SSO domain.
  • I also made note of which vCenter was paired with which external PSC (see the quick check after this list). This information is critical not only for the vSphere 6.7 U1 upgrade, but also for the convergence process that I will be covering in part two of this blog series.
  • Looking at the customer’s ESXi hosts, the majority were running the same ESXi 6.5 build, but I did find a few Nutanix clusters and six ESXi hosts still on version 5.5.
  • The customer had multiple vSphere Distributed Switches (VDS) that needed to be upgraded to 6.5 before the 6.7 upgrade.
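
One quick way to confirm which external PSC a given vCenter is pointed at is to query the lookup service location from the vCSA itself, over SSH with the shell enabled:

    # Run on each vCSA; the URL returned contains the FQDN of the PSC it is paired with
    /usr/lib/vmware-vmafd/bin/vmafd-cli get-ls-location --server-name localhost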

The second thing I needed to do was look at the model of each ESXi host and determine whether it was supported for the vSphere 6.7 U1 upgrade. I also needed to validate the firmware and BIOS each host was using, to see whether I needed to have the customer upgrade them. I plugged this information into the VMware Compatibility Guide (a quick way to collect it from each host is shown after the findings below).

  • From my investigation, the six ESXi hosts running ESXi 5.5 were not compatible with 6.7 U1; however, they were compatible with the ESXi 6.5 build the customer was running on their other hosts. I would need to upgrade these hosts to ESXi 6.5 before starting the vSphere 6.7 U1 upgrade.
  • This customer had a mix of Dell and Cisco UCS hosts, and almost all of them needed firmware and BIOS upgrades to be compatible with ESXi 6.7 U1.
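
As referenced above, the version, build, vendor, and model details can be pulled quickly from each host over SSH rather than clicking through the UI; smbiosDump output varies a bit between builds, so treat the last command as approximate.

    # ESXi version and build number
    vmware -vl
    esxcli system version get

    # Hardware vendor and model, for the VMware Compatibility Guide lookup
    esxcli hardware platform get

    # BIOS details (output format varies by build)
    smbiosDump | grep -A 4 'BIOS Info'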

The third thing I needed to check was which other platforms, whether VMware products or bolt-on third-party solutions, I needed to account for in this upgrade.

  • The customer was using a recent version of VMware’s Horizon solution, and luckily for me it was compatible with vSphere 6.7 U1, so no worries there.
  • The customer had Zerto 6.0 deployed, and unfortunately it needed to be upgraded to a compatible version.
  • The customer also had the Actifio backup solution, but that was already running a compatible version, so again no need to update it.

Let’s Get Those ESXi 5.5 Hosts Upgraded to 6.5

I needed to schedule an outage with the customer, as they had three offsite locations, with two ESXi 5.5 hosts each. These hosts were using local storage to house and run the VMs, so even though they were in a host cluster, HA was not an option, and the VMs would need to be powered off.

Once I had the outage secured, I was able to move forward with upgrading these six hosts to ESXi 6.5.

Time to Upgrade the vSphere Distributed Switch (VDS)

For this portion of the upgrade, I only needed to upgrade the customer’s VDSes to 6.5. This part was fast, and I was able to do it midday without the customer experiencing an outage. We did submit a formal maintenance request for visibility, and CYA. Total upgrade time for all of their VDSes was less than 15 minutes; each switch took roughly a minute.

Upgrade the External Platform Services Controllers Before the vCenter Appliances

Now that all hosts were on a compatible ESXi 6.5 version, I could move forward with the upgrade. I was able to do it during the day, as the customer would only lose the ability to manage their VMs through the vCenters. I made backups of the PSC and vCSA databases, and created snapshots of the VMs just in case.

I first needed to upgrade the three external PSCs to 6.7 U1, so I simply attached the vCSA ISO to my jump VM and launched the installer executable. I did this one PSC at a time until they were all upgraded to 6.7 U1.

Upgrade the vCenter Appliances to 6.7 Update 1

Now that the external Platform Services Controllers were on 6.7 U1, it was time to upgrade the vCenters. The process uses the same installer, so I did one vCenter at a time. Both the external PSCs and the vCSAs upgraded without issue, and within a couple of hours they had all finished the vSphere 6.7 Update 1 upgrade.

Upgrade Compatible ESXi Hosts to 6.7 Update 1

I really wanted to use the now-embedded VMware Update Manager (VUM), but I kept running into users who re-attached ISOs to their Horizon VMs, or administrators who were upgrading or installing VMware Tools, either of which blocked remediation. In one cluster I even found a host whose networking was configured differently from its peers. Once I got all of that out of the way, I was able to schedule VUM to work its way down through each cluster and upgrade the ESXi hosts to 6.7 Update 1. There were still some fringe cases where VUM wouldn’t work as intended, and I needed to do one host at a time.
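
To hunt down the VMs with ISOs still attached before kicking off remediation, a rough sweep with govc (assuming it is installed and pointed at the vCenter via the GOVC_* environment variables) looks something like this; it simply walks every VM and flags any CD-ROM device backed by a datastore ISO.

    # Walk every VM and flag CD-ROM devices backed by a datastore ISO
    govc find / -type m | while read -r vm; do
      govc device.ls -vm "$vm" 2>/dev/null | grep -i cdrom | grep -q 'ISO \[' \
        && echo "ISO still attached: $vm"
    done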

Conclusion for the Upgrade

In the end, upgrading the customer’s three environments, vCSA, PSC, and ESXi, to 6.7 Update 1 took me about two weeks working alone. Not too shabby, considering I finished ahead of schedule even with all of the issues I faced. After the upgrade, the customer’s Cisco UCS blades started purple screening at random. After we opened a case with GSS, Cisco came out with an emergency patch that very week for the fnic driver on the customer’s UCS blades, addressing the exact issue they were facing. The customer was able to quickly patch the blades. Talk about good timing.

Part 2 Incoming

Part 2 of this series will focus on using the vCenter Converge Tool. Stay tuned.

Blog Date: 4/15/2019