Deploying Additional vCloud Director v10 Appliances for Database High Availability Configuration

Blog Date: 04/14/2020

In my last blog, I walked through the process of Deploying the vCloud Director Appliance v10, and today’s blog will feature the process of deploying two additional standby appliances to create an HA database configuration for vCD. To get an idea of what that architecture would look like, I’ll rip this excellent diagram from VMware’s own documentation.

Deploying additional appliances is pretty straightforward, so let’s get started.

1 – Find and upload the OVF for vCD.

2 – Name the VM, select the datacenter and virtual machine folder.

3 – Select the compute cluster

4 – The primary appliance has already been deployed. It is important to note that the standby appliances have to be deployed at the same size. Because our first primary appliance was deployed as Small, so too shall the standby appliances be. VMware’s sizing guide can be found here.

5 – Select desired storage disk format and storage where the appliance will reside.

6 – Configure the networks for each network interface, keeping in mind that they will be in reverse order as discussed before.

7 – Fill out the template customization page just like before. Remember, all fields, including the administrator email, are required.

Note:
– Be sure to use the same “System name” that was used for the original vCD primary appliance deployment.
– For the “Installation ID” field, make sure the value increases with the number of the appliance being deployed. In this demonstration I am deploying the 2nd and 3rd appliances, so the installation IDs would be 2 and 3 respectively.

8 – On the summary page, verify the deployment and click finish.

9 – Before starting the appliance, it may be a good idea to take a snapshot. If the configuration scripts fail when the appliance is first started, the appliance will need to be redeployed. I’d also take a snapshot of the primary appliance, so that any failed join attempts can be rolled back.

10 – Once you have started the appliance, watch for the “Guest OS Initialization Script”. On a successful run it should take a couple of minutes to complete. If it runs for less than 10 seconds, there was a problem and the appliance will need to be redeployed.

11 – After the appliance boots, check /opt/vmware/var/log/vcd/setupvcd.log to validate a successful cluster join. The same log is also useful if the appliance deployment failed.

A successful join would look something like this:
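If you’d rather not eyeball that log by hand, a quick scan from the appliance shell works too. Below is a minimal sketch, assuming Python 3 is available on the appliance (or run it anywhere you’ve copied the log); the “success”/“error”/“fail” keywords it searches for are assumptions on my part, so adjust them to whatever your setupvcd.log actually prints.

```python
#!/usr/bin/env python3
# Minimal sketch: scan setupvcd.log for obvious success/failure markers.
# The keywords below are assumptions -- adjust them to match what your
# appliance actually logs.
from pathlib import Path

LOG = Path("/opt/vmware/var/log/vcd/setupvcd.log")

def scan_setup_log(log_path: Path = LOG) -> None:
    if not log_path.exists():
        print(f"{log_path} not found -- did the guest OS initialization script run?")
        return
    success, failure = [], []
    for line in log_path.read_text(errors="replace").splitlines():
        lowered = line.lower()
        if "success" in lowered:
            success.append(line)
        if "error" in lowered or "fail" in lowered:
            failure.append(line)
    print(f"{len(success)} success line(s), {len(failure)} error/fail line(s)")
    # Show how the run ended: the last few error lines, or success lines if clean.
    for line in (failure or success)[-5:]:
        print(line)

if __name__ == "__main__":
    scan_setup_log()
```

Copying the log off the appliance with scp and running the script locally works just as well.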

12 – Now deploy the 3rd standby appliance using steps 1 through 11.

13 – Once the 3rd appliance has been deployed, it would be a good idea to log into the primary appliance’s 5480 page to validate the health of the new DB cluster.
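If you prefer checking this from a script instead of the UI, the appliance also exposes a management API on port 5480. The sketch below shows how I’d query it with Python and the requests library; the /api/1.0.0/nodes path, the basic-auth root login, and the hostname are all assumptions from my lab, so verify them against the appliance API documentation for your version.

```python
#!/usr/bin/env python3
# Hedged sketch: pull DB cluster/node status from the appliance management
# API on port 5480. The endpoint path and root basic-auth are assumptions
# from my lab (vCD 10.x) -- confirm against your version's API docs.
import json

import requests
from requests.auth import HTTPBasicAuth

APPLIANCE = "vcd-cell01.corp.local"  # hypothetical primary appliance FQDN
ROOT_PASSWORD = "VMware1!"           # appliance root password (example only)

def get_cluster_nodes(host: str = APPLIANCE, password: str = ROOT_PASSWORD):
    url = f"https://{host}:5480/api/1.0.0/nodes"
    resp = requests.get(
        url,
        auth=HTTPBasicAuth("root", password),
        verify=False,  # lab only: the 5480 endpoint typically uses a self-signed cert
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(json.dumps(get_cluster_nodes(), indent=2))
```

You’re looking for the primary plus two standbys reporting a healthy, running state, which is the same thing the 5480 UI shows graphically.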

14 – Log into the provider page (https://appliance_name.com/provider)

15 – Here we can also see the state of the cloud cells.

End – I’ll finish this blog post here. In my next blog, I’ll walk through the steps of configuring the newly deployed vCloud Director environment. Stay tuned.

Deploying the vCloud Director v10 Appliance

Blog Date: 04/07/2020

Prior to this blog post, I blogged about and walked through the steps of creating an NFS Linux server using CentOS 7. You can find the link to that blog post here.

The VMware Cloud Director (vCD) platform is primarily used by service providers as a cloud offering for their customers. Back when I worked for a service provider, the bulk of my experience came from the version 8.x days, when vCD was a software package to be installed on a Linux VM. Fast forward a few years, and I’ve started deploying vCD 9.7 and vCD 10 appliances for customers as part of VMware Professional Services engagements I’ve been working on. Interestingly enough, both customers were not cloud providers, but had specific use cases that vCD addressed.

The vCD appliance deployment certainly is not as clean as other appliances like vCSA and vROps, and I’ve found there to be a few gotchas that can lead to a failed appliance deployment.

Deploying the vCD appliance

1. Like most appliance deployments, we’ll deploy from an OVF template.

2. Name the virtual machine, and select desired deployment datacenter and VM folder location.

3. Select the desired compute location

4. Select the size of the appliance. As this is the first, primary cell, select an option that contains “primary”. If you are creating a cluster and deploying appliance cells two and three, you’d select “standby” here. The “vCD Cell Application” option would be used for the fourth appliance.
– You’ll also notice two different sizes: Small and Large. Which one you need will depend on your environment. VMware’s official sizing documentation can be found here.

5. Select desired storage disk format and storage where the appliance will reside.

6. We’ve arrived at the first gotcha: selecting the networks. This is the only OVF deployment I’ve seen that lists the NICs in reverse order. VMware states in their official documentation that “the source network list might be in reverse order. Verify that you are selecting the correct destination network for each source network.” I have yet to see the networks display in the proper order. VMware also states in their documentation here that eth0 and eth1 must be on separate networks. I’ve asked GSS why, but wasn’t given an answer. I haven’t found an issue with both connections being on the same network, but for demonstration purposes we’ll do as the official documentation says. Note: I have noticed, at least in my lab, that the appliance uses eth1 to connect to the NFS server (see the quick check below).
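To confirm which NIC your appliance actually sources NFS traffic from, the quick sketch below (run on the appliance, assuming Python 3 is available there) asks the kernel which local address it would use to reach the NFS server; compare that address against the addresses shown by ip -4 addr show for eth0 and eth1. The NFS hostname is a placeholder.

```python
#!/usr/bin/env python3
# Sketch: ask the kernel which local IP it would use to reach the NFS server.
# Compare the printed address against the addresses on eth0/eth1 to see which
# NIC carries the NFS traffic. The NFS host below is a placeholder.
import socket

NFS_HOST = "nfs01.corp.local"  # hypothetical -- replace with your NFS server
NFS_PORT = 2049                # standard NFS port

def local_source_ip(target: str, port: int = NFS_PORT) -> str:
    # A UDP connect() sends no packets; it only resolves the routing decision.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.connect((target, port))
        return sock.getsockname()[0]

if __name__ == "__main__":
    print(f"Traffic to {NFS_HOST} would leave from {local_source_ip(NFS_HOST)}")
```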

7. The second gotcha: filling out the template customization page. It’s not indicated here that ALL fields are REQUIRED. Yes, even the email address is a hard requirement, even though no other appliance deployment requires it.

8. On the summary page, verify the deployment and click finish.

9. Before starting the appliance, it may be a good idea to take a snapshot. If the configuration scripts fail when the appliance is first started, the appliance will need to be redeployed.

10. Once you have started the appliance, watch for the “Guest OS Initialization Script”. On a successful run it should take a couple of minutes to complete. If it runs for less than 10 seconds, there was a problem and the appliance will need to be redeployed.

10a – If the appliance failed to deploy, log into the appliance as root, and look at the /opt/vmware/var/log/vcd/setupvcd.log for details.

10b – On a successful run, you’d see something similar to:

11 – On a successful deployment, log into the appliance 5480 page, and you should see something similar to:

12 – The primary appliance has successfully been deployed. If additional standby appliances are needed, now would be the best time to deploy them.

End – That’s it. In upcoming blog posts, I’ll walk through the process of deploying additional standby appliances, and the initial configuration of vCloud Director.

The Home Lab Part 2

The very long overdue follow-up post to my The Home Lab entry made earlier this year.  I recently purchased another 64GB (2 x 32GB) of Black Diamond DDR4 memory to bring my server up to 128GB.  I also had some old 1TB spinning disks that I installed in the box for some extra storage, although I will phase them out with more SSDs in the future.  So as a recap, this is my setup now:


Motherboard

SUPERMICRO MBD-X10SDV-TLN4F-O Mini ITX Server Motherboard Xeon processor D-1541 FCBGA 1667

Newegg


Memory


(x2) Black Diamond Memory 64GB (2 x 32GB) 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model BD32GX22133MQR26

Newegg

M.2 SSD


WD Blue M.2 250GB Internal SSD Solid State Drive – SATA 6Gb/s – WDS250G1B0B

Newegg

SSD


(x 2) SAMSUNG 850 PRO 2.5″ 512GB SATA III 3D NAND Internal Solid State Drive (SSD) MZ-7KE512BW

Newegg


Case


SUPERMICRO CSE-721TQ-250B Black Mini-Tower Server Case 250W Flex ATX Multi-output Bronze Power Supply

Newegg


Additional Storage

x2 1TB Western Digital Black spinning disks


Initially when I built the lab, I decided to use VMware Workstation, but I recently rebuilt it, installing ESXi 6.7 as the base, largely for better performance and reliability.  For the time being this will be a single-host environment, but keeping with the versioning, vCSA and vROps are 6.7 as well.  Can an HTML5 interface be sexy?  This has come a long way from the Flash client days.


I decided against fully configuring this host as a single vSAN node, just so that I can have the extra disk.  However, when I do decide to purchase more hardware and build a second or third box, this setup will allow me to grow my environment and reconfigure it for vSAN use.  That said, I am tempted to ingest the SSDs into my NAS, carve out datastores from it, and not use vSAN, at least for the base storage.


Networking is flat for now, so there’s nothing really to show here.  As I expand and add a second host, I will be looking at some networking hardware, and have my lab in its own isolated space.

Now that I am in the professional services space, working with VMware customers, I needed a lab that was more production-like.  I’m still building out the lab, so I’ll have more content to come.

The Home Lab Hardware


Setup

I decided to go with a Supermicro build as I wanted something power efficient, yet expandable, and this motherboard supports up to 128GB of ECC RDIMM DDR4 2133MHz server-grade memory.  With this setup, when I feel the need to expand my lab, I can build two more nodes, and I’ll have a rather nice vSAN cluster.  However, I’m hoping the cost of DDR4 memory will have come down by then…

I did look at the Supermicro SYS-E300-8D and SYS-E200-8D style micro servers, but like most, I was concerned about the fan noise, and thus decided to go with a slightly larger chassis to get the larger fan.  Honestly, the fan in the unit I bought makes no more noise than a regular desktop computer.

Here’s my hardware:


Motherboard


SUPERMICRO MBD-X10SDV-TLN4F-O Mini ITX Server Motherboard Xeon processor D-1541 FCBGA 1667 

Newegg


Memory


Black Diamond Memory 64GB (2 x 32GB) 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model BD32GX22133MQR26

Newegg


M.2 SSD


WD Blue M.2 250GB Internal SSD Solid State Drive – SATA 6Gb/s – WDS250G1B0B

Newegg


SSD


(x 2) SAMSUNG 850 PRO 2.5″ 512GB SATA III 3D NAND Internal Solid State Drive (SSD) MZ-7KE512BW

Newegg


Case


SUPERMICRO CSE-721TQ-250B Black Mini-Tower Server Case 250W Flex ATX Multi-output Bronze Power Supply

Newegg

Who doesn’t love some internal shots after the lab-box has been put together?  🙂

In the coming blog posts, I’ll be building out my lab.  Stay tuned….