vCenter MOB No Healthy Upstream Error in VMware Cloud Foundation 4.X

Blog Date: September 25, 2024

One of my customers had a strange issue where the vCenter Managed Object Browser (MOB) wasn’t working on some of the vCenters in their VMware Cloud Foundation 4.X deployment.

The 10 vCenters run in Enhanced Linked Mode, and out of the 10, the MOB was working on only one management vCenter. All other services on the affected vCenters appeared to be working fine.

On the vCenter, we can check whether vpxd-mob-pipe is listed in the /var/run/vmware directory with the following command:

ls -la /var/run/vmware/

If we do not see vpxd-mob-pipe, then we need to look at the vpxd.cfg file. Specifically, we are looking for the <enableDebugBrowse> parameter. If this is set to false, the MOB will not work.

vi /etc/vmware-vpx/vpxd.cfg

Once vpxd.cfg opens, we can search the file by pressing the ‘ / ‘ key and entering:

/<enableDebugBrowse>

and then press enter.

This should take us to where we need to be. In my case, it was set to false as shown below:

<enableDebugBrowse>false</enableDebugBrowse>

Press the ‘INSERT’ key (or ‘i’) to enter insert mode, and change >false< to >true<:

<enableDebugBrowse>true</enableDebugBrowse>

Hit the ‘ESC’ key, then type ‘ :wq! ‘ and press Enter to save and exit the vpxd.cfg file.

:wq!
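If you prefer a non-interactive edit, the same change can be scripted with sed. This is a minimal sketch using a stand-in copy of the file so it can be tried safely; on the appliance, the real path is /etc/vmware-vpx/vpxd.cfg, and you should back the file up first.

```shell
# Stand-in file so the edit can be demonstrated safely; on a real vCenter
# appliance, CFG would be /etc/vmware-vpx/vpxd.cfg.
CFG=./vpxd.cfg
echo '<enableDebugBrowse>false</enableDebugBrowse>' > "$CFG"   # stand-in content
cp "$CFG" "$CFG.bak"                                           # keep a backup
# Flip the parameter from false to true:
sed -i 's|<enableDebugBrowse>false<|<enableDebugBrowse>true<|' "$CFG"
grep enableDebugBrowse "$CFG"
```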

Now we need to stop and start the vmware-vpxd service with the following command:

service-control --stop vmware-vpxd && service-control --start vmware-vpxd

Once the service restarts, you should now be able to access the vCenter MOB.

VMware Cloud Foundation SDDC Manager Unable To Remediate Edge Admin and Audit Passwords. Part 2.

Blog Date: 08/23/2024
VMware Cloud Foundation 4.x

Continuing from my previous blog post, where the VMware Cloud Foundation 4.x SDDC Manager was unable to establish an SSH connection to the NSX Edges: we determined that at some point the edges had been redeployed via NSX-T instead of through the SDDC Manager, and we had to update the edge IDs in the SDDC Manager database. I’d certainly recommend checking that blog out here –> VMware Cloud Foundation SDDC Manager Unable To Remediate Edge Admin and Audit Passwords. Part 1.

In this blog, I will go through the second issue, where we investigated the logs and identified that the HostKey of the edges had been changed, and the process we used to fix it and restore the SDDC Manager’s communication with the edges, so that we can successfully manage them via the SDDC Manager in VMware Cloud Foundation.

We still see a similar error message in the SDDC Manager UI when we attempt to remediate the edges’ admin and audit passwords. We established an SSH session to the SDDC Manager to review the operationsmanager.log located in /var/log/vmware/vcf/. We ran “less operationsmanager.log” and searched for the edge, in this example “qvecootwedgen01”.

After searching the log, we can see an error in the operationsmanager.log indicating that the HostKey has been changed. To resolve this issue, we can use a script called fix_known_hosts.sh, created by Laraib Kazi to address SSH failures from the SDDC Manager caused by a HostKey mismatch. The script removes the erroneous entries from the four known_hosts files the SDDC Manager maintains and replaces them with current ones. HostKey mismatches can occur for various reasons, such as restoring from a backup, a manual rebuild, or manual intervention that changed the HostKey. It is recommended to take a snapshot of the SDDC Manager before executing the script. The script can be downloaded from his GitHub page here.

Upload the script to a safe spot on the SDDC Manager. You can put it in the /tmp directory, but remember it will be deleted on the next reboot. The script can be run on the SDDC Manager as is. However, you will want to prepare beforehand: collect the FQDN and IP address of each edge node in a text file, as we will need those when we run the script.

**************************************************************************************************************
STOP… Before continuing, take an *offline* (powered off) snapshot of the SDDC Manager VM, as we will be updating the edge node HostKey on the SDDC Manager.
**************************************************************************************************************

Disclaimer: While this process has worked for me in different customer environments, not all environments are the same, and your mileage may vary.

Run the script:

./fix_known_hosts.sh

You’ll be prompted to enter the FQDN of the NSX-T edge being fixed.

Re-run this script against additional NSX-T edges as needed.
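For context, the core of what the script automates looks roughly like the following. This is an illustrative sketch only: the edge FQDN and the known_hosts path are stand-ins (the real script edits all four known_hosts files on the SDDC Manager), and a throwaway key is generated just so the removal step can be demonstrated.

```shell
EDGE_FQDN=edge01.example.com   # hypothetical edge FQDN
KNOWN_HOSTS=./known_hosts      # stand-in file; the real script edits 4 of these
# Seed a stale entry with a throwaway key so the removal can be demonstrated:
ssh-keygen -t ed25519 -N '' -f ./stale_key -q
echo "$EDGE_FQDN $(cat ./stale_key.pub)" > "$KNOWN_HOSTS"
# Remove the stale HostKey entry for the edge:
ssh-keygen -R "$EDGE_FQDN" -f "$KNOWN_HOSTS"
# Then re-record the current key (needs connectivity, so shown commented out):
# ssh-keyscan "$EDGE_FQDN" >> "$KNOWN_HOSTS"
```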

Now you’re ready to try password operations again in the SDDC manager against the edge(s). If you elected to create a new password on the NSX-T edge, you’ll need to choose the password “remediation” option in the SDDC manager to update the database with the new password created. If you set the password on the NSX-T edge back to what the SDDC manager already had, then just use the password “rotate now” function.

SDDC password operations should now be working as expected. If this did not resolve the issue, I would revert to the snapshot and contact support for further troubleshooting.

If this resolved your issue, don’t forget to clean your room, and delete any snapshots taken.

See you there! We’ll be presenting DRaaS and On-Premises Disaster Recovery Solutions at the 2023 US VMware Explore in the CXS Theater!

It’s that time of year again, and I am very excited that VMware Explore has been moved back to sunny Las Vegas! To that end, I am also very excited to announce that I get to co-present CXS Theater session CXS1280LV this year with Jack Levy.

VMware customers interested in leveraging new or existing cloud investments for disaster recovery and ransomware recovery will most certainly be interested in attending our 45-minute presentation at the CXS Theater in the Solutions Exchange area.

A link to the CXS Theater session in the content catalog can be found here: DRaaS and On-Premises Disaster Recovery Solutions: Go from Zero to Hero

In today’s world, disaster recovery is not only about protecting the enterprise during a data center failure. The truth is there are bad actors out there also trying to extort money through ransomware. It’s time to get serious about disaster recovery, and VMware is here to help. Protect your business from human, climate, and ransomware disasters. Learn from our experts about VMware’s various disaster recovery solutions, from on-premises to our disaster recovery-as-a-service (DRaaS) offerings with ransomware recovery options in the cloud. Define the recovery strategy and protect your assets. In this session, we’ll be covering VMware Site Recovery Manager with vSphere Replication, VMware Cloud Disaster Recovery plus the optional ransomware recovery capabilities, and we’ll have a couple of brief recorded demos of the setup process.

We will also be giving an exclusive presentation for our TAM and VMware Success 360 customers in session TAM3792LV, which won’t be found in the content catalog.

I’ll admit that I was always excited to attend VMware Explore back when I was a customer, back when it was called VMworld. My first time as a customer was in 2016, when VMworld moved to Las Vegas. This year, the new Explore will be in Las Vegas for the first time, and it will be my first time presenting as a VMware employee. Does that mean I have come full circle?

Regardless, I am looking forward to the event, meeting with the customers I have worked with over the years as a consultant, and getting to meet some of the VMware colleagues I have had the pleasure to work with over the years.

Jack Levy and I started around the same time on the same team, the Americas Professional Services Org, and became fast friends.

We are both excited about the opportunity to present at VMware Explore, and we look forward to seeing you at our session!

Microsoft CA: Installing Custom vSphere 7 Certificates, and Enabling VMCA To Act As The Intermediate CA.

Blog Date: 3/03/2021
Versions Tested:
vCSA: 7.0.1
ESXi: 7.0.1
Microsoft CA Server: 2016

Recently I had a customer that wanted to install their custom certificates on a new vCenter, and have it act as an Intermediate CA to install approved certificates on their hosts. This is common, but it is something I had never done myself, so I ended up testing it out in my lab. In this long blog post, I will walk through:

  • Generating a Certificate Signing Request (CSR) for the vCenter.
  • Signing the request, creating the certificate using a standalone Microsoft CA.
  • Creating a trusted root chain certificate.
  • Installing the custom signed machine SSL certificate.
  • Installing the custom signed VMCA root certificate.
  • Renew host certificates and test.

Before we get started, it is worth noting that there are different certificate modes in vSphere 7. Bob Plankers does a great job explaining these modes in the following VMware blog: vSphere 7 – Certificate Management

I have done my best to focus on the steps involved, and provided those below while testing this out in my home lab. Your mileage may vary.
_________________________________________________________________________________________________________

Snapshot the vCenter

We will be applying Custom Machine and Trusted Root certificates to the vCenter, and it is good practice to take a snapshot of the vCenter appliance beforehand.

_________________________________________________________________________________________________________

Testing vCenter Connection with WinSCP

We will be using WinSCP to transfer the files to and from the vCenter. You should validate that you can connect to yours first. If you are having problems connecting to the vCenter with WinSCP, follow KB2107727.

_________________________________________________________________________________________________________

GENERATE CERTIFICATE SIGNING REQUEST (CSR)

Start an SSH session with the vCenter appliance, and enter “/usr/lib/vmware-vmca/bin/certificate-manager” to launch the utility. We want option #2.

Enter “Y” at the prompt: Do you wish to generate all certificates using configuration file : Option[Y/N] ? :

Enter privileged vCenter credentials; the default administrator@vsphere.local will do the trick. Hit the Enter key, then input the password and hit Enter again.

If a valid certool.cfg file already exists with your organization’s information, you can select “N” here.

If there is no certool.cfg file, or you would like to review the file, use option “Y” here.

In the next series of prompts, you will be required to enter information specific to your environment. The inputs below are examples I used for my lab; yours will be different:
value for ‘Country’ [Default value : US] : US
value for ‘Name’ [Default value : CA]: cb01-m-vcsa01.cptvops-lab.net
value for ‘Organization’ [Default value : VMware]: CaptainvOPS
value for ‘OrgUnit’ [Default value : VMware Engineering]: CaptainvOPS Lab
value for ‘State’ [Default value : California]: Colorado
value for ‘Locality’ [Default value : Palo Alto] : Fort Collins
value for ‘IPAddress’ (Provide comma separated values for multiple IP addresses) [optional]: 10.0.1.28
value for ‘Email’ [Default value : email@acme.com] : info@cptvops-lab.net
value for ‘Hostname’ (Provide comma separated values for multiple Hostname entries) [Enter valid Fully Qualified Domain Name(FQDN), For Example : example.domain.com] : cb01-m-vcsa01.cptvops-lab.net
Enter proper value for VMCA ‘Name’ : cb01-m-vcsa01.cptvops-lab.net


Enter option “1” to generate the Certificate Signing Requests.

It will prompt for an export directory. In this example I used /tmp/

Now using WinSCP, copy the files off to the workstation you will be using to connect to the Microsoft CA.

_________________________________________________________________________________________________________

SIGNING THE REQUEST, CREATING THE CERTIFICATE USING STANDALONE MICROSOFT CA

I am using a standalone Microsoft CA I configured on my lab’s jump server, and don’t have the web portal. Launch the Certification Authority app.

Right-click the CA, select all tasks, and select Submit new request.

Once you browse to the directory where you stored the “vmca_issued_csr.csr” and “vmca_issued_key.key” files, you will need to change the view to “All Files (*.*)” in the lower right.

Now select the “vmca_issued_csr.csr”

Now go to “Pending Requests”. Right-click on pending cert request, go to “all tasks”, and select “Issue”.

Now go to “Issued Certificates”, double-click to open the certificate, go to the “details” tab and click the Copy to File… button.

When the wizard starts, click Next.

Select “Base-64-encoded X.509 (.CER)”, and click Next.

Click Browse

Type in the file name and click Save. In this example, I just use “vmca”.

Click Next, and then click Finish. Now click OK to close the wizard.


_________________________________________________________________________________________________________

CREATING A TRUSTED ROOT CHAIN CERTIFICATE

For this next part we will need the root certificate from the Microsoft CA. Right-click on the CA and select “Properties”

On the General tab, click the “View Certificate” button. Then on the new Certificate window that opens, click the details tab and then the “Copy to File…” button

When the wizard starts, click Next.

Select “Base-64-encoded X.509 (.CER)”, and click Next.

Just like before, browse to a location to save the cert, and give it a meaningful name. In this example, I just use “root-ca”.

Click Next and then click Finish. Close the windows.

In order to create a trusted root, we need to create a new file that contains the root chain. This file must include the contents of vmca.cer first, and then the contents of root-ca.cer beneath it. I just open the vmca.cer and root-ca.cer files with Notepad++ and combine them.

Save it as a new file. In this example I use: vmca_rootchain.cer.
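If you’d rather build the chain file from a command line than an editor, the concatenation looks like this. Stand-in file contents are used below so the ordering is visible; on your workstation, the two inputs are the real vmca.cer and root-ca.cer you exported.

```shell
# Stand-in contents; in practice these are the real exported certificates.
printf 'LEAF-CERTIFICATE-PEM\n' > vmca.cer
printf 'ROOT-CA-CERTIFICATE-PEM\n' > root-ca.cer
# Order matters: the machine (leaf) certificate first, the root beneath it.
cat vmca.cer root-ca.cer > vmca_rootchain.cer
cat vmca_rootchain.cer
```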

Using WinSCP, I create a new directory in the vCenter’s root directory called CertStore, and then copy the “vmca.cer”, “vmca_rootchain.cer”, and the “vmca_issued_key.key” there.

_________________________________________________________________________________________________________

INSTALLING THE CUSTOM MACHINE SSL CERTIFICATE

Open an SSH session to the vCenter and launch the certificate-manager: “/usr/lib/vmware-vmca/bin/certificate-manager”. First we will replace the Machine SSL certificate, so select option 1.

Again we are prompted for vCenter authoritative credentials, and just like before we’ll use the administrator@vsphere.local account and password.

Now we reach the menu where we will use Option 2 to import custom certificates. Hit enter.

It will prompt for the path to the custom certificate file for the Machine SSL. In this example I used: /CertStore/vmca.cer

Now it prompts for the custom key file. In this example I used: /CertStore/vmca_issued_key.key

Now it will ask for the signing certificate of the Machine SSL certificate. Here we will use the custom vmca_rootchain.cer file we created earlier: /CertStore/vmca_rootchain.cer

Respond with “Y” to the prompt: You are going to replace Machine SSL cert using custom cert
Continue operation : Option[Y/N] ? :

Assuming all of the information in the files was correct, the utility will begin replacing the machine SSL certificate. Watch the console for any errors and troubleshoot accordingly. 9 times out of 10, fat fingers are the cause of errors. Or dumb keyboards…
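One quick pre-flight check that catches many of those errors: confirm the certificate and key actually belong together by comparing their modulus digests. A sketch with a throwaway pair is shown below; on the appliance you would point the same two commands at /CertStore/vmca.cer and /CertStore/vmca_issued_key.key.

```shell
# Generate a throwaway cert/key pair purely for demonstration:
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.cer \
  -days 1 -subj "/CN=demo" 2>/dev/null
# The two digests match only when the key belongs to the certificate:
openssl x509 -noout -modulus -in demo.cer | openssl md5
openssl rsa  -noout -modulus -in demo.key | openssl md5
```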

In my case it finished successfully.


_________________________________________________________________________________________________________

INSTALLING THE CUSTOM SIGNED VMCA ROOT CERTIFICATE

Relaunch the certificate-manager: “/usr/lib/vmware-vmca/bin/certificate-manager”. Use Option #2 now.

Enter Option: Y

Enter the authoritative vCenter account and password. Just like before, I’m using the administrator@vsphere.local account.

We already have a known good certool.cfg file. We’ll skip ahead to the import, so use option: N

Use option #2 to import the custom certificates.

We are prompted for the custom certificate for root. Here we will use the custom vmca_rootchain.cer file we created earlier: /CertStore/vmca_rootchain.cer

Here we are prompted for the valid custom key for root. Just like last time we will use: /CertStore/vmca_issued_key.key

Use option “Y” to replace Root Certificates at the next prompt

Now the utility will replace the root certificate.


_________________________________________________________________________________________________________

TESTING THE VSPHERE CERTIFICATE

After clearing the browser cache, we can see the secure padlock on the login page for vSphere 7.

Now log into vSphere, go to Administration, and then Certificate Management. Depending on how quickly you logged in after applying the certs, you may see an error on this page. Give it up to 5 minutes, and you’ll see a prompt at the top of the screen asking you to reload. Do it. Now you will see a clean Certificate Management screen. You should see a _MACHINE_CERT with your organization’s details. You will also see three Trusted Root certificates, one of which will have your organization’s details.


_________________________________________________________________________________________________________

RENEW HOST CERTIFICATES AND TEST

If you have hosts already attached to the vCenter and you would like to “Renew” their certificates, by default you will need to wait 24 hours before the certificate can be updated. Otherwise, vCenter will return an error.

If you need to update the certificate right away, follow KB2123386 to change the setting from 1440 to 10. This will allow for updating the ESXi certificate right away. I’d personally change it back after.

Once 24 hours have passed, or you have followed KB2123386 as mentioned above, you can “Renew” the certificate on the ESXi host(s). You’ll see the Issuer of the certificate has changed and should reflect your information. Note that this operation doesn’t appear to work while the host is in maintenance mode.

If you browse to the ESXi host, it too will now have a padlock on its login page signifying a valid certificate.

Update Multiple 6.7 ESXi Hosts Root Password Via Host Profile.

Blog date: September 01, 2020
Completed on vSphere version: 6.7

Today my customer needed to change the root password for roughly 36 hosts across two data centers. These are two data centers that were recently built as part of my residency with them, and they have already seen the benefits of using host profiles. Today I was able to show them one more.

VMware KB68079 details the process should the root password become unknown on a host. The same process can be applied to update the password on all hosts with that host profile attached. At the time of writing this article, all hosts are compliant with the current host profile, and there are no outstanding issues.

In the vSphere client, go to ‘Policies and Profiles’ and select ‘Host Profiles’ in the left column, click and select the desired host profile on the right.

  • Edit the desired host profile.
  • In the search field, type root and hit enter.
  • Select root in the left column.
  • In the right column, change the field below ‘Password’ to ‘Fixed password configuration’.

Now you are prompted with password fields and can update the root password.

Click Save once the new password has been entered.

Now you can remediate the hosts against the updated host profile, and the root account will get updated on each host.
– Out of an abundance of caution, it is always good to spot check a handful of hosts to validate the new password.

I had my customer go back and edit the host profile once more and change the ‘Password’ field back to: Leave password unchanged for the default account. Click save, and then remediate the cluster again. The new password will stay.

Before I connected with the customer today, they had already researched how to update the root password on all hosts with a script, but this method is simple, automated, and built into vSphere.

How To Set Up an NFS Server on CentOS 7 for the vCloud Director v10 Appliance

Blog Date: 10/25/2019
Last Updated: 04/06/2020

For the purposes of this demonstration, I will be configuring NFS services on a CentOS 7 VM, deployed to a vSphere 6.7 U3 homelab environment.

NFS Server VM Configuration

Host Name: cb01-nfs01
IP Address: 10.0.0.35
CPU: 2
RAM: 4GB

Disk 1: 20GB – Linux installation (thin provisioned)
Disk 2: 100GB – Will be used for the vCD NFS share (thin provisioned)

Configure the vCD NFS share disk

For this demonstration, I have chosen not to configure Disk 2 during the VM deployment. Therefore, this “how-to” assumes that a new, unconfigured disk has been added to the VM, and that the NFS server has been powered on afterward.

1) Open a secure shell to the NFS server. I have switched to the root account.
2) On my NFS server, the new disk will be “/dev/sdb”. If you are unsure, run the following command to identify the new disk on yours:

fdisk -l

3) We need to partition the newly added disk, in my case /dev/sdb, so run the following command:

fdisk /dev/sdb

4) Next, with the fdisk utility, we need to partition the drive. I used the following sequence:
(for new partition) : n
(for primary partition) : p
(default 1) : enter
(default first sector) : enter
(default last sector) : enter

5) Before saving the partition, we need to change it to ‘Linux LVM’ from its current format ‘Linux’. We’ll first use the option ‘t’ to change the partition type, then use the hex code ‘8e’ to change it to Linux LVM like so:

Command (m for help): t
Selected partition 1

Hex code (type L to list all codes): 8e
Changed type of partition ‘Linux’ to ‘Linux LVM’.

Command (m for help): w

When you see “Command (m for help):” again, type ‘w’ to save the configuration.

Create a Physical Volume, Volume Group, and Logical Volume

6) Now that the partition is prepared on the new disk, we can go ahead and create the physical volume with the following command:

# pvcreate /dev/sdb1

7) Now we need to create a volume group. You can name it whatever suits your naming standards. For this demonstration, I’ve created a volume group named vg_nfsshare_vcloud_director using /dev/sdb1, with the following command:

# vgcreate vg_nfsshare_vcloud_director /dev/sdb1

Creating a volume group allows us the possibility of adding other devices to expand storage capacity when needed.

8) When it comes to creating logical volumes (LV), the distribution of space must take into consideration both current and future needs. It is considered good practice to name each logical volume according to its intended use.
– In this example I’ll create one LV named vol_nfsshare_vcloud_director using all the space.
– The -n option is used to indicate a name for the LV, whereas -l (lowercase L) is used to indicate a percentage of the remaining space in the container VG.
The full command used looks like:
# lvcreate -n vol_nfsshare_vcloud_director -l 100%FREE vg_nfsshare_vcloud_director

9) Before a logical volume can be used, we need to create a filesystem on top of it. I’ve used ext4 since it allows us both to increase and reduce the size of the LV.
The command used looks like:

# mkfs.ext4 /dev/vg_nfsshare_vcloud_director/vol_nfsshare_vcloud_director

Writing the filesystem will take some time to complete. Once successful you will be returned to the command prompt.

Mounting the Logical Volume on Boot

10) Next, create a mount point for the LV. This will be used later on for the NFS share. The command looks like:

# mkdir -p /nfsshare/vcloud_director

11) To better identify the logical volume, we need its UUID (a non-changing attribute that uniquely identifies a formatted storage device). The command looks like:

# blkid /dev/vg_nfsshare_vcloud_director/vol_nfsshare_vcloud_director

In this example, the UUID is: UUID=2aced5a0-226e-4d36-948a-7985c71ae9e3

12) Now edit the /etc/fstab and add the disk using the UUID obtained in the previous step.

# vi /etc/fstab

In this example, the entry would look like:

UUID=2aced5a0-226e-4d36-948a-7985c71ae9e3 /nfsshare/vcloud_director ext4 defaults 0 0

Save the change with :wq

13) Mount the new LV with the following command:

# mount -a

To see that it was successfully mounted, use the following command similar to:

# mount | grep nfsshare

Assign Permissions to the NFS Share

14) According to the Preparing the Transfer Server Storage section of the vCloud Director 10.0 guide, you must ensure that its permissions and ownership are 750 and root:root.

Setting the permissions on the NFS share would look similar to:

# chmod 750 /nfsshare/vcloud_director

Setting the ownership would look similar to:

# chown root:root /nfsshare/vcloud_director
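A quick way to confirm the permissions took is stat. The sketch below uses a stand-in directory (the chown to root:root requires running as root, so it is skipped here); on the NFS server, you would check /nfsshare/vcloud_director itself.

```shell
# Stand-in directory for demonstration:
mkdir -p ./vcloud_director_demo
chmod 750 ./vcloud_director_demo
# Prints the octal mode and owner:group; expect "750 root:root" on the server:
stat -c '%a %U:%G' ./vcloud_director_demo
```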

Install the NFS Server Utilities

15) Install the package for the NFS server using the yum command:

# yum install -y nfs-utils

16) Once the packages are installed, enable and start NFS services:

# systemctl enable nfs-server rpcbind

# systemctl start nfs-server rpcbind

17) Modify the /etc/exports file to add an entry for the directory /nfsshare/vcloud_director.

– According to the Preparing the Transfer Server Storage guide, the method for allowing read-write access to the shared location for two cells named vcd-cell1-IP and vcd-cell2-IP is the no_root_squash method.

# vi /etc/exports

18) For this demonstration, my vCD appliance IP on the second NIC is 10.0.0.38, so I add the following:

/nfsshare/vcloud_director 10.0.0.38(rw,sync,no_subtree_check,no_root_squash)

– There must be no space between each cell IP address and its immediate following left parenthesis in the export line. If the NFS server reboots while the cells are writing data to the shared location, the use of the sync option in the export configuration prevents data corruption in the shared location. The use of the no_subtree_check option in the export configuration improves reliability when a subdirectory of a file system is exported.
– As this is only a lab, I have just a single vCD appliance for testing. In a proper production deployment, add additional lines for each appliance IP.
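For a multi-cell production deployment, the /etc/exports entries would look something like this (the second IP is hypothetical; use one line per cell):

```
/nfsshare/vcloud_director 10.0.0.38(rw,sync,no_subtree_check,no_root_squash)
/nfsshare/vcloud_director 10.0.0.39(rw,sync,no_subtree_check,no_root_squash)
```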

19) Each server in the vCloud Director server group must be able to mount the NFS share. Export the share by running exportfs -a to export all NFS shares. To re-export after changes, use exportfs -r.

# exportfs -a

– To check the export, run the following command:

# exportfs -v

– Validate that the NFS daemons are running on the server by using rpcinfo -p localhost or by checking the service status.

# rpcinfo -p localhost

or

# systemctl status nfs-server.service

Configure the Firewall

20) We need to configure the firewall on the NFS server to allow NFS clients to access the NFS share. To do that, run the following commands on the NFS server.

# firewall-cmd --permanent --add-service mountd

# firewall-cmd --permanent --add-service rpc-bind
# firewall-cmd --permanent --add-service nfs
# firewall-cmd --reload

21) That’s it. Now we can deploy the vCloud Director 10.0 appliance(s).

Optional NFS Share Testing

I highly recommend testing the NFS share before continuing with the vCloud Director 10.0 appliance deployment. For my testing, I deployed a temporary CentOS 7 VM with the same hostname and IP address as my first vCD appliance, and installed nfs-utils on the test VM.
# yum install -y nfs-utils

OT-1) Check the NFS shares available on the NFS server by running the following command on the test VM. Change the IP and share here to match your NFS server.

# showmount -e 10.0.0.35

As you can see, my NFS server is showing one exported share, scoped to 10.0.0.38, my only vCD appliance.

OT-2) Create a directory on the NFS test VM to mount the NFS share /nfsshare/vcloud_director, which we created on the NFS server.

# mkdir -p /mnt/nfsshare/vcloud_director

OT-3) Use the command below to mount the NFS share /nfsshare/vcloud_director from NFS server 10.0.0.35 at /mnt/nfsshare/vcloud_director on the NFS test VM.

# mount 10.0.0.35:/nfsshare/vcloud_director /mnt/nfsshare/vcloud_director

OT-4) Verify the mounted share on the NFS test VM using the mount command.

# mount | grep nfsshare

You can also use the df -hT command to check the mounted NFS share.

# df -hT

OT-5) Next, we’ll create a file in the mounted directory to verify read and write access on the NFS share. IMPORTANT: the vCD appliance deployment expects this directory to be empty; leftover files can cause the deployment to fail, so remember to clean up after the test.

# touch /mnt/nfsshare/vcloud_director/test

OT-6) Verify the test file exists by using the following command:

# ls -l /mnt/nfsshare/vcloud_director/

OT-7) Clean your room. Cleanup the directory so that it is ready for the vCD deployment.

# rm /mnt/nfsshare/vcloud_director/test

After successfully testing the share, we now know that we can write to that directory from the vCD appliance IP address, and that we can remove files.




In my next post, I will cover deploying the vCloud Director 10.0 appliance. Stay tuned!

Key Takeaways from VMworld US 2019 in San Francisco.

Looking back on this past week, all I can say is that it was pretty crazy. It was my first time to San Francisco, and I honestly left with mixed feelings on the City.

VMworld itself was pretty good! VMware cut back the general sessions to just two days (Monday and Tuesday), and I am honestly conflicted about the missing Thursday general session, as they usually showcase some non-VMware-related tech in that session.

If I could sum up VMworld in just one word this year, it would be: Kubernetes


VMware debuted their cloud management solution VMware Tanzu, in partnership with Pivotal, and showcased the ability to manage multiple Kubernetes clusters across multiple clouds from one central management dashboard, as well as Project Pacific, VMware’s endeavor to embed Kubernetes into vSphere.

VMware also added the Odyssey competition this year, just outside of the Hands-on Labs area. It was in the HOL style; however, it only gave attendees hints on what needed to be completed, and really allowed you to test your knowledge and skills to complete the task without the hand-holding that the typical HOL provides. Teams were able to compete against each other for the best times, and there were some pretty decent prizes.

All in all, it was a decent VMworld, and it will be returning to San Francisco next year. I can’t say that I enjoyed the location, especially with the homeless problem San Francisco has, and I would much rather see VMworld bring its 20k+ attendees to a cleaner city, without the drugs, panhandlers, and human waste on the streets. You’d think that as someone who grew up on a farm and is used to certain sights and smells, it wouldn’t have bothered me so much, but it took me by surprise.

This was also a special VMworld for me this year, as I was finally able to meet Pat Gelsinger. I can tell he really likes the community, and would love to stay longer and chat with everyone. I certainly would have loved the chance to talk with him longer, but I know he had other obligations that night.

The vExpert party was fun as always, and we were able to get a nice photo of the group.

The last session I attended this year was “If this then that for vSphere – the power of event-driven automation” with speakers William Lam and Michael Gasch. Several well-known VMware employees and bloggers were in attendance, including Alan Renouf, who was two chairs down from me, and for the first time I felt a crippling awkwardness of wanting to take pictures with all of them, but was so starstruck that I couldn’t bring myself to do it. I know these guys are just normal folks who happen to be stars in the vCommunity, but I had to contain myself and enjoy the session. Hopefully our paths will cross again, and I can meet them personally.

VMworld General Session, Day 1, Monday August 26th, 2019: VMware Tanzu, and Project Pacific.

The start of VMworld 2019 in San Francisco is underway, and Pat kicked off the general session talking about his excitement for being back in San Francisco, while poking fun at us “Vegas lovers”. Pat also talked about technology, our digital lives, and technologies role being a force for good. He talked about charities, and cancer research foundations.

Pat then talked about The Law of Unintended Consequences: how, as technology has advanced, we as a society have given up certain aspects of privacy, and the need to combat disinformation at scale on the social media platforms.

Surprisingly, according to Pat, Bitcoin is Bad and contributes to the climate crisis.

The first major announcement involved Kubernetes, as VMware has been focusing on containers.

Pat then announced the creation of VMware Tanzu, an initiative to provide a common platform that allows developers to build modern apps, run enterprise Kubernetes, and manage Kubernetes for developers and IT.

The second major announcement was Project Pacific, an ambitious project to unite vSphere and Kubernetes for the future of modern IT.

Interestingly, Project Pacific was announced to be 30% faster than a traditional Linux VM, and 8% faster than solutions running on bare metal.

Project Pacific brings Kubernetes to the VMware Community, and will be offered by 20K+ Partner resellers, 4K+ Service providers and 1,100+ technology partners.

Tanzu also comes with Mission Control, a centralized tool allowing IT operations to manage Kubernetes for developers and IT.

First Time Speaking At The St. Louis VMUG UserCon

Blog Date: July 21, 2019

The VMUG leadership invited me to speak at the St. Louis VMUG Usercon on April 18, 2019, and share my presentation on How VMware Home Labs Can Improve Your Professional Growth and Career.

This would be my second time giving a public presentation, but I had left the Denver VMUG UserCon with a certain charge, or a spring in my step as it were. I didn’t have a lot of time to prepare or to change up my presentation, as I also had a PSO customer to take care of. I arrived a day early for the speaker dinner that was being put on by the St. Louis VMUG leadership.

Prior to the dinner, I was able to explore the historic and picturesque city of St. Charles.

The next day, we all converged on the convention center for the St. Louis UserCon. This way to success!

Seeing your name as a speaker amongst a list of people you’ve looked forward to meeting, have met, or follow on social media, certainly is humbling.

This time, my session was in the afternoon, so in true fashion of many public speakers in the #vCommunity, I had all day to make tweaks. I was also able to join a few sessions. Finally found my room in the maze of this convention center and got setup.

The ninja and co-leader of the St. Louis UserCon, Jonathan Stewart (@virtuallyanadmi), managed to take a picture of me giving my presentation.

A special thank you to the St. Louis VMUG leadership team, who invited me out to meet and share with their community: Marc Crawford (@uber_tech_geek), Jonathan Stewart (@virtuallyanadmi) and Mike Masters (@vMikeMast)