Upgrading To vSphere 6.7 Update 1, and Using The vCenter Converge Tool: Part 2

In this second part of the blog series "Upgrading to vSphere 6.7 Update 1, and Using the vCenter Converge Tool", I will go over my experience using the Converge Tool. Let's get started.

The Basics of the vCenter Converge Tool

David Stamen (@davidstamen) put together an excellent blog on Understanding the vCenter Server Converge Tool at VMware's official blog site, which I found very useful. Shout-out to Nigel Hickey (@vCenterNerd) for answering some questions I had.

The Converge Tool basically takes the external PSC and embeds it into the vCenter appliance, like so:

Photo credit: @davidstamen

For this customer, I had three vCSAs and three PSCs that I needed to converge. Most of the blogs I found didn't cover PSCs that were joined to a domain, environments with multiple vCenters, or environments with multiple PSCs, so I thought I would write this up in a blog.

Planning the Convergence

The first thing I had to do was take note of any services registered with the SSO domain. I utilized VMware's KB2043509 to identify these services, and found none that I needed to worry about. VMware specifically calls out NSX and Site Recovery Manager (SRM), but since those were not in use at this customer, the only things I had to check were Horizon, vROps, vRLI and Zerto. Each of these services registers directly with the vCenters, so I had nothing to worry about there. If I had any services registered with the SSO domain, I'd simply need to re-register them after the converge tool was run. Since this didn't apply, I could move forward with configuring the JSON templates for the converge tool.

I also needed to understand the replication topology of the existing SSO domain. VMware KB2127057 was an excellent resource for gathering that information. Opening a PuTTY session to a vCenter and running the 'vdcrepadmin' command against each of the external PSCs, I was able to see the following:

# cd /usr/lib/vmware-vmdir/bin

./vdcrepadmin -f showpartners -h external_psc-a.domain.com -u administrator -w kjdshfsdkjfhskjdhf

ldap://external_psc-b.domain.com
ldap://external_psc-c.domain.com

-----------------------------------------------------------------

./vdcrepadmin -f showpartners -h external_psc-b.domain.com -u administrator -w kjdshfsdkjfhskjdhf

ldap://external_psc-a.domain.com
ldap://external_psc-c.domain.com

-----------------------------------------------------------------

./vdcrepadmin -f showpartners -h external_psc-c.domain.com -u administrator -w kjdshfsdkjfhskjdhf

ldap://external_psc-a.domain.com
ldap://external_psc-b.domain.com

I can see the PSCs already have a ring topology, which is the desired architecture. If I were to draw the SSO topology out, it would look something like this:
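
external_psc-a.domain.com <--> external_psc-b.domain.com
external_psc-b.domain.com <--> external_psc-c.domain.com
external_psc-c.domain.com <--> external_psc-a.domain.com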

Setting Up the JSON Templates for the Convergence Tool

The converge.json template that the converge tool uses can be found on the VMware VCSA ISO used for the 6.7 Update 1 upgrade, under the following path: DVD Drive (#):\VMware VCSA\vcsa-converge-cli\templates\

To make my life easier, I copied the contents of the entire ISO to a folder on the root of my C drive. I then made a separate folder on the root of C called converge, and created a folder for each of the three vCenters I'd be working with: vCenter-A, vCenter-B, vCenter-C. I made a copy of converge.json and placed it into each folder.

Taking a look at the converge.json for vCenter-A, the template tells you what data needs to be filled in, so pay close attention. Lines 10 – 15 need entries for the ESXi host where the vCenter resides, or for the managing vCenter appliance. Here I chose the option to use the managing ESXi host. All I needed to do was look in vSphere to see which host the vCSA appliance VM resided on. While there, I also set the cluster DRS settings to manual to prevent the VMs from moving during the upgrade. Once I had the information I needed, I completed that portion of the JSON. (I've redacted environment-specific information.)

Lines 16 – 21 need data entries for the first vCenter appliance (vCenter-A) to be converged. Here I needed the FQDN of vCenter-A, the administrator@vsphere.local account for the username, its password, and the root password of the appliance.

Lines 22 – 33 are filled out only IF the Platform Services Controller (PSC) appliance is joined to the domain. My customer's PSCs were joined to the domain, so I needed to fill this section out. Otherwise you can remove this section from the JSON.

Now, because this is the first of the three vCenters in the same SSO domain to be converged, I did not need the replication partner portion, because the first vCenter does not have a partner yet. It will be needed, however, for the second (vCenter-B) and third (vCenter-C) convergences.

Next I needed to fill out a second and third converge.json for the second and third convergences, saving each in its respective folder. For vCenter-B and vCenter-C, for the partner hostname on line 32, I used the FQDN of the first converged vCenter (vCenter-A), as that is the first partner in the SSO domain.

For vCenter-A, the first to be converged, the completed converge.json looks like this (take note of the commas, brackets, and lines removed):
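
As a rough illustration of the shape only (the key names here are approximate and from memory; always start from the converge.json template shipped on the 6.7 Update 1 ISO and keep its exact structure, commas and brackets), the file for the first vCenter ends up looking something like this, with the template's metadata keys omitted and obviously fake values:

{
    "vcenter": {
        "managing_esxi_or_vc": {
            "hostname": "esxi-host-a.domain.com",
            "username": "root",
            "password": "esxi-root-password"
        },
        "vc_appliance": {
            "hostname": "vcenter-a.domain.com",
            "username": "administrator@vsphere.local",
            "password": "sso-admin-password",
            "root_password": "appliance-root-password"
        },
        "ad_domain_info": {
            "domain_name": "domain.com",
            "username": "ad-join-account",
            "password": "ad-join-password"
        }
    }
}

Note there is no replication partner section here, since vCenter-A is the first node to be converged.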

For the second convergence (vCenter-B), and third convergence (vCenter-C), the completed converge.json looks like this:
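
Again as an approximation (the key names may differ in your template), the only structural difference for vCenter-B and vCenter-C is the replication section pointing back at the already-converged vCenter-A:

        "replication": {
            "partner_hostname": "vcenter-a.domain.com"
        }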

Now that the converge.json is done for each of the vCenters, we can work on the decommission.json.

Here is the template VMware provides in the same directory:

Lines 11 – 15 require input for the managing vCenter or ESXi host of the external PSC. Again, just like with the vCenter, I used the ESXi host that the PSC is running on.

Lines 16 – 21 need data for the Platform Services Controller that will be decommissioned.

Lines 30 – 34 require information for the managing vCenter or ESXi host of the vCenter that the PSC was paired with. Again, here I just used the ESXi host that the vCenter is currently running on.

Lines 35 – 39 require the information for the vCenter itself that the PSC is paired with.

Now that the decommission.json is filled out for the first vCenter (vCenter-A), I had to repeat the process for the second and third vCenters (vCenter-B, vCenter-C). The full decommission.json should look like this:
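
As a sketch of the overall shape only (again, the key names are approximate; use the decommission template from the ISO as the source of truth), the file pairs the external PSC being removed with the vCenter it served:

{
    "psc": {
        "managing_esxi_or_vc": {
            "hostname": "esxi-host-a.domain.com",
            "username": "root",
            "password": "esxi-root-password"
        },
        "psc_appliance": {
            "hostname": "external_psc-a.domain.com",
            "username": "administrator@vsphere.local",
            "password": "sso-admin-password",
            "root_password": "appliance-root-password"
        }
    },
    "vcenter": {
        "managing_esxi_or_vc": {
            "hostname": "esxi-host-b.domain.com",
            "username": "root",
            "password": "esxi-root-password"
        },
        "vc_appliance": {
            "hostname": "vcenter-a.domain.com",
            "username": "administrator@vsphere.local",
            "password": "sso-admin-password"
        }
    }
}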

Now that both the converge.json and decommission.json have been filled out for each of my three environments, and stored in the same directory on the root of C, I can move forward with the convergence process.

Prerequisites and Considerations Before Starting the Convergence Process

  • The converge tool only supports VCSA and PSC 6.7 Update 1. All nodes must be on 6.7 Update 1 before converging.
  • If you are currently running a Windows vCenter Server or PSC, you must migrate to the appliance first.
  • Before converging, take backups of your VCSA(s) and PSCs in the vSphere SSO domain (VM snapshots and DB backups).
  • Know all other solutions using the PSC for authentication in the environment. They will need to be re-registered after the convergence completes and before decommissioning.
  • A machine on a routable network which can communicate with the VCSA and PSC will be used to run the convergence and decommission process.
  • Set the DRS Automation Level to manual, and the Migration Threshold to conservative; there will be issues if the VCSA being converged is moved during the process (see the PowerCLI example after this list).
  • If VCHA is enabled, it must be disabled prior to running the convergence process.
  • The converge process will handle PSC HA load balancers. Make sure you point to the VIP in the JSON template if you have them.
  • All vSphere SSO data is migrated with the exception of local OS users.
  • Best to take snapshots of the vCSA and external PSC VMs before continuing. We’ve already backed up the database, but it doesn’t hurt to have snapshots as well.
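
If you would rather flip DRS from PowerCLI than the UI, here is a minimal sketch (the cluster name is a placeholder, and it assumes an existing Connect-VIServer session; the migration threshold itself is easiest to change in the vSphere Client):

# Set DRS to manual on the cluster hosting the vCSA and PSC VMs
Set-Cluster -Cluster "Management-Cluster" -DrsAutomationLevel Manual -Confirm:$false

# Revert once the convergence has completed
Set-Cluster -Cluster "Management-Cluster" -DrsAutomationLevel FullyAutomated -Confirm:$false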

Executing the Converge Tool

Now that the converge.json template for each vCenter (vCenter-A, vCenter-B, vCenter-C) is filled out properly, we can execute. We will run the converge tool against the first vCenter (vCenter-A). Note: the converge tool can only be run against one vCSA at a time.

In PowerShell, we can first run the following command before proceeding with the upgrade, to see what options/parameters are available with the converge tool.

.\vcsa-converge-cli\win32\vcsa-util.exe converge --help 

To execute the convergence tool against the first vCenter (vCenter-A), I ran the following command:

.\vcsa-converge-cli\win32\vcsa-util.exe converge --no-ssl-certificate-verification --backup-taken C:\pathtofile.json

The output in PowerShell should look something like this:

It will then ask you to reboot the first vCenter before continuing.

Once the first vCenter (vCenter-A) came back up, I executed the converge tool for the second vCenter (vCenter-B). Once it completed, I restarted the appliance.

Finally, the last vCenter (vCenter-C) was on deck. I ran the converge tool with its converge.json against that vCenter, and once it completed, I restarted it.

Here is where you would need to re-register any solutions that were pointed at the old SSO domain, but since I didn't have any, I could move forward with the decommissioning steps.

Decommissioning the Old External Platform Services Controllers (PSC)

Use the Converge Tool with the decommission option to remove the external PSCs. Just like before, we need to do this one PSC at a time. The command looks something like this:

 .\vcsa-converge-cli\win32\vcsa-util.exe decommission --no-ssl-certificate-verification C:\pathtofile.json 

Once the process completes successfully, move on to the next PSC. Repeat the process until all PSCs have been decommissioned.

Validate the SSO Replication Topology After the Converge Process

If you'll remember, when I set up the converge.json, I had the second vCenter (vCenter-B) and third vCenter (vCenter-C) replication partner set to the first converged vCenter (vCenter-A). My replication topology currently looks like this:
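
vCenter-B <--> vCenter-A
vCenter-C <--> vCenter-A
(no replication agreement yet between vCenter-B and vCenter-C)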

I needed to close the loop between vCenter-B and vCenter-C. Using VMware's KB2127057, I used the 'createagreement' parameter. I opened a PuTTY session to vCenter-B and ran the following command:

# cd /usr/lib/vmware-vmdir/bin

./vdcrepadmin -f createagreement -2 -h vcenter-b.domain.com -H vcenter-c.domain.com -u Administrator -w VMw@re123

Now that the SSO replication agreement has been made between vCenter-B and vCenter-C, my replication topology looks like this:
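
vCenter-A <--> vCenter-B
vCenter-B <--> vCenter-C
vCenter-C <--> vCenter-A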

I'm not going to lie: the hardest part of using the converge tool was just getting started. I've been through enough fires in my day to know how bad of a time I would have had if something had gone wrong and I had lost either a vCenter or an external PSC before the convergence completed successfully. Once I got past that mental hurdle, the process was actually quite easy and smooth.

I know I've left this customer's environment in a lot better shape than I found it, and having embedded PSCs will make future vCenter upgrades a breeze. For a VMware PSO consultant, this was a huge value-add for the customer.

Blog Date: April 16, 2019

Upgrading To vSphere 6.7 Update 1, and Using The vCenter Converge Tool: Part 1

I recently wrapped up a vSphere 6.7 Update 1 upgrade project while on a VMware Professional Services (PSO) engagement with a customer in Denver, Colorado. On this project, I had to upgrade their three VMware environments from 6.5 to 6.7 Update 1. This customer also had three external Platform Services Controllers (PSC), a configuration that is now deprecated in VMware architecture.

Check the VMware Interoperability and Compatibility Matrices

The first thing I needed to do was take inventory of the customer's environment. I needed to know how many vCenters they had, whether they had external Platform Services Controllers, how many hosts and vSphere Distributed Switches (VDS) there were, and what versions everything was running.

  • From my investigation, this customer had three vCenters and three external Platform Services Controllers (PSC), all part of the same SSO domain.
  • I also made note of which vCenter was paired with which external PSC. This information is critical not only for the vSphere 6.7 U1 upgrade, but also for the convergence process that I will be covering in part two of this blog series.
  • Looking at the customer’s ESXi hosts, the majority were running the same ESXi 6.5 build, but I did find a few Nutanix clusters, and six ESXi hosts still on version 5.5.
  • The customer had multiple vSphere Distributed Switch (VDS) that needed to be upgraded to 6.5 before the 6.7 upgrade.

The second thing I needed to do was look at the model of each ESXi host and determine whether it was supported for the vSphere 6.7 U1 upgrade. I also needed to validate the firmware and BIOS each host was using, to see if I needed to have the customer upgrade the firmware and BIOS of the hosts. We'll plug this information into the VMware Compatibility Guide (see the PowerCLI sketch after the list below).

  • From my investigation, the ESXi hosts still running 5.5 were not compatible with 6.7 U1; however, they were compatible with the ESXi 6.5 build the customer was running on their other hosts. I would need to upgrade these hosts to ESXi 6.5 before starting the vSphere 6.7 U1 upgrade.
  • This customer had a mix of Dell and Cisco UCS hosts, and almost all needed to have their firmware and BIOS upgraded to be compatible with ESXi 6.7 U1.
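
A quick way to pull the host hardware details to feed into the compatibility guide is PowerCLI. A minimal sketch, assuming an existing Connect-VIServer session (the BIOS property path shown is the one I recall from the vSphere API and may vary):

# List each host's vendor, model, ESXi version/build and BIOS version
Get-VMHost | Sort-Object Name | Select-Object Name, Manufacturer, Model, Version, Build,
    @{Name="BIOS"; Expression={$_.ExtensionData.Hardware.BiosInfo.BiosVersion}} |
    Format-Table -AutoSize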

The third thing I needed to check was what other platforms, whether VMware products or third-party bolt-ons, I needed to worry about for this upgrade.

  • The customer is using a later version of VMware’s Horizon solution, and luckily for me, it is compatible with vSphere 6.7 U1, so no worries there.
  • The customer has Zerto 6.0 deployed, and unfortunately it needed to be upgraded to a compatible version.
  • The customer has the Actifio backup solution, but that is also running a compatible version, so again no need to update it.

Let's Get Those ESXi 5.5 Hosts Upgraded to 6.5

I needed to schedule an outage with the customer, as they had three offsite locations, with two ESXi 5.5 hosts each. These hosts were using local storage to house and run the VMs, so even though they were in a host cluster, HA was not an option, and the VMs would need to be powered off.

Once I had the outage secured, I was able to move forward with upgrading these six hosts to ESXi 6.5.

Time to Upgrade the vSphere Distributed Switch (VDS)

For this portion of the upgrade, I only needed to upgrade the customer's VDSs to 6.5. This portion of the upgrade was fast, and I was able to do it midday without the customer experiencing an outage. We did submit a formal maintenance request for visibility, and CYA. Total upgrade time for all of their VDSs was less than 15 minutes; each switch took roughly a minute.
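
For reference, checking the VDS versions before and after is quick with PowerCLI (assuming the distributed switch cmdlets are available and you are already connected to the vCenter):

# Show the version of every vSphere Distributed Switch
Get-VDSwitch | Sort-Object Name | Select-Object Name, Version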

Upgrade the External Platform Services Controllers Before the vCenter Appliances

Now that all hosts were on a compatible ESXi 6.5 version, I could move forward with the upgrade. I was able to do this upgrade during the day, as the customer would only lose the ability to manage their VMs through the vCenters. I made backups of the PSC and vCSA databases, and created snapshots of the VMs just in case.

I first needed to upgrade the three external PSCs to 6.7 U1, so I simply attached the vCSA ISO to my jump VM and launched the installer .exe. I did this process one PSC at a time until they were all upgraded to 6.7 U1.

Upgrade the vCenter Appliances to 6.7 Update 1

Now that the external Platform Services Controllers are on 6.7 U1, it is time to upgrade the vCenters. The process is the same with the installer, so I just did one vCenter at a time. Both the external PSCs and the vCSAs upgraded without issue, and within a couple of hours they had all finished the vSphere 6.7 Update 1 upgrade.

Upgrade Compatible ESXi Hosts to 6.7 Update 1

I really wanted to use the now-embedded VMware Update Manager (VUM), but I kept running into users who re-attached ISOs to their Horizon VMs, or administrators who were upgrading/installing VMware Tools. In one cluster I even found a host with improper networking configured compared to its peers in the cluster. Once I got all of that out of the way, I was able to schedule VUM to work its way down through each cluster and upgrade the ESXi hosts to 6.7 Update 1. There were still some fringe cases where VUM wouldn't work as intended, and I needed to do one host at a time.

Conclusion for the Upgrade

In the end, upgrading the customer's three environments, vCSA, PSC and ESXi, to 6.7 Update 1 took me about two weeks on my own. Not too shabby, considering I finished ahead of schedule even with all of the issues I faced. After the upgrade, the customer started having their Cisco UCS blades purple-screen at random. After we opened a case with GSS, Cisco came out that same week with an emergency patch for the fnic driver on the customer's UCS blades, for the very issue they were facing. The customer was able to quickly patch the blades. Talk about good timing.

Part 2 Incoming

Part 2 of this series will focus on using the vCenter Converge Tool. Stay tuned.

Blog Date: 4/15/2019

The Home Lab Part 2

The very long overdue follow-up post to my The Home Lab entry made earlier this year.  I recently purchased another 64GB (2 x 32GB) of Black Diamond DDR4 memory to bring my server up to 128GB.  I also had some old 1TB spinning disks that I installed in the box for some extra storage, although I will phase them out with more SSDs in the future.  So as a recap, this is my setup now:


Motherboard

SUPERMICRO MBD-X10SDV-TLN4F-O Mini ITX Server Motherboard Xeon processor D-1541 FCBGA 1667

Newegg

 

Memory


(x2) Black Diamond Memory 64GB (2 x 32GB) 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model BD32GX22133MQR26

Newegg

M.2 SSD


WD Blue M.2 250GB Internal SSD Solid State Drive – SATA 6Gb/s – WDS250G1B0B

Newegg

SSD


(x 2) SAMSUNG 850 PRO 2.5″ 512GB SATA III 3D NAND Internal Solid State Drive (SSD) MZ-7KE512BW

Newegg

 

Case


SUPERMICRO CSE-721TQ-250B Black Mini-Tower Server Case 250W Flex ATX Multi-output Bronze Power Supply

Newegg

 

Additional Storage

x2 1TB Western Digital Black spinning disks

 

Initially when I built the lab, I decided to use VMware Workstation, but I recently rebuilt it, installing ESXi 6.7 as the base, largely for better performance and reliability.  For the time being this will be a single-host environment, but keeping with the versioning, vCSA and vROps are 6.7 as well.  Can an HTML5 interface be sexy?  This has come a long way from the Flash client days.


I decided against fully configuring this host as a single vSAN node, just so that I can have the extra disk.  However, when I do decide to purchase more hardware and build a second or third box, this setup will allow me to grow my environment, and reconfigure it for vSAN use.  Although I am tempted to ingest the SSDs into my NAS, carve out datastores from it and not use vSAN, at least for the base storage.


Networking is flat for now, so there's nothing really to show here.  As I expand and add a second host, I will be looking at some networking hardware, and will have my lab in its own isolated space.

Now that I am in the professional services space, working with VMware customers, I needed a lab that was more production-like. I'm still building out the lab, so I'll have more content to come.

VMware Education Services has updated the naming conventions of VMware’s professional certifications

FYI – VMware is making some major changes to their certification naming conventions. The changes take effect August 2018 for the newly released certifications listed below, and are not retroactive, so they will not affect the naming of existing certifications.

  • VMware Certified Professional – Desktop and Mobility 2018 (VCP-DTM 2018)
  • VMware Certified Advanced Professional – Data Center Virtualization Deployment (VCAP-DCV 2018 Deployment)
  • VMware Certified Advanced Professional – Cloud Management and Automation Deployment (VCAP-CMA 2018 Deployment)

Read more on their official blog post here:

https://blogs.vmware.com/education/2018/08/22/we-are-changing-the-way-we-name-vmware-certifications-the-year-makes-it-clear/

The Home Lab Hardware


Setup

I decided to go with a Supermicro build as I wanted something power efficient yet expandable, and this motherboard supports up to 128GB of ECC RDIMM DDR4 2133MHz server-grade memory.  With this setup, when I feel the need to expand my lab, I can build two more nodes and have a rather nice vSAN cluster.  However, I'm hoping the cost of DDR4 memory will have come down by then…

I did look at the Supermicro SYS-E300-8D and SYS-E200-8D style micro servers, but like most, I was concerned about the fan noise, and thus decided to go with a slightly larger chassis to get the larger fan.  Honestly, the fan in the unit I bought makes no more noise than a regular desktop computer.

Here’s my hardware:

Motherboard


SUPERMICRO MBD-X10SDV-TLN4F-O Mini ITX Server Motherboard Xeon processor D-1541 FCBGA 1667 

Newegg

Memory


Black Diamond Memory 64GB (2 x 32GB) 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model BD32GX22133MQR26

Newegg

M.2 SSD


WD Blue M.2 250GB Internal SSD Solid State Drive – SATA 6Gb/s – WDS250G1B0B

Newegg

SSD


(x 2) SAMSUNG 850 PRO 2.5″ 512GB SATA III 3D NAND Internal Solid State Drive (SSD) MZ-7KE512BW

Newegg

Case


SUPERMICRO CSE-721TQ-250B Black Mini-Tower Server Case 250W Flex ATX Multi-output Bronze Power Supply

Newegg

Who doesn’t love some internal shots after the lab-box has been put together?  🙂

In the coming blog posts, I’ll be building out my lab.  Stay tuned….

Go west young man! Looking ahead towards an exciting 2018, as I search for my next career opportunity.


2018 VMware vExpert Award Announcement | My Community vExpert Profile

First of all, I would like to say that I am honored to be among some of the brightest VMware community technologists for a second year. Secondly I would like to personally welcome the new additions to the vExpert family.


I've honestly debated using this platform to blog about things currently underway in my personal life, but the first step in solving any problem is recognizing that there is one.  If I am being completely honest, 2018 started pretty rough for me. Shortly after returning to work from surgery, I was informed that the company I had been working for, for the past three-plus years, was shifting its priorities and downsizing due to our parent company's merger. Unfortunately, my position with the company was affected. The past three years and eight months had been some of the most exciting in my career, both from a technology standpoint and a people standpoint. In those three-plus years, I quickly had to ramp up on VMware technologies and concepts that I had never used before in a large cloud service provider environment.

My time spent with this company afforded me hands-on expertise in managing multiple virtual environments that exceeded 500 ESXi hypervisors, several vCenter Server appliances, NSX appliances, vROps clusters, and several vCloud Director environments, in data centers all over the world.  I battled the on-call boogeyman in intense hand-to-hand combat, restored three production data centers affecting over a thousand vCloud Director and Zerto tenants, and got to work on several fun POCs, including working with VMware engineering on deploying VMware's vCloud Availability. It was an amazing ride, with some of the best teammates I have ever had the pleasure of working with.  In those three-plus years, I found time to obtain my VCP6-DCV certification, start my own tech blog, and become an active member of the VMware community, sharing my experiences and learning from others. But when one door closes, another will eventually open to greater opportunities.


Recently I have been thinking a great deal about moving west for my next adventure in cloud computing, and I would be lying if I said Colorado wasn't on my mind. My goal now is to continue contributing my passion for VMware technologies to the VMware community, to help others and learn from others, while I search for my next career opportunity to elevate my skills even higher and to help businesses adopt virtualization and cloud technologies.

As such, I am making plans to attend the Denver VMUG UserCon in April.  Hope to see you there.


 


Network Scanners Can Crash vRealize Operations Manager Tomcat Service On Large Clusters

If network scanners are deployed in your production environments, it may be necessary to whitelist the vROps nodes, as the network scanners can bring the Tomcat service to its knees, especially on active vROps clusters.  In my case the network scanner was causing Tomcat to crash, so when users attempted to access the main vROps UI, they'd get the following error:

Unable to connect to platform services

While troubleshooting this issue, I went through the sizing of the cluster and its performance, verified there was nothing backing up the vROps VMs, and even made sure the datastores and the specific hosts were healthy.  I even tried replacing the "/usr/lib/vmware-vcops/user/plugins/inbound" directory and files on all nodes from the master copy, in hopes that it would make the cluster healthy again and stop Tomcat from panicking.

The following was discovered after reviewing the /var/log/apache2/access_log on the master:

192.216.33.10 - - [10/Oct/2017:04:56:23 +0000] "GET /recipe/login.php?Password=%22'%3e%3cqqs%20%60%3b!--%3d%26%7b()%7d%3e&Username=&submit=Login HTTP/1.0" 301 362 "-" "-"
192.216.33.10 - - [10/Oct/2017:04:56:23 +0000] "GET /recipe/recipe/login.php?Password=%22'%3e%3cqqs%20%60%3b!--%3d%26%7b()%7d%3e&Username=&submit=Login HTTP/1.0" 301 369 "-" "-"
192.216.33.10 - - [10/Oct/2017:04:56:23 +0000] "GET /recipe/recipe_search.php?searchstring=alert(document.domain) HTTP/1.0" 301 326 "-" "-"
192.216.33.10 - - [10/Oct/2017:04:56:23 +0000] "GET /recipe/recipe/recipe_search.php?searchstring=alert(document.domain) HTTP/1.0" 301 333 "-" "-"
192.216.33.10 - - [12/Oct/2017:08:30:43 +0000] "GET /recipe_view.php?intId=char%2839%29%2b%28SELECT HTTP/1.1" 301 282 "-" "-"
192.216.33.10 - - [12/Oct/2017:08:31:06 +0000] "GET /modules.php?name=Search&type=stories&query=qualys&catebgory=-1%20&categ=%20and%201=2%20UNION%20SELECT%200,0,aid,pwd,0,0,0,0,0,0%20from%20nuke_authors/* HTTP/1.1" 301 410 "-" "-"
192.216.33.10 - - [12/Oct/2017:08:31:06 +0000] "GET /modules.php?name=Top&querylang=%20WHERE%201=2%20ALL%20SELECT%201,pwd,1,1%20FROM%20nuke_authors/* HTTP/1.1" 301 342 "-" "-"
192.216.33.10 - - [12/Oct/2017:08:31:10 +0000] "GET /index.php?option=com_jumi&fileid=-530%27%20UNION%20SELECT%202,concat%280x6a,0x75,0x6d,0x69,0x5f,0x73,0x71,0x6c,0x5f,0x69,0x6e,0x6a,0x65,0x63,0x74,0x69,0x6f,0x6e%29,null,null,null,0,0,1%20--%20%27 HTTP/1.1" 301 445 "-" "-"
192.216.33.10 - - [10/Oct/2017:04:20:19 +0000] "GET /recipe_view.php?intId=char%2839%29%2b%28SELECT HTTP/1.1" 301 282 "-" "-"
192.216.33.10 - - [10/Oct/2017:04:20:42 +0000] "GET /modules.php?name=Search&type=stories&query=qualys&category=-1%20&categ=%20and%201=2%20UNION%20SELECT%200,0,aid,pwd,0,0,0,0,0,0%20from%20nuke_authors/* HTTP/1.1" 301 410 "-" "-"
192.216.33.10 - - [10/Oct/2017:04:22:32 +0000] "GET /third_party/fckeditor/editor/_source/classes/fckstyle.js HTTP/1.1" 301 284 "-" "-"
192.216.33.10 - - [10/Oct/2017:04:22:32 +0000] "GET /third_party/tinymce/jscripts/tiny_mce/plugins/advlink/readme.txt HTTP/1.1" 301 292 "-" "-"
192.216.33.10 - - [10/Oct/2017:04:22:32 +0000] "GET /rsc/smilies/graysmile.gif HTTP/1.1" 301 253 "-" "-"
192.216.33.10 - - [10/Oct/2017:04:22:32 +0000] "GET /media/users/admin/faceyourmanga_admin_girl.png HTTP/1.1" 301 274 "-" "-"

 

The Tomcat service was being pushed to its limits and using many more resources than planned. There were upwards of 10,000 requests in bursts from a single IP address.  From the logs it certainly looks like an attack, but it was coming from an internal IP address.

My advice: get your security team to whitelist your vROps appliances.

To restart the web service on all vROps nodes, either issue this command on each node: 'service vmware-vcops-web restart', or log into the admin page and take the cluster offline and then back online.
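
If you have several nodes to bounce, a small loop from a jump host saves a few minutes; a sketch assuming root SSH access and made-up node names:

for node in vrops-master vrops-replica vrops-data-01 vrops-data-02; do
    ssh root@${node} 'service vmware-vcops-web restart'
done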

Upgrade Existing vRealize Operations Manager Add-on/Solution Paks

The following was recorded using a vRealize Operations Manager (vROps) 6.6 cluster; however, older versions of vROps can be upgraded the same way.

  • Log into the vROps environment, go to the Administration tab, and select solutions in the left column.
  • Here you can see all of the add-on/solutions paks that I have installed in this environment.  To upgrade an existing solution, simply click the green plus button.
  • Browse for the new pak.  In this example I have selected the "Reset Default Content" option.  As the statement suggests, this can override policies, customized alerts, symptoms, etc. that may have been customized by your organization, forcing that work to be re-created.  However, I like using this option because I get those new changes and can adjust my monitoring accordingly.  Use it at your own discretion.


  • Click ‘upload’
  • Click ‘Next’
  • Read and accept the EULA if you so desire
  • Click ‘Next’

Now the installation process will begin.  This shouldn’t take longer than 5 minutes.

vrops54

  • Click Finish

vrops55

Now the latest version of the Add-on/solutions pak is installed and ready for use.  In most cases it will just pick up the config from older versions.

Collecting Java Heap dump from vCloud Director Cells

You just need to generate the java heap dump from one of the cells.  What you’ll need to succeed:

  • JCONSOLE
  • iptables disabled on the cell you are connecting to.
  • Disk space available on the cell to accommodate the dump – I believe these can be between 8 and 10 GB in size
  • Unless it's an emergency, do this operation outside of normal business hours, as it will be CPU intensive for up to 3 minutes, can impact API call performance, and can potentially cause the VCD cell inventory service to hang.

Step #1: Disable iptables on the cell

  • ssh to the desired cell and run the following command:

# service iptables stop

Step #2: Connect with jconsole (java console)

  • Domain credentials should work here, depending on your environment
  • Connect to port 8999
  • Connect to the desired cell (or pass the host and port directly on the command line, as shown below)
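
For reference, JConsole can also be pointed at the cell directly from a command line on the jump host, skipping the connection dialog (the cell name below is a placeholder):

jconsole vcd-cell-01.domain.com:8999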

vcd9

  • If you get the message "Secure connection failed. Retry Insecurely?", just click the 'Insecure' button to continue

 

vcd10

Step #3: Generate the heap dump

  1. On the MBeans tab, in the com.sun.management/HotSpotDiagnostics object, select the Operation section.
  2. In dumpHeap parameters, enter the following information:

    p0: [heap-output-path]

    p1: true – do a garbage collection before dump heap

    For example:

    p0: /opt/vmware/vcloud-director/vcd_cell_name_heap-dump-file.hprof

    p1: true

  3. Click the dumpHeap button.

vcd11

 

  • There will be no indication that the heap dump completes.  I just watch the size of the file until the growth stops on the cell, as shown below.  This process typically takes less than two minutes.
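
To watch the file grow, something like this on the cell works (using the example path from above):

watch -n 10 "ls -lh /opt/vmware/vcloud-director/*.hprof"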

Step #4: Cleanup and send-off

  • Locate the heap dump in /opt/vmware/vcloud-director/ and move it off to a location where you can compress it and upload it to the VMware FTP site, as you would for logs.
  • Start the iptables on the cell: # service iptables start

Upgrading VMware vCloud Director to 8.20

This document was created while upgrading an existing vCloud Director 8.10.1 environment with an Oracle database and multiple cloud cells.

After downloading the latest version of vCloud Director 8.20 for service providers, SCP the upgrade bin to all VCD cells.  You can review the release notes here.

What you’ll need to do before getting started:

  • SSH into each cell and 'sudo su -' to root
  • Move the bin to the root directory
  • chmod +x vmware-vcloud-director-distribution-8.20.0-5515092.bin (see the example commands after this list)
  • I strongly advise opening a support request with VMware before proceeding with the upgrade.  You may not need it, but it comes in handy having one logged beforehand.
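
Put together, the prep for each cell looks something like this (the cell name is a placeholder; the bin filename is the 8.20 build referenced above):

# From the jump host, copy the installer to the cell
scp vmware-vcloud-director-distribution-8.20.0-5515092.bin root@vcd-cell-01.domain.com:/root/

# On the cell, as root, make the installer executable
chmod +x /root/vmware-vcloud-director-distribution-8.20.0-5515092.bin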

Maintenance – Shutdown the cells

1. Open an SSH session into each VCD cell

 

2. Sudo to root using the following command:

# sudo su -

3. Change to the vcloud-director/bin/  directory

# cd /opt/vmware/vcloud-director/bin/

4. Use the Cell Management Tool to quiesce the cell.  This will move active jobs over to another cell.

# ./cell-management-tool -u administrator cell --quiesce true

5. Get the status of any running jobs on each cell.  Verify: Job count = 0  |  Is Active = false  |  In Maintenance Mode = false

# ./cell-management-tool -u administrator cell --status
Example Output:

vcd6

6. Shut the cell down to prevent any other jobs from becoming active on the cell.  This command will also allow active jobs to finish cleanly.

# ./cell-management-tool -u administrator cell --shutdown

Example Output:

vcd7

7. Get a status on the cells to be sure everything is down

# service vmware-vcd status

8. Now complete steps 4 – 7 on the remaining cells to cleanly shut down the vCD service on all cells.

9. Here is where I would shut down the VCD cell virtual machines and the database, to get clean snapshots while the environment is powered off.

10. Once the database virtual machine is fully up, power on the VCD cell virtual machines.

11. Log back into the vCloud Director environment to verify functionality before the upgrade.

12. SSH to all VCD cell virtual machines and use the following command to stop the service again on each cell.  Here there is an assumption that we are now well within a maintenance window.

# service vmware-vcd stop

Starting The vCloud Director Upgrade

1. Start with the first cell, and run the first half of the upgrade.  DO NOT upgrade the database yet.

# ./vmware-vcloud-director-distribution-8.20.0-5515092.bin

Example Output:

vCD1

2. Respond with: y

Example Output:

vcd2

3. Stop.  Now you need to run steps one and two on the rest of the vCloud Director cells and install the upgrade.  Do them one at a time.  DO NOT upgrade the database yet.

4. Now that all cells have been upgraded, go back to the first cell and run the database upgrade.

# /opt/vmware/vcloud-director/bin/upgrade

Example vCD Database upgrade output:

vcd3

5. Respond with: y

vcd4

6. Start the first cell by responding with 'y'

vcd8

7. Manually start the VCD service on the remaining cells:

# service vmware-vcd start

8. Get the VCD status of all cells by running the following command on each

# service vmware-vcd status

9. Log into the cell, and watch/wait for vCenter to sync with vCD under the Manage & Monitor section → vCenters.  This normally takes 30 minutes or so.  Once done the status will change from a spinning circle to a green check mark.

10. Run some environment validation tests to be sure everything is working properly, and then delete the snapshots taken earlier.