The Home Lab Part 2

The very long overdue follow-up to my The Home Lab entry from earlier this year.  I recently purchased another 64GB (2 x 32GB) of Black Diamond DDR4 memory to bring my server up to 128GB.  I also installed some old 1TB spinning disks in the box for extra storage, although I will phase them out in favor of more SSDs in the future.  So as a recap, this is my setup now:


Motherboard

SUPERMICRO MBD-X10SDV-TLN4F-O Mini ITX Server Motherboard Xeon processor D-1541 FCBGA 1667

Newegg

 

Memory


(x2) Black Diamond Memory 64GB (2 x 32GB) 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model BD32GX22133MQR26

Newegg

M.2 SSD


WD Blue M.2 250GB Internal SSD Solid State Drive – SATA 6Gb/s – WDS250G1B0B

Newegg

SSD


(x 2) SAMSUNG 850 PRO 2.5″ 512GB SATA III 3D NAND Internal Solid State Drive (SSD) MZ-7KE512BW

Newegg

 

Case


SUPERMICRO CSE-721TQ-250B Black Mini-Tower Server Case 250W Flex ATX Multi-output Bronze Power Supply

Newegg

 

Additional Storage

x2 1TB Western Digital Black spinning disks

 

Initially when I built the lab I decided to use VMware Workstation, but I recently rebuilt it with ESXi 6.7 as the base, largely for better performance and reliability.  For the time being this will be a single-host environment, but to keep the versions aligned, vCSA and vROps are 6.7 as well.  Can an HTML5 interface be sexy?  This has come a long way from the Flash client days.

vcenter view

I decided against fully configuring this host as a single vSAN node, just so I could have the extra disk.  However, when I do decide to purchase more hardware and build a second or third box, this setup will let me grow my environment and reconfigure it for vSAN use.  Although, I am tempted to ingest the SSDs into my NAS, carve out datastores from it, and not use vSAN at all, at least for the base storage.

storageview

Networking is flat for now, so there’s nothing really to show here.  As I expand and add a second host, I will be looking at some networking hardware and giving my lab its own isolated space.

Now that I am in the professional services space, working with VMware customers, I needed a lab that was closer to production.  I’m still building out the lab, so there will be more content to come.

Hard cut-over to a new vCenter Appliance

I went through this a couple of years ago, found it in my notes, and thought I would share.  We experienced a SAN outage that corrupted the internal database of our vCSA 5.5 appliance.
The symptoms that something bad was happening in vCenter were the following: the thick client wouldn’t always connect, and when it did you could only stay connected for a maximum of five minutes before getting kicked back to the login screen.  The web client behaved very similarly.  We opened a support request, and after looking at the logs we could see corruption in one of the tables.  Given that we were already going to upgrade this appliance anyway, VMware suggested a hard cut-over: back up the DVSwitch config, disconnect the hosts from the old 5.5 vCSA with the virtual machines still running, power down the old vCSA appliance, power on the new 6.0u1 vCSA, and re-attach the hosts to it.  Sounds easy enough, right?
The following is a high-level view of the steps required to cut over to a new vCenter.  This process assumes that the traditional methods of upgrading to a new vCenter version cannot be trusted, and that standing up a new vCenter and reconnecting the hosts to it is the only viable option.
If the vCenter Appliance is in a bad state, it is always recommended to contact VMware GSS first and open an SR, to properly determine what is wrong and what the best recovery options are.  These steps were recorded on a 5.5 vCSA and a 6.0u1 vCSA.  Your mileage may vary.
 
Step-by-step guide
 
-=Process on the old vCenter Appliance=-
  – Log in as the local Administrator
  – Export the DVSwitch config
  – Create a standard switch mimicking the distributed switch on the first host (see the esxcli sketch after this outline)
  – Migrate one physical host nic (pnic) to the standard switch
  – Update the networking on all virtual machines on the host over to the standard switch
  – Migrate the remaining host pnics to the standard switch
  – Disable HA and DRS for the cluster
  – Disconnect the host from the vCenter
**Rinse, wash, repeat on the remaining hosts until all are disconnected**
  – Shut down the old vCenter Appliance.
-=Process on the new vCenter Appliance=-
  – Start up the new vCenter Appliance and configure it.
  – Log in as the local Administrator
  – Set up the data center and host clusters
  – Add all hosts to the new vCenter
  – Import the DVSwitch config
  – Add the DVSwitch to the hosts
  – Migrate one pnic on the host to the DVSwitch
  – Update all VM networking to the DVSwitch
  – Migrate the other pnic to the DVSwitch
**Rinse, wash, repeat on the remaining hosts and VMs until they are on the DVSwitch**
  – Remove the standard switch from the hosts
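If you prefer to stage the temporary standard switch from the host shell instead of clicking through the vSphere client, the esxcli sketch below shows the general idea.  The switch name, port group name, VLAN ID, and vmnic are made-up placeholders, and this assumes one uplink has already been freed from the distributed switch; the DVSwitch export/import and the VM network label changes were still done from the client in our case.

# esxcli network vswitch standard add --vswitch-name=vSwitchTemp
# esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitchTemp
# esxcli network vswitch standard portgroup add --portgroup-name=VM-Network-Temp --vswitch-name=vSwitchTemp
# esxcli network vswitch standard portgroup set --portgroup-name=VM-Network-Temp --vlan-id=100

Mirror whatever VLAN IDs your distributed port groups use, one temporary port group per VLAN, so the VMs keep their connectivity when you flip their network labels over.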

High CPU utilization on NSX Appliance 6.2.4

I realize that writing up this blog post now may be irrelevant, considering most if not all VMware customers are well beyond NSX appliance 6.2.4.  But some folks may still find the information shared here relevant.  At the very least, the instructions for restarting the bluelane-manager service on the NSX appliance are still something handy to keep in your Rolodex of commands.

There’s an interesting bug in versions of the NSX appliance ranging from 6.2.4 to 6.2.8, where utilization slowly climbs, eventually maxing out at 100% CPU after a few hours.  In my environment we had vSphere 6, with roughly 60 hosts that were also on ESXi 6.  We were also using traditional SAN storage over FCoE, in this case a combination of IBM XIV and INFINIDATs.  In most cases we could just restart the NSX appliance, which would resolve the CPU utilization issue; however, sometimes within two hours the CPU utilization would climb back up to 100% again.  When the appliance CPU maxed out, the NSX Manager user interface would typically crash after a few seconds.

The Cause: (copied from KB2145934)

“This issue occurs when the PurgeTask process is not purging the proper amount of job tasks in the NSX database causing job entries to accumulate. When the number of job entries increase, the PurgeTask process attempts to purge these job entries resulting in higher CPU utilization which triggers (GC) Garbage Collection. The GC adds more CPU utilization.”

The only problem with the KB is that our environment was already on 6.2.4, so clearly the problem was not resolved there.

In order to buy ourselves some time without needing to restart the NSX appliance, we found that simply restarting a service on the appliance called ‘bluelane-manager‘ had the same effect, although this was only a workaround.

You can take the following steps to restart the bluelane-manager service:

 

  • SSH to the NSX Manager using the ‘admin’ account
  • Type:
en
  • Type:
st en
  • When prompted for the password, type:
IAmOnThePhoneWithTechSupport
  • To get the status of the bluelane-manager service, type:
/etc/rc.d/init.d/bluelane-manager status
  • To restart the bluelane-manager service, type:
/etc/rc.d/init.d/bluelane-manager restart

Now after a few seconds, you should notice that the NSX appliance user interface has returned to normal functionality, and you can log in and validate that the CPU has fallen back to normal usage.
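If you would rather confirm the recovery from the command line instead of the UI, the appliance is just Linux under the hood, so from that same engineering shell (st en) you can watch the Java process settle back down; this is just a generic check, nothing NSX-specific:

# top -b -n 1 | head -n 20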

What made the issue worse was the fact that we had hosts going into the purple diagnostic screen.  I’m not talking one or two here.  Imagine having over 20 ESXi hosts drop at the same time, during production hours, and keep in mind that all of these hosts were running customer workloads….. If you’ll excuse the vulgarity, that certainly has a pucker factor exceeding 10.  At the time, I was working for a service provider running vCloud Director, so customers were sharing the ESXi host resources.  We were also utilizing VMware’s Guest Introspection (GI) service, as we had Trend Micro deployed, and as a result most customers were sitting in the default security group.

Through extensive troubleshooting with VMware developers, at a high level we determined the following: with all customer VMs in the default NSX security group, every time a customer VM was powered on or off, created or destroyed, vMotioned, or replicated in or out of the environment, the change had to be synced back to the NSX appliance, which then synced with the ESXi hosts.  Looking at specific logs on the ESXi hosts that only VMware had access to, we saw a backlog of sync instructions that the hosts would never have time to process, which was contributing to the NSX appliance CPU issue.  This was also causing the hosts to eventually purple screen.  Fun fact: by restarting the hosts we could buy ourselves close to two weeks before the issue would reoccur; however, performing many simultaneous vMotions would also drive the NSX appliance to 100% CPU, which put us back into a bad state.

Thankfully, VMware was already working on a bug-fix release at the time, NSX 6.2.8, and our issue served to spur the development team along in finalizing the release, along with adding a few more fixes for bugs they had originally thought were resolved in the 6.2.4 release.

NSX 6.2.8 release notes

Most relevant to the issues we faced were the following fixes:

  • Fixed Issue 1849037: NSX Manager API threads get exhausted when communication link with NSX Edge is broken
  • Fixed Issue 1704940: You may encounter the purple diagnostic screen on the ESXi host if the pCPU count exceeds 256
  • Fixed Issue 1760940: NSX Manager High CPU triggered by many simultaneous vMotion tasks
  • Fixed Issue 1813363: Multiple IP addresses on same vNIC causes delays in firewall publish operation
  • Fixed Issue 1798537: DFW controller process on ESXi (vsfwd) may run out of memory

Upgrading to the NSX 6.2.8 release, and rethinking our security groups, brought stability back to our environment, although, as we later found out, not all of the above issues were completely resolved.  In short, most “fixes” were really just process improvements under the hood.  Specifically, we could still cause 100% CPU utilization on the NSX appliance by putting too many hosts into maintenance mode consecutively; however, at the very least the CPU utilization was more likely to recover on its own, without us needing to restart the service or the appliance.  Now why is that important, you might ask?  As a service provider, you want to roll through your hosts quickly and efficiently while doing upgrades, and an inefficiency like this in the NSX code base can drastically extend maintenance windows.  Unfortunately for us at the time, VMware shipped the 6.2.8 maintenance patch after 6.3.x, so the fixes were not yet part of the 6.3.x release either.  KB2150668

As stated above, the instructions for restarting the bluelane-manager service on the NSX appliance are still very handy to have.


VMware Education Services has updated the naming conventions of VMware’s professional certifications

FYI – VMware is making some major changes to its certification naming conventions.  The changes take effect in August 2018 for the newly released certifications listed below and are not retroactive, so the naming of existing certifications will not change.

  • VMware Certified Professional – Desktop and Mobility 2018 (VCP-DTM 2018)
  • VMware Certified Advanced Professional – Data Center Virtualization Deployment (VCAP-DCV 2018 Deployment)
  • VMware Certified Advanced Professional – Cloud Management and Automation Deployment (VCAP-CMA 2018 Deployment)

Read more on their official blog post here:

https://blogs.vmware.com/education/2018/08/22/we-are-changing-the-way-we-name-vmware-certifications-the-year-makes-it-clear/

NSX SSL Certificate Failure on ESXi: SSL handshake failed

Some time ago I was having an issue putting a host back into service in an NSX environment.  In Log Insight, and in /var/log/netcpa.log, I was seeing errors similar to the following:

2018-05-26T11:07:50.486Z [FFD53B70 error 'Default'] SSL handshake failed on 172.15.4.100:0 : error = SSL Exception: error:140000DB:SSL routines:SSL routines:short read
2018-05-26T11:07:55.545Z [FFD12B70 error 'Default'] SSL handshake failed on 172.15.4.100:0 : error = SSL Exception: error:140000DB:SSL routines:SSL routines:short read
2018-05-26T11:08:00.600Z [FFD12B70 error 'Default'] SSL handshake failed on 172.15.4.100:0 : error = SSL Exception: error:140000DB:SSL routines:SSL routines:short read

Browsing through VMware’s archive, I came across KB2151089, which was very similar to the issue I was having; however, upgrading to NSX 6.3.5 was not an option at the time.  I remembered a similar issue at my previous workplace, and dug through my Evernote archive to find my notes.

Before we continue, this should go without saying, but your mileage may vary, and I’d recommend opening a ticket with VMware’s GSS.  At the very least you should test this process out in a lab.

The steps outlined here resolved the issue.  Keep in mind that at this point the host is not in production and is currently in maintenance mode:

  • Determine whether the NSX controllers are connected by logging into the ESXi host and running the following commands (port 1234 is the netcpa connection to the NSX controllers, and port 5671 is the message bus connection back to the NSX Manager):
# esxcli network ip connection list |grep 1234

— and —

# esxcli network ip connection list |grep 5671

 

  • Next, log into the NSX appliance and back up the config.  While the config backup is taking place, get the ESXi host mob ID from the vCenter mob page https://<vcsa-fqdn>/mob
  1. Select the link for the ‘root folder‘, e.g. group-d1
  2. Select the link for the ‘child entity‘, e.g. datacenter-2
  3. Select the link for the ‘host folder‘, e.g. group-h4
  4. Select the link for the ‘child entity‘, e.g. domain-c7
  5. Now locate the ‘host‘ entry and find the host-xxxx value, e.g. host-1234
  • After the NSX backup is complete, SSH into the NSX Manager.  Root access to the appliance will be needed, so at the command prompt:
  1. Enter ‘en‘ and then enter the ‘admin’ password
  2. Enter ‘st en‘ and enter the following password: IAmOnThePhoneWithTechSupport
  • Log into the sql prompt
# psql -U secureall
secureall=#
  • Issue the following command to verify that there is a record associated with the host mob ID.  Below is an example using host-1234
# select host_uuid,node_uuid,thumbprint from vnvp_host_key where host_uuid='host-1234';

Example output:

host_uuid  |              node_uuid               | thumbprint
-----------+--------------------------------------+-------------------------------------------------------------
host-1234  | a2a68660-515e-4f87-811d-306c54b0b2e8 | AD:58:C0:84:FF:DF:5E:95:50:B7:63:2E:3F:B2:67:22:56:F7:DC:9B

(1 row)

  • Next, in vCenter move the host to an isolation cluster.  We will then validate the NSX VIBs installed by running the following command on the host:
# esxcli software vib list |grep -E 'esx-dvfilter-switch-security|vsip|vxlan'

 

Example output:

esx-dvfilter-switch-security   6.3.1-0.0.5124716   VMware   VMwareCertified   2017-02-28
esx-vsip                       6.3.1-0.0.5124716   VMware   VMwareCertified   2017-02-28
esx-vxlan                      6.3.1-0.0.5124716   VMware   VMwareCertified   2017-02-28

 

  • Remove the NSX VIBs with the following commands:
# esxcli software vib remove -n esx-vxlan
# esxcli software vib remove -n esx-vsip
# esxcli software vib remove -n esx-dvfilter-switch-security

 

  • Returning to the NSX terminal window, delete the record at the secureall=# prompt, again using ‘host-1234’ as an example:
# delete from vnvp_host_key where host_uuid='host-1234';
DELETE 1

 

  • Reboot the ESXi host.  Once the host has rebooted, put it back into the proper cluster.  To be safe, I would temporarily turn down DRS (move the slider to the left), and exit maintenance mode.
  • We can validate that the host looks proper in the vSphere web UI under ‘Networking & Security -> Installation -> Host Preparation‘.
  • Click the ‘Resolve‘ link next to the cluster name.

Validation

  • Once the tasks are all complete, run the ‘esxcli software vib list‘ command again to confirm that the three VIBs have been installed.
  • Test that the VXLAN network is functioning on the host (see the sketch below this list).
  • Verify that the SSL exception is no longer showing in /var/log/netcpa.log.
  • If there are no errors, the host is all set to be put back into service.
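For the VXLAN test mentioned above, one quick check I’ve used is to list the VXLAN config and then ping a remote VTEP over the vxlan netstack.  The vmk interface and the remote VTEP IP below are placeholders; substitute whatever your host preparation actually created:

# esxcli network vswitch dvs vmware vxlan list
# vmkping ++netstack=vxlan -I vmk3 -d -s 1572 192.168.250.12

The -d and -s flags force a non-fragmented, near-MTU-sized packet, which also validates that the transport network MTU is correct end to end.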


VMworld 2018 is right around the corner! Where will you be?

It’s almost that time of year again….some might even call it that special time of year when VMware geeks from across the globe converge on VMworld.  One might even consider it summer camp, and like anyone who has experienced it before, you meet new people in the vCommunity, make friends, and part ways after a week of technical sessions, social gatherings, and just straight-up shop talk, war-story sharing, and the exchange of ideas.  Personally, this will be my third year attending, and I am super excited to be going.  This conference means enough to me that, due to other circumstances that happened earlier this year, I purchased my own pass to ensure that I wouldn’t miss out.

Now is the perfect time to cash in on those early bird discounts on conference passes, good until June 15.  Why wouldn’t you want to save a couple hundred dollars on one of the best IT conferences of the year?  For an individual, it’s $1,795 vs. $2,095, and that’s before other discounts that may apply, like VMUG membership or the discount for those holding an active VCP certification.

So, why go to VMworld?

I think for many first-timers there’s a certain electricity and excitement about going.  Let me be the first to tell you: that feeling never really goes away.  Like the past couple of years, VMworld US will once again be held in Las Vegas.


I personally love coming to VMworld and have looked forward to it every year.  There’s always good energy here; the minute you get off the plane, it is happy.  Every experience I’ve had here is fun, and people genuinely are in a good mood.  This conference gives attendees the chance to attend VMware-led and partner-led sessions on platforms you may have thought about using or are currently using.  These sessions are meant to share best practices with the community, transfer knowledge about ways to use VMware platforms, and also give you a chance to ask the experts, many of whom work for VMware and, in some cases, are very involved with the development of the platforms you use.

VMworld is not just about attending sessions, however.  This conference gives you the unique opportunity to network with other IT professionals from across the globe and establish relationships that you would otherwise never be able to.  Like it did for me, this conference may also inspire you to join the vCommunity, a thriving community of professionals who not only share their knowledge with others, but who also need help themselves.  I think we can all agree that no two environments or businesses are alike, and we have all used VMware’s platforms in ways that were intended, and in ways that even VMware might never have considered.  Members of the vCommunity take it upon themselves to share their experiences with others through blogs, social media, and support forums.  This conference gives us a chance to get together, share war stories from our time in the trenches, and many times you will find attendees getting together to engineer and develop something cool.

The VMware {code} group has even put together a hackathon, where members of the vCommunity can get together while at VMworld to develop some amazing things, and sometimes there are even prizes for the coolest of the cool ideas.  But don’t let the words “code” or “hackathon” scare you.  These sessions are not just for developers!  Sure, coding skills will certainly help, but the power of the community enables you to participate on these teams anyway.  You may not be able to contribute code, but you can still contribute ideas to the team, and you might even pick up a few coding skills along the way.  Let’s face it: some pretty cool ideas are cooked up during hackathons.  VMware’s internal hackathon cooked up the idea of bringing VR into the datacenter, letting you virtually move your workloads from on-premises data centers into the cloud.  It’s freakin’ VR, man!  How cool is that?


The VMworld conference also affords you the opportunity to attend instructor-led labs, along with VMware’s Hands-on Labs that you can also experience from home.  While at the conference, there will be many vendors out on the floor where you can experience new products, ask questions about products you already use, and let’s not forget the hall crawl, where there will be free adult beverages, snacks, and cool swag the vendors are giving out.  All of this can be found in the Solutions Exchange area.


I’m not going to lie, the parties at VMworld are pretty wild too.  I’m not saying that should be the only reason you go, but it is a good way to mingle with other conference attendees, jam out to some good music, and of course escape the Las Vegas heat.  VMworld of course wraps up with its own party before the last day of the conference.


So what are you waiting for?  I can’t think of any reason not to attend VMworld US 2018 in Las Vegas, August 26th – 30th, or VMworld Europe 2018 in Barcelona, November 5th – 8th.  Follow the link here, and I will see you at the conference in Las Vegas!  Remember to take advantage of those early bird rates, good until June 15th!  REGISTER HERE FOR VMWORLD 2018


 

Add Custom Recommendation to vROps alert definition for versions prior to 6.6

  • This is useful when a new SOP document is created: we can link to it directly in the alert email that is sent.

Step-By-Step

  1. Log into the main vRealize Operations Manager page.
  2. Click Content and then Recommendations.
  3. On this page you can create, edit, and delete custom recommendations.  Click the green plus sign to create a new custom recommendation.
  4. Here you can enter the text for the custom recommendation.  Paste the link to the SOP, highlight it, and then click the hyperlink icon.  Now paste the link again and click OK.  The “Actions” section allows the use of automated functions when viewing the triggered alert in vROps; for now, just click Save.
  5. Now you can add the custom recommendation to an alert definition.  Click Content and then Alert Definitions.
  6. Search for the alert that you would like to add the SOP to, select it, and click the edit button.
  7. Click on section 5, Add Recommendations, and then click on the plus sign.
  8. Search for the new SOP recommendation you just created, find it in the list on the left, then click and drag it into position under the Recommendations section.
  9. Finally, click Save.  Now when this alert is triggered and an email is sent, there will be a clickable link in the email to the SOP document.

The Home Lab Hardware


Setup

I decided to go with a Supermicro build as I wanted something power efficient yet expandable, and this motherboard supports up to 128GB of ECC RDIMM DDR4 2133MHz server-grade memory.  With this setup, when I feel the need to expand my lab, I can build two more nodes and have a rather nice vSAN cluster.  However, I’m hoping the cost of DDR4 memory will have come down by then…

I did look at the Supermicro SYS-E300-8D and SYS-E200-8D style micro servers, but like most people I was concerned about the fan noise, and thus decided to go with a slightly larger chassis to get the larger fan.  Honestly, the fan in the unit I bought makes no more noise than a regular desktop computer.

Here’s my hardware:


Motherboard


SUPERMICRO MBD-X10SDV-TLN4F-O Mini ITX Server Motherboard Xeon processor D-1541 FCBGA 1667 

Newegg


Memory


Black Diamond Memory 64GB (2 x 32GB) 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model BD32GX22133MQR26

Newegg


M.2 SSD


WD Blue M.2 250GB Internal SSD Solid State Drive – SATA 6Gb/s – WDS250G1B0B

Newegg


SSD


(x 2) SAMSUNG 850 PRO 2.5″ 512GB SATA III 3D NAND Internal Solid State Drive (SSD) MZ-7KE512BW

Newegg


Case


SUPERMICRO CSE-721TQ-250B Black Mini-Tower Server Case 250W Flex ATX Multi-output Bronze Power Supply

Newegg

Who doesn’t love some internal shots after the lab-box has been put together?  🙂

In the coming blog posts, I’ll be building out my lab.  Stay tuned….

Failure Installing NSX VIB Module On ESXi Host: VIB Module For Agent Is Not Installed On Host

Now, admittedly, I did this to myself while tracking down the root cause of how operations engineers were putting hosts back into production clusters without a properly functioning VXLAN.  Apparently the easiest way to get a host into this state is to repeatedly move it back and forth between a production cluster and an isolation cluster where the NSX VIB module gets uninstalled.  This is a bug that is resolved in vCenter 6.0 Update 3, so at least there’s that little nugget of good news.

Current production setup:

  • NSX: 6.2.8
  • ESXi:  6.0.0 build-4600944 (Update 2)
  • VCSA: 6 Update 2
  • VCD: 8.20

So for this particular error, I was seeing the following in the vCenter events: “VIB Module For Agent Is Not Installed On Host“.  After searching the KB articles I came across KB2053782, ‘”Agent VIB module not installed” when installing EAM/VXLAN Agent using VUM’.  Following the KB, I made sure my Update Manager was in working order and tried the steps in the KB, but I still had the same issue.

  • Investigating the EAM.log, I found the following:
 1-12T17:48:27.785Z | ERROR | host-7361-0 | VibJob.java | 761 | Unhandled response code: 99 
 2018-01-12T17:48:27.785Z | ERROR | host-7361-0 | VibJob.java | 767 | PatchManager operation failed with error code: 99 
 With VibUrl: https://172.20.4.1/bin/vdn/vibs-6.2.8/6.0-5747501/vxlan.zip 
 2018-01-12T17:48:27.785Z | INFO | host-7361-0 | IssueHandler.java | 121 | Updating issues: 

 eam.issue.VibNotInstalled { 
 time = 2018-01-12 17:48:27,785, 
 description = 'XXX uninitialized', 
 key = 175, 
 agency = 'Agency:7c3aa096-ded7-4694-9979-053b21297a0f:669df433-b993-4766-8102-b1d993192273', 
 solutionId = 'com.vmware.vShieldManager', 
 agencyName = '_VCNS_159_anqa-1-zone001_VMware Network Fabri', 
 solutionName = 'com.vmware.vShieldManager', 
 agent = 'Agent:f509aa08-22ee-4b60-b3b7-f01c80f555df:669df433-b993-4766-8102-b1d993192273', 
 agentName = 'VMware Network Fabric (89)',
  • Investigating the esxupdate.log file, I found the following:
 bba9c75116d1:669df433-b993-4766-8102-b1d993192273')), com.vmware.eam.EamException: VibInstallationFailed 
 2018-01-12T17:48:25.416Z | ERROR | agent-3 | AuditedJob.java | 75 | JOB FAILED: [#212229717] 
 EnableDisableAgentJob(AgentImpl(ID:'Agent:c446cd84-f54c-4103-a9e6-fde86056a876:669df433-b993-4766-8102-b1d993192273')), 
 com.vmware.eam.EamException: VibInstallationFailed 
 2018-01-12T17:48:27.821Z | ERROR | agent-2 | AuditedJob.java | 75 | JOB FAILED: [#1294923784] 
 EnableDisableAgentJob(AgentImpl(ID:'Agent:f509aa08-22ee-4b60-
  • Restarting the VUM services didn’t work, as the VIB installation would still fail.
  • Restarting the host didn’t work.
  • On the ESXi host I ran the following command to determine if any VIBs were installed, but it didn’t show any information:  esxcli software vib list

I was starting to suspect that the ESXi host had corrupted files.  Digging around a little more, I found KB2122392, “Troubleshooting vSphere ESX Agent Manager (EAM) with NSX“, and KB2075500, “Installing VIB fails with the error: Unknown command or namespace software vib install“.

I decided to manually install the NSX VIB package on the host following KB2122392 above, and manually extracted the downloaded “vxlan.zip”.  It contains the following three VIB files:
  • esx-vxlan
  • esx-vsip
  • esx-dvfilter-switch-security

I tried to install them manually, but got errors indicating corrupted files on the ESXi host, and had to run the following commands first to restore the corrupted files.  **CAUTION – THE HOST NEEDS A REBOOT AFTER THESE TWO COMMANDS**:

  • # mv /bootbank/imgdb.tgz /bootbank/imgdb.gz.bkp
  • # cp /altbootbank/imgdb.tgz /bootbank/imgdb.tgz
  • # reboot

Once the host came back up, I continued with the manual VIB installation.  All three NSX VIBs installed successfully, the host now shows a healthy status in NSX host preparation, and Guest Introspection (GI) installed successfully.
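For reference, the manual install itself (per KB2122392) boils down to copying the extracted VIBs to the host and installing each one with esxcli.  The datastore path below is just an example, and the actual .vib file names carry version suffixes, so adjust accordingly:

# esxcli software vib install -v /vmfs/volumes/datastore1/vib/esx-vsip.vib
# esxcli software vib install -v /vmfs/volumes/datastore1/vib/esx-vxlan.vib
# esxcli software vib install -v /vmfs/volumes/datastore1/vib/esx-dvfilter-switch-security.vib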

 

Manually starting vRealize Hyperic 5.8.X Appliance

I’ve had this happen to me on the 5.8.4 appliance and thought I would share.  Normally the Hyperic appliance is deployed as a vApp consisting of two VMs, and when the vApp is started or restarted, they each start in the proper order.  This process might be needed if the database doesn’t exit/shut down cleanly and thus doesn’t start up right the next time; and if the database isn’t running, the Hyperic UI server won’t start.

Log into the server with SSH as root, using the hqadmin password that you specified during the vRealize Hyperic appliance deployment, unless you have changed them of course…

First, start the PostgreSQL database on the hypericdb VM.  This service has to be started under the hqadmin account.

  • To check the status of the service run the following command:
# su -c '/opt/vmware/vpostgres/9.1/bin/pg_ctl status -D /opt/vmware/vpostgres/9.1/data/' - hqadmin
  • To start the service run the following command:
# su -c '/opt/vmware/vpostgres/9.1/bin/pg_ctl start -D /opt/vmware/vpostgres/9.1/data/' - hqadmin

Once the database is running, start the Hyperic server on the hyperic VM.  This service has to be started under the hyperic account.

  • You can check the status of the hyperic server service by running the following command:
# su -c '/opt/hyperic/server-5.8.4-EE/bin/./hq-server.sh status' - hyperic
  • You can start the service by running the following command:
# su -c '/opt/hyperic/server-5.8.4-EE/bin/./hq-server.sh start' - hyperic

 

You can follow whether the Hyperic server starts properly by watching the bootstrap log on the xx01-m-hyperic server.

# tail -f /opt/hyperic/server-5.8.4-EE/bin/logs/bootstrap.log

 

Hope this helps anyone out there who still uses vRealize Hyperic.