How To Set Up an NFS Server on CentOS 7 for the vCloud Director v10 Appliance

Blog Date: 10/25/2019

For the purposes of this demonstration, I will be configuring NFS services on a CentOS 7 VM, deployed to a vSphere 6.7 U3 homelab environment.

NFS Server VM Configuration

Host Name: cb01-nfs01
IP Address: 10.0.0.35
CPU: 2
RAM: 4GB

Disk 1: 20GB – Linux installation (thin provisioned)
Disk 2: 100GB – Will be used for the vCD NFS share (thin provisioned)

Configure the vCD NFS share disk

For this demonstration, Disk 2 has been added to the VM but not yet configured. This “how-to” therefore assumes that a new, unformatted disk has been added to the VM, and that the NFS server has been powered on afterwards.

1) Open a secure shell to the NFS server. I have switched to the root account.
2) On my NFS server, the new disk is “/dev/sdb”. If you are unsure which device is the new disk on your server, run the following command to identify it:

fdisk -l

3) We need to prepare the newly added disk, in my case /dev/sdb, so launch fdisk against it with the following command:

fdisk /dev/sdb

4) Next with the fdisk utility, we need to partition the drive. I used the following sequence:
(for new partition) : n
(for primary partition) : p
(default 1) : enter
(default first sector) : enter
(default last sector) : enter

5) Before saving the partition, we need to change its type from the default ‘Linux’ to ‘Linux LVM’. We’ll first use the option ‘t’ to change the partition type, then enter the hex code ‘8e’ to set it to Linux LVM, like so:

Once you see “Command (m for help):” again, type ‘w’ to write the changes to disk and exit fdisk.
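For reference, the full fdisk exchange for steps 4 and 5 looks roughly like this (prompts abbreviated; the exact wording varies slightly between fdisk versions, and <enter> means accept the default):

# fdisk /dev/sdb
Command (m for help): n
Select (default p): p
Partition number (1-4, default 1): <enter>
First sector (default): <enter>
Last sector (default): <enter>
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'
Command (m for help): w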

Create a Physical Volume, Volume Group and Logical Volume

6) Now that the partition is prepared on the new disk, we can go ahead and create the physical volume with the following command:

# pvcreate /dev/sdb1

7) Now we need to create a volume group. You can name it whatever suits your naming standards. For this demonstration, I’ve created a volume group named vg_nfsshare_vcloud_director on /dev/sdb1, using the following command:

# vgcreate vg_nfsshare_vcloud_director /dev/sdb1

Creating a volume group gives us the option of adding other devices later to expand storage capacity when needed.
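For example, if a third disk were added to this VM later, say as /dev/sdc (a hypothetical device name used purely for illustration), the disk would be partitioned as Linux LVM just like in steps 3 through 5, and the volume group could then be grown along these lines:

# pvcreate /dev/sdc1
# vgextend vg_nfsshare_vcloud_director /dev/sdc1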

8) When it comes to creating logical volumes (LV), the distribution of space must take into consideration both current and future needs. It is considered good practice to name each logical volume according to its intended use.
– In this example I’ll create one LV named vol_nfsshare_vcloud_director using all the space.
– The -n option is used to indicate a name for the LV, whereas -l (lowercase L) is used to indicate a percentage of the remaining space in the container VG.
The full command used looks like:
# lvcreate -n vol_nfsshare_vcloud_director -l 100%FREE vg_nfsshare_vcloud_director

9) Before a logical volume can be used, we need to create a filesystem on top of it. I’ve used ext4 since it allows us both to increase and reduce the size of the LV.
The command used looks like:

# mkfs.ext4 /dev/vg_nfsshare_vcloud_director/vol_nfsshare_vcloud_director

Writing the filesystem will take some time to complete. Once successful, you will be returned to the command prompt.
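Before moving on, it doesn’t hurt to sanity-check the LVM stack. The summary commands below should report one physical volume, one volume group, and one logical volume with the names used above:

# pvs
# vgs
# lvs vg_nfsshare_vcloud_director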

Mounting the Logical Volume on Boot

10) Next, create a mount point for the LV. This will be used later on for the NFS share. The command looks like:

# mkdir -p /nfsshare/vcloud_director

11) To reliably identify the logical volume, we need to find its UUID (a non-changing attribute that uniquely identifies a formatted storage device). The command looks like:

# blkid /dev/vg_nfsshare_vcloud_director/vol_nfsshare_vcloud_director

In this example, the UUID is: UUID=2aced5a0-226e-4d36-948a-7985c71ae9e3

12) Now edit /etc/fstab and add an entry for the volume using the UUID obtained in the previous step.

# vi /etc/fstab

In this example, the entry would look like:

UUID=2aced5a0-226e-4d36-948a-7985c71ae9e3 /nfsshare/vcloud_director ext4 defaults 0 0

Save the change with :wq

13) Mount the new LV with the following command:

# mount -a

To verify that it mounted successfully, use a command similar to:

# mount | grep nfsshare
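If the mount succeeded, the output will look something like the line below (the device-mapper path reflects the VG/LV names, and the mount options may vary):

/dev/mapper/vg_nfsshare_vcloud_director-vol_nfsshare_vcloud_director on /nfsshare/vcloud_director type ext4 (rw,relatime,data=ordered)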

Assign Permissions to the NFS Share

14) According to the Preparing the Transfer Server Storage section of the vCloud Director 10.0 guide, you must ensure that the share’s permissions and ownership are 750 and root:root.

Setting the permissions on the NFS share would look similar to:

# chmod -R 750 /nfsshare/vcloud_director

Setting the ownership would look similar to:

# chown root:root /nfsshare/vcloud_director
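A quick way to confirm both settings is to list the directory itself; the output should show drwxr-x--- (750) and root root ownership, similar to the following (size and timestamp will differ):

# ls -ld /nfsshare/vcloud_director
drwxr-x--- 3 root root 4096 Oct 25 12:00 /nfsshare/vcloud_director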

Install the NFS Server Utilities

15) Install the nfs-utils package for the NFS server using the yum command:

# yum install -y nfs-utils

16) Once the packages are installed, enable and start NFS services:

# systemctl enable nfs-server rpcbind

# systemctl start nfs-server rpcbind
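To confirm both services are enabled and running before moving on, something like this can be used:

# systemctl is-enabled nfs-server rpcbind
# systemctl status nfs-server rpcbind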

17) Modify the /etc/exports file to add an entry for the directory /nfsshare/vcloud_director.

– According to the Preparing the Transfer Server Storage section of the guide, read-write access to the shared location for cells (named vcd-cell1-IP and vcd-cell2-IP in the example) is granted using the no_root_squash method.

# vi /etc/exports

18) For this demonstration, my vCD appliance IP is 10.0.0.38, so I added the following entry:

/nfsshare/vcloud_director 10.0.0.38(rw,sync,no_subtree_check,no_root_squash)

– There must be no space between each cell IP address and the left parenthesis that immediately follows it in the export line.
– If the NFS server reboots while the cells are writing data to the shared location, the sync option in the export configuration prevents data corruption in the shared location.
– The no_subtree_check option in the export configuration improves reliability when a subdirectory of a file system is exported.
– As this is only a lab, I have just a single vCD appliance for testing. For a proper production deployment, add a line for each appliance IP, as sketched below.
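A production /etc/exports with multiple cells would therefore look similar to the example below, one line per cell; the 10.0.0.39 address is a placeholder for a second cell’s IP:

/nfsshare/vcloud_director 10.0.0.38(rw,sync,no_subtree_check,no_root_squash)
/nfsshare/vcloud_director 10.0.0.39(rw,sync,no_subtree_check,no_root_squash)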

19) Each server in the vCloud Director server group must be able to mount the NFS share; you can confirm this by inspecting the export list for the NFS export. Export the share by running exportfs -a to export all NFS shares. To re-export, use exportfs -r.

# exportfs -a

– Validate that the NFS daemons are running on the server by using rpcinfo -p localhost or service nfs status.

# rpcinfo -p localhost

or

# service nfs status
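Healthy rpcinfo output will contain entries for the portmapper, mountd and nfs programs, roughly like the trimmed example below (versions and ports may differ on your system):

   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100005    3   tcp  20048  mountd
    100003    4   tcp   2049  nfs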

Configure the Firewall

20) We need to configure the firewall on the NFS server to allow NFS clients to access the NFS share. To do that, run the following commands on the NFS server.

# firewall-cmd --permanent --add-service mountd

# firewall-cmd --permanent --add-service rpc-bind
# firewall-cmd --permanent --add-service nfs
# firewall-cmd --reload
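To verify the rules took effect, list the allowed services; nfs, mountd and rpc-bind should now appear in the output:

# firewall-cmd --list-services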

21) That’s it. Now we can deploy the vCloud Director 10.0 appliance(s).

Optional NFS Share Testing

I highly recommend testing the NFS share before continuing with the vCloud Director 10.0 appliance deployment. For my testing, I have deployed a temporary CentOS 7 VM with the same hostname and IP address as my first vCD appliance, and installed nfs-utils on it:

# yum install -y nfs-utils

OT-1) Check the NFS shares available on the NFS server by running the following command on the test VM. Change the IP here to that of your NFS server.

# showmount -e 10.0.0.35
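The output should look similar to the following (with your own export path and allowed client list):

Export list for 10.0.0.35:
/nfsshare/vcloud_director 10.0.0.38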

As you can see, my NFS server shows one export for /nfsshare/vcloud_director, allowed for 10.0.0.38, my only vCD appliance.

OT-2) Create a directory on the NFS test VM to mount the NFS share /nfsshare/vcloud_director, which we created on the NFS server.

# mkdir -p /mnt/nfsshare/vcloud_director

OT-3) Use the command below to mount the NFS share /nfsshare/vcloud_director from the NFS server 10.0.0.35 at /mnt/nfsshare/vcloud_director on the NFS test VM.

# mount 10.0.0.35:/nfsshare/vcloud_director /mnt/nfsshare/vcloud_director

OT-4) Verify the mounted share on the NFS test VM using the mount command.

# mount | grep nfsshare

You can also use the df -hT command to check the mounted NFS share.

# df -hT

OT-5) Next we’ll create a file in the mounted directory to verify read and write access on the NFS share. IMPORTANT: the vCD appliance deployment expects this directory to be empty; leftover files could cause the deployment to fail, so remember to clean up after the test.

# touch /mnt/nfsshare/vcloud_director/test

OT-6) Verify the test file exists by using the following command:

# ls -l /mnt/nfsshare/vcloud_director/

OT-7) Clean your room: remove the test file so the directory is empty and ready for the vCD deployment.

# rm /mnt/nfsshare/vcloud_director/test

After successfully testing the share, we now know that we can write to that directory from the vCD appliance IP address, and that we can remove files.
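One last housekeeping note: because this test VM borrows the hostname and IP address of the first vCD appliance, unmount the share and shut the test VM down before deploying the real appliance. A minimal sketch of that cleanup on the test VM:

# umount /mnt/nfsshare/vcloud_director
# shutdown -h now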




In my next post, I will cover deploying the vCloud Director 10.0 appliance. Stay tuned!

Key Takeaways from VMworld US 2019 in San Francisco.

Looking back on this past week, all I can say is that it was pretty crazy. It was my first time in San Francisco, and I honestly left with mixed feelings about the city.

VMworld itself was pretty good! VMware cut back the general sessions to just two days (Monday and Tuesday), and I am honestly conflicted about the missing Thursday general session, as they usually showcase some non-VMware-related tech in that session.

If I could sum up VMworld in just one word this year, it would be: Kubernetes


VMware debuted its cloud management solution VMware Tanzu, in partnership with Pivotal, and showcased the ability to manage multiple Kubernetes clusters across multiple clouds from one central management dashboard, as well as Project Pacific, VMware’s endeavor to embed Kubernetes into vSphere.

VMware also added the Odyssey competition this year, just outside of the Hands-on Labs area. It was in the HOL style; however, it only gave attendees hints about what needed to be completed, which really allowed you to test your knowledge and skills in order to complete the task, without the hand-holding that the typical HOL provides. Teams were able to compete against each other for the best times, and there were some pretty decent prizes.

All in all, it was a decent VMworld, and they will be returning to San Francisco next year. I can’t say that I enjoyed the location, especially with the homeless problem San Francisco has, and I would much rather see VMworld bring its 20k+ attendees to a cleaner city, without the drugs, panhandlers, and human waste on the streets. You’d think that as someone who grew up on a farm, and is used to certain sights and smells, it wouldn’t have bothered me so much, but it took me by surprise.

This was also a special VMworld for me this year, as I was finally able to meet Pat Gelsinger. I can tell he really likes the community, and would love to stay longer and chat with everyone. I certainly would have loved the chance to talk with him longer, but I know he had other obligations that night.

The vExpert party was fun as always, and we were able to get a nice photo of the group.

The last session I attended this year was “If this then that for vSphere – the power of event-driven automation” with speakers William Lam and Michael Gasch. Several well-known VMware employees and bloggers were in attendance, including Alan Renouf, who was two chairs down from me, and for the first time I felt this crippling awkwardness of wanting to take pictures with all of them, but was so starstruck that I couldn’t bring myself to do it. I know these guys are just normal folks who happen to be stars in the vCommunity, but I had to contain myself and enjoy the session. Hopefully our paths will cross again, and I can personally meet them.

VMworld General Session, Day 2, Tuesday August 27th, 2019: VMware Tanzu demos, and new CTO announcement!

Day 3 of VMworld 2019 in San Francisco is underway, and it is the second day of general sessions. Clearly today’s theme is Kubernetes, and VMware’s Ray O’Farrell kicked off the keynote by talking about VMware Tanzu and Tanzu Mission Control.

The keynote then covered the integration of NSX-T with Tanzu. The ability to test changes, and to see their impact on the environment before going live, was truly amazing.

There was also an interesting demo with VMware Horizon and Workspace ONE, showcasing how workspaces can be deployed rapidly from the cloud, and how zero-trust security policies can be created within Workspace ONE with Carbon Black.

Pat jumped up on stage to announce that Ray O’Farrell (@ray_ofarrell) would be leading VMware’s cloud native apps division, and that Greg Lavender (@GregL_VMware) was named the new CTO of VMware.

VMware also announced a limited edition t-shirt that would be given away later that day. VMware had roughly 1000 of these shirts made up, and luckily I was able to get a shirt before they ran out.

Plenty of people were upset about not getting a shirt due to the limited run. Gives a whole new meaning to nerd rage…. (sorry I couldn’t help myself).

VMworld General Session, Day 1, Monday August 26th, 2019: VMware Tanzu, and Project Pacific.

The start of VMworld 2019 in San Francisco is underway, and Pat kicked off the general session talking about his excitement at being back in San Francisco, while poking fun at us “Vegas lovers”. Pat also talked about technology, our digital lives, and technology’s role as a force for good. He talked about charities and cancer research foundations.

Pat then talked about the law of unintended consequences: as technology has advanced, we as a society have given up certain aspects of privacy, and there is a need to combat, at scale, the disinformation so widely available on social media platforms.

Surprisingly, according to Pat, Bitcoin is Bad and contributes to the climate crisis.

The first major announcement dealt with Kubernetes, as VMware has been focusing on containers.

Pat then announced the creation of VMware Tanzu, an initiative to provide a common platform that allows developers to build modern apps, run enterprise Kubernetes, and manage Kubernetes for developers and IT.

The second major announcement was Project Pacific, an ambitious project to unite vSphere and Kubernetes for the future of modern IT.

Interestingly, Project Pacific was announced to be 30% faster than a traditional Linux VM, and 8% faster than solutions running on bare metal.

Project Pacific brings Kubernetes to the VMware Community, and will be offered by 20K+ Partner resellers, 4K+ Service providers and 1,100+ technology partners.

Tanzu also comes with Mission Control, a centralized tool allowing IT operations to manage Kubernetes for developers and IT.

First Time Speaking At The St. Louis VMUG UserCon

Blog Date: July 21, 2019

The VMUG leadership invited me to speak at the St. Louis VMUG UserCon on April 18, 2019, and share my presentation on How VMware Home Labs Can Improve Your Professional Growth and Career.

This would be my second time giving a public presentation, and I had left the Denver VMUG UserCon with a certain charge, a spring in my step as it were. I didn’t have a lot of time to prepare or to change up my presentation, as I also had a PSO customer to take care of. I arrived a day early for the speaker dinner that was being put on by the St. Louis VMUG leadership.

Prior to the dinner, I was able to explore the historic and picturesque city of St. Charles.

The next day, we all converged on the convention center for the St. Louis UserCon. This way to success!

Seeing your name as a speaker amongst a list of people you’ve looked forward to meeting, have met, or follow on social media, certainly is humbling.

This time my session was in the afternoon, so in the true fashion of many public speakers in the #vCommunity, I had all day to make tweaks. I was also able to join a few sessions. I finally found my room in the maze of this convention center and got set up.

The ninja, and co-leader of the St. Louis UserCon, Jonathan Stewart (@virtuallyanadmi), managed to take a picture of me giving my presentation.

A special thank you to the St. Louis VMUG leadership team, who invited me out to meet and share with their community: Marc Crawford (@uber_tech_geek), Jonathan Stewart (@virtuallyanadmi) and Mike Masters (@vMikeMast)

First Time Speaking At The Denver UserCon

Blog date: July 9th, 2019

This post is a little late, considering the Denver VMUG UserCon was on April 9th, but alas, I have been traveling a lot over the past few months.

The Denver UserCon was my first time speaking at a public event in front of a large audience. Public speaking is something that I have thought about doing for a while now, and how fitting that my first event was one year after my very first UserCon attendance, at the very same venue. If I am completely honest, as this was my first time, I was a little nervous, but like anything, you just have to take that leap of faith.

My good friend Ariel Sanchez (@arielsanchezmor) has been encouraging me to start this journey, and because this will give me the skills I need for a future role as a marketing engineer, I decided this would be the year to get my feet wet. But what to present?

I’d like to think that most presenters have a blog that they can reshape into a PowerPoint presentation, so I submitted two community sessions: one about vROps, and the other about VMware home labs. We in the #vCommunity use home labs a lot, not only for daily use, but also to better ourselves professionally. Home labs give us a safe place to experiment with new VMware releases, plan upgrades, and just familiarize ourselves with other products that we may not have a chance to work with in a production environment.

As such, the VMUG community leadership selected my presentation “How a VMware Home Lab Can Accelerate Your Career”.

Given the amount of attendees who came to learn and support my first presentation, I’d say that the desire to learn and build a VMware home lab is strong.

I’m not going to lie: as this was my first time, I was certainly nervous. Breathe…..count to five……..jump. The presentation was well received, and my friends who joined in support gave me some positive feedback, along with constructive feedback for improvement.

The evening was capped off with a nice dinner with friends from the #vCommunity.

Dinner with @vGonzilla @vCenterNerd @hcsherwin @crystal_lowe @scott_lowe and @arielsanchezmor

A special thank you to the Denver VMUG leadership team, who invited me out to meet and share with the community: Jason Valentine (@JasonV_VCP5), Tony Gonzalez (@vGonzilla) and Scott Seifert (@vScottSeifert)

Upgrading To vSphere 6.7 Update 1, and Using The vCenter Converge Tool: Part 1

I recently wrapped up a vSphere 6.7 U1 upgrade project while on a VMware Professional Services (PSO) engagement with a customer in Denver, Colorado. On this project, I had to upgrade their three VMware environments from 6.5 to 6.7 Update 1. This customer also had three external Platform Services Controllers (PSC), a configuration that is now deprecated in VMware architecture.

Check the VMware Interoperability, and Compatibility Matrices

The first thing I needed to do was to take inventory of the customer’s environment. I needed to know how many vCenters they had, whether they had external Platform Services Controllers, how many hosts and vSphere Distributed Switches (VDS) there were, and what versions everything was running.

  • From my investigation, this customer had three vCenters and three external Platform Services Controllers (PSC), all a part of the same SSO domain.
  • I also made note of which vCenter was paired with which external PSC. This information is critical not only for the vSphere 6.7 U1 upgrade, but also for the convergence process that I will be doing in part two of this blog series.
  • Looking at the customer’s ESXi hosts, the majority were running the same ESXi 6.5 build, but I did find a few Nutanix clusters, and six ESXi hosts still on version 5.5.
  • The customer had multiple vSphere Distributed Switches (VDS) that needed to be upgraded to 6.5 before the 6.7 upgrade.

The second thing that I needed to do was to look at the model of each ESXi host and determine whether it was supported for the vSphere 6.7 U1 upgrade. I also needed to validate the firmware and BIOS each host was using, to see if I needed to have the customer upgrade them. We’ll plug this information into the VMware Compatibility Guide.

  • From my investigation, the six ESXi hosts running ESXi 5.5 were not compatible with 6.7 U1; however, they were compatible with the current build of ESXi 6.5 the customer was running on their other hosts. I would need to upgrade these hosts to ESXi 6.5 before starting the vSphere 6.7 U1 upgrade.
  • This customer had a mix of Dell and Cisco UCS hosts, and almost all needed to have their firmware and BIOS upgraded to be compatible with ESXi 6.7 U1.

The third thing I needed to check was which other platforms, whether owned by VMware or bolt-on third-party products, I needed to worry about for this upgrade.

  • The customer is using a later version of VMware’s Horizon solution, and luckily for me, it is compatible with vSphere 6.7 U1, so no worries there.
  • The customer has Zerto 6.0 deployed, and unfortunately it needed to be upgraded to a compatible version.
  • The customer has Actifio backup solution, but that is also running a compatible version, so again no need to update it.

Let’s Get Those ESXi 5.5 Hosts Upgraded to 6.5

I needed to schedule an outage with the customer, as they had three offsite locations, with two ESXi 5.5 hosts each. These hosts were using local storage to house and run the VMs, so even though they were in a host cluster, HA was not an option, and the VMs would need to be powered off.

Once I had the outage secured, I was able to move forward with upgrading these six hosts to ESXi 6.5.

Time to Upgrade the vSphere Distributed Switch (VDS)

For this portion of the upgrade, I only needed to upgrade the customer’s VDSs to 6.5. This portion of the upgrade was fast, and I was able to do it midday without the customer experiencing an outage. We did submit a formal maintenance request for visibility, and CYA. The total upgrade time to do all of their VDSs was less than 15 minutes; each switch took roughly a minute.

Upgrade the External Platform Services Controllers Before the vCenter Appliances

Now that I had all hosts on a compatible ESXi 6.5 version, I could move forward with the upgrade. I was able to do this upgrade during the day, as the customer would only lose the ability to manage their VMs using the vCenters. I made backups of the PSC and vCSA databases, and created snapshots of the VMs just in case.

I first needed to upgrade the three external PSCs to 6.7 U1, so I simply attached the vCSA ISO to my jump VM and launched the installer .exe. I did this process one PSC at a time until they were all upgraded to 6.7 U1.

Upgrade the vCenter Appliances to 6.7 Update 1

Now that the external Platform Services Controllers are on 6.7 U1, it is time to upgrade the vCenters. The process is the same with the installer, so I just did one vCenter at a time. Both the external PSCs and the vCSAs upgraded without issue, and within a couple of hours they had all finished the vSphere 6.7 Update 1 upgrade.

Upgrade Compatible ESXi Hosts to 6.7 Update 1

I really wanted to use the now-embedded VMware Update Manager (VUM), but I either faced users who re-attached ISOs to their Horizon VMs, or administrators who were upgrading or installing VMware Tools. In one cluster I even happened to find a host that had improper networking configured compared to its peers in the cluster. Once I got all of that out of the way, I was able to schedule VUM to work its way down through each cluster and upgrade the ESXi hosts to 6.7 Update 1. There were still some fringe cases where VUM wouldn’t work as intended, and I needed to do one host at a time.

Conclusion for the Upgrade

In the end, upgrading the customer’s three environments (vCSA, PSC, and ESXi) to 6.7 Update 1 took me about a couple of weeks to do alone. Not too shabby, considering I finished ahead of schedule even with all of the issues I faced. After the upgrade, the customer started having their Cisco UCS blades purple-screen at random. After we opened a case with GSS, Cisco came out that week with an emergency patch for the fnic driver on the customer’s UCS blades, for the very issue they were facing. The customer was able to quickly patch the blades. Talk about good timing.

Part 2 Incoming

Part 2 of this series will focus on using the vCenter Converge Tool. Stay tuned.

Blog Date: 4/15/2019

2019 VMware vExpert Announcement

It’s that time of year again. I’m honored and humbled to continue to be a part of the VMware vExpert program. This program challenges me every day to continue to learn, and to contribute to the #vCommunity. For me, this isn’t just some title. This is a family of community warriors where we learn from and help each other grow. Everyone in some way gives back to the community. This year, I am also excited to try my hand at public speaking, and give back to the VMUG community as a community session speaker. I don’t think that I would have had the courage to apply to be a speaker if it wasn’t for my fellow vExperts encouraging me to do so.

Congrats to all the new and returning vExperts! https://blogs.vmware.com/vexpert/2019/03/07/vexpert-2019-award-announcement/

vRealize Operations Manager Dashboard: vSphere DRS Cluster Health. Part 1

A few weeks ago, I had a customer ask me about creating a custom vROps dashboard for them, so that they could monitor the health of their clusters. For those of you who are unaware, VMware has packaged vROps with a widget called “DRS Cluster Settings” that does something similar, and looks like this:

The idea behind this widget is that it will list all clusters attached to the vCenter, giving you high-level information such as the DRS setting, and the memory and CPU workload of the cluster. With a cluster selected, in the lower window you will see all of the ESXi hosts that are a part of that cluster, with their CPU and memory workloads as well. If you are interested in this widget, it can be added when creating a new custom dashboard, and you will find it at the bottom of the available widget list.

While this widget gave me some high level detail, it wasn’t exactly what I wanted, so I decided to create my own to give a deeper level of detail. I used the widget above as a template, and went from there.

This dashboard gives me the current memory and CPU workloads for each cluster in the upper left box, and once a cluster is selected, it populates the right and two middle boxes with data. The top right box gives me the memory and CPU workload for the past 24 hours, and the two middle boxes give me the CPU demand and memory demand forecasts for the next 30 days.

Much like the widget mentioned above, selecting a cluster on the upper left side populates a box on the lower left side with all hosts attached to that cluster. Once a host is selected, the lower right box shows the memory and CPU workload for the past 24 hours for the selected host. This dashboard is slightly larger than a page will allow, so unfortunately users need to scroll down to see all of the data, but I believe it gives an outstanding bird’s-eye view of the cluster’s DRS capabilities.

In my next blog post, I’ll break down what’s involved in creating this dashboard.

The Home Lab Part 2

The very long overdue follow-up post to my The Home Lab entry made earlier this year. I recently purchased another 64GB (2x 32GB) of Black Diamond DDR4 memory to bring my server up to 128GB. I had some old 1TB spinning disks that I installed in the box for some extra storage as well, although I will phase them out with more SSDs in the future. So as a recap, this is my setup now:


Motherboard

SUPERMICRO MBD-X10SDV-TLN4F-O Mini ITX Server Motherboard, Xeon processor D-1541, FCBGA 1667 (Newegg)

Memory

(x2) Black Diamond Memory 64GB (2 x 32GB) 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory, Model BD32GX22133MQR26 (Newegg)

M.2 SSD

WD Blue M.2 250GB Internal SSD Solid State Drive – SATA 6Gb/s – WDS250G1B0B (Newegg)

SSD

(x2) SAMSUNG 850 PRO 2.5″ 512GB SATA III 3D NAND Internal Solid State Drive (SSD) MZ-7KE512BW (Newegg)

Case

SUPERMICRO CSE-721TQ-250B Black Mini-Tower Server Case, 250W Flex ATX Multi-output Bronze Power Supply (Newegg)

Additional Storage

(x2) 1TB Western Digital Black spinning disks

Initially when I built the lab, I decided to use VMware Workstation, but I recently rebuilt it, installing ESXi 6.7 as the base, largely for better performance and reliability. For the time being this will be a single-host environment, but keeping with the versioning, vCSA and vROps are 6.7 as well. Can an HTML5 interface be sexy? This has come a long way from the Flash client days.


I decided against fully configuring this host as a single vSAN node, just so that I can have the extra disk.  However, when I do decide to purchase more hardware and build a second or third box, this setup will allow me to grow my environment, and reconfigure it for vSAN use.  Although I am tempted to ingest the SSDs into my NAS, carve out datastores from it and not use vSAN, at least for the base storage.


Networking is flat for now, so there’s nothing really to show here. As I expand and add a second host, I will be looking at some networking hardware, and will have my lab in its own isolated space.

Now that I am in the professional services space, working with VMware customers, I needed a lab that was more production-like. I’m still building out the lab, so I’ll have more content to come.