How To Manually Set The IP Address On Photon OS From Command Line

Blog Date: December 8th 2022.

Out in the field, I have had Photon OS appliances like Site Recovery Manager and vRealize Lifecycle Manager deploy without the proper networking configuration. In my cases, these appliances were deployed to a network where the gateway and DNS were reachable, but the appliance sometimes completed its first boot without successfully configuring the network. Sometimes you can get out of this situation by simply redeploying the appliance. Other times the appliance just refuses to configure the networking, so a manual approach is required.

DISCLAIMER: I have only done this process on VMware Photon OS based appliances that did not successfully complete their first boot. Your mileage may vary attempting this on an appliance that has successfully deployed.

As always – Take snapshots before proceeding.

Commands To Update Photon OS Network Configuration

Using the VMware console, we first need to check and see what interface is being used with this command:

# /opt/vmware/share/vami/vami_interfaces

Next we need to set the interface IP, NETMASK, and GATEWAY with this command:

# /opt/vmware/share/vami/vami_set_network <INTERFACE> STATICV4 <IP_ADDRESS> <NETMASK> <GATEWAY>
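
For example, assuming the interface reported in the previous step is eth0 and the appliance should get 10.0.0.50 with a /24 netmask and a gateway of 10.0.0.1 (these values are purely illustrative, so substitute your own), the command would look similar to:

# /opt/vmware/share/vami/vami_set_network eth0 STATICV4 10.0.0.50 255.255.255.0 10.0.0.1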

To set the domain suffix and DNS, use this command:

# /opt/vmware/share/vami/vami_set_dns -d <domain suffix> <DNS1> <DNS2>
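
For example, with a hypothetical domain suffix of corp.local and DNS servers 10.0.0.10 and 10.0.0.11, it would look similar to:

# /opt/vmware/share/vami/vami_set_dns -d corp.local 10.0.0.10 10.0.0.11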

Assuming no errors, reboot the appliance.

In the event those commands do not work (not all Photon OS builds are 100% alike), you can also try the method below:


Update Photon OS Network Configuration File:


Let’s first list the network configuration files. To do this, run the following command:

# ls /etc/systemd/network

In this example, we see one configuration file “10-eth0.network”. We will use the next command to edit the file using vi and make the appropriate changes:

# vi /etc/systemd/network/10-eth0.network
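
For reference, a static IPv4 configuration in this file would look similar to the sketch below. The interface name, addresses, and domain are example values only, and your appliance's file may contain additional sections that should be left in place:

[Match]
Name=eth0

[Network]
Address=10.0.0.50/24
Gateway=10.0.0.1
DNS=10.0.0.10
Domains=corp.local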

Once you are done making the necessary changes, do a “:wq!” to save the changes and quit the editor.

Reboot the appliance for the changes to take effect.

Current CaptainvOPS Homelab 2020

It has been a while since I posted updates about my VMware home lab. If you have followed my recent posts (here) and (here), I have made some minor upgrades to my original home lab. In the latter part of 2019, I expanded my home lab and added an additional sister host, fortunately with the same hardware.

In short, this is a two-host setup, currently with 128GB of DDR4 memory and a single 8-core socket for compute in each host, attached to my NAS, which provides iSCSI VMFS 6 storage.

Storage
The white QNAP here is not providing storage for my lab; it only acts as the conduit to the attached black QNAP expansion bay, which is solely used for lab storage. That bay is equipped with four WD Blue 3D NAND 1TB SSDs running as a pooled-storage LUN in VJBOD mode. In this mode I am just shy of 4 TB usable capacity.

I do have some local storage in the hosts that may or may not be used for vSAN in the future, but right now it is unnecessary.

Overall Capacity
Each host has a single-socket Xeon processor with 8 cores, so combined that gives me 33.59 GHz of compute. Each host has 128GB of memory, for around 255.8 GB total, and there is 3.17 TB of usable shared storage. Each host also has roughly 1.5 TB of local storage that I use mostly for nested labs, dual 10GbE NICs, and an additional NIC specifically for console connection, which is super handy. If I need additional storage I can always carve it out of my white NAS, but as that runs Plex, there is a noticeable performance hit while streaming, which is the main reason I am using the black expansion bay to take all of the I/O from the lab.

Motherboard (x2)


SUPERMICRO MBD-X10SDV-TLN4F-O Mini ITX Server Motherboard Xeon processor D-1541 FCBGA 1667 

Newegg

Cost: First Host 11/14/2017 = $789.99
Cost: Second Host 10/22/2019 = $929.99


Memory (x4)


Black Diamond Memory 64GB (2 x 32GB) 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model BD32GX22133MQR26
 Newegg

Cost: First Host Set 64GB + 64GB (128GB total) 11/14/2017 & 10/31/2018 ($693.34 + $698.99) = $1,392.33
Cost: Second Host Set 128GB 10/22/2019 = $539.22 ($259.99 ea.)                    


M.2 SSD (x2)


WD Blue M.2 250GB Internal SSD Solid State Drive – SATA 6Gb/s – WDS250G1B0B

Newegg
Cost: First Host: 11/14/2017 = $89.99
Cost: Second Host 10/22/2019 = $49.99


SSD (x2)


SAMSUNG 850 PRO 2.5″ 512GB SATA III 3D NAND Internal Solid State Drive (SSD) MZ-7KE512BW

Newegg
Cost: 11/14/2017 $459.98 ($229.99 ea.)


Case (x2)


SUPERMICRO CSE-721TQ-250B Black Mini-Tower Server Case 250W Flex ATX Multi-output Bronze Power Supply
Newegg
Cost: First host 11/14/2017 = $159.99
Cost: Second host 10/22/2019 = $184.85


NAS Expansion

QNAP TR-004-US 4-Bay USB 3.0 Type-C (5Gbps) Hardware RAID Expansion Enclosure / DAS
Newegg
Cost: 10/22/2019 = $199.00


NAS SSDs (x4)

WD Blue 3D NAND 1TB Internal SSD – SATA III 6Gb/s 2.5″/7mm Solid State Drive – WDS100T2B0A

Newegg

Cost: 10/22/2019 = $459.96 ($114.99 ea.)


ESXi USB (x2)

SanDisk – Ultra Fit™ 32GB USB 3.1 Flash Drive – Black

Best Buy
Cost: 10/22/2019 = $35.98


UPS

APC SMC1500C 1440 VA 900 Watts 8 Outlets Pure Sinewave Smart-UPS with Smart Connect (Replaces SMC1500)
Newegg
Cost: 10/22/2019 = $369.99


Switch

TP-Link PoE Switch Gigabit 8 Port | 4 Port PoE 55W | Easy Smart | 802.3af Compliant | Shielded Ports | Traffic Optimization

Amazon
Cost: 10/22/2019 = $59.49


CAT6 (x6)

Insignia™ – 8′ Cat-6 Network Cable – Gray
Best Buy

Cost: $89.94 ($14.99 ea.)




Home lab total cost as of today 1/25/2020: $5,580.70

According to my APC, the total power being consumed at the moment is 185 watts (+/-). This also includes the white NAS and other home network equipment.

The lab itself is used for various things now that I’ve been working with VMware customers as a PSO subcontractor. I have nested hosts for different vCloud Director labs, plus vRealize Operations Manager, NSX, and vRealize Log Insight, to name a few of the VMware appliances. I’ve been adding more to it over time based on customer needs. I also use this lab to teach myself scripting when I find the time. I currently have around 50 virtual machines in total, but that can change depending on the need for other labs.

If you have a VMware homelab, please consider sharing the config at William Lam’s site https://virtuallyghetto.com/homelab

How To Setup NFS Server on CentOS 7 for vCloud Director v10 Appliance

Blog Date: 10/25/2019
Last Updated: 04/06/2020

For the purposes of this demonstration, I will be configuring NFS services on a CentOS 7 VM, deployed to a vSphere 6.7 U3 homelab environment.

NFS Server VM Configuration

Host Name: cb01-nfs01
IP Address: 10.0.0.35
CPU: 2
RAM: 4GB

Disk 1: 20GB – Linux installation (thin provisioned)
Disk 2: 100GB – Will be used for the vCD NFS share (thin provisioned)

Configure the vCD NFS share disk

For this demonstration, Disk 2 has been added to the VM but not yet configured. This “how-to” therefore assumes that a new disk has been added to the VM and that the NFS server has been powered on afterwards.

1) Open a secure shell to the NFS server. I have switched to the root account.
2) On my NFS server, the new disk will be “/dev/sdb”. If you are unsure which device is the new disk on your server, run the following command to identify it:

fdisk -l

3) We need to partition the newly added disk, in my case /dev/sdb, so run the following command:

fdisk /dev/sdb

4) Next, within the fdisk utility, we need to partition the drive. I used the following key sequence:
(for new partition) : n
(for primary partition) : p
(default 1) : enter
(default first sector) : enter
(default last sector) : enter

5) Before saving the partition, we need to change it to ‘Linux LVM’ from its current type ‘Linux’. We’ll first use the option ‘t’ to change the partition type, then use the hex code ‘8e’ to set it to Linux LVM, like so:

Command (m for help): t
Selected partition 1

Hex code (type L to list all codes): 8e
Changed type of partition ‘Linux’ to ‘Linux LVM’.

Command (m for help): w

When you are returned to the “Command (m for help):” prompt, type ‘w’ to write the partition table and exit.

Create a Physical Volume, Volume Group and Logical Volume

6) Now that the partition is prepared on the new disk, we can go ahead and create the physical volume with the following command:

# pvcreate /dev/sdb1

7) Now we need to create a volume group. You can name it whatever suits your naming standards. For this demonstration, I’ve created a volume group named vg_nfsshare_vcloud_director on /dev/sdb1, using the following command:

# vgcreate vg_nfsshare_vcloud_director /dev/sdb1
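
To confirm the volume group was created, you can optionally check it with a command similar to:

# vgdisplay vg_nfsshare_vcloud_director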

Creating a volume group gives us the option of adding other devices later to expand storage capacity when needed.

8) When it comes to creating logical volumes (LV), the distribution of space must take into consideration both current and future needs. It is considered good practice to name each logical volume according to its intended use.
– In this example I’ll create one LV named vol_nfsshare_vcloud_director using all the space.
– The -n option is used to indicate a name for the LV, whereas -l (lowercase L) is used to indicate a percentage of the remaining space in the container VG.
The full command used looks like:
# lvcreate -n vol_nfsshare_vcloud_director -l 100%FREE vg_nfsshare_vcloud_director
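
Optionally, verify that the logical volume was created with a command similar to:

# lvs vg_nfsshare_vcloud_director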

9) Before a logical volume can be used, we need to create a filesystem on top of it. I’ve used ext4 since it allows us both to increase and reduce the size of the LV.
The command used looks like:

# mkfs.ext4 /dev/vg_nfsshare_vcloud_director/vol_nfsshare_vcloud_director

Writing the filesystem will take some time to complete. Once successful you will be returned to the command prompt.

Mounting the Logical Volume on Boot

10) Next, create a mount point for the LV. This will be used later on for the NFS share. The command looks like:

# mkdir -p /nfsshare/vcloud_director

11) To better identify the logical volume, we need to find its UUID (a non-changing attribute that uniquely identifies a formatted storage device). The command looks like:

# blkid /dev/vg_nfsshare_vcloud_director/vol_nfsshare_vcloud_director

In this example, the UUID is: UUID=2aced5a0-226e-4d36-948a-7985c71ae9e3

12) Now edit the /etc/fstab and add the disk using the UUID obtained in the previous step.

# vi /etc/fstab

In this example, the entry would look like:

UUID=2aced5a0-226e-4d36-948a-7985c71ae9e3 /nfsshare/vcloud_director ext4 defaults 0 0

Save the change with :wq

13) Mount the new LV with the following command:

# mount -a

To verify that it mounted successfully, use a command similar to:

# mount | grep nfsshare
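
If the mount was successful, the output will contain a line similar to the one below, where the device-mapper path reflects the volume group and logical volume names (mount options may vary):

/dev/mapper/vg_nfsshare_vcloud_director-vol_nfsshare_vcloud_director on /nfsshare/vcloud_director type ext4 (rw,relatime)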

Assign Permissions to the NFS Share

14) According to the Preparing the Transfer Server Storage section of the vCloud Director 10.0 guide, you must ensure that the share’s permissions and ownership are 750 and root:root.

Setting the permissions on the NFS share would look similar to:

# chmod 750 /nfsshare/vcloud_director

Setting the ownership would look similar to:

# chown root:root /nfsshare/vcloud_director
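
As a quick sanity check, ls should now show the directory with 750 permissions and root ownership, similar to the output below (link count and timestamp will differ):

# ls -ld /nfsshare/vcloud_director
drwxr-x--- 3 root root 4096 Oct 25 10:30 /nfsshare/vcloud_director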

Install the NFS Server Utilities

15) Install the NFS utilities package for the NFS server using the yum command:

# yum install -y nfs-utils

16) Once the packages are installed, enable and start NFS services:

# systemctl enable nfs-server rpcbind

# systemctl start nfs-server rpcbind

17) Modify the /etc/exports file to make an entry for the directory /nfsshare/vcloud_director.

– According to the Preparing the Transfer Server Storage guide, the method for allowing read-write access to the shared location for two cells named vcd-cell1-IP and vcd-cell2-IP is the no_root_squash method.

# vi /etc/exports

18) For this demonstration, my vCD appliance IP on the second NIC is 10.0.0.38, so I add the following:

/nfsshare/vcloud_director 10.0.0.38(rw,sync,no_subtree_check,no_root_squash)

– There must be no space between each cell IP address and the left parenthesis that immediately follows it in the export line.
– The sync option in the export configuration prevents data corruption in the shared location if the NFS server reboots while the cells are writing data to it.
– The no_subtree_check option in the export configuration improves reliability when a subdirectory of a file system is exported.
– As this is only a lab, I have a single vCD appliance for testing. For a proper production deployment, add an additional line for each appliance IP, as in the example below.
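
For example, a hypothetical two-cell deployment with appliance IPs 10.0.0.38 and 10.0.0.39 would have export entries similar to:

/nfsshare/vcloud_director 10.0.0.38(rw,sync,no_subtree_check,no_root_squash)
/nfsshare/vcloud_director 10.0.0.39(rw,sync,no_subtree_check,no_root_squash)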

19) Each server in the vCloud Director server group must be allowed to mount the NFS share; you can confirm this by inspecting the export list for the NFS export. Export the share by running exportfs -a to export all NFS shares. To re-export after changing /etc/exports, use exportfs -r.

# exportfs -a

– To check the export, run the following command:

# exportfs -v
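
With the single export above, the output would look similar to the following (the exact option list can vary with NFS defaults):

/nfsshare/vcloud_director 10.0.0.38(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)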

– Validate that the NFS daemons are running on the server by using rpcinfo -p localhost or by checking the service status:

# rpcinfo -p localhost

or

# systemctl status nfs-server.service

Configure the Firewall

20) We need to configure the firewall on the NFS server to allow NFS clients to access the NFS share. To do that, run the following commands on the NFS server:

# firewall-cmd --permanent --add-service mountd

# firewall-cmd --permanent --add-service rpc-bind
# firewall-cmd --permanent --add-service nfs
# firewall-cmd --reload
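
Optionally, confirm that the services were added with:

# firewall-cmd --list-services

The output should now include nfs, mountd and rpc-bind alongside any services that were already allowed.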

21) That’s it. Now we can deploy the vCloud Director 10.0 appliance(s).

Optional NFS Share Testing

I highly recommend testing the NFS share before continuing with the vCloud Director 10.0 appliance deployment. For my testing, I deployed a temporary CentOS 7 VM with the same hostname and IP address as my first vCD appliance, and installed nfs-utils on the test VM:
# yum install -y nfs-utils

OT-1) Check the NFS shares available on the NFS server by running the following command on the test VM. Change the IP address here to that of your NFS server.

# showmount -e 10.0.0.35
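
In my lab the output looks similar to the following (your IP addresses will differ):

Export list for 10.0.0.35:
/nfsshare/vcloud_director 10.0.0.38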

As you can see, the NFS server shows one export, /nfsshare/vcloud_director, allowed for 10.0.0.38, my only vCD appliance.

OT-2) Create a directory on NFS test VM to mount the NFS share /nfsshare/vcloud_director which we have created on the NFS server.

# mkdir -p /mnt/nfsshare/vcloud_director

OT-3) Use the command below to mount the NFS share /nfsshare/vcloud_director from the NFS server 10.0.0.35 at /mnt/nfsshare/vcloud_director on the NFS test VM.

# mount 10.0.0.35:/nfsshare/vcloud_director /mnt/nfsshare/vcloud_director

OT-4) Verify the mounted share on the NFS test VM using the mount command.

# mount | grep nfsshare

You can also use the df -hT command to check the mounted NFS share.

# df -hT

OT-5) Next we’ll create a file in the mounted directory to verify read and write access on the NFS share. IMPORTANT: during the vCD appliance deployment this directory is expected to be empty, otherwise the deployment could fail, so remember to clean up after the test.

# touch /mnt/nfsshare/vcloud_director/test

OT-6) Verify the test file exists by using the following command:

# ls -l /mnt/nfsshare/vcloud_director/

OT-7) Clean your room. Clean up the directory so that it is ready for the vCD deployment.

# rm /mnt/nfsshare/vcloud_director/test

After successfully testing the share, we now know that we can write to that directory from the vCD appliance IP address, and that we can remove files.




In my next post, I will cover deploying the vCloud Director 10.0 appliance. Stay tuned!

The Home Lab Part 2

The very long overdue follow-up post to my The Home Lab entry made earlier this year. I recently purchased another 64GB (2x 32GB) of Black Diamond DDR4 memory to bring my server up to 128GB. I also had some old 1TB spinning disks that I installed in the box for some extra storage, although I will phase them out with more SSDs in the future. So as a recap, this is my setup now:


Motherboard

SUPERMICRO MBD-X10SDV-TLN4F-O Mini ITX Server Motherboard Xeon processor D-1541 FCBGA 1667

Newegg

 

Memory


(x2) Black Diamond Memory 64GB (2 x 32GB) 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model BD32GX22133MQR26

Newegg

M.2 SSD


WD Blue M.2 250GB Internal SSD Solid State Drive – SATA 6Gb/s – WDS250G1B0B

Newegg

SSD


(x 2) SAMSUNG 850 PRO 2.5″ 512GB SATA III 3D NAND Internal Solid State Drive (SSD) MZ-7KE512BW

Newegg

 

Case


SUPERMICRO CSE-721TQ-250B Black Mini-Tower Server Case 250W Flex ATX Multi-output Bronze Power Supply

Newegg

 

Additional Storage

x2 1TB Western Digital Black spinning disks

 

Initially when I built the lab I decided to use VMware Workstation, but I recently rebuilt it, installing ESXi 6.7 as the base, largely for better performance and reliability. For the time being this will be a single-host environment, but keeping with the versioning, vCSA and vROps are 6.7 as well. Can an HTML5 interface be sexy? It has come a long way from the Flash client days.


I decided against fully configuring this host as a single vSAN node, just so that I can have the extra disk. However, when I do decide to purchase more hardware and build a second or third box, this setup will allow me to grow my environment and reconfigure it for vSAN use. That said, I am tempted to ingest the SSDs into my NAS, carve out datastores from it, and not use vSAN, at least for the base storage.


Networking is flat for now, so there’s nothing really to show here. As I expand and add a second host, I will be looking at some networking hardware and putting my lab in its own isolated space.

Now that I am in the professional services space, working with VMware customers, I needed a lab that was more production-like. I’m still building out the lab, so I’ll have more content to come.

The Home Lab Hardware


Setup

I decided to go with a Supermicro build as I wanted something power efficient yet expandable, and this motherboard supports up to 128GB of ECC RDIMM DDR4 2133MHz server-grade memory. With this setup, when I feel the need to expand my lab, I can build two more nodes and have a rather nice vSAN cluster. However, I’m hoping the cost of DDR4 memory will have come down by then…

I did look at the Supermicro SYS-E300-8D and SYS-E200-8D style micro servers, but like most people, I was concerned about the fan noise and decided to go with a slightly larger chassis to get the larger fan. Honestly, the fan in the unit I bought makes no more noise than a regular desktop computer.

Here’s my hardware:


Motherboard


SUPERMICRO MBD-X10SDV-TLN4F-O Mini ITX Server Motherboard Xeon processor D-1541 FCBGA 1667 

Newegg


Memory


Black Diamond Memory 64GB (2 x 32GB) 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model BD32GX22133MQR26

Newegg


M.2 SSD


WD Blue M.2 250GB Internal SSD Solid State Drive – SATA 6Gb/s – WDS250G1B0B

Newegg


SSD


(x 2) SAMSUNG 850 PRO 2.5″ 512GB SATA III 3D NAND Internal Solid State Drive (SSD) MZ-7KE512BW

Newegg


Case


SUPERMICRO CSE-721TQ-250B Black Mini-Tower Server Case 250W Flex ATX Multi-output Bronze Power Supply

Newegg

Who doesn’t love some internal shots after the lab-box has been put together?  🙂

In the coming blog posts, I’ll be building out my lab.  Stay tuned….