vSphere with Tanzu: Configuring the NSX-ALB Controller

Blog Date: August 12, 2022
NSX-ALB Controller version: 22.1.1
vSphere version 7.0.3 Build 20150588

In my previous blog, vSphere with Tanzu: Deployment of NSX-ALB Controller, I went over the basic NSX-ALB controller deployment and activation. In this blog, I will go over the configuration of the controller in preparation for deploying Tanzu to the target workload compute cluster.

Picking up where I left off previously, I had just assigned the Essentials license to the controller.

Next, we need to configure our default cloud. 


Configure the Default Cloud

Click on the Infrastructure tab, then select Clouds in the left menu. We see the controller comes with a default cloud already configured, and we can edit this for our needs. To the right of the green dot, you’ll see several icons. Click the gear to the right of the pencil.

On the Convert Cloud Type window, select “VMware vCenter/vSphere ESX” in the cloud type drop menu. Click YES, CONTINUE.

For the Default-Cloud, select “Prefer Static Routes vs Directly Connected Network” under Default Network IP Address Management.

Then, under the vCenter/vSphere section, click add credentials.

Here you will need to add the FQDN of the vCenter, along with the credentials of the service account the controller will use to access the vCenter. We can use the example spreadsheet we filled out earlier. Click CONNECT.

This will kick you back to the setup wizard, where we now see a blue information bar: “VMware vCenter/vSphere ESX cloud needs to be created before proceeding. Please ‘Save & Relaunch’ the modal to complete setup.” However, the SAVE & RELAUNCH button in the lower right corner is grayed out. We first need to deselect the “Use Content Library” option. Now we can click SAVE & RELAUNCH.

Make sure the Data Center drop-down menu shows the desired data center; if not, select it.
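For those who prefer to script this step, the same conversion can be done against the controller’s REST API. The sketch below uses the avisdk Python package (pip install avisdk); the controller FQDN, credentials, and datacenter name are placeholders from my lab spreadsheet, and the field names reflect my reading of the /api/cloud schema for 22.1.x, so verify them against your controller before relying on this.

```python
# Minimal sketch (not a drop-in script): convert Default-Cloud to a vCenter cloud.
# Controller FQDN, credentials, and datacenter below are lab placeholders; verify
# the field names against your controller's /api/cloud object schema.
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("alb-controller.lab.local", "admin", "VMware1!",
                             tenant="admin", api_version="22.1.1")

cloud = api.get_object_by_name("cloud", "Default-Cloud")
cloud["vtype"] = "CLOUD_VCENTER"                     # "VMware vCenter/vSphere ESX"
cloud["vcenter_configuration"] = {
    "vcenter_url": "vcenter.lab.local",              # vCenter FQDN
    "username": "svc-nsxalb@vsphere.local",          # service account
    "password": "VMware1!",
    "datacenter": "Lab-Datacenter",                  # Data Center drop-down value
    "privilege": "WRITE_ACCESS",
}
resp = api.put("cloud/%s" % cloud["uuid"], data=cloud)
print(resp.status_code, resp.reason)
```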

Now we can configure the management network information. Select the management network, then add its CIDR and gateway. Under the Static IP Address Pool, we need to click the ADD button. This pool needs five consecutive IP addresses.
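Because the wizard expects a pool of five consecutive addresses inside the management CIDR, I like to sanity-check the values from the spreadsheet first. A quick plain-Python check with placeholder addresses:

```python
# Sanity-check the management static IP pool: exactly five consecutive addresses,
# all inside the management network CIDR (placeholder values from the spreadsheet).
import ipaddress

mgmt_cidr = ipaddress.ip_network("192.168.10.0/24")     # management network CIDR
pool_start = ipaddress.ip_address("192.168.10.50")      # first address in the pool
pool_end = ipaddress.ip_address("192.168.10.54")        # last address in the pool

assert int(pool_end) - int(pool_start) + 1 == 5, "pool is not exactly five addresses"
assert all(ipaddress.ip_address(i) in mgmt_cidr
           for i in range(int(pool_start), int(pool_end) + 1)), "pool is outside the CIDR"
print("Management static pool looks good:", pool_start, "-", pool_end)
```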

Click the SAVE button in the lower left. We will come back and edit this section later on.


________________________________________________________________________________

Configure the NSX-ALB Controller to use a certificate

Now we need to update the NSX-ALB SSL certificate. We can either use a self-signed certificate, or we can create a CSR and have the certificate signed by a CA. In my lab, I have applied the CA-signed certificate.

Check out my blog where I go over both options and how to create them here: vSphere with Tanzu: Replacing NSX-ALB Controller Certificates. Applying a certificate to the controller has to be done before proceeding to the next step!
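Once the certificate has been replaced, it is worth confirming the controller is actually presenting the new one before moving on. The sketch below pulls the served certificate with Python’s built-in ssl module and decodes it with the third-party cryptography package; the controller FQDN is a placeholder.

```python
# Confirm which certificate the controller is currently serving (placeholder FQDN).
# Requires "pip install cryptography" for decoding the PEM.
import ssl
from cryptography import x509

controller = "alb-controller.lab.local"
pem = ssl.get_server_certificate((controller, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print("Subject:  ", cert.subject.rfc4514_string())
print("Issuer:   ", cert.issuer.rfc4514_string())
print("Not after:", cert.not_valid_after)
```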
________________________________________________________________________________


Configure the Service Engines

For that, click the Infrastructure tab, then on the left expand Cloud Resources and select Service Engine Group.

1 – Click the pencil on the Default-Group.
2 – In the default configuration, Legacy HA is already set to Active/Standby. This is the only mode available with the Essentials license.
3 – The number of Virtual Services per Service Engine defaults to 10. This is the number of load balancing services the SE will support. Each Tanzu cluster you create will consume one of these load balancing services, and every service you expose from the Tanzu cluster will consume one as well. This can be raised to as high as 1,000 depending on your needs (see the API sketch at the end of this section).
4 – The maximum number of Service Engines is limited by the Essentials license.

Click on the Advanced tab.

Under Cluster, we need to select the workload cluster that will run Tanzu, and specify all hosts.

Click Save in the lower right.
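These Service Engine group settings map to a single serviceenginegroup API object, so they can also be set with a short script. In the sketch below the cluster name is a placeholder, and the ha_mode, max_vs_per_se, and vcenter_clusters fields are named as I understand them from the 22.1.x schema, so double-check them on your controller.

```python
# Minimal sketch: adjust the Default-Group Service Engine group via the API.
# The workload cluster name is a lab placeholder; field names are per my reading
# of the /api/serviceenginegroup schema and should be verified before use.
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("alb-controller.lab.local", "admin", "VMware1!",
                             api_version="22.1.1")

seg = api.get_object_by_name("serviceenginegroup", "Default-Group")
seg["ha_mode"] = "HA_MODE_LEGACY_ACTIVE_STANDBY"    # only mode with Essentials
seg["max_vs_per_se"] = 10                           # raise toward 1000 as needed

# Advanced tab: scope SE placement to the Tanzu workload cluster (all hosts).
cluster = api.get_object_by_name("vimgrclusterruntime", "Tanzu-Workload-Cluster")
if cluster:
    seg["vcenter_clusters"] = {"include": True, "cluster_refs": [cluster["url"]]}

resp = api.put("serviceenginegroup/%s" % seg["uuid"], data=seg)
print(resp.status_code, resp.reason)
```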


Configure The NSX-ALB Controller Networks

Now that we have configured the Service Engine group, we need to configure our networks. On the left menu, select Networks under Cloud Resources. We can see that the controller has detected all of our available networks in the vCenter.

You’ll notice that it does not detect the network settings because we are using static IPs instead of DHCP, so first we will edit the Data network. Click the pencil on the right.

Click the ‘+ Add Subnet’ button. Refer to the spreadsheet again, copy the ‘Data Network CIDR address’, and paste it into the ‘IP Subnet’ field. Also click the ‘+ Add Static IP Address Pool’ button, and copy the pool for the Data Network from the spreadsheet. The end result should look similar to this.

Click Save in the lower right. Click SAVE again on the next screen. Now the Data network is configured.
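If you would rather push the subnet and static pool through the API, the sketch below shows the idea. The port group name, CIDR, and pool range are placeholders from the spreadsheet, and the nested configured_subnets/static_ip_ranges layout is my reading of the /api/network schema for this release, so verify it before use.

```python
# Minimal sketch: add a subnet and static IP pool to the discovered Data network.
# Port group name, CIDR, and pool range are placeholders; verify the nested field
# names against your controller's /api/network schema.
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("alb-controller.lab.local", "admin", "VMware1!",
                             api_version="22.1.1")

net = api.get_object_by_name("network", "Data-Network")    # discovered port group
net.setdefault("configured_subnets", []).append({
    "prefix": {"ip_addr": {"addr": "192.168.20.0", "type": "V4"}, "mask": 24},
    "static_ip_ranges": [{
        "type": "STATIC_IPS_FOR_VIP_AND_SE",
        "range": {"begin": {"addr": "192.168.20.100", "type": "V4"},
                  "end":   {"addr": "192.168.20.200", "type": "V4"}},
    }],
})
resp = api.put("network/%s" % net["uuid"], data=net)
print(resp.status_code, resp.reason)
```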

Next, we need to configure the routing. On the left-hand side, select VRF Context. To the right of ‘global’, click the edit button.

We need to add a default gateway route set to 0.0.0.0/0. For the Next Hop, we can add the gateway for the data network from the spreadsheet.

Click Save in the lower right.

Now the Data network has been set up.
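For completeness, the same default route can be added to the global VRF context through the API. The gateway address is a placeholder from the spreadsheet, and the static_routes layout is my reading of the /api/vrfcontext schema.

```python
# Minimal sketch: add a 0.0.0.0/0 default route pointing at the data network
# gateway (placeholder address) to the global VRF context.
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("alb-controller.lab.local", "admin", "VMware1!",
                             api_version="22.1.1")

vrf = api.get_object_by_name("vrfcontext", "global")
vrf.setdefault("static_routes", []).append({
    "route_id": "1",
    "prefix": {"ip_addr": {"addr": "0.0.0.0", "type": "V4"}, "mask": 0},
    "next_hop": {"addr": "192.168.20.1", "type": "V4"},    # data network gateway
})
resp = api.put("vrfcontext/%s" % vrf["uuid"], data=vrf)
print(resp.status_code, resp.reason)
```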


Configure the IPAM profile

Next, we need to make sure that the NSX ALB knows which IPs it should use, so we need to set up an IPAM profile as well.

1 – Click on the Templates tab, and then under Profiles, select IPAM/DNS Profiles.
2 – Click the CREATE button, and select ‘IPAM Profile’ from the drop-down menu. With the Essentials license, we can only create an IPAM profile.

1 – Name the profile. In this example we use: tanzu-ip.
2 – Under Cloud, select Default-Cloud in the drop-down menu.
3 – Under Usable Networks, click the ADD button, and in the lower menu, select the data network.

Click SAVE in the lower right.

Now the IPAM profile is configured.
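The equivalent API call for the IPAM profile is short. It reuses the names from above (tanzu-ip, the data network); the internal_profile/usable_networks layout is my reading of the 22.1.x ipamdnsproviderprofile schema, so confirm it on your controller.

```python
# Minimal sketch: create the internal IPAM profile "tanzu-ip" with the data
# network as its usable network (network name is a placeholder; verify the
# field layout against /api/ipamdnsproviderprofile).
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("alb-controller.lab.local", "admin", "VMware1!",
                             api_version="22.1.1")

data_net = api.get_object_by_name("network", "Data-Network")
payload = {
    "name": "tanzu-ip",
    "type": "IPAMDNS_TYPE_INTERNAL",
    "internal_profile": {"usable_networks": [{"nw_ref": data_net["url"]}]},
}
resp = api.post("ipamdnsproviderprofile", data=payload)
print(resp.status_code, resp.reason)
```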


Assign the IPAM profile to the Default-Cloud

Next, we need to assign the IPAM profile to the default cloud. Click the Infrastructure tab, select Clouds, and then to the right of the Default-Cloud, click the edit button.

Now we can update the Default-Cloud IPAM Profile with the IPAM profile just created.

Click SAVE in the lower right. Next, wait for the status to turn green if it hasn’t already.
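Attaching the profile to the cloud is a one-field update through the API; the same schema caveat as above applies.

```python
# Minimal sketch: point Default-Cloud's ipam_provider_ref at the tanzu-ip profile.
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("alb-controller.lab.local", "admin", "VMware1!",
                             api_version="22.1.1")

ipam = api.get_object_by_name("ipamdnsproviderprofile", "tanzu-ip")
cloud = api.get_object_by_name("cloud", "Default-Cloud")
cloud["ipam_provider_ref"] = ipam["url"]
resp = api.put("cloud/%s" % cloud["uuid"], data=cloud)
print(resp.status_code, resp.reason)
```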

Congrats! We have finished the setup for the NSX-ALB Controller, and are now ready to deploy Tanzu. I’ll cover that in my next blog. Stay tuned.

vSphere with Tanzu: NSX-ALB Controller Requirements and Deployment Prep.

Blog Date: August 11, 2022
NSX-ALB Controller version: 22.1.1
vSphere version 7.0.3 Build 20150588

VMware customers can find additional details on system design options and preparation tasks in the vSphere 7 with Tanzu Prerequisites and Preparations Guide. This blog is a summary focused on the requirements when using vSphere networking with the NSX-ALB load balancing controller, and on the deployment of the controller. This example is for the deployment of the controller without NSX (NSX-T).

Hardware Requirements  

Minimum recommended vSphere cluster:

  • CPU Cores Per Host: 16 (Intel CPU only)
  • Memory Per Host: 128 GB
  • NICs Per Host: 2x 10GbE
  • Shared Datastore: 2.5 TB

Note: Increasing the number of hosts eases the per-host resource requirements and expands the resource pool for deploying additional or larger Kubernetes clusters, applications, other integrations, etc.

Software Requirements 

Note: For the most current information, visit the VMware Product Interoperability Matrices and vSphere 7 Release Notes 

  • VMware ESXi hosts: 7.x (Required). Download location: VMware Product Downloads.
  • VMware vCenter: vSphere 7.0U2 and later, Enterprise Plus w/ Add-on for Kubernetes (or higher) (Required).
  • NSX ALB: Required (this blog series uses version 22.1.1).

Network Requirements – Layer 2/3 Physical Network Configuration 

vSphere with Kubernetes requires the following L2/L3 networks to be available in the physical infrastructure and extended to all vSphere clusters. This is the official reference page with the networking information and options for configuring vSphere with Tanzu networking: https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-8908FAD7-9343-491B-9F6B-45FA8893C8CC.html

To mitigate deployment delays from troubleshooting infrastructure-related problems, customers need to preconfigure both NICs with the appropriate network access, as detailed in the table below, and test for connectivity in advance of any on-site work. 

VLAN Description         Host vmnic(s)   Virtual Switch   MTU      IPv4 CIDR Prefix   Routable
Management Network*      NIC 1 & 2       vDS              ≥ 1500   ≥ /27              Yes
vMotion Network**        NIC 1 & 2       vDS              ≥ 1500   ≥ /29              No
Storage / vSAN Network   NIC 1 & 2       vDS              ≥ 1500   ≥ /29              No
Workload Network***      NIC 1 & 2       vDS              ≥ 1500   ≥ /24              Yes
Data Network             NIC 1 & 2       vDS              ≥ 1500   ≥ /24              Yes

* If the ESXi hosts’ mgmt vmkNIC and other core components such as vCenter operate on separate networks, the two networks must be routable.

** Instead of a separate network, vMotion can operate on a shared network with the ESXi hosts’ mgmt vmkNIC.

*** The workload network hosts the K8s control plane and worker nodes.

When choosing the vSphere Networking model, all network segments and routed connectivity must be provided by the underlying network infrastructure. The Management network can be the same network used for your standard vCenter and ESXi VMkernel port functions, or a separate network with fully routed connectivity. Five consecutive IP addresses on the Management network are required to accommodate the Supervisor VMs, and one additional IP is required for the NSX ALB controller. The Workload CIDRs in the table above account for the typical number of IP addresses required to interface with the physical infrastructure and provide IP addresses to Kubernetes clusters for ingress and egress communications. If the CIDR ranges for the Workload and Frontend functions are consolidated onto a single segment, they must be different, non-overlapping ranges.

Additionally, the Workload Management enablement will default the IP address range for Kubernetes pods and internal services to 10.96.0.0/23. This range is used inside the cluster and is masked behind the load balancer from system administrators, developers, and app users. This range can be overridden if needed, but should remain at least a /24.
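Before enabling Workload Management, it is easy to sanity-check the address plan with a few lines of Python: the Workload and Frontend ranges must not overlap, and any override of the internal pod/service range should still be at least a /24. All CIDRs below are placeholders.

```python
# Sanity-check the IP plan (placeholder CIDRs): Workload vs. Frontend overlap,
# and minimum size of the internal pod/service range.
import ipaddress

workload = ipaddress.ip_network("192.168.30.0/24")   # Workload Network
frontend = ipaddress.ip_network("192.168.40.0/24")   # Frontend/VIP range, if separate
internal = ipaddress.ip_network("10.96.0.0/23")      # pod/service range (default)

assert not workload.overlaps(frontend), "Workload and Frontend ranges overlap"
assert internal.prefixlen <= 24, "internal range should be at least a /24"
print("IP plan checks passed")
```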

Tanzu Mission Control (if available): 

TKG cluster components use TCP exclusively (gRPC over HTTP/2, to be specific) to communicate back to Tanzu Mission Control, with no specific outbound MTU requirements (TCP supports packet segmentation and reassembly).

Firewall Requirements 

VMware HIGHLY RECOMMENDS unfiltered traffic between networks for the system. Reference the VMware vSphere 7 with Kubernetes Prerequisites and Preparations Guide for the summary firewall requirements. 

If Tanzu Mission Control (TMC) is available, the platform needs outbound internet connectivity.

Storage 

You will need to use a vSphere-supported shared storage solution. Typically, this is vSAN, NFS, iSCSI, or Fibre Channel. Shared storage is required; presenting storage volumes directly is not.

Enterprise Service Requirements 

  • DNS: System components require unique resource records and access to domain name servers for forward and reverse resolution.
  • NTP: System management components require access to a stable, common network time source; time skew must be under 10 seconds (a quick verification sketch for DNS and NTP follows this list).
  • AD/LDAP (Optional): Service bind account, User/Group DNs, Server(s) FQDN, and port required for authentication with external LDAP identity providers.
  • DRS and HA need to be enabled in the vSphere cluster. 
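Here is the quick verification sketch mentioned in the list above: forward and reverse DNS lookups for the key components, plus a clock-offset check against the NTP source. Hostnames and the NTP server are placeholders, and the NTP check uses the third-party ntplib package.

```python
# Quick prerequisite checks (placeholder names): forward/reverse DNS resolution
# for key components, and clock offset against NTP ("pip install ntplib").
import socket
import ntplib

for fqdn in ["vcenter.lab.local", "alb-controller.lab.local"]:
    ip = socket.gethostbyname(fqdn)            # forward lookup
    rname, _, _ = socket.gethostbyaddr(ip)     # reverse lookup
    print(f"{fqdn} -> {ip} -> {rname}")

offset = ntplib.NTPClient().request("ntp.lab.local", version=3).offset
print(f"NTP offset: {offset:.3f}s (requirement: skew under 10 seconds)")
```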

With the above-mentioned requirements in mind, and after the environment prep has been completed, I like to fill out the following Excel spreadsheet with all of the information needed for the deployment and configuration of the NSX-ALB controller.

That covers the basic requirements and prep work for the NSX-ALB controller deployment. In my next blog, vSphere with Tanzu: Deployment of NSX-ALB Controller, I will walk through the basic deployment of the controller.