vSphere with Tanzu: NSX-ALB Controller Requirements and Deployment Prep.

Blog Date: August 11, 2022
NSX-ALB Controller version: 22.1.1
vSphere version: 7.0.3, Build 20150588

VMware customers can find additional details on system design options and preparation tasks in the vSphere 7 with Tanzu Prerequisites and Preparations Guide. This blog is a summary focused on the requirements for using vSphere Networking with the NSX-ALB Load Balancing Controller, and on the deployment of the controller. This example covers deployment of the controller without NSX (NSX-T).

Hardware Requirements  

| vSphere Cluster | No. of Hosts | CPU Cores Per Host | Memory Per Host | NICs Per Host | Shared Datastore |
|---|---|---|---|---|---|
| Minimum Recommended | | 16 (Intel CPU only) | 128 GB | 2x 10GbE | 2.5 TB |

Note: Increasing the number of hosts eases the per-host resource requirements and expands the resource pool for deploying additional or larger Kubernetes clusters, applications, other integrations, etc.

Software Requirements 

Note: For the most current information, visit the VMware Product Interoperability Matrices and the vSphere 7 Release Notes.

| Product | Supported Version | Required/Optional | Download Location |
|---|---|---|---|
| VMware ESXi hosts | 7.x | Required | VMware Product Downloads |
| VMware vCenter** | vSphere 7.0U2 and later, Enterprise Plus w/ Add-on for Kubernetes (or higher) | Required | |
| NSX ALB* | NSX ALB | Required | |

Network Requirements – Layer 2/3 Physical Network Configuration 

vSphere with Kubernetes requires the following L2/L3 networks to be available in the physical infrastructure and extended to all vSphere clusters. This is the official reference page with the networking information and options for configuring vSphere with Tanzu networking: https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-8908FAD7-9343-491B-9F6B-45FA8893C8CC.html

To avoid deployment delays caused by troubleshooting infrastructure problems, preconfigure both NICs with the appropriate network access, as detailed in the table below, and test for connectivity in advance of any on-site work.

| VLAN Description | Host vmnic(s) | Virtual Switch | MTU | IPv4 CIDR Prefix | Routable |
|---|---|---|---|---|---|
| Management Network* | NIC 1 & 2 | vDS | ≥ 1500 | ≥ /27 | Yes |
| vMotion Network** | NIC 1 & 2 | vDS | ≥ 1500 | ≥ /29 | No |
| Storage / vSAN Network | NIC 1 & 2 | vDS | ≥ 1500 | ≥ /29 | No |
| Workload Network*** | NIC 1 & 2 | vDS | ≥ 1500 | ≥ /24 | Yes |
| Data Network | NIC 1 & 2 | vDS | ≥ 1500 | ≥ /24 | Yes |

* If the ESXi hosts’ mgmt vmkNIC and other core components such as vCenter operate on separate networks, the two networks must be routable. 

** Rather than using a separate network, vMotion can operate on a shared network with the ESXi hosts’ mgmt vmkNIC.

*** The Workload network hosts the Kubernetes control plane and worker nodes.
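
Before enablement, it can save a lot of on-site troubleshooting to verify basic reachability to the key management endpoints from a machine on the Management network. The sketch below checks TCP reachability with Python's standard library; the hostnames, IPs, and ports are hypothetical placeholders for this example, so substitute your own environment's values.

```python
import socket

# Hypothetical endpoints for this environment; substitute your own.
ENDPOINTS = [
    ("vcenter.lab.local", 443),  # vCenter on the Management network
    ("192.168.10.15", 443),      # planned NSX-ALB controller IP
    ("esxi01.lab.local", 902),   # ESXi host management agent
]

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, refusals, and timeouts
        return False

for host, port in ENDPOINTS:
    status = "OK" if reachable(host, port) else "UNREACHABLE"
    print(f"{host}:{port} -> {status}")
```

This only validates L3/TCP reachability; it does not confirm the ≥ 1500 MTU in the table above, which is best checked with ping using a do-not-fragment payload.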

When choosing the vSphere Networking model, all network segments and routed connectivity must be provided by the underlying network infrastructure. The Management network can be the same network used for your standard vCenter and ESXi VMkernel port functions, or a separate network with fully routed connectivity. Five consecutive IP addresses on the Management network are required to accommodate the Supervisor VMs, and one additional IP is required for the NSX ALB controller. The Workload CIDRs in the table above account for the typical number of IP addresses required to interface with the physical infrastructure and to provide IP addresses to Kubernetes clusters for ingress and egress communications. If the CIDR ranges for the Workload and Frontend functions are consolidated onto a single segment, they must be different, non-overlapping ranges.
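
These IP-plan rules are easy to sanity-check up front with Python's `ipaddress` module. All addresses and ranges below are hypothetical examples of a plan, not values from any real deployment:

```python
import ipaddress

# Hypothetical Management network plan; substitute your own values.
mgmt_net = ipaddress.ip_network("192.168.10.0/27")
supervisor_start = ipaddress.ip_address("192.168.10.10")  # first of 5 consecutive Supervisor IPs
controller_ip = ipaddress.ip_address("192.168.10.15")     # NSX-ALB controller

# The five consecutive Supervisor addresses must all fall inside the Management network.
supervisor_ips = [supervisor_start + i for i in range(5)]
assert all(ip in mgmt_net for ip in supervisor_ips)
assert controller_ip in mgmt_net
assert controller_ip not in supervisor_ips  # controller IP must not collide

# Workload and Frontend ranges must not overlap, even when sharing one segment.
workload_net = ipaddress.ip_network("192.168.20.0/25")
frontend_net = ipaddress.ip_network("192.168.20.128/25")
assert not workload_net.overlaps(frontend_net)
print("IP plan checks passed")
```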

Additionally, Workload Management enablement defaults the IP address range for Kubernetes pods and internal services to 10.96.0.0/23. This range is used inside the cluster and is masked behind the load balancer from system administrators, developers, and app users. It can be overridden if needed but should remain at least a /24.
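
A quick way to validate an override is to check its prefix length (a lower prefix length means a larger block, so /24 or bigger is acceptable). The override value below is a hypothetical example:

```python
import ipaddress

# Default internal pod/service range, and a hypothetical override.
DEFAULT = ipaddress.ip_network("10.96.0.0/23")
override = ipaddress.ip_network("10.200.0.0/24")

def large_enough(net: ipaddress.IPv4Network) -> bool:
    # /24 (256 addresses) is the smallest acceptable block.
    return net.prefixlen <= 24

print(DEFAULT.num_addresses)  # 512 addresses in the default /23
assert large_enough(DEFAULT) and large_enough(override)
```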

Tanzu Mission Control (if available): 

TKG cluster components use TCP exclusively (specifically, gRPC over HTTP/2) to communicate back to Tanzu Mission Control, with no specific outbound MTU requirements (TCP supports segmentation and reassembly).

Firewall Requirements 

VMware HIGHLY RECOMMENDS unfiltered traffic between the system’s networks. Reference the VMware vSphere 7 with Kubernetes Prerequisites and Preparations Guide for the summary firewall requirements.

If Tanzu Mission Control (TMC) is used, the platform needs outbound internet connectivity.

Storage 

You will need a vSphere-supported shared storage solution; typically this is vSAN, NFS, iSCSI, or Fibre Channel. Shared storage is required; presenting storage volumes directly to the hosts is not.

Enterprise Service Requirements 

  • DNS: System components require unique resource records and access to domain name servers for forward and reverse resolution  
  • NTP: System management components require access to a stable, common network time source; time skew < 10 seconds  
  • AD/LDAP (Optional): Service bind account, User/Group DNs, Server(s) FQDN, and port required for authentication with external LDAP identity providers 
  • DRS and HA need to be enabled in the vSphere cluster. 
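
The DNS requirement above (matching forward and reverse records for each component) is a common source of enablement failures and is easy to pre-check. The sketch below uses Python's standard `socket` resolver; the FQDNs are hypothetical placeholders for your own records:

```python
import socket

# Hypothetical records; each management component needs matching
# forward (A) and reverse (PTR) DNS entries.
HOSTS = ["vcenter.lab.local", "nsx-alb.lab.local"]

def forward_reverse_ok(fqdn: str) -> bool:
    """Resolve fqdn to an address, then confirm the PTR record maps back to it."""
    try:
        addr = socket.gethostbyname(fqdn)
        rname, _aliases, _addrs = socket.gethostbyaddr(addr)
    except OSError:  # resolution failure in either direction
        return False
    return rname.rstrip(".").lower() == fqdn.rstrip(".").lower()

for fqdn in HOSTS:
    print(fqdn, "OK" if forward_reverse_ok(fqdn) else "FAILED")
```

NTP skew can be checked separately by comparing each component's clock against the common time source; the < 10 second tolerance above is the threshold to verify.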

With the above-mentioned requirements in mind, after the environment prep has been completed, I like to fill out an Excel worksheet with all of the information needed for the deployment and configuration of the NSX-ALB controller.

That covers the basic requirements and prep work for the NSX-ALB controller deployment. In my next blog: vSphere with Tanzu: Deployment of NSX-ALB Controller, I will walk through the basic deployment of the controller.
