vSphere with Tanzu on VMware Cloud Foundation/vSphere with NSX-T Requirements

Blog Date: December 1, 2021

VMware Cloud Foundation 4.3.1 was used during this deployment.

Summary of requirements:

– For VMware Cloud Foundation 4.3.1 software requirements, please refer to the Cloud Foundation Bill of Materials (BOM)
– For vSphere with Tanzu, it is recommended to have at least three hosts, each with 16 Intel CPU cores, 128 GB of memory, and 2x 10 GbE NICs, plus a shared datastore of at least 2.5 TB (a quick sanity-check sketch follows this list).
– This post assumes VCF 4.3.1 has already been deployed following recommended practices.
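
To make the sizing above concrete, here is a minimal Python sketch that sanity-checks a host inventory against those minimums. The host names and figures are hypothetical placeholders; substitute the values from your own cluster.

```python
# Hypothetical inventory -- replace the names and values with your own hosts.
hosts = [
    {"name": "esxi01", "cpu_cores": 16, "memory_gb": 128, "nics_10gbe": 2},
    {"name": "esxi02", "cpu_cores": 16, "memory_gb": 128, "nics_10gbe": 2},
    {"name": "esxi03", "cpu_cores": 16, "memory_gb": 128, "nics_10gbe": 2},
]
shared_datastore_tb = 2.5

# Minimums from the summary requirements above.
assert len(hosts) >= 3, "vSphere with Tanzu needs at least three hosts"
for host in hosts:
    assert host["cpu_cores"] >= 16, f"{host['name']}: fewer than 16 CPU cores"
    assert host["memory_gb"] >= 128, f"{host['name']}: less than 128 GB of memory"
    assert host["nics_10gbe"] >= 2, f"{host['name']}: fewer than two 10 GbE NICs"
assert shared_datastore_tb >= 2.5, "shared datastore is smaller than 2.5 TB"

print("Host inventory meets the minimum sizing for vSphere with Tanzu.")
```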

System Requirements for Setting Up vSphere with Tanzu with NSX-T Data Center

The following VMware documentation should be reviewed prior to installing Tanzu; it includes the networking requirements.
Tanzu with NSX-T Data Center Requirements

Specific Tanzu Call-outs:

IP Network            IPv4 CIDR Prefix
POD CIDR              Greater than or equal to /20
Services CIDR         Greater than or equal to /22
Ingress CIDR          Greater than or equal to /24
Egress CIDR           Greater than or equal to /24
Management Network    5 consecutive IPs

Justification:

POD CIDR: Dedicate a /20 subnet for pod networking. This is private IP space behind a NAT that can be reused across multiple Supervisor Clusters. Note: when creating TKG clusters, the IP addresses used for the Kubernetes nodes are also allocated from this block.
Services CIDR: Dedicate a /22 subnet for services. This is private IP space behind a NAT that can be reused across multiple Supervisor Clusters.
Ingress CIDR: TKGS allocates routable addresses from the ingress CIDR for the Kubernetes clusters’ API VIPs, ingress controller VIPs, and service type LoadBalancer VIPs. Note: this subnet must be routable to the rest of the corporate network.
Egress CIDR: TKGS allocates addresses from the egress CIDR to enable outbound communication from namespace pods. NSX-T automatically creates a source network address translation (SNAT) entry for each namespace, mapping that namespace’s pod network to a routable egress IP address. Note: this subnet must be routable to the rest of the corporate network.
Management Network: Five consecutive IP addresses on the management network are required to accommodate the Supervisor VMs.
MTU: Greater than or equal to 1600 on all networks that will carry Tanzu traffic, i.e., the management network, the NSX tunnel (host TEP and Edge TEP) networks, and the external network (a quick path MTU probe sketch follows this list).
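
The 1600-byte MTU requirement can be spot-checked before deployment. Below is a rough Python sketch that shells out to Linux iputils ping with the don't-fragment bit set, run from a jump host on a jumbo-frame-enabled segment; the target addresses are hypothetical, and the usual on-host tool for this is vmkping, so treat this only as an illustration of the probe-size math (1600 bytes minus 20 bytes of IP header and 8 bytes of ICMP header).

```python
import subprocess

# Hypothetical targets -- substitute management/TEP addresses from your environment.
targets = ["192.168.10.10", "192.168.130.11"]

# A 1600-byte packet leaves 1572 bytes of ICMP payload (1600 - 20 IP - 8 ICMP).
payload = 1600 - 28

for target in targets:
    # Linux iputils ping: -M do sets the don't-fragment bit, -c 2 sends two probes.
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "2", "-s", str(payload), target],
        capture_output=True,
        text=True,
    )
    status = "OK" if result.returncode == 0 else "FAILED (fragmentation needed or unreachable)"
    print(f"{target}: 1600-byte MTU probe {status}")
```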


Note: A Kubernetes cluster requires private IPv4 CIDR blocks for the internal pod network and for service IP addresses. The Pods CIDR and Services CIDR blocks cannot overlap with the IPs of Workload Management components (vCenter, ESXi hosts, NSX-T components, DNS, and NTP) or with other data center networks that communicate with Kubernetes pods. The Pods CIDR block must be at least a /23 and the Services CIDR block at least a /24.
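
A quick way to catch sizing and overlap mistakes in the CIDR plan is Python's standard ipaddress module. The example blocks below are hypothetical placeholders; the checks simply mirror the note above (private space for the Pods and Services blocks, minimum /23 and /24 block sizes, and no overlap with the management network).

```python
import ipaddress

# Hypothetical values -- substitute the CIDRs planned for your environment.
pods_cidr     = ipaddress.ip_network("10.244.0.0/20")
services_cidr = ipaddress.ip_network("10.96.0.0/22")
ingress_cidr  = ipaddress.ip_network("192.168.70.0/24")
egress_cidr   = ipaddress.ip_network("192.168.71.0/24")
mgmt_network  = ipaddress.ip_network("192.168.10.0/24")  # vCenter, ESXi, NSX-T, DNS, NTP

# Pods and Services blocks are private space behind NAT.
assert pods_cidr.is_private and services_cidr.is_private

# Minimum block sizes from the note above (a smaller prefix length = a larger block).
assert pods_cidr.prefixlen <= 23, "Pods CIDR must be at least a /23"
assert services_cidr.prefixlen <= 24, "Services CIDR must be at least a /24"

# None of the blocks may overlap each other or the management network.
blocks = [pods_cidr, services_cidr, ingress_cidr, egress_cidr, mgmt_network]
for i, a in enumerate(blocks):
    for b in blocks[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"

print("CIDR plan passes the basic size and overlap checks.")
```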

Also: if you are using Tanzu Mission Control, TKG cluster components communicate back to Tanzu Mission Control exclusively over TCP (specifically gRPC over HTTP), so there are no specific outbound MTU requirements (TCP supports packet segmentation and reassembly).

Deploy an NSX-T Edge Cluster

The following VMware documentation should be reviewed and can be used to deploy the required NSX Edge cluster prior to the Tanzu deployment.
Deploy an NSX-T Edge Cluster

Enterprise Service Requirements

  • DNS: System components require unique resource records and access to domain name servers for forward and reverse resolution (a basic DNS and NTP check sketch follows this list).
  • NTP: System management components require access to a stable, common network time source; time skew must be less than 10 seconds.
  • DRS and HA: Both must be enabled on the vSphere cluster.
  • Optional but useful: an Ubuntu developer VM with Docker installed for interacting with Tanzu.
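
For the DNS and NTP items above, a small Python check like the sketch below can confirm forward/reverse resolution and estimate clock skew before the deployment starts. The host names are hypothetical placeholders, and the SNTP query is a rough approximation rather than a full NTP client.

```python
import socket
import struct
import time

NTP_UNIX_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def check_dns(fqdn: str) -> None:
    """Verify forward and reverse resolution for a system component."""
    ip = socket.gethostbyname(fqdn)        # forward (A record) lookup
    name, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR record) lookup
    print(f"{fqdn} -> {ip} -> {name}")

def ntp_offset(server: str, timeout: float = 5.0) -> float:
    """Return an approximate clock offset in seconds against an NTP server via SNTP."""
    request = b"\x1b" + 47 * b"\x00"  # SNTP v3, client mode
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        data, _ = sock.recvfrom(512)
    transmit_secs = struct.unpack("!I", data[40:44])[0] - NTP_UNIX_EPOCH_DELTA
    return transmit_secs - time.time()

# Hypothetical names -- substitute your vCenter, NSX Manager, and NTP server.
for fqdn in ["vcenter01.corp.local", "nsxmgr01.corp.local"]:
    check_dns(fqdn)

offset = ntp_offset("ntp.corp.local")
print(f"Clock offset vs NTP source: {offset:+.1f}s (must stay under 10 seconds)")
```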

In my next post, I’ll go over deploying vSphere with Tanzu on VCF with NSX-T.