Nested vSphere 7 and Kubernetes Lab Deployment Explained

William Lam’s great deployment script triggered the idea of setting up a vSphere 7 and Kubernetes (VCF 4) environment completely nested on top of our development vSphere environment.

While the deployment of the nested vSphere 7 environment itself worked very well, you need to fulfill some important requirements before you can enable the Kubernetes part (VMware Cloud Foundation 4 – VCF).

This blog post covers these requirements as well as some tips and tricks for working with your fresh VCF 4 environment, the Workload Management and the Tanzu Kubernetes Grid (TKG) cluster.

Automated Deployment

Requirements

These are the requirements William lists for running his script:

  • vCenter Server running at least vSphere 6.7 or later
    • If your physical storage is vSAN, please ensure you’ve applied the following setting as mentioned here
  • Resource Requirements
    • Compute
      • Ability to provision VMs with up to 8 vCPU
      • Ability to provision up to 116-140 GB of memory
    • Network
      • Single Standard or Distributed Portgroup (Native VLAN) used to deploy all VMs
        • 6 x IP Addresses for VCSA, ESXi, NSX-T UA and Edge VM
        • 5 x Consecutive IP Addresses for Kubernetes Control Plane VMs
        • 1 x IP Address for T0 Static Route
        • 32 x IP Addresses (/27) at minimum for the Egress CIDR range (must not overlap with the Ingress CIDR)
        • 32 x IP Addresses (/27) at minimum for the Ingress CIDR range (must not overlap with the Egress CIDR)
        • All IP Addresses should be able to communicate with each other
    • Storage
      • Ability to provision up to 1 TB of storage

      Note: For detailed requirements, please refer to the official documentation here

  • VMware Cloud Foundation Licenses
  • Desktop (Windows, Mac or Linux) with the latest PowerShell Core and PowerCLI 12.0 Core installed. See instructions here for more details; a quick install and verification snippet follows this list
  • vSphere 7 & NSX-T OVAs
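
If PowerShell Core or PowerCLI are not installed yet, a quick way to get and verify them from the PowerShell Gallery looks like this (standard PowerShell commands, nothing specific to William's script):

# install (or update) PowerCLI from the PowerShell Gallery
Install-Module -Name VMware.PowerCLI -Scope CurrentUser

# check the installed version - it should be 12.0 or later
Get-Module -Name VMware.PowerCLI -ListAvailable | Select-Object Name, Version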

Additional requirements

If you want to continue with enabling Workload Management (VCF) within your nested environment, we further recommend the following – otherwise your SupervisorControlPlaneVM won't be completely initialized.

  • Configure an MTU size of 1600 or higher on every virtual and physical switch on the communication path from the NSX T0 uplink to your Internet gateway (SNAT). That includes the underlying vSwitch/dvSwitch (where the nested environment is deployed) as well as the nested one!
  • Set the security settings Promiscuous mode and Forged transmits to Accept on your port groups (see the PowerCLI sketch below)
  • Configure a pfSense (or something similar) to act as a gateway including SNAT for your NSX T0 uplinks

Accept Promiscuous mode and Forged Transmits for underlying and nested Portgroups
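
These settings can also be applied with PowerCLI instead of the vSphere Client. Here is a minimal sketch for a standard vSwitch and port group; the host, switch and port group names are examples, and for a distributed switch you would use the Get-VDPortgroup, Get-VDSecurityPolicy and Set-VDSecurityPolicy cmdlets instead:

# raise the MTU on the standard vSwitch that carries the nested traffic (host and switch names are examples)
Get-VirtualSwitch -VMHost esxi01.lab.local -Name vSwitch0 | Set-VirtualSwitch -Mtu 1600 -Confirm:$false

# set Promiscuous mode and Forged transmits to Accept on the port group used for the nested lab
Get-VirtualPortGroup -VMHost esxi01.lab.local -Name "Nested-Lab" |
  Get-SecurityPolicy |
  Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true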

Run the script

Simply clone the GitHub repository, change the script according to the README.md and run it using PowerCLI.
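
For reference, the whole workflow boils down to a few commands; the repository URL and script name below are placeholders, so take the real ones from William's README.md:

# clone the repository (URL and script name are placeholders - see William's README.md)
git clone https://github.com/lamw/<repository>.git
cd <repository>

# adjust the configuration variables at the top of the script (vCenter, network, OVA paths),
# then run the script from a PowerShell Core session with PowerCLI loaded
./<deployment-script>.ps1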

When the script has successfully deployed your nested ESXi hosts, the vCSA, the NSX Manager and the NSX Edge, you can go ahead and connect to your newly deployed vCSA 7.

Nested environment deployed

Enable the Workload Management

To enable Workload Management, and with it the Kubernetes environment, simply select Workload Management in the vSphere Client menu.

Enable Workload Management

Let’s enable the Workload Management

The wizard guides you through the most important steps; for a lab environment, most people can select Tiny as the deployment size.

To keep this guide simple, we used the same network settings William uses in his script.

IP Settings

Make sure that your gateway has an adapter and IP address in both the Management and the Workload network (Ingress and Egress), so that container images can be downloaded later on.

Furthermore, make sure to enable SNAT in your Gateway and to configure the firewall.

pfsense example

The next important step is to select the correct Datastore for all your images and related data.

After starting the enablement, it's time to get a coffee because that process can take some time.

Deployment trouble

The first couple of minutes are pretty boring and you can ignore some of the errors.

But it doesn't hurt to check the events of the first SupervisorControlPlaneVM that is powered on. That one is the Kubernetes master, and if its deployment fails, not much else will happen afterwards.

Check the Events of the first ControlPlaneVM

If you see entries like

burst of ‘com.vmware.vc.guestOperations.GuestOperation’ started

in the Events of the first powered on SupervisorControlPlaneVM, you can be pretty sure that the deployment is progressing well and the communication between the nested ESXi and the VCF VMs works.
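
If you prefer PowerCLI over clicking through the vSphere Client, a rough equivalent for checking those events looks like this (the VM name pattern is an assumption, adjust it to the names in your inventory):

# list the most recent events of the first SupervisorControlPlaneVM
Get-VM -Name "SupervisorControlPlaneVM*" |
  Select-Object -First 1 |
  Get-VIEvent -MaxSamples 50 |
  Select-Object CreatedTime, FullFormattedMessage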

Don’t be impatient!

Some of the errors are absolutely normal and can be ignored if these don’t last for more than 15-20 minutes.

You can also test the environment during the install by pinging the NSX T0 uplinks.
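
For example, from your desktop or the pfSense shell (the address is a placeholder for whatever you configured as the T0 uplink IP in the deployment script):

# replace <t0-uplink-ip> with the T0 uplink IP from your deployment configuration
ping <t0-uplink-ip>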

When the master VM is configured, the other two SupervisorControlPlane VMs will start as well and the configuration is copied over to them.

Create your first namespace

When you select the Namespaces tab, you can create your first namespace to deploy demo applications or just test a bit.

Let’s create the Namespace test

Add permissions and storage

Connect to the Control Plane

The main tool for working with the Kubernetes platform is kubectl, which can be downloaded by visiting the Control Plane IP in your browser or simply by clicking Open under Link to CLI Tools.
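
If you prefer to script the download, the CLI tools bundle can also be fetched directly. The path below follows the usual pattern for the Linux bundle; verify the exact link on the CLI tools page and adjust the OS and control plane IP for your environment:

# download and unpack the CLI tools bundle (Linux example - verify the exact link on the CLI tools page)
curl -k -o vsphere-plugin.zip https://<control-plane-ip>/wcp/plugin/linux-amd64/vsphere-plugin.zip
unzip vsphere-plugin.zip -d vsphere-plugin

# the bundle contains kubectl and the kubectl-vsphere plugin - put both into your PATH
sudo cp vsphere-plugin/bin/kubectl vsphere-plugin/bin/kubectl-vsphere /usr/local/bin/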

Download kubectl and the vsphere plugin, so you can directly connect to your vSphere based Kubernetes deployment:

# Login to Kubernetes
kubectl vsphere login --server=https://controlplane-ip -u administrator@vsphere.local --insecure-skip-tls-verify

# select the test namespace - it saves you from typing -n test after each command
kubectl config use-context test

# show all existing resources
kubectl get all

If you haven’t deployed anything yet, you simply see No resources found in test namespace. We’re going to change that.

Run a test Pod

Let's test if we can access the internet to pull images by creating a demo nginx pod.

# deploy nginx for testing
kubectl run nginx --image=nginx

# the deprecation warning can be ignored as we're just testing; the output looks like this:
#   kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
#   deployment.apps/nginx created

# check the deployment status
kubectl get all

Seems to work!

If it doesn’t work for you, try these commands to get details:

# get information about nginx deployment
kubectl describe deployment.apps/nginx

# get pods with label/value run=nginx and return full configuration
kubectl get pods -l run=nginx -o yaml

You can track all events and the configuration files in your vSphere client as well.

Track all Kubernetes events

Check the deployed configurations – view the YAML for details

# delete the pod
kubectl delete pods -l run=nginx

# check for pods again and you'll find a new pod with a low AGE
kubectl get pods -l run=nginx

# delete the deployment
kubectl delete deployment nginx

# check for pods again and nothing will be shown
kubectl get pods

This little exercise shows a bit of how Kubernetes works. The pod is created by a replicaset, which in turn is created by the deployment. If we delete the pod, the replicaset creates a new one.

Deleting the deployment deletes the pod and the replicaset as well.
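
If you want to see that chain for yourself, list all three object types right after the kubectl run step above, before deleting anything:

# show the deployment, its replicaset and the pod it created
kubectl get deployment,replicaset,pod -l run=nginx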

Next blog post will be about adding the Tanzu Content Library, creating a TKG cluster and how to monitor logs and performance.
