VMware Cloud Director Container Service Extension 4.0

VMware Cloud Director Container Service Extension (CSE) 4.0 was announced by VMware last week. Starting with this release, CSE is delivered as an OVA file.

VMware Cloud Director Container Service Extension is a plug-in for VMware Cloud Director™ that helps users create and work with Kubernetes clusters.

VMware Cloud Director Container Service Extension brings Kubernetes as a service to VMware Cloud Director by deploying and managing fully functional, VMware Cloud Director-provisioned Kubernetes clusters. By using VMware Cloud Director Container Service Extension, development teams can focus on application development while infrastructure management is simplified.

The following diagram illustrates the architecture of VMware Cloud Director Container Service Extension 4.0, and the workflow of service providers and tenant users.

Architecture of VMware Cloud Director Container Service Extension 4.0

New features

  • You can now perform cluster life cycle management tasks such as create, upgrade, resize, and delete Kubernetes clusters in the Kubernetes Container Clusters UI plug-in of VMware Cloud Director.
  • CSE Management tab: A new service provider persona workflow in the Kubernetes Container Clusters UI plug-in. This workflow guides service providers through the VMware Cloud Director Container Service Extension setup in the UI plug-in and prepares the environment for tenant users to create Kubernetes clusters.
  • Multi-node control plane UI for Tanzu Kubernetes Grid clusters, allowing high availability of the Kubernetes control plane.
  • Heterogeneous clusters with custom sized nodes to build clusters that can accommodate memory or CPU intensive containers.
  • Pre-installation of Tanzu core packages in Tanzu Kubernetes Grid clusters at creation time, reducing the additional configuration required by containerized applications.
  • GPU support for Tanzu Kubernetes Grid clusters to allow for AI / ML applications.
  • The VMware Cloud Director Container Service Extension UI is localized to the following languages: German (de_DE), French (fr_FR), Italian (it_IT), Spanish (es_ES), Brazilian Portuguese (pt_BR), Japanese (ja_JP), Korean (ko_KR), Simplified Chinese (zh_CN), Traditional Chinese (zh_TW).
  • VMware Cloud Director Container Service Extension is packaged as an appliance and uses Photon OS 3.0.
  • VMware Cloud Director Container Service Extension supports HA deployment to allow high availability of cluster management tasks, such as create, upgrade, resize and delete a cluster.
  • Support for the deployment of VMware RabbitMQ using VMware Data Solutions Extension.
  • You can select a specific LB VIP and subnet for the control plane to manage additional network security or for business continuity.
  • Cluster API for VMware Cloud Director (CAPVCD) 1.0.0 is released alongside VMware Cloud Director Container Service Extension 4.0. You can use CAPVCD 1.0.0 independently to manage the lifecycle of Kubernetes clusters.


Prerequisites

  • A virtual data center (VDC) within the organization
  • An organization (VCD)
  • NSX Advanced Load Balancer preconfigured
  • NSX Cloud preconfigured
  • Independent shared named disks
  • Outbound Internet connectivity
  • Network connectivity between the machine where VMware Cloud Director Container Service Extension is installed and the VMware Cloud Director server. VMware Cloud Director Container Service Extension communicates with VMware Cloud Director through the VMware Cloud Director public API endpoint.
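
Since CSE talks to VMware Cloud Director over the public API endpoint, a quick way to verify the connectivity prerequisite is to probe the unauthenticated /api/versions endpoint from the machine that will run CSE. The hostname below is a placeholder:

```shell
# Placeholder endpoint -- substitute your VCD public API address.
VCD_HOST="vcd.example.com"
API_URL="https://${VCD_HOST}/api/versions"

# /api/versions needs no authentication, which makes it a handy
# connectivity probe; a non-2xx code or an error means the path is blocked.
curl -sk "${API_URL}" -o /dev/null -w '%{http_code}\n' || echo "VCD API endpoint unreachable"
```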

Deployment Steps

  1. Download OVAs
    VMware Cloud Director Container Service Extension
    Tanzu Kubernetes Grid Templates
  2. Create Catalogs and Upload OVAs
  3. Set Up the Configuration for the CSE Server
  4. Add VM Sizing Policies to Organization VDCs
  5. Create a User with CSE Admin Role
  6. Start CSE Server
  7. Download Tanzu Kubernetes Grid Templates
  8. Share Tanzu Kubernetes Grid Templates

Upgrading to VMware Cloud Director 10.4

VMware Cloud Director 10.4 was launched almost three months ago. If you are running an older version, now is a good time to plan an upgrade to the latest release.

VMware Cloud Director 10.4

Since versions prior to VMware Cloud Director 10.3 have reached the end of support, it's a good time to upgrade to version 10.4.

You can check the Lifecycle Matrix here:

Platform Services & Operations​ improvements

  • Enhanced visibility into catalog synchronization steps and progress​
  • Fast cross-VC catalog instantiation with shared storage​
  • Service account API tokens​
  • Consolidated VM console on VCD API URL​
  • High-priority automated test suites run on CDS​
  • Support for all VCD workflows through a proxy between VCD and vSphere (including for automated tests)​
  • Multi-tenancy service account enhancements​
  • CSE / Container enhancements​
  • Extensibility enhancements​
  • Terraform & vRA enhancements
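
Of the list above, service account API tokens are the feature automation teams will touch first. Consuming one looks roughly like the following; the host, organization, and token values are placeholders, and the `/oauth/tenant/<org>/token` endpoint is the one VMware Cloud Director uses for API token exchange (confirm against the API documentation for your VCD version):

```shell
# Placeholders -- substitute real values from your environment.
VCD_HOST="vcd.example.com"
ORG="acme"
REFRESH_TOKEN="<token-from-the-ui>"

# Exchange the long-lived API token (a refresh token) for a short-lived
# access token, which is then sent as "Authorization: Bearer ..." on API calls.
curl -sk -X POST "https://${VCD_HOST}/oauth/tenant/${ORG}/token" \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d "grant_type=refresh_token&refresh_token=${REFRESH_TOKEN}" || echo "token exchange failed"
```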

Networking improvements

  • Static Routes​
  • New NSX Advanced Load Balancer Basic Features​
  • New NSX Advanced Load Balancer licensing model​
  • Mitigation for NSX-T vApp fencing limitations (API)

Storage improvements

  • Better IOPS reporting
  • SDRS enhancements​ to save VM placement time and utilize proper storage space

Compliance updates

  • STIG Readiness Guide​
  • Photon OS 3.0

If your Cloud Director deployment is the appliance version, you can upgrade directly from version 9.7 and later.

VMware Cloud Director Appliance Upgrade Path

In the case of a VMware Cloud Director Linux-based upgrade with an external database, you can go to version 10.4 from:

In all cases, please check:

Don’t miss VMware EXPLORE 2022

VMware Explore – Barcelona – November 2022

The last time VMware held a massive in-person event was in 2019. After two years and a pandemic, we will see each other again this November (from 7 to 10).

As we already know, the event has changed its name: from this year on it is called "VMware Explore". The venue will continue to be the Fira Gran Via in Barcelona.

With 35+ hours of technology and transformation education, training, and executive insights, I’ll have vast opportunities to gain actionable value through:

• Access to 400+ sessions that will enable me to scale cloud-native platform operations, accelerate cloud transformation, and empower and secure the hybrid workforce.

• Practical insights and best practices from customers who’ve cracked the code on addressing challenges like the ones we face.

• Face time with top experts with tips to improve the use of existing solutions and roadmaps on how to advance our capabilities to conquer new business requirements.

• Opportunities to interact hands-on with the latest multi-cloud solutions; accompanied by product experts right there ready to assist.

• Joining the Cross-Cloud services and open-source communities while engaging with an extensive ecosystem that includes 90% of the top cloud partners.

Finally, I recommend attending the following session:

Need to Migrate Thousands of Workloads? No problem!
Speakers: Andrea Siviero and Suresh Thirumalapudi

This session won the "VMware Explore People's Choice Award" at the US edition of VMware Explore.

Registration is still open:

Tanzu for VMware Cloud Director

Container Service Extension (CSE) is a key component of VMware Cloud Director for providing Kubernetes as a Service. Since CSE 3.1.2 launched, new features such as the "Cluster API Provider for Cloud Director" have been released.

This is a quick overview of CSE, TKGm, Cluster API, and the Ingress Load Balancer.

CSE Components
CSE User Personas

CSE Server OS:
Any OS is supported

Minimum resources for CSE Server:
2 vCPU
2GB Memory
10GB Storage

The CSE server requires access to VMware Cloud Director.

The CSE server requires outbound Internet connectivity to install the required packages and Container Service Extension itself.
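
For the CSE 3.x generation described in this post, the server ships as a Python package, so with outbound connectivity in place the installation is typically a pip install (the package names below are the actual PyPI names for vcd-cli and CSE):

```shell
# CSE plugs into vcd-cli; both are published on PyPI.
CSE_PKG="container-service-extension"
pip install vcd-cli "${CSE_PKG}" || echo "pip install failed -- check connectivity or proxy settings"
```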

Tanzu Kubernetes Grid with CSE

TKG 1.4 Release Notes

TKG with Container Service Extension

Cluster API for VMware Cloud Director

Provision Production ready TKG Clusters on VMware Cloud Director
Multi-control Plane TKG Clusters 

The bootstrap cluster is the first cluster; it installs the management cluster in the customer organization.
It can be an existing TKG cluster in the organization.

The second step is installing Cluster API on the bootstrap cluster.

Finally, configure the management cluster for self-management.
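
The three steps above can be sketched with the upstream Cluster API tooling. The `--infrastructure vcd` provider name, file paths, and cluster names are assumptions; follow the CAPVCD documentation for the exact procedure in your release:

```shell
# 1. Point kubectl at the bootstrap cluster (an existing TKG cluster).
export KUBECONFIG="$HOME/.kube/bootstrap-cluster.conf"

# 2. Install Cluster API and the Cloud Director infrastructure provider
#    on the bootstrap cluster (provider name assumed).
clusterctl init --infrastructure vcd || echo "clusterctl init failed"

# 3. Create the management cluster, then pivot Cluster API into it so it
#    becomes self-managing.
clusterctl generate cluster mgmt > mgmt-cluster.yaml || echo "generate failed"
kubectl apply -f mgmt-cluster.yaml || echo "apply failed"
clusterctl move --to-kubeconfig="$HOME/.kube/mgmt-cluster.conf" || echo "move failed"
```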

TKG Load Balancer

1x Service Engine Group is deployed for every TKG cluster (both management and workload).

A load balancer is automatically deployed to front-end the Kubernetes API server.

VMware Cloud Director creates the Service Engine Group on the NSX-T Advanced Load Balancer.

This allows simplified scaling for multiple Kubernetes control plane nodes.

Prerequisites for Automated Ingress Load Balancing

Cloud Provider:

  • Provision NSX-T Advanced LB and enable load balancing in the customer organization
  • Provision rights bundles to allow load balancer service management
  • Allocate an external network IP pool

Tenant User:

  • Upload an SSL certificate for secure Ingress access
  • Install Contour or Nginx using helm
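
As a hedged example, Contour can be installed with helm along these lines; the Bitnami chart is one packaging option, and your organization may mirror a different repository:

```shell
CHART="bitnami/contour"
helm repo add bitnami https://charts.bitnami.com/bitnami || echo "repo add failed"
# Installs Contour plus Envoy as the ingress data plane for the tenant cluster.
helm install ingress "${CHART}" --namespace projectcontour --create-namespace \
  || echo "install failed -- is the tenant cluster kubeconfig active?"
```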

Container Service Extension 3.1.2

CSE 3.1.2 GA announced

Last week, Container Service Extension 3.1.2 was released. In this post, I will describe the new features and compatibility matrix, along with a quick overview for Cloud Providers.

New Features:

  • Cluster API Provider for Cloud Director that offers multi-control plane clusters and cluster upgrades using declarative, Kubernetes-style APIs.
  • Kubernetes External Cloud Provider for VCD has been updated to v1.1.0.
  • Kubernetes Container Storage Interface for VCD has been updated to v1.1.0.
  • Kubernetes Container Clusters plugin has been updated to version 3.1.2. The plugin ships with VCD 10.3.2.
  • Support for injecting proxy information into TKG clusters created by CSE. Learn more about the feature.
  • New command option to forcefully delete clusters that were not fully created and were left in an unremovable state.
  • Support for VMware Tanzu packages – Harbor, FluentBit, Prometheus, Grafana in TKG clusters.
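
The force-delete option above maps to a vcd-cli command roughly like the following; the host, credentials, and cluster name are made up, and the exact flag spelling should be confirmed with `vcd cse cluster delete --help` on your CSE version:

```shell
CLUSTER="my-stuck-cluster"   # hypothetical cluster left in an unremovable state
# -i skips certificate verification, -w suppresses warnings (lab use only).
vcd login vcd.example.com acme admin -p 'S3cr3t!' -i -w || echo "login failed"
vcd cse cluster delete "${CLUSTER}" --force || echo "delete failed"
```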


CSE – UI Plugin – VMware Cloud Director
CSE – VMware Cloud Director – NSX-T
CSE – Cloud Director – NSX-T / AVI

Tanzu Kubernetes for Cloud Providers

Which TKG flavor is the correct one?

As Service Providers are looking for multi-tenancy, TKG-m is the right flavor. Please avoid using TKG-s.

CSE / TKG-m deployment steps

For more information about TKG-m deployment, please check:



VMware / Container Service Extension Platform Architecture

Most Service Providers are looking to extend the existing VMware Cloud Director environment to offer Containers-as-a-Service by implementing VMware Tanzu Kubernetes Grid, which enables deploying and managing containers.

There are several options; the question is which of them is correct for your business needs. In this post, I will try to help Service Providers find these answers.

Basic or Standard

These are the two options currently available for cloud providers (this will change in the future).

Tanzu Basic is included for Partners utilizing the Flex Core Model.

Tanzu Basic

Tanzu Basic, along with VMware Cloud Director, enables Managed Kubernetes Services that help Cloud Providers expand their services business by targeting DevOps teams and developers among current or new customers who are on vSphere and want to embrace infrastructure transformation on-premises as the first step towards modern application delivery. VMware Tanzu Basic simplifies the adoption of Kubernetes on-premises, putting cloud-native constructs at the VI Admin's fingertips as part of vSphere.

Tanzu Standard

VMware Tanzu Standard provides an enterprise-grade Kubernetes runtime across on-premises, public cloud, and edge environments, with a global control plane for consistent and secure management at scale.

It helps customers realize the benefits of multi-cloud and operate all clusters consistently across environments while enhancing security and governance over the entire Kubernetes footprint.

Tanzu Kubernetes Grid deployment

Which TKG flavor is the correct one?

As Service Providers are looking for multi-tenancy, TKG-m is the right flavor. Please avoid using TKG-s.

Container Service Extension (CSE)

Container Service Extension (CSE) for VMware Cloud Director enables service providers to offer Kubernetes services with open-source, upstream Kubernetes templates. In addition, Container Service Extension 3.1.1 introduces a significant enhancement to support Tanzu Kubernetes Grid (Multi-cloud), also known as TKGm. Starting with CSE 3.1.1, providers can use TKG-m (1.4) as the runtime. The provider administrator can install CSE with a few updated configurations to enable the TKGm runtime for customers.

CSE – TKG-m deployment steps

My Lab Products Versions

These versions are the ones I deployed in my home lab.
*CSE can be installed with any other supported OS.

As you can see, deployment of NSX-T, Advanced Load Balancing, and CSE is mandatory for TKG-m enablement.

There is no OVA version yet, so the deployment must be done manually.

Next time we can take a deeper look into Container Service Extension (CSE) deployment and troubleshooting.

Have a nice 2022!

Becoming a vExpert, why not?

vExpert 2022 applications are now open!

This is a great opportunity to validate your experience, share your knowledge, tell your stories, and be part of a great community in which we help each other.

You have until February 14.


Do you want to know the benefits of the program and how to apply?


If you have any questions, do not hesitate to contact me.

It’s time to be a vExpert!

VMware Cloud Director 10.3.1 GA

VMware Cloud Director brings some new features to Core, Networking and Tanzu.


Cloud Director appliance backup and restore via a new API.


Migrating to NSX-T now supports certificate authentication for IPsec VPN.

DHCP Relay.

DHCP Binding

L2VPN in the user interface.

Container Service Extension

API Token

Consistent and simpler deployments across all environments 

Reduced configuration drifts and improved server recovery times

Users with vCD 10.3.1 and CSE 3.1.1 can deploy fully supported and secure VMware Tanzu Kubernetes Grid clusters by importing the TKG OVA in the vCD UI.
CSE 3.1.1 automatically creates a vCD AVI-based load balancer to secure Layer 4 traffic.

CSE also has a major enhancement on the storage front: Kubernetes services can be created using the Container Storage Interface (CSI) driver to dynamically allocate vCD named independent disk-based Persistent Volumes (PVs) that enforce the tenant's storage limits.
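
In practice, the CSI behavior above means a plain PersistentVolumeClaim is enough to get a VCD named independent disk. A minimal sketch, assuming the provider publishes a storage class (the class name below is hypothetical):

```yaml
# Each bound PVC becomes a named independent disk in the tenant VDC,
# counted against the tenant's storage limits.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vcd-storage-policy   # hypothetical; use your provider's class
  resources:
    requests:
      storage: 2Gi
```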

More info: https://docs.vmware.com/en/VMware-Cloud-Director/index.html

VMware Cloud Director 10.3

Today VMware announced the release of VMware Cloud Director 10.3, including the following:

  • Kubernetes with VMware Cloud Director
    • Tanzu Kubernetes clusters support for NSX-T Data Center group networking. Tanzu Kubernetes clusters are by default only reachable from IP subnets of networks within the same organization virtual data center in which a cluster is created. You can manually configure external access to specific services in a Tanzu Kubernetes cluster. If a Kubernetes cluster is hosted in a VDC that is part of an NSX-T data center group, you can permit access to the cluster’s control plane and to published Kubernetes services from workloads within that data center group.
    • Service providers and tenants can upgrade native and Tanzu Kubernetes clusters by using the VMware Cloud Director UI
    • Service providers can use vRealize Operations Tenant App to chargeback Kubernetes
    • Tenants can use a single public API endpoint for all LCM of Tanzu Kubernetes Grid Service, Tanzu Kubernetes Grid, and upstream Kubernetes clusters
  • VMware Cloud Director appliance management UI improvements for turning on and off FIPS-compliant mode
  • API support for moving vApps across vCenter Server instances
  • Catalog management UI improvements
  • VMware Cloud Director Service Library support for vRealize Orchestrator 8.x
    • The Service Library items in VMware Cloud Director are vRealize Orchestrator workflows that expand the cloud management capabilities and make it possible for system administrators and organization administrators to monitor and manipulate different services. If you are using vRealize Orchestrator 7.x, your current functionality and workflows continue to work as expected. 
    • VMware Cloud Director 10.3 ships with a vRealize Orchestrator plug-in that you can use to render vRealize Orchestrator workflows that are published to tenants. You must publish the plug-in to all tenants that you want to run Service Library Workflows based on vRealize Orchestrator. 
  • Streamlined Quick Search and Global Search UI
  • Customizable Keyboard Shortcuts
  • Improvements in the performance of Auto Scaling extension
  • Networking Features
    • vApp network services in organization VDCs backed by NSX-T Data Center. You can use NAT, firewall, and static routing in vApp networks.
    • Distributed Firewall Dynamic Group Membership with NSX-T Data Center Networking. You can create security groups of VMs with a dynamic membership that is based on VM characteristics, such as VM names and VM tags. You use dynamic groups to create distributed firewall rules and edge gateway firewall rules that are applied on a per-VM basis in a data center group networking context. By using dynamic security groups in distributed firewall rules, you can micro-segment network traffic and effectively secure the workloads in your organization.
    • Service providers can create external networks backed by VLAN and overlay NSX-T Data Center segments
    • Service providers can import networks backed by vSphere DVPGs. System administrators can create organization virtual data center networks by importing a distributed port group from a vSphere distributed switch. Imported DVPG networks can be shared across data center groups.
    • VLAN and port-group network pools for VDCs backed by NSX-T Data Center
    • Support for provider VDC creation without associating it with NSX Data Center for vSphere or NSX-T Data Center
    • Update port groups of external networks
    • Avi 20.1.3 and 20.1.4 support
  • Networking UI Enhancements
    • UI support for assigning a primary IP address to an NSX-T edge gateway
    • UI support for DHCPv6 and SLAAC configuration
    • Support for IPv6 static pools creation and management
    • VDC group network list view in the UI
    • Improved Edge Cluster assignment in organization VDCs
    • Added support for DHCP management for isolated networks in organization VDCs backed by NSX-T Data Center
    • Service providers can edit Avi SEG general details
    • New Tier-0 Gateway Networking UI Section in the Service Provider Portal
  • Networking General Enhancements
    • Allocated DHCP IP addresses are visible on VM details screen
    • You can edit and remove DHCP pools from networks backed by NSX-T Data Center
    • Reject action for NSX-T Data Center edge gateway firewall rules. When creating a firewall rule on an NSX-T Data Center edge gateway, you can choose to block traffic from specific sources and notify the blocked client that traffic was rejected.
    • You can change the priority of NAT rules
    • Reflexive NAT support
    • VMware Cloud on AWS support for imported networks
    • Advertise services for internal subnets with route advertisement
    • Support for /32 subnets on external networks backed by NSX-T Data Center
    • Guest VLAN Tagging for networks backed by NSX-T Data Center segments
  • Alpha API availability. The Alpha APIs are enabled by default. System administrators can activate and deactivate VMware Cloud Director Alpha APIs by using the VMware Cloud Director API or by turning Alpha Features on or off in the VMware Cloud Director UI. The following functionalities are available when Alpha APIs are active:
    • Kubernetes Container Clusters. When Alpha API support is active, you can provision Tanzu Kubernetes Grid Service clusters in addition to native clusters.
    • Legacy API Login. When you specify API version 37.0.0-alpha in your request, the legacy API login endpoints are unavailable. The removal of the /api/sessions API login endpoint is due in the next major VMware Cloud Director release (VMware Cloud Director API version 37.0).

VMware Cloud Provider Lifecycle Manager

VMware Cloud Provider Lifecycle Manager simplifies the operational experience by providing a comprehensive solution to deploy, upgrade, configure, and manage the VMware Cloud Provider Program products.

To enable tasks for VMware Cloud Provider Program products, VMware Cloud Provider Lifecycle Manager provides REST APIs.

To deploy and manage a VMware Cloud Provider Program product, VMware Cloud Provider Lifecycle Manager requires a definition of the REST API request and the product binaries.

VMware Cloud Provider Lifecycle Manager needs access to the repository containing the OVA files, upgrade packages, and other binaries.

VMware Cloud Provider Lifecycle Manager is delivered as a Docker image. To run VMware Cloud Provider Lifecycle Manager, you can configure a host running Photon OS on which you start the Docker service and run the Docker image.

VMware Cloud Provider Lifecycle Manager Port List

To run the VMware Cloud Provider Lifecycle Manager Docker image, you must first configure the host environment. Create repository directories for storing the log files, certificates, product OVA files, and product update files.

After creating the repository directories, you configure the permissions for every directory. As a result, the files within the directory inherit the permissions you configure on the directory level.

To run VMware Cloud Provider Lifecycle Manager, after uploading the VMware Cloud Provider Lifecycle Manager docker image to the Photon OS virtual machine, you must start the docker container. Use the API version and session ID to authenticate and run requests against VMware Cloud Provider Lifecycle Manager.
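
Starting the container looks roughly like the following; the image archive name, tag, repository path, and published port are all assumptions (follow the file names and options in the official VCPLM documentation for your release):

```shell
IMAGE_TAG="vcplcm:latest"   # hypothetical tag -- use the one from your download
docker load -i vcplcm-image.tar.gz || echo "docker load failed"
docker run -d --name vcplcm \
  -v /cplcmrepo:/cplcmrepo \
  -p 8443:8443 \
  "${IMAGE_TAG}" || echo "docker run failed"
```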

To deploy a product, on the VMware Cloud Provider Lifecycle Manager host, first you must create the respective product environment.

VMware Cloud Director prerequisites:

  • The VMware Cloud Director cells must have access to the NFS share. Before deploying VMware Cloud Director, the NFS share must be empty and read/write access must be enabled without authentication.
  • VMware Cloud Director Load Balancer – before deploying the VMware Cloud Director cell, you must configure the dedicated load balancer.
  • To enable forward and reverse lookup of IP addresses and hostnames, you must configure DNS A and PTR records for each VMware Cloud Director cell and load balancer.
  • To deploy VMware Cloud Director with CA-signed certificates, you must generate the required certificates and provide them in the REST API payload.
  • PVDC:
    • vCenter cluster or dedicated resource pool available to be configured for PVDC (if root resource pool of the cluster is used, no resource pool name should be specified)
    • Storage profile preconfigured in vCenter, including compliant datastores
    • NSX-T Manager is accessible and provides overlay transport zone that can be used for network pool.
    • An NSX-T tier-0 gateway is available, along with an accessible subnet that can be used for configuring a VCD external network
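
The NFS prerequisite above can be satisfied with a standard export on the NFS host. A sketch of an /etc/exports entry granting the cells read/write access without authentication (the path and network are placeholders):

```
/nfs/vcd-transfer 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing /etc/exports, re-export with `exportfs -ra` and verify the share is empty before the VMware Cloud Director deployment.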

vRealize Operations Manager prerequisites:

  • Deploy vRealize Operations Manager and enable the access from vRealize Operations Manager Tenant App to vRealize Operations Manager.
  • DNS A and PTR records have to exist for the vRealize Operations Manager Tenant App appliance to enable forward and reverse lookup of IP addresses and hostnames.

vCloud Usage Meter prerequisites:

  • To automate the reporting, aggregation, and pre-filling of vCloud Usage Meter product consumption data, after deploying vCloud Usage Meter, you must register the vCloud Usage Meter instance with VMware Cloud Provider Commerce Portal. You cannot configure the vCloud Usage Meter integration with VMware Cloud Provider Lifecycle Manager before registering the vCloud Usage Meter instance with VMware Cloud Provider Commerce Portal.
  • DNS – DNS A and PTR records have to exist for the vCloud Usage Meter appliance to enable forward and reverse lookup of IP addresses and hostnames.

RabbitMQ prerequisites:

To provide access to all RabbitMQ instances, you must configure the RabbitMQ load balancer. DNS – DNS A and PTR records must exist for each RabbitMQ instance and the load balancer to enable forward and reverse lookup of IP addresses and hostnames.

Next time, I will show how to upgrade a product using VMware Cloud Provider Lifecycle Manager.