Tanzu for VMware Cloud Director

Container Service Extension (CSE) is a key component of VMware Cloud Director for providing Kubernetes as a Service. With the release of CSE 3.1.2, new features such as the Cluster API Provider for Cloud Director became available.

This is a quick overview of CSE, TKGm, Cluster API, and the ingress load balancer.

CSE Components
CSE User Personas

CSE Server OS:
Any OS is supported.

Minimum resources for the CSE Server:
2 vCPU
2 GB memory
10 GB storage

The CSE Server requires access to VMware Cloud Director.

The CSE Server requires outbound internet connectivity to install the required packages and the Container Service Extension itself.
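
As a rough illustration, the CSE server is distributed as a Python package, so the installation can be scripted. This is a minimal sketch assuming a supported Python 3 environment with pip, outbound access to PyPI, and a prepared CSE configuration file (config.yaml is a placeholder name):

    # Minimal CSE server installation sketch (config.yaml is a placeholder).
    import subprocess

    # Install the CSE server package from PyPI.
    subprocess.run(["pip", "install", "container-service-extension"], check=True)

    # Verify the installed CSE version.
    subprocess.run(["cse", "version"], check=True)

    # Register CSE with VMware Cloud Director using the prepared config file.
    subprocess.run(["cse", "install", "-c", "config.yaml"], check=True)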

Tanzu Kubernetes Grid with CSE

TKG 1.4 Release Notes

TKG with Container Service Extension

Cluster API for VMware Cloud Director

Provision production-ready TKG clusters on VMware Cloud Director
Multi-control-plane TKG clusters


The bootstrap cluster is the first cluster; it installs the management cluster in the customer organization.
It can be an existing TKG cluster in the organization.

The second step is installing Cluster API on the bootstrap cluster.

Last, configure the management cluster for self-management.
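
A heavily hedged sketch of this flow, assuming the clusterctl and kubectl CLIs are available and KUBECONFIG points at the bootstrap cluster. The provider name "vcd" and the file names are assumptions; early CAPVCD releases are installed by applying the provider manifests with kubectl instead:

    # Bootstrap flow sketch; provider name and file names are assumptions.
    import subprocess

    # Install Cluster API plus the Cloud Director infrastructure
    # provider on the bootstrap cluster.
    subprocess.run(["clusterctl", "init", "--infrastructure", "vcd"], check=True)

    # Create the management cluster from a generated CAPVCD manifest
    # (capi-cluster.yaml is a placeholder).
    subprocess.run(["kubectl", "apply", "-f", "capi-cluster.yaml"], check=True)

    # Pivot Cluster API into the new cluster so it becomes
    # self-managing (mgmt.kubeconfig is a placeholder).
    subprocess.run(
        ["clusterctl", "move", "--to-kubeconfig", "mgmt.kubeconfig"],
        check=True,
    )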

TKG Load Balancer

One Service Engine Group is deployed for every TKG cluster (both management and workload).

A load balancer is automatically deployed to front-end the Kubernetes API server.

VMware Cloud Director creates the Service Engine Group on the NSX-T Advanced Load Balancer.

This allows simplified scaling across multiple Kubernetes control plane nodes.
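
Beyond the automatically created control-plane virtual service, the same Service Engine Group serves tenant workloads: exposing a Kubernetes Service of type LoadBalancer is what triggers a virtual service on the Avi side. A minimal sketch using the official Kubernetes Python client (names are placeholders):

    # Creating a LoadBalancer Service; the VCD cloud provider fulfils it
    # with a virtual service on the NSX-T Advanced (Avi) load balancer.
    from kubernetes import client, config

    config.load_kube_config()  # kubeconfig of the TKG workload cluster
    v1 = client.CoreV1Api()

    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="demo-lb"),
        spec=client.V1ServiceSpec(
            type="LoadBalancer",       # handled by the cloud provider
            selector={"app": "demo"},  # placeholder app label
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )
    v1.create_namespaced_service(namespace="default", body=service)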

Prerequisites for Automated Ingress Load Balancing

Cloud Provider:

Provision the NSX-T Advanced Load Balancer and enable load balancing in the customer organization
Provision rights bundles to allow load balancer service management
Allocate an external network IP pool

Tenant User:

Upload an SSL certificate for secure ingress access
Install Contour or NGINX using Helm (see the sketch below)
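
A minimal sketch of these tenant-side steps, assuming the kubectl and helm CLIs are configured against the TKG cluster; the Bitnami Contour chart and the certificate file names are examples, not the only option:

    # Tenant-side ingress setup sketch (chart and file names are examples).
    import subprocess

    # Upload the SSL certificate as a TLS secret for secure ingress access.
    subprocess.run(
        ["kubectl", "create", "secret", "tls", "ingress-cert",
         "--cert", "tls.crt", "--key", "tls.key"],
        check=True,
    )

    # Install Contour with Helm; its Envoy Service of type LoadBalancer
    # receives a VIP from the NSX-T Advanced Load Balancer.
    subprocess.run(
        ["helm", "repo", "add", "bitnami", "https://charts.bitnami.com/bitnami"],
        check=True,
    )
    subprocess.run(["helm", "install", "contour", "bitnami/contour"], check=True)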

Container Service Extension 3.1.2

CSE 3.1.2 GA announced

Container Service Extension 3.1.2 was released last week. In this post, I will describe the new features and the compatibility matrix, along with a quick overview for Cloud Providers.

New Features:

  • Cluster API Provider for Cloud Director that offers multi-control plane clusters and cluster upgrades using declarative, Kubernetes-style APIs.
  • Kubernetes External Cloud Provider for VCD has been updated to v1.1.0.
  • Kubernetes Container Storage Interface for VCD has been updated to v1.1.0.
  • Kubernetes Container Clusters plugin has been updated to version 3.1.2. The plugin ships with VCD 10.3.2.
  • Support for injecting proxy information into TKG clusters created by CSE. Learn more about the feature.
  • New command option to forcefully delete clusters that were not fully created and were left in an unremovable state (see the sketch after this list).
  • Support for VMware Tanzu packages – Harbor, FluentBit, Prometheus, Grafana in TKG clusters.
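
For the forceful delete option, a hedged illustration only; the exact flag name varies by CSE release, so check `vcd cse cluster delete --help` in your environment:

    # Force-deleting a stuck cluster; the --force flag and the cluster
    # name are assumptions, verify the option name for your CSE release.
    import subprocess

    subprocess.run(
        ["vcd", "cse", "cluster", "delete", "my-stuck-cluster", "--force"],
        check=True,
    )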

Compatibility

CSE – UI Plugin – VMware Cloud Director
CSE – VMware Cloud Director – NSX-T
CSE – Cloud Director – NSX-T / AVI

Tanzu Kubernetes for Cloud Providers

Which TKG flavor is the correct one?

As Service Providers are looking for multi-tenancy, TKG-m is the right flavor. Please avoid using TKG-s.

CSE / TKG-m deployment steps

For more information about TKG-m deployment, please check:

https://gnunes.cloud/2022/01/10/tanzu-for-cloud-providers/

TANZU FOR CLOUD PROVIDERS

VMware / Container Service Extension Platform Architecture


Most Service Providers are looking to extend their existing VMware Cloud Director environment to offer Containers-as-a-Service by implementing VMware Tanzu Kubernetes Grid, which enables deploying and managing containers.

There are several options; the question is which of them is the right one for your business needs. In this post, I will try to help Service Providers find these answers.

Basic or Standard

These are the two options for cloud providers for now (this will change in the future).

Tanzu Basic is included for Partners utilising the Flex Core model.

Tanzu Basic

Tanzu Basic, along with VMware Cloud Director, enables Managed Kubernetes Services that help Cloud Providers expand their services business by targeting DevOps teams and developers among current or new customers who are on vSphere – those who want to embrace infrastructure transformation on-premises as the first step towards modern application delivery. VMware Tanzu Basic simplifies the adoption of Kubernetes on-premises, putting cloud-native constructs at the VI Admin’s fingertips as part of vSphere.

Tanzu Standard

VMware Tanzu Standard provides an enterprise-grade Kubernetes runtime across on-premises environments, public clouds, and the edge, with a global control plane for consistent and secure management at scale.

It helps customers realize the benefits of multi-cloud and operate all clusters consistently across environments, while enhancing security and governance over the entire Kubernetes footprint.

Tanzu Kubernetes Grid deployment

Which TKG flavor is the correct one?

As Service Providers are looking for multi-tenancy, TKG-m is the right flavor. Please avoid using TKG-s.

Container Service Extension (CSE)

Container Service Extension (CSE) for VMware Cloud Director enables service providers to offer Kubernetes services with open-source, upstream Kubernetes templates. In addition, Container Service Extension 3.1.1 introduces a significant enhancement to support Tanzu Kubernetes Grid (Multi-cloud), also known as TKG-m. Starting with CSE 3.1.1, providers can use TKG-m (1.4) as a runtime. The provider administrator can install CSE with a few updated configurations to facilitate the TKG-m runtime for customers.

CSE – TKG-m deployment steps

My Lab Products Versions

These versions are the ones I deployed in my home lab.
*CSE can be installed with any other supported OS.

As you can see, the deployment of NSX-T, Advanced Load Balancer, and CSE is mandatory for TKG-m enablement.

There is no OVA version yet, so the deployment should be done manually.

Next time we will take a deeper look into Container Service Extension (CSE) deployment and troubleshooting.

Have a nice 2022!

Becoming a vExpert, why not?



vExpert 2022 applications are now open!

This is a great opportunity to validate your experience, share your knowledge and experiences, and be part of a great community in which we help each other.

You have until February 14 to apply.

Link
https://vexpert.vmware.com

Do you want to know the benefits of the program and how to apply?

https://blogs.vmware.com/vexpert/2021/12/07/vexpert-2022-applications-are-open

If you have any questions, do not hesitate to contact me.

It’s time to be a vExpert!

VMware Cloud Director 10.3.1 GA

VMware Cloud Director 10.3.1 brings new features to Core, Networking, and Tanzu.

Core

Cloud Director appliance backup and restore, with a new API.

Networking

Migrating to NSX-T now supports certificate authentication for IPsec VPN.

DHCP Relay.

DHCP Binding.

L2VPN in the user interface.

Container Service Extension

API Tokens

Consistent and simpler deployments across all environments

Reduced configuration drift and improved server recovery times
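
A hedged sketch of the API token flow: the token generated in the VCD UI is exchanged for a short-lived access token through the OAuth endpoint and then used as a bearer token. The URL, organization name, and API version in the Accept header are placeholders:

    # Exchanging a VCD API token for an access token (values are placeholders).
    import requests

    VCD = "https://vcd.example.com"
    API_TOKEN = "..."  # generated in the VCD UI

    resp = requests.post(
        f"{VCD}/oauth/tenant/myorg/token",
        data={"grant_type": "refresh_token", "refresh_token": API_TOKEN},
    )
    resp.raise_for_status()
    access_token = resp.json()["access_token"]

    # Call the VCD OpenAPI with the bearer token.
    orgs = requests.get(
        f"{VCD}/cloudapi/1.0.0/orgs",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Accept": "application/json;version=36.2",  # example version
        },
    )
    print(orgs.json())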

Users with vCD 10.3.1 and CSE 3.1.1 can deploy fully supported and secure VMware Tanzu Kubernetes Grid clusters by importing the TKG OVA in the vCD UI.
CSE 3.1.1 automatically creates a vCD AVI-based load balancer to secure Layer 4 traffic.

CSE also has a major enhancement on the storage front: Kubernetes services can be created using the Container Storage Interface (CSI) driver to dynamically allocate vCD named independent disk-based Persistent Volumes (PVs) that enforce the tenant’s storage limits.
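
As a small illustration of the CSI enhancement, a PersistentVolumeClaim against the VCD CSI driver is fulfilled as a vCD named independent disk; the storage class name below is a placeholder for one published to the tenant:

    # PVC sketch for the VCD CSI driver (storage class name is a placeholder).
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-pvc"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="vcd-disk-gold",  # placeholder class
            resources=client.V1ResourceRequirements(requests={"storage": "2Gi"}),
        ),
    )
    v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)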

More info: https://docs.vmware.com/en/VMware-Cloud-Director/index.html

VMware Cloud Director 10.3

Today VMware announced the release of VMware Cloud Director 10.3, including the following:

  • Kubernetes with VMware Cloud Director
    • Tanzu Kubernetes clusters support for NSX-T Data Center group networking. Tanzu Kubernetes clusters are by default only reachable from IP subnets of networks within the same organization virtual data center in which a cluster is created. You can manually configure external access to specific services in a Tanzu Kubernetes cluster. If a Kubernetes cluster is hosted in a VDC that is part of an NSX-T data center group, you can permit access to the cluster’s control plane and to published Kubernetes services from workloads within that data center group.
    • Service providers and tenants can upgrade native and Tanzu Kubernetes clusters by using the VMware Cloud Director UI
    • Service providers can use the vRealize Operations Tenant App for Kubernetes chargeback
    • Tenants can use a single public API endpoint for the full lifecycle management of Tanzu Kubernetes Grid Service, Tanzu Kubernetes Grid, and upstream Kubernetes clusters
  • VMware Cloud Director appliance management UI improvements for turning on and off FIPS-compliant mode
  • API support for moving vApps across vCenter Server instances
  • Catalog management UI improvements
  • VMware Cloud Director Service Library support for vRealize Orchestrator 8.x
    • The Service Library items in VMware Cloud Director are vRealize Orchestrator workflows that expand the cloud management capabilities and make it possible for system administrators and organization administrators to monitor and manipulate different services. If you are using vRealize Orchestrator 7.x, your current functionality and workflows continue to work as expected. 
    • VMware Cloud Director 10.3 ships with a vRealize Orchestrator plug-in that you can use to render vRealize Orchestrator workflows that are published to tenants. You must publish the plug-in to all tenants that you want to run Service Library Workflows based on vRealize Orchestrator. 
  • Streamlined Quick Search and Global Search UI
  • Customizable Keyboard Shortcuts
  • Improvements in the performance of Auto Scaling extension
  • Networking Features
    • vApp network services in organization VDCs backed by NSX-T Data Center. You can use NAT, firewall, and static routing in vApp networks.
    • Distributed Firewall Dynamic Group Membership with NSX-T Data Center Networking. You can create security groups of VMs with a dynamic membership that is based on VM characteristics, such as VM names and VM tags. You use dynamic groups to create distributed firewall rules and edge gateway firewall rules that are applied on a per-VM basis in a data center group networking context. By using dynamic security groups in distributed firewall rules, you can micro-segment network traffic and effectively secure the workloads in your organization.
    • Service providers can create external networks backed by VLAN and overlay NSX-T Data Center segments
    • Service providers can import networks backed by vSphere DVPGs. System administrators can create organization virtual data center networks by importing a distributed port group from a vSphere distributed switch. Imported DVPG networks can be shared across data center groups.
    • VLAN and port-group network pools for VDCs backed by NSX-T Data Center
    • Support for provider VDC creation without associating it with NSX Data Center for vSphere or NSX-T Data Center
    • Update port groups of external networks
    • Avi 20.1.3 and 20.1.4 support
  • Networking UI Enhancements
    • UI support for assigning a primary IP address to an NSX-T edge gateway
    • UI support for DHCPv6 and SLAAC configuration
    • Support for IPv6 static pools creation and management
    • VDC group network list view in the UI
    • Improved Edge Cluster assignment in organization VDCs
    • Added support for DHCP management for isolated networks in organization VDCs backed by NSX-T Data Center
    • Service providers can edit Avi SEG general details
    • New Tier-0 Gateway Networking UI Section in the Service Provider Portal
  • Networking General Enhancements
    • Allocated DHCP IP addresses are visible on VM details screen
    • You can edit and remove DHCP pools from networks backed by NSX-T Data Center
    • Reject action for NSX-T Data Center edge gateway firewall rules. When creating a firewall rule on an NSX-T Data Center edge gateway, you can choose to block traffic from specific sources and notify the blocked client that traffic was rejected.
    • You can change the priority of NAT rules
    • Reflexive NAT support
    • VMware Cloud on AWS support for imported networks
    • Advertise services for internal subnets with route advertisement
    • Support for /32 subnets on external networks backed by NSX-T Data Center
    • Guest VLAN Tagging for networks backed by NSX-T Data Center segments
  • Alpha API availability. The Alpha APIs are enabled by default. System administrators can activate and deactivate VMware Cloud Director Alpha APIs by using the VMware Cloud Director API or by turning Alpha Features on or off in the VMware Cloud Director UI. The following functionalities are available when Alpha APIs are active:
    • Kubernetes Container Clusters. When Alpha API support is active, you can provision Tanzu Kubernetes Grid Service clusters in addition to native clusters.
    • Legacy API Login. When you specify API version 37.0.0-alpha in your request, the legacy API login endpoints are unavailable. The removal of the /api/sessions API login endpoint is due in the next major VMware Cloud Director release (VMware Cloud Director API version 37.0).

VMware Cloud Provider Lifecycle Manager

VMware Cloud Provider Lifecycle Manager simplifies the operational experience by providing a comprehensive solution to deploy, upgrade, configure, and manage the VMware Cloud Provider Program products.

To enable tasks for VMware Cloud Provider Program products, VMware Cloud Provider Lifecycle Manager provides REST APIs.

To deploy and manage a VMware Cloud Provider Program product, VMware Cloud Provider Lifecycle Manager requires a definition of the REST API request and the product binaries.

VMware Cloud Provider Lifecycle Manager needs access to the repository containing the OVA files, upgrade packages, and other product binaries.

VMware Cloud Provider Lifecycle Manager is delivered as a Docker image. To run VMware Cloud Provider Lifecycle Manager, you can configure a host running Photon OS on which you start the docker service and run the docker image.

VMware Cloud Provider Lifecycle Manager Port List

To run the VMware Cloud Provider Lifecycle Manager docker image, you must first configure the host environment. Create repository directories for storing the log files, certificates, product OVA files, and product update files.

After creating the repository directories, you configure the permissions for every directory. As a result, the files within the directory inherit the permissions you configure on the directory level.

To run VMware Cloud Provider Lifecycle Manager, after uploading the VMware Cloud Provider Lifecycle Manager docker image to the Photon OS virtual machine, you must start the docker container. Use the API version and session ID to authenticate and run requests against VMware Cloud Provider Lifecycle Manager.
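
A heavily hedged sketch of that API interaction; the endpoint paths, header name, and payload below are illustrative placeholders only, so consult the VCPLM API reference for the real routes:

    # VCPLM REST API sketch; routes, header, and payload are hypothetical.
    import requests

    VCPLM = "https://vcplcm.example.com"

    # Authenticate and capture the session ID (hypothetical endpoint/header).
    login = requests.post(
        f"{VCPLM}/api/v1/sessions",
        json={"username": "admin", "password": "..."},
        verify=False,  # lab only; use proper CA trust in production
    )
    session_id = login.headers.get("x-session-id")  # placeholder header name

    # Submit a product deployment request (hypothetical route and payload).
    requests.post(
        f"{VCPLM}/api/v1/environments",
        headers={"x-session-id": session_id},
        json={"environmentName": "prod-vcd", "products": [{"type": "VCD"}]},
        verify=False,
    )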

To deploy a product on the VMware Cloud Provider Lifecycle Manager host, you must first create the respective product environment.

VMware Cloud Director prerequisites:

  • The VMware Cloud Director cells must have access to the NFS share. Before deploying VMware Cloud Director, the NFS share must be empty and read/write access must be enabled without authentication.
  • VMware Cloud Director Load Balancer – before deploying the VMware Cloud Director cell, you must configure the dedicated load balancer.
  • To enable forward and reverse lookup of IP addresses and hostnames, you must configure DNS A and PTR records for each VMware Cloud Director cell and the load balancer (a quick verification sketch follows this list).
  • To deploy VMware Cloud Director with CA-signed certificates, you must generate the required certificates and provide them in the REST API payload.
  • PVDC:
    • vCenter cluster or dedicated resource pool available to be configured for the PVDC (if the root resource pool of the cluster is used, no resource pool name should be specified)
    • Storage profile preconfigured in vCenter, including compliant datastores
    • NSX-T Manager is accessible and provides an overlay transport zone that can be used for the network pool
    • An NSX-T Tier-0 gateway is available, along with an accessible subnet that can be used for configuring a VCD external network
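
Since DNS A and PTR records come up in every prerequisite list here, this is a quick forward/reverse lookup check using only the Python standard library (the hostname is a placeholder):

    # Verify that the A and PTR records for a cell agree with each other.
    import socket

    host = "vcd-cell-01.example.com"  # placeholder hostname
    addr = socket.gethostbyname(host)            # forward lookup (A record)
    ptr_name, _, _ = socket.gethostbyaddr(addr)  # reverse lookup (PTR record)
    print(addr, ptr_name)
    assert ptr_name.rstrip(".").lower() == host.lower(), "PTR does not match A"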

vRealize Operations Manager prerequisites:

  • Deploy vRealize Operations Manager and enable the access from vRealize Operations Manager Tenant App to vRealize Operations Manager.
  • DNS A and PTR records have to exist for the vRealize Operations Manager Tenant App appliance to enable forward and reverse lookup of IP addresses and hostnames.

vCloud Usage Meter prerequisites:

  • To automate the reporting, aggregation, and pre-filling of vCloud Usage Meter product consumption data, after deploying vCloud Usage Meter, you must register the vCloud Usage Meter instance with VMware Cloud Provider Commerce Portal. You cannot configure the vCloud Usage Meter integration with VMware Cloud Provider Lifecycle Manager before registering the vCloud Usage Meter instance with VMware Cloud Provider Commerce Portal.
  • DNS A and PTR records have to exist for the vCloud Usage Meter appliance to enable forward and reverse lookup of IP addresses and hostnames.

RabbitMQ prerequisites:

To provide access to all RabbitMQ instances, you must configure the RabbitMQ load balancer. DNS A and PTR records must exist for each RabbitMQ instance and the load balancer to enable forward and reverse lookup of IP addresses and hostnames.

Next time I will be showing how to Upgrade a product using VMware Cloud Provider Lifecycle Manager.

VMware Cloud Director 10.2.2

Good news, VMware Cloud Director 10.2.2 has been released introducing several new features.

VMware Cloud Director version 10.2.2 includes the following:

  • Tanzu Kubernetes Cluster Tenant Network Isolation – Tanzu Kubernetes clusters are now only reachable from workloads within the same organization virtual data center in which a cluster is created.
  • Tanzu Kubernetes Cluster Pod and Services CIDR selection – During the creation of a Tanzu Kubernetes cluster, you can specify ranges of IP addresses for Kubernetes services and Kubernetes pods.
  • VMware Cloud Director uses its management network for communication with Tanzu Kubernetes Clusters – The VMware Cloud Director management network is a private network that serves the cloud infrastructure and provides access for client systems to perform administrative tasks on VMware Cloud Director. Earlier releases use the Kubernetes service network. 
  • VMware Cloud Director appliance SNMP agent – You can configure the agent to listen for polling requests. If there is a preexisting Net-SNMP agent, during the upgrade, the VMware Cloud Director appliance replaces the Net-SNMP installation with VMware-SNMP. During VMware-SNMP setup, the VMware Cloud Director appliance dynamically configures the firewall rules required for SNMP operation. You must remove any existing firewall rules that work with Net-SNMP before the upgrade.
  • Global Placement Policy – Service providers can define placement policies that work effectively across all vCenter Server instances and clusters in a VMware Cloud Director environment. A single placement policy can point to hosts that span multiple clusters in one or more vCenter Server instances. Boundaries in underlying infrastructure are abstracted away behind the global logical construct of a placement policy making for a more logical experience for both service providers and tenants. This change enables capturing the placement policy when a vApp Template is created from a VM; the resulting vApp Template will inherit any placement policy from the original VM even if the VM and vApp Template are in different Provider VDCs. 
  • Guest Customization for Encrypted VMs – VMware Cloud Director 10.2.2 fully supports guest customization of VMs that run on encrypted storage. 
  • Organization Virtual Data Center Templates – You can create and share virtual data center (VDC) templates with tenant organizations so that organization administrators can use the templates to create VDCs. VMware Cloud Director 10.2.2 supports the use of NSX-T based networking with the organization VDC templates.
  • Storage Policy Update – Service providers can use storage policies in VMware Cloud Director to create a tiered storage offering, for example, Gold, Silver, and Bronze, or even offer dedicated storage to tenants. With the enhancement of storage policies to support VMware Cloud Director entities, you have the flexibility to control how you use the storage policies. You can have not only tiered storage, but isolated storage for running VMs, containers, edge gateways, and so on.
  • FIPS Support – This release of VMware Cloud Director includes support for the Federal Information Processing Standards (FIPS). Both the VMware Cloud Director appliance and the Linux binary can run in FIPS-compliant mode. FIPS mode is disabled by default. Enabling FIPS mode might affect the performance of VMware Cloud Director. If metrics collection is configured, verify the configuration of the server and client communication with Cassandra over SSL.
  • Direct VDC Networks Support in Organization VDCs backed by NSX-T Data Center – Service providers can create direct organization VDC networks in VDCs backed by NSX-T Data Center.
  • Autoscaling – Scaling groups are a new top-level object that tenants can use to implement automated horizontal scale-in and scale-out events on a group of workloads. You can configure autoscale groups with a source vApp template, a load balancer network, and a set of rules for growing or shrinking the group based on CPU and memory use. VMware Cloud Director automatically spins up or shuts down VMs in a scaling group.
  • Guided Tours Update – Service providers can publish custom-built guided tours and scope the tours to system administrators or tenants. Starting with VMware Cloud Director 10.2.2, you can download guided tours from a VMware GitHub repository or a custom GitHub repository.
  • Removing Static T-shirt Size – VMware Cloud Director 10.2.2 no longer supports the use of the predefined virtual machine sizes available since vCloud Director for Service Providers 9.0. You can use the VM sizing policy functionality to provide predefined VM sizing.

For information about system requirements and installation instructions, see VMware Cloud Director 10.2 Release Notes.

For information on appliance configuration and sizing, see the guidelines in VMware Cloud Provider Pod Designer – VMware Validated Designs for Cloud Providers.

Enjoy!

Cloud and App Virtual Event!

Join VMware executives for a 35-minute live event to learn how to unlock the power of any cloud for a new generation of modern apps. Then dive deeper with labs, Cloud City demos, and technical experts to turn your digital business vision into reality.

Latest vision and technology preview: join VMware, top cloud providers, press, analysts, customers, and fellow prospects as we launch our unique solution built specifically for multi-cloud.

Expert-led technical deep dives into key topics from App Modernisation to Cloud Migration to Cloud Operations

Hands on Labs/Trials

Latest solutions overview demos

Technical resources, architectural guides

Get the answers you need for your unique scenario, ranging from getting started to accelerating into the future even faster.



Workload Migration for Cloud Providers – Part 2

Last time we talked about the most common migration strategies for Cloud Providers (Repurchasing, Rehosting, and Relocating) and the migration types (Hot, Warm, and Cold). Take a look here:

https://via.vmw.com/EQjT

Today I would like to focus on the migration phases; at VMware Professional Services we divide them into four:

Plan

Planning is the most important phase of any migration and is critical before you start. We need to understand the types of workloads that are going to be migrated: the type of application, and how these workloads interact with each other.
Proper application discovery and dependency mapping is key. vRealize Network Insight is a great tool for this task, but understanding the applications also requires several meetings with the application owners.
I believe this is the most important phase; it is the foundation.

Build

It’s time to build the new cloud environment based on the planning stage. The SDDC should be designed, deployed, and configured. We also need to take future decisions into account and design accordingly.
For example: is Tanzu part of your roadmap? Are you planning to add Kubernetes as a service?

Migrate

We have already planned the migration and deployed the new cloud environment, so it’s time to start. HCX is the VMware-recommended tool. Ideally, the migration will be performed in waves (no more than 100 workloads each). It is good practice to start with test or non-production VMs. Test results should be part of this stage as well.

Operate

The workloads have now been migrated into the new cloud environment. As usual, it’s time to start on the daily tasks, such as monitoring, licensing, security, logging, backups, disaster recovery, and performance.
VMware vRealize® Suite can help at this stage.

Next time, I will be writing about Migration tools (HCX, vMotion, Advanced Cross vMotion, SRM, Etc)