VMware Cloud Director 10.3

Today VMware announced the release of VMware Cloud Director 10.3, including the following:

  • Kubernetes with VMware Cloud Director
    • Tanzu Kubernetes clusters support for NSX-T Data Center group networking. Tanzu Kubernetes clusters are by default only reachable from IP subnets of networks within the same organization virtual data center in which a cluster is created. You can manually configure external access to specific services in a Tanzu Kubernetes cluster. If a Kubernetes cluster is hosted in a VDC that is part of an NSX-T data center group, you can permit access to the cluster’s control plane and to published Kubernetes services from workloads within that data center group.
    • Service providers and tenants can upgrade native and Tanzu Kubernetes clusters by using the VMware Cloud Director UI
    • Service providers can use vRealize Operations Tenant App to chargeback Kubernetes
    • Tenants can use a single public API endpoint for the full lifecycle management (LCM) of Tanzu Kubernetes Grid Service, Tanzu Kubernetes Grid, and upstream Kubernetes clusters
  • VMware Cloud Director appliance management UI improvements for turning on and off FIPS-compliant mode
  • API support for moving vApps across vCenter Server instances
  • Catalog management UI improvements
  • VMware Cloud Director Service Library support for vRealize Orchestrator 8.x
    • The Service Library items in VMware Cloud Director are vRealize Orchestrator workflows that expand the cloud management capabilities and make it possible for system administrators and organization administrators to monitor and manipulate different services. If you are using vRealize Orchestrator 7.x, your current functionality and workflows continue to work as expected. 
    • VMware Cloud Director 10.3 ships with a vRealize Orchestrator plug-in that you can use to render vRealize Orchestrator workflows that are published to tenants. You must publish the plug-in to all tenants that you want to run Service Library Workflows based on vRealize Orchestrator. 
  • Streamlined Quick Search and Global Search UI
  • Customizable Keyboard Shortcuts
  • Improvements in the performance of Auto Scaling extension
  • Networking Features
    • vApp network services in organization VDCs backed by NSX-T Data Center. You can use NAT, firewall, and static routing in vApp networks.
    • Distributed Firewall Dynamic Group Membership with NSX-T Data Center Networking. You can create security groups of VMs with a dynamic membership that is based on VM characteristics, such as VM names and VM tags. You use dynamic groups to create distributed firewall rules and edge gateway firewall rules that are applied on a per-VM basis in a data center group networking context. By using dynamic security groups in distributed firewall rules, you can micro-segment network traffic and effectively secure the workloads in your organization.
    • Service providers can create external networks backed by VLAN and overlay NSX-T Data Center segments
    • Service providers can import networks backed by vSphere DVPGs. System administrators can create organization virtual data center networks by importing a distributed port group from a vSphere distributed switch. Imported DVPG networks can be shared across data center groups.
    • VLAN and port-group network pools for VDCs backed by NSX-T Data Center
    • Support for provider VDC creation without associating it with NSX Data Center for vSphere or NSX-T Data Center
    • Support for updating port groups of external networks
    • Avi 20.1.3 and 20.1.4 support
  • Networking UI Enhancements
    • UI support for assigning a primary IP address to an NSX-T edge gateway
    • UI support for DHCPv6 and SLAAC configuration
    • Support for IPv6 static pools creation and management
    • VDC group network list view in the UI
    • Improved Edge Cluster assignment in organization VDCs
    • DHCP management for isolated networks in organization VDCs backed by NSX-T Data Center
    • Service providers can edit Avi SEG general details
    • New Tier-0 Gateway Networking UI Section in the Service Provider Portal
  • Networking General Enhancements
    • Allocated DHCP IP addresses are visible on VM details screen
    • You can edit and remove DHCP pools from networks backed by NSX-T Data Center
    • Reject action for NSX-T Data Center edge gateway firewall rules. When creating a firewall rule on an NSX-T Data Center edge gateway, you can choose to block traffic from specific sources and notify the blocked client that traffic was rejected.
    • You can change the priority of NAT rules
    • Reflexive NAT support
    • VMware Cloud on AWS support for imported networks
    • Advertise services for internal subnets with route advertisement
    • Support for /32 subnets on external networks backed by NSX-T Data Center
    • Guest VLAN Tagging for networks backed by NSX-T Data Center segments
  • Alpha API availability. The Alpha APIs are enabled by default. System administrators can activate and deactivate VMware Cloud Director Alpha APIs by using the VMware Cloud Director API or by turning Alpha Features on or off in the VMware Cloud Director UI. The following functionalities are available when Alpha APIs are active:
    • Kubernetes Container Clusters. When Alpha API support is active, you can provision Tanzu Kubernetes Grid Service clusters in addition to native clusters.
    • Legacy API Login. When you specify API version 37.0.0-alpha in your request, the legacy API login endpoints are unavailable. The removal of the /api/sessions API login endpoint is due in the next major VMware Cloud Director release (VMware Cloud Director API version 37.0).
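As a sketch of the login change: new integrations should target the CloudAPI session endpoints rather than the legacy /api/sessions login. The host name and credentials below are placeholders, and the API version shown (36.0) is an assumption for a 10.3 deployment:

```shell
#!/bin/sh
# Builds the CloudAPI provider login URL. The legacy POST /api/sessions
# endpoint is slated for removal in API version 37.0, so new code should
# use the /cloudapi/1.0.0/sessions endpoints instead.
build_login_url() {
  printf 'https://%s/cloudapi/1.0.0/sessions/provider' "$1"
}

# A real login request would then look like this (host and credentials
# are placeholders):
#   curl -sk -X POST \
#     -H 'Accept: application/json;version=36.0' \
#     -u 'administrator@System:********' \
#     "$(build_login_url vcd.example.com)"
# The response returns an X-VMWARE-VCLOUD-ACCESS-TOKEN header to use as a
# bearer token on subsequent API calls.
build_login_url vcd.example.com
```

Tenant users log in against /cloudapi/1.0.0/sessions instead of the provider endpoint; the header and token handling are the same.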

VMware Cloud Provider Lifecycle Manager

VMware Cloud Provider Lifecycle Manager simplifies the operational experience by providing a comprehensive solution to deploy, upgrade, configure, and manage the VMware Cloud Provider Program products.

To enable tasks for VMware Cloud Provider Program products, VMware Cloud Provider Lifecycle Manager provides REST APIs.

To deploy and manage a VMware Cloud Provider Program product, VMware Cloud Provider Lifecycle Manager requires a definition of the REST API request and the product binaries.

VMware Cloud Provider Lifecycle Manager needs access to the repository containing the OVA files, upgrade packages, and other product binaries.

VMware Cloud Provider Lifecycle Manager is delivered as a Docker image. To run VMware Cloud Provider Lifecycle Manager, you can configure a host running Photon OS on which you start the Docker service and run the Docker image.

VMware Cloud Provider Lifecycle Manager Port List

To run the VMware Cloud Provider Lifecycle Manager Docker image, you must first configure the host environment. Create repository directories for storing the log files, certificates, product OVA files, and product update files.

After creating the repository directories, you configure the permissions for every directory. As a result, the files within the directory inherit the permissions you configure on the directory level.
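The host preparation described above can be sketched roughly as follows. The base path, subdirectory names, image name, and port mapping are examples of mine, not the product's mandated layout; adjust them to your environment:

```shell
#!/bin/sh
# Sketch: prepare the repository layout on the Photon OS host before
# starting the container. Paths below are example choices.
BASE="${VCPLCM_BASE:-/tmp/vcplcm}"

# Repository directories for logs, certificates, OVA files, and update files:
for dir in logs certs repo/ova repo/updates; do
  mkdir -p "${BASE}/${dir}"
done

# Set directory-level permissions; note that on plain POSIX filesystems new
# files follow the umask, so tighten or extend this (setgid bits, ACLs) to
# get the inheritance behavior you need:
chmod -R 0775 "${BASE}"

# After the directories exist, load and start the container (image file,
# image name, and ports below are hypothetical):
#   systemctl start docker
#   docker load -i vcplcm-image.tar
#   docker run -d --name vcplcm -v "${BASE}:/opt/vcplcm" -p 8443:8443 vcplcm:latest

ls "${BASE}"
```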

To run VMware Cloud Provider Lifecycle Manager, after uploading the VMware Cloud Provider Lifecycle Manager Docker image to the Photon OS virtual machine, you must start the Docker container. Use the API version and session ID to authenticate and run requests against VMware Cloud Provider Lifecycle Manager.

To deploy a product, on the VMware Cloud Provider Lifecycle Manager host, first you must create the respective product environment.

VMware Cloud Director prerequisites:

  • The VMware Cloud Director cells must have access to the NFS share. Before deploying VMware Cloud Director, the NFS share must be empty and read/write access must be enabled without authentication.
  • VMware Cloud Director Load Balancer – before deploying the VMware Cloud Director cell, you must configure the dedicated load balancer.
  • To enable forward and reverse lookup of IP addresses and hostnames, you must configure DNS A and PTR records for each VMware Cloud Director cell and load balancer.
  • To deploy VMware Cloud Director with CA-signed certificates, you must generate the required certificates and provide them in the REST API payload.
  • PVDC:
    • A vCenter Server cluster or a dedicated resource pool must be available to be configured for the PVDC (if the root resource pool of the cluster is used, do not specify a resource pool name)
    • A storage profile, including compliant datastores, must be preconfigured in vCenter Server
    • NSX-T Manager must be accessible and must provide an overlay transport zone that can be used for the network pool
    • An NSX-T tier-0 gateway must be available, with an accessible subnet that can be used for configuring a VMware Cloud Director external network
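The NFS prerequisite above translates into an export roughly like the following on the NFS server; the path and subnet are placeholders. The share must start out empty and allow read/write access without authentication:

```
# /etc/exports on the NFS server -- export an empty transfer share,
# writable by the VMware Cloud Director cells without authentication
# (example path and subnet):
/export/vcd-transfer  192.0.2.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing /etc/exports, apply the change with `exportfs -ra` and verify that a cell can mount the share read/write before starting the deployment.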

vRealize Operations Manager prerequisites:

  • Deploy vRealize Operations Manager and enable access from vRealize Operations Manager Tenant App to vRealize Operations Manager.
  • To enable forward and reverse lookup of IP addresses and hostnames, DNS A and PTR records must exist for the vRealize Operations Manager Tenant App appliance.

vCloud Usage Meter prerequisites:

  • To automate the reporting, aggregation, and pre-filling of vCloud Usage Meter product consumption data, after deploying vCloud Usage Meter, you must register the vCloud Usage Meter instance with VMware Cloud Provider Commerce Portal. You cannot configure the vCloud Usage Meter integration with VMware Cloud Provider Lifecycle Manager before registering the vCloud Usage Meter instance with VMware Cloud Provider Commerce Portal.
  • DNS A and PTR records must exist for the vCloud Usage Meter appliance to enable forward and reverse lookup of IP addresses and hostnames.

RabbitMQ prerequisites:

To provide access to all RabbitMQ instances, you must configure the RabbitMQ load balancer. DNS A and PTR records must exist for each RabbitMQ instance and for the load balancer to enable forward and reverse lookup of IP addresses and hostnames.
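The forward and reverse records requested throughout these prerequisites look roughly like this in BIND zone-file syntax; all names and addresses below are placeholders:

```
; Forward zone (example.com) -- A records for each RabbitMQ node and the LB
rabbitmq-lb   IN A   192.0.2.10
rabbitmq-01   IN A   192.0.2.11
rabbitmq-02   IN A   192.0.2.12

; Reverse zone (2.0.192.in-addr.arpa) -- matching PTR records
10  IN PTR  rabbitmq-lb.example.com.
11  IN PTR  rabbitmq-01.example.com.
12  IN PTR  rabbitmq-02.example.com.
```

The same A/PTR pairing applies to the VMware Cloud Director cells, the Tenant App appliance, and the vCloud Usage Meter appliance mentioned earlier.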

Next time, I will show how to upgrade a product by using VMware Cloud Provider Lifecycle Manager.