VCF 9 is here!  What's New in NSX?

Andy Schneider

Since its announcement at VMware Explore last year, VMware Cloud Foundation (VCF) 9 has generated significant buzz with its innovative features and capabilities. The wait is over—VCF 9 is now generally available!

VCF 9 fundamentally changes how VMware products are deployed and consumed. Prior releases allowed customers to deploy capabilities selectively, which was flexible but led to variability in deployments and unintended uses.

While VCF has plenty of room for creativity, it is a prescriptive architecture.  NSX comes delivered out of the gate, and with that, you can start building and consuming its features immediately!

This is even more evident in VCF 9.  For instance, VCF 9 is "VPC Ready".  This means you can start consuming VPCs out-of-the-box with minimal effort.  This also means VPCs are consumable via many facets - vCenter, VCF Ops, VCF Automation, Supervisor, API, SDK, and, of course, directly in NSX!

The latest NSX 9.0 release notes are posted here.  In this blog post, I will walk through the new features and comment on the value I see and how customers will benefit.  Let's begin, shall we?

Licensing


Licensing might not be the most exciting topic, but there are some genuinely useful updates in VCF 9:
  • Centralized license management: Licensing is now handled through the connected vCenter. Simply add your license file to vCenter, connect NSX, and the license key is automatically synced to NSX Manager — no more manual entry.

  • Extended trial period: VCF trials now last 90 days, up from the previous 60 — giving you more time to explore and evaluate the platform.

 

Virtual Private Cloud (VPC) Networking

Virtual Private Cloud (VPC) in vCenter

 
As part of the "VPC Ready" initiative, a new workload domain in VCF 9 now includes a ready-to-go default NSX Project already populated in vCenter.
 
You can now quickly and easily provision VPCs, Subnets, DHCP settings, and External IPs for VMs directly within vCenter.  You no longer need to context-switch over to NSX in order to provision VPCs, Subnets, and so on.
 
Of course, if you have a more advanced configuration, you may still want to configure this in NSX or automate it with VCF Automation, Terraform, or the API.
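To give a feel for the API route, here is a minimal sketch in Python. The NSX Manager hostname, credentials, Policy API path, and payload fields are all illustrative assumptions based on the Policy API's orgs/projects/vpcs hierarchy, so validate them against the NSX 9.0 API reference before using anything like this.

```python
# Minimal sketch (not an official example): create a VPC under the default
# Org and Project via the NSX Policy API. The URL pattern and payload fields
# are assumptions; check the NSX 9.0 API reference for the exact schema.
import requests

NSX_MANAGER = "nsx-mgr.example.com"          # hypothetical NSX Manager FQDN
AUTH = ("admin", "REPLACE_WITH_PASSWORD")    # basic auth for brevity

vpc_id = "dev-vpc-01"
url = f"https://{NSX_MANAGER}/policy/api/v1/orgs/default/projects/default/vpcs/{vpc_id}"

payload = {
    "display_name": vpc_id,
    "description": "Example VPC created via the Policy API",
}

# The usual Policy API pattern: declare the desired state of the object.
resp = requests.put(url, json=payload, auth=AUTH, verify=False)
resp.raise_for_status()
print(f"VPC '{vpc_id}' created or updated (HTTP {resp.status_code})")
```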

Virtual Private Cloud (VPC) in VCF Automation

 
Speaking of VCF Automation (that's the new name for Aria Automation), VPCs can now be consumed natively within VCFA.  There is a lot of detail here, so I will most definitely dedicate a blog post to this topic.
 
Basically, when a new Org is created in VCFA, a new NSX Project (aka Tenant) is created.  Under that, Namespaces can be defined, each of which maps to a VPC.

Virtual Private Cloud (VPC) in Supervisor

 
VMware Supervisor allows self-contained Namespaces to be defined in vCenter. These can be accessed directly via the Kubernetes API or through the UI. In VCF 9, Supervisor can now consume VPCs, enabling workloads such as VMware Kubernetes Service (VKS) clusters, vSphere Pods, or even VMs to be deployed under these mapped VPCs — all through the Kubernetes API.
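As a rough illustration of the Kubernetes API path, the sketch below uses the Python kubernetes client to request a VM in a Supervisor Namespace that is mapped to a VPC. The VirtualMachine CRD version and the spec fields (class, image, storage class) are assumptions borrowed from the VM Service and will vary by release, so treat this as the shape of the workflow rather than a copy-paste example.

```python
# Sketch only: deploy a VM into a Supervisor Namespace (mapped to a VPC)
# through the Kubernetes API. The CRD group/version and spec fields are
# assumptions based on the VM Service; verify them against your Supervisor.
from kubernetes import client, config

config.load_kube_config()            # kubeconfig pointing at the Supervisor
api = client.CustomObjectsApi()

vm_body = {
    "apiVersion": "vmoperator.vmware.com/v1alpha1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "team-a"},  # Namespace backed by a VPC
    "spec": {
        "className": "best-effort-small",    # assumed VM class name
        "imageName": "ubuntu-22.04",         # assumed content library image
        "powerState": "poweredOn",
        "storageClass": "vsan-default",      # assumed storage class
    },
}

api.create_namespaced_custom_object(
    group="vmoperator.vmware.com",
    version="v1alpha1",
    namespace="team-a",
    plural="virtualmachines",
    body=vm_body,
)
print("VirtualMachine request submitted to the Supervisor")
```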

Virtual Private Cloud (VPC) Simplification


VPCs have received several "level-ups".  On the simplification front in particular, how VPCs connect to the rest of the network is now more abstracted:
  • A new construct called the Transit Gateway (TGW) is now available. It acts as the interconnect between multiple VPCs within a single NSX Project and can operate in either Centralized or Distributed mode.
  • External Network Connectivity defines how a Transit Gateway connects northbound to the rest of the network.  It also defines IP Blocks that can be consumed by the NSX IPAM services.

Transit Gateways for VPCs


Transit Gateways (TGWs) simplify network consumption in private cloud environments by abstracting inter-VPC and VPC-to-external connectivity.  For example, you can now define VPC subnets using private address ranges scoped to either the VPC itself or the TGW.  And if external connectivity is needed, NAT can also be performed at the VPC or TGW level now.

Transit Gateways with Distributed VLAN Connectivity


One of the coolest things about TGWs is the Distributed TGW (DTGW)!  The simplest way to think of it is "Edgeless NSX".  Northbound connectivity from a VPC is now handed off directly from the ESX host (aka Transport Node) to the Top-of-Rack switch.
 
Essentially, a network administrator can assign a CIDR block of IP addresses, trunk it to all of the ESX hosts that will be serving the VPC, and then NSX can carve it up into smaller subnets within one or more VPCs.
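To make that carving concrete, here is a tiny, NSX-agnostic illustration in plain Python of how one routable block could be split into per-VPC subnets. The block size and prefix lengths are arbitrary examples.

```python
# Illustration only: splitting one routable CIDR block into smaller subnets,
# the same kind of carving NSX IPAM does when a block is assigned to VPCs.
import ipaddress

# Block the network admin assigned and trunked to the ESX hosts (example value).
assigned_block = ipaddress.ip_network("10.20.0.0/22")

# Carve the /22 into /26 subnets that individual VPC subnets could consume.
vpc_subnets = list(assigned_block.subnets(new_prefix=26))

for subnet in vpc_subnets[:4]:
    print(subnet)        # 10.20.0.0/26, 10.20.0.64/26, 10.20.0.128/26, ...
```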
 
East/West traffic is still handled by NSX TEP-to-TEP.  The same kernel-based switching, routing, and security services apply.  The difference is now related to North/South traffic.  Instead of egressing through an NSX Edge, the host itself forwards the traffic Northbound to the ToR where the gateway is hosted.  This use case is best suited for customers who already have a hardware-based Overlay to provide network mobility and can extend VLANs across racks without using NSX.
 
Of course, there are still reasons to continue using the Centralized TGW.  Some examples today include SNAT/DNAT, Gateway Firewall, VKS, some Aria Automation use cases, and likely others.  I will dedicate a blog post to the specifics soon.

VPC-Ready Workload Domains


As I mentioned before, one of the goals of VCF 9 was to be "VPC-Ready" out of the box.  So, after the creation of a new Workload Domain in VCF, vCenter, NSX, and VCF Automation are prepared with all of the prerequisites so that you can start provisioning new VPCs!

Virtual Private Cloud (VPC) DHCP Enhancements


DHCP is super simple with VPCs.  It can be forwarded to an external IPAM or very easily hosted by NSX with a click of a checkbox.  Centralized and Distributed Transit Gateway both support DHCP Server.

Terraform Support for VPC

Our Terraform Provider has had VPC support for a while, but it now includes all of the new constructs, such as the Transit Gateway.

 

Enhanced Data Path and Performance

Enhanced Data Path (EDP) Standard as Default Mode


EDP Standard is the new "standard".  Without any tuning or tweaking, we can support greater throughput at the host and edge level.  It's more efficient in CPU usage and delivers more consistent performance with smaller packet sizes.  There really is little reason to use the old mode!

NSX Switch Port Analyzer (SPAN) in EDP


We've rounded out some of the features that were missing in EDP mode, SPAN being one of them.

Real-Time Monitoring for EDP


Live Traffic Analysis now supports EDP too!

Real-Time Virtual Switch

If you operate in the manufacturing space, check out the Industrial vSwitch, which also supports EDP. It’s specifically designed for industrial automation and SCADA environments.

 

Edge Platform

Edge Host Affinity


To minimize downtime during lifecycle management of ESX hosts, we now manage NSX Edge failover through higher-level protocols instead of using vMotion to migrate the Edge to a different host.  This results in smoother, more seamless failovers during upgrades.

NSX Edge Platform Usability


You can now optionally install and configure NSX Edges directly through vCenter.  Cool!  Of course, the "old way" of installing via NSX Manager is still there too :)  

Gateway Firewall Disabled by Default

Prior to VCF 9, the Gateway Firewall was enabled by default when deploying new T0 or T1 Gateways.  So, even if no rules were implemented, the firewall service was still tracking the state of connections and utilizing resources.  Now, the Gateway Firewall is "opt-in" and only turned on if needed.

 

Installation and Upgrade - LCM

NSX Installation with VMware Cloud Foundation 9.0


NSX has always been a part of VCF, so no change here?  I guess we are re-emphasizing that NSX is enabled to ensure we are "VPC Ready", and you know how the saying goes: there is no cloud without "cloud networking".

NSX VIBs Included with ESX and Live Patch Support


Another awesome development.  "Prepping" for NSX just got a lot easier.  Now that we align our product releases (i.e., ESX, NSX, vCenter, and related versions are all synchronized), the VMware Installation Bundles (VIBs) for NSX come pre-packaged with the ESX ones.
 
Besides making installation easier, the bigger benefit in my opinion is upgrades.  Formerly, upgrading the ESX image required one maintenance window (and possibly evacuating all VMs), and upgrading the NSX VIBs required a separate one.  This is now simplified to a single operation.  And if the upgrade supports a Live Patch, there's no impact or evac required whatsoever!

Unified Configuration Management


The NSX cluster-level configuration is now managed alongside ESX and VDS via the vSphere Config Profiles (VCP).

Virtual Networking TEPs on Management VMkernel Interface


To minimize friction and reduce IP utilization, customers may now use vmk0 (the traditional ESX management interface) to also carry the NSX TEP functionality.
 
I still think customers would be best served with separate TEP interfaces, but this certainly aids with PoCs or small environments.

Single NSX Manager Support

A single-node (“singleton”) NSX Manager deployment is now fully supported. High availability is provided through vSphere HA, rather than the traditional 3-node NSX Manager cluster (which also leverages vSphere HA).

Like the vmk0 feature mentioned earlier, this setup is best suited for PoCs or very small, resource-constrained environments.

NSX Upgrade Alignment


Since the ESX and NSX VIBs are now delivered together by default, the upgrade order has been adjusted to align with the vSphere upgrade process.

Hitless NSX Upgrade

You should now be able to continue to use the NSX Manager UI or API during a cluster upgrade without experiencing disruption.

 

Operations

Serviceability Improvements to Grouping and Tagging


System-generated tags no longer count towards tested maximums.  Check out the NSX 9 configuration maximums here.

Serviceability Improvements to Inventory


There have been improvements to the control plane that make communication between NSX Manager and the Transport Nodes more efficient and less error-prone.  Always a nice addition!

Enhancements to Online Diagnostic System in NSX

ODS runbooks have been a little-known feature for some time, primarily used to execute complex troubleshooting workflows—mostly by the support team, and occasionally by customers. They were triggered manually during troubleshooting to surface useful diagnostic information.

With NSX 9, some ODS runbooks can now be triggered automatically based on specific conditions or errors. This enables real-time debugging and diagnostic logging, helping to accelerate troubleshooting efforts.

Monitoring

Logical Switch IPFIX for VCF Networking-only Customers


Some IPFIX features used by VCF Operations for Networks (formerly Aria Operations for Networks, formerly vRealize Network Insight, formerly Arkin ;)) required the vDefend Firewall license to process correctly.  One example is latency tracking.  Now it is built into VCF!  VCF Ops will choose the best method for gathering the flow data based on your licensing and entitlement.

NSX System Health Monitoring Improvement and Integration with VCF Operations


We now have some cool new networking dashboards in VCF Operations - check them out!

NSX Edge Monitoring Documentation Enhancement

The metrics for NSX Edge monitoring and troubleshooting now include detailed descriptions to make the monitoring information easier to understand. This updated information can be found in the NSX API documentation.

 

Final Thoughts

There are so many new features in VCF 9, it's a bit dizzying! What are your favorites? What are you most excited to try out first? If there’s a specific feature you’d like me to dive deeper into, or if you have questions, feel free to reach out on LinkedIn.

Cheers — and Happy VCF Day!