A vSphere administrator's job is not limited to the GUI; you should always be ready to troubleshoot issues from the command line. This also applies when you are dealing with VMware NSX. We have already discussed preparing your vSphere cluster and hosts by installing the NSX VIBs from the Networking & Security plugin in the vSphere Web Client. The installation of the NSX VIBs may fail for various reasons, and as vSphere admins we have to troubleshoot and fix the installation issues. I faced one such issue when preparing my cluster and ESXi hosts for NSX. Let's take a detailed look at the step-by-step procedure to manually install the NSX VIBs on an ESXi host.
If you extract the downloaded "vxlan.zip", you will find that it contains three VIB files.
One VIB enables the layer 2 VXLAN functionality, another VIB enables the distributed router, and the final VIB enables the distributed firewall.
Extract the vxlan.zip file and copy the folder to a shared datastore or to a local folder on the ESXi host using WinSCP. I have copied the folder to the /tmp directory on my ESXi host. Let's install the NSX VIBs one by one on the ESXi host.
Install the "esx-vxlan" VIB on the ESXi host using the below command:
esxcli software vib install --no-sig-check -v /tmp/vxlan/vib20/esx-vxlan/VMware_bootbank_esx-vxlan_5.5.0-0.0.2107100.vib
Install the "esx-vsip" VIB on the ESXi host using the below command:
esxcli software vib install --no-sig-check -v /tmp/vxlan/vib20/esx-vsip/VMware_bootbank_esx-vsip_5.5.0-0.0.2107100.vib
Install the "esx-dvfilter-switch-security" VIB on the ESXi host using the below command:
esxcli software vib install --no-sig-check -v /tmp/vxlan/vib20/esx-dvfilter-switch-security/VMware_bootbank_esx-dvfilter-switch-security_5.5.0-0.0.2107100.vib
That's it. We are done with manually installing the NSX VIBs on the ESXi host. This operation doesn't require a reboot of the ESXi host, and it can even be done while active workloads are running on the host. I hope this is informative for you. Thanks for reading. Be social and share it on social media, if you feel it is worth sharing.
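The three installs above can also be wrapped in one pass. This is a minimal sketch: the /tmp/vxlan paths and the 5.5.0-0.0.2107100 version string match this post's lab, so adjust them to whatever your vxlan.zip actually contains.

```shell
# Sketch: install all three NSX VIBs in one loop, then confirm they registered.
# Paths/version assume the vxlan.zip contents were copied to /tmp/vxlan as above.
for vib in esx-vxlan esx-vsip esx-dvfilter-switch-security; do
  esxcli software vib install --no-sig-check \
    -v "/tmp/vxlan/vib20/${vib}/VMware_bootbank_${vib}_5.5.0-0.0.2107100.vib"
done

# Quick confirmation that the three VIBs are now present:
esxcli software vib list | grep -E 'esx-(vxlan|vsip|dvfilter-switch-security)'
```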
I recently worked on my NSX setup and tried to remove one of the logical switches in my lab, but I kept getting the error message "Resources are still in use". The error tells us that some resources, such as VMs, are still utilizing this logical switch, which is why we are not able to delete it. Let's discuss how to verify which resources are actively utilizing an NSX logical switch.
Log in to the vSphere Web Client -> Networking & Security -> Logical Switches -> select the logical switch you are attempting to delete.
Double-click the logical switch you are attempting to delete, select the Related Objects tab, and then click the Virtual Machines tab.
If any virtual machines are still connected to the logical switch you are attempting to delete, migrate them to another logical switch or port group. In our case, I see the VM named "App-svr-1", which is still connected to the logical switch "App Tier", so migrate this VM to a different port group via Edit Settings.
OK, we have migrated the VM to a different port group. I tried to delete the logical switch again and still got the error message "Resources are still in use". There is one more resource to verify: whether this logical switch is connected to any NSX Edge device or DLR (Distributed Logical Router).
Double-click the logical switch you are attempting to delete, click the Manage tab, and then click the NSX Edges button.
If there are any connections (interfaces) to an NSX Edge, you will need to remove them. I can see that this logical switch "App Tier" has active connections (interfaces) to the logical router, so we need to remove them.
To delete the NSX logical router interface, go to vSphere Web Client -> Networking & Security -> NSX Edges -> select and double-click the Edge device that has active connections to the logical switch -> Manage tab -> Settings -> Interfaces -> select the interface connected to your logical switch -> click the X symbol to delete the interface.
Once both the VMs and the interfaces (LIFs) attached to Edge devices are removed, our logical switch no longer has any resources attached to it. Now delete the logical switch by clicking the X symbol.
When it comes to infrastructure systems, there is always the question of recovery options. Systems can crash for various reasons, so we need to know how we would recover them and what the backup strategy will be. For NSX Manager, we can back up and restore its data from the NSX Manager management web page. A backup can include system configuration, events, and audit log tables; configuration tables are included in every backup. Backups are saved to a remote location that must be accessible by the NSX Manager. In this post, we will discuss how to configure and schedule the backup of the NSX Manager data. Let's take a look at the detailed step-by-step procedure to configure NSX Manager backup & restore.
Log in to the NSX Manager management page using the below URL:
https://<NSX-Manager-IP-or-Name>
On the home page of NSX Manager, click Backups & Restore under Appliance Management.
Click Change next to FTP Server Settings to specify where the NSX Manager backup files will be stored.
Click Change next to Scheduling to schedule the backup of the NSX Manager data.
Click Change next to Exclude to exclude any data types from the NSX Manager backup.
All backup settings are now configured. Click Backup to initiate an immediate backup of NSX Manager.
Once the backup is completed, you will see the last backup information such as the filename, date, and size of the backup file.
I can see the same file when I browse to the backup directory on the FTP server.
To restore the NSX Manager data, select one of the backup files and click Restore.
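The same backup settings can also be inspected over the NSX Manager REST API rather than the web page. The endpoint below is from the NSX 6.x appliance-management API; the manager address and credentials are placeholders for this lab, so treat this as a sketch rather than a definitive call.

```shell
# Hypothetical example: read the configured backup settings from the NSX 6.x
# appliance-management REST API. Replace the address and credentials with
# your own; -k skips certificate validation, acceptable only in a lab.
curl -k -u 'admin:PASSWORD' \
  "https://nsx-manager.lab.local/api/1.0/appliance-management/backuprestore/backupsettings"
```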
In the previous post, we discussed creating NSX logical switches, and our workloads now have L2 adjacency across IP subnets with the help of VXLAN. In this post, we are going to enable routing between multiple logical switches, building a three-tier application with logical isolation provided by network segments. Before we deploy the Distributed Logical Router, let's create additional logical switches. We already created a logical switch called "Web-Tier" in the previous post; now I am going to create two additional logical switches called "App-Tier" and "DB-Tier".
I have created the additional logical switches (App-Tier and DB-Tier, alongside Web-Tier). We are going to use these logical switches to enable communication between them using distributed logical routing in the upcoming section.
You can see the list of logical switches that were created from Web Client -> Networking & Security -> Logical Switches. When we create a logical switch, it creates a distributed port group on each of the associated distributed switches.
NSX for vSphere provides L3 routing without leaving the hypervisor, known as the Logical Distributed Router. Routing occurs within the kernel of each host, distributing the routing data plane across the NSX-enabled domain. The distributed routing capability in the NSX platform provides an optimized and scalable way of handling east-west traffic within a data center. East-west traffic is communication between virtual machines or other resources within the data center.
In a typical vSphere network model, when virtual machines running on a hypervisor need to communicate with VMs connected to different subnets, the traffic has to go via a physical adapter of the ESXi host to a switch, and a physical router provides the routing services. Virtual machine traffic goes out to the physical router and comes back into the server after the routing decision. This sub-optimal traffic flow is sometimes called "hairpinning". Distributed routing on the NSX platform prevents hairpinning by providing routing at the hypervisor level: each hypervisor has a routing kernel module that performs routing between the logical interfaces (LIFs) defined on that distributed router instance. LIFs are simply the interfaces on the router that connect the various networks, i.e. the various logical switches.
A logical router can support a large number of LIFs, up to 1,000 per Logical Distributed Router. This, along with support for dynamic routing protocols such as BGP and OSPF, allows for scalable routing topologies. The LDR heavily optimizes east-west traffic flows and improves application and network architectures.
Below is my lab topology. I am going to establish communication between the three logical switches "Web-Tier", "App-Tier" & "DB-Tier" using the logical router "LDR-001". To deploy the logical router, log in to the Web Client -> Networking & Security -> NSX Edges -> click + to add an NSX logical router.
We need to specify the management interface and the logical interfaces (LIFs). The management interface is used for SSH access to the control VM; the LIFs are configured in the second table, "Configure Interfaces of this NSX Edge". Click Select under Management Interface Configuration to choose the port group for the control VM's management interface, and assign an IP address to it.

Click the + symbol under "Configure Interfaces of this NSX Edge". Create an interface called "Transit-Network" and select the type "Uplink". Click Connected To, select the logical switch "Transit-Network", and assign an IP address to this LIF (logical interface). I am going to use this transit interface to establish communication between the logical router and the physical network by connecting it to an NSX Edge device, which we will discuss in upcoming posts.

Next, enter the name "App-Tier" for a logical interface, select the type "Internal", click Connected To, select the logical switch "App-Tier", and enter the IP address "172.16.20.1" for this LIF. Create an interface called "Web-Tier", click Connected To, select the logical switch "Web-Tier", and enter an IP address for this interface. Finally, create a logical interface "DB-Tier", connect it to the logical switch "DB-Tier", assign an IP address to this LIF, and click OK.
I have connected the four logical switches "Transit-Network", "Web-Tier", "App-Tier" and "DB-Tier" as interfaces on this logical router. In simple terms, this logical router provides routing between the VMs connected to these logical switches. Review the configured settings for the Distributed Logical Router and click Finish.
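Once the DLR is deployed, each prepared host runs a kernel instance of it, which can be checked from the host shell. The `net-vdr` utility ships with the NSX VIBs; the exact flags and the instance name below are assumptions from a typical lab, so confirm them with `net-vdr --help` on your own host.

```shell
# List the DLR instances running in this host's kernel:
net-vdr --instance -l

# List the LIFs of one instance; "default+edge-1" is an example instance
# name, substitute the name reported by the previous command.
net-vdr --lif -l default+edge-1
```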
A cloud deployment or a virtual data center has a variety of applications across multiple tenants. These applications and tenants require isolation from each other for security, fault isolation, and avoidance of overlapping IP addressing issues. The NSX logical switch creates logical broadcast domains or segments to which an application or tenant virtual machine can be logically wired. A logical switch is realized as a distributed port group on the distributed switch, and it can span multiple distributed switches by being associated with a port group on each of them. The NSX Controller is the central control point for all logical switches within a network and maintains information about all virtual machines, hosts, logical switches, and VXLANs. A logical switch is mapped to a unique VXLAN, which encapsulates the virtual machine traffic and carries it over the physical IP network.
Below is my lab topology for logical switching. I am going to create a logical switch called "Web-Tier" and attach the two virtual machines "Web-Svr-1" & "Web-Svr-2" to it. This logical switch will allow communication between these two virtual machines in different clusters without an actual subnet configured at the physical network layer. Both VMs are configured with IP addresses in the "172.16.10.x" network, while the ESXi hosts are in the "192.168.10.x" subnet.
To create the logical switch, log in to the Web Client -> Networking & Security -> Logical Switches -> click the + symbol to add a new logical switch.
Provide the name and description for the new logical switch. Select the transport zone which we created in the previous step, and select the same replication mode that you configured for the "VXLAN-Global-Transport" transport zone. I have selected "Unicast" mode. Click OK to create the new logical switch.
As we discussed earlier, a logical switch is nothing but a distributed port group on your dvSwitches. When you create a logical switch, it creates a dvPortgroup on all the associated dvSwitches that are part of the clusters in the global transport zone. So, having created a logical switch called "Web-Tier", I can see the port group "vxw-dvs-53-virtualwire-2-sid-5000-Web-Tier" created on both of my distributed switches.
Once the logical switch is created, we need to associate the workloads (virtual machines) with it. Click the VM symbol to associate virtual machines with the logical switch "Web-Tier".
Select the virtual machines from the list to associate with this logical switch (Web-Tier). I have associated the above two VMs from different clusters with this logical switch. Click Next.
For multi-NIC VMs, you can even select the specific vNIC to connect to this logical switch (Web-Tier). Both of my VMs have only one vNIC. Select the vNICs and click Next.
Review the selected settings and click Finish.
Web-svr-1 – 172.16.10.11 (esxi-comp-01)
Web-svr-2 – 172.16.10.12 (esxi-comp-02)
My ping from the VM "Web-svr-1" (172.16.10.11) to the VM "Web-svr-2" (172.16.10.12) succeeds, and I receive ICMP replies to the ping requests. These two VMs are running on different hosts and clusters, but the ping between VMs on the same logical switch works well with the help of VXLAN.
When "Web-svr-1" communicates with "Web-svr-2", the traffic travels over the VXLAN transport network. When the source host looks up the MAC address of Web-svr-2, it already knows from the ARP/MAC/VTEP tables pushed to it by the NSX Controller which host that VM resides on. The frame is encapsulated within a VXLAN header and routed across the VXLAN transport network to the destination host. Upon reaching the destination host, the VXLAN header is stripped off and the preserved inner frame and IP packet continue on to the guest.
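One practical consequence of this encapsulation is the extra header overhead on the transport network. A back-of-the-envelope check, assuming IPv4 transport with no outer VLAN tag:

```shell
# VXLAN overhead per frame on an IPv4 transport network, no outer VLAN tag:
# outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN header (8)
overhead=$(( 14 + 20 + 8 + 8 ))
echo "$overhead"                 # 50 bytes

# A standard 1500-byte guest frame therefore needs at least this much MTU
# on the transport network, which is why VMware recommends MTU 1600 there:
echo $(( 1500 + overhead ))      # 1550
```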
That's it. We are done with logical switching, and I hope you are now clear on the concepts of the NSX logical switch. We will discuss distributed logical routing in upcoming posts. I hope this is informative for you. Thanks for reading!
In the previous post, we discussed configuring VXLAN on the ESXi hosts. In this post, we will discuss creating the segment ID pool and transport zones. You must specify a segment ID pool for each NSX Manager to isolate your network traffic.
The segment ID range carves up the large range of VXLAN IDs available for assignment to logical segments. If you have multiple NSX domains or regions, you can assign each a subset of the larger pool. Segment ID pools are subsequently used by logical segments as the VXLAN Network Identifier (VNI). Create the segment ID pool by logging in to the Web Client -> Networking & Security -> Installation -> Logical Network Preparation -> Segment ID -> click Edit.
The segment ID range determines the maximum number of logical switches that can be created in your infrastructure. A segment ID is like a VLAN ID for VXLAN, but with VXLAN you can have 16,777,216 of them, while VLANs are limited to the range 1 to 4094. Segment IDs form the basis for how you segment traffic within the virtualized network. Although the VNI space allows roughly 16.7 million values, VMware starts the count at 5000 to avoid any confusion between a VLAN ID (1 to 4094) and a VXLAN segment ID, so your VXLAN IDs start from 5000. Here I use the segment range 5000-10000. Click OK.
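The sizes of the ID spaces quoted above follow directly from the header fields: a VLAN ID is 12 bits, while a VXLAN VNI is 24 bits.

```shell
# 12-bit VLAN ID field vs 24-bit VXLAN VNI field:
echo $(( 1 << 12 ))            # 4096 (of which 1-4094 are usable VLANs)
echo $(( 1 << 24 ))            # 16777216 possible VNIs

# The 5000-10000 pool used in this post yields this many logical switches:
echo $(( 10000 - 5000 + 1 ))   # 5001
```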
A transport zone is created to delineate the width of the VXLAN/VTEP replication scope and control plane, and it can span one or more vSphere clusters. An NSX environment can contain one or more transport zones based on the requirements. In simple terms, the global transport zone is the boundary for a group of clusters: whatever logical switches you create and assign to the global transport zone become available as distributed port groups on the dvSwitch of every single cluster in the transport zone, and these dvPortgroups can then provide connectivity to the virtual machines attached to them. It's a way to define which clusters of hosts will be able to see and participate in the virtual network being defined and configured.
To create a transport zone, log in to the Web Client -> Networking & Security -> Installation -> Logical Network Preparation -> Transport Zones -> click +.
Provide the below information to create the new transport zone:
Name – Provide the name for your transport zone. I named mine "VXLAN-Global-Transport"
Description – Enter a description as you wish
Replication Mode – This option lets you choose the replication method that VXLAN will use to distribute information across the control plane. Here is the detailed explanation of each replication mode from VMware:
Clusters – Select the clusters you want to be part of this transport zone.
Click OK to create the transport zone. You will be able to see the created transport zone "VXLAN-Global-Transport" under Transport Zones. We haven't created any logical switches yet, so it displays the value "0" under the Logical Switches column.
We are done with creating the segment ID pool and transport zone. Next we will create logical switches and attach virtual machines to them to enable network communication. I hope this is informative for you. Thanks for reading!
Once cluster preparation is completed, it is time to configure VXLAN. Virtual Extensible LAN (VXLAN) enables you to create a logical network for your virtual machines across different networks; you can create a layer 2 network on top of your layer 3 networks. VXLAN transport networks deploy a VMkernel interface for VXLAN on each host. This is the interface that encapsulates network segment packets when they need to reach a guest on another host. Because the encapsulation happens on a VMkernel interface, the workload is totally unaware of this process: as far as the workload is concerned, the two guests are adjacent on the same segment when in fact they could be spanning many L3 boundaries.
To configure VXLAN, log in to the Web Client > Networking & Security > Installation > Host Preparation -> Configure. A wizard will ask for the VXLAN networking configuration details. This will create a new VMkernel port on each host in the cluster as the VXLAN Tunnel Endpoint (VTEP).
Provide the below options to configure the VTEP VMkernel port:
Enter the IP pool name, gateway, prefix length, primary DNS, DNS suffix, and static IP pool range for the new IP pool, and click OK to create it.
Click OK to create the new VXLAN VMkernel interface on the ESXi hosts.
Once VXLAN is configured, you will see the VXLAN status change to "Enabled" for that particular cluster.
As in the previous steps, configure VXLAN for the other clusters in your vCenter.
You can see that the VXLAN VMkernel interface has been created on the ESXi hosts in the compute clusters, with an IP address assigned from the IP pool we created earlier.
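The VTEP VMkernel interface can also be confirmed from the host shell. The VXLAN vmk lives on a dedicated "vxlan" netstack; the `--netstack` filter is assumed to be available on your ESXi build, so fall back to the unfiltered list if it is not.

```shell
# Show only the VMkernel interfaces on the vxlan netstack (the VTEPs):
esxcli network ip interface list --netstack=vxlan

# And the IPv4 addresses, which should come from the IP pool created earlier:
esxcli network ip interface ipv4 get
```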
You can verify the same from Networking & Security > Installation > Logical Network Preparation > VXLAN Transport.
We are done with configuring VXLAN on the ESXi hosts. We will configure the segment ID pool and transport zones in the upcoming posts. I hope this is informative for you. Thanks for reading!
In the previous post, we discussed preparing clusters and hosts for NSX. Once the installation completes, the installation status changes to a green check mark, along with the NSX version (6.1.0) running in the cluster and an Enabled status for the firewall. Let us verify the NSX installation from the ESXi host and see what changes host preparation made. Successful host preparation on the cluster will do the following:
The user world agent (UWA) is composed of the netcpad and vsfwd daemons on the ESXi host. The UWA uses SSL to communicate with the NSX Controller on the control plane, and it mediates between the NSX Controller and the hypervisor kernel modules, except for the distributed firewall. Communication related to NSX between the NSX Manager or NSX Controller instances and the ESXi host happens through the UWA, which retrieves information from NSX Manager through the message bus.
We can verify the status of the user world agents (UWA) from the CLI:
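On NSX 6.x hosts, the two UWA daemons are managed by init scripts, so a minimal check looks like the following. The service names are as shipped with NSX 6.x; verify them under /etc/init.d/ on your own host.

```shell
# netcpad handles control-plane communication with the NSX Controllers:
/etc/init.d/netcpad status

# vsfwd is the distributed-firewall agent talking to NSX Manager:
/etc/init.d/vShield-Stateful-Firewall status
```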
In esxtop, you can see the daemon called netcpa running:
The user world agents (UWA) maintain their logs at /var/log/netcpa.log.
Verify Installation Status of NSX VIBs:
Below are the 3 NSX VIBs that get installed on the ESXi host:
Let's verify that all of the above VIBs are installed using the below commands:
esxcli software vib get --vibname esx-vxlan
esxcli software vib get --vibname esx-dvfilter-switch-security
esxcli software vib get --vibname esx-vsip
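The three checks above can be collapsed into one loop; grep is used here only to trim the output down to the fields we usually care about.

```shell
# Sketch: confirm all three NSX VIBs in one pass instead of three commands.
for vib in esx-vxlan esx-dvfilter-switch-security esx-vsip; do
  echo "== ${vib} =="
  esxcli software vib get --vibname "$vib" | grep -E 'Name|Version|Install Date'
done
```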
That's it. We have verified the installation status of the NSX VIBs on the ESXi hosts. In the upcoming post, we will take a look at configuring VXLAN. I hope this is informative for you. Thanks for reading!
In the previous posts, we talked about NSX Controller deployment and validating the NSX control cluster status. In this post, we are going to walk through preparing our clusters and hosts for NSX. We have configured NSX Manager and deployed three NSX Controllers, so both the control and management planes are established. The next step is to prepare the ESXi hosts for NSX. This is a simple task of a few clicks that installs the required VIBs, covering VXLAN, distributed firewall, distributed routing, and the user world agent, onto every ESXi host. You must select the entire cluster in the installer so that it installs the NSX bits on all the hosts in the cluster. NSX installs three vSphere Installation Bundles (VIBs) that enable NSX functionality on the host.
One VIB enables the layer 2 VXLAN functionality, another enables the distributed router, and the final one enables the distributed firewall. After the VIBs are added to a distributed switch, that distributed switch is called a VMware NSX Virtual Switch.
Log in to vCenter Server using the vSphere Web Client and navigate to Networking & Security > Installation > Host Preparation. Choose your cluster and click the Install link.
Note: The ESXi hosts do not need to be placed in maintenance mode for this installation. All of my virtual machines kept running on the hosts during the installation process.
During the installation process, you can watch the NSX-related installation tasks in the Web Client or the vSphere Client.
Once the installation is completed, the installation status changes to a green check mark, along with the NSX version (6.1.0) running in the cluster and an Enabled status for the firewall. I have prepared only two of the three clusters during this demo.
Once cluster preparation is completed, you can see that the vxlan stack is loaded under custom stacks in the TCP/IP configuration of the ESXi hosts.
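The same can be confirmed from the host shell: on ESXi 5.5 and later, the configured TCP/IP stacks can be listed with esxcli, and a "vxlan" netstack should now appear alongside the default stack.

```shell
# After host preparation, a "vxlan" netstack should be listed here
# alongside defaultTcpipStack:
esxcli network ip netstack list
```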
We are done with cluster and host preparation for NSX. We will verify the NSX VIB installation from the ESXi host in upcoming posts. I hope this is informative for you. Thanks for reading!