While downloading some binaries from VMware recently, I noticed that NSX 6.2.5 was released a few days ago. After checking its release notes HERE, I immediately upgraded my lab and convinced my customer to upgrade from their existing version, because this release contains some very important bug fixes covering the following areas:
- Logical Network
- Edge Services Gateway
I believe these fixes make for an even more robust and stable virtual networking environment, which is why I recommend performing the upgrade as soon as possible. As a good practice, do check the VMware Interoperability Matrix first to ensure your integrated solutions are still supported.
The upgrade itself is as straightforward as any other NSX upgrade, so I would like to quickly share the steps I performed. At a high level, you have to upgrade, in order:
- NSX Manager
- NSX Controllers
- NSX VIBs on prepared ESXi hosts
- NSX Edges
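Before and after the upgrade, it is handy to confirm the running NSX Manager version without logging in to the UI. A minimal PowerShell sketch, assuming the NSX-v appliance-management REST endpoint and a hypothetical hostname; this is not part of the official upgrade procedure, just a convenience check:

```powershell
# Hypothetical NSX Manager address - replace with your own
$nsxManager = "nsxmgr.lab.local"
$cred = Get-Credential -Message "NSX Manager admin account"

# Query the appliance-management API for version info; -SkipCertificateCheck
# (PowerShell 6+) tolerates the lab's self-signed certificate
$info = Invoke-RestMethod -Uri "https://$nsxManager/api/1.0/appliance-management/global/info" `
                          -Credential $cred -SkipCertificateCheck

# The versionInfo block should report 6.2.5 once the upgrade completes
$info.versionInfo
```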
So, let’s get started. First, download the upgrade bundle from VMware.com. Make sure you download the “NSX for vSphere Upgrade Bundle”.
Log in to the NSX Manager admin page. I didn’t troubleshoot much here, but I could not get it working through Firefox or Chrome, so I used Internet Explorer every time.
Click the Upgrade button after logging in. You can see from the screen that the previous version in my environment is 6.2.4.
Then click the Upgrade button again to upload the downloaded upgrade bundle.
Browse to the upgrade bundle and click Continue to start the upload. The wizard will upload and verify the package for you.
After verification, click Continue to proceed, and the following message will be shown for continuing the upgrade. From the wizard message, you can see I’m upgrading the environment from 6.2.4 to 6.2.5. I recommend taking a snapshot before confirming the upgrade.
Let the upgrade run. You will need to close the browser and re-login to the NSX Manager portal later, since the upgrade process reboots the NSX Manager.
After re-login, you can see my NSX Manager is already on version 6.2.5. The NSX Manager has been successfully upgraded.
Afterwards, continue the upgrade in the vSphere Web Client to upgrade the other components, i.e. the NSX Controllers, NSX Edges and the NSX-related VIBs on the ESXi hosts.
After login, go to “Networking & Security”.
Go to the “Installation” menu and the “Management” tab; you will see “Upgrade Available” beside the NSX Manager instance.
Click on it and confirm the upgrade; it will proceed to upgrade your NSX Controllers. Yes, you can see I only have one controller, which is NOT a supported configuration, but it is good enough for my lab test. In your case you would upgrade all three of them. No downtime is incurred with three controllers, as they load-share among themselves.
Wait until the task completes and confirm the NSX Controllers are in green status.
Then you need to upgrade the VIBs on the ESXi hosts that serve the NSX features. Go to the “Host Preparation” tab under the “Installation” menu and again click “Upgrade Available” to start the upgrade.
More often than not, these steps will not complete by themselves: removing the old VIBs requires a host reboot, so you may have to manually put the ESXi hosts into maintenance mode and reboot them one by one to complete the VIB upgrade.
Still, this is not difficult to complete, as follows:
Finally, we have to upgrade the ESGs from the “NSX Edges” menu. The blue up arrow beside the “Deployed” status column indicates that an upgrade is available for a specific Edge.
This is easy: right-click it and select “Upgrade Version”. The important point I would like to make here is that upgrading an ESG incurs some service interruption. Since the Edge is a VM, it is replaced by a newly deployed VM, and during the takeover from the old VM to the new one, some network packets are expected to be dropped.
After the upgrade, you can see the blue arrow is gone; you have completed the whole NSX upgrade, and NSX is now on version 6.2.5!
Great, all done: you have upgraded your environment! I hope you enjoyed the guide and that it is helpful for you again!
After configuring the platform for Auto Deploy in Part 1 of this blog series, we now need to configure Auto Deploy in the lab environment. As mentioned before, we have to set up the following Auto Deploy items for provisioning our ESXi hosts:
- Preparing ESXi Images for Auto Deploy
- Setting up Host Profile to be used
- Setting up Auto Deploy Rule
- Provisioning ESXi Hosts
- Remediating Host Profile with Host Specific Input
In my lab, I would like to use Auto Deploy to scale out my VSAN cluster. I think this is legitimate, since VSAN nodes tend to be the most standardized hardware models in your environment, and when you buy into the concept of HCI (hyper-converged infrastructure), you want scaling out to be as easy as possible. I believe Auto Deploy is the right solution for this. Let’s see how I performed the above items in my lab, step by step:
Day 0 Configuration
Before provisioning new ESXi hosts in my environment, I built a two-node VSAN based on the ROBO deployment topology. The existing hosts run ESXi installed from disc onto local hard disks. I am targeting Auto Deploy for stateful deployment, so be very careful of the following statement!
Stateless auto deploy is not supported with vSAN
Following is my ROBO VSAN setup with everything healthy:
Scaling out the Environment
To scale out the environment, you could set up the hosts one by one; I have another blog post HERE which talks about that method of scaling a VSAN cluster. But as said, I would like to deploy my new VSAN nodes with Auto Deploy, which lets me scale out my VSAN environment more easily and consistently.
To achieve this, I have already set up the nested ESXi VMs (actually empty shells) to be the new target VSAN nodes, and I have configured the DHCP server to serve DHCP options 66 and 67 for distributing the TFTP server IP and the iPXE boot firmware file name. You can refer to Part 1 of this blog series for what has already been done.
After that, we can start configuring Auto Deploy through the vSphere Web Client.
STEP1: Preparing ESXi Images for Auto Deploy
To provision a machine, you need a proper image. The easy approach is to download the offline bundle (not the ISO) from VMware.com for use with Auto Deploy. The image can be the native one from VMware or a 3rd-party vendor-supplied one (HP, IBM, Dell, etc.) that has been embedded with additional drivers or software from the vendor.
Auto Deploy has actually allowed custom ESXi images since version 5.0: you can download the base image provided by VMware and inject whatever drivers and software (in VIB format) on top of it to create a tailored ESXi image. In vSphere 6.5, you can do this from the vSphere Web Client instead of through PowerCLI. Effectively, if you have physical machines from different vendors or of different models, you can prepare a number of images corresponding to each deployment you want. You can even have more than one image per machine model. We can define the scope of each image during Auto Deploy rule creation, which will be covered soon.
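If you prefer scripting, the pre-6.5 way with the PowerCLI ImageBuilder cmdlets still works. A rough sketch, assuming a downloaded offline bundle and a hypothetical vendor VIB depot (the file paths, profile name and package name are placeholders):

```powershell
# Load the VMware offline bundle (downloaded from VMware.com) as a software depot,
# plus a hypothetical vendor depot with extra drivers
Add-EsxSoftwareDepot -DepotUrl "C:\Depot\ESXi650-offline-bundle.zip"
Add-EsxSoftwareDepot -DepotUrl "C:\Depot\vendor-driver-depot.zip"

# Clone the standard profile and inject the extra package on top of it
$base = Get-EsxImageProfile -Name "ESXi-6.5.0-*-standard" | Select-Object -First 1
New-EsxImageProfile -CloneProfile $base -Name "ESXi-6.5.0-custom" -Vendor "MyLab"
Add-EsxSoftwarePackage -ImageProfile "ESXi-6.5.0-custom" -SoftwarePackage "vendor-driver"

# Export the tailored image as an offline bundle for use with Auto Deploy
Export-EsxImageProfile -ImageProfile "ESXi-6.5.0-custom" -ExportToBundle `
                       -FilePath "C:\Depot\ESXi-6.5.0-custom.zip"
```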
If you are instead testing Auto Deploy on nested ESXi, the default GA image from VMware works great. As a caution, though: if you are running a nested ESXi 6.5 VM with EFI on an ESXi 6.5 host, remember to uncheck the default “Secure Boot” setting on your nested ESXi VMs. As said, this took me quite a while to figure out before Auto Deploy finally worked in my environment.
STEP2: Setting up Host Profile to be used
This step is actually not mandatory. After preparing the ESXi image and the DHCP server, you can already have your ESXi host boot via PXE (network boot), stream the ESXi binary from the Auto Deploy server and finally obtain an IP from the DHCP server. You can then start using and further configuring the ESXi host afterwards.
The beauty of Auto Deploy is that it also lets you apply initial settings through a Host Profile during the ESXi boot-up phase. Of course, you still need to supply supplementary information, e.g. the vMotion IP, for remediation, but this helps a lot compared with a scripted installation.
From the Web Client, you can create a Host Profile by “Copy settings from host”.
What you have to do is simple. Say you are going to deploy 100 hosts and want all 100 configured the same way: you set up just one of them in the traditional way and configure it fully. When all the configuration is done, you extract the Host Profile from that host. Certainly, if you want multiple configurations for the same machine model, you have to manually set up a few hosts for Host Profile extraction. This comes in handy when you are building different clusters, say DMZ, internal and management clusters, and want different configurations among them to suit your environment.
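The extraction can also be scripted with PowerCLI. A small sketch, assuming a hypothetical reference host name:

```powershell
# Extract a Host Profile from a fully configured reference host
# (host name and profile name are placeholders - replace with your own)
$refHost = Get-VMHost -Name "esxi-vsan-01.lab.local"
New-VMHostProfile -Name "VSAN-Node-Profile" -ReferenceHost $refHost `
                  -Description "Extracted from the first manually configured VSAN node"
```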
The steps are rather simple, as follows:
STEP3: Setting up Auto Deploy Rule
After we have prepared the ESXi image and the configuration in a Host Profile, what is next? Defining the scope: mapping which target machines should use which ESXi image and which Host Profile. Until we have this mapping, Auto Deploy will not function. This mapping is called an Auto Deploy Rule, which lets us govern which image and which configuration apply to certain target hosts according to specific criteria.
Much simpler than you might think, Auto Deploy provides a wizard to ease managing the rule sets. The steps are trivial; you just need to:
- Define the criteria used to screen target hosts
- Choose the Auto Deploy Image to be provisioned
- Choose the Host Profile to be applied
Again, trust me, you will love the new Auto Deploy UI in the vSphere Web Client. In the old days, you had to perform all of the above through PowerCLI commands.
Here are the steps I performed:
Go to the Auto Deploy page, select “Deploy Hosts” and choose “New Deploy Rule” to create a new Auto Deploy rule. As the first step of the wizard, you have to select the scope. I used “All Hosts” in my test, but you can define the scope based on MAC address prefix, vendor, IP, etc.
Then choose the image to be provisioned onto your iPXE-booted ESXi hosts. As said, I chose the default ESXi image provided by VMware.
Then select the Host Profile we extracted in the previous step, which holds the host configuration of a VSAN node in our existing environment.
Lastly, choose where you would like your hosts to be added. I chose the VSAN ROBO cluster which I want to scale out.
Confirm the settings and press Finish to complete the rule setup.
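For reference, the same rule can still be created with PowerCLI, as you had to in pre-6.5 days. A sketch, where the rule, profile and cluster names are hypothetical stand-ins for my lab objects:

```powershell
# Gather the three items the rule maps together: image, host profile, target cluster
$image   = Get-EsxImageProfile -Name "ESXi-6.5.0-*-standard" | Select-Object -First 1
$profile = Get-VMHostProfile -Name "VSAN-Node-Profile"
$cluster = Get-Cluster -Name "VSAN-ROBO"

# -AllHosts matches any booting host (my test scope); a -Pattern such as
# 'vendor=VMware, Inc.' could restrict the rule to specific hardware instead
New-DeployRule -Name "Provision-VSAN-Nodes" -Item $image, $profile, $cluster -AllHosts
```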
STEP4: Provisioning ESXi Hosts
After creating the Auto Deploy rule, you still need one last step to enable Auto Deploy based provisioning: select the rule(s) you would like to “Activate”. After that, you can power on the machines you want to provision with ESXi. Of course, after provisioning, I recommend you “Deactivate” the Auto Deploy rules to avoid accidentally provisioning other hosts on the network.
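Activation maps to the rule being in the active rule set, which can also be toggled from PowerCLI. A sketch using a hypothetical rule name:

```powershell
# Activate: add the rule to the working and active rule sets
$rule = Get-DeployRule -Name "Provision-VSAN-Nodes"
Add-DeployRule -DeployRule $rule

# ...provision your hosts, then deactivate the rule afterwards to avoid
# accidentally provisioning other network-booting hosts
Remove-DeployRule -DeployRule $rule
```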
So, let’s see what happens during an Auto Deploy provisioning:
Booting with PXE boot file from the TFTP server
ESXi binary being streamed from the Auto Deploy Server
Host Profile Applying the basic configurations
ESXi Booted up with Management IP obtained from the DHCP server
STEP5: Remediating Host Profile with Host Specific Input
You will see the host added to the designated cluster after provisioning, with the basic configuration in place according to the selected Host Profile. To get the full configuration in place, you need to remediate the Host Profile by entering more details, like the vMotion IP and the VSAN IP. Remember that we are trying to scale out our VSAN with Auto Deploy, which is why the VSAN IP is needed; if you are just scaling out a standard ESXi cluster, you can certainly skip it.
Following are my steps for performing the remediation:
Go to the Host Profile and remediate the non-compliant configuration; you will need to enter the required fields. I do this by filtering on “Yes”.
Afterwards, I have to create a disk group on the new host, as I was using manual disk claiming in my VSAN cluster.
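With manual disk claiming, the disk group creation can also be done in PowerCLI instead of the UI. A sketch with hypothetical host and disk canonical names:

```powershell
# Hypothetical new node and disk canonical names - replace with your own
$vmhost = Get-VMHost -Name "esxi-vsan-03.lab.local"

# One cache-tier SSD plus one capacity-tier disk form the new disk group
New-VsanDiskGroup -VMHost $vmhost `
                  -SsdCanonicalName "mpx.vmhba1:C0:T1:L0" `
                  -DataDiskCanonicalName "mpx.vmhba1:C0:T2:L0"
```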
BOOM! You see, your environment has just been scaled out. Nice, right?
After the remediation, you can check the result from the vSAN monitoring and management pages. And yeah, you got it scaled out!
Auto Deploy indeed provides a very intuitive way to set up ESXi hosts easily and consistently. It is very useful for large environment deployments and also fits the web-scale infrastructure delivered by solutions like hyper-converged infrastructure (HCI). With vSphere 6.5’s new features and support, Auto Deploy finally becomes a user-friendly, production-ready solution. Although there are still limitations, like LACP and software iSCSI boot, for many scenarios it is a great solution for simplifying ESXi provisioning. I hope you enjoy the blog and that it is helpful for you!
As one of the new features of vSphere 6.5, Auto Deploy has been enhanced in both UI management and deployment support. On the management side, you no longer need to perform the tasks in PowerCLI; you can now do everything in the vSphere Web Client, and we will walk through this a bit. More importantly, vSphere 6.5 Auto Deploy finally supports EFI-based servers. Honestly, I have been waiting for this for a long time, as I can barely see any BIOS-based servers anymore in the last 4-5 years. Last but not least, we now have a supported way to back up and restore the Auto Deploy related configurations and images. All of this together makes Auto Deploy much more useful for production environments.
Just a very high-level introduction to Auto Deploy in case you are not familiar with it. It is not a new function in vSphere 6.5; it has been there since 5.0. It aims to provide an alternative way of deploying ESXi much more efficiently and in a managed fashion. In the old days, you might use a physical disc to perform the GSX, ESX or ESXi installation. Of course, you can turn the disc into an ISO or USB thumb drive to do the same thing, but the nature doesn’t change: you still have to install and set up the ESXi hosts one by one. To speed this up, you can use a scripted install or put the ESXi image on a PXE server, which makes setup easier still, but that requires a lot of manual scripting and planning. So VMware provides a better approach to image management and batch deployment in large environments through Auto Deploy. While Auto Deploy comes with vCenter, you have to enable it separately. It enables batch ESXi deployment with the following functions:
- Image management for preparing customized ESXi images
- Deployment rules for governing image scope, e.g. by vendor or model of the physical hardware
- iPXE boot and install mechanism with application of vSphere Host Profiles
While from vSphere 5.0 through 6.0 all of the above had to be handled with VMware PowerCLI, in vSphere 6.5 you can finally do everything in the GUI.
My Lab Test Setup
I have tested Auto Deploy version after version, as it sounds so cool to deploy ESXi in such an efficient way. However, as mentioned, I kept being rejected by customers because it did not support their EFI-based hardware, and the CLI-based management was a deal breaker. That’s why I felt so excited when vSphere 6.5 was announced. In this lab, I will test Auto Deploy 6.5 with EFI-based machines and show you how it can be done through the vSphere Web Client. As prerequisites, you will need the following:
- vCenter Server 6.5 – Windows-based or appliance, both are fine
- ESXi 6.5 – to carry the vCenter appliance and the nested ESXi 6.5 VMs to be tested with Auto Deploy
- DHCP server – for IP distribution, with options 66 and 67 supported
- TFTP server – for serving the iPXE boot binary from the Auto Deploy server
For point 2, you can of course use physical hardware for the Auto Deploy tests; however, I cannot afford that, and VMs work just as well. So let’s see how to further configure the items above once you have deployed them.
I am using the vCenter Server 6.5 appliance in my lab environment, because it is easy to deploy and the appliance is the edition receiving more development emphasis compared with the Windows-based one. Note that since vCenter 6.0, Auto Deploy is no longer installed separately: the binaries and services are installed with vCenter but kept disabled by default, which is why we have to enable the related services manually. You will see this message under the Auto Deploy menu by default.
Luckily, this can be done directly in the vSphere Web Client. The following are the services you need to enable:
- Auto Deploy service
- ImageBuilder service
You can perform this configuration under the Administration tab in the vSphere Web Client. From the vCenter extensions menu, you can see all the services supported by vCenter Server. I set the items above to start automatically and, of course, started them right away. The beauty of Auto Deploy 6.5 is its simplicity: with this, the necessary configuration in vCenter is already done. We will come back to the Auto Deploy configuration after setting up the other items.
The Auto Deploy service is the backend service for the whole mechanism.
The ImageBuilder service provides the UI in the Web Client that you use to manage the Auto Deploy setup.
The vCenter Server appliance has to be deployed on an ESXi host, which can be version 5.5, 6.0 or 6.5; any of them will do. For the ESXi servers to be provisioned by Auto Deploy, I picked version 6.5. As mentioned, I don’t have physical EFI servers for the test, so I created nested ESXi 6.5 hosts in my environment.
Note that nested ESXi is not officially supported, but it is good enough for a lab.
You can see that if we choose ESXi 6.5, the default firmware option is EFI, while it was BIOS for 6.0, 5.5, 5.1 and earlier. Since I want to test on EFI-based machines, make sure yours is set accordingly.
It should be trivial: from the Web Client you can create a nested ESXi 6.5 VM, and all the recommended configuration and settings for nested ESXi are already in place. However, I hit a problem that took me some time (4 hours) to fix. Let me illustrate the fix in the section where I test Auto Deploy.
All ESXi hosts provisioned by Auto Deploy obtain their IP through DHCP; no static IP is required (in contrast with the scripted ESXi install method). This is why you must have a DHCP server in your environment. Do ensure your DHCP server is capable of serving DHCP options 66 and 67, which are leveraged by vSphere Auto Deploy.
I have seen some blogs which are not accurate on this, so I would like to clarify:
- Option 66: the TFTP server IP, NOT the Auto Deploy server, i.e. NOT the vCenter Server IP
- Option 67: the iPXE boot file name; if you are testing UEFI, use “snponly64.efi.vmw-hardwired”
Most DHCP servers support these options; I mention this because the NSX Edge did not support DHCP options until version 6.2.3. In my environment, I am using my AD server directly as the DHCP server.
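On a Windows (AD) DHCP server, options 66 and 67 can be set with the DhcpServer PowerShell module instead of clicking through the MMC. A sketch, where the scope ID and TFTP server IP are hypothetical:

```powershell
# Hypothetical scope and TFTP server IP - replace with your own
$scope = "192.168.10.0"
$tftp  = "192.168.10.20"

# Option 66: the TFTP server (NOT the vCenter/Auto Deploy server)
Set-DhcpServerv4OptionValue -ScopeId $scope -OptionId 66 -Value $tftp

# Option 67: the iPXE boot file name; this value is for UEFI targets
Set-DhcpServerv4OptionValue -ScopeId $scope -OptionId 67 -Value "snponly64.efi.vmw-hardwired"
```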
The last item you need to set up is the TFTP server, which serves the iPXE boot binary for Auto Deploy. You download the boot binary from the Auto Deploy tab in the vSphere Web Client, and the TFTP server streams it to the servers that try to boot from the network.
You can use whatever TFTP server you like; I am using the SolarWinds TFTP server, which is free and easy to use. Following are the high-level steps I performed in my environment:
Download it from SolarWinds:
Set it up with the default configuration.
Download the boot files from the Web Client via the link “Download TFTP Boot Zip”.
Unzip it and copy the contents under C:\TFTP-Root\. In the Web Client you can see the BIOS DHCP file name is undionly.kpxe.vmw-hardwired. You can ignore that: with DHCP options 66 and 67, the iPXE boot collects the file named in option 67 from the TFTP server address given in option 66.
Great, your infrastructure is now prepared for Auto Deploy! Let’s move on to the second part of this blog series to configure Auto Deploy!