vCloud Director 8.10.1, best way to learn cloud AND… – Part 3

So, for one last thing in this series of blogs, I would like to demo something that is easy to set up but can be very useful in a POC: enabling autoscaling with vCloud Director. VMware actually provides an official recommendation HERE on autoscaling through vRealize Orchestrator with vCloud Director.

You can implement the following with a few hundred lines of JavaScript for vRO workflow development…

But for a POC, the above is just too complicated. Agree?

So let’s do something easier!

First of all, I would like to show you the result of the autoscaling mechanism we can achieve.

As you can see from the clip above, this is the sequence of events we achieve by following this POC guide for enabling auto-scaling in vCloud Director:

  1. Stress a VM running on vCloud Director
  2. An alarm monitoring memory usage is triggered
  3. The vCloud Director VM is cloned, configured with networking and powered on

So if this is something you would like to try out and play around with, let’s get started!

Start the POC Setup

In my setup, of course, I have a vCloud Director. But I don’t want to set up a vRO (vCO) just for this POC and make things complicated. Actually, if you read the vCloud Architecture Framework mentioned above, vRO is used to collect the SNMP trap and trigger workflows which clone and configure the VM. This is why I replaced these features with the following:

  1. vCenter alarm definition (performance monitor and trigger)
  2. Trap Receiver (a Windows-based SNMP collector)
  3. PowerCLI script (executed by Trap Receiver)

Let’s do this quickly!

vCenter Alarm Definition

So, first, you need to define an alarm to monitor the criteria you want to oversee. In my POC, I set up an alarm definition at the vCenter level to monitor memory usage.

Memory usage above 70% for 5 minutes raises a warning, while above 95% for 10 minutes raises an alert.

More importantly, we need to ensure a trap is sent out when the warning is triggered.

So where should the SNMP trap be sent? We will set up an SNMP collector in the next step.

Trap Receiver 

As mentioned above, we need an SNMP collector to receive the trigger and execute the scale-out actions. I use a very simple but useful tool which can be installed on any Windows machine. You can follow the steps below:

Download it from the

Unzip and Execute the Setup binary

Accept the installation path

Complete the setup

Start the Trap Receiver Program and Click the “Configure” button

Add a new “Action”: choose “Community” under “Watch” and define the community string under “Equals”. Then check the “Execute” checkbox so that it executes a bat file.

Then select the batch file. I will elaborate a bit more on the scripts later; for now, you can define an empty bat file like I did.

Of course, you need to ensure your vCenter is sending SNMP traps to the Windows machine running Trap Receiver.

By stressing the VM, you can check whether your alarm is triggered and the SNMP trap is received. I use “stress” to load my CentOS machine; please follow this if you don’t know what it is.
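For reference, a sketch of a typical stress invocation is below. The guest size, worker count and duration are assumptions you should tune to push past the 70% warning threshold; the block only prints the command so you can review it before running it on the guest.

```shell
# Assume a 2048 MB guest (hypothetical size); two workers at a quarter
# of RAM each should lift usage past the 70% warning threshold once
# combined with the OS baseline. Tune for your VM.
MEM_MB=2048
WORKER_MB=$((MEM_MB / 4))
# Print the stress command for review instead of executing it here
echo "stress --vm 2 --vm-bytes ${WORKER_MB}M --timeout 600"
```

Run the printed command on the guest once you are happy with the numbers.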

From the SNMP message, you should be able to read the details.

By the way, you can see I defined no filter on the SNMP message, so every SNMP trap will trigger the action. This is why I disabled all other alarm definitions. You can do this easily with PowerCLI.

PowerCLI Script

So, as mentioned above, Trap Receiver collects the SNMP trap and kicks off a bat script. The auto-scaling behaviour actually comes from the script logic, and we need PowerCLI to talk to vCloud Director and tell it what to do. The following are my scripts:

As Trap Receiver can only invoke a batch file, I wrote a scaleout.bat that triggers the PowerShell script scaleout.ps1.

The scaleout.bat is as follows:

powershell.exe -ExecutionPolicy Unrestricted -NonInteractive -NoProfile -File C:\autoscale\scaleout.ps1

And the scaleout.ps1 is as follows:

# Load PowerCLI if it is not already in the session
if ( !(Get-Module -Name VMware.VimAutomation.Core -ErrorAction SilentlyContinue) ) {
    . "C:\Program Files (x86)\VMware\Infrastructure\PowerCLI\Scripts\Initialize-PowerCLIEnvironment.ps1"
}

Connect-CIServer -Server https://vcd.vmware.lab -User administrator -Password P@ssw0rd

# Work out the next member name from the current vApp VM count
$centos_template = Get-CIVMTemplate -Name centos65
$centos_vapp = Get-CIVApp -Name cloud-vm
$centos_vm_count = ($centos_vapp | Get-CIVM).Count + 1
$new_civm_name = "cloud-vm-0$centos_vm_count"
$new_civm_computername = "cloudvm0$centos_vm_count"
$new_civm_network = (Get-CIVM cloud-vm-01 | Get-CINetworkAdapter).VAppNetwork

# Clone from the template, attach the NIC to the vApp network and power on
$new_civm = $centos_vapp | New-CIVM -Name $new_civm_name -ComputerName $new_civm_computername -VMTemplate $centos_template
$new_civm | Get-CINetworkAdapter | Set-CINetworkAdapter -Connected $true -IPAddressAllocationMode Pool -VAppNetwork $new_civm_network
$new_civm | Start-CIVM

Disconnect-CIServer * -Confirm:$false

As you can see from the PowerShell script logic, there are many hard-coded values, so do check them against your environment to make this work. The assumptions include:

  1. A vCloud Director VM template named “centos65”
  2. A target vApp named “cloud-vm” that we are going to scale out
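One more caveat on the hard-coded naming: the script concatenates a literal "0" with the member count, so the clone after cloud-vm-09 would come out as cloud-vm-010. A zero-padded format string is a small sketch of a safer alternative (my suggestion, not part of the original script):

```shell
# The scale-out script builds names as "cloud-vm-0$count"; padding with
# printf instead keeps names consistent past the ninth clone.
count=9
printf 'cloud-vm-%02d\n' "$((count + 1))"   # prints cloud-vm-10
```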

After setting this up, you are all done.


Again, this setup is not at all production-ready, but I think it provides a very easy and simple demonstration of auto-scaling on vCloud Director. For true autoscaling you should also configure the load balancer to take up the new node, but I hope this is enough for you to start trying it in your environment too. As a result, you can keep scaling out the nodes as in the following clip!


vCloud Director 8.10.1, best way to learn cloud AND… – Part 2

After installing vCloud Director 8.10.1, the magic and power of vCD only appear once you have configured it to run your workloads. So in this blog, we will cover how to initialise and configure vCloud Director to create a cloud. To recap, we have already deployed the vCloud Director environment as in the following diagram, but we have not yet linked vCD with NSX and vSphere, and the NetScaler has not been set up to load balance external access either. These are what we are going to do in this blog.

Load Balancing the vCloud Director

So, every vCloud Director node has two service IPs: one for the HTTP service (web portal/API) and one for the console proxy service (VM console through the VCD UI). Thus, we have to create virtual servers for these two services correspondingly. Again, I’m using NetScaler not because we need any complicated load-balancing feature from it (we just need to load balance HTTP 80 and TCP 443), but because it’s free to download and great for a lab, needing just 2 vCPU and 2 GB of memory.

So for load balancing the vCloud Director cells, we need to set up the following (for a 2-cell configuration):

  1. 4 “Servers” objects for 2 Service IPs (HTTP, Consoleproxy) Per Cell
  2. 6 “Monitors” objects for 3 Ports (80,443 – HTTP, 443 – Consoleproxy) Per Cell
  3. 3 “Virtual Servers” objects for 3 Ports (80,443 – HTTP, 443 – Consoleproxy)
  4. 1 “Persistent Group” object for All
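If you prefer the NetScaler CLI over the GUI, the objects above can be sketched roughly as below for one cell’s HTTP service; all object names and IPs are hypothetical, and the monitor and persistence bindings follow the same pattern.

```
add server vcd01-http 192.168.1.11
add service svc-vcd01-http vcd01-http HTTP 80
add lb vserver vip-vcd-http HTTP 192.168.1.100 80
bind lb vserver vip-vcd-http svc-vcd01-http
```

Repeat per cell and per port (80 and 443 for HTTP, 443 for the console proxy).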

On completion, you should see a status similar to the above, and we can access vCloud Director through the load-balanced hostname.

vCloud Director Initial Configuration

To get started, you need a Flash-enabled browser. You may need to re-enable the Flash plugin in recent Chrome or Firefox versions, as it gets blocked by default. When the browser is in place, you can go to the URL http://<VCD-VIP-FQDN>, and you will be directed to an initialisation wizard.

On pressing Next to proceed from the previous step, you need to accept the EULA.

Then you can input the license and press Next.

You have to set up the very first super admin here.

Give the vCloud Director instance a system name.

Confirm the initialisation configuration items and press Finish.

On completion, you can see the vCloud Director web portal!

Great! You are ready for the detailed configuration!

vCloud Director Detailed Configuration

After the initial setup, you have to log in to vCloud Director with the super admin just created in the initialisation wizard. On login, you can follow the Quick Start steps to perform the detailed configuration for integrating your vCenter and NSX with vCloud Director.

Attach a vCenter

Click Step 1, “Attach a vCenter”, and you will be prompted with the UI for adding vCenter and NSX. On the first page, add the vCenter Server information. You do not actually need to input the “vSphere Web Client URL”.

On the second page, you have to input the NSX Manager information with user credentials.

Confirm the integration after reviewing the input.

One important step: you need to configure the public addresses from the Administration tab to enable external access to the VCD portal and console proxy. Remember, the public URL has to be the VIP if you are using a load balancer like me. As for the certs, you can just download them from the browser when opening the VCD portal directly and paste them in.

Create a Provider VDC

You then need to add a resource pool from the vCenter as a Provider VDC; since a cluster is the largest resource pool, you can also add a whole cluster as a Provider VDC. Yet, unlike native vSphere, you have to create storage policies on the vSphere side for vCloud Director-based consumption. You need to do this in your vSphere Web Client ONLY.

If you are not using vSAN or VVol, you need to create a “tag”-based storage policy as follows:

This is actually important, as it lets you govern which storage is consumable from vCloud Director.

After creating this, you can add the PVDC via Step 2 in the Quick Start, “Add a Provider VDC”. You have to define the highest supported hardware version for your Provider VDC; I’d suggest using the highest.

Then you have to select the resource pool for the PVDC and the storage policies usable in it. If you have enabled VXLAN at the NSX level, this step will create a new transport zone in NSX. If you would like to use unicast for VXLAN, you need to go to the NSX page and reconfigure the transport zone to “unicast”, as vCloud Director creates a multicast transport zone by default.

You can update the transport zone here.

Create an External Network

Then we have to select external networks from the vSphere environment. These are the networks at the perimeter or DMZ, or even with public IPs, enabling external network access. The NSX transport zone instead provides the pool for provisioning internal networks on demand by vCloud Director users. Through an NSX Edge Gateway, we can connect the external and internal networks.

Creating an external network is easy: you just need to select a port group as an external network in vCloud Director.

Then you need to define the network information and the range you grant for vCloud Director usage. This is important, as vCloud Director will further grant these IPs to different tenants on demand.

Give the external network a name which your tenants will be able to see and use.

Press Finish to confirm the settings of the new external network.

You should see all green ticks on the left-hand side under the Quick Start. Your environment is now ready! You can then create your tenants and their virtual datacenters for consumption.

Create a new organization

As the very first tenant, I would recommend creating an “admin” tenant which is used to prepare VM templates for sharing with other tenants. A nice thing about vCloud Director is that each organization gets its own URL… and…

…its own isolated LDAP configuration. This ensures multi-tenancy at the URL and login levels. Of course, even if you are not using LDAP, you can still use the isolated local user service provided natively by vCloud Director.

Allocate resource to an organization

After separating URLs and users among tenants, you definitely have to allocate resources to the different tenants and ensure they cannot see each other’s resources. In vCloud Director, this is done using an Organization Virtual Datacenter, which is actually a resource pool in the vSphere environment.

So at first, we need to select a Provider Virtual Datacenter to source the resources from.

You have to configure an allocation model, which is comparatively trivial. But when you come to the Storage tab, you have to define how much storage to grant and select a default storage policy. You will see “Fast Provisioning”: this means linked-clone-based cloning, which gives you super-fast VM creation, but you will not be able to expand the base disk of these fast-cloned VMs. Thin provisioning, by comparison, is trivial, right?

Then you can set the number of networks that can be created in the Organization Virtual Datacenter. You can see I’m using VXLAN as my network pool.

(Optional) You can create an edge gateway (a virtual router) along the way; this helps set up the basic networking for an organization.

You need to select which external network your edge gateway is going to connect to; again, this is the internet- or DMZ-facing network.

Then you can grant the public/DMZ IPs for this edge gateway.

A few steps later, you can create the internal-facing networks, named organization networks. You also need to input the IP information of the internal network.

On completing the wizard, you have your Organization Virtual Datacenter configured.

Lastly, you can configure the edge services gateway for NAT, DHCP and firewall functions to enable network connectivity.

Great! You are all done, and you can start provisioning VMs on the cloud.


After the setup and installation of the vCloud Director environment, I hope this blog post gives you a quick start for configuring VCD to integrate with vCenter and NSX. More importantly, it uplifts your vSphere environment to a public-cloud-ready one! Again, vCloud Director may not be the perfect tool for everyone; OpenStack, say, could be more modular and highly configurable. But I think from a very basic deployment of vCloud Director, you can learn the 101 of setting up a public cloud, and the technique is similar when you deploy any other cloud tooling!

vCloud Director 8.10.1, best way to learn cloud AND… – Part 1


vCloud Director has been chosen by many service providers around the world for building public clouds, and it is also the engine behind VMware vCloud Air, a public cloud service from VMware. Personally, I think vCloud Director is an intuitive tool providing an easy way to set up, operate and consume a cloud. Even though there are critics saying vCloud Director is too complicated for end users, I think for technical users it is always a fine tool to learn what’s under the hood of a cloud and how a cloud should be composed. Yes, AWS, Azure or GCE may be the bigger players in public cloud nowadays, but you won’t know what is actually running underneath and how, and even if you did, it would be nearly impossible to build one yourself according to their design.

Instead, vCloud Director is just as simple as the logical architecture above. It is a thin but powerful tool that you can easily set up on top of a vSphere environment and integrate with VMware’s network virtualization engine, NSX. You can then learn how VMware designs a cloud, and how similar skills and design principles are adopted by other cloud management portals and tools. I also agree that vCloud Director does not cover the business logic that a cloud business usually needs, which is why there are third-party solutions like AirVM and OnApp to make the solution complete.

vCloud Director GUI



So you can see that the native vCloud Director is more technically focused, and this is aligned with VMware’s direction in developing vCloud Director: VMware offers the cloud orchestration engine while leaving the front-end and top-up solution development to third-party providers.

Preparation Works

So let’s get started deploying vCloud Director. As said, you need vSphere and NSX in your environment as prerequisites. vSAN is optional, but it provides very flexible software-defined storage that VCD can further leverage. There is a VMware blog post discussing this briefly.

OS and IP Preparation

While most VMware solutions are packaged in virtual appliance format, vCloud Director is still an application running on top of a traditional OS. You can choose either CentOS or Red Hat Linux for vCloud Director 8.10.1; the supported OS versions and editions are as follows:

  • CentOS 6 (I’m using CentOS 6.5 in this setup)
  • CentOS 7
  • Red Hat Enterprise Linux 5, update 4-10
  • Red Hat Enterprise Linux 6, updates 1-7
  • Red Hat Enterprise Linux 7

We don’t need much customisation in the installation; the “Basic Server” option is good enough. The only requirement is to have two service IPs. Yes, I know you may have read the What’s New white paper and the vCloud Director 8.10 installation guide, which tell you that you can set up 8.10 with one single IP.

From What’s New white paper:

You have to use an unattended installation for all the VCD nodes with the following command from the installation guide.

From vCloud Director 8.10 installation guide:

As you can see from the single-IP unattended installation command above, rather than using port 443 on two service IPs as in a normal deployment, you have to provide two ports over a shared IP. But in this lab, I am not using this single-IP setup; I am using the traditional dual-IP configuration.

Database Preparation

Meanwhile, vCloud Director depends on an external database for storing its configuration. This can be Oracle or Microsoft SQL Server; I am using MS SQL for this setup. The database schema and tables will be created during the installation, so we just need to create an empty database.

Create a DB; I name it vCloud (but it can actually be anything).

A user mapping like this is good enough.

Certificates Preparation (Skip)

While some installation guides ask us to prepare the certificates before setup, I would like to skip this until after installing the vCloud Director binary.

vCloud Director (Multi Cells) Setup

Having prepared the above, we can start setting up vCloud Director. As a single cell is unlikely to be a production deployment topology, I’m setting up a 2-node vCloud Director environment as in the diagram below:

Again, I need to mention why I use vCenter 6.0 in this setup: there is no supported NSX version for vCenter 6.5 yet (as of today). You can always refer to the VMware Interoperability Matrix HERE.

So, let’s get started! I assume you have set up the two VCD cells already (with network info) and can SSH into the VCD nodes to proceed with the setup.

  • Install libXdmcp on both of the VCD nodes by:

    yum install libXdmcp

  • Given that you have uploaded the vCloud Director binary, you can then run it. Remember, do NOT run the configuration script at the prompt “Would you like to run the script now? [y/n]?”

  • After the installation, we can generate the certificates needed for vCloud Director. The point of doing this now is that a keytool comes with the setup, located at /opt/vmware/vcloud-director/jre/bin; do NOT use the Linux native keytool if you are following my guide. We need to generate two certificates in one keystore, which will be used by vCloud Director. After changing directory to the above-mentioned path:

HTTP Certificate Generation

./keytool -keystore /install/certificates.ks -storetype JCEKS -storepass P@ssw0rd -genkey -keyalg RSA -keysize 2048 -alias http

Console Proxy Certificate Generation

./keytool -keystore /install/certificates.ks -storetype JCEKS -storepass P@ssw0rd -genkey -keyalg RSA -keysize 2048 -alias consoleproxy
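As a sanity check before moving on, the same bundled keytool can list the keystore contents; this assumes the keystore path and password used above, and you should see both the http and consoleproxy aliases.

```
./keytool -keystore /install/certificates.ks -storetype JCEKS -storepass P@ssw0rd -list
```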

  • Generate the certificate requests for HTTP and console proxy; you need to use the same keytool as in the previous step:

Use the following command for the HTTP certificate:

./keytool -keystore /install/certificates.ks -storetype JCEKS -storepass P@ssw0rd -certreq -alias http -file /install/http.csr -keysize 2048

Use the following command for the console proxy certificate:

./keytool -keystore /install/certificates.ks -storetype JCEKS -storepass P@ssw0rd -certreq -alias consoleproxy -file /install/consoleproxy.csr -keysize 2048

  • Sign the certificate requests with the AD CA (you can skip this if you are using self-signed certificates).

Copy the content of http.csr and consoleproxy.csr into the certificate request page. The “Web Server” template is good enough.

You have to get the http, consoleproxy and root certificates from the certificate request page. While the certificates can be downloaded through the “Download certificate” link, you need to extract the root certificate from the certificate chain through the “Download certificate chain” link.

Then, upload all the certificates back onto vCloud Director Cell 1.

Import the root certificate into Node 1 first with the command:

./keytool -alias root -storetype JCEKS -storepass P@ssw0rd -keystore /install/certificates.ks -importcert -file root.cer

Then you can import the http and console proxy certificates.

HTTP Certificates

./keytool -storetype JCEKS -storepass P@ssw0rd -keystore /install/certificates.ks -importcert -alias http -file http.cer

Console Proxy Certificates

./keytool -storetype JCEKS -storepass P@ssw0rd -keystore /install/certificates.ks -importcert -alias consoleproxy -file consoleproxy.cer

  • Set up the NFS export and mount it to /opt/vmware/vcloud-director/data/transfer

Set up the NFS export on the NFS server; I reuse my NTP server as the NFS repository.

On VCD node 1, mount the NFS export by editing /etc/fstab.
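As a sketch, the /etc/fstab entry could look like the following; the NFS server name and export path are hypothetical, so substitute your own:

```
# NFS transfer share for the vCloud Director cells (hypothetical names)
nfs01.vmware.lab:/export/vcdtransfer  /opt/vmware/vcloud-director/data/transfer  nfs  rw  0 0
```

After saving the entry, running mount -a picks it up without a reboot.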

Check that the mount point shows the export is mounted successfully.

  • Run the configuration script to set up vCloud Director Node 01. It is under the path /opt/vmware/vcloud-director/bin/configure. You will need to provide:
    1. The IP for HTTP
    2. The IP for the console proxy
    3. The certificates.ks path
    4. The certificates.ks keystore password
    5. The database type
    6. The database IP, name, instance and credentials

Choose “Y” to start vCloud Director after the configuration. You can then verify the service status with:

service vmware-vcd status

For detailed startup status, you can tail the log under /opt/vmware/vcloud-director/logs/cells.log

  • Validating the VCD Node 1

We can then open http://<vcd01-ip-or-fqdn>/ to verify the successful deployment. This shows the initialisation setup wizard, but you can skip configuring it, as we have to set up the second VCD node.

  • Setup Second VCD Node

You have to repeat some of the steps from setting up the first VCD node, namely:

  1. Install the libXdmcp package
  2. Install vCloud Director Binary

But you won’t have to generate the certs again, as the same certificates.ks is shared among the vCloud Director cells. But again, do NOT run the configuration script after installing the vCloud Director binary.

You then need to copy the certificates.ks file to /opt/vmware/vcloud-director/data/transfer (which is the NFS share, so that VCD 02 can see it).

After that, you can run the configuration script on VCD Node 2:

/opt/vmware/vcloud-director/bin/configure -r /opt/vmware/vcloud-director/data/transfer/

Then you can start the service when prompted, and vCloud Director will be successfully started in a multi-cell configuration.



Great, you have completed the vCD setup. We can then proceed to initialise vCloud Director, integrate it with vSphere and NSX, and provide a public cloud service. I hope this blog is helpful for your setup, and do stay tuned for Part 2 on the initial vCD configuration.

Cost Reduction for Home Lab… Instant clone ESXi for vSAN testing :)

Since the release of VMware vSAN (5.5), if you want to test it in your lab, you need 3 hosts. Of course, since version 6.1, the ROBO vSAN deployment lets you run vSAN on two nodes. Some of my friends really do buy physical hosts for their labs. Yes, you can use VMs as your vSAN hosts, but usually the constraint is the amount of memory you have in your lab. My friends thus upgraded from Intel NUCs, which supported a maximum of 16 GB, to SuperMicro servers, which support 128 GB. Well, I would like to have that too, but I have a limited budget and limited room for such a lab at home. This is why I had to find my own way to test vSAN, and what I want to test is not ROBO or 3-node vSAN; I want to deploy a production-ready vSAN, which I think should be composed of 4 nodes or more. So, my solution is to leverage instant clone.

Instant Clone

Instant Clone technology, code-named Project Fargo, is a cloning technology introduced with vSphere 6.0. It is a hidden feature which has not been widely used, as you cannot drive it from the GUI (vSphere C# Client or Web Client). But the beauty of this cloning technology is that it saves a lot of memory by using a copy-on-write (COW)-like mechanism on memory. So theoretically, when you clone 100 child VMs from a 4 GB parent VM through instant clone, if there are no memory block changes in the child VMs, you still need only the same 4 GB of memory (compared with 404 GB).
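The 404 GB figure comes from simple arithmetic: full clones each carry their own copy of the parent's 4 GB, while instant clones share it until pages diverge. A quick check:

```shell
# Full clones: the 4 GB parent plus 100 children with 4 GB each
echo "$(( (1 + 100) * 4 )) GB"   # prints 404 GB
# Instant clones with unchanged memory share the single 4 GB copy
echo "4 GB"
```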

And this is why I would like to try cloning nested ESXi hosts in my lab to test vSAN! Before that, I need to clarify that officially this feature is supported and used only in selected solutions, such as Horizon View 7, Big Data Extensions and vSphere Integrated Containers. This is because, as we are using COW on memory, it may not be suitable for traditional workloads which need OS-level restarts now and then.

I have a vCenter 6.5 in my lab and a physical host equipped with 32 GB of memory, which I would like to leverage for building my vSAN cluster. I will run 7 nested ESXi hosts on the host in total (counting the parent VM in blue); vSAN 6.2 will be enabled on the 6 child ESXi hosts.

The following image illustrates the simple architecture I would like to build out:

Getting Started

As mentioned, since we have no GUI for performing instant clone, we have to leverage a VMware Fling for it. You can find more information in the VMware Blog; the tool, which can be downloaded from HERE, is actually an additional module for PowerCLI. As prerequisites for forking out an instant clone VM, you have to prepare the following in your environment:

  1. vSphere 6.0 or later
  2. PowerCLI 6.0 or later

By the way, I’m actually not the first guy trying to instant clone ESXi; I was following William Lam’s blog to do it. You can refer to his blog HERE, but perhaps due to version changes I had to edit some scripts to make it actually work in my environment. We will follow these steps to deploy it:

  1. Install Instant-Clone PowerCLI Module
  2. Deploy and Edit the Parent Nested ESXi VM
  3. Prepare the Parent Nested ESXi VM for Instant Clone
  4. Instant Clone 6 Child Nodes
  5. Connect Cloned ESXi to vCenter
  6. Configure the ESXi for VSAN Deployment
  7. Create VSAN Cluster

Install Instant-Clone PowerCLI Module

So let’s start preparing your environment for the testing now. As instant clone is built into vCenter and vSphere, we don’t need to amend anything on the server side to enable it. Instead, we need to prepare the PowerCLI environment and install the Fling instant clone module. It is actually named “PowerCLI Extensions”; do download it HERE.

You can follow the instructions from the Fling, but I would rather use a simpler method. You can just:

  1. Download the zip file from the URL
  2. Unzip the Package
  3. Drag and drop the module “VMware.VimAutomation.Extensions” into the following directory:

C:\Program Files (x86)\VMware\Infrastructure\PowerCLI\Modules\

Your machine is now equipped with the instant clone cmdlets! Simple enough?

Deploy and Edit the Parent Nested ESXi VM

You can deploy your own new nested ESXi, which is just a new VM with ESXi installed on top of an ESXi host. But an easier approach is to download one from William Lam’s post HERE; he has prepared nested ESXi 5.5, 6.0 and 6.5 appliances. I will use the 6.0 Update 2 version for building my vSAN 6.2. I will not repeat the steps for deploying an OVA file, as they are quite trivial. When running the instant clone script later, we need to refer to the name of this parent VM, so do record it, even though you can always change it.

Do tick the checkbox for enabling SSH; we need it.

But the point is, before you power on the nested ESXi, you had better configure more memory, say 8 GB, for the host; the default 6 GB would fail the vSAN enablement. In the OVA prepared by William, the nested ESXi is equipped with 3 disks: one for installing ESXi and the other two for vSAN. So we don’t need to add hard disks to the nested ESXi we clone out later for creating the vSAN cluster.

With this step done, note that we don’t need to add the nested ESXi parent VM into vCenter; this is because we are going to remove its networking information for instant cloning.

Prepare the Parent Nested ESXi VM for Instant Clone

After the parent VM has been powered on, do not change configurations such as the ESXi Shell and SSH settings. The network configuration should already be in place if the OVA deployment completed successfully.

Then we need to prepare the parent VM for instant cloning. William has prepared some sample scripts for instant cloning the ESXi VM; you can download them from GitHub. In this step, we just need the “” script.

As mentioned, this is used for preparing the parent VM for instant cloning. The script removes host-specific configuration such as networking and UUID information. To be specific, it disables the hostd daemon, unloads the network and disk drivers and removes the VMkernel adapter. This is why I mentioned that you don’t have to add the parent VM into vCenter: it would show as disconnected after running this script.

As said, I amended some scripts to make instant cloning work in my environment, namely:

  1. vmfork-esxi60.ps1

So you can just upload and run this “” on the parent VM without editing. Be careful that this only works for the ESXi 6.0 OVA; for ESXi 6.5 the module and method for preparing the host are not quite the same.

You will see a similar screen after running the script. As vmk0 is removed, you should also realise that you can no longer ping this parent VM. Your parent VM is then prepared for instant clone.

Instant Clone 6 Child Nodes

Then, we can instant clone the 6 child nodes from this prepared parent VM. This time you need vmfork-esxi60.ps1. As said, originally I wanted to use the script directly after editing the file paths and vCenter connection information, but I could not get it to pass. It looks like the cmdlets were changed by a version update (I mean the Fling PowerCLI module), so the script has to be modified a bit. Please refer to my edited script:

The green columns have to be changed for your environment. The red-circled ones are the lines you need to change if you are using the latest PowerCLI and the latest VMware Fling module.

Changing it from the original (William’s version) one-line script:

$quiesceParentVM = Enable-InstantCloneVM -VM $parentvm -GuestUser $parentvm_username -GuestPassword $parentvm_password -PreQuiesceScript $precust_script -PostCloneScript $postcust_script -Confirm:$false

To two lines:

Enable-InstantCloneVM -VM "$parentvm" -GuestUser "$parentvm_username" -GuestPassword "$parentvm_password" -PreQuiesceScript "$precust_script" -PostCloneScript "$postcust_script" -Confirm:$false

$quiesceParentVM = Get-InstantCloneVM

From the script above you can see the -PreQuiesceScript and -PostCloneScript attributes; these two define what tasks are performed before and after the instant clone. We keep -PostCloneScript running the “” written by William, though again I needed to change some lines in it to make instant cloning work better.

I changed the following part of William’s script:

# sets up VMK0
localcli ${RESOURCE_GRP} network vswitch standard portgroup add -p "Management Network" -v "vSwitch0"
localcli ${RESOURCE_GRP} network ip interface add -i vmk0 -p "Management Network" -M ${mac}
localcli ${RESOURCE_GRP} network ip interface ipv4 set -i vmk0 -I ${ipaddress} -N ${netmask} -t static
localcli ${RESOURCE_GRP} system hostname set -f ${hostname}
localcli ${RESOURCE_GRP} network ip route ipv4 add -g ${gateway} -n default

To the following. William’s script tries to recreate vmk0, but I found that although it works, the adapter does not take up a new MAC address, so inter-host network connectivity fails. I therefore ignore vmk0 (which has been deleted already) and create a new VMkernel adapter (vmk1) instead.

# sets up VMK1
localcli ${RESOURCE_GRP} network vswitch standard portgroup add -p "Management Network" -v "vSwitch0"
localcli ${RESOURCE_GRP} network ip interface add -i vmk1 -p "Management Network" -M ${mac}
localcli ${RESOURCE_GRP} network ip interface ipv4 set -i vmk1 -I ${ipaddress} -N ${netmask} -t static
localcli ${RESOURCE_GRP} system hostname set -f ${hostname}
localcli ${RESOURCE_GRP} network ip route ipv4 add -g ${gateway} -n default

After editing both scripts, open a PowerCLI console and run the edited script to instant clone the nested ESXi hosts. On successful cloning you should see the following, and 6 nested ESXi hosts will be cloned out and visible in the vSphere Client.

You should be able to ping all the nested ESXi hosts and even connect to and configure them like normal ESXi hosts. The hostnames will be in place too.

Connect Cloned ESXi to vCenter

So of course, we need to add the cloned nested ESXi hosts into vCenter for management. You can certainly use the GUI to create the cluster and add hosts one by one, but I’m a bit lazy, so I use PowerCLI to help:

1..6 | ForEach-Object { $esxi = "vsanesx0$_"; Get-Cluster -Name VSAN | Add-VMHost -Name $esxi -Force -RunAsync -User root -Password abcd1234 }

This adds all the hosts into vCenter. By the way, check the memory consumption of the 32GB ESXi host: we have 7 x 8GB nested ESXi hosts running on top, yet the memory usage is far below 56GB. Cool enough?

Configure the ESXi for VSAN Deployment

One last step before we can deploy vSAN: we need to enable VSAN traffic among the nested ESXi hosts. To keep things simple, I will reuse the Management Network for both vMotion and VSAN traffic, which I think is good enough for a POC. Again, you can use the GUI, but I prefer the PowerCLI way:

Get-VMHost vsan* | Get-VMHostNetworkAdapter -Name vmk1 | Set-VMHostNetworkAdapter -VMotionEnabled $true -VsanTrafficEnabled $true -Confirm:$false

With that, everything is good to go for the vSAN deployment!

Create VSAN Cluster

This is really simple: you just need to enable vSAN on the new VSAN cluster we created and connected the hosts to.

BOOM! VSAN 6.2 running on 6 Instant Clone ESXi Hosts.


Well, this is definitely not something supported. But again, testing out VSAN does not have to cost $$$: I now have 6 (actually I did a 9-node setup too) ESXi hosts running on a single 32GB ESXi host. Many laptops and PCs ship with 32GB of memory these days, so you can also test this on your own machine. I hope this is helpful for you!

P.S. Actually my 32GB Memory ESXi Host, is another Nested ESXi Host too 🙂

Setup vSphere Integrated Container v0.8.0-rc3

As mentioned in the previous blog, there are a few approaches for configuring a container environment on your vSphere environment. Three of them are:

  1. Approach 1: vSphere Integrated Container
    • Leverage Existing vSphere Environment
    • Easier Approach
  2. Approach 2: Photon Platform
    • Green Field Deployment
    • NOT vCenter integrated
    • Kubernetes, Mesos and Docker Swarm Provider
    • Multi-tenant
  3. Approach 3: Existing Container Host (Core OS, Photon OS… etc)
    • Managed by vRA 7.2
    • Managed by Admiral

Most of the items mentioned above are open source projects, either from VMware or 3rd-party providers, so you can actually test the deployments in your environment and enjoy running containers over ESXi or vSphere. As these projects are still evolving, please refer to each project’s URL for the latest direction and information. It has happened that projects changed radically in function and feature under the same code name.

In this blog, I would like to illustrate the deployment considerations for vSphere Integrated Containers (VIC). From the vSphere 6.5 What’s New, you would know that VIC will soon come bundled with vCenter.



So what if you are running vSphere 5.5 or 6.0? Well, you can use vSphere Integrated Containers (VIC) from the VMware open source project directly.

Don’t think VIC is too difficult a solution, even though we mentioned that VIC serves both the developer and operations teams at the same time. VIC provides a command for you to create a Virtual Container Host VM on an ESXi host. This Virtual Container Host VM then exposes the Docker API for developers to consume, and each container provisioned on it is deployed in a separate VM, in VMware terminology a jeVM (just enough VM). These VMs are cloned through Instant Clone technology: imagine a linked clone of both memory and storage. The OS memory overhead is thus de-duplicated, while each container runs in a dedicated VM for traditional monitoring and operations.


Deploying vSphere integrated Container (VIC)

From the VMware open source website you can learn the deployment steps and details. We can deploy VIC in three ways:

  1. Standalone ESXi Host
  2. Standalone Host Under Management of vCenter
  3. Cluster Under Management of vCenter

Points 1 and 2 are more for development environments; I will demonstrate how Point 3, which is much more production-ready, can be deployed and configured. To learn more about the prerequisites for deploying VIC, again refer to the VMware open source website. But let me try to simplify this.


Every Virtual Container Host deployed by vSphere Integrated Containers uses 4 types of network. The container bridge network is the backbone for data traffic between containers, and therefore has to be a dedicated, isolated network for each Virtual Container Host. The other networks can share one single network with one IP address. I will use this approach to deploy my Virtual Container Host with VIC.

So let me show you what I have deployed in the environment:

  1. Install Virtual Container Hosts thru’ VIC Command
  2. Install VIC Plugin on vCenter Server Appliance
  3. Deploy Some Containers to Check the Backend behaviour

Install Virtual Container Hosts thru’ VIC Command

As mentioned, you can download VIC from the open source page. The most updated version I could download is v0.8.0-rc3. On unzipping, you will find a bunch of scripts and ISOs which we leverage to deploy the Virtual Container Hosts. But as mentioned in the open source guidelines, DO NOT run any executables other than these scripts:

  • vic-machine-darwin
  • vic-machine-linux
  • vic-machine-windows.exe

Which one to use depends on the OS you are running for setting up your VIC Virtual Container Host. For my example I use Windows, and I use one IP for the 3 networks mentioned, while keeping the container bridge network on an NSX logical switch dedicated to the first Virtual Container Host I created. I use the following command:

vic-machine-windows create --target administrator@vra.local:P@ssw0rd@<vCenter FQDN or IP> --no-tls --compute-resource "Cloud Resources" --image-store Cloud-DS01 --bridge-network vxw-dvs-15-virtualwire-3-sid-5002-vic-bridge-02 --public-network "VM Network" --public-network-ip <public IP> --public-network-gateway <gateway IP> --management-network "VM Network" --client-network "VM Network" --dns-server <DNS IP> --name vch2 --thumbprint 2B:3E:56:D1:61:9A:2F:91:D5:8D:B2:D1:15:CC:DE:CB:4C:D6:B3:A4


  • “VM Network” is the port group for external access
  • “Cloud-DS01” is the datastore where the Virtual Container Host is placed
  • “Cloud Resources” is the cluster where the Virtual Container Host is placed
  • “vxw-dvs-15-virtualwire-3-sid-5002-vic-bridge-02” is a VXLAN-based port group for the container bridge network

I could not get the command to work with any TLS configuration until I put in the --thumbprint 2B:3E:56:D1:61:9A:2F:91:D5:8D:B2:D1:15:CC:DE:CB:4C:D6:B3:A4. You can actually get this thumbprint by executing the command with the “--thumbprint xxxxxxx” part removed.
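If you want to check a thumbprint yourself, openssl can print a certificate’s SHA-1 fingerprint in the same colon-separated format vic-machine expects. The sketch below generates a throwaway self-signed certificate just to demonstrate the format (the CN is a made-up hostname); against a live vCenter you would instead fetch the real certificate, e.g. with openssl s_client:

```shell
# create a throwaway self-signed cert (stand-in for the vCenter certificate)
openssl req -x509 -newkey rsa:2048 -keyout /tmp/vc-key.pem -out /tmp/vc-cert.pem \
  -days 1 -nodes -subj "/CN=vcenter.lab.local" 2>/dev/null

# print the SHA-1 thumbprint in the XX:XX:... format used by --thumbprint
openssl x509 -in /tmp/vc-cert.pem -noout -fingerprint -sha1
```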


So wait till the Virtual Container Host is provisioned successfully. By the way, you will need to open a non-default network port between the ESXi hosts (the easier way is to disable the firewall service on all hosts within the cluster).

After running the script you will find a vApp and a VM deployed (the other VMs are actually containers we deployed; we will show how these VMs are created in the last section):


You can verify the deployment at https://<VCH FQDN or IP>:2378, where you can see the status on the web page


Install VIC Plugin on vCenter Server Appliance

To manage the VIC Virtual Container Hosts in a better way, the VIC installable includes a Web Client plugin which we can install on the vCenter Server Appliance to enable a summary text box showing the Docker API URL and related information. You can install this by:

  1. Enable bash shell for SFTP upload on VCSA
  2. Upload the VIC installable to the VCSA
  3. Configure the Plugin Config file
  4. Setup the VIC Web Client Plugin

The detailed steps are as follows:

As we need to upload the VIC binary onto the vCenter Appliance, we first enable the Bash shell on the VCSA according to the VMware KB.


Then we can upload the VIC files through WinSCP or SFTP


Expand the File on the VCSA


Edit the Plugin Config file at /vic/ui/VCSA/configs


Include the vCenter IP in the configuration file; you just need to edit the attribute VCENTER_IP


You can then run the setup script at /vic/ui/VCSA to set up the Web Client plugin


On completion, you can check out the new web Client plugin feature by selecting the Virtual Container Host on the Web Client


Deploy Some Containers to Check the Backend behaviour

Lastly, we can try deploying containers on the Virtual Container Host. We can do this by using the docker command with the remote host attribute as follows:

docker -H <Virtual Container Host IP>:2375 run -d <Docker Image>
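For instance, with a hypothetical VCH at 192.168.100.50, pulling up an nginx container and listing it remotely would look like this (the address and container name are examples only):

docker -H 192.168.100.50:2375 run -d --name web1 nginx
docker -H 192.168.100.50:2375 ps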


So you can see the docker images being provisioned. And what does it look like in the vSphere view? As said, when a new container is provisioned, one VM is cloned out per container, and all those VMs are grouped under the Virtual Container Host vApp.



I think it’s really easy to deploy Virtual Container Hosts on a vSphere environment and enjoy the benefits of containers on your existing infrastructure. The great thing is that we gain container visibility through the 1-to-1 mapping between container and VM: from the vSphere performance tab and vROps, we can know the resource utilisation of each container through monitoring of each jeVM. Sounds good, right?

vRealize Automation 7.2 Lab – Blueprints IaaS

So back to basics: what vRA 7.2 natively does best is IaaS-based provisioning, whether single-machine provisioning or multi-machine with network components. If you have ever worked on vRA 6.2 or 6.1, or vCAC 6.0, 5.5 or even older, you know we had different concepts, and thus different blueprint designers, for single-machine and multi-machine deployments. Since version 7.0, VMware has integrated all the different blueprints into one new concept: the converged blueprint. I think the introduction of the converged blueprint is one of the best new features so far in vRealize Automation 7.x.


It allows you to create blueprints from different endpoints with the same look and feel in one single UI. So what do we have to do before we can create a blueprint for a vSphere-based machine?

  1. vSphere Template
    • Offline Snapshots for Linked Clone Base Provisioning
    • VM Template for Full Clone Base Provisioning
  2. OS Customisation Specification
    • Windows
    • Linux
  3. (Optional) Install VRA Guest Agent
  4. (Optional) Install Software Bootstrap Agent

Once you have the above done, you can create a converged blueprint based on the vSphere template and OS customization specification. While Steps 1 and 2 are comparatively trivial, I would like to highlight some steps in setting up Step 3 and Step 4.

You do not necessarily need the vRA Guest Agent if the IaaS blueprint you want to create is basic OS cloning with default provisioning. You only need the vRA Guest Agent if you would like to tweak or customise the default provisioning configuration. You can refer to this guide to check what you can do with the vRA Guest Agent and the corresponding custom properties.

Likewise, you don’t need to install the Software Bootstrap Agent if you are not using software components during machine provisioning in vRA. To clarify: you do need the vRA Guest Agent if you want to provision software components through vRA, but you don’t need to go through Step 3 before Step 4, because when you prepare the OS template with the Software Bootstrap Agent, the vRA Guest Agent is installed alongside.

Step 3 – Install VRA Guest Agent

On the template prepared in Step 1, download the vRA agent from https://<vra FQDN or IP>. Choose “Guest and software agents page”.


Choose the corresponding guest agent for your OS. As my example is Windows, download the Windows guest agent files (64-bit).


Before running the downloaded file, right click and edit the property of the file


Select “Unblock” to trust the installable


You then can run the installable and click Run to proceed


You can see the executable unzips the agent content into a VRMGuestAgent folder. You have to manually move the unzipped folder to C:\, which is the expected folder path


Then you need to setup the VRA agent as a service with the following command

WinService.exe -i -h <IaaS Host>:443 -p ssl


Great, your template now has the vRA Guest Agent installed, ready for blueprints which require custom properties for customization

Step 4 – Install VRA Software Bootstrap Agent

Scroll down the guest agent download page and you will find a PowerShell script named prepare_vra_template.ps1. I recommend this way of preparing your VM template as it’s trivial and easy, but it requires internet access


Before you can run the ps1 script, you may need to configure the PowerShell execution policy from a PowerShell session with administrator rights

Set-ExecutionPolicy -ExecutionPolicy Unrestricted

Run the script and it will ask you for basic information like the vRA appliance address, vRA IaaS host address, username and password, etc. It will then download the vRA agent and the Java runtime required.


The vRA agent will be installed if not found, or removed and reinstalled if it is already present in the template


After all the tasks are done, you will see the message “INSTALL COMPLETE Ready for shutdown”. You can then shut down the VM to create a snapshot or VM template for cloning by vRA


Creating VRA Blueprint

Shut down the VM and take an offline snapshot to be used by the vRA blueprint. Also create an OS customisation spec for your OS if you do not have one; don’t worry about the network part in the customisation spec as it will be overwritten by vRA anyway.

After creating the snapshot and customisation spec, remember to perform a data collection in the vRA portal. As created in the last blog, you can do this by requesting the XaaS for data collection.


Go back to the VRA and go “Design” tab and Create a new Blueprint.


In the “Build Information”, I use “Linked Clone”, which is why I need to choose a VM which has a snapshot in “Clone From” and the corresponding snapshot in “Clone from Snapshot”. Finally, we also have to supply the customisation spec; you need to type rather than select in this field.


Define the Maximum, such that your end user can change the sizing of the VM resources


The other pages can be skipped. Afterwards, publish the blueprint on completion. Then go to the “Administration” tab to create a new service for the IaaS-based function


You would see something like the following on successful creation


In the Catalog items, you can see the Windows Blueprint we published. Choose the item


Change the blueprint to classify it as an “IaaS” service


Update the Entitlement at the “Entitlements”


Add the IaaS service under the “Entitled Services” and add the entitled Actions for the VM


You then can see the new IaaS base catalog item under the “Catalog” tab


On requesting, you can change the CPU and Memory configuration


Test by confirming the request with OK


Monitor the provisioning status from the “Request” tab


On successful creation, you can find the provisioned item under the “Item” tab


Software Component 

Besides IaaS and XaaS, we can also create software components, which allow scripts to be kicked off during the provisioning stage. As said, the template has to have the Software Bootstrap Agent installed before this works. To add a new software component, go to the “Design” tab and “Software Components”.


I created a really easy script which kicks off the ipconfig command after the VM provisioning


Skip the properties Page


Input the Scripts to be kicked start during the VM provision


You can definitely use your own scripts for different stages of provisioning


Confirm the configuration by pressing Finish


To test this, we can use an existing blueprint or create a new IaaS-based blueprint


The NSX configuration can be ignored as no network component is included yet


After dragging and dropping the vSphere blueprint into the canvas, drag the software component on top of the IaaS blueprint. Then save your blueprint.


Publish the Blueprint created


Put it also under IaaS Service in the “Catalog Items” tab


You can see the new item under the “Catalog” item


Request the item for provisioning


Press OK to confirm the request


On Successful Creation, go check out the request item by double clicking it


You can find the Software Component (ipconfig /all) being executed after the VM provisioning


And all the tasks were run successfully


You can Optionally test the “Scale Out” function in the VRA


Then you can select how many instances of Windows you would like to scale out


Confirm the Request by OK


And you will get the scaled-out VM instances soon.


Here you have seen how IaaS, software component and custom-property-based provisioning can be performed with the vRA Guest Agent and vRA Software Bootstrap Agent. It is important to map your use case to the right blueprint design; either agent may not be useful for your use case, but the holistic view of the converged blueprint designer really gives users a much easier way to design their blueprints. I hope you enjoy it.

vRealize Automation 7.2 Lab – Blueprints XaaS

As the third part of the blog series, I will start demonstrating some XaaS blueprint usage. XaaS refers to Anything as a Service; to be more exact and technical, I think XaaS refers to any service achievable using vRealize Orchestrator. So when will we need XaaS? Think about what you want your users to consume from the self-service portal, and what is not supported by the native functions of vRA. The gap between the two can possibly be filled by XaaS: whatever vRO can do can be provisioned onto the vRA self-service portal as a catalog item for consumption.

So, I would like to demonstrate two stuffs here

  1. Enable XaaS on vRA
  2. Configure one XaaS service to Trigger Data Collection from Endpoints

As a result, you don’t need to go to the “Infrastructure” tab, then “Compute Resources”, then “Data Collection”, and click “Start” on each collection object one by one. Sounds good? Let’s get started!

Enable XaaS on vRA

So actually, you don’t need to do a lot to enable the XaaS service. As mentioned in the previous blog, you need to enable the XaaS Designer to get the “Design” tab in the vRA portal. You should also check that vRO is reachable from vRA.

Go to the “Administration” tab and “vRO Configuration”


Go to Server Configuration and check the VRO configuration, by default, the option “Use the default Orchestrator server that was configured by the system administrator” should be selected


Press “Test Connection” to ensure the connectivity from the vRealize Automation to the vRealize Orchestrator


Configure one XaaS service to Trigger Data Collection from Endpoints

As said, we are going to make one catalog item which allows a user to request a data collection from all the compute resources. We will leverage two default vRO workflows to achieve this.

First, we have to add the IaaS host into vRO, i.e. to let vRO trigger actions on the IaaS host. This is important as data collection is actually a function of the IaaS host rather than the vRA appliance itself. If you are familiar with vRO, you can use the vRO client to perform this configuration step. But as I’m a bit lazy to download the Java runtime to open the vRO client in my lab, I will configure another XaaS to connect the IaaS host to vRO, and after that I will set up the XaaS for data collection

For the 1st XaaS, go to “Design” and “XaaS Blueprints”, click New, and browse the vRO workflows available in the embedded vRO. Find “Add the IaaS host of a vRA host” in the tree; this is the workflow we are going to add as a XaaS for connecting the IaaS host into vRO


Click Next and review the base information of the XaaS


On the next page, I define the vCAC host, which is the input of the “Add the IaaS host of a vRA host” workflow. You can skip this if you prefer to make it an input field, but as said, I am too lazy to type in the information later, so I make this XaaS a catalog item without the need to input anything


So you just have to click the vCAC Host object and define a default value with “Constant”; here I select “Default” as the workflow input


In the Constraints tab, I make the field read-only by setting “Yes” for the “Read Only” field


In the next page, select this as a “No Provisioning” workflow


Skip the last page of wizard and press Finish to confirm the XaaS blueprint


Click “Publish” to enable the XaaS workflow


But you need to configure “Catalog Management” under the “Administration” tab to let your end users see the XaaS just published


In Services tab, add a New Service. e.g. I created one named “XaaS”


You should see something similar when done


Then go to the “Catalog Items” and you will see the XaaS Blueprint you just published.


Click on it and select the “service” it should belong to; in my case this belongs to “XaaS”, which is why I choose XaaS and confirm the setting


Finally, create an entitlement to let your users see the service, or in particular the XaaS catalog item


On the first page, select the business group. In vRA 7.2 we have the “All Users and Groups” button, which simplifies the configuration a lot; previously we had to add again all the users or groups we had already put into the business group


In the “Items & Approvals” tab, select XaaS under the “Entitled Services”


Click Finish and you have completed the configuration of the first XaaS


Go back to the “Catalog” item and you can see the XaaS item to add the IaaS host


Try running it by “Request” and confirm the wizard


Click Next to proceed


Enter the username and password to confirm the connection. Remember that you need to use a user with local administrator rights on the IaaS host, i.e. local administrator or equivalent


Confirm the request by clicking OK


Check out the status of the request under the “Requests” item


Double-click the request item to see more detail of the progress


Wait till the request is being done with status “Successful”


GOOD! Now we can set up our next XaaS item, the manual data collection. Create a new XaaS item from the “Design” tab and “XaaS Blueprints”; this time find the workflow named “Force data collection”.


Confirm the general information


Define the input of this workflow again, point it to the IaaS Host we just added in last step


Also make this a read-only field

Select “No Provisioning” too


Skip the last page of the wizard and click finish


So you can see your second XaaS; click the publish button again to make this item available


Go to the “Administration” tab and “Catalog items” to configure the published catalog item


Make this item a “XaaS” service again


We can see this item at the “Catalog” tab, choose the “Force data collection” item this time


Press Submit to confirm the request


Press OK to confirm the request


Under the “Requests” item you can see the request for the data collection. On success, you can check the data collection status from the “Infrastructure” tab


You can see the data collection is done



Great! You got your XaaS service and service items provisioned. More importantly, you used one catalog item to save a lot of time, rather than clicking through the administration tab and the compute resource subtab for data collection every time for every object. I hope this is helpful for you!

vRealize Automation 7.2 Lab – Container and vROPS Integration

As the last blog post of the vRA 7.2 lab series, I would like to cover some miscellaneous topics. I have to state again that vRA is a very powerful tool with comprehensive functionality and great extensibility for different kinds of integration. We have done XaaS provisioning, IaaS provisioning and software component provisioning on top of IaaS objects. In this blog, we will cover two miscellaneous items in vRA 7.2 which help make this cloud management portal even more complete:

  1. Container Integration
  2. Monitoring Integration with vRealize Operation

Container Integration

I have talked about VMware’s Cloud Native Application strategy in a previous blog post. In that post, I mentioned multiple open source projects VMware is working on to enable container integration in your vSphere or ESXi environment. Namely, there are two options for people interested in container deployment over a VMware environment:

  1. Photon Platform – For people who are more familiar with Container and provide a native platform for container based workloads only
  2. vSphere Integrated Container – For people who wanna serve a platform for both traditional VM and Container

So what has vRA 7.2 included to support containers? vRA 7.2 has natively embedded the open source project Admiral, so we have a UI for configuring container hosts, blueprints and applications directly in vRA 7.2, a.k.a. you can now manage cloud VM workloads, local VM workloads and also container-based workloads.

We first have to connect our vRA 7.2 (actually Admiral) to the container hosts in your environment. Of course you can also provision container hosts from vRA 7.2 itself, but the same logic applies: you have to bring them under the management of Admiral in order to start container-based provisioning.

(Optional) Create a Container Host from Photon OS

You may not need this step if you already have existing Docker hosts in your environment. These can be any ordinary Linux hosts with Docker installed and remote access enabled.

As I don’t have any Docker hosts, I will provision a few Photon OS VMs to be my container hosts. If you don’t know what containers and container hosts are, imagine them as VMs and ESXi respectively, except that they are not host virtualization but an OS (or application) virtualization layer.

So you just have to download the Photon OS from the URL and deploy it without customisation.


Log in to the Photon OS VM with the default username and password, root/changeme. You will have to change the password on first login


If you don’t have DHCP ready in your environment, you need to configure the IP address of the Photon OS. Do this by editing the file under /etc/systemd/network: change the file name from dhcp to static and input the following attributes under [Network]:

  • Address=<Static IP>
  • Gateway=<Gateway IP>
  • DNS=<DNS IP>
  • Domain=<DNS Domain>
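Put together, the static configuration file would look something like the following (the file name and address values here are examples only, not from my lab):

# /etc/systemd/network/10-static-en.network
[Match]
Name=eth0

[Network]
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=192.168.1.10
Domain=lab.local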


You have to restart the network service to apply the static IP. Use the command

systemctl restart systemd-networkd.service


Then you can see the new IP address, after which you can enable the SSH daemon with

systemctl start sshd


So now you can log in to the Photon OS with SSH for further configuration


Edit the /etc/default/docker file to include the following line, which enables remote access to the Photon OS as a container host
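The line from the screenshot is not reproduced here, but it typically looks like the following. The exact flags are my assumption based on the standard Docker daemon options (binding the API to TCP 2375 without TLS, so this is for lab use only):

DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"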


You also need to configure the firewall to allow remote access to the container host on port 2375
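Photon OS uses iptables for its firewall; a minimal rule to open the port (not persisted across reboots, and again assuming the untrusted-lab setup above) would be:

iptables -A INPUT -p tcp --dport 2375 -j ACCEPT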


Then you can start the docker service on the Photon OS. Remember that Photon OS supports multiple container technologies (Docker, rkt, etc.), but vRA 7.2 integrates with Docker as of today.


Configure integration from vRA 7.2 to Photon OS

After the Docker hosts are ready, we can proceed to configure vRA 7.2 to integrate with them. Follow these steps:

Go to the “Containers” Tab for the vRA 7.2


Add a host by connecting to the IP of the container host, and add it to an appropriate placement zone with the correct login credentials and deployment policy. The placement zone and deployment policy are akin to the reservation policy, which governs how workloads are placed on different resources


Verify and Click Add to add the host afterwards


You can see your first Container Host at the “Host” tab


You are now ready to deploy containers from the Templates tab


So I tested this by selecting the nginx and click “Provision” to test for deployment


The provision status is at the right hand side panel


On completion, you can check the provisioned app’s details and resource consumption; even the logs are visible in a single UI


To check out the application, you can click the URL in the Ports section to confirm it

Of course, you can further make this a Blueprint in the vRA to provision container based application


And you can find the object under the Applications tab after provisioning such a blueprint


And the provisioned item will be listed under “Requests” tab just like any other Resources


Great, you can now build new applications containing both Docker containers and traditional VMs, making them so-called version 2.5 applications.

vRealize Operation Integration

Finally, this step is easy but critical: you need to provide a way for end users to monitor their workloads in vRA too. This can simply be done through the integration between vRA and vROps.

You can configure the integration under the “Administration” tab, under “Reclamation” and “Metric Providers”


Type in the URL as https://<vrops FQDN or IP>/suite-api/, test the connection, then hit Save to confirm the setup
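Before hitting Save, you can also sanity-check from a shell that the suite-api endpoint answers; the hostname and credentials below are placeholders, and -k skips certificate validation for lab self-signed certs:

```shell
# expect a small XML/JSON document describing the current vROps version
curl -k -u admin:'VMware1!' https://vrops.example.local/suite-api/api/versions/current
```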


That's it: you can now see the familiar vROps badges under workloads provisioned by vRA.



Throughout this series of blogs, I hope you have obtained a 101-level understanding of vRealize Automation 7.2. There is far more that vRA 7.2 can do, and you will discover more as you hit different use cases and tests. I hope this provides a great way to start learning and playing with vRealize Automation 7.2, and that it has been helpful to you!

vRealize Automation 7.2 Lab – Initial Configuration

After preparing the vSphere and NSX environment as the base of the virtual infrastructure, we have to go back to the vRealize Automation 7.2 URL to proceed with the integration configuration. vRealize Automation can certainly connect to many endpoints for workload provisioning, such as Amazon EC2, vCloud Air, Hyper-V, KVM, etc., but in this blog post we will first focus on integrating vSphere and NSX with vRealize Automation 7.2. Again, the steps are much the same as when deploying vRA 7.0 or 7.1. If you have played around with vRealize Automation before, you should know there are a lot of concepts introduced on top of vSphere, whether business, administration or operation related, and we have to configure each corresponding item before we can actually provision a VM or other workloads. Below is an overview of the deployment steps, with some lines added to explain why you would have to do each of them:

  1. Create a vCenter Endpoint – This tells vRA what resources it owns; we configure the vCenter SDK URL and the NSX URL in this step
  2. Create an Orchestrator (VCO/VRO) Endpoint – For NSX integration, we need vRO (embedded in vRA by default) connected to vRA for item discovery and network provisioning
  3. Add Directory as Identity Source – Since there is an Identity Manager inside vRA too, we can leverage it to integrate with AD or LDAP for directory services
  4. Create a Fabric Group – To select which cluster(s) or host(s) you would like to use as the resource pool for workload provisioning
  5. Setup Data Collection of Compute Resource – Until you set up a valid Fabric Group, you cannot actually pull information from the vSphere environment. Also, since vRA and vCenter keep an asynchronous relationship, we need to set up a scheduled data collection
  6. Create Machine Prefixes – These define the workload naming convention used when you provision VMs later
  7. Create Network Profiles – These define what kind of network the provisioned VM is attached to. More often than not, Single Machine Blueprint workloads use an External Network while Multi Machine Blueprints use NAT ones
  8. Create Business Group – Within a Tenant, you have to grant resources to different teams or departments; a Business Group is the logical container for that. You can specify a default machine prefix which dictates the naming of VMs from the group
  9. Create Reservation – Each Business Group can have multiple reservations, say one for public cloud and one for private cloud (vSphere). You define how much resource you grant to a Business Group through the use of Reservations
  10. Configure User Roles – After the base configuration above, you have to grant user roles to users from different business groups. In particular, this is for granting administrative rights to users who will help create blueprints for provisioning

I have skipped some steps which might not be necessary in your environment. We could also create reservation policies to let users know there are different tiers of storage, and map different blueprints to different compute reservations, which helps with provisioning in different locations. But I will keep this simple here first.

Create a vCenter Endpoint

Go to the “Infrastructure” tab and click “Endpoints”. Select “New” and “vSphere (vCenter)” from the “Virtual” category


If you didn't change the default value during deployment, your vSphere Endpoint will be named “vCenter”. Input the address of vCenter as https://<VC FQDN or IP>/sdk and configure the Credentials for accessing it. Check “Specify manager for network and security platform”; you then have to specify the NSX Address. Do remember the URL you specify should be in the form https://<NSX FQDN or IP>. The one in the following screen capture is actually wrong :).
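Both URLs are easy to typo, so it can be worth sanity-checking them from a shell before saving; the hostnames below are placeholders, and -k skips certificate validation for lab self-signed certs:

```shell
# vCenter SDK endpoint: expect an HTTP status code, not a connection error
curl -k -s -o /dev/null -w '%{http_code}\n' https://vc.example.local/sdk
# NSX Manager: note there is no /sdk suffix here
curl -k -s -o /dev/null -w '%{http_code}\n' https://nsx.example.local
```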


Create the vCenter and NSX credentials for connecting to the two components


On finishing, you should see the following result


Create an Orchestrator (VCO/VRO) Endpoint

Add the vRO endpoint by going to the same page as in the step above, but select “vRealize Orchestrator” from the “Orchestration” category instead


Define the vRO user credential; you should use administrator@vsphere.local by default


In order to let vRA use vRO, you need to define the priority of the vRO Endpoint with the attribute “VMware.VCenterOrchestrator.Priority”. Give it the value “1”


The URL of the VCO is https://<VRA FQDN or IP>/vco


On finishing, you should see two endpoints under the tab, one for vCenter + NSX and another for vRO


Add Directory as Identity Source

Go to the “Administration” tab in vRA and select “Directories”. Click “Add Directory” to add an identity source. I use “Add Active Directory over LDAP/IWA” to integrate my MS Active Directory.


Input the domain name and join domain information if you are selecting “Active Directory (Integrated Windows Authentication)”


Also input the user for querying AD as the “Bind User Details”


Click “Save and Next” and select the Domain you would like to add in


Press Next to proceed


Check the checkbox and provide a base DN for syncing the user groups


Provide the base DN for syncing the user entries


Click “Sync Directory” and it will start discovering and syncing users from AD to vRA


Configure the “Sync Setting” after the initial sync


Set it to sync “Every hour” to ensure the latest changes in AD are synced sooner


You should see the following on completion


Setup Data Collection of Compute Resource

Go to the “Infrastructure” tab and go to “Compute Resources”, click the resource and select “Data Collection”


Configure the frequency of syncing each item


Create Machine Prefixes

Define Machine Prefixes under the “Infrastructure” tab, “Machine Prefixes”. Create one through the “New” button


You can define Machine Prefixes per Business Group, per function, or for other purposes
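Conceptually, a machine prefix is just a base string plus a zero-padded counter; the sketch below illustrates the naming scheme (the prefix and padding width are illustrative, not vRA's exact implementation):

```shell
# prefix "web-" with a 3-digit counter yields web-001, web-002, ...
prefix="web-"
for n in 1 2 3; do
  printf '%s%03d\n' "$prefix" "$n"
done
```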


Create Network Profiles

Go to the “Infrastructure” tab and create new Network Profiles under “Network Profiles”


Create an External Network profile first, as the other kinds of network profiles require a valid External Network


Input the information of the External Network


DNS information


The network range on which you would like your VMs to be provisioned


Then you can set up another NAT network, which is for blueprints containing multiple VMs


DNS information


The network range on which you would like the VMs to be provisioned


You should see the following on successful creation


These network profiles are not tied to anything yet; they will be once we create blueprints or define reservations.

Create Business Group

Go to the “Administration” tab and choose “Business Groups”; there you can create a new Business Group through the “New” button


Give the Business Group a Name and fill in the basic information


The next page lets you select different user roles. In simple words, the User role refers to users who can only provision machines for themselves and perform day 2 operations on their own VMs, while the Support and Group Manager roles are allowed to provision on behalf of others in the Business Group and see their provisioned machines


Give the Business Group a default machine prefix (although this is not mandatory)


On finishing, you should see something like the following


Create Reservation

Go to the “Infrastructure” tab and “Reservations”, where you can create a resource reservation for the Business Group against the compute resources we defined in the Fabric Group


Choose a “vSphere (vCenter)” based reservation and provide the basic information on the first page; be sure to select the business group the reservation is created for


Select the Compute, Storage and Resource Pool granted for this reservation


Select the network mapping for the Network Profile we created. E.g., my “External Network” vSphere port group maps to the Network Profile I created in vRA


Skip the Alerts setting


On successful creation, you can see something like below


Configure User Roles

You can further configure RBAC for users in the directory. Say I would like to grant more rights to the configurationadmin@vsphere.local user to let him work on blueprints and XaaS items. I go to the “Administration” tab, then “Directory Users and Groups”, and search for the user from the top-right search box


Double-clicking the user allows you to further configure their user roles. I selected all the possible roles to show you what differs after re-login


You can see more tabs available, e.g. “Design” for XaaS configuration



So this basically summarises the steps for configuring the integration between vSphere + NSX and vRealize Automation. We now have a valid Endpoint, Fabric Group, Business Group and Reservation, which lets us go on to configure Blueprints. I will have a separate blog post on creating simple Blueprints for IaaS, XaaS and Software Component provisioning soon.