Setup vSphere Integrated Containers v0.8.0-rc3

In the previous blog, we talked about a few approaches for configuring a container environment on your vSphere environment. Three of these approaches are:

  1. Approach 1: vSphere Integrated Container
    • Leverage Existing vSphere Environment
    • Easier Approach
  2. Approach 2: Photon Platform
    • Green Field Deployment
    • NOT vCenter integrated
    • Kubernetes, Mesos and Docker Swarm Provider
    • Multi-tenant
  3. Approach 3: Existing Container Host (CoreOS, Photon OS, etc.)
    • Managed by vRA 7.2
    • Managed by Admiral

Most of the items mentioned above are open source projects, either from VMware or from third-party providers, so you can actually test the deployments in your own environment and embrace running containers over ESXi or vSphere. As these projects are still evolving over time, please refer to each project's URL for the latest direction and information. It has happened that some projects changed radically in function and feature while keeping the same code name.

In this blog, I would like to discuss and illustrate the deployment considerations for vSphere Integrated Containers (VIC). From the vSphere 6.5 What’s New, you would know that VIC will soon come bundled with vCenter.



So what if you are running vSphere 5.5 or 6.0? Well, you can use the vSphere Integrated Container (VIC) from the VMware open source project directly.

Don’t think VIC is too difficult a solution, even though we mentioned that VIC provides a single solution for both the Developer and the Operations team at the same time. VIC provides a command for you to create a Virtual Container Host VM on an ESXi host. This Virtual Container Host VM then exposes the Docker API for developers to consume. Each container provisioned on this Virtual Container Host is deployed as a separate VM; in VMware terminology we call these jeVMs (just enough VMs). These VMs are cloned through Instant Clone technology; imagine it as doing a linked clone on both memory and storage. The OS memory overhead is effectively deduplicated, while each container runs on a dedicated VM for traditional monitoring and operations.


Deploying vSphere Integrated Containers (VIC)

From the VMware open source web site you can learn about the deployment steps in detail. We can deploy VIC in three ways:

  1. Standalone ESXi Host
  2. Standalone Host Under Management of vCenter
  3. Cluster Under Management of vCenter

While Points 1 and 2 are more for development environments, I will demonstrate how Point 3 can be deployed and configured, which is much more production-ready. To learn more about the prerequisites for deploying VIC, you can again refer to the VMware open source web site, but let me try to simplify this.


Every Virtual Container Host deployed by vSphere Integrated Containers uses four types of networks. The container bridge network is the backbone for data traffic between containers, and therefore it has to be a dedicated, isolated network for each Virtual Container Host. The other networks can share one single network with one IP address. I will use this approach to deploy my Virtual Container Host with VIC.

So let me show you what I have deployed in the environment:

  1. Install Virtual Container Hosts through the VIC Command
  2. Install the VIC Plugin on the vCenter Server Appliance
  3. Deploy Some Containers to Check the Backend Behaviour

Install Virtual Container Hosts through the VIC Command

As mentioned, you can download VIC from the open source page. The most updated version I could download is v0.8.0-rc3. On unzipping, you will find a bunch of scripts and ISOs which we leverage to deploy the Virtual Container Hosts. As mentioned in the open source guidelines, do not run any executables other than these scripts:

  • vic-machine-darwin
  • vic-machine-linux
  • vic-machine-windows.exe

Pick the one matching the OS you are running for setting up your VIC Virtual Container Host. For my example, I use Windows, and I use one IP for the three networks mentioned, while keeping the container bridge network on an NSX Logical Switch which is dedicated to the first Virtual Container Host I created. I use the following command:

vic-machine-windows create --target administrator@vra.local:P@ssw0rd@ --no-tls --compute-resource "Cloud Resources" --image-store Cloud-DS01 --bridge-network vxw-dvs-15-virtualwire-3-sid-5002-vic-bridge-02 --public-network "VM Network" --public-network-ip --public-network-gateway --management-network "VM Network" --client-network "VM Network" --dns-server --name vch2 --thumbprint 2B:3E:56:D1:61:9A:2F:91:D5:8D:B2:D1:15:CC:DE:CB:4C:D6:B3:A4


  • “VM Network” is the network port group for external access
  • “Cloud-DS01” is the datastore for the Virtual Container Host to be placed on
  • “Cloud Resources” is the cluster for the Virtual Container Host to be placed in
  • “vxw-dvs-15-virtualwire-3-sid-5002-vic-bridge-02” is a VXLAN-based port group for the container bridge network

I could not get the command to work with any TLS configuration until I added --thumbprint 2B:3E:56:D1:61:9A:2F:91:D5:8D:B2:D1:15:CC:DE:CB:4C:D6:B3:A4. You can actually obtain this thumbprint by executing the command without the --thumbprint option; the error output will show the thumbprint to use.
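Another common way to read a certificate's SHA-1 thumbprint is with openssl. A minimal sketch follows; it generates a throwaway certificate so it runs anywhere, and the vCenter hostname in the commented variant is an assumption:

```shell
# Demo: create a throwaway self-signed cert, then print its SHA-1 thumbprint
# in the colon-separated form vic-machine expects.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-key.pem \
  -out /tmp/demo-cert.pem -days 1 -nodes -subj "/CN=demo" 2>/dev/null
openssl x509 -in /tmp/demo-cert.pem -noout -fingerprint -sha1

# Against a live vCenter (hostname is an assumption), the same idea is:
#   echo | openssl s_client -connect vcenter.vra.local:443 2>/dev/null \
#     | openssl x509 -noout -fingerprint -sha1
```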


So wait until the Virtual Container Host is provisioned successfully. By the way, you will need to open a non-default network port between the ESXi hosts (the easier way is to disable the firewall service on all hosts within the cluster).
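For the lab-only shortcut of disabling the host firewall, the ESXi Shell commands would be along these lines (a sketch, guarded so it is a no-op on a non-ESXi machine):

```shell
if command -v esxcli >/dev/null 2>&1; then
  # Lab-only shortcut: turn off the ESXi host firewall entirely.
  esxcli network firewall set --enabled false
  # Confirm the firewall state afterwards.
  esxcli network firewall get
else
  echo "esxcli not found - run these commands on each ESXi host in the cluster"
fi
```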

So after running the script you can find a vApp and a VM being deployed (the other VMs are actually containers we deployed; we will show how these VMs are created in the last section):


You can verify the deployment at https://<VCH FQDN or IP>:2378, where you can see the status on the web page.
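A quick reachability check of that admin portal can also be scripted with curl (the VCH address is an assumption; -k accepts the self-signed certificate):

```shell
VCH=vch2.vra.local   # assumption: your VCH's FQDN or IP
if curl -ksf "https://$VCH:2378/" >/dev/null; then
  echo "VCH admin portal is up"
else
  echo "VCH admin portal not reachable from this machine"
fi
```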


Install VIC Plugin on vCenter Server Appliance

To manage the VIC Virtual Container Hosts in a better way, the VIC installable includes a web client plugin which we can install on the vCenter Appliance to enable a summary text box showing the Docker API URL and related information. You can install it by:

  1. Enable bash shell for SFTP upload on VCSA
  2. Upload the VIC installable to the VCSA
  3. Configure the Plugin Config file
  4. Setup the VIC Web Client Plugin

The detailed steps are as follows:

As we need to upload the VIC binary onto the vCenter Appliance, we first need to enable the Bash shell on the VCSA according to the VMware KB.
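The appliance-shell steps look roughly like the following. This is a sketch of the KB procedure, guarded so it does nothing outside a VCSA (shell.set and shell are appliance-shell commands, not regular Bash built-ins):

```shell
if type shell.set >/dev/null 2>&1; then
  shell.set --enabled True   # allow dropping from appliancesh to Bash
  shell                      # enter the Bash shell
  chsh -s /bin/bash root     # default to Bash so WinSCP/SFTP can connect
else
  echo "not a VCSA appliance shell - steps shown for reference only"
fi
```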


Then we can upload the VIC files through WinSCP or SFTP:


Expand the file on the VCSA:


Edit the plugin config file at /vic/ui/VCSA/configs:


Include the vCenter IP in the configuration file; you just need to edit the VCENTER_IP attribute:
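That edit can be done with vi or scripted with sed. A self-contained sketch, where the path and the 192.168.1.10 address are assumptions for illustration (on the VCSA the real file is /vic/ui/VCSA/configs):

```shell
CONFIGS=/tmp/configs                       # stand-in for /vic/ui/VCSA/configs
printf 'VCENTER_IP=""\n' > "$CONFIGS"      # mimic the shipped empty attribute
# Rewrite the attribute in place with the vCenter's address.
sed -i 's/^VCENTER_IP=.*/VCENTER_IP="192.168.1.10"/' "$CONFIGS"
grep VCENTER_IP "$CONFIGS"                 # -> VCENTER_IP="192.168.1.10"
```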


You can then run the setup script at /vic/ui/VCSA to set up the web client plugin:


On completion, you can check out the new web client plugin feature by selecting the Virtual Container Host in the Web Client:


Deploy Some Containers to Check the Backend behaviour

So lastly, we can try deploying containers on the Virtual Container Hosts. We can do this by using the docker command with the remote host attribute as follows:

docker -H <Virtual Container Host IP>:2375 run -d <Docker Image>
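In full, a session against the VCH might look like this. The endpoint IP and the nginx image are assumptions, and the sketch is guarded so it degrades gracefully where no docker client or VCH is available:

```shell
VCH=192.168.1.50:2375   # assumption: your VCH's Docker API endpoint
if command -v docker >/dev/null 2>&1; then
  docker -H "$VCH" info || true          # daemon details served by the VCH
  docker -H "$VCH" run -d nginx || true  # each container becomes one jeVM
  docker -H "$VCH" ps || true            # rows map 1-to-1 to VMs in the vApp
else
  echo "docker client not installed - commands shown for reference"
fi
```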


So you can see the containers being provisioned. And what does it look like in the vSphere view? As said, when a new container is provisioned, one VM is cloned out per container, and all those VMs are grouped under the Virtual Container Host vApp.



I think it is really easy to deploy Virtual Container Hosts on a vSphere environment and enjoy containers’ benefits on your existing environment. And the great thing is that we gain visibility into containers through the 1-to-1 mapping between container and VM. So from the vSphere performance tab and from vROps, we can learn the resource utilisation of each container through monitoring of each jeVM. Sounds good, right?
