I’ve used this process multiple times to create quick Docker host VMs running on VMware Workstation in my home lab. It is important to note that although I’m using VMware Workstation, the specific type 2 hypervisor you use here is fairly unimportant – you could just as well use VirtualBox or VMware Fusion for this purpose.
Download the latest Ubuntu 16.04 LTS server ISO from: http://www.ubuntu.com/download/server (I believe 16.04 comes only in 64-bit, but make sure it’s the 64-bit version).
Create a new Virtual Machine for your Docker host using your type 2 hypervisor software (Workstation in my case).
Give the VM the following hardware spec:
OS – Linux/Ubuntu 64-bit
1 or 2 vCPUs
512 MB RAM
9GB disk
2 x vNICs (1st is set to the default NAT option and the 2nd should be set to Host-only)
Here is my VM’s setup:
Attach the Ubuntu ISO and start the VM up.
Install a standard Ubuntu OS using the text-based installer, and be sure to also select OpenSSH server when prompted for features to install. After the install completes, reboot, log in with the user account you created during the install, run ‘ifconfig’ to check the assigned IP address, and then use your favourite SSH client to connect to that IP. Using a PuTTY session will just make copy/pasting commands into your Ubuntu VM easier.
Now you’ll install Docker – the package includes both the Docker server (daemon) and the client.
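The exact steps may have changed since this was written, so check Docker’s installation documentation for the current repository and key details, but at the time the docker-engine install on Ubuntu 16.04 (xenial) looked roughly like this:

# Add Docker's apt repository and GPG key, then install docker-engine
# (repo URL and key ID as per Docker's Ubuntu 16.04 docs in mid-2016 - verify before use)
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install -y docker-engine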
Run the commands above in sequence, and after the apt-get install docker-engine at the end, run ‘sudo service docker status’ to check that Docker is running. You should see it listed as Active (running):
● docker.service – Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2016-07-06 21:20:39 BST; 2h 0min ago
Run a quick ‘docker info’ command to ensure that you get information back from Docker and that everything looks OK.
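A couple of quick sanity checks you can run at this point (the exact output will vary depending on your host; hello-world is just Docker’s standard test image):

sudo docker info              # summary of the daemon: containers, images, storage driver, etc.
sudo docker run hello-world   # optionally pull and run a tiny test container end-to-end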
I was on a vSphere upgrade review engagement recently, and part of this involved checking that the hardware and the existing vSphere virtual infrastructure (VI) were compatible with the targeted upgrade.
To help myself along, I created a few PowerCLI scripts to gather information to CSV for the VI parts – such as host versions, build numbers, VMware Tools and hardware versions, and so on. These scripts were built to be run once-off, either by copy/pasting them into your PowerCLI console or by running the script files directly from PowerCLI.
They can easily be adapted to collect other information relating to VMs or hosts. To run them, just launch PowerCLI, connect to the vCenter server in question (using Connect-VIServer) and then copy/paste them into the console. The output will be saved to CSV in your current working directory. If you would rather execute the script files directly from PowerCLI, just make sure you unblock the zip file once downloaded; otherwise the copy/paste option mentioned above will work fine too.
There are three scripts bundled in the zip file:
Gather all hosts under the connected vCenter server and output Host name, Model and Bios version results to PowerCLI window and CSV
Gather all hosts under the connected vCenter server and output Host name, Version and Build version results to PowerCLI window and CSV
Gather all VMs on hosts under the specified DC and output VM name and hardware version results to PowerCLI window and CSV
Short and simple scripts, but hopefully they will come in handy to some. As mentioned above, these can easily be extended to fetch other information about items in your environment – just take a look at the way the existing info is fetched and adapt from there (the sketch below shows the general pattern). Also remember that piping objects to gm (Get-Member) in PowerShell is your friend – you can discover all the properties and methods on PowerShell objects this way, and use them to enhance the reports/outputs in your scripts.
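As an illustration of that pattern (this is a minimal sketch rather than the exact bundled code, and the CSV file name is just an example), the host version/build script boils down to something like this:

# Gather all hosts under the connected vCenter and output Name, Version and Build
# to the PowerCLI window and a CSV in the current directory (file name is an example)
$Results = Get-VMHost | Sort-Object Name | Select-Object Name, Version, Build
$Results | Format-Table -AutoSize
$Results | Export-Csv -Path ".\HostVersionsAndBuilds.csv" -NoTypeInformation

Swap out the properties in the Select-Object for whatever Get-Member shows you on the host or VM objects, and you have a different report.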
The other weekend I managed to get some spare time to do another update to my ESXi 5.0 / 5.1 Host Backup & Restore GUI utility; it has now been updated to version 1.3. I didn’t post up the changes at the time, as the update was done by special request from one of my blog readers (thanks Flavio!). However, after receiving more comments from a few others having a similar issue to the one Flavio had, I thought I should definitely post the updated version here, which should hopefully solve the issues some people are seeing.
The changes are based on feedback received in the comments about the utility, relating to exceptions thrown when users in some circumstances try to back up their host configurations – specifically the exception message “Exception caught: Get-VMHost VMHost with name ‘xxx’ was not found using the specified filter(s).”
In a recent blog post, I showed a simple method of outputting a list of hosts with their friendly names alongside their MoRef (Managed Object Reference) names, enabling you to match up which host belongs to which MoRef. I wanted to take that a little further, with a function that returns the friendly name of a host from the MoRef that is input into the function.

I have used this in a larger reporting script, where I can only get the MoRef of a host via its property within a cluster object. Basically, I look for any failover hosts (admission control policy), which are held in an array, with the hosts listed as indexed objects of that array. They are also only displayed as MoRef names, so at this point, instead of inserting the MoRef into my results, I feed the MoRef into this function, get the friendly name back, and insert that instead – which allows the person reading the report to easily identify the host!

$cluster.ExtensionData.Configuration.DasConfig.AdmissionControlPolicy.FailoverHosts

In this example, $cluster is a particular cluster using the “specify a failover host” policy, and “FailoverHosts” is the array, with each object within containing a host MoRef. For example, FailoverHosts[0].Value would be one instance, and may equate to “HostSystem-host-28”.

So here is the function. It takes two mandatory parameters: -MoRef (the MoRef of the host in question, of course) and -Cluster (the name of the cluster to do the lookup in) – the function loops through each host in this cluster to look for a host that matches the input MoRef.
Function Get-VMHostByMoRef() {
<#
.SYNOPSIS
Fetches host name by input MoRef and Cluster to look in
.DESCRIPTION
Fetches host name by input MoRef and Cluster to look in
.PARAMETER MoRef
The MoRef of the host system
.PARAMETER Cluster
The name of the cluster to do the lookup in
.EXAMPLE
PS F:\> Get-VMHostByMoRef -MoRef HostSystem-host-28 -Cluster MyCluster01
.LINK
http://www.shogan.co.uk
.NOTES
Created by: Sean Duffy
Date: 22/02/2013
#>
[CmdletBinding()]
param(
[Parameter(Position=0,Mandatory=$true,HelpMessage="Specify the Host MoRef name you would like to query for its friendly name.",
ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
[String]$MoRef,

[Parameter(Position=1,Mandatory=$true,HelpMessage="Specify the Cluster to search hosts in.",
ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
[String]$Cluster
)
process {
# Get all hosts in the specified cluster, then return the name of the one whose MoRef matches the input
$AllHosts = Get-Cluster $Cluster | Get-VMHost
$thehost = $AllHosts | Where-Object {$_.ExtensionData.MoRef -match $MoRef} | Select-Object -Property Name
return $thehost
}
}
Here is a quick sample of the output when called directly from the PowerCLI prompt (note the MoRef of “host-22” used to find the real host name of “esxi02.homelab.local”). Hopefully this may be of use to some – add it to your PowerCLI script/function toolkit or throw it into your PowerShell $profile for easy access in the future!
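For reference, that sample call and its result look like this in text form (the cluster name is just a placeholder):

PS F:\> Get-VMHostByMoRef -MoRef host-22 -Cluster MyCluster01

Name
----
esxi02.homelab.local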
The other day I came across this issue. It was quite late at night, so it took me a little longer than I would have liked to realise what the problem actually was.
I had a vCenter 5.0 server which had not been joined to the local Active Directory domain, and my goal was to get it added to the AD domain along with the rest of the environment. After adding the vCenter server to the domain, rebooting, and checking that all the VMware services had started up correctly afterwards, I connected with the vSphere client and saw that all the ESXi hosts were in a disconnected state.
At this point I tried right-clicking a host and manually connecting it – this worked, but only for 60 seconds or so, and then it disconnected again. Whilst it was connected it was manageable, and of course all the VMs on each host were still fine. I tried restarting the management agents on a host and retrying the procedure, but this didn’t help either. My next step was to reboot an ESXi host that didn’t have anything critical running – still nothing at this point.
So I decided to consult the VMware vpxd log files on the vCenter server. Consult this VMware KB article to see where to find these logs.
Before opening the latest vpxd.log file, I tried the reconnect on a host again using the vSphere client, and watched for the disconnect. At the exact time I noticed the host appear disconnected again, I noted down the time on the system clock, then opened the vpxd logs to navigate to this time and take a look. Here is what I found:
2013-01-04T00:00:22.121Z [02504 warning 'Default'] [VpxdInvtHostSyncHostLRO] Connection not alive for host host-28
2013-01-04T00:00:22.121Z [02504 warning 'Default'] [VpxdInvtHost::FixNotRespondingHost] Returning false since host is already fixed!
2013-01-04T00:00:22.121Z [02504 warning 'Default'] [VpxdInvtHostSyncHostLRO] Failed to fix not responding host host-28
2013-01-04T00:00:22.121Z [02504 warning 'Default'] [VpxdInvtHostSyncHostLRO] Connection not alive for host host-28
2013-01-04T00:00:22.121Z [02504 error 'Default'] [VpxdInvtHostSyncHostLRO] FixNotRespondingHost failed for host host-28, marking host as notResponding
2013-01-04T00:00:22.126Z [02504 warning 'Default'] [VpxdMoHost] host connection state changed to [NO_RESPONSE] for host-28
This clearly shows the issue and points to it being a connectivity problem of some sort. Looking up these specific errors led me over to this VMware KB article, and it was at this point that it suddenly dawned on me – with the late night I had carelessly overlooked the Windows Firewall. Windows Firewall has a separate profile for domain networks, and this server had just joined the domain, so the existing firewall exceptions in place for vCenter, which were previously enabled for the “Public” profile, were not enabled for the “Domain” profile.
Timing the issue also revealed that it was 60 seconds before hosts disconnected again. So the problem here was that port 902, used for the host heartbeat between vCenter and the ESXi hosts, was being blocked by the Windows Firewall on the vCenter server. Simply enabling the rule for the “Domain” profile fixed the issue, and as soon as that was applied, all hosts reconnected by themselves. Of course I also took the time to ensure the other vCenter firewall exceptions were correctly configured.
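If you prefer to script that sort of fix rather than use the Windows Firewall console, something along these lines should open the heartbeat port for the Domain profile on the vCenter server (the rule name is just an example – in my case I simply enabled the existing vCenter rules for the Domain profile instead of adding a new one):

netsh advfirewall firewall add rule name="vCenter ESXi heartbeat (UDP 902)" dir=in action=allow protocol=UDP localport=902 profile=domain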
Lastly, when examining VMware log files and settings, you may come across references to VMs, hosts, or other VMware “objects” named “host-28” or “vm-07”, for example. This is VMware’s way of keeping track of objects by reference, using what is called a MoRef, or “Managed Object Reference”. You may know host-28 as esxi03.yourdomain.local, for example, so I thought I would include a handy tip for working out the Managed Object Reference name of an ESXi host to help with those vpxd.log diagnostics. Let’s say you find an interesting error mentioning MoRef “host-28”. You don’t know which host this is, so you can use PowerCLI to work out the MoRefs of the hosts in a cluster and then match the reference up to the actual host name. Use this bit of script to achieve this:
Get-VMHost | Sort Name | Select Name,@{Name="MoRef";Expression={$_.ExtensionData.MoRef}}
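The output looks something along these lines, one line per host (the host name and MoRef here are just the hypothetical examples used above):

Name                      MoRef
----                      -----
esxi03.yourdomain.local   HostSystem-host-28

From there it is a simple matter of matching the MoRef from the log entry against the list to find the host you are after.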