Wednesday, November 27, 2013

Adding License Headers to Meet Apache RAT Requirements


Source code contributions to Apache CloudStack must meet RAT requirements, which stipulate that source files include the Apache license header.


The Apache license header must be applied to each source and configuration file to be checked into the Apache CloudStack repo.  Applying this header to each file is very time consuming.


Create a bash script using ed.

Let's elaborate on what each step involves...

Using ed

The following bash script uses ed to apply the license header to the start of every file in every folder below the current folder.

FILES=$( find ./rdpclient-0.21 -type f )

for f in $FILES
do
        echo "processing $f file"
ed -s "$f" << EOF
0a
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements.  See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership.  The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License.  You may obtain a copy of the License at
//
//   http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied.  See the License for the
// specific language governing permissions and limitations
// under the License.
.
w
q
EOF
done

Non-Java files

The commenting style used in the script is suitable for Java and C#.

For bash files, XML files, and Windows .bat files, a different comment style is required.
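For example, a minimal variant of the loop for '#'-style comments might look like the following sketch. The sample directory, sample file, and shortened header are illustrative only, and a temporary file is used instead of ed for portability:

```shell
# Illustrative sketch: prepend a '#'-style header (shortened here) to every
# .sh file below a sample directory.  The directory and file are stand-ins.
dir=$(mktemp -d)
printf 'echo hello\n' > "$dir/sample.sh"

for f in $(find "$dir" -type f -name '*.sh')
do
    echo "processing $f file"
    { printf '%s\n' \
        '# Licensed to the Apache Software Foundation (ASF) under one' \
        '# or more contributor license agreements.  See the NOTICE file'
      cat "$f"
    } > "$f.tmp" && mv "$f.tmp" "$f"
done

head -n 1 "$dir/sample.sh"
```

The same loop works for XML by swapping in `<!-- ... -->` comment lines.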

Final Remarks

The script seems to miss files that start with a dot ('.').  To help make sure you've added the license header to all relevant files, use the mvn build's ability to check for RAT compliance.  E.g.

mvn --projects='org.apache.cloudstack:cloudstack' org.apache.rat:apache-rat-plugin:0.10:check
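As a quick local pre-check (a sketch, not part of the RAT plugin), grep -L can list files that never mention the license grant. The directory and files below are fabricated for illustration:

```shell
# Sketch: list files lacking the ASF header using grep -L, which reports
# files WITHOUT a match.  The two sample files are fabricated.
dir=$(mktemp -d)
printf '// Licensed to the Apache Software Foundation (ASF) under one\n' > "$dir/ok.java"
printf 'public class Bare {}\n' > "$dir/bare.java"

grep -r -L 'Licensed to the Apache Software Foundation' "$dir"
```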

Thursday, September 12, 2013

Hyper-V KVP Data Exchange for CloudStack


Virtual machines created from the same VM template will behave identically unless they are passed additional configuration information.

For instance, CloudStack uses a single, generic VM template as the basis of every system VM.  When a VM is created from this template, it is passed additional information that determines what services it will run.  Depending on the configuration, the VM might become a virtual router, a secondary storage VM, or a proxy for VM console access.


The CloudStack system VM template does not harvest configuration when it is running on Hyper-V 2012.  There needs to be a mechanism whereby this information is passed to the VM and harvested by the VM at start up.


Use the Hyper-V KVP Data Exchange to pass data to VMs at startup.  Update the template with integration services that write KVP data to disk.  Finally, update the System VM start up to harvest KVP data.

Let's elaborate on what each step involves...

Hyper-V KVP Data Exchange using WMI

KVP stands for Key Value Pair.  These key / value pairs originally modeled information that appears in a Windows registry entry: registry entries have a name (a key), and they contain data (a value).  However, KVPs are operating system agnostic.  For example, it is just as easy to store a set of key value pairs on disk in Linux as it is to put them in the registry on Windows.

KVP Data Exchange involves transmitting data between the host and the guest OS over the VMBus.  "VMBus" is the term Hyper-V uses for its inter-partition communication channel.  Specifically, the data exchanged consists of KVPs.  Depending on the source of the KVP, their data is transmitted from host to guest or guest to host.

KVP Data Exchange is accessed through the WMI APIs exposed by Hyper-V.  First, KVP data is placed in an Msvm_KvpExchangeDataItem.  Place the key in the "Name" field, the value in the "Data" field, and set the "Source" field to 0 to indicate we are pushing the KVP from host to guest.  E.g.

KvpExchangeDataItem kvpItem = KvpExchangeDataItem.CreateInstance();
kvpItem.LateBoundObject["Name"] = "cloudstack-vm-userdata";
kvpItem.LateBoundObject["Data"] = "username=root;password=1pass@word1";
kvpItem.LateBoundObject["Source"] = 0;

The example above uses C# WMI wrappers generated using Visual Studio.  If you must insist, source is available at.  Another alternative is to use PowerShell commands.

Next, register the KVP object with Hyper-V using the AddKvpItems method of the Msvm_VirtualSystemManagementService class.  I.e.

uint32 AddKvpItems(
  [in]   CIM_ComputerSystem REF TargetSystem,
  [in]   string DataItems[],
  [out]  CIM_ConcreteJob REF Job
);

If you are new to WMI, let me explain further:  The aim of WMI is to provide APIs that are platform agnostic.  Therefore, parameters for the method call have to be serialisable in a cross platform manner.  As a result, object references look more like a URI than a memory address.  E.g. here is a sample CIM_ComputerSystem reference used to call AddKvpItems:


Indeed, WMI documentation refers to such a reference as the WMI object path.

Likewise, the DataItems parameter is an array of Msvm_KvpExchangeDataItem objects serialised according to a WMI-specific format.  When it comes to serialisation, WMI follows standards set by the DMTF.  Specifically, the format used to encode a WMI object is CIM XML DTD 2.0.  Therefore, the DataItems parameter is an array of XML-serialised data.  E.g.

<INSTANCE CLASSNAME="Msvm_KvpExchangeDataItem">
  <PROPERTY NAME="Caption" PROPAGATED="true" TYPE="string"></PROPERTY>
  <PROPERTY NAME="Data" TYPE="string">
    <VALUE>username=root;password=1pass@word1</VALUE>
  </PROPERTY>
  <PROPERTY NAME="Description" PROPAGATED="true" TYPE="string"></PROPERTY>
  <PROPERTY NAME="ElementName" PROPAGATED="true" TYPE="string"></PROPERTY>
  <PROPERTY NAME="Name" TYPE="string">
    <VALUE>cloudstack-vm-userdata</VALUE>
  </PROPERTY>
  <PROPERTY NAME="Source" TYPE="uint16">
    <VALUE>0</VALUE>
  </PROPERTY>
</INSTANCE>

In C#, the System.Management namespace provides objects that create this XML for us.  Specifically, ManagementBaseObject.GetText performs the necessary serialisation.

Following on from the C# above, we call AddKvpItems in this snippet:

System.Management.ManagementBaseObject kvpMgmtObj = kvpItem.LateBoundObject;
System.Management.ManagementPath jobPath;
String kvpStr = kvpMgmtObj.GetText(System.Management.TextFormat.CimDtd20);
uint ret_val = vmMgmtSvc.AddKvpItems(new String[] { kvpStr }, 
                                     vm.Path, out jobPath);

Again, C# wrappers hide much complexity.

Adding a KVP item with Source '0' will cause the Hyper-V host to send the data; however, the guest needs to be set up to receive KVP data.

hv_kvp_daemon, the KVP Daemon

KVP data is transferred to the file system through the collaboration of a kernel driver and a user mode daemon.

The KVP driver code, hv_kvp.c, is compiled into the hv_utils kernel module.
Since the driver is part of the Linux kernel code, it is provided by default with recent versions of common Linux distributions.  E.g.

[root@centos6-4-hv ~]# cat /etc/*-release
CentOS release 6.4 (Final)
CentOS release 6.4 (Final)
CentOS release 6.4 (Final)
[root@centos6-4-hv ~]# modinfo -F filename hv_utils

However, it is the usermode daemon, hv_kvp_daemon, that copies KVP data to the file system.  On startup, hv_kvp_daemon creates files to store KVP data under
/var/lib/hyperv.
Each file is known as a 'pool'; there is one file per data pool.  E.g.

[root@centos6-4-hv hyperv]# ls -al /var/lib/hyperv/
total 36
drwxr-xr-x.  2 root root  4096 Sep 11 21:33 .
drwxr-xr-x. 16 root root  4096 Sep 10 13:59 ..
-rw-r--r--.  1 root root  2560 Sep 10 17:05 .kvp_pool_0
-rw-r--r--.  1 root root     0 Sep 10 14:02 .kvp_pool_1
-rw-r--r--.  1 root root     0 Sep 10 14:02 .kvp_pool_2
-rw-r--r--.  1 root root 28160 Sep 10 14:02 .kvp_pool_3
-rw-r--r--.  1 root root     0 Sep 10 14:02 .kvp_pool_4

The number at the end of each file name is the pool number, which corresponds to the KVP source.  E.g. remember that source '0' is used for transmitting data from host to guest?  That means our KVP data is in /var/lib/hyperv/.kvp_pool_0.  E.g.

[root@centos6-4-hv hyperv]# cat /var/lib/hyperv/.kvp_pool_0
cloudstack-vm-userdatausername=root;password=1pass@word1[root@centos6-4-hv hyperv]#

Aside:  there are five pools, but only four sources are listed for KVP data exchange.
It appears that pool '3' contains predefined KVPs sent by the host, such as the host machine's name.

With this in mind, it is important that hv_kvp_daemon be installed and an init script added.

Microsoft provides the daemon, hv_kvp_daemon, in a package suited for RHEL.

However, it may be easier to search for a package specific to your distro.  E.g. I found hv_kvp_daemon in the hypervkvpd package.  For CentOS 6.4, simply call:
yum install hypervkvpd

Unfortunately, I have not found a package for Debian.  An alternative is to compile the source and write an init script.  There is a sample here.

Once you are able to create the file, you need to be able to parse it...

Harvesting Hyper-V KVP Data in Linux

KVP data files contain an array of key / value pairs.  Each key and each value is a byte array of a fixed size:

/*
 * Maximum key size - the registry limit for the length of an entry name
 * is 256 characters, including the null terminator
 */
#define HV_KVP_EXCHANGE_MAX_KEY_SIZE            (512)

/*
 * bytes, including any null terminators
 */
#define HV_KVP_EXCHANGE_MAX_VALUE_SIZE          (2048)

The byte array contains a UTF-8 encoded string, which is padded out to the max size with null characters.  However, null termination is not guaranteed (see kvp_send_key).

Provided there is only one key and the key name is known, the easiest way to parse the file is to use sed.  To remove the null characters and the key name used in our example, you would use the following:

[root@centos6-4-hv hyperv]# cat /var/lib/hyperv/.kvp_pool_0 | sed 's/\x0//g' | sed 's/cloudstack-vm-userdata//g' > userdata
[root@centos6-4-hv hyperv]# more userdata
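Given the fixed record layout described above (512-byte key, 2048-byte value), a more general, record-aware parse is also possible with dd. The pool file below is fabricated so the sketch is self-contained:

```shell
# Sketch: record-aware parse of a KVP pool file, assuming the 512-byte key /
# 2048-byte value layout.  A fabricated pool file stands in for the real one.
dir=$(mktemp -d)
pool="$dir/.kvp_pool_0"
{ printf 'cloudstack-vm-userdata'; head -c 490 /dev/zero
  printf 'username=root;password=1pass@word1'; head -c 2014 /dev/zero
} > "$pool"

# Record 0: key occupies bytes 0-511, value occupies bytes 512-2559.
key=$(dd if="$pool" bs=1 count=512 2>/dev/null | tr -d '\0')
value=$(dd if="$pool" bs=1 skip=512 count=2048 2>/dev/null | tr -d '\0')
echo "$key=$value"
```

Further records, if present, follow at 2560-byte offsets.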

Wednesday, August 14, 2013

Building Your Microsoft Solution with Mono


The agent for the CloudStack Hyper-V plugin was written in C# using the Microsoft Visual Studio tool chain.

However, Apache CloudStack source should be able to be built using open source tools.


How do you compile an ASP.NET MVC4 web app written in C# using an open source tool chain?


The latest version of Mono includes a tool called Xbuild, which can consume the .sln and .csproj files that Visual Studio generates for your solution.

However, you will have to make some updates to the way your project fetches NuGet dependencies and to the projects that get built.

Let's cover all the steps involved one at a time.

Install Mono

First, install the latest release of Mono 3.x.  This version introduces support for C# 5.0 and ships with the ASP.NET WebStack (Release Notes).

Although a package for this version is not advertised on the Mono Downloads page, the Mono archives include a 3.0.10 Windows .msi.

Alternatively, there is a 3.x Debian package available.  For this package, update your apt sources and use apt-get to install the mono-complete package, which contains the tool chain.  E.g.
sed -e "\$adeb /" -i /etc/apt/sources.list
apt-get update
apt-get install mono-complete

Mono ships with an empty certificate store.  The store needs to be populated with common certificates in order for HTTPS to work.  Use the mozroots tool to do this.  E.g.
root@mgmtserver:~/github/cshv3/plugins/hypervisors/hyperv/DotNet# mozroots --import --sync --machine
Mozilla Roots Importer - version
Download and import trusted root certificates from Mozilla's MXR.
Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.

Downloading from ''...
Importing certificates into user store...
Import process completed.

NB: whether you add the certs to your user store (mozroots --import --sync) or the machine store (mozroots --import --sync --machine) depends on what user is used to run web requests.  On Debian 7.0, I found that the machine certificate store had to be updated.

Understand NuGet Packages

NuGet is a package manager for the .NET platform.  These packages consist of assemblies used at compile time and runtime.  The packages are stored in perpetuity on the NuGet website, and fetched by a similarly named command line tool.

Each Visual Studio project lists its NuGet dependencies in the packages.config file, which is in the same folder as the project's .csproj file.  E.g.
Administrator@cc-svr10 ~/github/cshv3/plugins/hypervisors/hyperv/DotNet/ServerResource
$ cat ./HypervResource/packages.config
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="AWSSDK" version="" targetFramework="net45" />
  <package id="DotNetZip" version="" targetFramework="net45" />
  <package id="log4net" version="2.0.0" targetFramework="net45" />
  <package id="Newtonsoft.Json" version="4.5.11" targetFramework="net45" />
</packages>

By default, your Visual Studio project will search for these assemblies in the packages directory in the folder containing the .sln file.  E.g.
$ cat ./HypervResource/HypervResource.csproj
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="">
    <Reference Include="AWSSDK">

Create a NuGet target

A NuGet target is a build script that downloads packages to the packages folder in advance of the compile step of your build.  After this step, the compiler will be able to resolve the assembly dependencies.

Using a NuGet target to automate downloading is a better solution than adding the assemblies to your source code repository or creating a custom download script.  The NuGet target keeps package dependencies at arm's length: checking binaries into your repository carries the legal risk of redistributing other organisations' binaries.  Also, the NuGet target requires much less maintenance work: once set up, the list of files to download follows the projects being compiled.

To create the target, update your .sln and .csproj files with the NuGet-related tags using Visual Studio.  Simply open the .sln, and select Project -> Enable NuGet Package Restore.  This creates a .nuget folder with an msbuild task in the NuGet.targets file, its configuration in NuGet.Config, and the command line tool NuGet.exe.  E.g.
Administrator@cc-svr10 ~/github/cshv3/plugins/hypervisors/hyperv/DotNet/ServerResource
$ find ./ | grep NuGet

You will want to add the build script (NuGet.targets) and its configuration (NuGet.Config) to your source code repository.  NuGet.exe should be downloaded separately, as we will explain next.

Update the NuGet target to run with XBuild

Although Mono's Xbuild tool will execute NuGet.targets automatically, Xbuild does not support enough MSBuild tasks to execute the default NuGet.targets script.

First, you will have to create a script to download NuGet.exe, because Xbuild does not yet support the embedded Task tag (Source).  E.g. if wget is installed, you could use the following instructions:
cp nuget.exe ./.nuget/NuGet.exe
chmod a+x ./.nuget/NuGet.exe
Note the change of case: the generated code expects NuGet.exe.

Secondly, if you plan to build on a Windows OS, you will have to update NuGet.targets
to bypass unsupported tags.  Normally, NuGet.targets uses the OS property to bypass unsupported tags automatically.  However, this only works when Mono is used on Linux.  E.g. in the example below, an unsupported property function is avoided when the OS is not "Windows_NT".
<PropertyGroup Condition=" '$(OS)' == 'Windows_NT'">
    <!-- Windows specific commands -->
    <NuGetToolsPath>$([System.IO.Path]::Combine($(SolutionDir), ".nuget"))</NuGetToolsPath>
    <PackagesConfig>$([System.IO.Path]::Combine($(ProjectDir), "packages.config"))</PackagesConfig>
</PropertyGroup>

<PropertyGroup Condition=" '$(OS)' != 'Windows_NT'">
    <!-- We need to launch nuget.exe with the mono command if we're not on windows -->
</PropertyGroup>

To be able to build on Windows, you can either delete the Windows version or revise the conditionals to use a custom property to determine if xbuild is being used.  E.g. in the example below, we
bypass the unsupported function property using a test for the property BuildWithMono.
<PropertyGroup Condition=" '$(OS)' == 'Windows_NT'">
    <!-- Windows specific commands -->
    <!-- <NuGetToolsPath>$([System.IO.Path]::Combine($(SolutionDir), ".nuget"))</NuGetToolsPath> -->
    <!-- <PackagesConfig>$([System.IO.Path]::Combine($(ProjectDir), "packages.config"))</PackagesConfig> -->
</PropertyGroup>

<PropertyGroup Condition=" '$(OS)' != 'Windows_NT'">
    <!-- We need to launch nuget.exe with the mono command if we're not on windows -->
</PropertyGroup>

To trigger the bypass, we set BuildWithMono when calling Xbuild.  E.g.
xbuild /p:BuildWithMono="true" ServerResource.sln

What code needs bypassing?

  1. Mono Xbuild cannot interpret property functions.  These are properties whose values are the results of executing an inline function call.  E.g.
     <!-- Windows specific commands -->
     <NuGetToolsPath>$([System.IO.Path]::Combine($(SolutionDir), ".nuget"))</NuGetToolsPath>
     <PackagesConfig>$([System.IO.Path]::Combine($(ProjectDir), "packages.config"))</PackagesConfig>
  2. Mono does not implement all tasks precisely.  E.g. the Exec tag is missing the LogStandardErrorAsError property available with .NET.  Thus,
     <Exec Command="$(RestoreCommand)"
           Condition="'$(OS)' == 'Windows_NT' And Exists('$(PackagesConfig)')" />
     produces: Error executing task Exec: Task does not have property "LogStandardErrorAsError" defined
  3. Xbuild must call NuGet.exe through mono.  E.g.
     <NuGetCommand Condition=" '$(OS)' == 'Windows_NT'">"$(NuGetExePath)"</NuGetCommand>
     <NuGetCommand Condition=" '$(OS)' != 'Windows_NT' ">mono --runtime=v4.0.30319 $(NuGetExePath)</NuGetCommand>

Skip unsupported projects

Xbuild cannot compile projects that require additional proprietary assemblies not available through NuGet or the Mono implementation.  For example, unit tests created using Visual Studio Test Tools make use of Visual Studio-specific assemblies.  E.g.
xbuild ServerResource.sln
HypervResourceControllerTest.cs(18,17): error CS0234: The type or namespace name `VisualStudio' does not exist in the namespace `Microsoft'. Are you missing an assembly reference?
In this case, HypervResourceControllerTest.cs(18,17) is a reference to Visual Studio test tools:
using Microsoft.VisualStudio.TestTools.UnitTesting;
To create a new configuration in your solution that skips compiling these projects, follow these steps:
  1. In Visual Studio, create a new configuration that excludes the projects you're not interested in (Build -> Configuration Manager..., then the Active solution platform: drop down list).
  2. Using the Configuration Manager, remove unwanted projects from the configuration.
  3. Next, close Visual Studio, which will save changes to the .sln and .csproj files.  The .sln will record which projects are associated with the configuration.  The .csproj will record the settings of the configuration, such as whether TRACE or DEBUG is defined.
  4. Finally, when calling xbuild, assign your configuration's name to the Configuration property.

xbuild /p:Configuration="NoUnitTests" /p:BuildWithMono="true" ServerResource.sln
The above will build the projects associated with the NoUnitTests configuration.

Final Remarks:

Mono's support keeps getting better with every release.  The quirks discussed in this post may have been addressed by the time you read it.

Thursday, August 08, 2013

Using CloudStack's Log Files: XenServer Integration Troubleshooting


CloudStack uses the XenServer plugin model to make extensions to XenAPI, aka XAPI.  These extensions are written in a combination of Python and Bash scripts.  Python is the programming language for the XenServer plugin model; however, calls to command line tools are more natural to write in a bash script.

These XenServer extensions and the management server plugins that use them generate logging information to assist developers and admins in diagnosing problems.


Which log files are useful, where are they, and how do I use them?


The management server uses the log4j logging library to generate three logs: 
  • the general log, aka FILE
  • the CloudStack API call log, aka APISERVER
  • the AWS API log, aka AWSAPI.  
The files each log writes to are identified in the <appender> section of the log4j config file. Specifically, the file name is in the ActiveFileName parameter.
XenServer logs sit on the Dom0 operating system in /var/log (source).  CloudStack's XAPI extensions write to /var/log/SMlog.  Although all XAPI events are summarised in /var/log/xensource.log, this log is not recommended.  The information available is limited and difficult to find due to the large number of events being logged.

In the following sub sections, we 
  • explain how to find the management server logs for a development CloudStack
  • explain how to find the management server logs for a production CloudStack
  • explain how the XenServer logs are generated
  • walk through an example of using the logs to diagnose a problem

Developer CloudStack Management Server Logs

To find the exact log file locations, first find the log4j configuration file using grep.  

Developers that launch CloudStack in Jetty should look under the ./client folder for the log4j configuration.  The ./client folder contains the files that execute when mvn -pl :cloud-client-ui jetty:run is executed.  E.g.
 root@mgmtserver:~/github/cshv3# find ./client/ | grep "log4j.*\.xml"  
Next, examine the <appender> sections of the files.  E.g.
 root@mgmtserver:~/github/cshv3# more ./client/target/generated-webapp/WEB-INF/classes/log4j-cloud.xml   
   <!-- ================================= -->   
   <!-- Preserve messages in a local file -->   
   <!-- ================================= -->   
   <!-- A regular appender FIXME implement code that will close/reopen logs on SIGHUP by logrotate FIXME make the paths configurable using the build system -->   
   <appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">   
   <param name="Append" value="true"/>   
   <param name="Threshold" value="TRACE"/>   
   <rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">   
    <param name="FileNamePattern" value="vmops.log.%d{yyyy-MM-dd}.gz"/>   
    <param name="ActiveFileName" value="vmops.log"/>   
   <layout class="org.apache.log4j.EnhancedPatternLayout">   
    <param name="ConversionPattern" value="%d{ISO8601} %-5p [%c{1.}] (%t:%x) %m%n"/>   
In this case, vmops.log is the general log file.

Production CloudStack Management Server Logs

With production servers, the process is the same:  look for the log4j config.  

In the case of Citrix CloudPlatform 3.0.x, we would see the following:
  [root@cs1-mgr management]# find / | grep "log4j.*\.xml"   
Next, examine the <appender> sections of the files.  E.g. 
 [root@cs1-mgr management]# more /etc/cloud/management/log4j-cloud.xml  
 <?xml version="1.0" encoding="UTF-8"?>  
 <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">  
 <log4j:configuration xmlns:log4j="" debug="false">  
   <throwableRenderer class=""/>  
   <!-- ================================= -->  
   <!-- Preserve messages in a local file -->  
   <!-- ================================= -->  
   <!-- A regular appender FIXME implement code that will close/reopen logs on SIGHUP by logrotate FIXME make the paths configurable using the build system -->  
   <appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">  
    <param name="Append" value="true"/>  
    <param name="Threshold" value="TRACE"/>  
    <rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">  
     <param name="FileNamePattern" value="/var/log/cloud/management/management-server.log.%d{yyyy-MM-dd}.gz"/>  
     <param name="ActiveFileName" value="/var/log/cloud/management/management-server.log"/>  
    <layout class="org.apache.log4j.EnhancedPatternLayout">  
      <param name="ConversionPattern" value="%d{ISO8601} %-5p [%c{3}] (%t:%x) %m%n"/>  
Although there are two log4j configuration files, they point to the same logs.  In both cases the general log is /var/log/cloud/management/management-server.log.

CloudStack Logging on XenServer

CloudStack logs are generated by the XAPI extensions that CloudStack adds to the XenServer host.  These extensions are deployed in /etc/xapi.d/plugins folder.  E.g.
 [root@camctxlabs ~]# find /etc/xapi.d/plugins | grep vmops  
The XAPI extensions are written in Python, but they make use of bash shell scripts in the /opt/xensource/bin folder.  E.g.
 [root@camctxlabs ~]# find /opt/xensource/bin | grep secondarystorage  
The XAPI extensions use functions from the sm.util module.  The logging functions in this module write to the SMlog log file.  E.g. in the code below, echo(fn) uses util.SMlog to update the log file /var/log/SMlog.  The @echo decorator in front of copy_vhd_to_secondarystorage causes echo(fn) to be executed in its place; echo is passed the copy_vhd_to_secondarystorage function as a parameter.
 [root@camctxlabs ~]# more /etc/xapi.d/plugins/vmopspremium  
 # Licensed to the Apache Software Foundation (ASF) under one  
 def echo(fn):  
   def wrapped(*v, **k):  
     name = fn.__name__  
     util.SMlog("#### VMOPS enter %s ####" % name)  
     res = fn(*v, **k)  
     util.SMlog("#### VMOPS exit %s ####" % name)  
     return res  
   return wrapped  

 @echo  
 def copy_vhd_to_secondarystorage(session, args):  
   mountpoint = args['mountpoint']  
   vdiuuid = args['vdiuuid']  
   sruuid = args['sruuid']  
   try:  
     cmd = ["bash", "/opt/xensource/bin/", mountpoint, vdiuuid, sruuid]  
     txt = util.pread2(cmd)  
   except:  
     txt = '10#failed'  
   return txt  
Logging messages are also generated by exception handling code in the sm.util module.  E.g. in the code above, util.pread2 is used to execute bash scripts.  If an error occurs, it will be reported to the SMlog file.
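Because every plugin call is bracketed by the "#### VMOPS enter/exit ####" markers written by echo(fn), a grep for VMOPS narrows SMlog quickly. The log excerpt below is fabricated to keep the sketch self-contained:

```shell
# Sketch: isolate plugin entry/exit markers in SMlog.  A fabricated excerpt
# stands in for /var/log/SMlog here.
log=$(mktemp)
cat > "$log" << 'EOF'
[2106] 2013-08-05 10:07:34.249054    #### VMOPS enter copy_vhd_from_secondarystorage ####
[2127] 2013-08-05 10:07:34.835447    FAILED: (rc 22) stdout: 'options: ...'
[2106] 2013-08-05 10:07:35.021318    #### VMOPS exit copy_vhd_from_secondarystorage ####
EOF

# Each enter marker should have a matching exit marker; a lone enter often
# points at the failing extension call.
grep 'VMOPS' "$log"
```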

Example of Using the CloudStack Logs

In this example, I find out that I accidentally downloaded an out of date vhd-utils to my XenServer.

To start, I followed the instructions on how to build from master.  These told me how to build and run the latest management server.  After configuring a zone with S3 secondary storage, NFS staging storage, and a XenServer cluster, I waited for the system VM template to download and for the management server to start its secondary storage VM.  After a while, I noticed that the secondary storage system VM did not start.  Also, I saw that exceptions were appearing in console I used to launch the management server.  Therefore, I knew I had a problem.

The first step was to look for the exceptions that occurred in the management server log.  This is the vmops.log file, because I'm using a developer environment.

NB:  Look for the first exception and the logs just before it.  

Here's what I found:
 2013-08-05 10:16:01,853 DEBUG [c.c.h.d.HostDaoImpl] (ClusteredAgentManager Timer:null) Completed acquiring hosts for clusters not owned by any management server  
 2013-08-05 10:16:02,403 WARN [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-11:null) destoryVDIbyNameLabel failed due to there are 0 VDIs with name cloud-8e789d62-3062-4a13-8235-35ca49b7b924  
 2013-08-05 10:16:02,404 WARN [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-11:null) can not create vdi in sr 8544bcca-f54d-cbc5-151d-b91c822addba  
 WARN [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-11:null) Catch Exception for template + due to can not create vdi in sr 8544bcca-f54d-cbc5-151d-b91c822addba can not create vdi in sr 8544bcca-f54d-cbc5-151d-b91c822addba  
     at java.util.concurrent.Executors$  
     at java.util.concurrent.FutureTask$Sync.innerRun(  
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(  
     at java.util.concurrent.ScheduledThreadPoolExecutor$  
     at java.util.concurrent.ThreadPoolExecutor.runWorker(  
     at java.util.concurrent.ThreadPoolExecutor$  
 INFO [c.c.v.VirtualMachineManagerImpl] (secstorage-1:ctx-8124c385) Unable to contact resource. Resource [StoragePool:1] is unreachable: Unable to create Vol[1|vm=1|ROOT]:Catch Exception for template + due to can not create vdi in sr 8544bcca-f54d-cbc5-151d-b91c822addba  
Next, look at all the management server code that was executing at the time.  This code ends at the first line named in the stack dump and starts around the last log message.  E.g.
 root@mgmtserver:~/github/cshv3# grep -n -R "destoryVDIbyNameLabel" * --include *.java  
 plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/        s_logger.warn("destoryVDIbyNameLabel failed due to there are " + vdis.size() + " VDIs with name " + nameLabel);  
 plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/        s_logger.warn("destoryVDIbyNameLabel failed due to there are " + vdis.size() + " VDIs with name " + nameLabel);  
Have a look at the source code in this area.  E.g.
   private String copy_vhd_from_secondarystorage(Connection conn, String mountpoint, String sruuid, int wait) {  
     String nameLabel = "cloud-" + UUID.randomUUID().toString();  
     String results = hypervisorResource.callHostPluginAsync(conn, "vmopspremium", "copy_vhd_from_secondarystorage",  
         wait, "mountpoint", mountpoint, "sruuid", sruuid, "namelabel", nameLabel);  
     String errMsg = null;  
     if (results == null || results.isEmpty()) {  
       errMsg = "copy_vhd_from_secondarystorage return null";  
     } else {  
       String[] tmp = results.split("#");  
       String status = tmp[0];  
       if (status.equals("0")) {  
         return tmp[1];  
       } else {  
         errMsg = tmp[1];  
     String source = mountpoint.substring(mountpoint.lastIndexOf('/') + 1);  
     if( hypervisorResource.killCopyProcess(conn, source) ) {  
       destroyVDIbyNameLabel(conn, nameLabel);  
     throw new CloudRuntimeException(errMsg);  
In the source above, we see that a call was being made to the XenServer extension, which suggests we should look at the SMlog on the XenServer itself for that period of time.  

NB:  The SMlog file is regularly archived, so make a copy of the file.  If it does not cover the time period, look at the dates of the archived versions to find the useful one.  Make a copy and use gunzip to uncompress it.
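A sketch of that copy-and-uncompress step, assuming logrotate-style archive names such as SMlog.1.gz (names vary by XenServer version); a tiny fabricated archive stands in for the real one:

```shell
# Sketch: work on a copy of an archived SMlog, never the archive itself.
dir=$(mktemp -d)
printf 'old log line\n' > "$dir/SMlog.1"
gzip "$dir/SMlog.1"                         # stand-in for an existing archive

cp "$dir/SMlog.1.gz" "$dir/SMlog.copy.gz"   # copy first
gunzip "$dir/SMlog.copy.gz"                 # leaves SMlog.copy for inspection
cat "$dir/SMlog.copy"
```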

In my case, the SMlog file shows that a call to the copy_vhd_from_secondarystorage extension threw an exception.  E.g.

 [2106] 2013-08-05 10:07:34.249054    #### VMOPS enter copy_vhd_from_secondarystorage ####  
 [2106] 2013-08-05 10:07:34.249197    ['bash', '/opt/xensource/bin/', '', '8544bcca-f54d-cbc5-151d-b91c822addba', 'cloud-8e789d62-3062-4a13-8235-35ca49b7b924']  
 [2127] 2013-08-05 10:07:34.824402    ['/usr/sbin/vhd-util', 'create', '--debug', '-n', '/dev/VG_XenStorage-8544bcca-f54d-cbc5-151d-b91c822addba/VHD-9a7e2a96-511b-4d94-a745-2619abb99919', '-s', '2000', '-S', '2097152']  
 [2127] 2013-08-05 10:07:34.835447    FAILED: (rc 22) stdout: 'options: <-n name> <-s size (MB)> [-r reserve] [-h help]  
 ', stderr: 'create: invalid option -- S  
 [2127] 2013-08-05 10:07:34.835849    lock: released /var/lock/sm/8544bcca-f54d-cbc5-151d-b91c822addba/sr  
 [2127] 2013-08-05 10:07:34.843383    ***** vdi_create: EXCEPTION util.CommandException, 22  
  File "/opt/xensource/sm/", line 94, in run  
   return self._run_locked(sr)  
 [2106] 2013-08-05 10:07:35.021318    #### VMOPS exit copy_vhd_from_secondarystorage ####  
The cause of the exception is calling vhd-util with the invalid option -S.  Indeed, the version of vhd-util in use on this machine does not provide the '-S' option.  E.g.
 [root@camctxlabs ~]# ./vhd-util create -help  
 options: <-n name> <-s size (MB)> [-r reserve] [-h help]  
Whereas another XenServer in my data center does not have this problem.  E.g.
 [root@camctxlabs2 ~]# vhd-util create -help  
 options: <-n name> <-s size (MB)> [-r reserve] [-h help] [<-S size (MB) for metadata preallocation (see vhd-util resize)>]  
From this, I learnt that I need to find an updated version of vhd-util.

Final Remarks

The system VMs and web container for the management server have additional log files.  In the case of Citrix CloudPlatform these are listed here.

Thursday, August 01, 2013

Diagnosing Maven Dependency Problems


Maven provides dependency management.  You specify the immediate dependencies of each project, and Maven works out the transitive dependencies.

A transitive dependency list includes indirect dependencies.  E.g. if A depends on B and B depends on C, A's transitive dependency list is B and C.


Occasionally, after regular maintenance, the Maven projects no longer satisfy all dependencies at runtime.

In my case, Maven failed to make the commons-io package available at runtime.  E.g. 
 mvn -pl :cloud-client-ui jetty:run  
 INFO [ConfigurationServerImpl] (Timer-2:null) SSL keystore located at /root/github/cshv3/client/target/generated-webapp/WEB-INF/classes/cloud.keystore  
 Exception in thread "Timer-2" java.lang.NoClassDefFoundError: org/apache/commons/io/FileUtils  
 Caused by: java.lang.ClassNotFoundException:  
     at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(  
However, commons-io is a known dependency, which can be demonstrated by using the mvn dependency:tree command.  E.g.
 root@mgmtserver:~/github/cshv3/client# mvn dependency:tree  
 [INFO] Scanning for projects...  
 [INFO] ------------------------------------------------------------------------  
 [INFO] Building Apache CloudStack Client UI 4.2.0-SNAPSHOT  
 [INFO] ------------------------------------------------------------------------  
 [INFO] --- maven-dependency-plugin:2.5.1:tree (default-cli) @ cloud-client-ui ---  
 [INFO] org.apache.cloudstack:cloud-client-ui:war:4.2.0-SNAPSHOT  
 [INFO] +- commons-io:commons-io:jar:1.4:provided  


Inspect the effective POM to see how POM inheritance has modified the dependency specification.

The command mvn help:effective-pom will return the pom.xml as modified by properties inherited from its parent and other pom.xml files in its inheritance hierarchy.  Run the command in the same folder as the pom.xml you are interested in.

E.g. for mvn -pl :cloud-client-ui jetty:run, first find the folder for the cloud-client-ui project.
 root@mgmtserver:~/github/cshv3# grep -R cloud-client-ui * --include=pom.xml  
 client/pom.xml: <artifactId>cloud-client-ui</artifactId>  

Then use mvn help:effective-pom to see how the dependency specification for commons-io differs from that of other files.

 root@mgmtserver:~/github/cshv3# cd client  
 root@mgmtserver:~/github/cshv3/client# mvn help:effective-pom  
Notice that commons-io has the <scope> element, which changes the dependency scope.  Dependency scope tells Maven when it is responsible for satisfying a dependency.  A scope of provided tells Maven not to bother adding commons-io to the runtime environment.  Specifically, provided "indicates you expect the JDK or a container to provide the dependency at runtime."

Therefore, when you look at the list of libraries Maven makes available at runtime, commons-configuration is available, but commons-io is not.  E.g.
 root@mgmtserver:~/github/cshv3# ls -al client/target/cloud-client-ui-4.2.0-SNAPSHOT/WEB-INF/lib | grep commons  
 -rw-r--r-- 1 root root  168760 Jul 16 17:23 commons-beanutils-core-1.7.0.jar  
 -rw-r--r-- 1 root root  232771 Jul 16 17:23 commons-codec-1.6.jar  
 -rw-r--r-- 1 root root  571259 Jul 22 10:14 commons-collections-3.2.jar  
 -rw-r--r-- 1 root root  354491 Jul 16 17:23 commons-configuration-1.8.jar  
 -rw-r--r-- 1 root root  24242 Jul 16 20:06 commons-daemon-1.0.10.jar  
 -rw-r--r-- 1 root root  160519 Jul 16 17:23 commons-dbcp-1.4.jar  
 -rw-r--r-- 1 root root  53082 Jul 16 17:23 commons-fileupload-1.2.jar  
 -rw-r--r-- 1 root root  305001 Jul 16 17:24 commons-httpclient-3.1.jar  
 -rw-r--r-- 1 root root  284220 Jul 16 17:23 commons-lang-2.6.jar  
 -rw-r--r-- 1 root root  60686 Jul 16 17:23 commons-logging-1.1.1.jar  
 -rw-r--r-- 1 root root  111119 Jul 16 17:23 commons-pool-1.6.jar  
 -rw-r--r-- 1 root root  34407 Jul 16 20:08 ws-commons-util-1.0.2.jar  
Remove the <scope>provided</scope> element from the inherited dependency, and the problem disappears.
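For reference, a sketch of the change (the artifact coordinates come from the dependency tree above; the surrounding POM content is elided):

```xml
<!-- Before: the inherited scope keeps commons-io out of WEB-INF/lib -->
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>1.4</version>
  <scope>provided</scope>
</dependency>

<!-- After: default (compile) scope, so Maven bundles the jar at runtime -->
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>1.4</version>
</dependency>
```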

Final Remarks:

If you run into a dependency problem, tell the Apache CloudStack mailing list.  Dependency problems are difficult to spot.  Pointing them out greatly helps the project keep the build in good shape.

Monday, July 29, 2013

Using CloudMonkey to Automate CloudStack Operations


The CloudStack GUI does not suit repetitive tasks.  There is no macro mechanism in the GUI to allow an admin to record and replay long workflows.  Multi-step tasks such as the setup of a new zone or the registration of a template must be done by hand and are error prone.

Developers can automate CloudStack workflows with the CloudMonkey tool.  CloudMonkey provides a means of making CloudStack API calls from the command line, and thus from a script.


The GUI does not tell you which API calls and parameters it is using for a task.  This makes it difficult to reproduce the same functionality in a CloudMonkey script.


Parse the management server log file to see the sequence of commands executed during a GUI task.  Once the commands and parameters are known, reconstruct the steps in CloudMonkey.

Parse the CloudStack log file:

The management server logs the beginning and end of all API calls in a log file.  In the case of a development system, the log file is usually the file vmops.log in the root of the source tree. 

Use grep to obtain a list of API call log entries:

 grep 'command=' vmops.log > all_api_logs.txt 

The result is quite raw.  It will require additional clean up.  E.g.:
  root@mgmtserver:~/github/cshv3# grep 'command=' vmops_createtmplt_sh_problem.log > all_api_calls.txt   
  root@mgmtserver:~/github/cshv3# cat all_api_calls.txt   
  2013-07-17 08:59:50,522 DEBUG [cloud.api.ApiServlet] (343904103@qtp-1389504071-7:null) ===START=== -- GET command=listCapabilities&response=json&sessionkey=null&_=1374047990517   
  2013-07-17 08:59:50,540 DEBUG [cloud.api.ApiServlet] (343904103@qtp-1389504071-7:null) ===END=== -- GET command=listCapabilities&response=json&sessionkey=null&_=1374047990517   

Next, remove unwanted log entries using sed:

 sed -e '/^.*command=log/d; /^.*===END===/d; /^.*command=queryAsyncJobResult/d' all_api_logs.txt > ./reqd_api_logs.txt
How does this work?

Using the -e parameter, we pass sed a list of commands separated by a semicolon.  The meaning of each command is as follows:

/^.*command=log/d deletes login and logout commands.

/^.*===END===/d removes the second log message for a call, which is made at the end of the API call.

/^.*command=queryAsyncJobResult/d removes polling commands that the GUI uses to determine if an asynchronous command has completed.  We will use CloudMonkey in blocking mode, which means it will do the queryAsyncJobResult calls for us.
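The filter in action on a handful of made-up log lines (real entries are longer); only the createZone START entry survives:

```shell
# Sample log lines, invented for the demo
printf '%s\n' \
  'DEBUG ===START=== -- GET command=login&response=json' \
  'DEBUG ===START=== -- GET command=createZone&name=Z1&response=json' \
  'DEBUG ===END=== -- GET command=createZone&name=Z1&response=json' \
  'DEBUG ===START=== -- GET command=queryAsyncJobResult&jobid=1' \
  > all_api_logs.txt

sed -e '/^.*command=log/d; /^.*===END===/d; /^.*command=queryAsyncJobResult/d' \
  all_api_logs.txt > reqd_api_logs.txt
cat reqd_api_logs.txt
# → DEBUG ===START=== -- GET command=createZone&name=Z1&response=json
```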

Next, convert log entries to commands:

 sed -e 's/^.*command=//; s/&/ /g; s/_=.*//; s/sessionkey=[^ ]*//; s/response=[^ ]*//' ./reqd_api_logs.txt > ./encoded_api_calls.txt

How does this work?

s/^.*command=// removes everything from the start of the line up to and including "command=".  We want everything after command=, because that is the actual command.

s/&/ /g replaces the '&' used to separate arguments in the API call with a space.  It's more readable, and CloudMonkey wants us to separate parameters with a space.

s/_=.*// removes the 'cache buster' that prevents network infrastructure from responding to the HTTP request with a cached result.

s/sessionkey=[^ ]*// removes the session key.  CloudMonkey uses API keys.  Besides, the sessionkey will have expired by now!

s/response=[^ ]*// removes the response encoding parameter from the request.  CloudMonkey will insert a suitable version of this parameter automatically.
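Putting the five substitutions together on one surviving START entry (with invented parameter, session key and cache-buster values) leaves just the command and its parameter:

```shell
# A surviving log entry with made-up values
cmd=$(echo '===START=== -- GET command=createZone&name=Z1&response=json&sessionkey=abc123&_=1374047990517' |
  sed -e 's/^.*command=//; s/&/ /g; s/_=.*//; s/sessionkey=[^ ]*//; s/response=[^ ]*//')
echo "$cmd"
# → createZone name=Z1 (plus leftover spaces, collapsed in the next step)
```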

Next, enclose parameter values in single and double quotes

 sed -e 's/ \+/ /g; s/=/='"'"'"/g; s/ /"'"'"' /g; s/"'"'"'//' ./encoded_api_calls.txt > delimited_encoded_api_calls.txt  

We want to put double quotes around parameter values before converting from URL encoding to strings.  This will preserve the whitespace after decoding.  We also add single quotes.  The single quotes prevent the bash shell from removing the double quotes when we put these commands in a script.

The sed commands are complex due to a quirk with how bash parses single quotes...

s/ \+/ /g converts one or more spaces to a single space.

s/=/='"'"'"/g converts equals (=) to equals, single quote, double quote ( ='" )

s/ /"'"'"' /g converts all spaces to double quote, single quote ( "' ).

s/"'"'"'// removes the first double quote, single quote pair, which would otherwise be stuck to the end of the command name.

Using the command above,
 createPhysicalNetwork zoneid=28444ba3-1405-4872-b23c-015cf5116415 name=Physical%20Network%201 isolationmethods=VLAN  

has all parameters enclosed in '" ... "', e.g.
 createPhysicalNetwork zoneid='"28444ba3-1405-4872-b23c-015cf5116415"' name='"Physical%20Network%201"' isolationmethods='"VLAN"'  

If you don't need the single quotes, just use the command below to insert your quotes.
 sed -e 's/ \+/ /g; s/=/="/g; s/ /" /g; s/"//' ./encoded_api_calls.txt > delimited_encoded_api_calls.txt

Finally, remove URL encoding from the parameters:

The parameters for our commands are URL encoded.  E.g.
 root@mgmtserver:~/github/cshv3# cat delimited_encoded_api_calls.txt  
 addImageStore name="AWS+S3" provider="S3" details%5B0%5D.key="accesskey" details%5B0%5D.value="my_access_key" 
 details%5B1%5D.key="secretkey" details%5B1%5D.value="my_secret_key" details%5B2%5D.key="bucket" details%5B2%5D.value="cshv3eu" details%5B3%5D.key="usehttps"   
 details%5B3%5D.value="true" details%5B4%5D.key="endpoint" details%5B4%5D.value=""  

You can decode them with the following (source):
 sed -e 's/+/ /g; s/\%0[dD]//g' delimited_encoded_api_calls.txt | awk '/%/{while(match($0,/\%[0-9a-fA-F][0-9a-fA-F]/)){$0=substr($0,1,RSTART-1)sprintf("%c",0+("0x"substr($0,RSTART+1,2)))substr($0,RSTART+3);}}{print}' > decoded_api_calls.txt   

This restores whitespace and punctuation.  E.g.
 root@mgmtserver:~/github/cshv3# cat decoded_api_calls.txt  
 addImageStore name="AWS S3" provider="S3" details[0].key="accesskey" details[0].value="my_access_key" 
 details[1].key="secretkey" details[1].value="my_secret_key" details[2].key="bucket" details[2].value="cshv3eu" details[3].key="usehttps"   
 details[3].value="true" details[4].key="endpoint" details[4].value=""  

Setup CloudMonkey:

Install CloudMonkey

Be careful not to use an out-of-date community-maintained package.  The target version of CloudMonkey is listed at install time.  E.g.

 root@mgmtserver:~/github/cshv3# apt-get install python-pip  
 Reading package lists... Done  
 Building dependency tree  
 root@mgmtserver:~/github/cshv3# pip install cloudmonkey  
 Downloading/unpacking cloudmonkey  
  Downloading cloudmonkey-4.1.0-1.tar.gz (60Kb): 60Kb downloaded  
  Running egg_info for package cloudmonkey  
 root@mgmtserver:~/github/cshv3# which cloudmonkey  

If you are a developer, use the instructions on the CloudMonkey wiki to build the latest version.  E.g.
 root@mgmtserver:~/github/cshv3# cd tools/cli  
 root@mgmtserver:~/github/cshv3/tools/cli# mvn clean install -P developer  
 [INFO] Scanning for projects...  
 [INFO] ------------------------------------------------------------------------  
 [INFO] Building Apache CloudStack cloudmonkey cli 4.2.0-SNAPSHOT  
 [INFO] ------------------------------------------------------------------------  
 [INFO] --- maven-install-plugin:2.3.1:install (default-install) @ cloud-cli ---  
 [INFO] Installing /root/github/cshv3/tools/cli/pom.xml to /root/.m2/repository/org/apache/cloudstack/cloud-cli/4.2.0-SNAPSHOT/cloud-cli-4.2.0-SNAPSHOT.pom  
 [INFO] ------------------------------------------------------------------------  
 [INFO] ------------------------------------------------------------------------  
 [INFO] Total time: 5.190s  
 [INFO] Finished at: Mon Jul 22 22:33:01 BST 2013  
 [INFO] Final Memory: 16M/154M  
 [INFO] ------------------------------------------------------------------------  
 root@mgmtserver:~/github/cshv3/tools/cli# python build  
 running build  
 writing manifest file 'cloudmonkey.egg-info/SOURCES.txt'  
 root@mgmtserver:~/github/cshv3/tools/cli# python install  
 running install  
 Finished processing dependencies for cloudmonkey==4.2.0-0  
 root@mgmtserver:~/github/cshv3/tools/cli# which cloudmonkey  

Configure CloudMonkey

As a minimum, CloudMonkey needs the URL for the management server and API keys to authenticate requests to the server. API keys are different from your password / username.  How to obtain API keys is described at 9:07 in this YouTube CloudMonkey overview by DIYCloudComputing.

Also, set CloudMonkey to use JSON output.  The alternative is difficult to parse.

Finally, use sync to tell CloudMonkey to discover the latest API.

These values can be set at the command line.  E.g.
 cloudmonkey set apikey WsiG7tva38gJpl082mBRQEnAic9g_BW15fK5aB4W3ak9GBoBeg0iOz9iGAIJ7eSnHecS1ONffEygi2xTkP4QOw   
 cloudmonkey set secretkey _Ov8DMed8WMWMscWaWX6cCHzF7kWCQU2SVwbQJo4ujL2-ocLdvkC5Mwe0XlrSDZ12ha52ieAtYOJj6viA1SFhQ   
 cloudmonkey set display json   
 cloudmonkey sync  

Now CloudMonkey can make API calls.  E.g.
 root@mgmtserver:~/github/cshv3# cloudmonkey list users  
  "count": 1,  
  "user": [  
    "account": "admin",  
    "accountid": "12a8380c-f2e3-11e2-b495-00155db1030e",  
    "accounttype": 1,  
    "apikey": "WsiG7tva38gJpl082mBRQEnAic9g_BW15fK5aB4W3ak9GBoBeg0iOz9iGAIJ7eSnHecS1ONffEygi2xTkP4QOw",  
    "created": "2013-07-22T17:26:25+0100",  
    "domain": "ROOT",  
    "domainid": "12a7d75c-f2e3-11e2-b495-00155db1030e",  
    "email": "",  
    "firstname": "Admin",  
    "id": "12a8686b-f2e3-11e2-b495-00155db1030e",  
    "iscallerchilddomain": false,  
    "isdefault": true,  
    "lastname": "User",  
    "secretkey": "_Ov8DMed8WMWMscWaWX6cCHzF7kWCQU2SVwbQJo4ujL2-ocLdvkC5Mwe0XlrSDZ12ha52ieAtYOJj6viA1SFhQ",  
    "state": "enabled",  
    "username": "admin"  

Recreate GUI Commands in CloudMonkey

The parsed log file contains a list of API calls.  Pick out the ones you want to use.  I placed them in a file called myscript.

To make CloudMonkey API calls from the command line, simply add cloudmonkey api to the API call.  To save time, you can prepend every command using sed:
 sed 's/^/cloudmonkey api /' myscript > myscript2  

The results of one API call will provide the parameters for the next, so we want to be able to capture the results of our CloudMonkey calls.

Simply enclose your commands in backquotes and assign the result to a bash variable.  To save time, use sed:
 sed -e 's/^/apiresult=`/; s/$/`/' myscript2 > myscript3  
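Both transformations applied to a two-line stand-in script look like this (the API calls are made up):

```shell
# Stand-in script of raw API calls
printf 'createZone name=Z1\nlistUsers\n' > myscript

# Prepend 'cloudmonkey api', then wrap each line in apiresult=`...`
sed 's/^/cloudmonkey api /' myscript > myscript2
sed -e 's/^/apiresult=`/; s/$/`/' myscript2 > myscript3
cat myscript3
# → apiresult=`cloudmonkey api createZone name=Z1`
#   apiresult=`cloudmonkey api listUsers`
```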

CloudMonkey has strict case-sensitivity rules that prevent it from consuming log file input directly.  CloudMonkey expects all parameter keys to be in lower case.  E.g. the addTrafficType command above appears in the log with the parameter trafficType.  However, CloudMonkey expects it to be traffictype (all lower case).

Thus, a file with the API calls below:
 createZone networktype='"Advanced"' securitygroupenabled='"false"' guestcidraddress='""' name='"HybridZone"' localstorageenabled='"true"' dns1='""' internaldns1='""' internaldns2='""'  
 createPhysicalNetwork zoneid='"28444ba3-1405-4872-b23c-015cf5116415"' name='"Physical Network 1"' isolationmethods='"VLAN"'  

We would get this result:
 apiresult=`cloudmonkey api createphysicalnetwork zoneid='"28444ba3-1405-4872-b23c-015cf5116415"' name='"Physical Network 1"' isolationmethods='"VLAN"' `  
 apiresult=`cloudmonkey api addtrafficType physicalnetworkid='"8ae03f63-efe9-46ea-9c31-f35164ef3dfc"' traffictype='"Management"' `  

Extract results as required

The variable apiresult includes a lot of information not useful for subsequent calls.  E.g.
 root@mgmtserver:~/github/cshv3# apiresult=`cloudmonkey api createZone networktype="Advanced" securitygroupenabled="false" guestcidraddress="" name="HybridZoneA" localstorageenabled="true" dns1="" internaldns1="" internaldns2=""`  
 root@mgmtserver:~/github/cshv3# echo $apiresult  
 { "zone": { "allocationstate": "Disabled", "dhcpprovider": "VirtualRouter", "dns1": "", "guestcidraddress": "", "id": "2347b5c8-378c-4a7e-9977-818bbba4f7ff", "internaldns1": "", "internaldns2": "", "localstorageenabled": true, "name": "HybridZoneA", "networktype": "Advanced", "securitygroupsenabled": false, "zonetoken": "b957e317-d661-30dd-a412-1f76f2736412" } }  

Usually, you will have to add code to extract specific parameters from the result.  For instance, here we extract the identifier of a newly created zone for use in a createPhysicalNetwork call:
 root@mgmtserver:~/github/cshv3# zoneid=`echo $apiresult | sed -e 's/^.*"id": //; s/,.*$//'`  
 root@mgmtserver:~/github/cshv3# echo $zoneid  
 root@mgmtserver:~/github/cshv3# apiresult=`cloudmonkey api createPhysicalNetwork zoneid=$zoneid name='Physical Network 1' isolationmethods='VLAN'`  
In a script, we can then use $zoneid as the value of a parameter in subsequent calls.
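The extraction sed on a stand-in createZone response (ids shortened, most fields omitted; note that the JSON double quotes stay attached to the extracted value):

```shell
# Stand-in response; a real one has many more fields
apiresult='{ "zone": { "id": "2347b5c8", "name": "HybridZoneA" } }'

# Strip everything before the id value, then everything after the first comma
zoneid=$(echo "$apiresult" | sed -e 's/^.*"id": //; s/,.*$//')
echo "$zoneid"
# → "2347b5c8"
```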

Difficulties with this Approach:

CloudStack does not log the parameters of POST requests.  Commands such as addHost are recorded as received, but their parameters are not.  You have to refer to the developers guide to figure them out.  This is down to a lack of explicit support for logging incoming commands in CloudStack.

Final Remarks:

Parsing the GUI commands out of the log file is quite complex.  It would be a lot easier if the management server logged API calls in plain text rather than as URL encoded strings, and if commands sent by HTTP POST commands had their parameters clearly logged. 

Parsing JSON encoded text is poorly supported in bash.  CloudMonkey's 'filter' option would avoid this issue if it were available with the api command.  filter tells CloudMonkey to return only the values of a list of keys.    If the filter were available, code to parse the apiresult would not be required. 

CloudMonkey cannot be used with a clean deployment, because CloudStack initially has no API keys.  This issue could be avoided if username / password authentication were accepted for API calls.  Username / password authentication is used for login by the GUI and by tools such as the CloudStack.NET SDK (see the relevant Login method).

Fortunately, developers can disable database encryption and add API keys to the admin user before starting CloudStack.  To disable database encryption, set the appropriate db.cloud.encrypt property in your file.  This is done automatically by the Maven project that runs Jetty.  E.g.
 root@mgmtserver:~/github/cshv3# grep -R "db\.cloud\.encrypt" *  

Next, set the desired API keys in the user table.  E.g. 
 mysql --user=root --password="" cloud -e "update user set secret_key='_Ov8DMed8WMWMscWaWX6cCHzF7kWCQU2SVwbQJo4ujL2-ocLdvkC5Mwe0XlrSDZ12ha52ieAtYOJj6viA1SFhQ',api_key='WsiG7tva38gJpl082mBRQEnAic9g_BW15fK5aB4W3ak9GBoBeg0iOz9iGAIJ7eSnHecS1ONffEygi2xTkP4QOw' where id=2;"