Building your first Helion Development Platform app

In this post, I'm going to walk through building and deploying a very simple application.

If you have been following along with my previous posts, you might already have a Helion Development Platform (HDP) cluster running in HP Public Cloud. If you don't, you can follow along and build a micro-cloud.

The first task is to build an application.

Since the application itself doesn't really matter, let's start with a very simple PHP application.

index.php:

<html>
   <head>
      <title>My First CF App</title>
   </head>
   <body>
      <h1>Hello</h1>
      <?php phpinfo(); ?>
   </body>
</html>

We could try to push this straight to our cluster, but ideally we want to create an application manifest. The manifest captures important metadata about our application, such as resource requirements, dependent services, and so on.

manifest.yml:


---
applications:
- name: helion-phpinfo
  buildpack: https://github.com/cloudfoundry/php-buildpack#v3.0.4
  mem: 32M
  disk: 1024M
  instances: 3

There are a couple of interesting things to talk about here:

  1. The file follows a very strict format: three dashes (---), then the applications: tag, then -{space}name:. The remaining tags are indented to line up with name:.
  2. The buildpack is a hint (a very strong one) about what frameworks/runtimes our application needs to run. HDP can figure this out for some applications, but it's good practice to specify the one you want – including the version!
  3. instances is set to 3 – this means there will be three copies of the application running behind the router (think load balancer for now).

Now that we have our app, we can push it out to our HDP cluster. You will need to download and install the helion client; a link to it is on the portal.

helion push -n

This will publish the application and start it. You can view the running apps using:

helion apps

And to navigate to your application, try:

helion open helion-phpinfo
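
If you like keeping deployment steps in source control, you can wrap those commands in a small script. This is just a sketch; deploy.sh is a name I made up, and it assumes you have already logged in and targeted your cluster with the helion client:

#!/bin/bash
# deploy.sh - hypothetical wrapper around the helion client commands above.
set -e
helion push -n               # publish the application and start it, without prompting
helion apps                  # list the running apps to confirm it started
helion open helion-phpinfo   # open the app in your browser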

Even though this is a very simple application, we can use it to learn a lot more about deploying and managing applications on the Helion Development Platform. I’ll tackle some of these in the next few posts!

THIS POSTING IS PROVIDED “AS IS” WITH NO WARRANTIES, AND CONFERS NO RIGHTS

Adding DEAs to your microcloud

In a previous post, I showed how you can create a Helion Development Platform microcloud in the HP Public Cloud. In this post, I will show you how to add DEAs to it in preparation for deploying your applications.

There are two types of DEA you can add: Windows or Linux. This post will cover Linux; I will cover Windows in a later post.

There are also two ways to add a node: using the cf-mgmt tool, or creating a nova instance and configuring it manually. Either works, but we will use cf-mgmt. Why? Well, a DEA doesn't need much configuration, so there is little value or flexibility gained by creating a nova instance and performing the steps by hand (although a side project of my side project is to use Ansible to do all of this).

So let's get started. I'm connected to my jump box and creating a couple of new files: add-dea.sh and add-dea.yml.

The first script add-dea.sh is basically the cf-mgmt command wrapped in a script. This allows me to pop it into source control.

#!/bin/bash
~/linux-amd64/cf-mgmt add-role dea --load add-dea.yml

The add-dea.yml file is the configuration file used by the script:

version: 1.2
constructor-image-name: HP Helion Development Platform - Application Lifecycle Service Installer 1.2.0.282
seed-node-image-name: HP Helion Development Platform - Application Lifecycle Service Seed Node 1.2.0.282
cluster-prefix: cluster1
count: "2"
az: az1
constructor-flavor: standard.xsmall
flavor: standard.large
network-id: d190d9ca-6f3c-4fff-9034-f1fc070a3b6b
external-network-id: 122c72de-0924-4b9f-8cf3-b18d5d3d292c
keypair: ookkey
stack: lucid64

Most of the above should be pretty straightforward, however:

  • version: yes, it's the version.
  • constructor-image-name & seed-node-image-name: these are the image names from OpenStack to use to create the cluster. You can use nova image-list | grep Application to view the available images (see the lookup commands after this list).
  • cluster-prefix: is the prefix you previously used when creating the cluster.
  • count: this is how many instances to create. I recommend only doing a couple at a time otherwise you could hit authentication token timeout issues.
  • az: this is the AZ you wish to create the instances in. See my note below for more.
  • constructor-flavor: this is the constructor image size
  • flavor: this is the DEA node size
  • network-id: this is the internal network id. You can grab this using neutron net-list.
  • external-network-id: this is the network id of the network that contains the floating IPs – it's probably called Ext-Net. See above for the command to list the networks.
  • keypair: the keypair you used to create the cluster in the first place.
  • stack: lucid64 is the Ubuntu stack we will use.
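
If you need to collect these values, the OpenStack CLI commands below are handy. This is just a convenience sketch, run from the jump box with your OpenStack credentials loaded; the image-list and net-list commands are the same ones referenced in the list above:

# List the ALS images, for the constructor and seed node image names
nova image-list | grep Application

# List the networks - grab the internal network id and the Ext-Net (external) id
neutron net-list

# List the keypairs registered in your account
nova keypair-list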

AZs and high availability of DEAs

To increase your HA capability, I'd advise you to deploy multiple DEAs in each AZ. There are three AZs in HP Public Cloud, so I created DEA instances in each. Simply change the az: tag in the configuration file to change the target AZ. I ended up creating a total of 12 DEAs, which is a good size to give the platform a run for its money.
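
If you would rather script this than edit the file by hand for each AZ, a small loop does the trick. This is only a sketch; it assumes the AZ names are az1, az2, and az3, and that editing add-dea.yml in place with sed is acceptable:

#!/bin/bash
# add-dea-all-azs.sh - hypothetical helper that runs add-dea.sh once per AZ.
for zone in az1 az2 az3; do
  # point the configuration at the next AZ, then reuse the add-dea.sh script from above
  sed -i "s/^az: .*/az: $zone/" add-dea.yml
  ./add-dea.sh
done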

Once you have the AZs defined in OpenStack, you have to do the same in ALS. To do this, you need to connect to each DEA and run the following:

kato node availabilityzone AZ1
kato process restart
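
Running those two commands on every DEA by hand gets tedious, so here is a sketch of how you might drive it from the jump box. The IP-to-AZ mapping and the assumption that you have SSH access (user and key) to each DEA are mine; adjust them to match your environment:

#!/bin/bash
# set-dea-azs.sh - hypothetical helper that tells each DEA which AZ it lives in.
declare -A DEA_ZONES=(
  ["10.0.0.11"]="AZ1"
  ["10.0.0.12"]="AZ2"
  ["10.0.0.13"]="AZ3"
)
for ip in "${!DEA_ZONES[@]}"; do
  ssh "$ip" "kato node availabilityzone ${DEA_ZONES[$ip]} && kato process restart"
done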

You can view your Availability Zones from the web console by hitting Settings/DEA then Availability Zones.

Now, when an application is deployed with more than one instance (which is always a good idea), ALS will spread the instances across availability zones.

Although we now have DEAs across multiple AZs, there are other parts of the cluster that we will also need to make highly available. I'll be covering that in later posts.

THIS POSTING IS PROVIDED “AS IS” WITH NO WARRANTIES, AND CONFERS NO RIGHTS

Keeping your Helion Development Platform ALS cluster patched

As with all software, you will need to keep on top of your patching. There are 3 different things you will need to patch:

  • The base OS for your cluster
  • ALS itself
  • The Docker images

Patching the base OS is easy. You can do this manually, using the standard apt-get update/upgrade commands, or you can enable automatic installation of security updates. To do this, SSH into each node in the cluster and run:

sudo dpkg-reconfigure -plow unattended-upgrades

If you want to get ahead, run the following to update immediately:

sudo apt-get update
sudo unattended-upgrades -d

Patching ALS

You can view a list of available patches in the cluster by running:

kato patch status

You can install patches one by one using kato patch install <patch>, or install them all at once using:

kato patch install

Updating the Docker Image

The last thing to do is patch the Docker image. This is a little more involved, but not much. To do this you need to SSH into a DEA node (and you will need to do it for each DEA node unless you configure your DEAs to use a Docker registry – more on that in a later post).

  1. Make sure your DEA node is up to date.
  2. Create a new directory.
  3. Create a file named Dockerfile in the new directory with the following contents:

FROM stackato/stack-alsek:kato-patched
RUN apt-get update
RUN unattended-upgrades -d
RUN apt-get clean && apt-get autoremove

Build the image using the following:

sudo docker build --no-cache=true --rm -t stackato/stack-alsek:upgrade-2015-08-04 .

Note: this will take some time to complete, so grab a coffee. Tip: the . at the end tells Docker to use the Dockerfile in the current directory.

Next, tag the Docker image as latest:

sudo docker tag stackato/stack-alsek:upgrade-2015-08-04 stackato/stack-alsek:latest

Repeat on each DEA node.

The final step is to restart each application so it picks up the latest image. You should notify your application administrators to do this.
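
For apps you own, the helion client can handle the restart. The line below is only a sketch; it assumes the helion client mirrors the stackato client's restart command, and helion-phpinfo is the sample app from the first post in this series:

helion restart helion-phpinfo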

You should schedule regular maintenance time to perform upgrades and patching, just like you would any other system.

THIS POSTING IS PROVIDED “AS IS” WITH NO WARRANTIES, AND CONFERS NO RIGHTS