Adding DEAs to your microcloud

In a previous post, I showed how you can create a Helion Development Platform microcloud in the HP Public Cloud. In this post, I will show you how to add DEAs to it in preparation for deploying your applications.

There are 2 types of DEA you can add: Windows or Linux. This post will cover Linux; I will cover Windows in a later post.

There are also 2 ways you can add a node: using the cf-mgmt tool, or by creating a nova instance and performing a manual configuration. Either works, but we will use cf-mgmt. Why? Well, a DEA doesn't need much configuration, so there is little value or flexibility in creating a nova instance and performing the steps manually (although a side project on my side project is to use Ansible to do all of this).

So let's get started. I'm connected to my jump box and creating a couple of new files: add-dea.sh and add-dea.yml.

The first file, add-dea.sh, is basically the cf-mgmt command wrapped in a script, which allows me to pop it into source control.

#!/bin/bash
~/linux-amd64/cf-mgmt add-role dea --load add-dea.yml

The add-dea.yml file is the configuration file used by the script:

version: 1.2
constructor-image-name: HP Helion Development Platform - Application Lifecycle Service Installer 1.2.0.282
seed-node-image-name: HP Helion Development Platform - Application Lifecycle Service Seed Node 1.2.0.282
cluster-prefix: cluster1
count: "2"
az: az1
constructor-flavor: standard.xsmall
flavor: standard.large
network-id: d190d9ca-6f3c-4fff-9034-f1fc070a3b6b
external-network-id: 122c72de-0924-4b9f-8cf3-b18d5d3d292c
keypair: ookkey
stack: lucid64

Most of the above should be pretty straightforward, however (there is a quick lookup example after this list):

  • version: yes, it's the version.
  • constructor-image-name & seed-node-image-name: these are the OpenStack image names used to create the cluster. You can use nova image-list | grep Application to view the available images.
  • cluster-prefix: the prefix you previously used when creating the cluster.
  • count: how many instances to create. I recommend only doing a couple at a time, otherwise you could hit authentication token timeout issues.
  • az: the AZ you wish to create the instances in. See my note below for more.
  • constructor-flavor: the constructor instance size.
  • flavor: the DEA node size.
  • network-id: the internal network ID. You can grab this using neutron net-list.
  • external-network-id: the network ID of the network that contains the floating IPs; it's probably called Ext-Net. See above for the command to list the networks.
  • keypair: the keypair you used to create the cluster in the first place.
  • stack: lucid64 is the Ubuntu stack we will use.
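
If you want to grab those values quickly, the lookups mentioned above boil down to a handful of CLI calls. This assumes you have the nova and neutron clients installed on your jump box and have already sourced your OpenStack RC file:

# ALS installer and seed node image names
nova image-list | grep Application

# internal network id and the Ext-Net (external) network id
neutron net-list

# keypairs and flavors, if you need a reminder
nova keypair-list
nova flavor-list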

AZs and high availability of DEAs

To increase your HA capability, I'd advise you to deploy multiple DEAs in each AZ. There are 3 AZs in the public cloud, so I created DEA instances in each. Simply change the az: tag in the configuration file to change the target AZ. I ended up creating a total of 12 DEAs, which is a good size to give the platform a run for its money.
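
If you don't fancy editing the file by hand for each zone, here is a rough sketch that flips the az: line in add-dea.yml and re-runs the script once per AZ. The az2/az3 names are assumptions; check your own with nova availability-zone-list:

for zone in az1 az2 az3; do
  # point the config at the next AZ, then add another batch of DEAs
  sed -i "s/^az: .*/az: $zone/" add-dea.yml
  ./add-dea.sh
done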

Once you have the AZs defined in OpenStack, you have to do the same in ALS. To do this, connect to each DEA and run the following:

kato node availabilityzone AZ1
kato process restart
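
With a dozen DEAs that gets tedious, so you could push the commands out over SSH from the jump box instead. A minimal sketch, assuming the stackato user, your keypair saved at ~/ookkey.pem and placeholder DEA IPs; group the IPs by the AZ the DEAs were created in and repeat with the right zone name:

# DEAs created in az1 (placeholder IPs)
for ip in 10.0.0.11 10.0.0.12; do
  ssh -i ~/ookkey.pem stackato@$ip "kato node availabilityzone AZ1 && kato process restart"
done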

You can view your Availability Zones from the web console by hitting Settings/DEA then Availability Zones.

Now, when an application is deployed with more than one instance (which is always a good idea), ALS will deploy the instances across availability zones.

Although we now have DEAs across multiple AZs, there are other parts of the cluster that we will also need to make highly available. I'll be covering that in later posts.

THIS POSTING IS PROVIDED “AS IS” WITH NO WARRANTIES, AND CONFERS NO RIGHTS

Keeping your Helion Development Platform ALS cluster patched

As with all software, you will need to keep on top of your patching. There are 3 different things you will need to patch:

  • The base OS for your cluster
  • ALS itself
  • The Docker images

Patching the base OS is easy. You can do this manually using the standard apt-get update/upgrade commands, or you can enable automatic installation of security updates. To do this, SSH into each node in the cluster and run:

sudo dpkg-reconfigure -plow unattended-upgrades

If you want to get ahead, run the following to update immediately:

sudo apt-get update
sudo unattended-upgrades -d
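
If you would rather not log in to every node by hand, the same two commands can be pushed out over SSH. A minimal sketch, assuming a hypothetical nodes.txt with one node IP per line, the stackato user and your keypair:

# nodes.txt is a hypothetical file listing one cluster node IP per line
while read ip; do
  ssh -i ~/ookkey.pem stackato@$ip "sudo apt-get update && sudo unattended-upgrades -d"
done < nodes.txt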

Patching ALS

You can view a list of the available patches on the cluster by running:

kato patch status

You can install patches one by one using kato patch install <patch>, or install them all at once using:

kato patch install

Updating the Docker Image

The last thing to do is patch the Docker image. This is a little more involved, but not much. To do this you need to SSH into a DEA node (and you will need to do it on each DEA node unless you configure your DEAs to use a Docker registry; more on that in a later post).

  1. Make sure your DEA node is up to date.
  2. Create a new directory.
  3. Create a file named Dockerfile in the new directory with the following contents:

FROM stackato/stack-alsek:kato-patched
RUN apt-get update
RUN unattended-upgrades -d
RUN apt-get clean && apt-get autoremove

Build the image using the following:

sudo docker build --no-cache=true --rm -t stackato/stack-alsek:upgrade-2015-08-04 .

Note: this will take some time to complete. Grab a coffee. Tip: the . at the end tells Docker to use the Dockerfile in the current directory.

Next, tag the new docker image as the latest:

sudo docker tag stackato/stack-alsek:upgrade-2015-08-04 stackato/stack-alsek:latest
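
You can sanity-check the result by listing the images; the upgrade tag and latest should point at the same image ID:

sudo docker images | grep stack-alsek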

Repeat on each DEA node.

The final step is to restart each application so they pick up the latest image. You should notify your application administrators to do this.
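
If you use the stackato client yourself, restarting an app is a one-liner (myapp is a placeholder; stackato apps will show you what is deployed):

stackato restart myapp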

You should schedule regular maintenance time to perform upgrades and patching, just like you would any other system.

THIS POSTING IS PROVIDED “AS IS” WITH NO WARRANTIES, AND CONFERS NO RIGHTS

Creating a Micro Cloud in HP Public Cloud from a Chromebook

I’m back! Kinda. Hopefully.

In my new role at HP, I get to share a little more about what we are up to, along with some general geeky cloud stuff again. For my first brain dump, I decided that I only needed a Chromebook to do anything in the cloud. Why would you need anything more powerful?

So, first task: can you actually build a cloud from a Chromebook? Specifically, could I spin up and configure our very own HP Helion Development Platform?

For this I'll use my trusty, NSFW, bought-with-my-own-money HP Chromebook.

Since I'm going to deploy this thing into our HP Public Cloud, the first thing to do was to set up a subscription. You can grab a 3-month trial account by following the instructions here.

Once you have a subscription, it's time to create a cluster. The process is documented at the same link above and is pretty straightforward. However, since I'm using my Chromebook to do all this, there were a few things I had to do first.

The first thing I needed was an Ubuntu box, so I quickly deployed one into my subscription. The smallest size is enough. As part of this process, I created a new key pair, which I downloaded to my Chromebook. I added a floating IP so I could reach it from the internet, and added a rule to allow port 22 to the default security group so I could SSH into the newly created instance. I then added the FireSSH app to Chrome so I could SSH into my box using the key. To recap (a rough nova CLI equivalent is sketched after this list, if you have the client handy somewhere):

  1. Create a new key pair and download the key.
  2. Create a small Ubuntu instance using the key pair you just created.
  3. Add a rule for port 22 to the default security group (or create your own security group and add it to the instance).
  4. Add a floating IP to the instance.
  5. SSH into the instance from your favorite SSH client (FireSSH is OK so far).
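
For reference, the same steps expressed with the nova client look roughly like this. The image name and floating IP are placeholders, jumpkey and jumpbox are just example names, standard.xsmall is an example flavor, and Ext-Net is the usual name of the external pool:

nova keypair-add jumpkey > jumpkey.pem && chmod 600 jumpkey.pem
nova boot --flavor standard.xsmall --image "<ubuntu_image_name>" --key-name jumpkey jumpbox
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova floating-ip-create Ext-Net
nova floating-ip-associate jumpbox <floating_ip>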

Once you are on that machine, it makes checking things easier if you install at least the OpenStack Nova and Keystone clients. You can do this by entering:

sudo apt-get -y install python-novaclient python-keystoneclient

Once you have that, download the OpenStack RC file, which you will find under the Project | Compute | Access & Security | API Access tab. I could not download this directly to my instance, so I simply opened it and pasted the contents into a new file.
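
If you end up pasting it by hand like I did, the file is just a handful of exports along these lines. All of the values below are placeholders, and your downloaded file may prompt for the password instead of hard-coding it:

export OS_AUTH_URL=<your_identity_endpoint>
export OS_TENANT_NAME=<your_project_name>
export OS_USERNAME=<your_username>
export OS_PASSWORD=<your_password>
export OS_REGION_NAME=<your_region>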

I then downloaded the 64-bit Ubuntu cf-mgmt tool, which you can use to create clusters on both the public cloud and your own private HP Helion OpenStack clouds. Tip: use wget on your Ubuntu instance to download it, then use unzip to extract it. You can also add the command to your path.
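
The download-and-extract dance looks roughly like this; the URL lives on the Helion Development Platform client download page, so I've left it as a placeholder:

sudo apt-get -y install unzip
wget -O cf-mgmt.zip "<cf-mgmt_download_url>"
unzip cf-mgmt.zip              # extracts a linux-amd64 directory containing cf-mgmt
export PATH=$PATH:~/linux-amd64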

Once you have the OpenStack RC file along with the cf-mgmt tool you are ready to create the cluster:

First, source your OpenStack RC file

source yourrcfile.sh

Check everything is working by typing

nova list

You should get a list of instances that are running in your project.

Next, create the cluster using the following:

~/linux-amd64/cf-mgmt create-cluster --keypair-name <name_of_your_key> --admin-email <admin_email> --admin-password <admin_password> --load http://clients.als.hpcloud.com/1.2/config/trial.yml

where:

  • name_of_your_key – is the name of the key pair you created earlier.
  • admin_email – is the email you wish to use when the admin account in the ALS cluster is created.
  • admin_password – is the password for the admin account you wish to create.

The URL points to a pre-defined config file that will configure a single node with the MySQL service installed. By default it will use a medium-sized instance. Feel free to download and examine that file.
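
For the curious, grabbing a copy to read is as simple as:

wget http://clients.als.hpcloud.com/1.2/config/trial.yml
cat trial.yml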

It took about 10 minutes for the cluster to get going. Well, it might have taken less; I used the time wisely to secure a beverage.

Next up was to connect to the cluster and make sure things were operational. The cluster will be configured with a floating IP, aka a public IP. To make life easier, it is worth setting up a DNS name for the cluster. Since I own the davidaiken.com domain name, I figured I'd set up dev.davidaiken.com.

To do this, 2 DNS entries were needed (there is a quick way to verify them after this list):

  • A record – dev.davidaiken.com – pointing to the IP address of the cluster.
  • CNAME – *.dev.davidaiken.com – a wildcard to pick up things like api.dev.davidaiken.com or myapp.dev.davidaiken.com.
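
Once the records have propagated, you can verify them from the jump box (dig is in the dnsutils package if it isn't already installed); substitute your own domain, obviously:

dig +short dev.davidaiken.com
dig +short api.dev.davidaiken.com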

Once I had these set up, I had to connect to the ALS cluster and update the node name. This will not only change the name, but also regenerate the SSL certs. (At some point I'll add my own certs instead of the self-signed ones, but that is a task for another night.)

To change the name, SSH into your ALS cluster node as the stackato user (using the key), then run the following command:

kato process ready all

This will check that everything is up and running. You can then type:

kato node rename dev.davidaiken.com

Obviously, substitute my domain name for your own. This will reconfigure and then restart stuff. Once it's all done, you should be able to navigate to https://dev.davidaiken.com.

WHOOP!

Now that I have a basic cluster running, I have a little backlog of tasks:

  • Add another DEA node
  • Add a Service Node
  • Make the service “production ready” – things like HA spring to mind.
  • Connect this to my CI/CD tool chain, Jenkins/GitHub?

I did toy with making this a "post 1 of x" type thing, but I've already done that joke in the past, plus there are probably more items for the list. But hey, this is a good start, right?

PS: For those that care, I used ScribeFire to write this post and…

THIS POSTING IS PROVIDED “AS IS” WITH NO WARRANTIES, AND CONFERS NO RIGHTS, YEAH!