Introducing HPE Helion Stackato

With this new release of HPE Helion Stackato, HPE is offering developers an easy way to get started exploring the power of Cloud Foundry. We are shipping a VirtualBox microcloud that you can quickly get up and running, as well as support for deploying Docker containers.

Easy to deploy

One of the key things we wanted to do with this release was to make it easy for developers to get started. What could be faster than simply importing a virtual appliance with a preconfigured microcloud into VirtualBox? (Sorry, no BOSH.) You can download the bits from http://bit.ly/hpestackato, then watch a quick video or read a post on how to configure your microcloud. This works on Mac, Windows or Linux.
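If you prefer the command line to the VirtualBox UI, appliances can also be imported with VBoxManage; the file name below is just a placeholder for whatever the downloaded appliance is called:

VBoxManage import hpe-helion-stackato.ova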

Build and deploy your first application

With Cloud Foundry based platforms such as HPE Helion Stackato, application deployment can be as easy as running a single “push” command. The platform takes care of deploying the application and configuring the network, environment, health monitoring and dependencies. If you have Stackato running, a couple of commands will deploy a simple Ruby sample application:

git clone https://github.com/Pilchuck/scalene

cd scalene
stackato push -n
stackato open scalene

This deploys the sample application and also creates a MySQL database bound to it. You can watch a quick video or read a post that walks through this.
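For the curious, the database binding comes from the sample's application manifest rather than anything you typed. As a rough sketch of what such a declaration looks like in a Stackato-style stackato.yml (the service name here is illustrative; the repo's actual file may differ):

name: scalene
services:
  scalene-db: mysql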

Scaling your application

Once your application is deployed, you can quickly scale it out using the stackato scale command.

stackato scale scalene --instances 4

This takes the Docker container built during deployment and runs another three instances. HPE Helion Stackato will now round-robin the traffic between those instances.
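To confirm the extra instances are running, the client's app listing should now show four instances of scalene (assuming the stackato client's apps command works like the helion client's apps command used later in this series):

stackato apps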

Be careful not to deploy too many instances, as you may hit the default quota. Scalene requests 512M per instance (because the dev said so), and the default quota is 2G, so four instances (4 × 512M = 2G) is the maximum you can get without changing the quota.

You can view the quota using:

stackato quota show default

Then update the quota using the configure command:

stackato quota configure --mem 4G default

You can then go and deploy more instances.
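For example, with the quota raised to 4G and 512M per instance, the same scale command should now get you to eight instances:

stackato scale scalene --instances 8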

Bring your own container

You might have noticed while deploying the scalene application that Stackato builds a Docker container for your application. With this release we also added the ability to deploy a Docker container directly.

Before you can deploy a container you will need to either allow sudo operations in the quota, or remove the need for sudo from your Docker deployments. To do the former, use:

stackato quota configure --allow-sudo default

Now you can push a Docker container. An example you can use is:

stackato push --docker-image slimypit/stackato-node-hello --as hello-node-docker -n

Now, instead of building a container from your application, HPE Helion Stackato will simply deploy your existing Docker container.
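As with the earlier sample, you can then open the deployed app in a browser using the name given by --as:

stackato open hello-node-docker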

THIS POSTING IS PROVIDED “AS IS” WITH NO WARRANTIES, AND CONFERS NO RIGHTS

Building your first Helion Development Platform app

In this post, I'm going to walk through building and deploying a very simple application.

If you have been following along with my previous posts, you might have a Helion Development Platform (HDP) cluster running in HP Public Cloud. If you don't, you can follow along and build a microcloud.

The first task is to build an application.

Since the application itself doesn't really matter, let's start with a very simple PHP application.

index.php:

<html>
   <head>
      <title>My First CF App</title>
   </head>
   <body>
      <H1>Hello</H1>
      <?php phpinfo(); ?>
   </body>
</html>

We could try to push this straight into our cluster, but ideally we want to create an application manifest. The manifest captures important metadata about our application, such as resource requirements and dependent services.

manifest.yml:


---
applications:
- name: helion-phpinfo
  buildpack: https://github.com/cloudfoundry/php-buildpack#v3.0.4
  mem: 32M
  disk: 1024M
  instances: 3

There are a couple of interesting things to talk about here:

  1. The file follows a very strict format: three dashes (---), then the applications: tag, then -{space}name:. The other tags are indented to line up with name:.
  2. The buildpack is a hint (a very strong one) about what frameworks/runtimes our application needs to run. HDP can figure this out for some applications, but it's good practice to specify the one you want – including the version!
  3. instances is 3 – this means there will be three copies of the application running behind the router (think load balancer for now).

Now that we have our app, we can push it out to our HDP cluster. You will need to download and install the helion client; a link to it is on the portal.

helion push -n

This will publish the application and start it. You can view the running apps using:

helion apps

And to navigate to your application, try:

helion open helion-phpinfo
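If the application doesn't come up, the staging and runtime logs are the first place to look. Assuming the helion client mirrors the Stackato CLI here, something like this should show them:

helion logs helion-phpinfo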

Even though this is a very simple application, we can use it to learn a lot more about deploying and managing applications on the Helion Development Platform. I’ll tackle some of these in the next few posts!

THIS POSTING IS PROVIDED “AS IS” WITH NO WARRANTIES, AND CONFERS NO RIGHTS

Adding DEAs to your microcloud

In a previous post, I showed how you can create a Helion Development Platform microcloud in the HP Public Cloud. In this post, I will show you how to add DEAs to it in preparation for deploying your applications.

There are two types of DEA you can add: Windows or Linux. This post covers Linux; I will cover Windows in a later post.

There are also two ways you can add a node: using the cf-mgmt tool, or by creating a nova instance and configuring it manually. Either works, but we will use cf-mgmt. Why? Well, a DEA doesn't need much configuration, so there is little value or flexibility to be gained by creating a nova instance and performing the steps manually (although a side project of my side project is to use Ansible to do all of this).

So let's get started. I'm connected to my jump box and creating a couple of new files: add-dea.sh and add-dea.yml.

The first file, add-dea.sh, is basically the cf-mgmt command wrapped in a script. This allows me to pop it into source control.

#!/bin/bash
~/linux-amd64/cf-mgmt add-role dea --load add-dea.yml
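Once the YAML file below is in place, make the script executable and run it from the same directory:

chmod +x add-dea.sh
./add-dea.sh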

The add-dea.yml file is the configuration file used by the script:

version: 1.2
constructor-image-name: HP Helion Development Platform - Application Lifecycle Service Installer 1.2.0.282
seed-node-image-name: HP Helion Development Platform - Application Lifecycle Service Seed Node 1.2.0.282
cluster-prefix: cluster1
count: “2”
az: az1
constructor-flavor: standard.xsmall
flavor: standard.large
network-id: d190d9ca-6f3c-4fff-9034-f1fc070a3b6b
external-network-id: 122c72de-0924-4b9f-8cf3-b18d5d3d292c
keypair: ookkey
stack: lucid64

Most of the above should be pretty straightforward; however:

  • version: yes, it's the version.
  • constructor-image-name & seed-node-image-name: these are the image names from OpenStack to use to create the cluster. You can use nova image-list | grep Application to view the available images (see the example after this list).
  • cluster-prefix: the prefix you previously used when creating the cluster.
  • count: how many instances to create. I recommend only doing a couple at a time, otherwise you could hit authentication token timeout issues.
  • az: the AZ you wish to create the instances in. See my note below for more.
  • constructor-flavor: the constructor image size.
  • flavor: the DEA node size.
  • network-id: the internal network id. You can grab this using neutron net-list.
  • external-network-id: the network id of the network that contains the floating IPs; it's probably called Ext-Net. See above for the command to list the networks.
  • keypair: the keypair you used to create the cluster in the first place.
  • stack: lucid64 is the Ubuntu stack we will use.
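For reference, here are the commands mentioned above for pulling the image names and network IDs out of OpenStack (run from the jump box with your OpenStack credentials loaded):

nova image-list | grep Application
neutron net-list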

AZs and high availability of DEAs

To increase your HA capability, I'd advise you to deploy multiple DEAs in each AZ. There are three AZs in public cloud, so I created DEA instances in each. Simply change the az: tag in the configuration file to change the target AZ. I ended up creating a total of 12 DEAs, which is a good size to give the platform a run for its money.
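For example, to target the second zone, just edit the az line in add-dea.yml and run the script again (zone names assumed to follow the az1/az2/az3 pattern used above):

az: az2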

Once you have the AZs defined in OpenStack, you have to do the same in ALS. To do this you need to connect to each DEA and run the following:

kato node availabilityzone AZ1
kato process restart

You can view your Availability Zones from the web console by hitting Settings/DEA then Availability Zones.

Now, when an application is deployed with more than one instance (which is always a good idea), ALS will spread the instances across availability zones.
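As a reminder, the instance count is set in the application's manifest; these are the relevant lines from the helion-phpinfo example in the earlier post:

applications:
- name: helion-phpinfo
  instances: 3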

Although we now have DEAs across multiple AZs, there are other parts of the cluster that we will also need to make highly available. I'll be covering that in later posts.

THIS POSTING IS PROVIDED “AS IS” WITH NO WARRANTIES, AND CONFERS NO RIGHTS