Success means putting People and Process above Tech

“I don’t care about the tech – what I really want to hear is how this product fits in our processes and helps our people get more done.”

That was the message my co-founder and I heard from an executive at a major bank last week.  For us, it was both déjà vu and a major relief, because we’d just presented at the CableLabs Summer Showcase about the importance of aligning people, process and technology. The executive was pleased with how RackN had achieved that balance.

It wasn’t always that way: putting usability and simplicity ahead of features is scary.

One of the most humbling startup lessons is that making great technology is not about the technology. Showing a 10x (or 100x!) improvement in provisioning speed misses the real problem for IT operators. Happily, we had some great early users who got excited about the vision for simple tooling that we built around Digital Rebar Provision v3.  Equally important was a deeply experienced team who insisted on building great tests, docs and support tooling from day zero.

We are thrilled to watch as new users are able to learn, adopt and grow their use of our open technology with minimal help from RackN.  Even without the 10x performance components RackN has added, they have been able to achieve significant time and automation improvements in their existing operational processes.  That means simpler processes, less IT complexity and more time for solving important problems.

The bank executive wanted the people and process benefits: our job with technology was to enable that first and then get out of the way.  It’s a much harder job than “make it faster” but, ultimately, much more rewarding.

If you’re interested in seeing how we’ve found that balance for bare metal automation, please check out our self-service trial at https://portal.RackN.io or contact us directly at info@rackn.com.

Enhancing Digital Rebar Potential with Plugins – Honeycomb Example

The open source Digital Rebar Provision (DRP) solution provides a basic set of features that are enhanced with plugins offering additional services to customers. These plugins are provided by the Digital Rebar community, customers, partners, and RackN, delivering significant value over and above the base provisioning capability of DRP.

RackN and Honeycomb developed a unique plugin during SRECon Americas a few weeks back that makes DRP events visible within the Honeycomb tool. Offering partners like Honeycomb a way to integrate with DRP gives them a path to offer their services to the Digital Rebar community. For the community, having a simple plugin capability allows for use of pre-existing infrastructure tools.

In this video example, we install the Honeycomb Plugin into a Digital Rebar Provision endpoint and activate the plugin to record and transfer events to the Honeycomb system. This demonstration also shows the process to add the plugin from the catalog and install it.

We encourage all partners interested in developing a plugin to contact RackN for discussions on joint development.  For operators, register for a new account on the RackN Portal to deploy a DRP endpoint and begin a modern cloud-native approach to provisioning.

For more information:

Immutable Image Deployment from Digital Rebar Mastered Golden Image

Shane Gibson, Sr. Architect and Community Evangelist at RackN, created a new Digital Rebar Provision (DRP) video highlighting immutable provisioning from a “golden image” as well as the ability to create that “golden image” from within Digital Rebar Provision.

Highlights:

  • Immutable Image Deployment Solution to 20 Target Bare Metal Machines
  • Creation of a “Golden Image” in Digital Rebar Provision
  • Detailed Overview of the RackN Portal UX to Support this Demo

More information on the Digital Rebar community and Digital Rebar Provision:

Provision Virtual Machines with an Open Source Physical Infrastructure Solution

Rob Hirschfeld, CEO/Co-Founder of RackN, created a new Digital Rebar Provision (DRP) video highlighting the creation of virtual machines within the standard automation process. Highlights:

  • Create a New Virtual Machine from the Physical Provisioning Tool – DRP
  • VirtualBox IPMI Plugin – Preview of Pre-Release Tool
  • RackN Portal inventories virtual machines available on the network for management
  • Packet IPMI Plugin – enables creation of VMs on Packet cloud hardware

This expansion to virtual machines allows DRP users to provision not only physical infrastructure but also virtual machines, both locally and in clouds.

More information on the Digital Rebar community and Digital Rebar Provision:

Terraform Bare Metal – A Leap Forward for SDx

Software Defined Infrastructure (SDx) allows operators to manage data centers in a more consistent and controlled way: teams define their environment as code and use automation to execute that definition in practice. To deliver this capability for physical (aka bare metal) servers, RackN has created a Digital Rebar provider for Terraform. The provider is a simple addition that takes just seconds to enable. (Video Demonstrations at End of Blog)

The Terraform Bare Metal provider allows plans to provision and recover servers using a node resource.

The operation of this provider is simple and relies on standard workflow stages in Digital Rebar. Adding the Terraform Content Package installs a new stage that adds Terraform parameters. Including this stage in the global workflow will automatically register machines as available for Terraform. The integration uses two parameters to manage the server pool: Terraform Managed and Terraform Assigned.

When the Terraform provider asks for a node resource, it queries the Digital Rebar API for machines that are managed (true) and not assigned (false), plus whatever additional filters were specified in the plan. The provider then uses the API to set assigned to true and the requested Stage (e.g. centos-install), and polls until the node enters the Complete stage. The destroy action reverses the process to release the node. Digital Rebar uses the stage changes as a trigger to restart the machine workflow.
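
To make that flow concrete, here is a minimal sketch of the allocate, poll and release sequence in JavaScript (the same language as the other code in this post). It is illustrative only: the endpoint paths, filter syntax, parameter names and auth header below are assumptions for demonstration, not the provider’s actual implementation.

// Hypothetical sketch of the allocate / poll / release flow the Terraform
// provider performs against a Digital Rebar Provision endpoint.
// URL paths, filter syntax and parameter names are illustrative assumptions.
const DRP = 'https://drp.example.local:8092';          // assumed endpoint
const HEADERS = { 'Authorization': 'Bearer <token>',   // assumed auth scheme
                  'Content-Type': 'application/json' };

async function allocateNode(stage) {
  // 1. find a machine that is Terraform Managed but not yet Assigned
  const machines = await fetch(
    DRP + '/api/v3/machines?terraform-managed=true&terraform-assigned=false',
    { headers: HEADERS }).then(r => r.json());
  const node = machines[0];

  // 2. mark it assigned and request the target stage (e.g. centos-install)
  await fetch(DRP + '/api/v3/machines/' + node.Uuid, {
    method: 'PATCH', headers: HEADERS,
    body: JSON.stringify([
      { op: 'replace', path: '/Params/terraform-assigned', value: true },
      { op: 'replace', path: '/Stage', value: stage }
    ])
  });

  // 3. poll until the machine reports the complete stage
  let current;
  do {
    await new Promise(resolve => setTimeout(resolve, 10000));
    current = await fetch(DRP + '/api/v3/machines/' + node.Uuid,
                          { headers: HEADERS }).then(r => r.json());
  } while (current.Stage !== 'complete');
  return node.Uuid;   // destroy reverses this step, releasing the node
}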

Using a Terraform plan with Digital Rebar, operators can manage complex data center layouts from a single command line.

For users, all of the above steps are completely hidden. Operators can monitor the request using the Digital Rebar UX to ensure the plan is executing. In addition, plan metadata can set user or identification values on the machines when they are reserved to help track allocations. In this way, administrators can easily track and account for machines reserved via Terraform.

For full out-of-band control, users should add the RackN IPMI plugin. This adds the ability to force power states during plan execution. The provider does not require out-of-band management to function. RackN also maintains Packet.net and VirtualBox plugins with the same API as the IPMI plugin. This allows developers to easily test plans against virtual or cloud resources.

RackN customers are making big plans to use this simple and powerful integration to manage their own SDx roadmap. We’re excited to hear about new ways to improve data center operations, especially new edge ideas. Let us know what you are thinking!

Demonstration of Terraform Bare Metal Provisioning with Digital Rebar Provision V3.2

Setting up the Environment to run Digital Rebar Provision V3.2 for Terraform

Got some change? Build a datacenter ops lab on your coffee break [with Packet.net MaaS]

We’re using Packet.net hosted metal to test automation for private metal (video).  You can use discount code “RACKN100” to get a credit on Packet and try it yourself.

At RackN, we’ve been shrinking our scale deployment platform down to run faithfully on a desktop class system. Since we abstract the network and hardware complexity, you can build automation that scales to physical from as little as 16 GB of RAM (the same size as Packet’s smaller server). That allows the exact same logic we use for an 80 node Ceph or Kubernetes cluster to work on my 14” laptop.

In fact, we’ve been getting a bit obsessed with making a clean restart small and fast using containers, VMs and bootstrapping scripts.

Creating a remote test lab is part of this obsession because many rehearsals make great performances.  We wanted to eliminate the setup time and process for users who just want to experiment with a production grade deployment. Using Packet.net hosted metal and some Ansible scripts, we can build a complete HA Kubernetes cluster in about 15 minutes using VMs. This lets us iterate on Kubernetes best practices virtually since the “setup metal part” is handled abstractly by Digital Rebar.

Yawn. You could do the same in AWS. Why is that exciting?

The process for the lab system we build in Packet.net can then be used to provision a complete private infrastructure on metal including RAID, BIOS and server networking. Even though the lab uses VMs, we still do real networking, storage and configuration. For example, we can iterate building real software defined networking (SDN) overlays in this environment and then scale the work up to physical gear.

The provision and deploy time is so fast (generally, under 15 minutes) that we are using it as a clean environment for Dev and QA cycles on new automation. It’s also a very practical demo environment for these platforms because of the fidelity between this environment and an actual pilot. For me, that means spending $0.40 so I don’t have to sweat losing my work in process, battery life or my wifi connection to crank out a demo.

BTW… Packet.net servers are SUPER FAST. Even the small 16 GB RAM machine is packed with SSDs and great connectivity.

If you are exploring any of the several workloads that we’ve been building (Docker Swarm, Kubernetes, Mesos, CloudFoundry, Ceph and OpenStack) or just playing around with API driven physical provisioning, we just made that work a little easier and a lot faster.

Online Meetup Today (1/13): Build a rock-solid foundation under your OpenStack cloud

Reminder: Online meetup w/ Crowbar + OpenStack DEMO TODAY

Here’s the notice from the site (with my added picture)

Building cloud infrastructure requires a rock-solid foundation. 

In this hour, Rob Hirschfeld will demo automated tooling, specifically OpenCrowbar, to prepare and integrate physical infrastructure to ready state and then use PackStack to install OpenStack.

 

The OpenCrowbar project started in 2011 as an OpenStack installer and has grown into a general purpose provisioning and infrastructure orchestration framework that works with multiple hardware vendors, operating systems and devops tools.  These tools create a fast, durable and repeatable environment to install OpenStack, Ceph, Kubernetes, Hadoop or other scale platforms.

 

Rob will show off the latest features and discuss key concepts from the Crowbar operational model including Ready State, Functional Operations and Late Binding. These concepts, built into Crowbar, can be applied generally to make your operations more robust and scalable.

SUSE Cloud powered by OpenStack > Demo using Crowbar

As much as I love talking about Crowbar and OpenStack, it’s even more fun to cheer for other people doing it!

SUSE has been a great development partner for Crowbar and an active member of the OpenStack community.  I’m excited to see them giving a live demo today about their OpenStack technology stack (which includes Crowbar and Ceph).

Register for the Live Demo on Wed 06-26-2013 at 3.00 – 4.00 pm GMT to “learn about SUSE’s OpenStack distribution: SUSE Cloud with Dell Crowbar as the deployment mechanism and advanced features such as Ceph unified storage platform for object, block and file storage in the cloud.”

The presenter, Rick Ashford, lives in Austin and is a regular at the OpenStack Austin Meetups.  He has been working with Linux and open-source software since 1998 and currently specializes in the OpenStack cloud platform and the SUSE ecosystem surrounding it.

OpenStack Swift Retriever Demo Online (with JavaScript xmlhttprequest image retrieval)

This is a follow-up to my earlier post with the addition of WORKING CODE and an ONLINE DEMO. Before you go all demo happy, you need to have your own credentials to either a local OpenStack Swift (object storage) system or RackSpace CloudFiles.

The demo is written entirely using client side JavaScript. That is really important because it allows you to test Swift WITHOUT A WEB SERVER. All the other Swift/Rackspace libraries (there are several) are intended for your server application to connect and then pass the file back to the client. In addition, the API relies on header metadata that you cannot set from the browser’s address bar, so you can’t just browse into your Swift repos.

Here’s what the demo does:

  1. Login to your CloudFiles site – returns the URL & Token for further requests.
  2. Get a list of your containers
  3. See the files in each container (click on the container)
  4. Retrieve the file (click on the file) to see a preview if it is an image file

The purpose of this demo is to be functional, not esthetic. Little hacks like pumping the config JSON data to the bottom of the page are helpful for debugging and make the action more obvious. Comments and suggestions are welcome.
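
For example, that config dump can be as simple as one jQuery line (illustrative only, not the demo’s exact code):

// illustrative: append the config JSON to the bottom of the page for debugging
$('body').append('<pre>' + JSON.stringify(config, null, 2) + '</pre>');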

The demo code is 4 files:

  1. demo.html has all the component UI and javascript to update the UI
  2. demo.js has the Swift interfacing code (I’ll show a snippet below) to interact with Swift in a generic way
  3. demo.css is my lame attempt to make the page readable
  4. jQuery.js is some first class code that I’m using to make my code shorter and more functional.

1-17 update: in testing, we are working out differences with Swift and RackSpace. Please expect updates.

HACK NOTE: This code does something unusual and interesting. It uses the JavaScript XmlHttpRequest object to retrieve and render a BINARY IMAGE file. Doing this required pulling together information from several sources. I have not seen anyone pull together a document for the whole process onto a single page! The key to making this work is overrideMimeType (line G), masking each character code down to a single byte (the & 0xFF in the encode routine), using Base64 encoding (line 8 and the encode routine), and then "src='data:image/jpg;base64,[DATA GOES HERE]'" in the tag (see the demo.html file).
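
The encode routine itself is only referenced above, so here is a minimal sketch of the idea (illustrative, not the demo’s exact Swift.encode; it assumes the browser’s built-in btoa for the Base64 step):

// illustrative sketch of an encode routine like Swift.encode:
// mask each character code of the raw response down to a single byte,
// then Base64 encode the result for use in a data: URI.
function encodeBinary(raw) {
   var bytes = '';
   for (var i = 0; i < raw.length; i++) {
      bytes += String.fromCharCode(raw.charCodeAt(i) & 0xFF);  // keep the low byte only
   }
   return btoa(bytes);  // browser built-in Base64 encoder
}
// usage: img.src = 'data:image/jpg;base64,' + encodeBinary(xmlhttp.responseText);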

Here’s a snippet of the core JavaScript code (full code) to interact with Swift. Fundamentally, the API is very simple: inject the token into the meta data (line E-F), request the /container/file that you want (line D), wait for the results (line H & 2). I made it a little more complex because the same function does EITHER binary or JSON returns. Enjoy!

retrieve : function(config, path, status, binary, results) {
1   xmlhttp = new XMLHttpRequest();
2   xmlhttp.onreadystatechange=function()  //callback
3      {
4         if (xmlhttp.readyState==4 && xmlhttp.status==200) {
5            var out = xmlhttp.responseText;
6            var type = xmlhttp.getResponseHeader("content-type");
7            if (binary)
8               results(Swift.encode(out), type);
9            else
A               results(JSON.parse(out));
B         }
C      }
D   xmlhttp.open('GET',config.site+'/'+path+'?format=json', true);
E   xmlhttp.setRequestHeader('Host', config.host);
F   xmlhttp.setRequestHeader('X-Auth-Token', config.token);
G   if (binary) xmlhttp.overrideMimeType('text/plain; charset=x-user-defined');
H   xmlhttp.send();
}

OpenStack Swift Demo (in a browser)

I’m working on a mini-demo project for OpenStack Swift.  To keep things very simple and easy to understand, I decided that the whole demo would work in JavaScript in the browser.  I also chose to use RackSpace’s CloudFiles as a Swift target for testing since they have the same API and are universally available (unlike my lab systems).

One advantage of this approach is that FireBug makes it very nice to debug and check the activity of the code.  Unfortunately, FireBug also seems to eat the headers that I need.  *Let me phrase that in a Google-friendly way so that someone else will not lose the 2 hours I just lost*

“XmlHttpRequest setRequestHeader FireFox Not Respected when using FireBug”

It works great in Safari. So onward and upward.  So far, I’ve got step #1 ready – getting the authorization token back from the cloud site.

Here’s the HTML page (you need jQuery too).  Basically, it uses the username and key from the inputs to set the "X-Auth-User" and "X-Auth-Key" request headers.  These headers allow Swift to return a token that you can use on future requests when you want to do useful work.

<!DOCTYPE html>
<html>
<head>
<title>Dell Swift Demo [0.0]</title>
<script src="jquery.js" type="text/javascript"></script>
<script type="text/javascript" charset="utf-8">
var xmlhttp = null;

function swiftLogin() {
   var usr = $('input:text[name=usr]').val();
   var key = $('input:text[name=key]').val();
   // code for IE7+, Firefox, Chrome, Opera, Safari (UR SOL IE<7)
   xmlhttp = new XMLHttpRequest();
   xmlhttp.onreadystatechange=function()  //callback
   {
      if (xmlhttp.readyState==2)
      {
         $('#status').replaceWith(xmlhttp.getResponseHeader("X-Auth-Token"));
      }
   }
   xmlhttp.open('GET','https://auth.api.rackspacecloud.com/v1.0', true);
   xmlhttp.setRequestHeader('Host', 'auth.api.rackspacecloud.com');
   xmlhttp.setRequestHeader('X-Auth-User', usr);
   xmlhttp.setRequestHeader('X-Auth-Key', key);
   xmlhttp.send();
}
</script>
</head>
<body>
<div id="credentials">
<fieldset id="credentials" class="">
<legend>Swift Login</legend>
<label for="user">User: </label><input type="text" name="usr" value="user" id="user">
<label for="key">Key: </label><input type="text" name="key" value="key" id="key">
<input type="button" name="Login" value="login" id="Login" onclick="swiftLogin();">
</fieldset>
</div>
<div id="status">[pending]</div>
<div id="footer">Time?</div>
<script type="text/javascript">
$('#footer').replaceWith((new Date).toString());
swiftLogin();
</script>
</body>
</html>