About Rob H

A Baltimore transplant to Austin, Rob drives an electric car, the RAVolt, and thinks about ways of building scale infrastructure for clouds using Agile processes. He sits on the OpenStack Foundation board and co-founded RackN to enable software that creates hyperscale converged infrastructure.

Short-lived VM (Mayflies) research yields surprising scheduling benefit

Last semester, Alex Hirschfeld (my son) did a simulation to explore the possible efficiency benefits of the Mayflies concept proposed by Josh McKenty and me.

Mayflies swarming from Wikipedia

In the initial phase of the research, he simulated a data center using load curves designed to oversubscribe the resources (he’s still interested in actual load data).  This was sufficient to test the theory and find something surprising: mayflies can really improve scheduling.

Alex found that an unexpected benefit comes when you force mayflies to have a controlled “die off”: it allows your scheduler to be much smarter.

Let’s assume that you have a high mayfly ratio (70%); that means roughly 10% of your resources would turn over every day.  If you coordinate the turnover time window and feed that information into your scheduler, then it can make much better load-distribution decisions.  Alex’s simulation showed that this approach basically eliminated hot spots and server over-crowding.

Here’s a snippet of his report explaining the effect in his own words:

On a system that is more consistent and does not have a massive virtual machine throughput, Mayflies may not help with balancing the system’s load, but with the social engineering aspect, it can increase the stability of the system.

Most of the time, the requests for new virtual machines on a cloud are immutable. They came in at a time and need to be fulfilled in the order of their request. Mayflies has the potential to change that. If a request is made, it has the potential to be added to a queue of mayflies that need to be reinitialized. This creates a queue of virtual machine requests that any load balancing algorithm can work with.

Mayflies can make load balancing a system easier. Knowing the exact size of the virtual machine that is going to be added and knowing when it will die makes load balancing for dynamic systems trivial.
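
To make the idea concrete, here is a minimal Go sketch (not Alex’s simulation code) of a scheduler that knows each VM’s die-off time and uses it when picking a host.  Every type, name and number below is hypothetical and only illustrates the “see through pending die-offs” effect.

package main

import (
	"fmt"
	"time"
)

// VM is a hypothetical record for a scheduled instance; the Expiry
// field is what the mayfly model adds to a normal request.
type VM struct {
	Name   string
	Size   int       // abstract capacity units
	Expiry time.Time // known "die off" time for a mayfly
}

// Host tracks running VMs against a fixed capacity.
type Host struct {
	Name     string
	Capacity int
	VMs      []VM
}

// projectedLoad is the load a host will still carry at time t,
// i.e. ignoring mayflies that are guaranteed to have died off by then.
func (h *Host) projectedLoad(t time.Time) int {
	load := 0
	for _, vm := range h.VMs {
		if vm.Expiry.After(t) {
			load += vm.Size
		}
	}
	return load
}

// place picks the host with the most projected headroom at the VM's
// start time -- the scheduler can "see through" pending die-offs.
func place(hosts []*Host, vm VM, start time.Time) *Host {
	var best *Host
	for _, h := range hosts {
		free := h.Capacity - h.projectedLoad(start)
		if free < vm.Size {
			continue
		}
		if best == nil || free > best.Capacity-best.projectedLoad(start) {
			best = h
		}
	}
	if best != nil {
		best.VMs = append(best.VMs, vm)
	}
	return best
}

func main() {
	now := time.Now()
	hosts := []*Host{
		{Name: "host-1", Capacity: 10},
		{Name: "host-2", Capacity: 10},
	}
	// a mayfly that expires in an hour, then a longer-lived request
	place(hosts, VM{Name: "mayfly-1", Size: 6, Expiry: now.Add(time.Hour)}, now)
	h := place(hosts, VM{Name: "web-1", Size: 8, Expiry: now.Add(72 * time.Hour)}, now.Add(2*time.Hour))
	fmt.Println("web-1 placed on", h.Name) // host-1 is free again by then
}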

OpenStack DefCore Community Review – TWO Sessions April 21 (agenda)

During the DefCore process, we’ve had regular community checkpoints to review and discuss the latest materials from the committee.  With the latest work on the official process and the flurry of Guidelines, we’ve got a lot of concrete material to show.

To accommodate global participants, we’ll have TWO sessions (and record both):

  1. April 21 8 am Central (1 pm UTC) https://join.me/874-029-687 
  2. April 21 8 pm Central (9 am Hong Kong) https://join.me/903-179-768 

Eye on OpenStack

Consult the call etherpad for call-in details and other material.

Planned Agenda:

  • Background on DefCore – very short – 10 minutes
    • short description
    • why it’s a board process and where the community fits in
  • Interop AND Trademark – why it’s both – 5 minutes
  • Vendors AND Community – balancing the needs – 5 minutes
  • Mechanics
    • testing & capabilities – 5 minutes
    • self-testing & certification – 5 minutes
    • platform & components & trademark – 5 minutes
  • Quick overview of the Process (to help w/ reviewers) – 15 minutes
  • How to get involved (Gerrit) – 5 minutes

Golang Example JSON REST HTTP Get with Digest Auth

Since I could not find a complete example of a Go REST call that returned JSON and used Digest Auth (for the Crowbar API), I wanted to feed the SEO monster for the next person.

My purpose is to illustrate the pattern, not deliver reference code.  Once I got all the pieces in the right place, the code was wonderfully logical.  The basic workflow is:

  1. define a structure with JSON mapping markup
  2. define an alternate HTTP transport that includes digest auth
  3. enable the client
  4. perform the get request
  5. extract the request body into a stream
  6. decode the stream into the mapped data structure (from step 1)
  7. use the information

Here’s the sample:

package main

import (
	"encoding/json"
	"fmt"
	"log"

	digest "code.google.com/p/mlab-ns2/gae/ns/digest"
)

// Deployment maps to the JSON response automatically via the struct tags
type Deployment struct {
	ID          int    `json:"id"`
	State       int    `json:"state"`
	Name        string `json:"name"`
	Description string `json:"description"`
	System      bool   `json:"system"`
	ParentID    int64  `json:"parent_id"`
	CreatedAt   string `json:"created_at"`
	UpdatedAt   string `json:"updated_at"`
}

func main() {

	// set up a transport that handles digest auth
	transport := digest.NewTransport("crowbar", "password")

	// initialize the client
	client, err := transport.Client()
	if err != nil {
		log.Fatal(err)
	}

	// make the call (auth will happen)
	resp, err := client.Get("http://127.0.0.1:3000/api/v2/deployments")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// the struct tags map the JSON into the structure automatically
	var d []Deployment // the API returns an array, so we need an array here
	if err := json.NewDecoder(resp.Body).Decode(&d); err != nil {
		log.Fatal(err)
	}

	// print results
	fmt.Printf("Header: %s\n", resp.Header["Content-Type"])
	fmt.Printf("Code: %s\n", resp.Status)
	fmt.Printf("Name: %s\n", d[0].Name)
}

PS: I’m doing this for a Crowbar Docker Machine driver.

Manage Hardware like a BOSS – latest OpenCrowbar brings API to Physical Gear

A few weeks ago, I posted about VMs being squeezed between containers and metal.   That observation comes from our experience fielding the latest metal provisioning feature sets for OpenCrowbar; consequently, it’s exciting to see that the team has cut the next quarterly release:  OpenCrowbar v2.2 (aka Camshaft).  Even better, you can top it off with official software support.

Camshaft coordinates activity

Dual overhead camshaft housing by Neodarkshadow from Wikimedia Commons

The Camshaft release had two primary objectives: Integrations and Services.  Both build on the unique functional operations and ready state approach in Crowbar v2.

1) For Integrations, we’ve been busy leveraging our ready state API to make physical servers work like a cloud.  It gets especially interesting with the RackN burn-in/tear-down workflows added in.  Our prototype Chef Provisioning driver showed how you can use the Crowbar API to spin servers up and down.  We’re now expanding this cloud-like capability to Saltstack, Docker Machine and Pivotal BOSH.

2) For Services, we’ve taken ops decomposition to a new level.  The “secret sauce” for Crowbar is our ability to interweave ops activity between components in the system.  For example, building a cluster requires setting up pieces on different systems in a very specific sequence.  In Camshaft, we’ve added externally registered services (using Consul) into the orchestration.  That means that Crowbar will either use existing DNS, Database, or NTP services or set up its own.  Basically, Crowbar can now FIT YOUR EXISTING OPS ENVIRONMENT without forcing dedicated Crowbar-only services like DHCP or DNS.
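
For illustration only, here’s a minimal sketch of what registering an existing external service with Consul looks like so an orchestration layer can discover and reuse it.  It uses the standard Consul Go API (github.com/hashicorp/consul/api); the service name, ID, address and port are made-up examples, not Crowbar’s actual registration code.

package main

import (
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	// connect to the local Consul agent with default settings
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// register an already-existing external NTP server so orchestration
	// can discover and reuse it instead of deploying its own
	reg := &consul.AgentServiceRegistration{
		ID:      "ntp-existing", // hypothetical ID
		Name:    "ntp",          // service name other components look up
		Address: "10.0.0.5",     // example address of the existing service
		Port:    123,
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
	log.Println("existing NTP service registered with Consul")
}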

In addition to all these features, you can now purchase support for OpenCrowbar from RackN (my company).  The Enterprise version includes additional server life-cycle workflow elements and features like HA and Upgrade as they are available.

There are AMAZING features coming in the next release (“Drill”) including a message bus to broadcast events from the system, more operating systems (ESXi, Xenserver, Debian and Mirantis’ Fuel) and increased integration/flexibility with existing operational environments.  Several of these have already been added to the develop branch.

It’s easy to set up and test OpenCrowbar using containers, VMs or metal.  Want to learn more?  Join our community in Gitter, on the email list, or at the weekly interactive community meetings (Wednesdays @ 9am PT).

Jazz vs. Symphony: Why micromanaging digital work FAILS.

THIRD IN AN 8 POST SERIES, BRAD SZOLLOSE AND ROB HIRSCHFELD INVITE YOU TO SHARE IN OUR DISCUSSION ABOUT FAILURES, FIGHTS AND FRIGHTENING TRANSFORMATIONS GOING ON AROUND US AS DIGITAL WORK CHANGES WORKPLACE DELIVERABLES, PLANNING AND CULTURE.

Now that we’ve introduced music as a functional analogy for a stable 21st century leadership model and defined digital work, we’re ready to expose how work actually gets done in the information age.

First, has work really changed?  Yes.  Traditionally, there was a distinct difference between organized production and service-based/creative work such as advertising, accounting or medicine, where you solve a problem by looking for clues and coming up with creative solutions.

Jazz Hands By RevolvingRevolver on DeviantArt http://revolvingrevolver.deviantart.com/

Digital work on the other hand, and more importantly – digital workers, live in a strange limbo of doing creative work but needing business structures and management models that were developed during the industrial age.

In today’s multi-generational workforce, what appears to be a generational divide has transformed into a non-age-specific cultural rift. As Brad and Rob compared notes, we came to believe that what is really happening is a learned difference in the approach to work and work culture.

There is a learned difference in the approach to work and work culture that’s more obvious in, but not limited to, digital natives.

In most companies, the executives are traditionalists (Baby Boomers or hand-selected by Boomers).  While previous generations have been trained to follow hierarchy, the new culture values performance, flexibility and teamwork with a less top-down, control-oriented outlook.

It’s like a symphonic conductor, used to picking the chair order and directing the tempo, handing out sheet music to a Jazz ensemble.  So how is the traditional manager going to deliver a stellar performance when the performers are Jazz trained?

In a traditional concert orchestra, each musician has to go to college, train hard, earn a shot to get into the orchestra, and, over time, work very hard to earn the First Chair position (think earning the corner office).  Once in that position, they stay there until death or retirement.  Anyone who deviates is fired. Improv is only allowed during certain songs, by a select few.  It’s the workplace equivalent of climbing the corporate ladder.

Most digital workers think they belong to a Jazz ensemble.  

It’s a mistake to believe less organized means less skilled.  Workers in the Jazz model are also talented and trained professionals.  If you look at the careers of Thelonious Monk, Duke Ellington and Dizzy Gillespie, they all had formal training, and many started as children.  The same is true for digital workers: many started building job skills as children and then honed their teamwork playing video games.

But can a loosely organized group consistently deliver results? Yes. In fact, they deliver better results!

When a Jazz Improv group plays, they have a rough composition to start with. Each member is given time for a solo.  To the uninitiated there appear to be no leaders in this milieu of talent, but the leader is there.  They just refuse to control the performance; instead, they trust that each member will bring their A Game and perform at 100% of their capacity.

In business, this is scary. Don’t we need someone to check each person’s work? People are just messing around right? I mean, is this actual work? Who is in charge?

In business environments that operate more like Jazz, studies have shown a 32% increase in productivity over traditional command-and-control environments driven by hierarchy.

Age, experience and position are NOT the criteria for the Digital Worker. Output is.  And output is different for each product. Management’s role in this model is to get out of the way and let the musicians create. Instead of conforming to a single style and method, the people producing in the model each bring something unique and also experience a high degree of ownership.

This is a powerful type of workplace diversity: by allowing different ways of problem solving to co-exist, we also make the workplace more inclusive and collaborative.

Sound too good to be true?  In our next post we’ll discuss trust as the critical ingredient for Jazz performance.  (Teaser)

Showing how others explain Ready State & OpenCrowbar

I’m working on a series for DevOps.com to explain Functional Ops (expect it to start early next week!) and it’s very hard to convey its east-west API nature.  So I’m always excited to see how other people explain how OpenCrowbar does ops and ready state.

Ready State Picture

This week I was blown away by the drawing that I’ve recreated for this blog post.  It’s a very clear graphic showing the operational complexity of heterogeneous infrastructure AND how OpenCrowbar normalizes it into a ready state.

It’s critical to realize that the height of each component tower varies by vendor and also by location within the data center topology.  Ready state is not just about normalizing different vendors’ gear; it’s really about dealing with the complexity that’s inherent in building a functional data center.  It’s “little” things like knowing how to enumerate the networking interfaces and uplinks to build the correct teams.
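
As a trivial illustration of that kind of “little” detail (not OpenCrowbar’s actual code), here’s a short Go sketch that enumerates the NICs on a host using only the standard library; a ready state tool has to do this sort of discovery on every box before it can build the right bonds or teams.

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// enumerate every network interface on this host
	ifaces, err := net.Interfaces()
	if err != nil {
		log.Fatal(err)
	}
	for _, iface := range ifaces {
		// a real discovery step would also look at link speed, LLDP
		// neighbors, etc. to decide which NICs belong in which team
		up := iface.Flags&net.FlagUp != 0
		fmt.Printf("%-10s mac=%-17s mtu=%-5d up=%v\n",
			iface.Name, iface.HardwareAddr, iface.MTU, up)
	}
}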

If you think this graphic helps, please let me know.

How Nebula shows why the OpenStack community (and OSS in general) should care about profits

Fancy Dogs

In dog-piling onto the news about early OpenStack provider Nebula’s demise, I’ll use their news to reinforce my belief that open communities need to ensure that funds are flowing to leaders and contributors.  This is one of the reasons that I invest time in DefCore: having a clear base product definition helps create healthy vendors.

Open source software is not simply free VC funded projects.  The industry needs a stable transparent supply chain of software to build on.

I am not asserting anything about Nebula’s business or model.  To me, it is a single downward data point in my ongoing review of vendor health in the OpenStack community.  OpenStack features and functions will not matter if vendors in the community cannot afford to sustain them.

For enterprises to rely on open source software supply chains, they need to make sure they are paying into the community.  The simplest route to “paying in” is through vendors.