OpenStack Vancouver six observations: partners, metal, tents, DefCore, brands & breakage

As always, OpenStack conferences/summits are packed with talks and discussions.  Any one of these six points could be a full post; however, I would rather post now and start discussions.  Let me know what you think!

1. Partnering Everywhere – it’s froth, not milk

Everyone is partnering with everyone! It’s a good way to appear to cover more ground and appear more open. Right now, I believe these partnerships are for show and very shallow. There will be blood when money is flowing and both partners want the lion’s share.

2. Metal is Hot! Attention on Ironic & MaaS

Metal is a very hot topic. No surprise, but I do not think that either MaaS or Ironic has the right architecture to deal with the real complexity of automating metal in a generalized way. The consequence is that they are limited and hard to operate.

Container talks were also very hot and I believe are ultimately disruptive.  The very fact that all the container talks were overflowing is an indication of the challenges facing virtualization.

3. DefCore – Just in the Nick of Time

I think that the press and analysts were ready to proclaim that OpenStack was fragmenting and unable to deliver the “one cloud, multiple vendors” vision. DefCore (presented as Interoperability by Jonathan Bryce, DefCore shout out!) came in on the buzzer to buy us more time.

4. Big Tent Concerns – what is ecosystem & release?

Big Tent is shorthand for project governance changes that make it easier for new projects to become OpenStack projects and remove the concept of an integrated release.  The exact definition is still a work in progress.

The top concerns I have are:

  1. We cannot tell the difference between community & ecosystem. We’re back to anointed projects because we’re now telling projects they have to join OpenStack to work with OpenStack.
  2. We’re changing the definition of the release but have not defined how it will change. I acknowledge that continuous release is ideal but we’re confusing people again.

5. Brands are battling – will they destroy the city?

OpenStack is hard for startups – I’m working on a full post for next week.  The short version is that big companies are taking up all the air.

While some are leading, others are still learning how to collaborate.  Those new to open source are slow to trust and uncertain about where to invest.  Unfortunately, we’ve created a visible-contributions economy that does not reward doing the scut work, so it’s no surprise that there are concerns that some of the bigger companies are free riding.

6. OpenStack is broken talks – could we reboot?  no.

It’s a sign of OpenStack’s age that Bias, Termie and others suggested we need a clean slate.  Frankly, I think that OpenStack would be irrelevant by the time a rewrite was completed, and it’s not helpful to suggest it.

What would I suggest?  I’d promote a strong core (doing!), ensure big companies collaborate on the roadmap (doing!) and stop having a single-node install as the gate and dev reference (I’d happily help use OCB for this with partners).

PS: Apparently Neutron is not broken.

I’m very excited about the “just give me a network” work to make Neutron duplicate Nova-Net functionality.  Finally.

Management free falling! Why The Zappos & Valve Model is Terrifying [Post 4 of 8]

Fourth in an 8 post series: Brad Szollose and Rob Hirschfeld invite you to share in our discussion about failures, fights and frightening transformations going on around us as digital work changes workplace deliverables, planning and culture.

“It doesn’t make sense to hire smart people and tell them what to do; we hire smart people so they can tell us what to do.” – Steve Jobs, cofounder of Apple Computer

Trust, not stability, is the new management contract. Digital workers interpret strong management as a lack of trust.

Before we can talk about how to manage digital workers, we have to talk about trusting them to do their jobs. Why? Digital workers largely adopt Millennials’ unwillingness to follow directed leadership. If we want to succeed in managing them, then we need to foster mutual respect built on bi-directional trust.

20th Century business models were based on “if you can’t measure it, you can’t manage it” thinking that went along with mass-manufacturing control, discipline and predictability. Physical goods were sold to a passive marketplace with minimal feedback from markets and internal workers; consequently, decisions could be made by a few leaders as long as workers did what they were told. All out-of-the-box decisions had to go through a leader, and the bigger the decision, the higher up it needed to go for approval. As in our symphony analogy, control and discipline are the main ideology.

In 21st Century business, there is no script, just as there is no score for a Jazz concert. That does not mean it’s a worker free-for-all! We still need to deliver products. But instead of top-down control, we talk about collaboration, shared mission and teamwork. This change is critical because digital work has so much situational context that it is impossible to prescribe its exact results in advance. Like Jazz, you can create a general framework and guidelines, but the exact composition has a degree of improvisation because it must reflect the players’ situation in the moment.

Since you have to trust people to make decisions, you’d better create an environment where they want to make the best decisions for your business!

Glassdoor’s multiyear study discovered that the “Best Places to Work” from 2009 to 2014 outperformed the S&P 500 by 115.6%, while a similar portfolio based on Fortune’s “Best Companies to Work For” outperformed the S&P 500 by 84.2%! That is impressive.

What does trust look like? Leading gaming software maker Valve has thrown out the traditional employee handbook and replaced it with a 37-page breakdown of what they expect from an employee. The manual tells people their desks are on wheels so they can just roll over to a new team if they want to change jobs. The trust implied in that type of follow-your-passion enablement is unheard of in most workplaces.

Zappos, recognized 6 years in a row on FORTUNE’s 100 Best Companies to Work For® list, pays employees $4,500 to quit if they are not satisfied with the culture. Zappos CEO Tony Hsieh advocates for a no-management-whatsoever model called Holacracy. Like Valve, Zappos counts on its hiring process to find workers who thrive in a self-led model where leadership is fluid and people organize themselves to solve problems and deliver value.

If our undirected Jazz model scares you, Holacracy will terrify you.

However, it’s critical to understand that neither Zappos nor Valve is a “wild west” work environment. Like a top Jazz ensemble, they provide trained performers, concrete structure, appropriate tools and clear expectations. By giving your teams the right tools to know what to do when they are working on their own, you will see a different workforce, one striving to make your company better than even you thought possible.

Why does this work? In digital work, we have to give up the idea that the knighted leaders make the best decisions.

It’s not just a question of good decisions; we also need to improve the quality and speed of action. Check out Navy Submarine Captain David Marquet’s talk on Greatness, based on his book, Turn The Ship Around! He explains quite well why people should be allowed to think and take responsibility for their work.

In the end, it’s simply physics. Without trust, all decisions must flow downward and the entire organization is limited by the leadership. Our information economy makes it simply impossible for leaders to sufficiently learn and react. When people are trusted to think for themselves and take control of the products they create, there is a psychological shift: they take ownership in their work. That means quality and output go through the roof as they get competitive about creating a wow product.

So, how can you create a company culture that taps into the skills sets of a new digital worker yet engages everyone for the long haul? Let’s dig in and instead of giving you rules or regulations, let’s start with a few principles to create the right environment.

Tune in for our next post: Setting direction – how too much freedom is bad too.

Ready State Foundation for OpenStack now includes Ceph Storage

For the Paris summit, the OpenCrowbar team delivered a PackStack demo that leveraged Crowbar’s ability to create an OpenStack ready state environment.  For the Vancouver summit, we did something even bigger: we updated the OpenCrowbar Ceph workload.

Ceph is the leading open source block storage back-end for OpenStack; however, it’s tricky to install and few vendors invest the effort to hardware-optimize their configuration.  Like any foundation layer, configuration or performance errors in the storage layer will impact the entire system.  Further, the Ceph infrastructure needs to be built before OpenStack is installed.

OpenCrowbar was designed to deploy platforms like Ceph.  It has detailed knowledge of the physical infrastructure and sufficient orchestration to synchronize Ceph Mon cluster bring-up.

We are only at the start of the Ceph install journey.  Today, you can use the open source components to bring up a Ceph cluster in a reliable way that works across hardware vendors.  Much remains to be done to optimize and tune this configuration to take advantage of SSDs, non-CentOS environments and more.
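
If you want to sanity-check the result, the standard Ceph CLI is enough for a quick look; nothing below is OpenCrowbar-specific, and it assumes you run it from a node that has the cluster admin keyring:

    # overall cluster health, including the state of the Mon quorum
    ceph -s
    ceph mon stat

    # confirm the OSDs landed on the hardware you expected
    ceph osd tree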

We’d love to work with you to tune and extend this workload!  Please join us in the OpenCrowbar community.

Docker-Machine Crowbar Driver Delivers Metal Containers

I’ve just completed a basic Docker Machine driver for OpenCrowbar.  This enables you to quickly spin up (and down) remote Docker hosts on bare metal servers from the Docker Machine command-line tool.  There are significant cost, simplicity and performance advantages to this approach if you were already planning to dedicate servers to container workloads.

Docker Machine

The basics are pretty simple: using the Docker Machine CLI, you can “create” and “rm” new Docker hosts on bare metal with the crowbar driver.  Since we’re talking about metal, “create” is really “assign a machine from an available pool.”

Behind the scenes, Crowbar is doing a full provision cycle of the system, including installing the operating system and injecting the user keys.  Crowbar’s design would allow operators to automatically inject additional steps into the provisioning process, such as adding monitoring agents and security, without changing the driver operation.

Beyond Create, the driver supports the other Machine verbs like remove, stop, start, ssh and inspect.  In the case of remove, the Machine is cleaned up and put back in the pool for the next user [note: work remains on the full remove>recreate process].

Overall, this driver allows Docker Machine to work transparently against metal infrastructure alongside whatever cloud services you also choose.
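
As a rough sketch of that workflow from the Machine CLI, assuming the driver is built and your Crowbar admin node uses the defaults described in the steps below (the node name testme and the hello-world container are just placeholders):

    # allocate a machine from the Crowbar pool and provision it as a Docker host
    ./docker-machine create --driver crowbar testme

    # point the local docker client at the new metal host and run something on it
    eval "$(./docker-machine env testme)"
    docker run hello-world

    # the usual Machine verbs also apply: ssh, inspect, stop, start
    ./docker-machine ssh testme
    ./docker-machine inspect testme

    # when finished, return the machine to the pool
    ./docker-machine rm testme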

Want to try it out? Follow the steps below (they are also condensed into a single terminal sketch after the list).

  1. You need to set up OpenCrowbar – if you follow the defaults (192.168.124.10 IP, user, password) then the Docker Machine driver defaults will also work. Also, make sure you have the Ubuntu 14.04 ISO available for the Crowbar provisioner.
  2. Discover some nodes in Crowbar – you do NOT need metal servers to try this; the tests work fine with virtual machines (tools/kvm-slave &).
  3. Clone my Machine repo (we’re looking for feedback before a pull to Docker/Machine).
  4. Compile the code using script/build.
  5. Allocate a Docker Node using ./docker-machine create --driver crowbar testme
  6. Go to the Crowbar UI to watch the node be provisioned and configured into the Docker-Machines pool
  7. Release the node using ./docker-machine rm testme
  8. Go to the Crowbar UI to watch the node be redeployed back to the System pool
  9. Try to contain your enthusiasm :)
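
For convenience, here is the same flow (steps 2 through 8) condensed into one terminal session, using only the commands already listed above; adjust paths depending on where you run the Crowbar admin node versus the Machine checkout:

    # on the Crowbar admin node: create virtual test nodes (no metal required)
    tools/kvm-slave &

    # from the Machine repo checkout: build the docker-machine binary
    script/build

    # allocate and provision a node, then watch progress in the Crowbar UI
    ./docker-machine create --driver crowbar testme

    # release the node so it redeploys back to the System pool
    ./docker-machine rm testme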

OpenSource.com Interview on DefCore, project management, and the future of OpenStack

Reposted from my interview with Red Hat’s OpenSource.com, conducted by Jason Baker.

Rob Hirschfeld has been involved with OpenStack since before the project was even officially formed, and so he brings a rich perspective on the project’s history, its organization, and where it may be headed next. Recently, he has focused primarily on the physical infrastructure automation space, working with an enterprise version of OpenCrowbar, an “API-driven metal” project which started as an OpenStack installer and moved to a generic workload underlay.

Rob is speaking on two panels at the upcoming OpenStack Summit in Vancouver, including DefCore 2015 and the State of OpenStack Project Management. We caught up with Rob to get updates about these two topics and what else lies ahead for OpenStack.

We asked you to help walk us through DefCore as it was being developed last year; just as a reminder, what is DefCore and why should people care about it?

DefCore creates a minimal definition for OpenStack vendors to help ensure interoperability and stability for the user community. While DefCore definitions apply only to vendors asking to use the OpenStack trademark, there are technical impacts on the tests and APIs that we select as required. We’ve worked hard to make sure that the selection process for picking “core” is transparent and fair.

What did the changes approved by the OpenStack Foundation membership earlier this year mean for DefCore?

The by-laws changes approved by the community were important to allow us to use DefCore’s more granular definition of Core. The previous by-laws were much more project focused. The changes allow us to select specific APIs and code components from a project as required instead of picking everything blindly. That allows projects to have both stable and new innovative components.

What can we expect from OpenStack’s structure and organization as we move forward towards the next release?

There are a lot of changes still to come. The technical leadership is making it easier to become part of the OpenStack code base. I’ve written about how this change could have both positive and negative impacts by making OpenStack appear more like a suite of projects than a tightly integrated product. In many ways, DefCore helps vendors define OpenStack as a product even as the community expands to include more capabilities. In my discussions, this is a good balance.

Switching gears a bit, you’ve also been heavily involved in the OpenStack project management working group. How has that group been progressing since they convened at the Paris Summit?

This group has made a lot of progress. We’ve seen non-board leadership step in and lead the group. That leadership is more organic and based in the companies that are directly contributing. I think that’s resulted in a lot of good ideas and documentation from the group. We’ll see some excellent results in Vancouver from them. It’s going to come back to the community and technical leadership to leverage that work. I think that’s the real test: we have to share ownership of direction between multiple perspectives. The first step in doing that is writing it down (which is what they have been doing).

Aside from the organization, let’s talk about the software itself. What are you hoping to see from the Liberty release?

I’m hoping to see adoption of Neutron accelerate. Having two network approaches makes it impossible to really have an interoperability story. That means Neutron has to be working technically, but also for operators and users. To be brutally honest, it also has to overcome its own reputation. If Neutron does not become the dominant choice, we are going to effectively have two major flavors of OpenStack. From the DefCore, vendor, or user perspective, that’s a very challenging position.

Anything else you’d like to add?

We’ve accomplished a lot together. In some ways, chasing too many targets is our biggest threat. I think that container workloads and orchestration are already being very disruptive for OpenStack. I’m hoping that we focus on delivering a stable core infrastructure. That’s why I’ve been working so hard on DefCore. Looking forward, there’s an increasing risk of trying to chase too many targets and losing the core of what users want.

This article is part of the Speaker Interview Series for OpenStack Summit Vancouver, a five-day conference for developers, users, and administrators of OpenStack Cloud Software.

As CloudFoundry Builds Ecosystem and Utility, What Challenges Arise? (observations from CFSummit)

I’ve been on the outskirts of the CloudFoundry (CF) universe from the dawn of the project (it’s a little-remembered fact that there was a 2011 Crowbar install of CloudFoundry).

Progress and investment have been substantial and, happily, organic. Like many platforms, its success relies on a reasonable balance between strong opinions about “right” patterns and enough flexibility to accommodate exceptions.

From a well-patterned foundation, development teams find acceleration.  This seems to be helping CloudFoundry win some high-profile enterprise adopters.

The interesting challenge ahead of the project comes from building more complex autonomous deployments. With the challenge of horizontal scale arguably behind them, CF users are starting to build more complex architectures.  This includes dynamic provisioning of the providers (like databases, object stores and other persistent adjacent services) and connecting to containerized “micro-services.”  (see Matt Stine’s preso)

While this is a natural evolution, it adds an order of magnitude more complexity because the contracts between previously isolated layers are suddenly not reliable.

For example, what happens to a CF deployment when the database provider is field-upgraded to a new version?  That could introduce breaking changes in dependent applications that are completely opaque to the data provider.  These are hard problems to solve.

Happily, that’s exactly the discussions that we’re starting to have with container orchestration systems.  It’s also part of the dialog that I’ve been trying to drive with Functional Operations (FuncOps Preso) on the physical automation side.  I’m optimistic that CloudFoundry patterns will help make this problem more tractable.

Hidden costs of Cloud? No surprises, it’s still about complexity = people cost

Last week, Forbes and ZDnet posted articles discussing the costs of various clouds (the 451 source material is behind a paywall), full of dollar-per-hour cost analysis.  Their analysis finds private infrastructure an order of magnitude cheaper (yes, cheaper) to own than public cloud; however, the open source price advantages offered by OpenStack are swallowed by the added cost of finding skilled operators and by its lack of maturity.

At the end of the day, operational concerns are the differential factor.

The Magic 8 Cube

These articles get tied down trying to normalize clouds into a $/vm/hour analysis and bury the lead: operational decisions are what really drive cloud costs.  I explored this a while back in my “magic 8 cube” series about six added management variations between public and private clouds.

In most cases, operational decisions are not just about cost – they factor in flexibility, stability and organizational readiness.  From that perspective, the additional costs of public clouds and well-known stacks (VMware) are easily justified for smaller operations.  Using alternatives means paying higher salaries and finding scarce talent, costs that require larger scale to justify.

Operational complexity is a material cost that strongly detracts from new platforms (yes, OpenStack – we need to address this!)

Unfortunately, it’s hard for people building platforms to perceive the complexity experienced by people outside their community.  We need to make sure that stability and operability are top-line features, because complexity adds a very real cost that comes directly back as cost of operation.

In my thinking, the winners will be solutions that reduce BOTH cost and complexity.  I’ve talked about that in the past and see the trend accelerating as more and more companies invest in ops automation.