OpenStack DefCore Accelerates & Simplifies with Clear and Timely Guidelines [Feedback?]

Last week, the OpenStack DefCore committee rolled up our collective sleeves and got to work in a serious way.  We had an in-person meeting with great turnout: 5 board members, Foundation executives/staff and good community engagement.

TL;DR > We think DefCore deliverables should be dated milestone guidelines instead of being tightly coupled to release events (see the DefCore timeline graphic).

DefCore has a single goal expressed from two sides: 1) defining the “what is OpenStack” brand for Vendors and 2) driving interoperability between OpenStack installations.  From that perspective, it is not about releases, but about testable stable capabilities.  Over time, these changes should be incremental and, most importantly, trail behind new features that are added.

For those reasons, it was becoming confusing for DefCore to focus on an “Icehouse” definition when most of the capabilities listed are “Havana” ones.  That focus also created significant time pressure to get the “Kilo DefCore” out quickly after the release even though there were no “Kilo”-specific additions covered.

In the face-to-face, we settled on a more incremental approach.  DefCore would regularly post a set of guidelines for approval by the Board.  These Guidelines would include the required, deprecated (leaving) and advisory (coming) capabilities that Vendors must meet to use the mark (see footnote*).  As part of defining capabilities, we would update which capabilities were included in each component and which components were required for the OpenStack Platform.  They would also include the relevant designated sections.  These Guidelines would use the open draft and discussion process that we are outlining for approval in Vancouver.

Since DefCore Guidelines are simple, time-based lists of capabilities, vendors and the community can simply reference an approved Guideline using the date of approval (for example DefCore 2015.03) and know exactly what was included.  While each Guideline stands alone, it is easy to compare them for incremental changes.

We’ve been getting positive feedback about this change; however, we are still discussing it and appreciate your input and questions.  It is very important for us to make DefCore simple and easy.  For that, your confused looks and WTF? comments are very helpful.

* footnote: the Foundation manages the OpenStack brand and the process includes multiple facets.  The DefCore Guidelines are just one part of the brand process.

Art Fewell and I discuss DevOps, SDN, Containers & OpenStack [video + transcript]

A little while back, Art Fewell and I had two excellent discussions about general trends and challenges in the cloud and scale data center space.  Due to technical difficulties, the first (and funnier) one was lost forever to the NSA archives, but the second survived!

The video and transcript were just posted to Network World as part of Art’s ongoing interview series.  It was an action-packed hour, so I don’t want to re-post the transcript here.  I thought selected quotes (under the video) were worth calling out to whet your appetite for the whole tamale.

My highlights:

  1. .. partnering with a start-up was really hard, but partnering with an open source project actually gave us a lot more influence and control.
  2. Then we got into OpenStack, … we [Dell] wanted to invest our time and that we could be part of and would be sustained and transparent to the community.
  3. Incumbents are starting to be threatened by these new open technologies … what I think levels the playing field is having an open platform.
  4. …I was pointing at you and laughing… [you’ll have to see the video]
  5. docker and containerization … potentially is disruptive to OpenStack and how OpenStack is operating
  6. You have to turn the crank faster and faster and faster to keep up.
  7. Small things I love about OpenStack … vendors are learning how to work in these open communities. When they don’t do it right they’re told very strongly that they don’t.
  8. It was literally a Power Point of everything that was wrong … [I said,] “Yes, that’s true. You want to help?”
  9. …people aiming missiles at your house right now…
  10. With containers you can sell that same piece of hardware 10 times or more and really pack in the workloads and so you get better performance and over subscription and so the utilization of the infrastructure goes way up.
  11. I’m not as much of a believer in that OpenStack eats the data center phenomena.
  12. First thing is automate. I’ve talked to people a lot about getting ready for OpenStack and what they should do. The bottom line is before you even invest in these technologies, automating your workloads and deployments is a huge component for being successful with that.
  13. Now, all of sudden the SDN layer is connecting these network function virtualization ..  It’s a big mess. It’s really hard, it’s really complex.
  14. The thing that I’m really excited about is the service architecture. We’re in the middle of doing on the RackN and Crowbar side, we’re in the middle of doing an architecture that’s basically turning data center operations into services.
  15. What platform as a service really is about, it’s about how you store the information. What services do you offer around the elastic part? Elastic is time based, it’s where you’re manipulating in the data.
  16. RE RackN: You can’t manufacture infrastructure but you can use it in a much “cloudier way”. It really redefines what you can do in a datacenter.
  17. That abstraction layer means that people can work together and actually share scripts
  18. I definitely think that OpenStack’s legacy will more likely be the community and the governance and what we’ve learned from that than probably the code.

My OpenStack Vancouver Session Promotion Dilemma – please, vote outside your block

We need people to promote their OpenStack Sessions, but how much is too much?

Semi-annually, I choose to be part of the growing dog pile of OpenStack summit submissions.  Looking at the list, I see some truly amazing sessions by committed and smart community members.  There is also a fair share of vendor promotions.

The nature of the crowded OpenStack vendor community is that everyone needs to pick up their social media megaphones (and encourage some internal block voting) to promote their talks.   Consequently, I need to ask you to please consider voting for my list:

  1. DefCore 2015 
  2. The DefCore Show: “is it core or not” feud episode
  3. Mayflies: Improve Cloud Utilization by Forcing Rapid Server Death [Research Analysis] (xref)
  4. It’s all about the Base. If you want stability, start with the underlay [Crowbar] 
  5. State of OpenStack Product Management

Why am I so reluctant to promote these excellent talks?  Because I’m concerned about fanning the “PROMOTE MY TALKS” inferno.

For the community to function, we need users and operators to be heard.  The challenge is that the twin Conference/Summit venue serves a lot of different audiences.

In my experience, that leads to a lot of contributor navel gazing and vendor-on-vendor celebrations.  That in turn drowns out voices from the critical, but non-block-enabled users and operators.

Yes, please vote for those sessions of mine that interest you; however, please take time to vote more broadly too.  The system randomizes which talks you see to help distribute the voting.

Thanks.

Want CI Consul Love? OK! Run Consul in Travis-CI [example scripts]

If you are designing an application that uses microservice registration AND continuous integration then this post is for you!  If not, get with the program, you are a fossil.

Sunday night, I posted about the Erlang Consul client I wrote for our Behavior Driven Development (BDD) testing infrastructure.  That exposed a need to run a Consul service in the OpenCrowbar Travis-CI build automation that validates all of our pull requests.  Basically, Travis spins up the full OpenCrowbar API and workers (we call it the annealer), which in turn register services in Consul.

NOTE: These are pseudo-instructions.  In the actual code (here too), I created a script to install Consul, but this is more illustrative of the changes you need to make in your .travis.yml file.

In the first snippet, we download and unzip Consul.  It’s written in Go, so that’s about all we need for an install.  I added a version check for logging validation.

before_script:
  - wget 'https://dl.bintray.com/mitchellh/consul/0.4.1_linux_amd64.zip'
  - unzip "0.4.1_linux_amd64.zip"
  - ./consul --version

In the second step, we start the Consul agent in the background as a single-node server (it bootstraps itself).  That allows the other services to access it.

script: 
  - ../consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul &
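
One practical note that is not in the original snippet: because the agent runs in the background, the test steps can race it on startup.  If that happens, a small readiness wait does the trick.  This is just a sketch, assuming the agent's default HTTP port of 8500 and arbitrary retry timing:

  # hypothetical readiness wait: poll the agent until it reports a cluster leader
  - for i in $(seq 1 10); do curl -sf http://localhost:8500/v1/status/leader && break; sleep 1; done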

After that, the BDD infrastructure can register the fake services that we expect (I created an Erlang consul:reg_serv(“name”) routine that makes this super easy).  Once the services are registered, OpenCrowbar will check for the services and continue without trying to instantiate them (which it cannot do in Travis).
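
For illustration only (the real work happens through the Erlang helper), here is roughly what registering a fake service against the local agent looks like over Consul's HTTP API; the service name and port below are invented for the example:

# hypothetical example: register a fake service with the local agent
curl -s -X PUT http://localhost:8500/v1/agent/service/register \
  -d '{"Name": "crowbar-api", "Port": 3000}'

# the agent should now list it
curl -s http://localhost:8500/v1/agent/services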

Here’s the pull request with the changes.

Erlang Consul Client

OpenCrowbar has been using Consul more and more deeply.  We’ve reached the point where we must register services on Consul to pass automated tests.

Consequently, I had to write a little Consul client in Erlang.

The client is very basic, but it seems to perform all of the required functions.  It relies on some other libraries in OpenCrowbar’s BDD, but they are relatively self-contained.  Pull requests welcome if you’d like to help build this out.

Here’s the code and the API reference.  Check OpenCrowbar/BDD for the latest updates and dependencies.
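
To give a feel for what a client like this wraps (my illustration, not the client's actual API), the underlying Consul HTTP calls are plain REST operations; the key and service names here are made up:

# key/value store: write a value and read it back
curl -s -X PUT -d 'ready' http://localhost:8500/v1/kv/bdd/status
curl -s http://localhost:8500/v1/kv/bdd/status

# service catalog: list the nodes providing a named service
curl -s http://localhost:8500/v1/catalog/service/crowbar-api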

All About That Loop. Lessons from the OpenStack Product Mid-Cycle

OpenStack loves to track developer counts and committers, but velocity without a feedback loop to set direction is unlikely to get us anywhere sustainable.

Last week, I attended the first day of the OpenStack Product Working Group meeting.   My modest expectations (I just wanted them talking) were far exceeded.  The group managed to cover both strategic and tactical items including drafting a charter and discussing pending changes to the incubation process.

OpenStack needs a strong feedback loop from users and operators back to developers and vendors – statement made during the PM meeting.

The most critical win from last week was the desire for the PM group to work more closely with the OpenStack technical leadership.  I’m excited to see the community continue to expand the scope of collaboration.

Why is this important?  Because developers and product managers need mutual respect to be effective.

The members of the Product team are leaders within their own organizations who are responsible for talking to users and operators.  We rely on them to close the communication loop by both collecting feedback and explaining direction.  To accomplish this difficult job, the Product team must own articulating a vision for the future.

For OpenStack to succeed, we need to be listening intently to feedback about both how we are doing and if we are headed in the right direction.  Both are required to create a feedback loop.

After seeing this group in action, I’m excited to see what’s next.

Want to read more?

Get involved!  Join the discussion on the OpenStack Product mailing list!

too easy to bare metal? Ansible just works with OpenCrowbar

I’ve talked before about how OpenCrowbar distributes SSH keys automatically as part of its deployment process.  Now, it’s time to unleash some of the subsequent magic!

If you provision servers with your keys in place, then Ansible will just work with truly minimal configuration (one line in a file!).

Video Demo (steps below):

Here are my steps:

  1. Install OpenCrowbar and run some nodes to ready state [videos]
  2. Install Ansible [simple steps]
  3. Add hosts range “192.168.124.[81:83]  ansible_ssh_user=root” to the
    “/etc/ansible/hosts” file
  4. If you are really lazy, add a “[defaults]” section with “host_key_checking = False” to your “~/.ansible.cfg” file
  5. Now ping the hosts: “ansible all -m ping”
  6. Pat yourself on the back, you’re done.
  7. To show off:
    1. touch all machines: “ansible all -a '/bin/echo hello'”
    2. look at the types of Linux: “ansible all -a 'uname -a'”

Further integration work can make this even more powerful.  

I’d like to see OpenCrowbar generate the Ansible inventory file from the discovery data and to map Ansible groups from deployments.  Crowbar could also call Ansible directly to use playbooks or even do a direct hand-off to Tower to complete an install without user intervention.
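
To make that concrete, here is a purely hypothetical sketch of the kind of inventory such an integration could emit, with groups derived from Crowbar deployments (the group names are invented; the addresses match the range used above):

# hypothetical output of a Crowbar-to-Ansible inventory generator
cat > /etc/ansible/hosts <<'EOF'
[deployment_system]
192.168.124.81 ansible_ssh_user=root

[deployment_test]
192.168.124.82 ansible_ssh_user=root
192.168.124.83 ansible_ssh_user=root
EOF

# the usual ad-hoc commands then work per group, e.g.:
ansible deployment_test -m ping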

Wow, that would be pretty handy!   If you think so too, please join us in the OpenCrowbar community.