Talking Functional Ops & Bare Metal DevOps with vBrownBag [video]

Last Wednesday (3/11/15), I had the privilege of talking with the vBrownBag crowd about Functional Ops and bare metal deployment.  In this hour, I talk about how functional operations (FuncOps) works as an extension of ready state.  FuncOps is a critical concept for providing abstractions to scale heterogeneous physical operations.

Timing for this was fantastic since we’d just worked out ESXi install capability for OpenCrowbar (it will be exposed for work starting in Drill, the next Crowbar release cycle).

Here’s the brown bag:

If you’d like to see a demo, I’ve got hours of them posted:

Video Progression

Crowbar v2.1 demo: Visual Table of Contents [click for playlist]

too easy to bare metal? Ansible just works with OpenCrowbar

I’ve talked before about how OpenCrowbar distributes SSH keys automatically as part of its deployment process.  Now, it’s time to unleash some of the subsequent magic!

[5/21 Update: We added the “crowbar-access” role to the Drill release that allows you to inject/remove keys on a per node basis from the API or CLI at any point in the node life-cycle]

If you provision servers with your keys in place, then Ansible will just work with truly minimal configuration (one line in a file!).

Video Demo (steps below):

Here are my steps:

  1. Install OpenCrowbar and run some nodes to ready state [videos]
  2. Install Ansible [simple steps]
  3. Add the host range “192.168.124.[81:83]  ansible_ssh_user=root” to the “/etc/ansible/hosts” file
  4. If you are really lazy, add a “[defaults]” section with “host_key_checking = False” to your “~/.ansible.cfg” file (both files are sketched after this list)
  5. now ping the hosts, “ansible all -m ping”
  6. pat yourself on the back, you’re done.
  7. to show off:
    1. touch all machines: ansible all -a "/bin/echo hello"
    2. look at types of Linux: ansible all -a "uname -a"
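
For reference, here’s a minimal sketch of the two files and the smoke-test commands from the steps above (the IP range and root user come straight from the post; adjust both to your environment):

    # /etc/ansible/hosts -- inventory entry for the provisioned nodes
    192.168.124.[81:83]  ansible_ssh_user=root

    # ~/.ansible.cfg -- skip interactive host key prompts (lab use only!)
    [defaults]
    host_key_checking = False

    # smoke tests
    ansible all -m ping
    ansible all -a "/bin/echo hello"
    ansible all -a "uname -a"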

Further integration work can make this even more powerful.  

I’d like to see OpenCrowbar generate the Ansible inventory file from the discovery data and to map Ansible groups from deployments.  Crowbar could also call Ansible directly to use playbooks or even do a direct hand-off to Tower to complete an install without user intervention.
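
To sketch that first idea: assuming the admin node serves its node list as JSON (the endpoint, port, and field names below are my assumptions for illustration, not a documented API), generating an inventory could be a few lines of shell:

    #!/bin/bash
    # Hypothetical sketch: turn OpenCrowbar discovery data into an Ansible
    # inventory with one Ansible group per Crowbar deployment. The admin
    # address, the /api/v2/nodes endpoint, and the "deployment"/"name"
    # fields are assumptions -- check your install's API before relying on this.
    ADMIN="http://192.168.124.10:3000"
    curl -s "$ADMIN/api/v2/nodes" | jq -r '
      group_by(.deployment)[] |
      "[\(.[0].deployment)]",
      (.[] | "\(.name) ansible_ssh_user=root")
    ' > /etc/ansible/hosts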

Wow, that would be pretty handy!   If you think so too, please join us in the OpenCrowbar community.

OpenCrowbar Multi-OS deploy from Docker Admin

Last week I talked about OpenCrowbar reaching a critical milestone and this week I’ve posted two videos demonstrating how the new capabilities work.

The first video highlights the substantial improvements we’ve made testing and developing OpenCrowbar.  By using Docker containers, OpenCrowbar is fast and reliable to setup and test.  We’ve dramatically streamlined the development environment and consolidated the whole code base into logical groups with logical names.

The second video shows off OpenCrowbar doing its deployment work (including setting up Docker nodes!).  This demonstration goes through the new node discovery and install process.  The new annealing process is very transparent and gives clear and immediate feedback about the entire discovery and provisioning process.  I also show how to configure networks (IPv4 and IPv6) and choose which operating system gets installed.

Note: In the videos, I demonstrate using our Docker install process.  Part of moving from Crowbar v2 (in the original Crowbar repo) to OpenCrowbar was so that we could also organize the code for an RPM install.  In either install process, OpenCrowbar no longer uses bloated ISOs with all components pre-cached so you must be connected to the Internet to complete the installation.

Do Be Dense! Dell C8000 unit merges best of bladed and rackable servers

“Double wide” is not a term I’ve commonly applied to servers, but that’s one of the cool things about this new class of servers that Dell, my employer, started shipping today.

My team has been itching for the chance to start cloud and big data reference architectures using this super dense and flexible chassis. You’ll see it included in our next Apache Hadoop release, and we’ve already got customers who are making it the foundation of their deployments (Texas Advanced Computing Center case study).

If you’re tracking the latest big data & cloud hardware then the Dell PowerEdge C8000 is worth some investigation.

Basically, the Dell C8000 is a chassis that holds a flexible configuration of compute or storage sleds. It’s not a blade frame because the sleds minimize shared infrastructure. In our experience, cloud customers like the dedicated i/o and independence of sleds (as per the Bootstrapping clouds white paper). Those attributes are especially well suited for Hadoop and OpenStack because they support a “flat edges” and scale-out design. While i/o independence is valued, we also want shared power infrastructure and density for efficiency reasons. Using a chassis design seems to capture the best of both worlds.

The novelty for the Dell PowerEdge C8000 is that the chassis are scary flexible. You are not locked into a pre-loaded server mix.

There is a plethora of sled choices, so you can mix options for power, compute density, and spindle counts. That includes double-wide sleds positively brimming with drives and expanded GPU processors. Drive density is important for big data configurations that are disk-i/o hungry; however, our experience is that customer deployments vary widely based on the planned workload. There are also significant big data trends towards compute-heavy, network-heavy, and balanced hardware configurations. Using the C8000 as a foundation is powerful because it can cater to all of these use-case mixes.

That reminds me! Mike Pittaro (our team’s Hadoop lead architect) did an excellent Deploy Hadoop using Crowbar video.

Interested in more opinions about the C8000? Check out Barton George & David Meyer.

OpenStack Deploy Day generates lots of interest, less coding

Last week, my team at Dell led a world-wide OpenStack Essex Deploy event. Kamesh Pemmaraju, our OpenStack-powered solution product manager, did a great summary of the event results (200+ attendees!). What started as a hack-a-thon for deploy scripts morphed into a stunning 14+ hour event with rotating intro content and an ecosystem showcase (videos).  Special kudos to Kamesh, Andi Abes, Judd Maltin, Randy Perryman & Mike Pittaro for leadership at our regional sites.

Clearly, OpenStack is attracting a lot of interest. We’ve been investing time in content to help people who are curious about OpenStack to get started.

While I’m happy to be fueling the OpenStack fervor with an easy on-ramp, our primary objective for the Deploy Day was to collaborate on OpenStack deployments.

On that measure, we have room for improvement. We had some great discussions about how to handle upgrades and market drivers for OpenStack; however, we did not spend the time improving Essex deployments that I had hoped for. I know it’s possible – I’ve talked with developers in the Crowbar community who want this.

If you wanted more expert interaction, here are some of my thoughts for future events.

  • Expert track did not get to deploy coding. I think that we need to focus even more tightly on Crowbar deployments. That means having a Crowbar hack-a-thon with an OpenStack focus instead of vice versa.
  • Efforts to serve OpenStack n00bs did not protect time for experts. If we offer expert sessions then we won’t try to have parallel intro sessions. We’ll simply have to direct novices to the homework pages and videos.
  • Combining on-site and on-line is too confusing. As much as I enjoy meeting people face-to-face, I think we’d have a more skilled audience if we kept it online only.
  • Connectivity! Dropped connections, sigh.
  • Better planning for videos (not by the presenters) to make sure that we have good results on the expert track.
  • This event was too long. It’s just not practical to serve Europe, the US, and Asia in a single event. I think that 2-3 hours is a much more practical maximum. 10am-12pm Eastern or 6-8pm Pacific would be much more manageable.

Do you have other comments and suggestions? Please let me know!

Austin OpenStack Meetup: Keystone & Knife (2/20 notes via Greg Althaus)

I could not make it to the recent Austin OpenStack Meetup, but Greg Althaus generously let me post his notes from the event.

Background

Matt Ray talks about Chef

Matt Ray from Opscode presented some of the work with Chef and OpenStack. He talked about the three main Chef repos floating around: Anso’s original cookbook set, which is the basis for the Crowbar cookbooks (the second set), and the emerging set of cookbooks in OpenStack proper. The third one is interesting and is what he plans to keep working on to become his public OpenStack cookbooks. Those are an amalgamation of Smokestack, RCB, Anso improvements, and Crowbar’s.

He then demoed his knife plugin (slideshare) to build OpenStack virtual servers using the OpenStack API. This is nice and works against TryStack.org (previously “Free Cloud”) and RCB’s demo cloud. All of that is on his github repo with instructions on how to build and use it. Matt and I talked about trying to get that into our Crowbar distro.

There were some questions about flow and choice of OpenStack API versus Amazon EC2 API because there was already an EC2 knife set of plugins.

Ziad Sawalha talks about Keystone

Ziad Sawalha is the PTL (Project Technical Lead) for Keystone. He works for Rackspace out of San Antonio. He drove up for the meeting.

He split his talk into two pieces, Incubation Process and Keystone Overview. He asked who was interested in what and focused his talk more towards overview than incubation.

Some key take-aways:

  • Keystone comes from Rackspace’s strong, flexible, and scalable API. It started as a known quantity from his perspective.
  • Community trusted nothing his team produced from an API perspective
  • Community is python or nothing
    • His team was ignored until they had a python prototype implementing the API
    • At this point, comments on API came in.
  • Churn in API caused problems with implementation and expectations around the close of Diablo.
    • Because comments were late, changes occurred.
    • The official implementation lagged and was slow to arrive.
  • The API has been stable since Diablo final, but the code is changing. That is good and shows the strength of the API.
  • Side note from Greg: Keystone represents to me the power of API over code. You can have innovation around the implementation as long as all the implementations share a common groundwork to build on, which is the API specification. The replacement of the original Keystone code with the Keystone Light code base is an example of this. The only reason this was possible is that the API was sound and documented.  (Rob’s post on this)

Ziad spent the rest of his time talking about the workflow of Keystone and its API points (a token request is sketched after the list):

  • Client to Keystone, Keystone to Client for initial auth token
  • Client to Middleware API for the services to have a front.
  • Middleware to Keystone to verify and establish identity.
  • Middleware to Service to pass identity
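
To make the first leg concrete, here’s roughly what an Essex-era Keystone v2.0 token request looked like (the endpoint, tenant, and credentials below are placeholders):

    # Client to Keystone: request the initial auth token (v2.0 API)
    curl -s http://keystone.example.com:5000/v2.0/tokens \
      -H "Content-Type: application/json" \
      -d '{"auth": {"tenantName": "demo",
                    "passwordCredentials": {"username": "demo", "password": "secret"}}}'
    # The token id in the response then rides along on every service call in
    # the X-Auth-Token header; the service's middleware validates it with
    # Keystone before passing the request and established identity to the service.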

Not many details other than flow and flexibility. He stressed that the API design separated protocol from actions and data at all the layers. This allows for future variations and innovations while maintaining the APIs.

Ziad talked about the state of Essex.

  • Planned
    • RBAC (aka Role Based Access Control)
    • Stability
    • Many backends
  • Actual
    • Code replacement Keystone Light
    • Stability
    • LDAP backend
    • SQL backend

Folsom work:

  • RBAC
  • Stability
  • AD backend
  • Another backend
  • Federation was planned but will most likely be pushed to G
    • Federation is the ability for multiple independent Keystones to operate (bursting use case)
    • Dependent upon two other federation components (networking and billing/metering)

Early crop of Crowbar 1.3 features popping up

My team at Dell is still figuring out some big items for the 1.3 release; however, some things were just added that are worth calling out.

  1. Ubuntu 11.04 support!   Thanks to Justin Shepherd from Rackspace Cloud Builders!
  2. Alias names for nodes in the UI
  3. User managed node groups in the UI
  4. Ability to pre-populate the alias, description and group for a node (not integrated with DNS yet)
  5. Hadoop is working again – we addressed the missing Ganglia repo issue.  Thanks to Victor Lowther.
For items 2 – 4, I made a short video tour: Node Alias & Group

Also, I’ve spun new open source ISOs with the new features.  User beware!

Barclamps: now with added portability!

I had a question about moving barclamps between solutions.  Since Victor just changed the barclamp build to create a tar for each barclamp (with the debs/rpms), I thought it was the perfect time to explain the new feature.

You can find the barclamps on the Crowbar ISO under “/dell/barclamps” and you can install the TAR onto a Crowbar system using “./barclamp_install foo.tar.gz” where foo is the name of your barclamp.
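
In practice, that looks something like the following sketch (the mount point and the “foo” barclamp name are examples, not fixed values):

    # copy a barclamp tar off the ISO and install it on the admin node
    mount -o loop crowbar.iso /mnt
    ls /mnt/dell/barclamps                  # one tar per barclamp, debs/rpms included
    ./barclamp_install /mnt/dell/barclamps/foo.tar.gz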

Here’s a video of how to find and install barclamp tars:

Note: while you can install OpenStack into a Hadoop system, that combination is NOT tested.  We only test OpenStack on Ubuntu 10.10 and Hadoop on RHEL 5.7.   Community help in expanding support is always welcome!

Hadoop Crowbar released to open source! (plus AN HOUR of videos!)

I’m proud to announce that my team at Dell has open sourced our Apache Hadoop barclamps!  This release follows our Dell | Cloudera Hadoop Solution open source commitment from Hadoop World earlier this month.

As part of this release, we’ve created nearly AN HOUR of video content showing the Hadoop Barclamps in action, installing Crowbar (on CentOS), building Crowbar ISOs in the cloud and specialized developer focused builds.

If you want to talk to the Crowbar team, we’re attending events in Boston 11/29, Seattle 11/30, and Austin 12/8.

Here are links to the videos:

More Hadoop perspectives from Dell:  Joseph George on what it means and  Barton George‘s backgrounder about barclamps.