DNS is critical – getting physical ops integrations right matters

Why DNS? Maintaining DNS is essential to scale ops.  It’s not as simple as naming servers because each server will have multiple addresses (IPv4, IPv6, teams, bridges, etc.) on multiple NICs depending on the system’s function and applications. Plus, errors in DNS are hard to diagnose.

Names matter. I love talking about the small Ops things that make a huge impact on the quality of automation – things like automatically building a Squid proxy cache infrastructure.

Today, I get to rave about the DNS integration that just surfaced in the OpenCrowbar code base. RackN CTO Greg Althaus just completed work that incrementally updates DNS entries as new IPs are added into the system.

Why is that a big deal?  There are a lot of names & IPs to manage.

In physical ops, every time you bring up a physical or virtual network interface, you are assigning at least one IP to that interface. For OpenCrowbar, we are assigning two addresses: IPv4 and IPv6.  Servers generally have three or more active interfaces (e.g., BMC, admin, internal, public and storage), so that’s a lot of references.  It gets even more complex when you factor in DNS round robin or other common practices.

Plus mistakes are expensive.  Name resolution is an essential service for operations.

I know we all love memorizing IPv4 addresses (just wait for IPv6!), so accurate naming is essential.  OpenCrowbar already aligns the address 4th octet (Admin .106 goes to the same server as BMC .106), but that’s not always practical or useful.  This is not just a Day 1 problem – DNS drift or staleness becomes an increasingly challenging problem when you have to reallocate IP addresses.  The simple fact is that registering IPs is not the hard part of this integration – it’s the flexible and dynamic updates.

What DNS automation did we enable in OpenCrowbar?  Here’s a partial list:

  1. recovery of names and IPs when interfaces and systems are decommissioned
  2. use of flexible naming patterns so that you can control how the systems are registered
  3. ability to register names in multiple DNS infrastructures
  4. ability to understand sub-domains so that you can map DNS by region
  5. ability to register the same system under multiple names
  6. wildcard support for CNAMEs
  7. ability to create a DNS round-robin group and keep it updated

But there’s more! The work includes integrations with both BIND and PowerDNS. Since BIND does not have an API that allows incremental additions, Greg added a Golang service that wraps BIND and provides incremental updates and deletes.
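
Neither the naming-pattern syntax nor the Golang wrapper is shown here, but to make the incremental-update idea concrete, here is a minimal Python sketch of the kind of registration calls involved.  The pattern format, the wrapper’s endpoint and its JSON payload are my own assumptions for illustration – they are not the actual OpenCrowbar or RackN API.

```python
# Illustrative only: a hypothetical client for an incremental DNS update
# service (the real OpenCrowbar API and Golang BIND wrapper differ).
import json
import urllib.request

DNS_WRAPPER = "http://admin.example.com:6754"   # assumed wrapper endpoint


def expand_name(pattern, **fields):
    """Expand a simple naming pattern such as '{role}-{node}.{domain}'."""
    return pattern.format(**fields)


def register(name, value, rrtype):
    """Send one incremental add; the wrapper applies it to BIND or PowerDNS."""
    payload = json.dumps({"name": name, "type": rrtype, "value": value})
    req = urllib.request.Request(
        f"{DNS_WRAPPER}/records", data=payload.encode(),
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status


# Register IPv4 and IPv6 for the admin NIC of node 106, plus a round-robin alias.
node = expand_name("{role}-{node}.{domain}",
                   role="admin", node="d06", domain="dc1.example.com")
register(node, "192.168.124.106", "A")
register(node, "2001:db8:124::106", "AAAA")
register("api.dc1.example.com", "192.168.124.106", "A")  # round-robin member
```

Decommissioning an interface is the mirror image – a delete for each record – and that churn is exactly what makes static zone files painful.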

When we talk about infrastructure ops automation and ready state, this is the type of deep integration that makes a difference and is the hallmark of the RackN team’s ops focus with RackN Enterprise and OpenCrowbar.

Talking Functional Ops & Bare Metal DevOps with vBrownBag [video]

Last Wednesday (3/11/15), I had the privilege of talking with the vBrownBag crowd about Functional Ops and bare metal deployment.  In this hour, I talk about how functional operations (FuncOps) works as an extension of ready state.  FuncOps is a critical concept for providing abstractions to scale heterogeneous physical operations.

Timing for this was fantastic since we’d just worked out ESXi install capability for OpenCrowbar (it will be exposed for work starting on Drill, the next Crowbar release cycle).

Here’s the brown bag:

If you’d like to see a demo, I’ve got hours of them posted:

Video Progression

Crowbar v2.1 demo: Visual Table of Contents [click for playlist]

Art Fewell and I discuss DevOps, SDN, Containers & OpenStack [video + transcript]

A little while back, Art Fewell and I had two excellent discussions about general trends and challenges in the cloud and scale data center space.  Due to technical difficulties, the first (and funnier) discussion was lost forever to the NSA archives, but the second survived!

The video and transcript were just posted to Network World as part of Art’s ongoing interview series.  It was an action-packed hour, so I don’t want to re-post the transcript here.  I thought selected quotes (under the video) were worth calling out to whet your appetite for the whole tamale.

My highlights:

  1. .. partnering with a start-up was really hard, but partnering with an open source project actually gave us a lot more influence and control.
  2. Then we got into OpenStack, … we [Dell] wanted to invest our time and that we could be part of and would be sustained and transparent to the community.
  3. Incumbents are starting to be threatened by these new opened technologies … that I think levels of playing field is having an open platform.
  4. …I was pointing at you and laughing… [you’ll have to see the video]
  5. docker and containerization … potentially is disruptive to OpenStack and how OpenStack is operating
  6. You have to turn the crank faster and faster and faster to keep up.
  7. Small things I love about OpenStack … vendors are learning how to work in these open communities. When they don’t do it right they’re told very strongly that they don’t.
  8. It was literally a Power Point of everything that was wrong … [I said,] “Yes, that’s true. You want to help?”
  9. …people aiming missiles at your house right now…
  10. With containers you can sell that same piece of hardware 10 times or more and really pack in the workloads and so you get better performance and over subscription and so the utilization of the infrastructure goes way up.
  11. I’m not as much of a believer in that OpenStack eats the data center phenomena.
  12. First thing is automate. I’ve talked to people a lot about getting ready for OpenStack and what they should do. The bottom line is before you even invest in these technologies, automating your workloads and deployments is a huge component for being successful with that.
  13. Now, all of sudden the SDN layer is connecting these network function virtualization ..  It’s a big mess. It’s really hard, it’s really complex.
  14. The thing that I’m really excited about is the service architecture. We’re in the middle of doing on the RackN and Crowbar side, we’re in the middle of doing an architecture that’s basically turning data center operations into services.
  15. What platform as a service really is about, it’s about how you store the information. What services do you offer around the elastic part? Elastic is time based, it’s where you’re manipulating in the data.
  16. RE RackN: You can’t manufacture infrastructure but you can use it in a much “cloudier way”. It really redefines what you can do in a datacenter.
  17. That abstraction layer means that people can work together and actually share scripts
  18. I definitely think that OpenStack’s legacy will more likely be the community and the governance and what we’ve learned from that than probably the code.

too easy to bare metal? Ansible just works with OpenCrowbar

I’ve talked before about how OpenCrowbar distributes SSH keys automatically as part of its deployment process.  Now, it’s time to unleash some of the subsequent magic!

[5/21 Update: We added the “crowbar-access” role to the Drill release that allows you to inject/remove keys on a per node basis from the API or CLI at any point in the node life-cycle]

If you provision servers with your keys in place, then Ansible will just work with truly minimal configuration (one line in a file!).

Video Demo (steps below):

Here are my steps:

  1. Install OpenCrowbar and run some nodes to ready state [videos]
  2. Install Ansible [simple steps]
  3. Add the host range “192.168.124.[81:83] ansible_ssh_user=root” to the
    “/etc/ansible/hosts” file
  4. If you are really lazy, add a “[defaults]” section with “host_key_checking = False” to your “~/.ansible.cfg” file
  5. Now ping the hosts: “ansible all -m ping”
  6. Pat yourself on the back, you’re done.
  7. To show off:
    1. touch all machines: “ansible all -a '/bin/echo hello'”
    2. look at the types of Linux: “ansible all -a 'uname -a'”

Further integration work can make this even more powerful.  

I’d like to see OpenCrowbar generate the Ansible inventory file from the discovery data and map Ansible groups from deployments.  Crowbar could also call Ansible directly to use playbooks or even do a direct hand-off to Tower to complete an install without user intervention.
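
As a rough sketch of that first idea (the endpoint path and field names below are guesses for illustration, not the documented OpenCrowbar API), a small script could pull node data from the admin server and write an Ansible inventory with deployments mapped to groups.

```python
# Sketch: build an Ansible inventory from OpenCrowbar node data.
# The API path and JSON fields below are assumptions, not the documented API.
import json
import urllib.request
from collections import defaultdict

CROWBAR = "http://192.168.124.10:3000"   # assumed admin node API


def get(path):
    with urllib.request.urlopen(CROWBAR + path) as resp:
        return json.load(resp)


groups = defaultdict(list)
for node in get("/api/v2/nodes"):                 # assumed endpoint
    deployment = node.get("deployment", "ungrouped")
    address = node.get("address")                 # assumed admin-network IP
    if address:
        groups[deployment].append(f"{address} ansible_ssh_user=root")

with open("/etc/ansible/hosts", "w") as inventory:
    for group, hosts in sorted(groups.items()):
        inventory.write(f"[{group}]\n" + "\n".join(hosts) + "\n\n")
```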

Wow, that would be pretty handy!   If you think so too, please join us in the OpenCrowbar community.

Research showing that Short Lived Servers (“mayflies”) create efficiency at scale [DATA REQUESTED]

Last summer, Josh McKenty and I extended the puppies and cattle metaphor to limited-life cattle we called “mayflies.” It was an attempt to help drive the cattle mindset (I think of it as social engineering, or maybe PsychOps) by forcing churn. I’ve come to think of it as a step in between cattle and chaos monkeys (see Adrian Cockcroft).

While our thoughts were mainly on ops patterns, I’ve heard that there could be a real operational benefit from encouraging this behavior. The increased turnover in the environment improves scheduler optimization, planned load drains and coping with platform/environment migration.

Now we have a chance to quantify this benefit: a college student (disclosure: he’s my son) has created a data center emulation to see if Mayflies help with utilization. His model appears to work.
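
His emulation is far more detailed, but the skeleton of this kind of model is simple, and it may help explain the data he is asking for below: VM requests arrive, get placed first-fit on a fixed pool of servers, and expire after their “mayfly” lifespan while the model tracks utilization.  Everything in this sketch (sizes, counts, lifespans) is invented for illustration – it is not Alex’s code.

```python
# Toy skeleton of a mayfly emulation (illustrative only, not Alex's model).
import random


def emulate(lifespan=100, steps=2000, servers=20, capacity=32, seed=1):
    random.seed(seed)
    used = [0] * servers            # RAM units in use per server
    running = []                    # (server_index, size, expires_at)
    samples = []
    for t in range(steps):
        # retire VMs whose mayfly lifespan is up
        for server, size, _ in [vm for vm in running if vm[2] <= t]:
            used[server] -= size
        running = [vm for vm in running if vm[2] > t]
        # one new request per step: random size, first-fit placement
        # (requests that do not fit anywhere are simply dropped)
        size = random.choice([2, 4, 8, 16])
        for server in range(servers):
            if used[server] + size <= capacity:
                used[server] += size
                running.append((server, size, t + lifespan))
                break
        samples.append(sum(used) / (servers * capacity))
    return sum(samples) / steps


print("average pool utilization:", round(emulate(lifespan=100), 3))
```

The interesting questions come from replacing the invented arrival rates, sizes and lifespans with real traces – which is exactly the data he is requesting.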

Now, he needs some real-world data.  Here’s his request for assistance [note: he needs data by 1/20 to be included in this term]:

Hello!

I am Alexander Hirschfeld, a freshman at Rose-Hulman Institute of Technology. I am working on an independent study about Mayflies, a new idea in virtual machine management in cloud computing. Part of this management is load balancing and resource allocation for virtual machines across a collection of servers. The emulation that I am working on needs a realistic set of data to be the most accurate when modeling the results of using the methods outlined by the theory of mayflies.

Mayflies are an extension of the puppies versus cattle approach to machines; they are the extreme version of cattle as they have a known, limited lifespan, such as 7 days. This requires the users of the cloud to build inherently more automated and fault-resistant applications. If you could send me a collection of the requests for new virtual machines (per standard unit of time and their requested specs/size), as well as an average lifetime for the virtual machines (or a graph or list of designated/estimated lifetimes), and a basic summary of the collection of servers running the virtual machines (number, RAM, cores), I would be better able to understand how Mayflies can affect a cloud.

Thanks,
Alexander Hirschfeld, twitter: @d-qoi

Needless to say, I’m really excited about the progress on demonstrating some of the impact of this practice and am looking forward to posting about his results in the near future.

If you post in the comments, I will make sure you are connected to Alex.

Nextcast #14 Transcription on OpenStack & Crowbar > “we can’t hand out trophies to everyone”

Last week, I was a guest on the NextCast OpenStack podcast hosted by Niki Acosta (EMC) [Jeff Dickey could not join].   I’ve taken some time to transcribe highlights.

We had a great discussion about OpenStack, Ops and Crowbar.  I appreciate Niki’s insightful questions and the opportunity to share my opinions.  I feel that we covered years of material in just one hour and I appreciate the opportunity to appear on the podcast.

Video from full post (youtube) and the audio for download.

Plus, a FULL TRANSCRIPT!  Here’s my NextCast #14 Short Transcription

The objective of this transcription is to help navigate the recording, not replace it.  I did not provide complete context for remarks.

  • 04:30 Birth of Crowbar (to address Ops battle scars)
  • 08:00 The need for repeatable Ready State baseline to help community work together
  • 10:30 Should hardware matter in OpenStack? It has to – details and topology matter, not vendor.
  • 11:20 OpenCompute – people are trying to open source hardware design
  • 11:50 When you are dealing with hardware, it matters. You have to get it right.
  • 12:40 Customers are hardware heterogeneous by design (and for ops tooling). Crowbar is neutral territory
  • 14:50 It’s not worth telling people they are wrong, because they are not. There are a lot of right ways to install OpenStack
  • 16:10 Sometimes people make expensive choices because it’s what they are comfortable with, and it’s not helpful for me to tell them they are wrong – they are not.
  • 16:30 You get into a weird corner if you don’t tell anyone no. And an equally weird corner if you tell everyone yes.
  • 18:00 The aspiration of having an interoperable cloud was much harder than the actual work to build it
  • 18:30 The community wants to say yes, “bring your code,” but to operators that’s very frustrating because they want to be able to make substitutions
  • 19:30 Thinking that if something is included then it’s required – that’s not clear
  • 19:50 Interlock Dilemma [see my back reference]
  • 20:10 Orwell Animal Farm reference – “all animals equal but pigs are more equal”
  • 22:20 Rob defines DefCore, it’s not big and scary
  • 22:35 DefCore is about commercial use, not running the technical project
  • 23:35 OpenStack had to make money for the companies that are paying for the developers who participate… they need to see ROI
  • 24:00 OpenStack is an infrastructure project, stability is the #1 feature
  • 24:40 You have to give a reason why you are saying no and a path to yes
  • 25:00 DefCore is test driven: quantitative results
  • 26:15 Balance between whole project and parts – examples are Swiftstack (wants Object only) and Dreamhost (wants Compute only)
  • 27:00 DefCore created core components vs platform levels
  • 27:30 No vendor has said they can implement DefCore without some effort
  • 28:10 We have outlets for vendors who do not want to implement the process
  • 28:30 The Board is not in a position to make a technical call about what’s in; we had to build a process for community input
  • 29:10 We had to define something that could say, “this is it and we have to move on”
  • 29:50 What we want is for people to start with the core and then bring in the other projects. We want to know what people are adding so we can make that core in time
  • 30:10 This is not a recommendation, it’s a base.
  • 30:35 OpenStack is a bubble – it does not help if we just get together to pat each other on the back; we want to have a thriving ecosystem
  • 31:15 Question: “have vendors been selfish”
  • 31:35 Rob rephrased it as “does OpenStack have a ‘tragedy of the commons’ problem?”
  • 32:30 We need to make sure that everyone is contributing back upstream
  • 32:50 Benefit of a Benevolent Dictator is that they can block features unless community needs are met
  • 33:10 We have NOT made it clear where companies should be contributing to the community. We are not doing a good job directing community efforts
  • 33:45 Hidden Influencers becomes OpenStack Product group
  • 34:55 Hidden Influencers were not connecting at the summit in a public way (like developers were)
  • 35:20 Developers could not really make big commitments of their time without the buy in from their managers (product and line)
  • 35:50 Subtle selfishness – focusing on your own features can disrupt the whole release where things would flow better if they helped others
  • 37:40 Rob was concerned that there was a lot of drift between developers and company’s product descriptions
  • 38:20 BYLAWS CHANGES – vote! here’s why we need to change
  • 38:50 Having whole projects designated as core sucks – code in core should change more slowly. Innovation at the core will break interoperability
  • 39:40 Hoping that core will help product managers understand where they are using the standard and adding values
  • 41:10 All babies are ugly > with core, that’s good. We are looking for the grown ups who can do work and deliver value. Babies are things you nurture and help grow because they have potential.
  • 42:00 We undermine our credibility in the community when we talk about projects that are babies as if they were ready.
  • 43:15 DefCore’s job was to help pick projects. If everyone is core then we look like a youth soccer team where everyone is getting a trophy
  • 44:30 Question: “What do you tell to users to instill confidence in OpenStack”
  • 44:50 first thing: focus on operations and automation. Table stakes (for any cloud) is getting your deployments automated. Puppies vs Cattle.
  • 45:25 People who were successful with early OpenStack were using automated deployments against the APIs.
  • 46:00 DevOps is a fundamental part of cloud computing – if you’re hand-built and not automated then you are old school IT.
  • 46:40 Niki references Gartner “Bimodal IT” [excellent reference, go read it!]
  • 47:20 VMWare is a great crutch for OpenStack. We can use VMWare for the puppies.
  • 47:45 OpenStack is not going to run on every server (perhaps that’s heresy), but it does not make sense for every workload
  • 48:15 One size does not fit all – we need to be good at what we’re good at
  • 48:30 OpenStack needs to focus on doing something really well. That means helping people who want to bring automated workloads into the cloud
  • 49:20 Core was about sending a signal about what’s ready and people can rely on
  • 49:45 Back in 2011, I was saying OpenStack was ready for people who would make the operational investment
  • 50:30 We use Crowbar because it makes it easier to do automated deployments for infrastructure like Hadoop and Ceph where you want access to the physical media
  • 51:00 We should be encouraging people to use OpenStack for its use cases
  • 51:30 Existential question for OpenStack: are we a suite or product. The community is split here
  • 51:30 In comparing with Amazon, does OpenStack have to implement it or build an ecosystem to compete
  • 53:00 As soon as you make something THE OpenStack project (like Heat) you are sending a message that the alternates are not welcome
  • 54:30 OpenStack ends up in a trap if we pick a single project and make it the way that we are going to do something. New implementations are going to surface from WITHIN the projects and we need to be ready for that.
  • 55:15 new implementations are coming, we have to be ready for that. We can make ourselves vulnerable to splitting if we do not prepare.
  • 56:00 API vs Implementation? This is something that splits the community. Ultimately we want to be an API spec, but we are not ready for that. We have a lot of work to do first using the same code base.
  • 56:50 DefCore has taken a balanced approach using our diversity as a strength
  • 57:20 Bylaws did not allow for enough flexibility for what is core
  • 59:00 We need voters for the quorum!
  • 59:30 Rob recommended Rocky Grober (Huawei) and Shamail Tahir (EMC) for future shows

Delicious 7 Layer DIP (DevOps Infrastructure Provisioning) model with graphic!

Applying architecture and computer science principles to infrastructure automation helps us build better controls.  In this post, we create an OSI-like model that helps decompose the ops environment.

The RackN team discussions about “what is Ready State” have led to some interesting realizations about physical ops.  One of the most critical has been splitting the operational configuration (DNS, NTP, SSH Keys, Monitoring, Security, etc) from the application configuration.

Interactions between these layers are much more dynamic than developers and operators expect.

In cloud deployments, you can ask for the virtual infrastructure to be configured in advance via the IaaS and/or golden base images.  In hardware, the environment build-up needs to be more incremental because variations in physical infrastructure and operations have to be accommodated.

Greg Althaus, Crowbar co-founder, and I put together this 7 layer model (it started as 3 and grew) because we needed to be more specific in discussions about provisioning and upgrade activity.  The system view helps explain how layers 6 and 7 operate at the system level.

7 Layer DIP

The Seven Layers of our DIP:

  1. shared infrastructure – the base layer is about the interconnects between the nodes.  In this model, we care about the specific linkage to the node: VLAN tags on the switch port, which switch is connected, and which PDU ID controls its power.
  2. firmware and management – nodes have substantial driver (RAID/BIOS/IPMI) software below the operating system that must be configured correctly.   In some cases, these configurations have external interfaces (BMC) that require out-of-band access while others can only be configured in pre-install environments (I call that side-band).
  3. operating system – while the operating system is critical, operators are striving to keep this layer as thin as possible to avoid overhead.  Even so, there are critical security, networking and device mapping functions that must be configured.  Critical local resource management items like mapping media or building network teams and bridges are layer 3 functions.
  4. operations clients – this layer connects the node to the logical data center infrastructure in basic ways like time sync (NTP) and name resolution (DNS).  It’s also where more sophisticated operators configure things like distributed cache, centralized logging and system health monitoring.  Configuration management agents like Chef, Puppet or Saltstack are installed at the “top” of this layer to complete ready state.
  5. applications – once the baseline is set up, this is the unique workload.  It can range from platforms for other applications (like OpenStack or Kubernetes) to the software itself (like Ceph or Hadoop).
  6. operations management – the external system references for layer 4 must be factored into the operations model because they often require synchronized configuration.  For example, registering a server name and IP addresses in DNS, updating an inventory database or adding its thresholds to a monitoring infrastructure.  For scale and security, it is critical to keep the node configuration (layer 4) constantly synchronized with the central management systems.
  7. cluster coordination – no application stands alone; consequently, actions from layer 5 nodes must be coordinated with other nodes.  This ranges from database registration and load balancing to complex upgrades with live data migration. Working in layer 5 without layer 7 coordination creates unmanageable infrastructure.

This seven layer operations model helps us discuss which actions are required when provisioning a scale infrastructure.  In my experience, many developers want to work exclusively in layer 5 (applications) and overlook the need to have a consistent and managed infrastructure in all the other layers.  We enable this thinking in cloud and platform as a service (PaaS), and that helps improve developer productivity.
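
To keep those discussions concrete, here is a rough sketch of the model as data.  The layer names come from the list above; the node/system split reflects the point that layers 6 and 7 operate at the system level.  It is illustrative only – not actual Crowbar code.

```python
# Sketch only: the seven DIP layers as data, annotated with example concerns
# and whether they are node-local or system-level.
DIP_LAYERS = {
    1: ("shared infrastructure",   "node",   "switch ports, VLANs, PDU control"),
    2: ("firmware and management", "node",   "RAID/BIOS/IPMI, BMC out-of-band"),
    3: ("operating system",        "node",   "thin OS, security, NIC teams/bridges"),
    4: ("operations clients",      "node",   "NTP, DNS, logging, CM agents (ready state)"),
    5: ("applications",            "node",   "OpenStack, Kubernetes, Ceph, Hadoop"),
    6: ("operations management",   "system", "DNS, inventory, monitoring kept in sync"),
    7: ("cluster coordination",    "system", "registration, balancing, live upgrades"),
}


def layers_with_scope(scope):
    """Return layer names with the given scope, in provisioning order."""
    return [name for _, (name, layer_scope, _) in sorted(DIP_LAYERS.items())
            if layer_scope == scope]


print(layers_with_scope("system"))   # ['operations management', 'cluster coordination']
```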

We cannot overlook the other layers in physical ops; however, working to ready state helps us create more cloud-like boundaries.  Those boundaries are a natural segue to my upcoming post about functional operations (older efforts here).