August 25 – Weekly Recap of All Things Site Reliability Engineering (SRE)

Welcome to the weekly post of the RackN blog recap of all things SRE. If you have any ideas for this recap or would like to include content, please contact us at info@rackn.com or tweet Rob (@zehicle) or RackN (@rackngo).

SRE Items of the Week



What is “Site Reliability Engineering?” 
https://landing.google.com/sre/interview/ben-treynor.html 

In this interview, Ben Treynor shares his thoughts with Niall Murphy about what Site Reliability Engineering (SRE) is, how and why it works so well, and the factors that differentiate SRE from operations teams in industry. READ MORE

Podcast: A Nice Mix of Ansible and Digital Rebar
http://bit.ly/2vkBYEe 

Follow our new L8ist Sh9y Podcast on SoundCloud at https://soundcloud.com/user-410091210.

Digital Rebar Mascot Naming 

Next week the Digital Rebar community will be finalizing the name for our mascot.

Several possible names are listed on a recent blog post for your consideration. Please tweet to @DigitalRebar any ideas you have as we will be choosing a name next week via a Twitter poll.

Digital Rebar v3 Provision
http://rebar.digital/

Digital Rebar is the open, fast and simple data center provisioning and control scaffolding designed with a cloud native architecture.

Our extensible stand-alone DHCP/PXE/IPXE service has minimal overhead so it can be installed and begin provisioning in under 5 minutes on a laptop, RPi or switch. From there, users can add custom or pre-packaged workflows for full life-cycle automation using our API and CLI or a community UX.

A cloud native bare metal approach provides API-driven infrastructure-as-code automation without locking you into a specific hardware platform, operating system or configuration model.

For physical infrastructure provisioning, Digital Rebar replaces Cobbler, Foreman, MaaS or similar tools with the added bonus of being able to include simple control workflows for RAID, IPMI and BIOS configuration. We also provide event-driven actions via a websockets API and a simple plug-in model. By design, Digital Rebar is not opinionated about scripting tools so you can mix and match Chef, Puppet, Ansible, SaltStack and even Bash.
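
As a rough illustration of the API-driven model described above, here is a minimal sketch that lists machines over the Provision HTTP API. The endpoint port (8092), path prefix (/api/v3), and bearer-token header are assumptions based on common defaults rather than authoritative values; check the Digital Rebar documentation for your release before relying on this.

# Hypothetical sketch: list machines from a Digital Rebar Provision endpoint.
# The port, path prefix, and auth scheme below are assumptions -- verify them
# against the Digital Rebar docs for your version.
import requests

DRP_URL = "https://drp.example.local:8092/api/v3"   # assumed endpoint layout
TOKEN = "replace-with-a-real-token"                  # placeholder credential

def list_machines():
    """Ask the provisioner for the machines it currently manages."""
    resp = requests.get(
        f"{DRP_URL}/machines",
        headers={"Authorization": f"Bearer {TOKEN}"},
        verify=False,   # lab installs commonly use self-signed certificates
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for machine in list_machines():
        print(machine.get("Name"), machine.get("Address"))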

Next version: release of v3.1 is anticipated on 9/4/2017.

UPCOMING EVENTS

Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events, please email info@rackn.com.

OTHER NEWSLETTERS

July 28 – Weekly Recap of All Things Site Reliability Engineering (SRE)

Welcome to the weekly post of the RackN blog recap of all things SRE. If you have any ideas for this recap or would like to include content, please contact us at info@rackn.com or tweet Rob (@zehicle) or RackN (@rackngo).

This week, we launched our new RackN website to provide more information on our solutions and services as well as provide customer examples. Click over to our new site and let us know your thoughts.

SRE Items of the Week

Site Reliability Engineer: Don’t fall victim to the bias blind spot
http://sdtimes.com/site-reliability-engineer-dont-fall-victim-to-the-bias-blind-spot/

To ensure websites and applications deliver consistently excellent speed and availability, some organizations are adopting Google’s Site Reliability Engineering (SRE) model. In this model, a Site Reliability Engineer (SRE) – usually someone with both development and IT Ops experience – institutes clear-cut metrics to determine when a website or application is production-ready from a user performance perspective. This helps reduce friction that often exists between the “dev” and “ops” sides of organizations. More specifically, metrics can eliminate the conflict between developers’ desire to “Ship it!” and operations’ desire not to be paged when they are on-call. If performance thresholds aren’t met, releases cannot move forward. READ MORE

Episode 50 – SRE Revisited plus the Challenge of Ops and more with Rob Hirschfeld
http://podcast.discoposse.com/e/ep-50-sre-revisited-plus-the-challenges-of-ops-and-more-with-rob-hirschfeld-zehicle/

This fun chat expands on what we started talking about in episode 42 (http://podcast.discoposse.com/e/ep-42-spiraling-ops-debt-sre-solutions-and-rackn-chat-with-rob-hirschfeld-zehicle/) as we dive into the challenges and potential solutions for thinking and acting with the SRE approach. Big thanks to Rob Hirschfeld from @RackN for sharing his thoughts and experiences from the field on this very exciting subject. LISTEN HERE

Site Reliability Engineering – Operators and Developers Working Together
http://bit.ly/2u7eSmm 

Rob Hirschfeld, Co-Founder and CEO of RackN, provides his thoughts on how operators are equivalent to developers and work together to accomplish the critical task of keeping the infrastructure running and available amid constant changes in the data center.

Subscribe to our new daily DevOps, SRE, & Operations Newsletter https://paper.li/e-1498071701#/
_____________

UPCOMING EVENTS

Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events please email info@rackn.com.

OTHER NEWSLETTERS

Datanauts #89 dives deep on SRE approach and urgency

TL;DR: SRE makes Ops more Dev-like in critical ways such as status equity and tooling approaches.

In Datanauts 089, Chris Wahl and Ethan Banks help me break down the concepts from my “DevOps vs SRE vs Cloud Native” presentation from DevOpsDays Austin last spring. They do a great job exploring the tough topics and concepts from the presentation.  It’s almost like an extended Q&A so you may want to review the slides or recording before diving into the podcast.

Advanced Reading: my follow-up discussion on SRE with the Cloudcast team and my previous Datanauts podcast.

Here are my notes from the podcast:

  • 01:00 “Doing infrastructure in a way that the robots can take over”
  • 01:51 Video where Charity & Rob Debated the SRE term
  • 02:00 History of the SRE term from Google vs Sys Ops – if the site was not up, money was not flowing.  SRE culture fixed pay equity and the career ladder: ops would get automation/dev time, devs would be on the hook for errors
  • 03:00 Google took a systems approach with lots of time for automation and coding
  • 03:20 Finding a 10x improvement in ops.  Go buy the book
  • 04:00 SRE is a new definition of System Op
  • 04:10 The S in SRE could be “system” or a physical location (not just a web site).
  • 05:00 We’re seeing SRE teams showing up in companies of every size.  Replacing DevOps teams (which is a good thing).  Rob is hoping that SRE is replacing DevOps as a job title.  
  • 06:10 Don’t fall for a title change from Sys Op to SRE without actually getting the pay and authority
  • 06:45 Ethan believes that SRE is transforming to have a broad set of responsibilities.  Is it just a new System Admin definition?
  • 07:30 Rob thinks that the SRE expectation is for a much higher level of automation.  There’s a big thinking shift.
  • 08:00 SREs are still operators.  You have to walk the walk to know how to run the system.  Not developers who are writing the platform.
  • 08:30 Chris asks about the Ops technical debt
  • 09:00 We need to make Ops tooling “better enough” – we’re not solving this problem fast enough.  We have to do a better job – Rob talks about the Wannacry event.
  • 10:30 Chris asks how to fix this since complexity is increasing.  Rob plugs Digital Rebar as a way to solve this.
  • 11:00 People are excited about Digital Rebar but don’t have the time to fix the problem.  They are running crisis to crisis so we never get to automation that actually improves things.
  • 12:00 At best, Ops is invisible.  SRE is different because it includes CI/CD with ongoing interactions.  There’s a lot coming with immutable operating systems and servers that are constantly terminated and refreshed.
  • 13:00 The idea that a Linux system has been up for 10 years is an anti-pattern.  Rob would rather have people say that none of their servers have been up for more than a week (because they are constantly refreshed).
  • 13:19 Chris & Ethan – SECTION 1 REVIEW
    • SRE is not new, it’s about moving into a proactive stance (automatically reacting)
    • The power is the buy in so that Ops has ownership of the stack
  • 15:00 SRE vs DevOps vs Cloud Native – not in conflict, but we love to create opposition
  • 15:40 There is a difference, they are not interchangeable.  SRE is a job title, DevOps is a process and Cloud Native is an architecture.
  • 16:30 We need to resist the idea that Cloud Native is a “new shiny” that replaces DevOps. We don’t have to take things away.
  • 17:00 Lean is a process where we’re trying to shorten the flow from ideation to delivery.  Read The Goal [links] and The Phoenix Project [links].
  • 18:00 Bottlenecks (where we’ve added work or delays) really break our pipelines.  
  • 19:00 Ethan adds the insight: if you don’t have small steps, then you don’t really understand your process
  • 20:00 Platform as a Service is not really reducing complexity, we’re just hiding/abstracting it.  That moves the complexity.  We may hide it from developers but may be passing it to the operators.
  • 21:00 Chris asks if this can be mapped to legacy?  Rob agrees that it’s a legacy architectural choice that was made to reduce incremental risk.  Today, we’re trying to make our risk into smaller steps which makes it so that we will have smaller but more frequent breaks.
  • 22:40 The way we deliver systems is changing to require a much faster pace of taking changes
  • 23:00 SREs are data driven so they can feed information back to devs.  They can’t (shouldn’t) walk away from running systems.  This is an investment requirement so we can create data.
  • 24:00 We let a lot of problems lurk below the surface that eventually surface as a critical issue.  Cannot let toothaches turn into abscesses.  SREs should watch systems over time.
  • 25:20 If you are running under-performing systems in the cloud, then you are wasting money.
  • 26:00 Cloud Native, an architecture?  What is it?  It means a ton of things.  For this presentation, Rob made it about 12-factor and API-driven infrastructure.
  • 26:50 “If you are not worried about rising debt then we are in trouble.”  We need to root cause!  If not, problems snowball and operators are just running from fire to fire.  We need to stop having operators be heroes / grenade divers because it’s an anti-pattern.  Predictable systems do not create a lot of interrupts or crises.  Operators should not be event driven.
  • 28:40 Chris & Ethan – SECTION 2 REVIEW
    • Chris: Being data driven combats complexity
    • Ethan: Breaking down processes into smaller units reduces risk.  
  • 30:00 Cloud First is not Cloud Only.  CNCF projects are not VM specific, they are about abstractions that help developers be more productive.  Ideally, the abstractions remove infrastructure because developers don’t want to do any infrastructure.  We should not care about which type of infrastructure we are using.
  • 31:30 The similarity between the concepts is in their common outcomes/values.  Cloud First wants to be infrastructure agnostic.
  • 32:30 Chris asks how important CI/CD should be.  Is it still important in non-Cloud environments?  Rob thinks that Cloud Native may “cloud wash” architectures that are really just as important in traditional infrastructure.
  • 34:00 Cloud Native was a defensive architecture because early cloud was not very good.  CI/CD pipelines would be considered best practices in regular manufacturing. 
  • 35:00 These ideas are really good manufacturing process applied back to IT.  Thankfully, there’s really nothing unexpected from repeatable production.
  • 36:30 Lesson: Pay Equity.  Traditionally operators are not paid as well as developers and that means that we’re giving them less respect.  HiPPO (highest paid person in organization) is a very real effect where you can create a respect gap.
  • 38:00 Lesson: Disrupt Less.  We love the idea of disruption, but disruptions are very expensive and the cost falls disproportionately on the operators.  A change that is small for Developers may have big impacts on operators.  More disruptive changes actually slow down adoption because they increase inertia.  SREs should be able to push back and insist on migration paths.
  • 40:00 Rob talks about how Redfish, while a good replacement for IPMI, will take a long time before it is widely adopted.  There are pros and cons.

 

The Case for Ops Engineering Pay Equity w/ Charity Majors

TL;DR: Operators need pay/status equity to succeed.

Charity Majors is one of my DevOps and SRE heroes* so it was great fun to be able to debate SRE with her at Gluecon this spring.  Encouraged by Mike Maney to retell the story, we got to recapture our disagreement about “Is SRE a Good Term?” from the evening before.

While it’s hard to fully recapture with adult beverages, we were able to recreate the key points.

First, we both strongly agree that we need status and pay equity for operators.  That part of the SRE message is essential regardless of the name of the department.

Then it gets more nuanced. Charity, who’s more of a Silicon Valley insider, believes that SRE is tainted by the “Google for Everyone” cargo cult.  She has trouble separating the term SRE from the specific Google practices that helped define it.

As someone who simply commutes to Silicon Valley, I do not see that bias in the discussions I’ve been having.  I do agree that trying to simply copy Google (or other unicorns) in every way is a failure pattern.

Charity: “I don’t want to get paid to keep someone else’s shit site alive”

I think Google did a good job with the book by defining the term for a broad audience. Charity believes this signals that SRE means you are working for a big org.  Charity suggested several better alternatives, such as Operations Engineer.  In the end, the danger seems to be when Dev and Ops create silos instead of collaborating.

Consensus: Job Title?  Who cares.  The need is to make operations more respected and equal.

What did you think of the video?  How is your team defining Operations titles and teams?

(*) yes, I’m working on an actual list – stay tuned.

May 26 – Weekly Recap of All Things Site Reliability Engineering (SRE)

Welcome to the weekly post of the RackN blog recap of all things SRE. If you have any ideas for this recap or would like to include content, please contact us at info@rackn.com or tweet Rob (@zehicle) or RackN (@rackngo).

SRE Items of the Week

Co-Founders of RackN Rob Hirschfeld and Greg Althaus at GlueCon

Reuven Cohen and Rob Hirschfeld Chat at GlueCon17
Reuven Cohen (@ruv) and Rob Hirschfeld discuss data center infrastructure trends concerning provisioning, automation and challenges. Rob highlights his company RackN and the open source project Digital Rebar sponsored by RackN.


_____________

Is SRE a Good Term?
Interview with Rob Hirschfeld (RackN) and Charity Majors (Honeycomb) at Gluecon 2017


_____________

How Google Runs its Production Systems – Get the Book
http://www.techrepublic.com/article/want-to-understand-how-google-runs-its-production-systems-read-this-free-book/

The book Site Reliability Engineering helps readers understand how some Googlers think: It contains the ideas of more than 125 authors. The four editors, Betsy Beyer, Chris Jones, Jennifer Petoff, and Niall Richard Murphy, managed to weave all of the different perspectives into a unified work that conveys a coherent approach to managing distributed production systems.

Site Reliability Engineering delivers 34 chapters—totaling more than 500 printed pages from O’Reilly Media—that encompass the principles and practices that keep Google’s production systems working. The entire book is available online at https://landing.google.com/sre/book.html, along with links to other talks, interviews, publications, and events.

UPCOMING EVENTS

Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events, please email info@rackn.com.

Velocity: June 19 – 20 in San Jose, CA

OTHER NEWSLETTERS

SRE Weekly (@SREWeekly) Issue #73

Evolution or Rebellion? The rise of Site Reliability Engineers (SRE)

What is a Google SRE?  Charity Majors gave a great overview on Datanauts #65, Susan Fowler from Uber talks about “no ops” tensions and Patrick Hill from Atlassian wrote up a good review too.  This is not new: Ben Treynor defined it back in 2014.

DevOps is under attack.

Well, not DevOps exactly but the common misconception that DevOps is about Developers doing Ops (it’s really about lean process, system thinking, and positive culture).  It turns out that Ops is hard and, as I recently discussed with John Furrier, developers really, really don’t want to be that focused on infrastructure.

In fact, I see containers and serverless as a “developers won’t waste time on ops revolt.”  (I discuss this more in my 2016 retrospective).

The tension between Ops and Dev goes way back and has been a source of confusion for me and my RackN co-founders.  We believe we are developers, except that we spend our whole time focused on writing code for operations.  With the rise of Site Reliability Engineers (SRE) as a job classification, our type of black swan engineer is being embraced as a critical skill.  It’s recognized as the only way to stay ahead of our ravenous appetite for  computing infrastructure.

I’ve been writing about Site Reliability Engineering (SRE) tasks for nearly 5 years under a lot of different names such as DevOps, Ready State, Open Operations and Underlay Operations. SRE is a term popularized by Google (there’s a book!) for the operators who build and automate their infrastructure. Their role is not administration, it is redefining how infrastructure is used and managed within Google.

Using infrastructure effectively is a competitive advantage for Google and their SREs carry tremendous authority and respect for executing on that mission.

Meanwhile, we’re in the midst of an Enterprise revolt against running infrastructure. Companies, for very good reasons, are shutting down internal IT efforts in favor of using outsourced infrastructure. Operations has simply not been able to compete with the capability, flexibility and breadth of infrastructure services offered by Amazon.

SRE is about operational excellence and keeping up with the increasingly rapid pace of IT.  It’s a recognition that we cannot scale people as quickly as we add infrastructure.  And, critically, it is not infrastructure specific.

Over the next year, I’ll continue to dig deeply into the skills, tools and processes around operations.  I think that SRE may be the right banner for these thoughts and I’d like to hear your thoughts about that.

MORE?  Here’s the next post in the series about Spiraling Ops Debt.  Or Skip to Podcasts with Eric Wright and Stephen Spector.

my 8 steps that would improve OpenStack Interop w/ AWS

I’ve been talking with a lot of OpenStack people about my frustrating attempts at hybrid work across seven OpenStack clouds [OpenStack Session Wed 2:40].  This post documents the behavior Digital Rebar expects from the multiple clouds that we have integrated with so far.  At RackN, we use this pattern for both cloud and physical automation.

Sunday, I found myself back in front of the Board talking about the challenge that implementation variation creates for users.  Ultimately, the question “does this harm users?” is answered by “no, they just leave for Amazon.”

I can’t stress this enough: it’s not about APIs!  The challenge is twofold: implementation variance between OpenStack clouds and variance between OpenStack and AWS.

The obvious and simplest answer is that OpenStack implementers need to conform more closely to AWS patterns (once again, NOT the APIs).

Here are the eight Digital Rebar node allocation steps [and my notes about general availability on OpenStack clouds]; a code sketch of the flow follows the list:

  1. Add node specific SSH key [YES]
  2. Get Metadata on Networks, Flavors and Images [YES]
  3. Pick correct network, flavors and images [NO, each site is distinct]
  4. Request node [YES]
  5. Get node PUBLIC address for node [NO, most OpenStack clouds do not have external access by default]
  6. Login into system using node SSH key [PARTIAL, the account name varies]
  7. Add root account with Rebar SSH key(s) and remove password login [PARTIAL, does not work on some systems]
  8. Remove node specific SSH key [YES]
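
The sketch below illustrates the ordering of those eight steps against a hypothetical CloudClient interface; none of the method names come from Digital Rebar or any particular SDK – they simply stand in for whatever your cloud’s client library provides.

# Hypothetical sketch of the eight-step allocation flow above. CloudClient is
# an invented interface standing in for a real cloud SDK; the method names
# are illustrative, not taken from any specific library.
from typing import Protocol

class CloudClient(Protocol):
    def add_ssh_key(self, name: str, public_key: str) -> None: ...
    def get_metadata(self) -> dict: ...       # networks, flavors, images
    def create_node(self, network: str, flavor: str, image: str, key_name: str) -> str: ...
    def get_public_address(self, node_id: str) -> str: ...
    def run_ssh(self, address: str, private_key: str, command: str) -> None: ...
    def remove_ssh_key(self, name: str) -> None: ...

def pick(meta: dict) -> tuple[str, str, str]:
    """Step 3: the site-specific choice where OpenStack clouds diverge most."""
    return meta["networks"][0], meta["flavors"][0], meta["images"][0]

def allocate_node(cloud: CloudClient, node_public_key: str,
                  node_private_key: str, rebar_keys: list[str]) -> str:
    cloud.add_ssh_key("node-key", node_public_key)           # 1. node-specific key
    meta = cloud.get_metadata()                              # 2. discover metadata
    network, flavor, image = pick(meta)                      # 3. pick network/flavor/image
    node_id = cloud.create_node(network, flavor, image, "node-key")  # 4. request node
    address = cloud.get_public_address(node_id)              # 5. public address
    for key in rebar_keys:                                   # 6-7. log in with the node key
        cloud.run_ssh(address, node_private_key,             #      and add the Rebar root keys
                      f"echo '{key}' >> /root/.ssh/authorized_keys")
    # a real implementation would also disable password logins here (step 7)
    cloud.remove_ssh_key("node-key")                         # 8. clean up the node key
    return address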

These steps work on every other cloud infrastructure that we’ve used.  And they are achievable on OpenStack – DreamHost delivered this experience on their new DreamCompute infrastructure.

I think that this is very achievable for OpenStack, but we’re going to have to drive conformance and figure out an alternative to the Floating IP (FIP) pattern; IPv6, port forwarding, or adding FIPs by default would all work as part of the solution.

For Digital Rebar, the quick answer is to simply allocate a FIP for every node.  We can easily make this a configuration option; however, it feels like a pattern fail to me.  It’s certainly not a requirement from other clouds.

I hope this post provides specifics about delivering a more portable hybrid experience.  What critical items do you want as part of your cloud ops process?

To avoid echo chamber, OpenStack must embrace competitive cloud ecosystem

Japanese Bullet Train View

I was in Japan before the Tokyo summit on a bullet train to Kyoto watching the mix of heavy industry and bucolic mountains pass by. That scene reflects an OpenStack duality: we want to be both a dominant platform delivering core cloud services and an open source values driven collective.

First, I fundamentally believe in the success of OpenStack as the open virtual infrastructure management platform.

I believe that we have solved the virtual compute/storage/network problem sufficiently to become the de facto open IaaS platform. While not perfect, the technologies are sufficient assuming we continue to improve ease of use and operational hardening. Pursuing that base capability is my primary motivation for DefCore work.

I don’t believe that the OpenStack community is, or should try to become, the authority on “all things cloud.”

In the presence of Amazon, VMware, Microsoft and Google, we cannot make that claim with any degree of self-respect. Even newcomers like DigitalOcean have an undeniable footprint and influence. Those vendor platforms drive cloud ecosystems and technologies which foster fast innovation because there is no friction to joining their ecosystems and they are sufficiently large and stable to represent a target market. We’ve seen clear signs from Rackspace, HP and others that platform diversity improves cloud strength.

I continue to think we (OpenStack) spend too much time evaluating what is “in” or “out” of the project and too little time talking about what’s “on,” “under” and “with” the project like Kubernetes, Mesos, Docker, SDN, Hadoop and Ceph. That type of thinking creates distance between OpenStack efforts and the majority of the market.

What motivates the drive to an all open captive community? It’s the reasonable concern that critical parts of the infrastructure will become pay-to-play. For example, what if a non-OpenStack alternative to Heat Orchestration gained popularity with OpenStack implementers? Perhaps something that ran on Amazon as well. That would create external pressure that would drive internal priorities. These “non-OpenStack” products would then have influence without having to contribute back to upstream.

Can we afford to have external entities driving internal priorities? Hell yes, that’s what customer adoption looks like.

OpenStack does not own the market sufficiently to create a cloud echo chamber. The next wave of cloud innovation (my money is on container platforms) will follow the path of least resistance and widest adoption. We need to embrace that these innovations will not all be inside our community so that we can welcome them as part of our ecosystem. The community needs to find peace with that.

The Upstream Imperative: paving the way for content creators is required for platform success

Since content is king, platform companies (like Google, Microsoft, Twitter, Facebook and Amazon) win by attracting developers to build on their services.  Open source tooling and frameworks are the critical interfaces for these adopters; consequently, they must invest in building communities around those platforms even if it means open sourcing previously internal only tools.

This post expands on one of my OSCON observations: companies who write lots of code have discovered an imperative to upstream their internal projects.   For background, review my thoughts about open source and supply chain management.

Huh?  What is an “upstream imperative?”  If it sounds like what salmon do during spawning, then read the post-script!

Historically, companies with a lot of internal development tools had no incentive to open those projects.  In fact, the “collaboration tax” of open source discouraged companies from sharing code for essential operations.  Open source was long considered less featured and slower than commercial or internal projects; however, this perception has been totally shattered.  So companies are faced with a balance between the overhead of supporting external needs (aka collaboration) and the innovation those users bring into the effort.

Until recently, this balance usually tipped towards opening a project but under-investing in the community to keep the collaboration costs low.  The change I saw at OSCON is that companies understand that making open projects successful brings communities closer to their products and services.

That’s a huge boon to the overall technology community.

Being able to leverage and extend tools that have been proven by these internal teams strengthens and accelerates everyone. These communities act as free laboratories that breed new platforms and build deep relationships with critical influencers.  The upstream savvy companies see returns from both innovation around their tools and more content that’s well matched to their platforms.

Oh, and companies that fail to upstream will find it increasingly hard to attract critical mind share.  Thinking through the alternatives gives us a Windows into how open source impacts past incumbents.

That leads to a future post about XaaS dog fooding and “pure-play” aaS projects like OpenStack and CloudFoundry.

Post Script about Upstreaming:


PaaS, much ado about network services

There’s a surprising amount of hair pulling regarding IaaS vs PaaS.  People in the industry get into shouting matches about this topic as if it mattered more than Lindsay Lohan’s journey through rehab.

The cold hard reality is that while pundits are busy writing XaaS white papers, developers are off just writing software.  We are writing software that fits within cloud environments (weak SLA, small VMs), saves money (hosted data instead of data in VMs), and changes quickly (interpreted languages).  We’re doing this using an expanding tool kit of networked components like databases, object stores, shared caches, message queues, etc.

Using network components in an application architecture is about as novel as building houses made of bricks.  So, what makes cloud architectures any better or different?

Nothing!  There is no difference if you buy VMs, install services, and wire together your application in its own little cloud bubble.  If I wanted to bait trolls, I’d call that an IaaS deployment.

However, there’s an emerging economic driver to leverage lower cost and more elastic infrastructure by using services provided by hosts rather than standing them up in a VM.  These services replace dedicated infrastructure with managed network-attached services and they have become a key differentiator for all the cloud vendors:

  • At Google App Engine, they include Big Tables, Queues, MemCache, etc
  • At Microsoft Azure, they include SQL Azure, Azure Storage, AppFabric, etc
  • At Amazon AWS, they include S3, SimpleDB, RDS (MySQL), Queue & Notify, etc

Using these services allows developers to focus on the business problems we are solving instead of building out infrastructure to run our applications.  We also save money because consuming an elastic managed network service is less expensive (and more consumption based) than standing up dedicated VMs to operate the services.

Ultimately, an application can be written as stateless code (really, “externalized state” is a more accurate description) that relies on these services for persistence.  If a host were to dynamically instantiate instances of that code based on incoming requests, then my application resource requirements would become strictly consumption based.  I would describe that as true cloud architecture.
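
To make “externalized state” concrete, here is a small sketch of a request handler that keeps nothing in process memory and pushes all state to a shared store; the in-memory StateStore is just a stand-in for a managed network service (memcache, SimpleDB, Azure Storage, etc.), not a reference to any real client library.

# Sketch of "externalized state": the handler itself is stateless, so any
# number of copies can be started or stopped on demand. StateStore is an
# in-memory stand-in for a hosted network service; swap in the real client
# for your platform.
class StateStore:
    def __init__(self):
        self._data = {}                     # stands in for the hosted service

    def get(self, key, default=None):
        return self._data.get(key, default)

    def put(self, key, value):
        self._data[key] = value

def handle_request(store: StateStore, user_id: str) -> str:
    """A pure function of the request plus the external store – no local state."""
    visits = store.get(f"visits:{user_id}", 0) + 1
    store.put(f"visits:{user_id}", visits)
    return f"hello {user_id}, visit #{visits}"

if __name__ == "__main__":
    shared = StateStore()
    print(handle_request(shared, "alice"))  # visit #1
    print(handle_request(shared, "alice"))  # visit #2 – the state lived in the store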

On a bold day, I would even consider an environment that enforced or offered that architecture to be a platform.  Some may even dare to describe that as a PaaS; however, I think it’s a mistake to look to the service offering for the definition when it’s driven by the application designers’ decisions to use network services.

While we argue about PaaS vs IaaS, developers are just doing what they need.  Today they may stand-up their own services and tomorrow they incorporate 3rd party managed services.  The choice is not a binary switch, a layer cake, or a holy war.

The choice is about choosing the most cost-effective and scalable resource model.