Week in Review – Immutable Infrastructure and Podcast with CNCF Ambassador

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Immutable Infrastructure

Immutable infrastructure offers IT a new deployment methodology: create / destroy / repeat from a fixed image, in contrast to the more common approach of deploying a service once and then patching it regularly to keep it in operation. Continually tearing down and relaunching fixed images addresses challenges such as hacked services, version drift, and unknown service state.

RackN automation and provisioning management simplify the immutable infrastructure methodology, allowing IT teams to focus on issues other than version drift and hacked services.
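To make the pattern concrete, here is a minimal Python sketch of the create / destroy / repeat loop. The API names (launch_from_image, health_check, destroy) are hypothetical stand-ins for whatever your cloud or provisioning tool exposes; the point is that a rollout replaces servers rather than patching them.

```python
# Minimal sketch of create / destroy / repeat (all API names hypothetical).
import uuid
from typing import Optional

def launch_from_image(image_id: str) -> str:
    """Boot a fresh server from a fixed, pre-built image (stubbed here)."""
    server_id = f"srv-{uuid.uuid4().hex[:8]}"
    print(f"launched {server_id} from {image_id}")
    return server_id

def health_check(server_id: str) -> bool:
    """Confirm the new server is serving before it takes traffic (stubbed)."""
    return True

def destroy(server_id: str) -> None:
    print(f"destroyed {server_id}")

def roll(image_id: str, current: Optional[str]) -> str:
    """Replace the running server with a new one; never patch in place."""
    new = launch_from_image(image_id)
    if not health_check(new):
        destroy(new)        # failed rollout: discard the new server, keep the old
        raise RuntimeError(f"{image_id} failed health check")
    if current:
        destroy(current)    # the old server is torn down, not reconfigured
    return new

server = None
for image in ("app-image-v1", "app-image-v2"):  # each CI/CD build yields a new image
    server = roll(image, server)
```

Because every change arrives as a fresh image, there is no drift to audit: a running server either matches its image or gets replaced.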

Learn More 

L8ist Sh9y Podcast

Chris Short, Senior DevOps Advocate at SJ Technologies and CNCF Ambassador, shares his thoughts on Site Reliability Engineering, DevSecOps, Kubernetes, and other hot topics in the industry.


News

RackN

  • RackN Trial – 30-day access to RackN technology with support and training from RackN team – Register Today
  • NEW YouTube Videos
    • Digital Rebar Provision v3.8 Workflows – Watch (17 min 51 sec)
    • Terraform Digital Rebar Provider with Workflows – Watch (11 min 49 sec)
  • Summer Events
    • Still Working on Plan ~ Stay Tuned

Digital Rebar Community 

L8ist Sh9y Podcast

Social Media

Catch up with the RackN and Digital Rebar Team at OpenStack Summit

We are heading out to Vancouver next week for the OpenStack Summit, May 21 – 24. Rob Hirschfeld, our Co-Founder/CEO, will be available to meet onsite as well as help drive the OpenStack community forward. If you are interested in meeting, please contact me.

Rob has two sessions scheduled, and we encourage you to attend.

Sessions

Security Considerations for Cloud Edge Computing
Date & Time: May 23 from 11:50am – 12:30pm

Location: Vancouver Convention Centre West – Level 2 – Room 205-207

Panel: Beth Cohen, Verizon (Moderator); Rob Hirschfeld, RackN; Glen McGowan, Dell EMC; Shuquan Huang, 99cloud

Cloud edge computing use cases range from IoT to VR/AR and any widely distributed application in between. However, taking OpenStack out of the data center requires an entirely new approach to security because there is far less ability to restrict access, and the applications often require a shared-tenant model.

Avoiding Infrastructure at Rest – The Power of Immutable Infrastructure

Date & Time: May 23 from 3:30 – 4:10pm
Location: Vancouver Convention Centre West – Level 3 – Room 301

Keeping up with patches has never been more critical. For hardware, that’s… hard. What if servers were deployed 100% ready to run, without any need for remote configuration or access? What if we were able to roll a complete rebuild of an entire application stack, from the BIOS up, in minutes? Those are the key concepts behind a cloud deployment pattern called “immutable infrastructure,” in which servers are deployed from images produced by a CI/CD process and destroyed after use instead of being reconfigured.

We’ll cover the specific process and its advantages. Then we’ll dive deeply into the open tools and processes that make it possible to drive immutable images into your own infrastructure. The talk will include live demos and discuss the process and field challenges attendees will likely face when they start implementation at home. We’ll also cover the significant security, time, and cost benefits of this approach to make pitching the idea effective.

Mobilize your Ops Team Against Operational Paralysis

Many IT departments struggle to keep “the lights on” as legacy hardware and software consume significant resources, preventing the team from taking advantage of new technologies to modernize their infrastructure. These legacy systems not only consume resources but also make it difficult to find qualified experts to keep them operational: the older the technology, the less likely you are to find experienced support. Even worse, new employees are typically not interested in working on old technology while the IT press obsesses over what comes next.

Freezing older technology in place without capable support or an understanding of how the product works is certainly not an industry best practice; however, it is commonly accepted in many large IT organizations. RackN has built a single, open source platform to manage not just new technologies but also legacy services, allowing IT teams to actively engage with older technology without fear.

Issue: Expertise & the Unknown

  • Existing Infrastructure – legacy technology abounds in modern enterprise infrastructure, with few employees capable of maintaining it
  • State of the Art vs the Past – new employees are experienced in the latest technology and not interested in working on legacy solutions

Impact: Left Behind

  • Stuck in the Past – IT teams are unwilling to touch old technology that just works
  • Employee Exodus – limited future for employees maintaining the past

RackN Solution: Stagnation to Action

  • Operations Excellence – RackN’s foundational management ensures IT can operate services regardless of platform (e.g., data center, public cloud)
  • Operational Paralysis – RackN delivers a single platform capable of supporting existing solutions and newly arriving technologies, while preparing IT for future innovation down the road

The RackN team is ready to unlock your operational potential by preventing paralysis.

DC2020: Putting the Data back in the Data Center

For the past two decades, data centers have been more about compute than data, but the machine learning and IoT revolutions are changing that focus for the 2020 Data Center (aka DC2020). My experience at IBM Think 2018 suggests that we should be challenging our compute-centric view of the data center and instead considering the flow and processing of data. Data is not localized, which reinforces our concept of DC2020 as a distributed and integrated environment.

We have defined data centers by the compute infrastructure stored there. Cloud (especially when equated with virtualized machines) has been an infrastructure-as-a-service (IaaS) story. Even big data “lakes” are primarily compute clusters with distributed storage. This model dominates because data sources are locked in application silos: control of the compute translates directly to control of the data.

What if control of data is being decoupled from applications? Data is becoming its own thing with new technologies like machine learning, IoT, blockchain, and other distributed sourcing.

In a data-centric model, we are more concerned with movement of and access to data than with building applications to control it. Think of event-driven (serverless) and microservice platforms that effectively operate on data-in-flight. As function-as-a-service progresses, it will become impossible to actually know all the ways data is manipulated, because there are no longer boundaries around applications.
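As a rough illustration, here is a small Python sketch of functions operating on data-in-flight: each stage is an independent function, and no single application “owns” the data as it moves. The event format and alert threshold are invented for the example.

```python
# Sketch: independent functions operating on data-in-flight.
from typing import Iterable, Iterator

def parse(events: Iterable[str]) -> Iterator[dict]:
    """First hop: turn raw events into structured readings."""
    for line in events:
        device, value = line.split(",")
        yield {"device": device, "temp_c": float(value)}

def enrich(readings: Iterable[dict]) -> Iterator[dict]:
    """Second hop: a separate function adds a flag; it knows nothing about parse()."""
    for r in readings:
        r["alert"] = r["temp_c"] > 80.0  # invented threshold
        yield r

def sink(readings: Iterable[dict]) -> None:
    """Final hop: hand off to storage or the next function."""
    for r in readings:
        print(r)

stream = ["edge-01,72.5", "edge-02,85.1"]  # stand-in for an event source
sink(enrich(parse(stream)))
```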

This data-centric, distributed architecture model will be even more pronounced as processing moves out of data centers and into the edge. IT infrastructure at the edge will be used for handling latency-critical data and for aggregating data for centralization. These operations will not look like traditional application stacks: they will be data-processing microservices and functions.

This data-centric approach relegates infrastructure services to a subordinate role. We should not care about servers or machines except as they support the platforms driving data flows.

I am not abandoning making infrastructure simple and easy – we need to do that more than ever! However, it’s easy to underestimate the coming transformation of application architectures based on advanced data processing and sharing technologies. The amount and sources of data have already grown beyond human comprehension because we still think of applications in a client-server mindset.

We’re only at the start of really embedding connected sensors and devices into our environment. As devices from many sources and vendors proliferate, they also need to coordinate. That means we’re reaching a point where devices will start talking to each other locally instead of via our centralized systems. It’s part of the coming data avalanche.

Current management systems will not survive explosive growth.  We’re entering a phase where control and management paradigms cannot keep up.

As an industry, we are rethinking management automation from declarative (“start this”) to intent-focused (“maintain this”) systems. This is the simplest way to express the difference between OpenStack and Kubernetes. That change is required to create autonomous infrastructure designs; however, it also means we need to start thinking of infrastructure as something that follows data instead of leading it.
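A hedged Python sketch of that difference: a declarative system would “start” something once and stop caring, while an intent-based controller runs a reconcile loop that continuously drives observed state toward desired state. The resource names and counts below are illustrative, not any real OpenStack or Kubernetes API.

```python
# Sketch: intent ("maintain this") as a reconcile loop, vs a one-shot "start this".
import time

desired = {"web": 3}  # intent: keep three web replicas running
actual = {"web": 0}   # observed state; a real controller would query this

def reconcile(desired: dict, observed: dict) -> None:
    """Drive observed state toward desired state, one step per pass."""
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actual[name] = have + 1  # hypothetical: launch one replica
            print(f"{name}: {have} -> {have + 1}")
        elif have > want:
            actual[name] = have - 1  # hypothetical: remove one replica
            print(f"{name}: {have} -> {have - 1}")

# The loop never "finishes": it keeps re-checking and correcting drift.
for _ in range(5):
    reconcile(desired, dict(actual))
    time.sleep(0.1)
```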

That’s exactly what RackN has solved with Digital Rebar Provision. Deep composability, simple APIs, and extensible workflows are essential components of the integrated automation that DC2020 needs to put the data back in the data center.

Week in Review: RackN talks Immutability and DevOps at SRECon Americas

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Immutable Deployments talk at SRECon Americas

Rob Hirschfeld presented at SRECon Americas this week, “Don’t Ever Change! Are Immutable Deployments Really Simpler, Faster and Safer?”

Configuration is fragile because we’re talking about mutating a system. Infrastructure as code means building everything in place. Every one of our systems has to be configured and managed, and that creates a dependency graph. We can lock things down, but we inevitably have to patch our systems.

Immutable infrastructure is another way of saying “pre-configured systems.” Traditional deployment models do configuration after deployment, but it’s better if we can do it beforehand. Immutability is a DevOps pattern: shift configuration to the left of the pipeline, moving it from the production stage to the build stage.
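Here is a small Python sketch of what “shifting configuration left” means in practice: configuration is baked into the artifact at build time, so any config change produces a new image identity instead of a mutation of a running system. The build function and its fields are invented for illustration.

```python
# Sketch: configuration baked in at build time (shift-left), not applied after deploy.
import hashlib
import json

def build_image(app_version: str, config: dict) -> dict:
    """CI build stage: produce an immutable artifact with its config inside."""
    payload = json.dumps({"app": app_version, "config": config}, sort_keys=True)
    image_id = hashlib.sha256(payload.encode()).hexdigest()[:12]
    return {"id": image_id, "payload": payload}

v1 = build_image("1.0", {"threads": 4})
v2 = build_image("1.0", {"threads": 8})  # a config change is a *new image*,
print(v1["id"], v2["id"])                # not a mutation of a running server
```

Because the image id is derived from the content, two servers booted from the same id are provably identical; there is nothing left to configure in production.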

Finish reading the review from Tanya Reilly (@whereistanya)


News

RackN

Digital Rebar Community

L8ist Sh9y Podcast

Social Media

Series Intro: A Focus on Sustaining Operations

When discussing the data center of the future, it’s critical that we start by breaking the concept of the data center as a physical site with guarded walls, raised floors, neat rows of servers, and crash-cart-pushing operators. The Data Center of 2020 (DC2020) is a distributed infrastructure composed of many data centers, cloud services, and connected devices.

The primary design concept of DC2020 is integrated automation, not the actual infrastructure.

As an industry, we need to actively choose implementations that unify our operational models to create portability and eliminate silos. This means investing more in sustaining operations (aka Day 2 Ops) to ensure our IT systems can be constantly patched, updated, and maintained. The pace of innovation (and of discovered vulnerabilities!) requires that we build with the assumption of change. DC2020 cannot be a “fire and forget” build that assumes only occasional updates.

There are a lot of disruptive and exciting technologies entering the market. These create tremendous opportunities for improvement and faster innovation cycles. They also create significant risk of further fragmenting our IT operations landscape in ways that increase costs, decrease security, and further churn our market.

It is possible to be for both rapid innovation and sustaining operations, but it requires a plan for building robust automation.

The focus on tightly integrated development and operations work is a common theme in both the DevOps and Site/System Reliability Engineering topics we cover all the time. These practices are not only practical; we believe they are essential requirements for building DC2020.

Over this week, I’m going to be using the backdrop of IBM Think to outline the concepts for DC2020. I’ll both pull in topics that I’m hearing there and revisit topics that we’ve been discussing on our blog and the L8ist Sh9y podcast. Ultimately, we’ll create a comprehensive document; for now, we invite you to share your thoughts about this content in its more raw narrative form.

Week in Review : Test Digital Rebar in Minutes with Hosted Physical Infrastructure

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Deploy and Test Digital Rebar Provision with No Infrastructure in 10 Minutes

For operators looking to better understand Digital Rebar Provision (DRP), RackN has developed an easy-to-follow process leveraging Packet.net for physical device creation. This process allows new users to create a physical DRP endpoint and then provision a new physical node on Packet. Information and code to run this guide are available at https://github.com/digitalrebar/provision/tree/master/examples/pkt-demo.

In this blog, I will take the reader through the process, with screenshots from running it on my Mac.
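Once the endpoint is up, a quick way to sanity-check it is through DRP’s REST API. This sketch assumes DRP’s defaults (API on port 8092, the out-of-the-box rocketskates credentials, a self-signed TLS certificate, and the /api/v3/machines route); verify each of these against your own install and the guide above.

```python
# Hedged sketch: list machines on a fresh DRP endpoint over its REST API.
# Assumed defaults (check your install): port 8092, rocketskates credentials,
# self-signed TLS certificate, /api/v3/machines route.
import requests

ENDPOINT = "https://<your-packet-ip>:8092"  # the DRP endpoint created in the guide
AUTH = ("rocketskates", "r0cketsk8ts")      # default credentials; change them!

# verify=False only because the demo endpoint uses a self-signed certificate.
resp = requests.get(f"{ENDPOINT}/api/v3/machines", auth=AUTH, verify=False)
resp.raise_for_status()
for machine in resp.json():
    print(machine.get("Name"), machine.get("Address"), machine.get("BootEnv"))
```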

Read More

Site Reliability Engineering: 4 Things to Know

Organizations that have embraced DevOps and cloud-native architecture might also want to investigate SRE. Interop ITX expert Rob Hirschfeld explains why.

To find out more about site reliability engineering, Network Computing spoke with Rob Hirschfeld, who has been in the cloud and infrastructure space for nearly 15 years, including work with early ESX betas and serving on the OpenStack Foundation Board. Hirschfeld, co-founder and CEO of RackN, will present “DevOps vs SRE vs Cloud Native” at Interop ITX 2018.

Read More


News

RackN

Digital Rebar Community

L8ist Sh9y Podcast

Social Media