Mobilize Your Ops Team Against Operational Paralysis

Many IT departments struggle just to keep “the lights on” as legacy hardware and software consume significant resources, preventing the team from adopting new technologies to modernize their infrastructure. These legacy systems not only consume resources but also make it hard to find qualified experts to keep them operational: the older the technology, the scarcer the experienced support. Even worse, new employees are typically not interested in working on old technology while the IT press obsesses over what comes next.

Freezing older technology in place without capable support or an understanding of how the product works is certainly not an industry best practice; however, it is commonly accepted in many large IT organizations. RackN has built a single, open source platform to manage not just new technologies but also legacy services, allowing IT teams to actively engage with older technology without fear.

Issue: Expertise & the Unknown

  • Existing Infrastructure – legacy technology abounds in modern enterprise infrastructure, with few employees capable of maintaining it
  • State of the Art vs the Past – new employees are experienced in the latest technology and not interested in working on legacy solutions

Impact: Left Behind

  • Stuck in the Past – IT teams are unwilling to touch old technology that just works
  • Employee Exodus – limited future for employees maintaining the past

RackN Solution: Stagnation to Action

  • Operations Excellence – RackN’s foundational management ensures IT can operate services regardless of platform (e.g., data center, public cloud)
  • Operational Paralysis – RackN delivers a single platform capable of supporting existing solutions and newly arriving technologies while preparing for future innovation

The RackN team is ready to unlock your operational potential by preventing paralysis.

Week in Review: Data Center 2020 Blog Series Update on Data Centric Computing

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

DC2020: Putting the Data Back in the Data Center

For the past two decades, data centers have been more about compute than data, but the machine learning and IoT revolutions are changing that focus for the 2020 Data Center (aka DC2020). My experience at IBM Think 2018 suggests that we should be challenging our compute-centric view of the data center; instead, we should be considering the flow and processing of data. Since data is not localized, that reinforces our concept of DC2020 as a distributed and integrated environment.

As an industry, we are rethinking management automation from declarative (“start this”) to intent (“maintain this”) focused systems. This is the simplest way to express the difference between OpenStack and Kubernetes. That change is required to create autonomous infrastructure designs; however, it also means that we need to change our thinking about infrastructure as something that follows data instead of leading it.

Read Post and Full DC2020 Blog Series



DC2020: Putting the Data Back in the Data Center

For the past two decades, data centers have been more about compute than data, but the machine learning and IoT revolutions are changing that focus for the 2020 Data Center (aka DC2020). My experience at IBM Think 2018 suggests that we should be challenging our compute-centric view of the data center; instead, we should be considering the flow and processing of data. Since data is not localized, that reinforces our concept of DC2020 as a distributed and integrated environment.

We have defined data centers by the compute infrastructure stored there. Cloud (especially when equated with virtualized machines) has been an infrastructure-as-a-service (IaaS) story. Even big data “lakes” are primarily compute clusters with distributed storage. This model dominates because, with data sources locked in application silos, control of the compute translates directly to control of the data.

What if control of data is being decoupled from applications? Data is becoming its own thing with new technologies like machine learning, IoT, blockchain and other distributed sourcing.

In a data-centric model, we are more concerned with movement of and access to data than with building applications to control it. Think of event-driven (serverless) and microservice platforms that effectively operate on data-in-flight. As function-as-a-service progresses, it will become impossible to know all the ways data is manipulated, because applications no longer have fixed boundaries.

This data-centric, distributed architecture model will be even more pronounced as processing moves out of data centers and into the edge. IT infrastructure at the edge will be used for handling latency critical data and aggregating data for centralization. These operations will not look like traditional application stacks: they will be data processing microservices and functions.

This data-centric approach relegates infrastructure services to a subordinate role. We should not care about servers or machines except as they support the platforms driving data flows.

I am not abandoning making infrastructure simple and easy – we need to do that more than ever! However, it’s easy to underestimate the coming transformation of application architectures based on advanced data processing and sharing technologies. The amount and sources of data have already grown beyond human comprehension because we still think of applications in a client-server mindset.

We’re only at the start of really embedding connected sensors and devices into our environment. As devices from many sources and vendors proliferate, they also need to coordinate. That means we’re reaching a point where devices will start talking to each other locally instead of via our centralized systems. It’s part of the coming data avalanche.

Current management systems will not survive explosive growth.  We’re entering a phase where control and management paradigms cannot keep up.

As an industry, we are rethinking management automation from declarative (“start this”) to intent (“maintain this”) focused systems. This is the simplest way to express the difference between OpenStack and Kubernetes. That change is required to create autonomous infrastructure designs; however, it also means that we need to change our thinking about infrastructure as something that follows data instead of leading it.
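To make “start this” versus “maintain this” concrete, here is a minimal, self-contained sketch in Python; the in-memory “infrastructure” and its random failures are invented for illustration. The declarative path issues its commands once, while the intent path runs a reconciliation loop that continuously converges observed state back to the declared state, the pattern Kubernetes controllers popularized.

```python
import random
import time

# Intent: the operator declares desired state; the system maintains it.
DESIRED = {"web": 3}

# Toy in-memory "infrastructure" standing in for a real platform API.
RUNNING = {"web": 0}

def observe(service: str) -> int:
    # A real system would query its platform; here we inject random
    # failures so the loop has drift to repair.
    if random.random() < 0.3:
        RUNNING[service] = max(0, RUNNING[service] - 1)  # simulated crash
    return RUNNING[service]

# Declarative "start this": fire once and assume nothing ever fails.
def start_once() -> None:
    for service, want in DESIRED.items():
        RUNNING[service] = want

# Intent "maintain this": observe, diff, converge -- repeatedly.
def reconcile(cycles: int = 5) -> None:
    for _ in range(cycles):
        for service, want in DESIRED.items():
            have = observe(service)
            if have < want:
                RUNNING[service] += 1   # start a replacement
            elif have > want:
                RUNNING[service] -= 1   # scale down
        time.sleep(0.1)  # a real loop would watch events or poll slowly

if __name__ == "__main__":
    start_once()
    reconcile()
    print(RUNNING)  # drift has been repaired back to the declared intent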

That’s exactly what RackN has solved with Digital Rebar Provision. Deeply composable, simple APIs and extensible workflows are essential components of integrated automation in DC2020, putting the data back in the data center.
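To give a feel for what those simple APIs look like in practice, here is a minimal sketch that drives a Digital Rebar Provision endpoint over REST. The endpoint URL, token, and the "discover" workflow name are placeholders, and the /api/v3 paths, field names, and JSON Patch usage should be verified against the DRP documentation for your version.

```python
import requests

# Placeholder endpoint and token -- substitute your own DRP install.
DRP = "https://drp.example.local:8092"
HEADERS = {"Authorization": "Bearer <your-token>"}

# List known machines. verify=False only because default installs
# use a self-signed certificate; use a real CA bundle in production.
machines = requests.get(f"{DRP}/api/v3/machines",
                        headers=HEADERS, verify=False).json()
for m in machines:
    print(m["Name"], m.get("Workflow"))

# Re-run a workflow on the first machine by patching its Workflow
# field (JSON Patch); "discover" is an illustrative workflow name.
if machines:
    requests.patch(
        f"{DRP}/api/v3/machines/{machines[0]['Uuid']}",
        headers=HEADERS,
        json=[{"op": "replace", "path": "/Workflow", "value": "discover"}],
        verify=False,
    )
```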

Week in Review: Data Center 2020 Blog Series from IBM Think 2018

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Data Center of 2020 Blog Series from IBM Think 2018
(Series by Rob Hirschfeld, CEO/Co-Founder, RackN)

When discussing the data center of the future, it’s critical that we start by breaking the concept of the data center as a physical site with guarded walls, raised floors, neat rows of servers and crash-cart-pushing operators. The Data Center of 2020 (DC2020) is a distributed infrastructure comprised of many data centers, cloud services and connected devices.

The primary design concept of DC2020 is integrated automation, not the physical infrastructure itself.

RackN Portal Management Connection for the 10 Minute Demo

In my previous blog post, I provided step-by-step directions for installing Digital Rebar Provision on a new endpoint and creating a new node using Packet.net for users without a local hardware setup. (Demo Tool on GitHub) In this post, I will introduce the RackN Portal and connect it to the active setup running on Packet.net at the end of the demo process.

Read More



DC2020: Skeptic’s Guide to Blockchain in the Data Center

At Think 2018, Machine Learning and Blockchain technologies are beyond pervasive; they are assumed to benefit ROI in every situation. That type of hype begs for closer review. In this post, we’ll look at a potentially real use of blockchain for operations.

There is so much noise about blockchain that it can be difficult to find a starting point. I’m leaving background reading as an exercise for the reader; instead, I want to focus on how blockchain creates a distributed ledger with shared trust. That’s a lot of buzzwords! Basically, we’re talking about a system where nodes share data and use consensus with their peers to determine whether the information is trustworthy.

The key concept in blockchain is moving from a central authority to a distributed authority.

In the data center, administrative trust is essential. The premises, networks, and access credentials all rely on the idea that we have a centralized authoritative group. Even PKI, which is designed for decentralized trust, relies on a centralized authority to sign keys. Looking objectively at the bundle of passwords, certificates, keys and isolation layers, there are gaping risks in this model. It only takes getting the right access to flip administrative control from an asset into a liability.

Blockchain allows us to decentralize trust in the data center by requiring systems to collaboratively validate administrative instructions.

In this model, we’d still have administrative controls and management; however, the nodes would be able to validate configuration changes with their peers or other administrative sources. For example, an out-of-process change (a potential hack?) on a single node would be confirmed via consensus with other nodes instead of automatically trusting the source. The body of nodes protects against a bad administrative request. It also allows operators to quickly propagate configurations peer-to-peer instead of relying on a central hub-and-spoke model.

This is even more powerful if configuration is composited from multiple sources in a pipeline. In a multiple-author system, each contributor is involved in verifying changes to the whole configuration. This ensures that downstream insertions are both communicated to and accepted by upstream steps. It works because blockchain is a distributed ledger: changes made to the chain are passed back to all parties. Just as in a decentralized supply chain model, this ensures both validation and transparency.
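To ground the idea, here is a toy sketch assuming a simple majority vote rather than any specific blockchain protocol; the class names and quorum rule are invented for illustration. Changes announced through the administrative pipeline land on every node’s ledger, so an out-of-process edit on a single node finds no peers willing to vouch for it.

```python
import hashlib
import json

def digest(config: dict) -> str:
    """Canonical hash of a configuration -- the unit recorded on the ledger."""
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

class Node:
    def __init__(self, name: str):
        self.name = name
        self.ledger: set[str] = set()   # hashes of changes announced to this node
        self.peers: list["Node"] = []
        self.config: dict = {}

    def announce(self, config: dict) -> None:
        # The administrative pipeline broadcasts each change to every node.
        self.ledger.add(digest(config))

    def vouches_for(self, proposed: str) -> bool:
        return proposed in self.ledger

    def apply_change(self, new_config: dict) -> bool:
        proposed = digest(new_config)
        votes = sum(p.vouches_for(proposed) for p in self.peers)
        if votes * 2 > len(self.peers):   # simple majority quorum
            self.config = new_config
            return True
        return False                      # rejected: peers never saw it

# Usage: a broadcast change passes; an out-of-process edit on one node fails.
nodes = [Node(f"n{i}") for i in range(5)]
for n in nodes:
    n.peers = [p for p in nodes if p is not n]

good = {"sshd": "PermitRootLogin no"}
for n in nodes:
    n.announce(good)
assert nodes[0].apply_change(good)        # consensus reached

rogue = {"sshd": "PermitRootLogin yes"}   # potential hack on a single node
assert not nodes[0].apply_change(rogue)   # peers refuse to vouch
```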

Blockchain’s ability to provide both horizontal and vertical integrity for operations is an intriguing possibility.

I’m interested in hearing your thoughts about this application for blockchain. From a RackN and Digital Rebar perspective, these capabilities are well aligned with our composable approach to configuration. We’d be happy to talk with operators who want to look more deeply into this type of integration.

Series Intro: A Focus on Sustaining Operations

When discussing the data center of the future, it’s critical that we start by breaking the concept of the data center as a physical site with guarded walls, raised floors, neat rows of servers and crash-cart-pushing operators. The Data Center of 2020 (DC2020) is a distributed infrastructure comprised of many data centers, cloud services and connected devices.

The primary design concept of DC2020 is integrated automation, not the physical infrastructure itself.

As an industry, we need to actively choose implementations that unify our operational models to create portability and eliminate silos. This means investing more in sustaining operations (aka Day 2 Ops) to ensure our IT systems can be constantly patched, updated and maintained. The pace of innovation (and of discovered vulnerabilities!) requires that we build with the assumption of change. DC2020 cannot be a “fire and forget” build that assumes only occasional updates.

There are a lot of disruptive and exciting technologies entering the market. These create tremendous opportunities for improvement and faster innovation cycles. They also create significant risk of further fragmenting our IT operations landscape in ways that increase costs, decrease security and further churn our market.

It is possible to be for both rapid innovation and sustaining operations, but it requires a plan for building robust automation.

The focus on tightly integrated development and operations work is a common theme in both DevOps and Site/System Reliability Engineering, topics that we cover all the time. These practices are not only practical; we believe they are essential requirements for building DC2020.

Over this week, I’m going to be using the backdrop of IBM Think to outline the concepts for DC2020. I’ll both pull in topics that I’m hearing there and revisit topics that we’ve been discussing on our blogs and L8ist Sh9y podcast. Ultimately, we’ll create a comprehensive document; for now, we invite you to share your thoughts about this content in its rawer narrative form.

Cloud Immutability on Metal in the Data Center

Cloud has enabled a create-destroy infrastructure process that is now seen as commonplace, e.g. launching and destroying virtual machines and containers. This process is referred to as immutable infrastructure and, until now, has not been available to operators within a data center. RackN technology now actively supports customers in enabling immutability on physical infrastructure within the data center.

In this post, I will highlight the problems operators face in deploying services at scale and introduce the immutability solution available from RackN. In addition, I have included two videos providing background on this topic and a demonstration showing an image-based deployment of Linux and Windows on RackN using this methodology.

PROBLEM

Traditional data center operations provision and deploy services to a node before configuring the application. This post-deployment configuration introduces mutability into the infrastructure through dependencies such as operating system updates, library changes, and patches. Even worse, these changes make it incredibly difficult to roll back to a previous version should an update cause an issue.

Looking at patch management highlights the key problems operators face. Applying patches across multiple nodes may lead to inconsistent services, with dependency changes affected not just by the software but also by the hardware. Applying these patches also requires root access to the nodes, which leaves a security vulnerability open to unauthorized logins.

SOLUTION

Moving the configuration of a service to before deployment solves these problems by delivering a complete, runnable image for execution. Some initialization remains hardware-dependent and should only run once (handled by cloud-init), which allows a variety of hardware to be used.

This new approach moves the patching stage earlier in the process. Operators get a consistent deployment image with no possibility of drift, fewer security exposures because no root access to nodes is required, and a simple way to instantly roll back to a previously running image.
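A compressed sketch of the model, with invented types and step names: patches are baked into a new versioned image at build time, every node deploys the identical artifact plus a run-once hardware initialization, and rollback is simply redeploying the previous image.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Image:
    """An immutable, versioned artifact: base plus all baked-in patches."""
    version: int
    patches: tuple[str, ...] = ()

def build_image(base: Image, patch: str) -> Image:
    # Patching happens once, at build time -- never in place on a node.
    return Image(base.version + 1, base.patches + (patch,))

@dataclass
class Node:
    name: str
    image: Image | None = None

    def deploy(self, image: Image) -> None:
        self.image = image   # the whole artifact is swapped, no in-place edits

    def run_once_init(self) -> None:
        pass                 # hardware-specific, run-once setup (cloud-init style)

# Usage: patch by rebuilding, roll back by redeploying the old artifact.
v1 = Image(1)
v2 = build_image(v1, "openssl-security-fix")

fleet = [Node(f"node{i}") for i in range(3)]
for n in fleet:
    n.deploy(v2)
    n.run_once_init()

# The update misbehaves? Redeploy v1 -- the old image still exists, unchanged.
for n in fleet:
    n.deploy(v1)
```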

IMMUTABILITY OVERVIEW

In this presentation, Rob Hirschfeld makes the case for immutable infrastructure on bare metal within your data center using RackN technology. Rob delivers the complete story highlighted in this blog post.

DEMONSTRATION 

In this demonstration, Rob Hirschfeld and Greg Althaus do a complete immutable image deployment of a Linux server and a Windows server using the RackN Portal in less than 20 minutes.

Get started with RackN today to learn more about how you can change your model to this immutability approach.

  • Join the Digital Rebar Community to learn the basics of Digital Rebar Provision
  • Create an account on the RackN Portal to simplify DRP installation and management
  • Join the RackN Trial program to obtain access to advanced RackN features

Redefining PXE Boot Provisioning for the Modern Data Center

Over the past 20 years, Linux admins have defined provisioning within a limited scope: PXE boot with Cobbler. This approach remains popular today even though it only installs an operating system, limiting operators to an outdated paradigm.

Digital Rebar is the answer operators have been looking for, as provisioning has taken on a new role within the data center: workflow management, infrastructure automation, bare metal and virtual machines inside and outside the firewall, and the coming need for edge IoT management. The active open source community is expanding the capabilities of provisioning, giving operators a new foundational technology for rethinking how data centers are managed to meet today’s rapid delivery requirements.

Digital Rebar was architected with the global Cobbler user base in mind, not only to simplify the transition but also to offer a set of common packages that are shareable across the community to simplify and automate repetitive tasks, freeing operators to spend more time on key issues instead of, for example, hunting for new OS packages.

I encourage you to take 15 minutes and visit the Digital Rebar community to learn more about this technology and how you can up-level your organization’s capability to automate infrastructure at scale.

Open Source, Operators, and DevOps Come Together for Data Center Automation

Running data centers is a complex challenge: the typical environment consists of multiple hardware platforms, operating systems, and processes to manage. Operators face daily “fire drills” to keep the machines running while simultaneously trying to expand service offerings and learn new technologies. The adoption of virtualization and cloud has not simplified anything for IT teams; it has only made their jobs more complicated.

Our founders have years of experience working on deploying and operating large, complex data center environments and clouds. They are also well versed in the open source community space and see the merger of community with operations leading to a better way forward for data center management.

We are building an operators community that shares best practices and code for reuse across work sites to fully automate data centers. Working together, operators can not only solve operational challenges for their own infrastructure but also find common patterns to leverage across a broad set of architectures.

Community is a powerful force in the software industry, and there is no reason those concepts cannot be leveraged by operators and DevOps teams to completely change the ROI of running a data center. RackN is founded on the belief that, working together, we can transform data center management via automation and physical ops.

Join us today to help build the future of data center automation and provisioning technology.

RackN talks Cloud Native Landscape on Rishidot.TV

Rob Hirschfeld speaks on Rishidot.TV as part of the Cloud Native Landscape video interview series. Questions asked:

  • Background on RackN
  • Cloud Native Ecosystem Fit – embracing DevOps and Site Reliability Engineering
    • Running “Cloud” in their existing data centers
  • Differentiation – built on open source Digital Rebar, replacing Cobbler, MaaS, and other provisioning tools
    • API-driven, Infrastructure-as-Code feel
  • Use Cases – Immutable Infrastructure & API-driven design
    • Image-based deployments direct to metal
    • CI/CD infrastructure, zero-touch automation