DC2020: Putting the Data back in the Data Center

For the past two decades, data centers have been more about compute than data, but the machine learning and IoT revolutions are changing that focus for the 2020 Data Center (aka DC2020). My experience at IBM Think 2018 suggests that we should be challenging our compute-centric view of the data center; instead, we should be considering the flow and processing of data. Because data is not localized, this reinforces our concept of DC2020 as a distributed and integrated environment.

We have defined data centers by the compute infrastructure stored there. Cloud (especially when equated with virtual machines) has been an infrastructure-as-a-service (IaaS) story. Even big data “lakes” are primarily compute clusters with distributed storage. This model dominates because data sources are locked in application silos: control of the compute translates directly to control of the data.

What if control of data is being decoupled from applications? Data is becoming its own thing with new technologies like machine learning, IoT, blockchain and other distributed data sources.

In a data-centric model, we are more concerned with the movement of and access to data than with building applications to control it. Think of event-driven (serverless) and microservice platforms that effectively operate on data-in-flight. As function-as-a-service adoption grows, it will become impossible to know all the ways that data is manipulated, because applications no longer have fixed boundaries.
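
As a thought experiment, here is a minimal Go sketch of the data-in-flight idea: a stateless handler that transforms an event as it passes through rather than owning the data in an application silo. The event shape and function names are illustrative assumptions, not tied to any particular serverless platform.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DataEvent is an illustrative envelope for a piece of data-in-flight.
type DataEvent struct {
	Source  string  `json:"source"`
	Reading float64 `json:"reading"`
}

// Enrich is a stateless, serverless-style handler: it transforms the event
// and passes it along without owning a database or application silo.
func Enrich(raw []byte) ([]byte, error) {
	var e DataEvent
	if err := json.Unmarshal(raw, &e); err != nil {
		return nil, err
	}
	// Example transformation: normalize the reading before forwarding.
	e.Reading = e.Reading / 100.0
	return json.Marshal(e)
}

func main() {
	out, err := Enrich([]byte(`{"source":"sensor-7","reading":4211}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```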

This data-centric, distributed architecture model will become even more pronounced as processing moves out of data centers and to the edge. IT infrastructure at the edge will be used to handle latency-critical data and to aggregate data for centralization. These operations will not look like traditional application stacks: they will be data processing microservices and functions.

This data-centric approach relegates infrastructure services to a subordinate role. We should not care about servers or machines except insofar as they support the platforms driving data flows.

I am not abandoning making infrastructure simple and easy – we need that more than ever! However, it's easy to underestimate the coming transformation of application architectures based on advanced data processing and sharing technologies. The amount and number of sources of data have already grown beyond human comprehension, yet we still think of applications in a client-server mindset.

We’re only at the start of really embedding connected sensors and devices into our environment. As devices from many sources and vendors proliferate, they also need to coordinate. That means we’re reaching a point where devices will start talking to each other locally instead of via our centralized systems. It’s part of the coming data avalanche.

Current management systems will not survive explosive growth.  We’re entering a phase where control and management paradigms cannot keep up.

As an industry, we are rethinking management automation from declarative (“start this”) to intent-focused (“maintain this”) systems. This is the simplest way to express the difference between OpenStack and Kubernetes. That change is required to create autonomous infrastructure designs; however, it also means we need to start thinking of infrastructure as something that follows data instead of leading it.
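
To make the distinction concrete, here is a small, hypothetical Go sketch (not OpenStack or Kubernetes code): the declarative call issues a one-shot command, while the intent-driven controller continuously reconciles observed state back toward desired state.

```go
package main

import (
	"fmt"
	"time"
)

// Declarative: "start this" is a one-shot imperative request.
func startServer(name string) {
	fmt.Println("starting", name)
}

// Intent: "maintain this" is a controller loop that keeps observed state
// converged on desired state, restarting anything that drifts.
func reconcile(desired map[string]bool, observe func() map[string]bool) {
	for i := 0; i < 3; i++ { // bounded here only so the sketch terminates
		running := observe()
		for name := range desired {
			if !running[name] {
				startServer(name) // corrective action, not a human command
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	desired := map[string]bool{"web": true, "db": true}
	observe := func() map[string]bool { return map[string]bool{"web": true} }
	reconcile(desired, observe)
}
```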

That's exactly the problem RackN has solved with Digital Rebar Provision. Deep composability, simple APIs and extensible workflows are essential components of integrated automation in DC2020, putting the data back in the data center.
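
As a hedged illustration of what "simple APIs" can look like in practice, the sketch below queries a Digital Rebar Provision endpoint for its machine inventory over HTTPS. The host, port, credentials and exact path used here are assumptions for this example; consult the current DRP API documentation for the authoritative interface.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumption: the endpoint address, port, path and credentials below are
	// placeholders; adjust them to match your own DRP install and its docs.
	endpoint := "https://drp.example.local:8092/api/v3/machines"

	client := &http.Client{Transport: &http.Transport{
		// Demo only: lab endpoints often use self-signed certificates.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	req, err := http.NewRequest("GET", endpoint, nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("USERNAME", "PASSWORD") // replace with real credentials

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // JSON list of machines known to the endpoint
}
```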

Week in Review: RackN talks Immutability and DevOps at SRECon Americas

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Immutable Deployments talk at SRECon Americas

Rob Hirschfeld presented at SRECon Americas this week, “Don’t Ever Change! Are Immutable Deployments Really Simpler, Faster and Safer?”

Configuration is fragile because we're talking about mutating a system. Infrastructure as code means building everything in place. Every one of our systems has to be configured and managed, and that creates a dependency graph. We can lock things down, but we inevitably have to patch our systems.

Immutable infrastructure is another way of saying “pre-configured systems”. Traditional deployment models do configuration after deployment, but it is better if we can do it beforehand. Immutability is a DevOps pattern: shift configuration to the left of the pipeline, moving it from the production stage to the build stage.
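
A toy Go sketch of the shift-left idea (illustrative only, not RackN or SRECon material): all configuration and patching happens in a build step that produces an immutable image, and the deploy step merely boots that finished artifact on each node.

```go
package main

import "fmt"

// Image is an illustrative, pre-configured artifact produced at build time.
type Image struct {
	Base    string
	Patches []string // applied during the build, never in production
}

// build does all configuration "to the left" of the pipeline.
func build(base string, patches ...string) Image {
	return Image{Base: base, Patches: patches}
}

// deploy only boots the finished artifact; there is no post-deploy mutation.
func deploy(node string, img Image) {
	fmt.Printf("booting %s from immutable image %s (%d patches baked in)\n",
		node, img.Base, len(img.Patches))
}

func main() {
	// The base and patch names below are hypothetical examples.
	img := build("ubuntu-18.04", "openssl-2018-03", "kernel-4.15.7")
	for _, node := range []string{"node-01", "node-02", "node-03"} {
		deploy(node, img) // every node gets an identical, already-configured image
	}
}
```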

Finish Reading Review from Tanya Reilly (@whereistanya)


Series Intro: A Focus on Sustaining Operations

When discussing the data center of the future, it is critical that we start by breaking the concept of the data center as a physical site with guarded walls, raised floors, neat rows of servers and crash-cart-pushing operators. The Data Center of 2020 (DC2020) is a distributed infrastructure composed of many data centers, cloud services and connected devices.

The primary design concept of DC2020 is integrated automation, not the actual infrastructure.

As an industry, we need to actively choose implementations that unify our operational models to create portability and eliminate silos. This means investing more in sustaining operations (aka Day 2 Ops) to ensure our IT systems can be constantly patched, updated and maintained. The pace of innovation (and of discovered vulnerabilities!) requires that we build with the assumption of change. DC2020 cannot be a “fire and forget” build that assumes only occasional updates.

There are a lot of disruptive and exciting technologies entering the market. These create tremendous opportunities for improvement and faster innovation cycles. They also create significant risk of further fragmenting our IT operations landscape in ways that increase costs, decrease security and further churn our market.

It is possible to be for both rapid innovation and sustaining operations, but it requires a plan for building robust automation.

The focus on tightly integrated development and operations work is a common theme in both the DevOps and Site/System Reliability Engineering topics that we cover all the time. These practices are not only practical; we believe they are essential requirements for building DC2020.

Over this week, I'm going to use the backdrop of IBM Think to outline the concepts for DC2020. I'll both pull in topics that I'm hearing there and revisit topics that we've been discussing on our blogs and the L8ist Sh9y podcast. Ultimately, we'll create a comprehensive document; for now, we invite you to share your thoughts about this content in its rawer narrative form.

Week in Review: Test Digital Rebar in Minutes with Hosted Physical Infrastructure

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Deploy and Test Digital Rebar Provision with No Infrastructure in 10 Minutes

For operators looking to better understand Digital Rebar Provision (DRP), RackN has developed an easy-to-follow process leveraging Packet.net for physical device creation. This process allows new users to create a physical DRP endpoint and then provision a new physical node on Packet. Information and code to run this guide are available at https://github.com/digitalrebar/provision/tree/master/examples/pkt-demo.

In this blog, I will take the reader through the process with images captured while running it on my Mac.

Read More

Site Reliability Engineering: 4 Things to Know

Organizations that have embraced DevOps and cloud-native architecture might also want to investigate SRE. Interop ITX expert Rob Hirschfeld explains why.

To find out more about site reliability engineering, Network Computing spoke with Rob Hirschfeld, who has been in the cloud and infrastructure space for nearly 15 years, including work with early ESX betas and serving on the OpenStack Foundation board. Hirschfeld, co-founder and CEO of RackN, will present “DevOps vs SRE vs Cloud Native” at Interop ITX 2018.

Read More


Cloud Immutability on Metal in the Data Center

Cloud has enabled a create-destroy infrastructure process that is now commonplace, e.g. launching and destroying virtual machines and containers. This process is referred to as immutable infrastructure and, until now, has not been available to operators within a data center. RackN technology now actively supports customers in enabling immutability on physical infrastructure within a data center.

In this post, I will highlight the problems faced by operators in deploying services at scale and introduce the immutability solution available from RackN. In addition, I have added two videos providing background on this topic and a demonstration showing an image deployment of Linux and Windows on RackN using this methodology.

PROBLEM

Traditional data center operations provision and deploy services to a node and then configure the application afterwards. This post-deployment configuration introduces mutability into the infrastructure through dependency issues such as operating system updates, library changes, and patches. Even worse, these changes make it incredibly difficult to roll back to a previous version should an update cause an issue.

Patch management highlights the key problems faced by operators. Applying patches across multiple nodes may lead to inconsistent services, with dependency changes affected not just by the software but also by the hardware. Applying these patches also requires root access to the nodes, which leaves a security exposure for unauthorized logins.

SOLUTION

Moving the configuration of a service ahead of deployment solves the problems discussed above by delivering a complete, runnable image for execution. Some initialization remains hardware-dependent and should only be run once (handled, for example, by Cloud-Init), which still allows a variety of hardware to be used.

This new approach moves the patching stage earlier in the process, allowing operators to ensure a consistent deployment image with no possibility of drift, to reduce security exposure since no root access is required, and to instantly roll back to a previously running image.
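
To illustrate the rollback point, here is a minimal, hypothetical Go sketch (not the RackN implementation): each node simply records the image it last booted from, so rolling back means re-deploying a known-good artifact rather than unwinding patches in place.

```go
package main

import "fmt"

// node tracks which immutable image it is currently booted from, plus the
// image it ran before, so rollback is just a re-deploy of a known artifact.
type node struct {
	name     string
	current  string
	previous string
}

func (n *node) deploy(image string) {
	n.previous, n.current = n.current, image
	fmt.Printf("%s: rebooting into image %s\n", n.name, image)
}

// rollback does not try to reverse individual patches; it simply re-deploys
// the last known-good image.
func (n *node) rollback() {
	if n.previous == "" {
		fmt.Printf("%s: no previous image to roll back to\n", n.name)
		return
	}
	n.deploy(n.previous)
}

func main() {
	// Image names are hypothetical examples.
	n := &node{name: "metal-07"}
	n.deploy("app-image-2018.03.1")
	n.deploy("app-image-2018.03.2") // the patched build from CI/CD
	n.rollback()                    // instantly back to 2018.03.1
}
```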

IMMUTABILITY OVERVIEW

In this presentation, Rob Hirschfeld makes the case for immutable infrastructure on bare metal within your data center using RackN technology. Rob delivers the complete story highlighted in this blog post.

DEMONSTRATION 

In this demonstration, Rob Hirschfeld and Greg Althaus do a complete immutable image deployment of a Linux server and a Windows server using the RackN Portal in less than 20 minutes.

Get started with RackN today to learn more about how you can change your model to this immutability approach.

  • Join the Digital Rebar Community to learn the basics of Digital Rebar Provision
  • Create an account on the RackN Portal to simplify DRP installation and management
  • Join the RackN Trial program to obtain access to advanced RackN features

Immutable Infrastructure Delivery on Metal: See RackN at Data Center World

The RackN team is heading to San Antonio, TX next week for Data Center World, March 12 – 15. Our co-founder/CEO Rob Hirschfeld is giving a talk on immutable infrastructure for bare metal in the data center (see session information below).

We are interested in meeting and talking with fellow technologists. Contact us this week so we can set up times to meet at the event. If you are able to attend Rob's session, be sure to let him know you saw it here on the RackN blog.

RackN Session

March 12 at 2:10pm
Room 206AM
Session IT7
Tracks: Cloud Services, Direct Access

Operate your Data Center like a Public Cloud with Immutable Infrastructure

The pressure on IT departments to deliver services to internal customers is considerably higher today, as public cloud vendors are able to operate at massive scale, forcing CIOs to challenge their own staff to raise the bar in data center operations. Of course, enterprise IT departments don't have the large staff of an AWS or Azure; however, the fundamental process running those public clouds is now available for consumption in the enterprise. This process is called “immutable infrastructure” and allows servers to be deployed 100% ready to run without any need for remote configuration or access. It's called immutable because the servers are deployed from images produced by a CI/CD process and destroyed after use instead of being reconfigured. It's a container and cloud pattern that has finally made it to physical. In this talk, we'll cover the specific process and its advantages over traditional server configuration.

Open Source, Operators, and DevOps Come Together for Data Center Automation

Running data centers is a complex challenge, as the typical environment consists of multiple hardware platforms, operating systems, and processes to manage. Operators face daily “fire drills” to keep the machines running while simultaneously trying to expand service offerings and learn new technologies. The adoption of virtualization and cloud has not simplified anything for IT teams; it has only made their jobs more complicated.

Our founders have years of experience working on deploying and operating large, complex data center environments and clouds. They are also well versed in the open source community space and see the merger of community with operations leading to a better way forward for data center management.

We are building an operator community that shares best practices and code that can be reused across sites to fully automate data centers. Working together, operators can not only solve operational challenges for their own infrastructure, but also find common patterns to leverage across a broad set of architectures.

Community is a powerful force in the software industry, and there is no reason why those concepts cannot be leveraged by operators and DevOps teams to completely change the ROI of running a data center. RackN is founded on the belief that, working together, we can transform data center management through automation and physical ops.

Join us today to help build the future of data center automation and provisioning technology.

RackN talks Cloud Native Landscape on Rishidot.TV

Rob Hirschfeld speaks on Rishidot.TV  as part of the Cloud Native Landscape video interview series. Questions asked:

  • Background on RackN
  • Cloud Native ecosystem fit – embracing DevOps and Site Reliability Engineering
    • Running “cloud” in their existing data centers
  • Differentiation – built on open source Digital Rebar, replacing Cobbler, MAAS, and other provisioning tools
    • API-driven, Infrastructure as Code feel
  • Use Cases – Immutable Infrastructure and API-driven design
    • Image-based deployments direct to metal
    • CI/CD infrastructure, zero-touch automation

 

Great Fun Accessing your Infrastructure: How Secure are You?

How secure is your infrastructure? Not just your internal data centers, but what about the networks connecting you to public clouds or hosting providers? How about your corporate data, which could be anywhere in the world since you almost certainly have shadow IT somewhere?

RackN believes that IT security begins with a secure foundation for provisioning, not only within your data center but into your cloud environments as well. Having a single tool architected with security as a key feature allows SecOps to spend more time protecting against attacks at the application and data storage layers instead of allowing attacks at the metal.

Issue – Secure the Enterprise

  • Many enterprises fail to patch both software and hardware on a regular basis due to their inability to reliably and safely manage the process without impacting service delivery.
  • With applications and data running globally, IT has lost the ability to know with certainty where their services are operating from and how secure they are; this is true even beyond public clouds.

Impact – Business is Digital

  • All business is now digital, and a majority of companies don't have the technical staff to ensure a high level of security; simply trusting cloud providers is not enough.
  • Companies must ensure that networks are protected and that applications and hardware are updated with the latest patches; is your company doing this?

RackN Solution – Secure Foundation

  • Delivering provisioning via an automated, layered approach gives IT teams a secure and repeatable process to ensure application availability regardless of location, e.g. data center, hosting provider, public cloud, and eventually edge infrastructure.
  • Like any construction project, security starts with a solid foundation; RackN is that foundation on which to build your IT infrastructure.

The RackN team is ready to start you on the path to operations excellence.

Podcast: Paul Teich on Enterprise Security, Hardware Issues at Edge, Augmented Reality and 5G

In this week’s podcast, we speak with Paul Teich, Principal Analyst, Tirias Research. Paul offered his insight into several key industry trends as well as the recent Spectre and Meltdown discoveries.

  • Spectre and Meltdown – Will this drive additional security focus?
  • Augmented Reality and AI are the holy grail of Edge and Cloud
  • Capabilities of 5G and its impact over next 10 years
  • Why is Hyper Converged Infrastructure popular?

Topics and time markers (minutes.seconds):

  • Introduction (0.00 – 3.06) – Texas and Texas A&M
  • Spectre and Meltdown Lead to Security? (3.06 – 6.30)
  • Industry-Wide Refresh (6.30 – 10.38) – At least 12 months to new silicon
  • Enterprise Thoughts on Patching/Updates (10.38 – 15.03) – Profit over Security
  • Major Services and Rolling Blackouts (15.03 – 16.06) – Service Patching Underway – Intel
  • Security Vulnerabilities Always Exist (16.06 – 17.50)
  • Edge ~ Highly Distributed Management (17.50 – 22.23) – Definition
  • Hardware Component to Edge (22.23 – 25.03) – Opening for ARM?
  • Edge is Heterogeneous (25.03 – 27.48)
  • Portability between Cloud and Edge Required (27.48 – 31.47) – End of Management from Hardware Vendors
  • GPUs on the Edge (31.47 – 36.29) – Tesla and Nvidia Announcement
  • Infrastructure Deployment in an Instant (36.29 – 40.00)
  • Multi-Tenancy at Edge (40.00 – 42.50) – Jevons Paradox Appears Again
  • Augmented Reality & AI (42.50 – 45.13)
  • 5G Rollout (45.13 – 47.17)
  • Hyper Converged Infrastructure – Why? (47.17 – 52.30)
  • Wrap-Up (52.30 – End)

Podcast Guest
Paul Teich, Principal Analyst, Tirias Research

Paul Teich is a Principal Analyst with a technical background and over 30 years of industry experience in computing, storage, and networking. Paul’s strength is in assessing the technical feasibility and market opportunity for new technologies and developing profitable strategies to commercialize those technologies.

Paul’s prior experience includes being a key member of AMD’s Opteron server processor team in the early 2000s, which redefined 64-bit computing; product manager of a web service at the height of the first internet bubble; designer of low-cost consumer PCs before multi-PC households were common; and product manager of RISC processors used as graphics accelerators in the early 1990s, which is now back in vogue on a larger scale with deep learning.

Over the past few years Paul has spoken at and moderated panels at many industry events, including IoT Dev-Con, Open Server Summit, Dell World, TiEcon Silicon Valley, NIWeek, ARM TechCon, and SXSW Interactive. Paul is quoted by an equally diverse set of industry press outlets, including IDG, SiliconANGLE, ComputerWorld, InfoWorld, eWeek, and Processor.com.

Paul also serves as an adviser to the EEMBC Cloud and Big Data Server Benchmarking working group (“ScaleMark”) and has been a co-organizer of the Open Server Summit’s scale-out server track. In addition, he has recently been an expert consultant in an intellectual property court case and has supported a client in front of a US government committee.

Paul holds a BS in Computer Science from Texas A&M and an MS in Technology Commercialization from the University of Texas’ McCombs School of Business. His technical accomplishments include 12 US patents and senior membership in both the ACM and the IEEE.