Week In Review: Immutability in your Data Center with RackN

Welcome to our new format for the RackN and Digital Rebar Weekly Review. It contains the same great information you are accustomed to; however, I have reorganized it to place a new section at the start with my thoughts on various topics. You can still find the latest news items related to Edge, DevOps and other relevant topics below.

Cloud Immutability on Metal in the Data Center

Cloud has enabled a create-destroy infrastructure process that is now seen as common, e.g., launching and destroying virtual machines and containers. This process is referred to as immutable infrastructure and, until now, has not been available to operators within a data center. RackN technology is now actively supporting customers in enabling immutability on physical infrastructure within the data center.

Read More

Physical Infrastructure Automation

Automation is not simply taking manual tasks and replacing them with a machine. Rather, it is a methodology for assembling hardware and software infrastructure in a reliable, repeatable way, saving time and effort. Automation also gives IT teams the capacity to rapidly meet new business challenges and learn new technologies instead of spending significant cycles manually pushing buttons and fighting fire drills.

Read More


News

RackN

Digital Rebar Community

L8ist Sh9y Podcast

Social Media

Cloud Immutability on Metal in the Data Center

Cloud has enabled a create-destroy infrastructure process that is now seen as common, e.g., launching and destroying virtual machines and containers. This process is referred to as immutable infrastructure and, until now, has not been available to operators within a data center. RackN technology is now actively supporting customers in enabling immutability on physical infrastructure within the data center.

In this post, I will highlight the problems faced by operators in deploying services at scale and introduce the immutability solution available from RackN. In addition, I have added two videos providing background on this topic and a demonstration showing an image deployment of Linux and Windows on RackN using this methodology.

PROBLEM

Traditional data center operations provision and deploy services to a node before configuring the application. This post-deployment configuration introduces mutability into the infrastructure through dependency issues such as operating system updates, library changes, and patches. Even worse, these changes make it incredibly difficult to roll back to a previous version should an update cause an issue.

Looking at patch management highlights key problems faced by operators. Applying patches across multiple nodes may lead to inconsistent services, with dependency changes driven not just by the software but also by the hardware. Applying these patches also requires root access to the nodes, which leaves a standing security vulnerability to unauthorized logins.

SOLUTION

Moving service configuration ahead of deployment solves the problems discussed above by delivering a complete, runnable image for execution. Some initialization remains hardware dependent and should run only once (e.g., via cloud-init), which allows the same image to be used across a variety of hardware.

This new approach moves the patching stage earlier in the process, allowing operators to ensure a consistent deployment image with no possibility of drift. It also removes a security exposure, since no root access to nodes is required, and makes it simple to roll back instantly to a previously running image.
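At its simplest, the build-once, deploy-by-pointer flow described above can be sketched as a versioned-image exercise: CI/CD produces complete images, "deploying" means pointing a node at a prebuilt image, and rollback means pointing back at the previous one. This is a minimal illustrative shell sketch, not RackN tooling; the directory, file names, and symlink mechanism are all hypothetical stand-ins for real disk-image delivery:

```shell
#!/bin/sh
# Sketch of immutable, image-based delivery with instant rollback.
# All paths and names are hypothetical; real deployments write whole disk
# images to nodes rather than flipping a symlink.
set -e
demo=/tmp/immutable-demo
mkdir -p "$demo/images"

# A CI/CD pipeline produces complete, versioned images (OS + app + patches);
# nothing is ever patched in place on a running node.
echo "os+app+patches v1" > "$demo/images/app-v1.img"
echo "os+app+patches v2" > "$demo/images/app-v2.img"

# "Deploying" v2 is just repointing at a prebuilt artifact...
ln -sfn images/app-v2.img "$demo/current"

# ...and rolling back is repointing at the previous artifact:
# no root login or in-place reconfiguration of the node is required.
ln -sfn images/app-v1.img "$demo/current"
cat "$demo/current"
```

Per-node, hardware-dependent settings (hostname, network identity) stay out of the image and are applied once at first boot, for example by cloud-init.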

IMMUTABILITY OVERVIEW

In this presentation, Rob Hirschfeld makes the case for immutable infrastructure on bare metal within your data center using RackN technology. Rob delivers the complete story highlighted in this blog post.

DEMONSTRATION 

In this demonstration, Rob Hirschfeld and Greg Althaus do a complete immutable image deployment of a Linux server and a Windows server using the RackN Portal in less than 20 minutes.

Get started with RackN today to learn more about how you can change your model to this immutability approach.

  • Join the Digital Rebar Community to learn the basics of Digital Rebar Provision (DRP)
  • Create an account on the RackN Portal to simplify DRP installation and management
  • Join the RackN Trial program to obtain access to advanced RackN features

Immutable Infrastructure Delivery on Metal: See RackN at Data Center World

The RackN team is heading to San Antonio, TX next week for Data Center World, March 12 – 15. Our co-founder/CEO Rob Hirschfeld is giving a talk on immutable infrastructure for bare metal in the data center (see session information below).

We are interested in meeting and talking with fellow technologists. Contact us this week so we can set up times to meet at the event. If you are able to attend Rob’s session, be sure to let him know you saw it here on the RackN blog.

RackN Session

March 12 at 2:10pm
Room 206AM
Session IT7
Tracks: Cloud Services, Direct Access

Operate your Data Center like a Public Cloud with Immutable Infrastructure

The pressure on IT departments to deliver services to internal customers is considerably higher today as public cloud vendors are able to operate on a massive scale, forcing CIOs to challenge their own staff to raise the bar in data center operations. Of course, enterprise IT departments don’t have the large staff of an AWS or Azure; however, the fundamental process running those public clouds is now available for consumption in the enterprise. This process is called “immutable infrastructure” and allows servers to be deployed 100% ready to run without any need for remote configuration or access. It’s called immutable because the servers are deployed from images produced by a CI/CD process and destroyed after use instead of being reconfigured. It’s a container and cloud pattern that has finally made it to physical. In this talk, we’ll cover the specific process and its advantages over traditional server configuration.

Podcast: Kong Yang on golden age of cloud, CI/CD and DevOps, and operator opportunity

In this week’s podcast, we speak with Kong Yang, Head Geek at SolarWinds. He also hosts the Wide World of Tech podcast. Key topics discussed in the podcast:

  • State of cloud computing ~ entering its golden age
  • IT & business units coming together to deal with shadow IT responsibly
  • Building technology on services with no control over them
  • CI/CD model
  • Operators skills and time available
  • Human aspect

Topic                                                   Time (Minutes.Seconds)

Introduction                              0.00 – 1.58  (Gina Rosenthal Podcast)
Cloud is Settling In                      1.58 – 5.18  (Entering its Golden Age)
Is Amazon Frictionless?                   5.18 – 6.32  (Lack of Governance Issues for IT)
Multi-Technology Issues                   6.32 – 8.05  (IT must Partner with Business Units)
Moving Away from Control Infra            8.05 – 10.40
Tooling Choices                           10.40 – 15.10 (Managing Services is Uneasy w/ no Control)
CI/CD                                     15.10 – 22.15 (Dev’s view vs Ops view)
Time for Operators is Limited             22.15 – 25.55 (Always have to be learning)
Pipeline Specialization per Cloud         25.55 – 36.31 (People Challenges / SRE Half-Life)
Wrap Up                                   36.31 – END

Podcast Guest
Kong Yang, Head Geek at SolarWinds

Kong Yang is a Head Geek at SolarWinds® with over 20 years of IT experience specializing in virtualization and cloud management. He is a VMware vExpert, Cisco Champion, and active contributing practice leader within the virtualization and cloud communities.

Yang’s industry expertise includes application performance management, virtualization sizing and capacity planning best practices, community engagement, and technology evangelism. Yang is passionate about understanding the behavior of the application lifecycle and ecosystem – the analytics of the interdependencies as well as qualifying and quantifying the results to empower the organization’s bottom line.

He focuses on virtualization and cloud technologies; application performance management; hybrid cloud best practices; technology stacks such as containers, microservices, serverless, and cloud native best practices; DevOps conversations; converged infrastructure technologies; and data analytics. Yang is a past speaker at BrightTALK Summits, Spiceworks SpiceWorld, Interop ITX, and VMworld events.

He is also the owner of United States Patent 8,176,497 for an intelligent method to auto-scale VMs to fulfill peak database workloads. Yang’s past roles include a Cloud Practice Leader at Gravitant and various roles at Dell Technologies.

Follow Kong at @KongYang

Week in Review: Building Bridges between DevOps and Architects

Welcome to our new format for the RackN and Digital Rebar Weekly Review. It contains the same great information you are accustomed to; however, I have reorganized it to place a new section at the start with my thoughts on various topics. You can still find the latest news items related to Edge, DevOps and other relevant topics below.

Building Bridges between Operators, Developers and Architects

DevOps is not enough. Infrastructure architects play a key role and are often not considered. It is our experience that bringing the architect together with the DevOps team leads to the optimal solution. Hear from Rob Hirschfeld on why this is critical for success:

Redefining PXE Provisioning for the Modern Data Center

Over the past 20 years, Linux admins have defined provisioning with a limited scope: PXE boot with Cobbler. This approach remains popular today even though it only installs an operating system, limiting operators’ ability to move beyond this outdated paradigm.

Digital Rebar is the answer operators have been looking for, as provisioning has taken on a new role within the data center: workflow management, infrastructure automation, bare metal and virtual machines inside and outside the firewall, as well as the coming need for edge IoT management.

Read More


News

RackN

Digital Rebar Community

L8ist Sh9y Podcast

Social Media

Redefining PXE Boot Provisioning for the Modern Data Center

Over the past 20 years, Linux admins have defined provisioning with a limited scope: PXE boot with Cobbler. This approach remains popular today even though it only installs an operating system, limiting operators’ ability to move beyond this outdated paradigm.

Digital Rebar is the answer operators have been looking for, as provisioning has taken on a new role within the data center: workflow management, infrastructure automation, bare metal and virtual machines inside and outside the firewall, as well as the coming need for edge IoT management. The active open source community is expanding the capabilities of provisioning, giving operators a new foundational technology to rethink how data centers can be managed to meet today’s rapid delivery requirements.

Digital Rebar was architected with the global Cobbler user base in mind, not only to simplify the transition but also to offer a set of common packages that are shareable across the community to simplify and automate repetitive tasks, freeing operators to spend more time on key issues instead of, for example, hunting for new OS packages.
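For context on the limited scope being replaced: classic PXE provisioning needs only a DHCP server that points network-booting machines at a TFTP-served bootloader. A minimal dnsmasq configuration for that pattern looks roughly like this (the interface name, address range, and paths are placeholders, not values from any RackN deployment):

```
# dnsmasq.conf fragment: classic PXE boot, the layer Cobbler-era tools automate
# NIC attached to the provisioning network
interface=eth0
# Hand out addresses to network-booting machines
dhcp-range=192.168.1.100,192.168.1.200,12h
# Bootloader filename given to PXE clients
dhcp-boot=pxelinux.0
# Serve the bootloader and its configs over TFTP
enable-tftp
tftp-root=/var/lib/tftpboot
```

Digital Rebar Provision replaces this static handoff with an API-driven service that also manages the workflows that run after the operating system lands.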

I encourage you to take 15 minutes and visit the Digital Rebar community to learn more about this technology and how you can up-level your organization’s capability to automate infrastructure at scale.

Podcast – Eric Wright talks DevOpsishFullStackishness and Woke IT


Joining us this week is Eric Wright, Director Technical Marketing/Evangelist at Turbonomic and podcaster/evangelist at Discoposse.com talking open source.

Highlights:

  • RANT on cloud terminology w/ new terms “DevOpsishFullStackishness” & “Woke IT”
  • Open source communities, vendors, and value of users
  • Edge Computing – definition, Turbonomic Role in cloud/edge
  • Edge and Cloud are Hybrid – embrace multiple paradigms including legacy
  • Discussion of Go language and RackN usage

Topic                                                                                  Time (Minutes.Seconds)

Introduction                                                                   0.0 – 2.30
Questioning in Open Source                                      2.30 – 3.38 (Rob’s Skill)
RANT on Cloud Terminology                                     3.38 – 14.30 (Hybrid IT is legitimate)
Software Defined Terminology                                 14.30 – 15.55 (Trademark Tech Terms)
Open Source Community & Vendors                       15.55 – 20.30
Using Open Source as Valuable as Contributing         20.30 – 24.30
Open Source Project Scope Creep                          24.30 – 26.13
Edge Computing                                                         26.13 – 28.57
Turbonomic Role in Edge                                           28.57 – 32.53 (Workload Automation)
Dynamic Mapping of Workloads at Edge                32.53 – 34.39
Sounds like Hybrid?                                                     34.39 – 42.31 (RackN does PXE in Go)
Ruby Containers into Go on a Switch                       42.31 – 46.35 (Language Snobs)
Wrap Up                                                                        46.35 – END


Podcast Guest: Eric Wright, Director Technical Marketing/Evangelist at Turbonomic

Before joining Turbonomic, Eric Wright served as a systems architect at Raymond James in Toronto. As a result of his work, Eric was named a VMware vExpert and Cisco Champion, with a background in virtualization, OpenStack, business continuity, PowerShell scripting, and systems automation. He’s worked in many industries, including financial services, health services, and engineering firms. As the author behind DiscoPosse.com, a technology and virtualization blog, Eric is also a regular contributor to community-driven technology groups and an author at Pluralsight, the leading provider of online training for tech and creative professionals. Eric’s latest course is “Introduction to OpenStack”; you can check it out at pluralsight.com.

Week in Review: Automation and Scale are a Must for the Edge

Welcome to our new format for the RackN and Digital Rebar Weekly Review. It contains the same great information you are accustomed to; however, I have reorganized it to place a new section at the start with my thoughts on various topics. You can still find the latest news items related to Edge, DevOps and other relevant topics below.

Automation and Scale at the Edge

Edge computing presents significant challenges to operations teams, as there will be hundreds of thousands of endpoints to provision, manage, and secure. Unable to physically access each of these endpoints, operations teams must manage them remotely with powerful automation tooling to ensure service uptime.

RackN solutions are architected from the ground up to enable this remote automation. Here is Rob Hirschfeld, Co-Founder/CEO of RackN, with more details.

Building an Operator Community

We are building an operator community that shares best practices and code for reuse across work sites to fully automate data centers. Working together, operators can not only solve operational challenges for their own infrastructure but also find common patterns to leverage across a broad set of architectures.

Community is a powerful force in the software industry, and there is no reason why those concepts cannot be leveraged by operators and DevOps teams to completely change the ROI of running a data center. RackN is founded on the belief that, working together, we can transform data center management via automation and physical ops.

Read More


News

  • Edge Computing

    ADVA Optical Networking will host a joint demonstration with BT to showcase end-to-end, multi-layer transport network slicing and assurance.

    The demonstration, which takes place at the Mobile World Congress (MWC) in Barcelona, will show how edge computing and network slicing techniques can help enable emerging 5G applications. It marks the beginning of a long-term research collaboration between the two companies, focused on network slicing implementations.

    AT&T on Tuesday announced a pair of steps in the carrier’s ongoing edge computing efforts.

    The company launched the first project at its previously announced edge test zone in Palo Alto, Calif., and joined a new open source project focused on edge cloud infrastructure.

  • DevOps

    TechRepublic spoke with Datadog chief product officer Amit Agarwal to explain why DevOps is so important, and where it’s headed.

    Sometimes, all it takes to get focus on an elusive subject like the DevOps process is a bit of a name change. Perhaps that will be the case here, when it comes to a new term I’ve only started hearing over the last few months: intent-based DevOps.
    I first heard it on a conference floor, and while many were talking about DevOps successes, others were wondering what it was going to take to achieve scale through the enterprise. Intent-based DevOps felt intriguing — kind of a “less is more” approach to a sweeping development and deployment strategy that still seems too large to be easily consumed.

RackN

Digital Rebar Community

L8ist Sh9y Podcast

Social Media

Open Source, Operators, and DevOps Come Together for Data Center Automation

Running data centers is a complex challenge, as the typical environment consists of multiple hardware platforms, operating systems, and processes to manage. Operators face daily “fire drills” to keep the machines running while simultaneously trying to expand service offerings and learn new technologies. The adoption of virtualization and cloud has not simplified anything for IT teams; it has only made their jobs more complicated.

Our founders have years of experience working on deploying and operating large, complex data center environments and clouds. They are also well versed in the open source community space and see the merger of community with operations leading to a better way forward for data center management.

We are building an operator community that shares best practices and code for reuse across work sites to fully automate data centers. Working together, operators can not only solve operational challenges for their own infrastructure but also find common patterns to leverage across a broad set of architectures.

Community is a powerful force in the software industry, and there is no reason why those concepts cannot be leveraged by operators and DevOps teams to completely change the ROI of running a data center. RackN is founded on the belief that, working together, we can transform data center management via automation and physical ops.

Join us today to help build the future of data center automation and provisioning technology.

We’re talking Immutable Containers at Container World


RackN is attending next week’s Container World in Santa Clara, CA and looks forward to talking about not just containers but also image-based provisioning, immutable infrastructure, DevOps, and more. Rob Hirschfeld and Shane Gibson are attending and speaking on Wednesday in two sessions (see below).

We are interested in meeting and talking with fellow technologists. Contact us this week so we can set up times to meet at the event.

Rob and Shane are also presenting next Wed the 28th at the Downtown San Jose DevOps Meetup at 6:30pm. The topic is Building Immutable Kubernetes Clusters.

Sessions

Keeping up with patches has never been more critical.  For hardware, that’s… hard.  What if servers were deployed 100% ready to run without any need for remote configuration or access?  What if we were able to roll a complete rebuild of an entire application stack from the BIOS up in minutes?  Those are key concepts behind a cloud and container deployment pattern called “immutable infrastructure.”  It’s called immutable because the servers are deployed from container images produced by CI/CD process and destroyed after use instead of being reconfigured.  It’s a container and cloud pattern that has finally made it to physical.

In this talk, we’ll cover the specific process and its advantages over traditional server configuration. Then we’ll dive deeply into open tools and processes that make it possible to drive immutable containers into your own infrastructure. The talk will include live demos and will discuss process and field challenges that attendees will likely face when they start implementation at home.  We’ll also cover the significant security, time and cost benefits of this approach to make pitching the idea effective.

Join us for a spirited discussion on engineering containers for security, touching on topics such as:

  • The security implications and value of containers on VMs vs. bare metal, and whether one model is significantly more secure than the other
  • What are the implications of one vs. the other for application portability?
  • The role of immutable infrastructure in managing services and software deployments in the context of security
  • Is there an automation strategy that makes the portability question moot, or is it still an issue?
  • Security via policy and automation, and how do we achieve that automation?
  • How does this impact portability? Is it better than, or an alternative to, automation?