Success means putting People and Process above Tech

“I don’t care about the tech – what I really want to hear is how this product fits in our processes and helps our people get more done.”

That was the message my co-founder and I heard from an executive at a major bank last week. For us, it was both déjà vu and a major relief, because we’d just presented at the Cablelabs Summer Showcase about the importance of aligning people, process and technology. The executive was pleased with how RackN had achieved that balance.

It wasn’t always that way: putting usability and simplicity ahead of features is scary.

One of the most humbling startup lessons is that making great technology is not about the technology. Showing a 10x (or 100x!) improvement in provisioning speed misses the real problem for IT operators. Happily, we had some great early users who got excited about the vision for simple tooling that we built around Digital Rebar Provision v3. Equally important was a deeply experienced team who insisted on building great tests, docs and support tooling from day 0.

We are thrilled to watch as new users are able to learn, adopt and grow their use of our open technology with minimal help from RackN.  Even without the 10x performance components RackN has added, they have been able to achieve significant time and automation improvements in their existing operational processes.  That means simpler processes, less IT complexity and more time for solving important problems.

The bank executive wanted the people and process benefits: our job with technology was to enable that first and then get out of the way.  It’s a much harder job than “make it faster” but, ultimately, much more rewarding.

If you’re interested in seeing how we’ve found that balance for bare metal automation, please check out our self-service trial at https://portal.RackN.io or contact us directly at info@rackn.com.

Catch up with the RackN and Digital Rebar Team at OpenStack Summit

We are heading out to Vancouver next week for the OpenStack Summit from May 21 – 24. Rob Hirschfeld, our Co-Founder/CEO, will be available to meet onsite as well as help drive the OpenStack community forward. If you are interested in meeting, please contact me.

Rob has 2 sessions scheduled and we encourage you to attend.

Sessions

Security Considerations for Cloud Edge Computing
Date & Time: May 23 from 11:50 – 12:30pm

Location: Vancouver Convention Centre West – Level 2 – Room 205-207

Panel: Beth Cohen, Verizon (Moderator); Rob Hirschfeld, RackN; Glen McGowan, Dell EMC; Shuquan Huang, 99cloud

Cloud Edge computing use cases range from IoT to VR/AR and any widely distributed application in between. However, taking OpenStack out of the data center requires an entirely new approach to security because there is far less ability to restrict access and the applications often require a shared tenant model.

Avoiding Infrastructure at Rest – The Power of Immutable Infrastructure

Date & Time: May 23 from 3:30 – 4:10pm
Location: Vancouver Convention Centre West – Level 3 – Room 301

Keeping up with patches has never been more critical. For hardware, that’s… hard. What if servers were deployed 100% ready to run without any need for remote configuration or access? What if we were able to roll a complete rebuild of an entire application stack from the BIOS up in minutes? Those are key concepts behind a cloud deployment pattern called “immutable infrastructure,” because the servers are deployed from images produced by a CI/CD process and destroyed after use instead of being reconfigured.

We’ll cover the specific process and its advantages. Then we’ll dive deeply into open tools and processes that make it possible to drive immutable images into your own infrastructure. The talk will include live demos and discuss process and field challenges that attendees will likely face when they start implementation at home. We’ll also cover the significant security, time and cost benefits of this approach to make pitching the idea effective.
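To make the pattern concrete, here is a minimal Python sketch of that cycle: every change produces a new image from the CI/CD pipeline, and nodes are replaced by booting the new image rather than being reconfigured in place. The image IDs, node names and build step are placeholders, not RackN or Digital Rebar code.

```python
# Toy sketch of the immutable infrastructure cycle (illustrative only).
# Instead of patching running servers, every change produces a new image
# and nodes are replaced by re-deploying that image from the BIOS up.
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    image_id: Optional[str] = None   # image the node is currently running

def build_image(source_revision: str) -> str:
    """Stand-in for a CI/CD image build: returns a content-addressed image ID."""
    return "img-" + hashlib.sha256(source_revision.encode()).hexdigest()[:12]

def roll_fleet(nodes: list, image_id: str) -> None:
    """Replace (not reconfigure) each node by booting the new image."""
    for node in nodes:
        node.image_id = image_id     # prior state is discarded, nothing patched in place

if __name__ == "__main__":
    fleet = [Node("rack1-n1"), Node("rack1-n2")]
    for revision in ("commit-abc123", "commit-def456"):   # each change = new image
        image = build_image(revision)
        roll_fleet(fleet, image)
        print(revision, "->", [(n.name, n.image_id) for n in fleet])
```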

Week in Review : The CTO Advisor talks RackN at InteropITX

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Rob Hirschfeld and The CTO Advisor from Interop ITX

During Interop ITX 2018, Keith Townsend had a chance to catch up with RackN CEO Rob Hirschfeld to discuss the company. Learn how RackN orchestrates bare metal workloads to provide cloud capability to the data center.



Meet with RackN Next Week at GlueCon

Look for the RackN team next week in Colorado at GlueCon 2018. As a Bronze sponsor, we have a small booth for attendees to meet with our team and talk DevOps, Bare Metal, Immutability, Edge Computing and other topics. In addition, be sure to attend Rob Hirschfeld’s session on Wednesday.

If you are interested in meeting with us during the event, please contact me to set up a meeting time.

Session

Making Bare Metal Go Cloud Native: The Power of Immutable Deploys
Speaker: Rob Hirschfeld, Co-Founder/CEO RackN
Wednesday May 16 from 2:50 – 3:30pm
Breakout 2 Track: DevOps

The benefits of automated cloud deployments for speed, reliability and security are undeniable. The cornerstone of this approach, immutable deployment, promotes the idea of continuously rolling safe, stable images instead of trying to keep up with managing a fixed pool of machines. If this pattern is so great, shouldn’t we bring it into the physical layer too?

In this talk, we’ll explore the immutable infrastructure pattern and how to use continuous integration and continuous deployment (CI/CD) processes to build and manage server images for any platform. Then we’ll show how to automate deploying these images quickly and reliably with open DevOps tools like Terraform and Digital Rebar. Not only is this approach fast, it’s also more secure and robust for operators.

If you are running physical infrastructure, this talk will change how you think about your job in profound ways.
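As a rough illustration of the deploy side of this pattern, here is a Python sketch that declares a desired image per machine and replaces any machine that drifts from it. The names and the reconcile helper are hypothetical; they mimic the Terraform/Digital Rebar style of workflow only in spirit, not through real tool APIs.

```python
# Illustrative sketch only: a declarative "desired image per machine" map and a
# reconcile step that replaces any machine not running the declared image.
desired_state = {
    "edge-01": "img-2018.05.1",
    "edge-02": "img-2018.05.1",
}

current_state = {
    "edge-01": "img-2018.04.9",   # running an older image -> will be replaced
    "edge-02": "img-2018.05.1",   # already converged -> untouched
}

def reconcile(desired: dict, current: dict) -> list:
    """Return the replacement actions needed to converge on the desired images."""
    actions = []
    for machine, image in desired.items():
        if current.get(machine) != image:
            # Immutable deploy: re-image the machine rather than patching it.
            actions.append(("reprovision", machine, image))
    return actions

if __name__ == "__main__":
    for action in reconcile(desired_state, current_state):
        print(action)
```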

Get Ready, RackN is heading to Interop ITX

Next week our co-founders are headed to Las Vegas for Interop ITX. Both are speaking and are available to meet to discuss our technology, DevOps, etc. If you are interested in meeting, please contact me to set up a time. I would also like to acknowledge Rob Hirschfeld’s role as a Review Board member for the DevOps track.

Rob Hirschfeld is participating in a discussion panel and an individual talk, and Greg Althaus is running a 90-minute hands-on lab on immutable deployments.

TALKS

DevOps vs SRE vs Cloud Native

Speaker: Rob Hirschfeld
Date and Time: Wednesday, May 2 from 1:00 – 1:50 pm
Location: Grand Ballroom G
Track: DevOps

DevOps is under attack because developers don’t want to mess with infrastructure. They will happily own their code all the way into production, but they want to use platforms instead of raw automation. That’s changing the landscape that we understand as DevOps with both architecture concepts (Cloud Native) and process redefinition (SRE).

Our speaker has been creating leading edge infrastructure and cloud automation platforms for over 15 years. His recent work in Kubernetes operations has led to the conclusion that containers and related platforms have changed the way we should be thinking about DevOps and controlling infrastructure. The rise of Site Reliability Engineering (SRE) is part of that redefinition of operations vs development roles in organizations.

In this talk, we’ll explore this trend and discuss concrete ways to cope with the coming changes. We’ll look at the reasons why SRE is attractive and get specific about ways that teams can bootstrap their efforts and keep their DevOps Fu strong.

Immutable Deployments: Taking Physical Infrastructure from Automation to Autonomy

Speaker: Greg Althaus
Date and Time: Wednesday, May 2 from 3:00 – 4:30 pm
Location: Montego C
Track: Infrastructure
Format: Hands-On Session

Physical Infrastructure is the critical underpinning of every data center; however, it’s been very difficult to automate and manage. In this hands-on session, we’ll review the latest in physical layer automation that uses Cloud Native DevOps processes and tooling to make server (re)provisioning fully automatic.

Attendees will be guided through a fully automated provisioning cycle using a mix of technologies including Terraform and Digital Rebar. We’ll use cloud-based physical servers from Packet.net for the test cycle so that attendees get to work with real infrastructure right from the session.

By the end of the session, you’ll be able to set up your own data center provisioning infrastructure, create a pool of deployed servers, and allocate those servers using Infrastructure as Code processes. Advanced students may also be able to create and deploy complete images captured locally.
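As a rough model of this flow (not the actual lab material), the Python sketch below keeps a pool of pre-provisioned machines and allocates them to workloads through code; all of the names are placeholders.

```python
# Toy model of the lab's flow (illustrative only): machines are provisioned into
# a ready pool, then workloads claim machines through an Infrastructure-as-Code
# style request instead of manual assignment.
from collections import deque

class MachinePool:
    def __init__(self, machine_names):
        self.ready = deque(machine_names)   # pre-provisioned, ready-to-run servers
        self.allocated = {}                 # machine -> workload

    def allocate(self, workload: str, count: int):
        """Claim `count` machines for a workload, failing fast if the pool is short."""
        if count > len(self.ready):
            raise RuntimeError("pool exhausted: provision more machines first")
        claimed = [self.ready.popleft() for _ in range(count)]
        for m in claimed:
            self.allocated[m] = workload
        return claimed

    def release(self, machine: str):
        """Return a machine to the pool; in an immutable flow it would be re-imaged first."""
        self.allocated.pop(machine, None)
        self.ready.append(machine)

if __name__ == "__main__":
    pool = MachinePool([f"packet-node-{i}" for i in range(4)])
    print(pool.allocate("k8s-control-plane", 1))
    print(pool.allocate("k8s-workers", 2))
```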

This session has a limited amount of seating, so an RSVP is required. Please RSVP here.

From What to How: Getting Started and Making Progress with DevOps

Speakers: Damon Edwards, Jayne Groll, Rob Hirschfeld, Mandy Hubbard
Date and Time: Thursday, May 3 from 1:00 – 1:50 pm
Location: Grand Ballroom G
Track: DevOps

Organizations are recognizing the benefits of DevOps, but making strides toward implementation and meeting goals may be more difficult than it seems. This panel discussion with multiple DevOps experts and practitioners will explore practices and principles attendees can apply to their own DevOps implementations, as well as metrics to put into place to track success. Track Chair Jayne Groll will moderate the discussion.

 

Week in Review: RackN talks Immutability and DevOps at SRECon Americas

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Immutable Deployments talk at SRECon Americas

Rob Hirschfeld presented at SRECon Americas this week, “Don’t Ever Change! Are Immutable Deployments Really Simpler, Faster and Safer?”

Configuration is fragile because we’re talking about mutating a system. Infrastructure as code means building everything in place. Every one of our systems has to be configured and managed, and that creates a dependency graph. We can lock things down, but we inevitably have to patch our systems.

Immutable infrastructure is another way of saying “pre-configured systems.” Traditional deployment models do configuration after deployment, but it’s better if we can do it beforehand. Immutability is a DevOps pattern: shift configuration to the left of our pipeline; move it from the production stage to the build stage.
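A minimal sketch of what “shifting configuration left” looks like, with placeholder settings that are not taken from the talk: the same configuration either mutates a live system after deploy or is baked into the artifact at build time.

```python
# Sketch of "shifting configuration left" (illustrative only): identical settings
# applied to a live system after deploy (mutable) versus baked into the artifact
# at build time (immutable). Names and settings are placeholders.
BASE_IMAGE = {"os": "linux", "packages": ["sshd"]}
CONFIG = {"ntp_server": "time.example.com", "swappiness": 0}

def mutable_deploy(image: dict, config: dict) -> dict:
    """Traditional model: deploy first, then mutate the running system."""
    system = dict(image)          # system boots from a generic image...
    system.update(config)         # ...and is configured (and re-configured) in place
    return system

def immutable_build(image: dict, config: dict) -> dict:
    """Shift-left model: configuration is part of the build, not of production."""
    return {**image, **config}    # the artifact already contains its configuration

if __name__ == "__main__":
    print(mutable_deploy(BASE_IMAGE, CONFIG))
    print(immutable_build(BASE_IMAGE, CONFIG))  # same result, produced before deploy
```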

Finish Reading Review from Tanya Reilly (@whereistanya)



Week in Review: Data Center 2020 Blog Series from IBM Think 2018

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Data Center of 2020 Blog Series from IBM Think 2018
(Series by Rob Hirschfeld, CEO/Co-Founder, RackN)

When discussing the data center of the future, it’s critical that we start by breaking the concept of the data center as a physical site with guarded walls, raised floors, neat rows of servers and crash-cart-pushing operators. The Data Center of 2020 (DC2020) is a distributed infrastructure composed of many data centers, cloud services and connected devices.

The primary design concept of DC2020 is integrated automation, not the actual infrastructure.

RackN Portal Management Connection for the 10 Minute Demo

In my previous blog, I provided step-by-step directions to install Digital Rebar Provision on a new endpoint and create a new node using Packet.net for users without a local hardware setup. (Demo Tool on GitHub) In this blog, I will introduce the RackN Portal and connect it to the active setup running on Packet.net at the end of the demo process.

Read More



DC2020: Is Exposing Bare Metal Practical or Dangerous?

One of IBM’s major announcements at Think 2018 was Managed Kubernetes on Bare Metal. This new offering combines elements of their existing offerings to expose some additional security, attestation and performance isolation. Bare metal has been a hot topic for cloud service providers recently with AWS adding it to their platform and Oracle using it as their primary IaaS. With these offerings as a backdrop, let’s explore the role of bare metal in the 2020 Data Center (DC2020).

Physical servers (aka bare metal) are the core building block for any data center; however, they are often abstracted out of sight by a virtualization layer such as VMware, KVM, HyperV or many others. These platforms are useful for many reasons. In this post, we’re focused on the fact that they provide a control API for infrastructure that makes it possible to manage compute, storage and network requests. Yet the abstraction comes at a price in cost, complexity and performance.

The historical lack of good API control has made bare metal less attractive, but that is changing quickly due to two forces.

These two forces are Container Platforms and Bare Metal as a Service (BMaaS) (disclosure: RackN offers a private BMaaS platform called Digital Rebar). Container Platforms such as Kubernetes provide an application service abstraction for data center consumers that eliminates the need for users to worry about traditional infrastructure concerns. That means most users no longer rely on APIs for compute, network or storage, allowing the platform to handle those issues. On the other side, BMaaS brings VM-style infrastructure-level APIs to the actual physical layer of the data center, giving users who care about compute, network or storage the ability to work without VMs.
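To illustrate the two abstraction levels, here is a hypothetical Python sketch of the request shapes involved; neither structure reflects a real Kubernetes or Digital Rebar API.

```python
# Illustrative contrast of the two abstraction levels; both request shapes are
# hypothetical and exist only to show who cares about which details.
from dataclasses import dataclass

@dataclass
class ContainerRequest:
    # Application-level ask: the platform decides where and how it runs.
    image: str
    replicas: int
    cpu_millicores: int

@dataclass
class BareMetalRequest:
    # Infrastructure-level ask: the caller cares about the physical machine itself.
    machine_profile: str    # e.g. a profile describing CPU/RAM/NIC layout
    boot_image: str         # image written to the machine, no hypervisor in between
    count: int

if __name__ == "__main__":
    app = ContainerRequest(image="registry.example/app:1.2", replicas=3, cpu_millicores=500)
    metal = BareMetalRequest(machine_profile="gpu-node", boot_image="img-2020.01", count=2)
    print(app)
    print(metal)
```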

The combination of containers and bare metal APIs has the potential to squeeze virtualization into a limited role.

The IBM bare metal Kubernetes announcement illustrates both of these forces working together.  Users of the managed Kubernetes service are working through the container abstraction interface and really don’t worry about the infrastructure; however, IBM is able to leverage their internal bare metal APIs to offer enhanced features to those users without changing the service offering.  These benefits include security (IBM White Paper on Security), isolation, performance and (eventually) access to metal features like GPUs. While the IBM offering still includes VMs as an option, it is easy to anticipate that becoming less attractive for all but smaller clusters.

The impact for DC2020 is that operators need to rethink how they rely on virtualization as a ubiquitous abstraction. As more applications rely on container service abstractions, the platforms will grow in size and virtualization will provide less value. With the advent of better control of the bare metal infrastructure, operators have real options to get deep control without adding virtualization as a requirement.

Shifting to new platforms creates opportunities to streamline operations in DC2020.

Even with virtualization and containers, having better control of the bare metal is a critical addition to data center operations.  The ideal data center has automation and control APIs for every possible component from the metal up.


DC2020: Skeptics Guide to Blockchain in the Data Center

At Think 2018, Machine Learning and Blockchain technologies are beyond pervasive; they are assumed to be beneficial to ROI in every situation. That type of hype begs for closer review. In this post, we’ll look at a potentially real use of blockchain for operations.

There is so much noise about blockchain that it can be difficult to find a starting point. I’m leaving background reading as an exercise for the reader; instead, I want to focus on how blockchain creates a distributed ledger with shared trust. That’s a lot of buzzwords! Basically, we’re talking about a system where nodes share data and use consensus with their peers to determine if the information is trustworthy.

The key concept in blockchain is moving from a central authority to a distributed authority.

In the data center, administrative trust is essential. The premises, networks, and access credentials all rely on the idea that we have a centralized authoritative group. Even PKI, which is designed for decentralized trust, relies on a centralized trust to sign keys. Looking objectively at the bundle of passwords, certificates, keys and isolation layers, there are gaping risks in this model. It only takes getting the right access to flip administrative control from an asset into a liability.

Blockchain allows us to decentralize trust in the data center by requiring systems to collaboratively validate administrative instructions.

In this model, we’d still have administrative controls and management; however, the nodes would be able to validate configuration changes with their peers or other administrative sources. For example, an out-of-process change (potential hack?) on a single node would be confirmed via consensus with other nodes instead of automatically trusting the source. The body of nodes protects against a bad administrative request. It also allows operators to quickly propagate configurations peer-to-peer instead of relying on a central hub-and-spoke model.

This is even more powerful if configuration is composited from multiple sources in a pipeline. In a multi-author system, each contributor is involved in verifying changes to the whole configuration. This ensures that downstream insertions are both communicated and accepted by upstream steps. This works because blockchain is a distributed ledger: changes made to the chain are passed back to all parties. Just like in a decentralized supply chain model, this ensures both validation and transparency.
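Here is a minimal sketch of the idea, assuming a toy majority-vote policy rather than a real blockchain protocol: configuration changes are hash-chained into a ledger and accepted only when a majority of peer validators approves them.

```python
# Minimal sketch (not a real blockchain): configuration changes are hash-chained
# into a ledger and accepted only via peer consensus, instead of trusting a
# single administrative source.
import hashlib
import json

class ConfigLedger:
    def __init__(self, validators):
        self.validators = validators       # peer checks, e.g. policy functions
        self.chain = []                    # accepted, hash-linked entries

    def _hash(self, entry: dict, prev_hash: str) -> str:
        payload = json.dumps(entry, sort_keys=True) + prev_hash
        return hashlib.sha256(payload.encode()).hexdigest()

    def propose(self, change: dict) -> bool:
        """Accept a change only if a majority of peers approves it."""
        votes = sum(1 for validate in self.validators if validate(change))
        if votes <= len(self.validators) // 2:
            return False                   # out-of-process change rejected by consensus
        prev_hash = self.chain[-1]["hash"] if self.chain else "genesis"
        self.chain.append({"change": change, "prev": prev_hash,
                           "hash": self._hash(change, prev_hash)})
        return True

if __name__ == "__main__":
    # Toy policy: peers only approve changes that carry a signed-off flag.
    peers = [lambda c: c.get("signed_off", False) for _ in range(5)]
    ledger = ConfigLedger(peers)
    print(ledger.propose({"sshd": "disable_root", "signed_off": True}))   # accepted
    print(ledger.propose({"sshd": "enable_root"}))                        # rejected
```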

Blockchain’s ability to provide both horizontal and vertical integrity for operations is an intriguing possibility.

I’m interested in hearing your thoughts about this application for blockchain. From a RackN and Digital Rebar perspective, these capabilities are well aligned with our composable approach to configuration. We’d be happy to talk with operators who want to look more deeply into this type of integration.

DC2020: Mono-clouds are easier! Why do Hybrid?

Background: This post was inspired by a multi-cloud session at IBM Think 2018, which I am attending as a guest of IBM. Providing hybrid solutions is a priority for IBM, and its customers are clearly looking for multi-cloud options. In this way, IBM has made a choice to support competitive platforms. This post explores why they would do that.

There is considerable angst and hype over the terms multi-cloud and hybrid-cloud. While it would be much simpler if companies could silo into a single platform, innovation and economics require a multi-party approach. The problem is NOT that we want to have choice and multiple suppliers. The problem is that we are moving so quickly that there is minimal interoperability and minimal effort to create interoperability.

To drive interoperability, we need a strong commercial incentive to create a neutral ecosystem.

Even something with a clear ANSI standard like SQL has interoperability challenges. It also seems like the software industry has given up on standards in favor of APIs and rapid innovation. The reality on the ground is that technology is fundamentally heterogeneous and changing. For this reason, mono-anything is a myth and hybrid is really status quo.

If we accept multi-* as a starting point, then we need to invest in portability and avoid platform assumptions when we build automation. Good design is to assume change at multiple points in your stack. Automation itself is a key requirement because it enables rapid iterative build, test and deploy cycles. It is not enough to automate for day 1; the key to working multi-* infrastructure is a continuous deployment pipeline.

Pipelines provide insurance for hybrid infrastructure by exposing issues quickly before they accumulate technical debt.

That means the utility of tools like Terraform, Ansible or Docker is limited to how often you exercise them. Ideally, we’d build abstraction automation layers above these primitives; however, this has proven very difficult in practice. The degrees of variation between environments and pace of innovation make it impossible to standardize without becoming very restrictive. This may be possible for a single company but is not practical for a vendor trying to support many customers with a single platform.
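As a sketch of that “exercise it often” point, the loop below runs the same deploy-and-verify steps against every target environment on every change so drift surfaces immediately; the environment names and step functions are placeholders, not specific Terraform or Ansible invocations.

```python
# Illustrative pipeline sketch: exercise every target environment on every change
# so breakage shows up right away instead of accumulating as technical debt.
ENVIRONMENTS = ["aws", "on-prem-metal", "edge-site"]

def deploy(env: str, revision: str) -> None:
    print(f"[{env}] deploying {revision}")       # stand-in for real tooling

def verify(env: str) -> bool:
    print(f"[{env}] running smoke tests")        # stand-in for real checks
    return True

def pipeline(revision: str) -> None:
    """Run deploy-and-verify against every environment; fail fast on the first break."""
    for env in ENVIRONMENTS:
        deploy(env, revision)
        if not verify(env):
            raise SystemExit(f"pipeline failed in {env}: fix before it becomes debt")

if __name__ == "__main__":
    pipeline("commit-abc123")
```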

This means that hybrid, while required in the market, carries an integration tax that needs to be considered.

My objective for discussing Data Center 2020 topics is to find ways to lower that tax and improve the outcome. I’m interested in hearing your opinion about this challenge and if you’ve found ways to solve it.

Counterpoint Addendum: if you are in a position to avoid multi-* deployments (e.g. a start-up), then you should consider that option. There is measurable overhead in heterogeneous automation; however, I’ve found the tipping point away from a mono-stack can be surprisingly low, and committing to a vertical stack does make applications less resilient to future innovation.