Week in Review: Data Center 2020 Blog Series from IBM Think 2018

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Data Center of 2020 Blog Series from IBM Think 2018
(Series by Rob Hirschfeld, CEO/Co-Founder, RackN)

When discussing the data center of the future, it’s critical that we start by breaking the concept of the data center as a physical site with guarded walls, raised floors, neat rows of servers and crash cart pushing operators. The Data Center of 2020 (DC2020) is a distributed infrastructure comprised of many data centers, cloud services and connected devices.

The primary design concept of DC2020 is integrated automation, not the physical infrastructure itself.

RackN Portal Management Connection for the 10 Minute Demo

In my previous blog, I provided step by step directions to install Digital Rebar Provision on a new endpoint and create a new node using Packet.net for users without a local hardware setup. (Demo Tool on GitHub) In this blog, I will introduce the RackN Portal and connect it to the active setup running on Packet.net at the end of the demo process.

Read More


News

RackN

Digital Rebar Community

L8ist Sh9y Podcast

Social Media

DC2020: Is Exposing Bare Metal Practical or Dangerous?

One of IBM’s major announcements at Think 2018 was Managed Kubernetes on Bare Metal. This new offering combines elements of their existing offerings to expose some additional security, attestation and performance isolation. Bare metal has been a hot topic for cloud service providers recently with AWS adding it to their platform and Oracle using it as their primary IaaS. With these offerings as a backdrop, let’s explore the role of bare metal in the 2020 Data Center (DC2020).

Physical servers (aka bare metal) are the core building block for any data center; however, they are often abstracted out of sight by a virtualization layer such as VMware, KVM, HyperV or many others. These platforms are useful for many reasons. In this post, we’re focused on the fact that they provide a control API for infrastructure that makes it possible to manage compute, storage and network requests. Yet the abstraction comes at a price in cost, complexity and performance.

The historical lack of good API control has made bare metal less attractive, but that is changing quickly due to two forces.

These two forces are Container Platforms and Bare Metal as a Service or BMaaS (disclosure: RackN offers a private BMaaS platform called Digital Rebar). Container Platforms such as Kubernetes provide an application service abstraction for data center consumers that eliminates the need for users to worry about traditional infrastructure concerns. That means most users no longer rely on APIs for compute, network or storage, allowing the platform to handle those issues. On the other side, BMaaS provides infrastructure-level APIs for the actual physical layer of the data center, giving users who do care about compute, network or storage the ability to work without VMs.

The combination of containers and bare metal APIs has the potential to squeeze virtualization into a limited role.

The IBM bare metal Kubernetes announcement illustrates both of these forces working together.  Users of the managed Kubernetes service are working through the container abstraction interface and really don’t worry about the infrastructure; however, IBM is able to leverage their internal bare metal APIs to offer enhanced features to those users without changing the service offering.  These benefits include security (IBM White Paper on Security), isolation, performance and (eventually) access to metal features like GPUs. While the IBM offering still includes VMs as an option, it is easy to anticipate that becoming less attractive for all but smaller clusters.

The impact for DC2020 is that operators need to rethink how they rely on virtualization as a ubiquitous abstraction.  As more applications rely on container service abstractions the platforms will grow in size and virtualization will provide less value.  With the advent of better control of the bare metal infrastructure, operators have real options to get deep control without adding virtualization as a requirement.

Shifting to new platforms creates opportunities to streamline operations in DC2020.

Even with virtualization and containers, having better control of the bare metal is a critical addition to data center operations.  The ideal data center has automation and control APIs for every possible component from the metal up.

Learn more about the open source Digital Rebar community:

Podcast – Oliver Gould on Service Mesh, Containers, and Edge

Joining us this week is Oliver Gould, CTO of Buoyant, who provides a service mesh abstraction view of microservices and Kubernetes. Oliver and Rob also look at how applications are managed at the edge and highlight the future roadmap for Conduit.

Highlights

  • Defining microservices and Kubernetes from Buoyant viewpoint
  • Service mesh abstractions at a request level (load balance, get, put, …)
  • Conduit overview – client-side load balancing
  • Service mesh tool comparisons
  • Edge Computing discussion from service mesh view

Topic                                                                           Time (Minutes.Seconds)

Introduction                                                                0.00 – 1.39
Define Microservices                                                1.39 – 5.25
Define Kubernetes                                                     5.25 – 10.23 (Memory as a Service)
Service Mesh Abstractions                                       10.23 – 12.37 (L5 or L7)
Conduit Overview                                                      12.37 – 18.20 (Sidecar Container)
When do I need Service Mesh?                              18.20 – 19.55 (Complex Debugging)
Service Mesh Comparisons                                     19.55 – 22.31
Deployment Times / V2 to V3 for DRP                  22.31 – 25.13 (Kubernetes into Production)
Edge Computing                                                       25.13 – 27.04 (Define)
App in Cloud + Edge Device?                                  27.04 – 31.10 (POP = Point of Presence)
Containers + Serverless                                            31.10 – 34.30 (Proxy in Browser)
Future Roadmap                                                       34.30 – 37.06 (Conduit.io)
Wrap Up                                                                     37.06 – END

Podcast Guest:  Oliver Gould, CTO Buoyant

Oliver Gould is the CTO of Buoyant, where he leads open source development efforts. Previously, he was a staff infrastructure engineer at Twitter, where he was the tech lead of the Observability, Traffic, and Configuration and Coordination teams. Oliver is the creator of linkerd and a core contributor to Finagle, the high-volume RPC library used at Twitter, Pinterest, SoundCloud, and many other companies.

December 8 – Weekly Recap Of Digital Rebar, RackN, And Industry News

Welcome to the weekly post of the RackN blog recap of all things Digital Rebar, RackN, Edge Computing, and DevOps. If you have any ideas for this recap or would like to include content please contact us at info@rackn.com or tweet RackN (@rackngo)

Items of the Week

Industry News

Maybe we’re just too lazy to put in the work to become DevOps-minded, though, to the industry’s credit, the desire to “get DevOps” is real. Roughly 10 years after DevOps was coined as a thing, enterprises are madly scrambling to embrace it, as survey data uncovers. The problem is that too often we think it’s about hiring a few “DevOps engineers” and setting them free to… DevOp… or whatever.

Many industrial applications have been developed to utilize IoT devices and the data they produce.  They generally use cloud hosting, analytics and edge computing technology, often provided and connected via an IoT Platform – a set of tools and run-time systems hosted on the cloud that enable the development and deployment of a “complete IoT solution.”

With the advent of KubeCon and CloudNativeCon in Austin, Texas, on Wednesday, assorted enterprise vendors have chosen this week to flog their latest devops-oriented wares, before the impending holiday torpor leaves IT folks too distracted, weary or inebriated to care.

Digital Rebar

RackN

Like other Gartner events, the Infrastructure and Operations (IO) show is all about enterprises maintaining systems.  There are plenty of hype chasing sessions, but the vibe is distinctly around working systems and practical implementations.  Think: sports coats not t-shirts.  In that way, it’s less breathless and wild-eyed than something like KubeCon (which is busy celebrating a bumper crop of 1.0 projects).  The very essence of this show is to project an aura of calm IT stewardship.

Join this webinar to learn more about the RackN Kubernetes installation integration using community tools like Kubeadm demonstrated at this week’s KubeCon event (Slides) in Austin, TX. Co-Founders Rob Hirschfeld and Greg Althaus of RackN will discuss this fast and simple approach to operating Kubernetes. Of course, we’ll also demonstrate the technology installing Kubernetes following the immutable infrastructure model highlighting the automated provisioning technology built on the open source Digital Rebar project.

Dec 14, 2017 1:30 PM CST

We are actively looking for feedback from customers and technologists before general availability of both RackN and the Terraform plug-in. It takes just a few minutes to get started and we offer direct engineering engagement on our community slack channel. Get started now by providing your email on our registration page so we can provide you all the necessary links.

L8ist Sh9y Podcast

Podcast Guest: Keith Townsend, The CTO Advisor

UPCOMING EVENTS – None until 2018

Webinar: Immutable Kubernetes with RackN Provisioning

Watch this webinar to learn more about the RackN Kubernetes installation integration using community tools like Kubeadm demonstrated at this week’s KubeCon event (Slides) in Austin, TX. Co-Founders Rob Hirschfeld and Greg Althaus of RackN will discuss this fast and simple approach to operating Kubernetes. Of course, we’ll also demonstrate the technology installing Kubernetes following the immutable infrastructure model highlighting the automated provisioning technology built on the open source Digital Rebar project.

After this webinar, you’ll be prepared to attempt this install strategy on your own.

Why attend this webinar?
* Benefits of the Immutable Infrastructure provisioning model
* Solve installation issues with Kubernetes using community Kubeadm tooling
* Overview of the RackN + Digital Rebar automated provisioning solution

Speakers:
Rob Hirschfeld : CEO/Co-Founder, RackN
Greg Althaus : CTO/Co-Founder, RackN

Day & Time:

Dec 14, 2017 1:30 PM CST

Watch the Webinar on YouTube

RackN and Digital Rebar All Set For KubeCon + CloudNativeCon

The RackN and Digital Rebar team is finalizing plans for next week’s KubeCon + CloudNativeCon in Austin, TX from Dec 6 – 8, 2017. Rob Hirschfeld is hosting two sessions and we will have a booth in the sponsor showcase. All the info you need is below and we look forward to seeing you in Austin.

SESSIONS

SIG Cluster-Ops Update hosted by Rob Hirschfeld
Event Link: http://sched.co/CU8t
Thursday December 7 from 2:00 – 2:35pm

Operators of Kubernetes, Unite! SIG Cluster Ops was formed nearly two years ago with the goal of being an installer neutral place for operations to collaborate. Frankly, we’ve had challenges getting critical mass because operators cluster around their installer groups. This session will discuss re-chartering as a Working Group and review the mission of the group. We’ll also review plans for the next 6 months. If you’re hoping Kubernetes can limit the installer explosion then this session is a good one for you too.

Zero-Configuration Pattern on Kubernetes on Bare Metal by Rob Hirschfeld
Event Link: http://sched.co/CU8h
Friday December 8 from 11:55 – 12:30pm

In recent releases, we’ve enabled node admission and configuration APIs that eliminate configuration requirements for Kubernetes workers. This allows cluster operators to add and remove nodes from clusters without a configuration management tool driving the process. This fully automated node management behavior allows physical data centers to be much more cloud-like and lights-out.

In this session, we’ll run this process as a demo and decompose the various parts that must work together for success. We’ll discuss the specific APIs and how to implement them in a coordinated way that ensures node security and minimizes workload disruption. We’ll also discuss how to improve node security by using trusted platform modules (TPM). By the end of the session, operators will be able to duplicate the steps on their own to learn the process.

While we focus on bare metal infrastructure for this session, the lessons learned are equally usable on cloud infrastructure.

SPONSOR SHOWCASE

Be sure to visit the RackN booth and talk Digital Rebar, Bare Metal, Infrastructure, DevOps, etc.

Hours:

  • Wednesday, December 6 from 10:30 – 8:30pm
  • Thursday, December 7 from 10:30 – 5:30pm
  • Friday, December 8 from 10:30 – 4:00pm

SOCIAL MEDIA

Be sure to follow @rackngo and @digitalrebar on Twitter during the event as we highlight all our activities.

Podcast with Krishnan Subramanian on Edge, the Kubernetes Ecosystem & the Composable Enterprise

In this week’s L8ist Sh9y podcast Krishnan Subramanian, Founder and Chief Research Advisor of Rishidot Research talks about Edge Computing, the Kubernetes Ecosystem and the Composable Enterprise. Key highlights:

  • “Multi-Cloud is the foundation of Modern Enterprise” – Krishnan
  • Kubernetes ecosystem and the possibility that Serverless could replace it
  • IT innovation requires a composable and layered approach; without it, IT will find itself trapped in hard-wired infrastructure, unable to move forward

Topic                                 Time (Minutes.Seconds)

Introduction                                  0.0 – 1.28
Edge Computing                         1.28 – 4.25
What is the Edge?                       4.25 – 6.06
Use Cases Not For Cloud           6.06 – 8.50 (Networking and 5G)
Distributed Scale of Edge          8.50 – 10.03
Multi-Cloud Progress                 10.03 – 12.07
Supporting Diff Infra Types?      12.07 – 16.40
Multi-Cloud & Kubernetes         16.40 – 20.54
Kubernetes Ecosystem              20.54 – 28.00 (Serverless can replace)
Ecosystem Gaps                          28.00 – 29.44
Best of Breed IT                           29.44 – 32.25 (Composable Enterprise)
IT Moves to Smaller Units         32.25 – 35.30
Back to Edge                                35.30 – 41.45
Conclusion                                   41.45 – 42.35

Podcast Guest: Krishnan Subramanian
Founder and Chief Research Advisor, Infrastructure, Application Platforms and DevOps

Krishnan Subramanian (a.k.a Krish) is a well-known expert in the field of cloud computing. He is the founder and Chief Research Advisor at Rishidot Research, a boutique analyst firm focused on Modern Enterprise. Their open data-based research helps enterprise decision makers on their enterprise modernization strategy. His Modern Enterprise model helps enterprises innovate rapidly by transforming their IT as the core part of the innovation team. He was a speaker and panelist at various cloud computing conferences and he was also an advisor for Glue conference in 2011 and Cloud Connect Santa Clara in 2012. He has also organized industry-leading conferences like Deploycon and Cloud2020. He is also an advisor to cloud computing startups. He can be reached on Twitter @krishnan.

Putting a little ooooh! in orchestration

The RackN team is proud of saying that we left orchestration out when we migrated from Digital Rebar v2 to v3. That would mean more if anyone actually agreed on what orchestration means… In our case, I think we can be pretty specific: Digital Rebar v3 does not manage work across multiple nodes. At this point, we’re emphatic about it because cross-machine actions add a lot of complexity and require application awareness that quickly blossoms into operational woe, torture and frustration (aka WTF).

That’s why Digital Rebar focuses on doing one simple yet powerful job: multi-boot workflow on a single machine.

In the latest releases (v3.2+), we’ve delivered an easy to understand stage and task running system that is simple to extend, transparent in operation and extremely fast. There’s no special language (DSL) to learn or database to master. And if you need those things, then we encourage you to use the excellent options from Chef, Puppet, SaltStack, Ansible and others. This is because our primary design focus is planning work over multiple boots and operating system environments instead of between machines. Digital Rebar shines when you need 3+ reboots to automatically scrub, burn-in, inventory, install and then post-configure a machine.
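
The multi-boot planning idea can be sketched as a small state machine. The Python below is a conceptual model only; the stage names, fields and `run_workflow` function are invented for illustration and are not Digital Rebar’s actual API:

```python
# Conceptual model of a single-machine, multi-boot workflow: each stage
# declares the boot environment it needs, and the runner "reboots" the
# machine whenever the next stage requires a different environment.
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    bootenv: str                 # OS environment the stage's tasks run in
    tasks: list = field(default_factory=list)

@dataclass
class Machine:
    name: str
    bootenv: str = "unknown"
    log: list = field(default_factory=list)

def run_workflow(machine, stages):
    """Run each stage's tasks, switching boot environments between stages."""
    for stage in stages:
        if machine.bootenv != stage.bootenv:
            machine.log.append(f"reboot -> {stage.bootenv}")
            machine.bootenv = stage.bootenv
        for task in stage.tasks:
            machine.log.append(f"{stage.name}:{task}")
    return machine

stages = [
    Stage("discover", bootenv="sledgehammer", tasks=["inventory", "burn-in"]),
    Stage("install",  bootenv="ubuntu-installer", tasks=["partition", "install-os"]),
    Stage("finalize", bootenv="local", tasks=["post-configure"]),
]

m = run_workflow(Machine("node-01"), stages)
print(m.log)
```

Note how the three boot environment changes fall out of the stage declarations themselves; this is the “3+ reboots” scrub, install, and post-configure pattern described above, planned over boots rather than between machines.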

But we may have crossed an orchestration line with our new cluster token capability.

Starting in the v3.4 release, automation authors will be able to use a shared profile to coordinate work between multiple machines. This is not a Digital Rebar feature per se – it’s a data pattern that leverages Digital Rebar locking, profiles and parameters to share information between machines. This allows scripts to elect leaders, create authoritative information (like tokens) and synchronize actions. The basic mechanism is simple: we create a shared machine profile that includes a token that allows editing the profile. Normally, machines can only edit themselves so we have to explicitly enable editing profiles with a special use token. With this capability, all the machines assigned to the profile can update the profile (and only that profile). The profile becomes an atomic, secure shared configuration space.
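
As a thought experiment, the shared-profile mechanism can be modeled in a few lines of Python. The class and method names below are invented for illustration, not the Digital Rebar API; the two ideas being demonstrated are the token gate (only holders of the profile’s edit token may write) and atomic first-writer-wins updates:

```python
# Toy model of a profile as an atomic, secure shared configuration space.
import threading

class SharedProfile:
    def __init__(self, edit_token):
        self._lock = threading.Lock()   # stands in for server-side locking
        self._edit_token = edit_token
        self.params = {}

    def update(self, token, key, value, only_if_absent=False):
        """Atomically set a parameter; refuse writes without the edit token."""
        if token != self._edit_token:
            raise PermissionError("machines may normally only edit themselves")
        with self._lock:
            if only_if_absent and key in self.params:
                return False            # another machine got there first
            self.params[key] = value
            return True

profile = SharedProfile(edit_token="cluster-secret")
# First-in wins; the second conditional write is rejected.
first = profile.update("cluster-secret", "leader", "node-01", only_if_absent=True)
second = profile.update("cluster-secret", "leader", "node-02", only_if_absent=True)
print(first, second, profile.params["leader"])
```

The `only_if_absent` write is what lets scripts elect leaders and create authoritative information like tokens without any external coordinator.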

For example, when building a Kubernetes cluster using Kubeadm, the installation script needs to take different actions depending on which node is first. The first node needs to initialize the cluster master, generate a token and share its IP address. The subsequent nodes must wait until the master is initialized and then join using the token. The installation pattern is basically a first-in leader election while all others wait for the leader. There’s no need for more complex sequencing because the real install “orchestration” is done after the join when Kubernetes starts to configure the nodes.
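
The first-in election reads roughly like the Python sketch below. This is a simulation of the pattern, not real install code; an actual install would shell out to kubeadm init and kubeadm join, and the shared dictionary stands in for the shared cluster profile:

```python
# First-in leader election: nodes race to claim the master role; the
# winner "initializes" and publishes a join token plus its address, and
# the rest wait for the token to appear, then join.
import threading
import time
import secrets

shared = {}                      # stands in for the shared cluster profile
claim_lock = threading.Lock()
results = {}

def install_node(name, addr):
    with claim_lock:             # atomic first-in claim
        if "master" not in shared:
            shared["master"] = addr
            shared["token"] = secrets.token_hex(8)  # kubeadm-style join token
            results[name] = "init"                  # would run: kubeadm init
            return
    while "token" not in shared:  # wait until the master has published
        time.sleep(0.01)
    results[name] = f"join {shared['master']}"      # would run: kubeadm join

threads = [threading.Thread(target=install_node, args=(f"node-{i}", f"10.0.0.{i}"))
           for i in range(1, 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

Exactly one node wins the claim and runs the init path; everything after the join is handled by Kubernetes itself, which is why no heavier sequencing is needed.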

Our experience is that recent cloud native systems are all capable of this type of shotgun start where all the nodes start in parallel with the minimal bootstrap coordination that Digital Rebar can provide.

Individually, the incremental features needed to enable cluster building were small additions to Digital Rebar. Together, they provide a simple yet powerful management underlay. At RackN, we believe that simple beats complex every day and we’re fighting hard to make sure operations stays that way.

Sirens of Open Infrastructure beckon to OpenStack Community

OpenStack is a real platform doing real work for real users.  So why does OpenStack have a reputation for not working?  It falls into the lack of core-focus paradox: being too much to too many undermines your ability to do something well.  In this case, we keep conflating the community and the code.

I have a long history with the project but have been pretty much outside of it (yay, Kubernetes!) for the last 18 months.  That perspective helps me feel like I’m getting closer to the answer after spending a few days with the community at the latest OpenStack Summit in Sydney, Australia.  While I love to think about the why, what the leaders are doing about it is very interesting too.

Fundamentally, OpenStack’s problem is that infrastructure automation is too hard and big to be solved within a single effort.  

It’s so big that any workable solution will fail for a sizable number of hopeful operators.  That does not keep people from the false aspiration that OpenStack code will perfectly fit their needs (especially if they are unwilling to trim their requirements).

But the problem is not inflated expectations for OpenStack VM IaaS code, it’s that we keep feeding them.  I have been a long time champion for a small core with a clear ecosystem boundary.  When OpenStack code claims support for other use cases, it invites disappointment and frustration.

So why is the OpenStack Foundation moving to expand its scope as an Open Infrastructure community with additional focus areas?  It’s simple: the community is asking for it.

Within the vast space of infrastructure automation, there are clusters of aligned interest.  These clusters are sufficiently narrow that they can collaborate on shared technologies and practices.  They also have a partial overlap (Venn) with adjacencies where OpenStack is already present.  There is a strong economic and social drive for members in these overlapping communities to bridge together instead of creating new disparate groups.  Having the OpenStack Foundation organize these efforts is a natural and expected function.

The danger of this expansion comes from carrying the expectation that the technology (code) will also be carried into the adjacencies.  That’s my exact rationale for why the original VM IaaS needs to be smaller.  The wealth of non-core projects crosses clusters of interest.  Instead of allowing these clusters to optimize their needs around shared interests, users get the impression that they must broadly adopt unneeded or poorly fit components.  The idea of “competitive” projects should be reframed because they may overlap in function but not in use-case fit.

It’s long past time to give up expectations that OpenStack is a “one-stop-shop” of infrastructure automation.  In my opinion, it undermines the community mission by excluding adjacencies.

I believe that OpenStack must work to embrace its role as an open infrastructure community; however, it must also do the hard work to create welcoming space for adjacencies.  These adjacencies will compete with existing projects currently under the OpenStack code tent.  The community needs to embrace that the hard work done so far may simply be sunk cost for new use cases. 

It’s the OpenStack community and the experience, not the code, that creates long term value.

November 10 – Weekly Recap of all things Digital Rebar and RackN

Welcome to the weekly post of the RackN blog recap of all things Digital Rebar, RackN, SRE, and DevOps. If you have any ideas for this recap or would like to include content please contact us at info@rackn.com or tweet Rob (@zehicle) or RackN (@rackngo)

Items of the Week

Digital Rebar

Digital Rebar Releases V3.2 – Stage Workflow

In v3.2, Digital Rebar continues to refine the groundbreaking provisioning workflow introduced in v3.1. Updates to the workflow make it easier to consume by external systems like Terraform. We’ve also improved the consistency and performance of both the content and service.

The release of workflow and the addition of inventory means that Digital Rebar v3 effectively replaces all key functions of v2 with a significantly smaller footprint, minimal learning curve and improved performance. One v2 major feature, multi-node coordination, is not on any roadmap for v3 because we believe those use cases are well served by upstack integrations like Terraform and Ansible. Full Post

RackN

Joining this week’s L8ist Sh9y Podcast is Zach Smith, CEO of Packet and long-time champion of bare metal hardware. Rob Hirschfeld and Zach discuss the trends in bare metal, the impact of AWS changing the way developers view infrastructure, and issues between networking and server groups in IT organizations. (Blog with Topics and Times)

OpenStack Summit Sydney

Rob Hirschfeld and Ihor Dvoretskyi presented “Building Kubernetes based highly Customizable Environments on OpenStack with Kubespray.” Full Post

https://www.slideshare.net/RackN/slideshelf

UPCOMING EVENTS

Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events please email info@rackn.com

If you are attending any of these events please reach out to Rob Hirschfeld to setup time to learn more about our solutions or discuss the latest industry trends.

OTHER NEWSLETTERS