Podcast – Nick Alesandro on Blockchain, Edge, and Cloud, Oh My!

Joining us this week is Nick Alesandro, VP of Product at Overclock Labs, creators of the Akash Network.

About Overclock Labs

  • We believe the cloud should be distributed and decentralized so that no one provider can control the internet.
  • We believe the cloud should be globally fault-tolerant to avoid any single points of failure.
  • We believe the cloud should be simple, automated, and accessible for all.

About Akash Network

Decentralized protocol for provisioning, scaling, and securing cloud workloads: the world’s only on-chain auction marketplace for off-chain container deployments.
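To make that auction model concrete, here is a minimal sketch of an order/bid/lease matching flow like the one described in the episode (tenants post orders on-chain, providers respond with bids, and the lowest qualifying bid wins). All names and structures below are hypothetical illustrations, not Akash’s actual on-chain API.

```python
from dataclasses import dataclass

@dataclass
class Order:
    """A tenant's request for off-chain container capacity, posted on-chain."""
    order_id: int
    cpu_units: int
    memory_mb: int
    max_price: float  # highest unit price the tenant will accept

@dataclass
class Bid:
    """A provider's offer to host the workload, made in response to an order."""
    order_id: int
    provider: str
    price: float

def match_order(order: Order, bids: list[Bid]) -> Bid | None:
    """Reverse auction: the lowest-priced qualifying bid wins the lease."""
    qualifying = [b for b in bids
                  if b.order_id == order.order_id and b.price <= order.max_price]
    return min(qualifying, key=lambda b: b.price, default=None)

# Providers never publish a catalog of what they offer; they simply listen
# for orders and respond with bids (discussed at ~10:26 in the episode).
order = Order(order_id=1, cpu_units=2, memory_mb=4096, max_price=0.05)
bids = [Bid(1, "provider-a", 0.04), Bid(1, "provider-b", 0.03)]
print(match_order(order, bids))  # Bid(order_id=1, provider='provider-b', price=0.03)
```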

Highlights

  • 0 min 37 sec: Introduction of Guest
  • 2 min 28 sec: High level description of the technology
    • Cloud centralization is a problem; it needs to be decentralized
    • Contribute 100% of a machine, or just a portion of it, to the cloud network via a special CoreOS-based system
  • 5 min 13 sec: Akash Network
    • Contribute your idle servers to a pool of available capacity in a cloud managed by Overclock Labs
    • Two components: a blockchain for the marketplace and a deployment platform
  • 6 min 52 sec: How does marketplace work; who wants to use this network?
    • Focus is on developers who want to use these systems
    • #1 reason: cheaper than standard public clouds; #2 reason: it’s distributed globally rather than at fixed, known sites
  • 10 min 26 sec: Blockchain as a decentralized ledger to avoid a central store
    • Providers join the infrastructure without formal registration in a single database; they only listen for bids rather than publishing what they offer
    • History of the infrastructure is public for customers to evaluate
  • 12 min 31 sec: How do I know where I am pushing my workloads? Can I trust the infrastructure provider?
    • Issues arise when you receive workloads whose contents are unknown to you
  • 16 min 38 sec: What do I do to add a rack of servers into Akash network?
    • What you need to do vs what you should do
    • Management Server, Network Isolation, Monitoring
    • Seasonal Load Model ~ Electric Grid Analogy
  • 20 min 23 sec: How do you identify geography and latency to customers?
  • 21 min 23 sec: How do I ensure I am not getting dropped or pulled into a bidding trap? (See the failover sketch after this list.)
    • If a workload goes down, a bid is automatically placed elsewhere in the network
    • Conditions can be set in case a long-running workload is terminated by its host
    • Do you expect providers to monitor machines, or do you do it as a service?
  • 27 min 45 sec: Kubernetes Cluster across providers?
    • Kubernetes Federation
    • Why Kubernetes? Writing our own Kubernetes using Kubernetes
  • 31 min 55 sec: Why not do this with Virtual Machines?
    • Containers make sense
  • 32 min 40 sec: How long have you been working on this project?
    • The “why” of the project: DISCO → Scheduler
    • Decentralization is the key
    • Can build a private infrastructure using their system
  • 37 min 01 sec: Wrap-Up
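The failover behavior from the 21:23 segment can be sketched as a small client-side control loop. This is illustrative only: `is_healthy` and `open_new_order` stand in for whatever health checks and marketplace calls a real tenant client would use.

```python
import time

def monitor_and_rebid(lease, is_healthy, open_new_order, poll_seconds=30):
    """Keep a long-running workload placed: if its host drops or terminates
    it, open a new order so the marketplace can re-bid it elsewhere."""
    while True:
        if not is_healthy(lease):
            # Re-enter the auction with the same deployment spec (plus any
            # conditions the tenant set for long-running workloads).
            lease = open_new_order(lease)
        time.sleep(poll_seconds)
```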

Podcast – Jordan Rinke on Open Source, Kubernetes, and Edge Computing

Joining us this week is Jordan Rinke, Principal Software Engineer, Walmart Labs. Jordan offers his views on various technologies and open source projects as they relate to the scale and connectivity issues faced by Walmart.

Highlights

  • Technical Gaps in Kubernetes Technologies and Installer Issues
  • Tooling and Orchestration Focus for Kubernetes and Other Tools
  • CoreOS Model for Bootstrapping Kubernetes
  • Discussion on Immutability: Middle Ground for Jordan
  • Edge Computing – Emerging markets lead to disconnected edge sites
  • Data location challenges in edge and cloud services
  • Skills issues for medium-sized clusters

Topic                                               Time (Minutes.Seconds)

Introduction                                        0.00 – 1.08
Jet and Walmart Integration                         1.08 – 1.57
Open Source & Walmart                               1.57 – 3.18
Kubernetes Challenges & Opportunities               3.18 – 6.25
Open Source Installation Tool Sprawl                6.25 – 9.53
Kubernetes to Bootstrap Kubernetes (CoreOS Model)   9.53 – 12.28
Ephemeral Hardware and Immutability                 12.28 – 15.30
Edge Computing                                      15.30 – 20.18
Dynamic Data Locations                              20.18 – 22.44
Medium Scale Clusters (On-Prem Kubernetes)          22.44 – 26.39
Wrap Up (OpenStack Bus Tour)                        26.39 – END

Podcast Guest: Jordan Rinke

Technically inclined executive with 7 years of team leadership and startup growth experience:
Leading teams from 4 to 20 people on highly technical, tactical, and responsive issues. Managing teams that have helped a number of startups secure funding from $50k to $1.5MM+ and effectively utilizing that investment to grow a sustainable, energetic culture and product portfolio.

Before that I accrued 10 years of dev/eng experience (6 years of Fortune 50 company experience, 4 years at one of the world’s largest cloud providers) doing OS deployment (DevOps before it was a buzzword) and driver integration for environments with over 150,000 devices, giving me a unique perspective on large-scale deployment scenarios.

LinuxKit and Three Concerns with Physical Provisioning of Immutable Images

At Dockercon this week, Docker announced an immutable operating system called LinuxKit, which is powered by a Packer-like utility called Moby, as RackN CTO Greg Althaus explains in the video below.

For additional conference notes, check out Rob Hirschfeld’s Dockercon retro blog post.

Three Concerns with Immutable O/S on Physical

With a mix of excitement and apprehension, the RackN team has been watching physical deployment of immutable operating systems like CoreOS Container Linux and RancherOS.  Overall, we like the idea of a small locked (aka immutable) in-memory image for servers; however, the concept does not map perfectly to hardware.

Note: if you want to provision these operating systems in a production way, we can help you!

These operating systems work on a “less is more” approach that strips everything out of the images to make them small and secure.  

This is great for cloud-first approaches, where VM size has a material impact on cost.  It’s particularly well matched to container platforms, where VMs are constantly being created and destroyed.  In these cases, the immutable image is easy to update and saves money.

So, why does that not work as well on physical?

First:  HA DHCP?!  The model is not as great a map for physical systems, where operating system overhead is already minimal.  It requires orchestrated rebooting of your hardware, and since an in-memory image leaves nothing on disk, every reboot is a network boot.  That means you need a highly available (HA) PXE provisioning infrastructure (like the one we’re building with Digital Rebar).
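As a toy model of why the HA requirement follows: an in-memory immutable image leaves nothing on local disk, so every reboot depends on a provisioning endpoint answering. The server names and helper below are invented for illustration.

```python
def network_boot(mac: str, pxe_servers: list[str], is_up) -> str:
    """Every reboot of an in-memory immutable OS is a full network boot:
    a DHCP/PXE answer followed by an image download. With no OS on disk
    to fall back to, a node needs at least one live provisioning endpoint."""
    for server in pxe_servers:
        if is_up(server):
            return f"{mac}: booted fresh immutable image from {server}"
    return f"{mac}: stranded in PXE retry loop"

# A single provisioner makes every reboot a gamble; an HA pair keeps
# orchestrated rolling reboots safe even if one endpoint is down.
print(network_boot("52:54:00:aa:bb:01", ["pxe-a", "pxe-b"], lambda s: s == "pxe-b"))
```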

Second: Configuration.  These minimal images rely on cloud-init-injected configuration.  In a physical environment, there is no way to create cloud-init-like injections without integrating with the kickstart systems (a feature of Digital Rebar Provision).  Further, hardware has many more configuration options (like hard drives and network interfaces) than VMs.  That means we need a robust, system-by-system way to manage these configurations.
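Here is a minimal sketch of what “system-by-system” configuration can mean in practice: a provisioning service rendering a cloud-init-style payload per machine, keyed by MAC address. The inventory layout and function are hypothetical, not Digital Rebar Provision’s actual template API.

```python
# Hypothetical per-machine inventory, keyed by MAC address. Note how much
# hardware-specific detail (disks, NIC names) there is compared to a VM.
MACHINES = {
    "52:54:00:aa:bb:01": {"hostname": "rack1-node01", "boot_disk": "/dev/sda",
                          "nic": "eth0", "ip": "10.0.1.11"},
    "52:54:00:aa:bb:02": {"hostname": "rack1-node02", "boot_disk": "/dev/nvme0n1",
                          "nic": "eno1", "ip": "10.0.1.12"},
}

USER_DATA_TEMPLATE = """\
#cloud-config
hostname: {hostname}
write_files:
  - path: /etc/network-config
    content: "{nic} {ip}"
"""

def render_user_data(mac: str) -> str:
    """Serve a cloud-init-like payload tailored to the requesting machine,
    as a kickstart/provisioning service must do for physical hardware."""
    return USER_DATA_TEMPLATE.format(**MACHINES[mac])

print(render_user_data("52:54:00:aa:bb:01"))
```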

Third:  No SSH.  Yet another issue with these minimal images is that they are supposed to eliminate SSH.  Ideally, the image and its configuration provide everything required to run without additional administration.  Unfortunately, many applications assume post-boot configuration, so people often re-enable SSH to use tools like Ansible.  If it did not conflict with the very nature of the “do-not-configure-the-server” immutable model, I would suggest that SSH is a perfectly reasonable requirement for operators running physical infrastructure.

In summary, even with these issues, we are excited about the positive impact this immutable approach can have on data center operations.

With tooling like Digital Rebar, it’s possible to manage the issues above.  If this appeals to you, let us know!

Hey Dockercon, let’s get Physical!

Overall, Dockercon did a good job connecting Docker users with information.  In some ways, it was a very “let’s get down to business” conference without the open source collaboration feel of previous events.  For enterprise customers and partners, that may be a welcome change.

Unlike past Dockercons, the event did not have major announcements or a lot of non-Docker ecosystem buzz.  That said, I missed that buzz.

One item that got me excited was an immutable operating system called LinuxKit, which is powered by a Packer-like utility called Moby (ok, I know it does more, but that’s still fuzzy to me).

RackN CTO, Greg Althaus, was able to turn around a working LinuxKit Kubernetes demo (VIDEO) overnight.  This short video explains Moby & LinuxKit plus uses the new Digital Rebar Provision in an amazing integration.

Want to hear more about immutable operating systems?  Check out our post on RackN’s site about three challenges of running things like LinuxKit, CoreOS Container Linux and RancherOS on metal.

Oh, and YES, that was my 15-year-old daughter giving a presentation at Dockercon about workplace diversity.  I’ll link the video when they’ve posted it.

https://www.slideshare.net/KateHirschfeld/slideshelf

Cloudcast.net gem about Cluster Ops Gap

Podcast juxtaposition can be magical.  In this case, I heard back-to-back sessions: first a pragmatic case for cluster operations, then a look at how developers are rebelling against infrastructure.

Last week, I was listening to Brian Gracely’s “Automatic DevOps” discussion with John Troyer (CEO at TechReckoning, a community for IT pros) followed by his confusingly titled “operators” talk with Brandon Phillips (CTO at CoreOS).

John’s mid-recording comments really resonated with me:

At 16 minutes: “IT is going to be the master of many environments… If you have an environment that is hybrid & multi-cloud, then you still need to care about infrastructure… we are going to be living with that for at least 10 years.”

At 18 minutes: “We need a layer that is cloud-like, devops-like and agile-like that can still be deployed in multiple places.  This middle layer, Cluster Ops, is really important because it’s the layer between the infrastructure and the app.”

The conversation with Brandon felt very different: the goal there was to package everything “operator” into Kubernetes semantics, including Kubernetes running itself.  This inception approach to running the cluster is irresistible within the community, because the community’s goal is to stop having to worry about infrastructure.  [Brian – call me if you want to do a podcast on the counterpoint to self-hosted].

Infrastructure is hard and complex.  There’s good reason to limit how many people have to deal with that, but someone still has to deal with it.

I’m a big fan of container workloads generally and Kubernetes specifically as a way to help isolate application developers from infrastructure; by design, it does not handle the messy infrastructure requirements that make Cluster Ops a challenge.  This is a good thing, because complexity explodes when platforms expose infrastructure details.

For Kubernetes and similar, I believe that injecting too much infrastructure mess undermines the simplicity of the platform.

There’s a different type of platform needed for infrastructure-aware cluster operations, where automation needs to address complexity via composability.  That’s what RackN is building with open Digital Rebar: a hybrid management layer that can consistently automate around infrastructure variation.
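As an illustration of “addressing complexity via composability” (a sketch of the idea, not Digital Rebar’s actual workflow engine): small, reusable automation steps are composed into a pipeline, and only the infrastructure-specific step varies per platform.

```python
from typing import Callable, Dict, List

Step = Callable[[dict], dict]

def compose(steps: List[Step]) -> Step:
    """Chain small automation steps into one workflow."""
    def pipeline(state: dict) -> dict:
        for step in steps:
            state = step(state)
        return state
    return pipeline

def discover(state: dict) -> dict:
    state["inventory"] = ["node1", "node2"]
    return state

def install_os_via_pxe(state: dict) -> dict:
    state["os"] = "installed via PXE"   # bare-metal-specific step
    return state

def boot_prebuilt_image(state: dict) -> dict:
    state["os"] = "booted from image"   # cloud-specific step
    return state

def join_cluster(state: dict) -> dict:
    state["cluster"] = "joined"
    return state

# Only the OS step varies by infrastructure; discovery and cluster join are
# reused unchanged, which is how composition contains the variation.
PLATFORM_STEP: Dict[str, Step] = {"metal": install_os_via_pxe,
                                  "cloud": boot_prebuilt_image}

def build_workflow(platform: str) -> Step:
    return compose([discover, PLATFORM_STEP[platform], join_cluster])

print(build_workflow("metal")({}))
```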

If you want to work with us to create system-focused, infrastructure-agnostic automation, then take a look at the work we’ve been doing on underlay and cluster operations.

 

As Docker rises above (and disrupts) clouds, I’m thinking about their community landscape

Watching the lovefest of Dockercon last week had me digging up my April 2014 “Can’t Contain(erize) the Hype” post.  There’s no doubt that Docker (and containers more broadly) is delivering on its promise.  I was impressed with the container community navigating towards an open platform in RunC and vendor adoption of the trusted container platforms.

I’m a fan of containers and their potential; yet, remotely watching the scope and exuberance of Docker partnerships seems out of proportion with the current capabilities of the technology.

The latest update to the Docker technology, v1.7, introduces a lot of important network, security and storage features.  The price of all that progress is disruption to ongoing work and to ecosystem integrations.

There’s always two sides to the rapid innovation coin: “Sweet, new features!  Meh, breaking changes to absorb.”

Docker Ecosystem Explained (chart)

There remains confusion between Docker the company and Docker the technology.  I like how the chart (right) maps out potential areas in the Docker ecosystem.  There are clearly a lot of places for companies to monetize the technology; however, it’s not as clear if the company will be able to cede lucrative regions, like orchestration, to become a competitive landscape.

While Docker has clearly delivered a lot of value in just a year, they have a fair share of challenges ahead.  

If OpenStack is a leading indicator, we can expect to see vendor battlegrounds forming around networking and storage.  Docker (the company) has a chance to show leadership and build community here, yet could cause harm by giving up the arbitrator role to be a contender instead.

One thing that would help control the inevitable border skirmishes will be clear definitions of core, ecosystem and adjacencies.  I see Docker blurring these lines with some of their tools around orchestration, networking and storage.  I believe that was part of their now-suspended kerfuffle with CoreOS.

Thinking a step further, parts of the Docker technology (RunC) have moved over to Linux Foundation governance.  I wonder if the community will drive additional shared components into open governance.  Looking at Node.js, there’s clear precedent and I wonder if Joyent’s big Docker plans have them thinking along these lines.