Podcast – Dave Blakey of Snapt on Radically Different ADC

Joining us this week is Dave Blakey, CEO and Co-Founder of Snapt.

About Snapt

Snapt develops high-end solutions for application delivery. We provide load balancing, web acceleration, caching and security for critical services.

Highlights

  • 1 min 28 sec: Introduction of Guest
  • 1 min 59 sec: Overview of Snapt
    • Software solution
  • 3 min 1 sec: New Approach to Firewalls and Load Balancers
    • Driven by customers with micro-services, containers, and dynamic needs
    • Fast scale and massive volume needs
    • Value is in quality of service and visibility into any anomaly
  • 7 min 28 sec: Engaging with DevOps teams for Customer interactions
    • Similar tools across multiple clouds and on-premises drives needs
    • 80% is visibility and 20% is scalability
    • Podcast – Honeycomb Observability
  • 13 min 09 sec: Kubernetes and Istio
    • Use cases remain the same independent of the technology
    • Difference is in the operations not the setup
    • Istio is an API for Snapt to plug into
  • 17 min 29 sec: How do you manage globally delivered application stack?
    • Have to go deep into app services to properly meet demand where needed
    • Immutable deployments?
  • 25 min 24 sec: Eliminate Complexity to Create Operational Opportunity
  • 26 min 29 sec: Corporate Culture Fit in Snapt Team
    • Built Snapt as they needed a product like Snapt
    • Feature and Complexity Creep
  • 28 min 48 sec: Does platform learn?
  • 31 min 20 sec: Lessons about system communication times
    • Lose 25% of audience per 1 second of website load time
  • 34 min 34 sec: Wrap-Up

Podcast Guest: Dave Blakey, CEO and Co-Founder of Snapt.

Dave Blakey founded Snapt in 2012 and currently serves as the company’s CEO.

Snapt now provides load balancing and acceleration to more than 10,000 clients in 50 countries. High-profile clients include NASA, Intel, and various other forward-thinking technology companies.

Today, Dave is a leading open-source software-defined networking thought leader, with deep domain expertise in high-performance (carrier-grade) network systems, management, and security solutions.

He is a passionate advocate for advancing South Africa’s start-up ecosystem and expanding the global presence of the country’s tech hub.

Podcast – Erica Windisch on Observability of Serverless, Edge Computing, and Abstraction Boundaries

Joining us this week is Erica Windisch, Founder/CTO at IOpipe, a high-fidelity metrics and monitoring service that lets you see inside AWS Lambda functions for better insight into the daily operations and development of serverless applications.

Highlights

  • Intro of AWS Lambda and IOpipe
  • Discussion of Observability and Opaqueness of Serverless
  • Edge Computing Definition and Vision
  • End of Operating Systems and Abstraction Boundaries

Topics and Times (Minutes.Seconds)

  • Introduction: 0.0 – 1.16
  • Vision of technology future: 1.16 – 3.04 (Containers ~ Docker)
  • Complexity of initial experience with new tech: 3.04 – 5.38 (Devs don’t go deep in OS)
  • Why Lambda?: 5.38 – 8.14 (Deploy functions)
  • What IOpipe does: 8.14 – 10.54 (Observability for calls)
  • Lambda and Integration into IOpipe: 10.54 – 13.48 (Overhead)
  • Observability definition: 13.48 – 17.25
  • Opaque system with Lambda: 17.25 – 21.13
  • Serverless frameworks still need tools to see inside: 21.13 – 24.20 (Distributed Issues Day 1)
  • Edge computing definition: 24.20 – 26.56 (Microprocessor in Everything)
  • Edge infrastructure vision: 26.56 – 29.32 (TensorFlow example)
  • Portability of containers vs. functions: 29.32 – 31.00 (Linux is Dying)
  • Abstraction boundaries: 31.00 – 33.50 (Immutable Infra Panel)
  • Is Serverless the portability unit for abstraction?: 33.50 – 39.46 (Amazon Greengrass)
  • Wrap Up: 39.46 – END


Podcast Guest: Erica Windisch, Founder/CTO at IOpipe

Erica Windisch is the founder and CTO of IOpipe, a company that builds, connects, and scales code. She was previously a software and security engineer at Docker. Before joining Docker, she worked as a principal engineer at Cloudscaling, and she studied at the Florida Institute of Technology.


Podcast – Oliver Gould on Service Mesh, Containers, and Edge

Joining us this week is Oliver Gould, CTO of Buoyant, which provides a service mesh abstraction view of microservices and Kubernetes. Oliver and Rob also take a look at how applications are managed at the edge and highlight the future roadmap for Conduit.

Highlights

  • Defining microservices and Kubernetes from Buoyant viewpoint
  • Service mesh abstractions at a request level (load balance, get, put, …)
  • Conduit overview – client-side load balancing
  • Service mesh tool comparisons
  • Edge Computing discussion from service mesh view

Topics and Times (Minutes.Seconds)

  • Introduction: 0.0 – 1.39
  • Define Microservices: 1.39 – 5.25
  • Define Kubernetes: 5.25 – 10.23 (Memory as a Service)
  • Service Mesh Abstractions: 10.23 – 12.37 (L5 or L7)
  • Conduit Overview: 12.37 – 18.20 (Sidecar Container)
  • When do I need Service Mesh?: 18.20 – 19.55 (Complex Debugging)
  • Service Mesh Comparisons: 19.55 – 22.31
  • Deployment Times / V2 to 3 for DRP: 22.31 – 25.13 (Kubernetes into Production)
  • Edge Computing: 25.13 – 27.04 (Define)
  • App in Cloud + Edge Device?: 27.04 – 31.10 (POP = Point of Prescience)
  • Containers + Serverless: 31.10 – 34.30 (Proxy in Browser)
  • Future Roadmap: 34.30 – 37.06 (Conduit.io)
  • Wrap Up: 37.06 – END

Podcast Guest: Oliver Gould, CTO of Buoyant

Oliver Gould is the CTO of Buoyant, where he leads open source development efforts. Previously, he was a staff infrastructure engineer at Twitter, where he was the tech lead of the Observability, Traffic, and Configuration and Coordination teams. Oliver is the creator of linkerd and a core contributor to Finagle, the high-volume RPC library used at Twitter, Pinterest, SoundCloud, and many other companies.

Making Server Deployment 10x Faster – the ROI on Immutable Infrastructure

Author’s note: We’re looking for RackN Beta participants who want to help refine next generation deployment capabilities like the one described below.  We have these processes working today – our goal is to make them broadly reusable and standardized.

We’ve been posting [Go CI/CD and Immutable Infrastructure for Edge Computing Management] and podcasting [Discoposse: The Death of Configuration Management, Immutable Deployment Challenges for DevOps] about the concept of immutable infrastructure because it offers simpler and more repeatable operations processes. Delivering a pre-built image with software that’s already installed and mostly configured can greatly simplify deployment (see cloud-init). It is simpler because all of the “moving parts” of the image can be pre-wired together and tested as a unit. This model is the default for containers, but it’s also widely used in cloud deployments where it’s easy to push an AMI or VHD to the cloud as a master image.

It takes work and expertise to automate building these immutable images, so it’s important to understand the benefits of simplicity, repeatability and speed.

  • Simplicity: Traditional configuration approaches start from an operating system base and then run configuration scripts to install the application and its prerequisites. This configuration process requires many steps that are sequence dependent and have external dependencies. Even small changes can break the entire system and prevent deployments. By delivering an image instead, deploy-time integration and configuration issues are eliminated.
  • Repeatability: Since the deliverable is an image, every environment – dev, test and production – uses the exact same artifact. That consistency reduces error rates and encourages cross-team collaboration because all parties are invested in the provenance of the images. In fact, immutable images are a great way to ensure that development and operations are both at the table because neither team can create a custom environment.
  • Speed: Post-deployment configuration is slow. If your installation has to pull patches, libraries and other components every time you install it, then you’ll spend a lot of time waiting for downloads. Believe it or not, the overhead of downloading a full image is small compared to the incremental delays of configuring an application stack. Even the compromise of pre-staging items and then running local-only configuration still takes a surprisingly long time.

These benefits have been relatively easy to realize with Docker containers (it’s built in!) or VM images; however, they are much harder to realize with physical systems. Containers and VMs provide a consistent abstraction that is missing in hardware. Variations in networking, storage or even memory can cause image deployments to fail.

But… if we could do image-based deployments to metal, then we’d gain these significant advantages. We’d also be able to create portability of images between cloud and physical infrastructure. Between the pure speed of direct images to disk (compared to kickstart or pre-seed) and the elimination of post-provision configuration, immutable metal deploys can be 5x to 10x faster.

Deployments go from 30 minutes down to 6 or even 3. That’s a very big deal.
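To make the “direct image to disk” idea concrete, here is a minimal sketch, assuming a pre-built, gzip-compressed image is already published at an internal URL. The URL, device path and flow are illustrative only, not Digital Rebar’s actual implementation.

```python
#!/usr/bin/env python3
"""Illustrative sketch: stream a pre-built, gzip-compressed image straight
onto a target disk instead of running kickstart/pre-seed plus post-install
configuration. The URL and device below are hypothetical."""

import gzip
import shutil
import urllib.request

IMAGE_URL = "http://images.example.local/webapp-golden.img.gz"  # hypothetical artifact
TARGET_DEVICE = "/dev/sda"  # the whole disk is overwritten; no per-node scripting
CHUNK = 4 * 1024 * 1024     # 4 MiB chunks keep memory use flat while streaming


def write_image(url: str, device: str) -> None:
    """Download, decompress, and write the image in a single streaming pass."""
    with urllib.request.urlopen(url) as response, \
         gzip.GzipFile(fileobj=response) as image, \
         open(device, "wb") as disk:
        shutil.copyfileobj(image, disk, length=CHUNK)


if __name__ == "__main__":
    write_image(IMAGE_URL, TARGET_DEVICE)
    # Reboot into the freshly written image; node-specific settings (hostname,
    # network) arrive via cloud-init style metadata rather than config scripts.
```

Because the node only downloads and writes one artifact, the deploy time is dominated by disk and network throughput rather than by dozens of sequential install and configure steps.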

That’s exactly why RackN has been working to create a standardized, repeatable process for immutable deployments.  We have this process working today with some expert steps required in image creation.  

If this type of process would help your operations team then please contact us and join the RackN Beta Program with advanced extensions for Digital Rebar Provision.

Note: There are risks to this approach as well. There is no system-wide patch or update mechanism except creating a new image and redeploying. That means it takes more time to generate and roll an emergency patch to all systems. Also, even small changes require replacing whole images. These are both practical concerns; however, they are mitigated by maintaining a robust continuous deployment process where images are being constantly refreshed.

How about a CaaPuccino? Krish and Rob discuss containers, platforms, hybrid issues around Kubernetes and OpenStack.

CaaPuccino: A frothy mix of containers and platforms.

Check out Krish Subramanian’s (@krishnan) Modern Enterprise podcast (audio here) today for a surprisingly deep and thoughtful discussion about how frothy new technologies are impacting Modern Enterprise IT. Of course, we also take some time to throw some fire bombs at the end. You can use my notes below to jump to your favorite topics.

The key takeaways are that portability is hard and we’re still working out the impact of container architecture.

The benefit of the longer interview is that we really dig into the reasons why portability is hard and discuss ways to improve it. My personal SRE posts and those on the RackN blog describe operational processes that improve portability. These are real concerns for all IT organizations because mixed and hybrid models are a fact of life.

If you are not actively making automation that works against multiple infrastructures then you are building technical debt.

Of course, if you just want the snark, then jump forward to 24:00 minutes in where we talk future of Kubernetes, OpenStack and the inverted intersection of the projects.

Krish, thanks for the great discussion!

Rob’s Podcast Notes (39 minutes)

2:37: Rob intros Digital Rebar & RackN

4:50: Why our Kubernetes is JUST UPSTREAM

5:35: Where are we going in 5 years? Why Rob believes in Hybrid

  • Should not be 1 vendor who owns everything
  • That’s why we work for portability
  • Public cloud vision: you should stop caring about infrastructure
  • Coming to an age when infrastructure can be completely automated
  • Developer rebellion against infrastructure

8:36: Krish believes that Public cloud will be more decentralized

  • Public cloud should be part of everyone’s IT plan
  • It should not be the ONLY thing

9:25: Docker helps create portability, what else creates portability? Will there be a standard?

  • Containers are a huge change, but it’s not just packaging
  • Smaller units of work is important for portability
  • Container schedulers & PaaS are very opinionated, that’s what creates portability
  • Deeper into infrastructure loses portability (RackN helps)
  • Rob predicts that Lambda and Serverless create portability too

11:38: Are new standards emerging?

  • Some APIs become dominant and act as de facto standards
  • Embedded assumptions break portability – that’s what makes automation fragile
  • Rob explains why we inject configuration to abstract infrastructure
  • RackN works to inject attributes instead of allowing scripts to assume settings (see the sketch after this list)
  • For example, networking assumptions break portability
  • Platforms force people to give up configuration in ways that break portability
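To illustrate the attribute-injection point, here is a tiny hypothetical sketch; the attribute names and template are invented for illustration and are not RackN’s actual API. The fragile version hardcodes environment assumptions, while the portable version only consumes attributes injected by the provisioning layer at deploy time.

```python
#!/usr/bin/env python3
"""Sketch of 'inject attributes instead of letting scripts assume settings'.
Attribute names and the template are hypothetical, for illustration only."""

import json
import sys

# Fragile: the script assumes the environment it runs in.
FRAGILE_CONFIG = """
iface eth0 inet static      # breaks on any host where the NIC is not eth0
    address 10.0.0.5        # breaks outside this one subnet
"""

# Portable: the same fragment, but every environment-specific value is
# supplied by the provisioning layer instead of being assumed.
TEMPLATE = """
iface {nic} inet static
    address {address}
    gateway {gateway}
"""


def render(attributes: dict) -> str:
    """Render the network fragment from injected attributes only."""
    return TEMPLATE.format(**attributes)


if __name__ == "__main__":
    # Example: echo '{"nic": "eno1", "address": "10.20.0.5", "gateway": "10.20.0.1"}' | python3 render.py
    print(render(json.load(sys.stdin)))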

14:50: Why did Platform as a Service not take off?

  • Rob defends PaaS – thinks that it has accomplished a lot
  • Challenge of PaaS is that it’s very restrictive by design
  • Calls out Andrew Clay Shafer’s “don’t call it a PaaS” position
  • Containers provide a less restrictive approach with more options.

17:00: What’s the impact on Enterprise? How are developers being impacted?

  • Service Orientation is a very important thing to consider
  • Encapsulation from services is very valuable
  • Companies don’t own all their IT services any more – it’s not monolithic
  • IT Service Orientation aligns with Business Processes
  • Rob says the API economy is a big deal
  • In machine learning, a business’ data may be more valuable than their product

19:30: Services impact?

  • Services have a business imperative
  • We’re not ready for all the impacts of a service orientation
  • Challenge is to mix configuration and services
  • Magic of Digital Rebar is that it can mix orchestration of both

22:00: We are having issues with simple, how are we going to scale up?

  • Barriers are very low right now

22:30: Will Kubernetes help us solve governance issues?

  • Kubernetes is doing a good job building an ecosystem
  • Smart to focus on just being Kubernetes
  • It will be chaotic as the core is worked out

24:00: Do you think Kubernetes is going in the right direction?

  • Rob is bullish for Kubernetes to be the dominant platform because it’s narrow and specific
  • Google has the right balance of control
  • Kubernetes really is not that complex for what it does
  • Mesos is also good but harder to understand for users
  • Swarm is simple but harder to extend for an ecosystem
  • Kubernetes is a threat to Amazon because it creates portability and ecosystem outside of their platform
  • Rob thinks that Kubernetes could create platform services that compete with AWS services like RDS.
  • It’s likely to level the field, not create a Google advantage

27:00: How does Kubernetes fit into the Digital Rebar picture?

  • We think of Kubernetes as a great infrastructure abstraction that creates portability
  • We believe there’s a missing underlay – Kubernetes cannot abstract the infrastructure beneath it, and that’s what we do.
  • OpenStack deployments break because every data center is custom and different – vendors create a lot of consulting without solving the problem
  • RackN is creating composability UNDER Kubernetes so that those infrastructure differences do not break operation automation
  • Kubernetes does not have the constructs in the abstraction to solve the infrastructure problem, that’s a different problem that should not be added into the APIs
  • Digital Rebar can also then use the Kubernetes abstractions?

30:20: Can OpenStack really be managed/run on top of Kubernetes? That seems complex!

  • There is a MESS in the message of Kubernetes under OpenStack because it sends the message that Kubernetes is better at managing applications than OpenStack
  • OpenStack is just an application, and Kubernetes is a good way to manage applications
  • When OpenStack is already in containers, we can use Kubernetes to do that in a logical way
  • “I’m super impressed with how it’s working” using OpenStack Helm Packs (still needs work)
  • Physical environment still has to be injected into the OpenStack on Kubernetes environment

35:05 Does OpenStack have a future?

  • Yes! But it’s not the big “data center operating system” future that we expected in 2010. Rob thinks it’s a good VM management platform.
  • Rob provides the same caution for Kubernetes. It will work where the abstractions add value but data centers are complex hybrid beasts
  • Don’t “square peg a data center round hole” – find the best fit
  • OpenStack should have focused on the things it does well – it has a huge appetite for solving too many problems.

Containers, Private Clouds, GIFEE, and the Underlay Problem


Gene Kim (@RealGeneKim) posted an exclusive Q&A with Rob Hirschfeld (@zehicle) today on IT Technology: Rob Hirschfeld on Containers, Private Clouds, GIFEE, and the Remaining “Underlay Problem.” 

Questions from the post:

  • Gene Kim: Tell me about the landscape of docker, OpenStack, Kubernetes, etc. How do they all relate, what’s changed, and who’s winning?
  • GK: I recently saw a tweet that I thought was super funny, saying something along the lines “friends don’t let friends build private clouds” — obviously, given all your involvement in the OpenStack community for so many years, I know you disagree with that statement. What is it that you think everyone should know about private clouds that tell the other side of the story?
  • GK: We talked about how much you loved the book Site Reliability Engineering: How Google Runs Production Systems by Betsy Beyer, which I also loved. What resonated with you, and how do you think it relates to how we do Ops work in the next decade?
  • GK: Tell me about the work you did with Crowbar, and how that informs the work you’re currently doing with Digital Rebar?

Read the full Q&A here.

What does it take to Operate Open Platforms? Answers in Datanauts 72

Did I just let OpenStack ops off the hook….?  Kubernetes production challenges…?  

I had a lot of fun in this Datanauts wide-ranging discussion with unicorn herders Chris Wahl and Ethan Banks. I like the three-section format because it gives us a chance to deep dive into distinct topics and includes some out-of-band analysis by the hosts; however, that means you need to keep listening through the commercial breaks to hear the full podcast.

Three parts?  Yes, Chris and Ethan like to save the best questions for last.

In Part 1, we went deep into the industry operational and business challenges uncovered by the OpenStack project. Particularly, Chris and I go into “platform underlay” issues which I laid out in my “please stop the turtles” post. This was part of the build-up to my SRE series.

In Part 2, we explore my operations-focused view of the latest developments in container schedulers with a focus on Kubernetes. Part of the operational discussion goes into architecture “conceits” (or compromises) that allow developers to get the most from cloud native design patterns. I also make a pitch for using proven tools to run the underlay.

In Part 3, we go deep into DevOps automation topics of configuration and orchestration. We talk about the design principles that help drive “day 2” automation and why getting in-place upgrades should be an industry priority.  Of course, we do cover some Digital Rebar design too.

Take a listen and let me know what you think!

On Twitter, we’ve already started a discussion about how much developers should care about infrastructure. My opinion (posted here) is that the DevOps idea that developers “own” infrastructure caused a partial rebellion toward containers.

yes, we are papering over Container ops [from @TheNewStack #DockerCon]

In this brief 7-minute interview made at DockerCon 16, Alex Williams and I cover a lot of ground, ranging from operations’ challenges in container deployment to the early seeds of the community frustration with Docker 1.12 embedding Swarm.

I think there’s a lot of pieces we’re still wishing away that aren’t really gone. (at 4:50)

Rather than repeat TheNewStack’s summary, I want to highlight the operational and integration gaps that we continue to ignore.

It’s exciting to watch a cluster magically appear during a keynote demo, but those demos necessarily skip past the very real provisioning, networking and security work needed to build sustained clusters.

These underlay problems are general challenges that we can address in composable, open and automated ways.  That’s the RackN goal with Digital Rebar and we’ll be showcasing how that works with some new Kubernetes automation shortly.

Here is the interview on SoundCloud or YouTube:


Why Fork Docker? Complexity Whack-a-Mole and Commercial Open Source

Update 12/14/16: Docker announced that they would create a container-engine-only project, containerd, to decouple the engine from the management layers above. Hopefully this addresses the issues outlined in the post below.

Monday, The New Stack broke news about a possible fork of the Docker Engine and prominently quoted me saying “Docker consistently breaks backend compatibility.”  The technical instability alone is not what’s prompting industry leaders like Google, Red Hat and Huawei to take drastic and potentially risky community action in a central project.

So what’s driving a fork?  It’s the intersection of Cash, Complexity and Community.

In fact, I’d warned about this risk over a year ago: Docker is both a core infrastructure technology (the docker container runner, aka Docker Engine) and a commercial company that manages the Docker brand. The community formed a standard, runC, to try to standardize; however, Docker continues to deviate from (or innovate faster than) that base.

It’s important for me to note that we use Docker tools and technologies heavily.  So far, I’ve been a long-time advocate and user of Docker’s innovative technology.  As such, we’ve also had to ride the rapid release roller coaster.

Let’s look at what’s going on here in three key areas:

1. Cash

The expected monetization of containers is the multi-system orchestration and support infrastructure.  Since many companies look to containers as leading the disruptive next innovation wave, the idea that Docker is holding part of their plans hostage is simply unacceptable.

So far, the open source Docker Engine has been simply included without payment into these products. That changed in version 1.12 when Docker co-mingled their competitive Swarm product into the Docker Engine. That effectively forces these other parties to advocate and distribute their competitor’s product.

2. Complexity

When Docker added cool Swarm Orchestration features into the v1.12 runtime, it added a lot of complexity too.  That may be simple from a “how many things do I have to download and type” perspective; however, that single unit is now dragging around a lot more code.

In one of the recent comments about this issue, Bob Wise bemoaned the need for infrastructure to be boring.  Even as we look to complex orchestration like Swarm, Kubernetes, Mesos, Rancher and others to perform application automation magic, we also need to reduce complexity in our infrastructure layers.

Along those lines, operators want key abstractions like containers to be as simple and focused as possible.  We’ve seen similar paths for virtualization runtimes like KVM, Xen and VMware that focus on delivering a very narrow band of functionality very well.  There is a lot of pressure from people building with containers to have a similar experience from the container runtime.

This approach both helps operators manage infrastructure and creates a healthy ecosystem of companies that leverage the runtimes.

Note: My company, RackN, believes strongly in this need and it’s a core part of our composable approach to automation with Digital Rebar.

3. Community

Multi-vendor open source is a very challenging and specialized type of community.  In these communities, most of the contributors are paid by companies with a vested (not necessarily transparent) interest in the project components.  If the participants of the community feel that they are not being supported by the leadership then they are likely to revolt.

Ultimately, the primary difference between Docker and a fork of Docker is the brand and the community. If the companies paying the contributors have the will, then it’s possible to move a whole community. It’s not cheap, but it’s possible.

Developers vs Operators

One overlooked aspect of this discussion is the apparent lock that Docker enjoys on the container developer community.  The three Cs above really focus on the people with budgets (the operators) over the developers.  For a fork to succeed, there needs to be a non-Docker set of tooling that feeds the platform pipeline with portable application packages.

In Conclusion…

The world continues to get more and more heterogeneous.  We already had multiple container runtimes before Docker and the idea of a new one really is not that crazy right now.  We’ve already got an explosion of container orchestration and this is a reflection of that.

My advice?  Worry less about the container format for now and focus on automation and abstractions.


OpenStack Interop, Container Security, Install & Open Source Posts

In case you missed it, I posted A LOT of content this week on other sites covering topics for OpenStack Interop, Container Security, Anti-Universal Installers and Monetizing Open Source.  Here are link-bait titles & blurbs from each post so you can decide which topics pique your interest.

Thirteen Ways Containers are More Secure than Virtual Machines on TheNewStack.com

Last year, conventional wisdom had it that containers were much less secure than virtual machines (VMs)! Since containers have such thin separating walls, it was easy to paint these backdoor risks with a broad brush. Here’s a reality check: Front-door attacks and unpatched vulnerabilities are much more likely than these backdoor hacks.

It’s Time to Slay the Universal Installer Unicorn on DevOps.com 

While many people want a universal “easy button installer,” they also want it to work on their unique snowflake of infrastructures, tools, networks and operating systems.  Because there is so much needful variation and change, it is better to give up on open source projects trying to own an installer and instead focus on making their required components more resilient and portable.

King of the hill? Discussing practical OpenStack interoperability on OpenStack SuperUser

Can OpenStack take the crown as cloud king? In our increasingly hybrid infrastructure environment, the path to the top means making it easier for users to defect from the current leaders (Amazon AWS; VMware) instead of asking them to blaze new trails. Here are my notes from a recent discussion about that exact topic…

Have OpenSource, Will Profit?! 5 thoughts from Battery Ventures OSS event on RobHirschfeld.com

As “open source eats software,” the profit imperative becomes ever more important to figure out. We have to find ways to fund this development or acknowledge that software will simply become waste IP and largess from mega brands. The latter outcome is not particularly appealing or innovative.