Podcast – Dave Blakey of Snapt on Radically Different ADC

Joining us this week is Dave Blakey, CEO and Co-Founder of Snapt.

About Snapt

Snapt develops high-end solutions for application delivery. We provide load balancing, web acceleration, caching and security for critical services.

Highlights

  • 1 min 28 sec: Introduction of Guest
  • 1 min 59 sec: Overview of Snapt
    • Software solution
  • 3 min 1 sec: New Approach to Firewalls and Load Balancers
    • Driven by customers with micro-services, containers, and dynamic needs
    • Fast scale and massive volume needs
    • Value is in quality of service and visibility into any anomaly
  • 7 min 28 sec: Engaging with DevOps teams for Customer interactions
    • Need driven by wanting similar tools across multiple clouds and on-premises
    • 80% is visibility and 20% is scalability
    • Podcast – Honeycomb Observability
  • 13 min 09 sec: Kubernetes and Istio
    • Use cases remain the same independent of the technology
    • Difference is in the operations not the setup
    • Istio is an API for Snapt to plug into
  • 17 min 29 sec: How do you manage a globally delivered application stack?
    • Have to go deep into app services to properly meet demand where needed
    • Immutable deployments?
  • 25 min 24 sec: Eliminate Complexity to Create Operational Opportunity
  • 26 min 29 sec: Corporate Culture Fit in Snapt Team
    • Built Snapt as they needed a product like Snapt
    • Feature and Complexity Creep
  • 28 min 48 sec: Does platform learn?
  • 31 min 20 sec: Lessons about system communication times
    • Lose 25% of audience per 1 second of website load time
  • 34 min 34 sec: Wrap-Up

Podcast Guest: Dave Blakey, CEO and Co-Founder of Snapt.

Dave Blakey founded Snapt in 2012 and currently serves as the company’s CEO.

Snapt now provides load balancing and acceleration to more than 10,000 clients in 50 countries. High-profile clients include NASA, Intel, and various other forward-thinking technology companies.

Today, Dave is a leading open-source software-defined networking thought leader, with deep domain expertise in high-performance (carrier-grade) network systems, management, and security solutions.

He is a passionate advocate for advancing South Africa’s start-up ecosystem and expanding the global presence of the country’s tech hub.

Podcast – James Ferguson talks Kubernetes and the future as an Application Platform

Joining us this week is James Ferguson, Director of Cloud Consulting, JBC Labs.

About JBC Labs

Looking to automate your cloud and on-prem solutions? Need faster CI/CD on AWS, GCP, or Azure? Or perhaps ETL processing that makes your team and users of information more productive? JBC Labs has provided solutions to companies big and small for over 20 years. Our solutions make you run faster and more fluidly, and provide a spark to drive your innovation. Our team of certified cloud architects and solution providers is ready to help you today.

Highlights
• Overview of JBC Labs’ Jump Box Central Kubernetes Solution
• State of Kubernetes Today
• Concept of Kubernetes as an Application Platform
• Functions as a Service
• Service Mesh and Kubernetes

Topics with times (Minutes.Seconds):
• Introduction (0.00 – 1.24)
• Jump Box Central Name? (1.24 – 2.06)
• Where are People Looking for Help in Kubernetes? (2.06 – 4.13)
• What Problem are you Looking to Solve? (4.13 – 6.15)
• What Else Needed to Make Kubernetes Successful? (6.15 – 10.18)
• Sell Services for Kubernetes? (10.18 – 12.09)
• Kubernetes Delivery Platform for Apps (12.09 – 15.37, Hospital Example)
• Why Function as a Service? (15.37 – 19.05)
• Does Variety of Options Slow Down Adoption? (19.05 – 20.05)
• Is FaaS an App for Kubernetes? (20.05 – 22.29, PB & Chocolate)
• Role of Service Mesh (22.29 – 29.06)
• Contact Information (29.06 – END)

James Ferguson, Director of Cloud Consulting, JBC Labs.

James Ferguson has been involved in IT, software development, and business management since 1992. During his career, he created the world’s first mobile-agnostic application for SAP and Oracle in the cloud, featured by Gartner and Forrester. He has founded two companies and led many others. James has helped companies across industries ranging from real estate, oil and gas, and utilities to finance, insurance, and marketing. He currently serves customers as a principal architect and thought leader for the Fortune 500 and SMBs. James can be found on LinkedIn, by email, by mobile, or out in the back country hiking.

DC2020: Putting the Data back in the Data Center

For the past two decades, data centers have been more about compute than data, but the machine learning and IoT revolutions are changing that focus for the 2020 Data Center (aka DC2020). My experience at IBM Think 2018 suggests that we should be challenging our compute centric view of a data center; instead, we should be considering the flow and processing of data. Since data is not localized, that reinforces our concept of DC2020 as a distributed and integrated environment.

We have defined data centers by the compute infrastructure stored there. Cloud (especially when equated with virtualized machines) has been an infrastructure-as-a-service (IaaS) story. Even big data “lakes” are primarily compute clusters with distributed storage. This model dominates because data sources are locked in application silos: control of the compute translates directly to control of the data.

What if control of data is being decoupled from applications? Data is becoming its own thing with new technologies like machine learning, IoT, blockchain and other distributed sourcing.

In a data-centric model, we are more concerned with movement of and access to data than with building applications to control it. Think of event-driven (serverless) and microservice platforms that effectively operate on data-in-flight. As function-as-a-service adoption progresses, it will become impossible to know all the ways that data is manipulated, because applications no longer have clear boundaries.
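
To make the data-in-flight idea concrete, here is a minimal sketch (in Python, with hypothetical function and field names, not any specific platform) of a serverless-style handler that transforms and forwards an event as it moves, rather than an application owning the data at rest:

```python
# Minimal sketch of a data-in-flight handler: the function reacts to an
# event, transforms the payload, and forwards it onward. No single
# application "owns" the data; it is processed as it moves.
# Names (enrich, forward, handle_event) are illustrative assumptions.
import json

def enrich(payload: dict) -> dict:
    """Attach derived fields while the data is in flight."""
    payload["celsius"] = round((payload["fahrenheit"] - 32) * 5 / 9, 2)
    return payload

def forward(payload: dict) -> None:
    """Stand-in for publishing to the next topic/queue in the flow."""
    print("forwarding:", json.dumps(payload))

def handle_event(event: str) -> None:
    """Entry point a serverless runtime would invoke once per event."""
    forward(enrich(json.loads(event)))

if __name__ == "__main__":
    handle_event('{"sensor": "hvac-7", "fahrenheit": 72.5}')
```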

This data-centric, distributed architecture model will be even more pronounced as processing moves out of data centers and to the edge. IT infrastructure at the edge will be used for handling latency-critical data and for aggregating data for centralization. These operations will not look like traditional application stacks: they will be data processing microservices and functions.
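
As a rough illustration of that edge role (hypothetical names, not any specific product), the sketch below keeps raw readings local and only ships a compact aggregate toward the central data center:

```python
# Hypothetical edge aggregation function: latency-critical raw readings stay
# local, and only a small summary record travels upstream for centralization.
from statistics import mean

def summarize(readings: list[float]) -> dict:
    """Reduce a window of raw edge readings to a compact record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": round(mean(readings), 2),
    }

if __name__ == "__main__":
    window = [21.0, 21.4, 22.1, 21.8, 35.0]   # raw local samples
    print(summarize(window))                   # only this summary leaves the edge
```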

This data-centric approach relegates infrastructure services to a subordinate role. We should not care about servers or machines except as they support the platforms driving data flows.

I am not abandoning making infrastructure simple and easy – we need to do that more than ever! However, it’s easy to underestimate the coming transformation of application architectures based on advanced data processing and sharing technologies. The amount and sources of data have already grown beyond human comprehension because we still think of applications in a client-server mindset.

We’re only at the start of really embedding connected sensors and devices into our environment. As devices from many sources and vendors proliferate, they also need to coordinate. That means we’re reaching a point where devices will start talking to each other locally instead of via our centralized systems. It’s part of the coming data avalanche.

Current management systems will not survive explosive growth.  We’re entering a phase where control and management paradigms cannot keep up.

As an industry, we are rethinking management automation from declarative (“start this”) to intent (“maintain this”) focused systems. This is the simplest way to express the difference between OpenStack and Kubernetes. That change is required to create autonomous infrastructure designs; however, it also means we need to start thinking of infrastructure as something that follows data instead of leading it.
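
A minimal sketch of that difference, using stand-in functions rather than any real API: the declarative call fires once, while the intent-driven loop keeps reconciling observed state toward desired state (the pattern behind Kubernetes-style controllers):

```python
# Sketch of "start this" vs "maintain this". The declarative call runs once;
# the intent-driven loop continuously converges toward the desired state.
# observed_replicas and start_replicas are illustrative stand-ins.
import time

def observed_replicas() -> int:
    """Stand-in for querying the live system."""
    return 2

def start_replicas(n: int) -> None:
    print(f"starting {n} replicas")

def reconcile(desired: int) -> None:
    """Intent-focused: compare desired vs observed and correct the drift."""
    current = observed_replicas()
    if current < desired:
        start_replicas(desired - current)
    elif current > desired:
        print(f"stopping {current - desired} replicas")

if __name__ == "__main__":
    start_replicas(3)          # declarative: "start this", fire and forget
    for _ in range(3):         # intent: "maintain this", keep reconciling
        reconcile(desired=3)
        time.sleep(1)
```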

That’s exactly what RackN has solved with Digital Rebar Provision. Deeply composable, simple APIs and extensible workflows are an essential component of integrated automation in DC2020, putting the data back in the data center.

.IO! .IO! It’s off to a Service Mesh you should go [Gluecon 2017 notes]

TL;DR: If you are containerizing your applications, you need to be aware of this “service mesh” architectural pattern to help manage your services.

Gluecon turned out to be all about a microservice concept called a “service mesh,” which was being promoted by Buoyant with Linkerd and by IBM/Google/Lyft with Istio. This class of services is a natural evolution of the rush to microservices and something I’ve written about on TheNewStack in the past (microservice technical architecture).

A service mesh is the result of having a dependency grid of microservices. Since we’ve decoupled the application internally, we’ve created coupling between the services. Hard-coding those relationships causes serious failure risks, so we need a service that intermediates between the services. This pattern has been widely socialized with the Zipkin graphic from Srdan Srepfler’s microservice anatomy presentation.

IMHO, it’s healthy to find service mesh architecturally scary.

One of the hardest things about scaling software is managing the dependency graph. This challenge has been unavoidable from the early days of Windows “DLL Hell” to the mixed joy/terror of working with Ruby Gems, Python pip and Node.js npm. We get tremendous acceleration from using external modules and services, but we also pay a price to manage those dependencies.

For microservice and Cloud Native designs, the service mesh is that dependency management price tag.

A service mesh is not just a service injected between services. Its simplest function is to provide a reverse proxy so that multiple services can be consolidated under a single endpoint. That quickly leads to needing load balancers, discovery and encrypted back-end communication. From there, we start thinking about circuit breaker patterns, advanced logging and A/B migrations. Another important consideration is that service meshes are for internal services, not end-user facing traffic; that means layers of load balancers.
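
Here is a toy circuit breaker in Python to ground one of those responsibilities. A real mesh (Linkerd, or Istio via Envoy) handles this transparently in the proxy layer; this sketch and its flaky_backend function are invented purely for illustration:

```python
# Toy circuit breaker: after enough consecutive failures, stop calling the
# sick backend and fail fast until a cool-down period passes.
import random
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 5.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        # While "open", fail fast instead of hammering the backend.
        if self.opened_at and time.time() - self.opened_at < self.reset_after:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()   # trip the breaker
            raise
        self.failures = 0                      # success resets the breaker
        self.opened_at = None
        return result

def flaky_backend():
    """Invented backend that times out most of the time."""
    if random.random() < 0.7:
        raise ConnectionError("backend timeout")
    return "ok"

if __name__ == "__main__":
    breaker = CircuitBreaker()
    for _ in range(10):
        try:
            print(breaker.call(flaky_backend))
        except Exception as err:
            print("request failed:", err)
```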

It’s easy to see how a service mesh becomes a very critical infrastructure component.

If you are working your way through containerization then these may seem like very advanced concepts that you can postpone learning.  That blissful state will not last for long and I highly suggest being aware of the pattern before your development teams start writing their own versions of this complex abstraction layer.  Don’t assume this is a development concern: the service mesh is deeply tied to infrastructure and operations.

The service mesh is one of those tricky dev/ops intersections and should be discussed jointly.

Has your team been working with a service mesh?  We’d love to hear your stories about it!

Czan we consider Ansible Inventory as simple service registry?

... "docker exec configure file" is a sad but common pattern ...

Interesting discussions happen when you hang out with straight-talking Paul Czarkowski. There’s a long chain of circumstance that led us from an Interop panel together in Barcelona (video) to bemoaning Ansible and Docker integration early one Sunday morning outside a gate at IAD.

What started as a rant about the czray ways people find to inject configuration into containers (we seemed to think file-mounting configs was “least horrific”) turned into a discussion about how to retrofit application registry features (like consul or etcd) into legacy applications.

Ansible Inventory is basically a static registry service.

While we both acknowledge that Ansible inventory is distinctly not a registry service, the idea is a useful way to help explain the interaction between registry and configuration. The most basic goal of a registry (there are others!) is to let system components find and integrate with other system components. In that sense, the inventory allows operators to pre-wire this information in advance in a functional way.
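
A rough sketch of that “static registry” reading of an inventory, assuming a simplified INI-style file and illustrative group and host names; the health checking and dynamic updates that consul or etcd provide are deliberately absent:

```python
# Treat a static Ansible-style INI inventory as a crude service registry:
# group name -> list of host endpoints. Only the "find the other components"
# half of a registry is covered; health and dynamic updates are missing.
INVENTORY = """
[database]
db1.example.com
db2.example.com

[webservers]
web1.example.com
"""

def parse_inventory(text: str) -> dict[str, list[str]]:
    groups: dict[str, list[str]] = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]          # start a new group
            groups[current] = []
        elif current:
            groups[current].append(line)  # host belongs to current group
    return groups

if __name__ == "__main__":
    registry = parse_inventory(INVENTORY)
    # "Discover" the database tier the way an app would query a registry.
    print(registry["database"])
```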

The utility quickly falls apart because it’s difficult to create re-runnable Ansible (people can barely pronounce idempotent as it is) that can handle incremental updates. Also, a registry provides many other important functions, like service health and basic cross-node storage, that are important.
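
The idempotency gap is easy to show with a toy example (a hypothetical registration step, not real Ansible): the naive version breaks on re-runs, while the guarded version converges no matter how many times it is replayed:

```python
# Illustration of re-runnable vs non-re-runnable steps. The naive version
# duplicates state on every run; the idempotent version only changes the
# system when it is not already in the desired state.
def register_naive(registry: list[str], endpoint: str) -> None:
    registry.append(endpoint)            # duplicates on every re-run

def register_idempotent(registry: list[str], endpoint: str) -> None:
    if endpoint not in registry:         # converge: only change if needed
        registry.append(endpoint)

if __name__ == "__main__":
    naive, safe = [], []
    for _ in range(3):                   # simulate three runs of the playbook
        register_naive(naive, "db1.example.com")
        register_idempotent(safe, "db1.example.com")
    print(naive)  # three duplicate entries
    print(safe)   # one entry, no matter how many runs
```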

It may not be perfect, but I thought @pczarkowski’s insight was worth passing on. What do you think?

Hybrid DevOps: Union of Configuration, Orchestration and Composability

Steven Spector and I talked about “Hybrid DevOps” as a concept. Our discussion led to a ‘there’s a picture for that!’ moment that helped clarify the concept. We believe that this concept, like Rugged DevOps, is additive to existing DevOps thinking and culture. It’s about expanding our thinking to include orchestration and composability.

Here’s our write-up: Hybrid DevOps: Union of Configuration, Orchestration and Composability

Composability is Critical in DevOps: let’s break the monoliths

This post was inspired by my DevOps.com Git for DevOps post and is an evolution of my “Functional Ops (the cake is a lie)” talks.

2016 is the year we break down the monoliths. We’ve spent a lot of time talking about monolithic applications and microservices; however, there’s an equally deep challenge in ops automation.

Anti-monolith composability means making our automation into function blocks that can be chained together by orchestration.

What is going wrong? We’re building fragile, tightly coupled automation.

Most of the automation scripts that I’ve worked with become very long, interconnected sequences that go well beyond the actual application they are trying to install. For example, Kubernetes needs etcd as a datastore. The current model is to include the etcd install in the install script. The same is true for SDN install/configuration and post-install tests and dashboard UIs. The simple “install Kubernetes” quickly explodes into a kitchen sink of related adjacent components.

Those installs quickly become fragile and bloated. Even worse, they have hidden dependencies. What happens when etcd changes? Now we’ve got to track down all the references to it buried in etcd-based applications. Further, we don’t get the benefits of etcd deployment improvements like secure or scaled configurations.

What can we do about it?  Resist the urge to create vertical silos.

It’s tempting and fast to create automation that works in a very prescriptive way for a single platform, operating system and tool chain. The work of creating abstractions between configuration steps seems like a lot of overhead. Even if you create those boundaries or reuse upstream automation, you’re likely to be vulnerable to changes within that component. All these concerns drive operators to walk away from working collaboratively with each other and with developers.
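
To make the “function blocks chained by orchestration” idea from earlier concrete, here is a minimal sketch with placeholder install functions: each block is independent and reusable, and swapping or upgrading the etcd block does not require editing the Kubernetes block:

```python
# Sketch of composable automation: each install step is an independent,
# reusable block, and an orchestration layer chains them. Function names
# are illustrative placeholders, not real installer commands.
from typing import Callable

def install_etcd() -> None:
    print("installing etcd (owned by the etcd workflow, not Kubernetes)")

def install_kubernetes() -> None:
    print("installing kubernetes control plane")

def install_sdn() -> None:
    print("installing SDN / network overlay")

def run_workflow(stages: list[Callable[[], None]]) -> None:
    """Orchestration layer: chain function blocks instead of one giant script."""
    for stage in stages:
        stage()

if __name__ == "__main__":
    # The "install Kubernetes" workflow becomes a composition, not a monolith.
    run_workflow([install_etcd, install_kubernetes, install_sdn])
```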

Giving up on collaborative Ops hurts us all and makes it impossible to engineer excellent operational tools.  

Don’t give up!  Like git for development, we can do this together.