I expect 2016 to be a confusing year for everyone in IT. For 2015, I predicted that new uses for containers would upset cloud’s apple cart; however, the replacement paradigm is not yet clear. Consequently, I’m doing a prognostication mix and match: five predictions and seven items on a “container technology watch list.”
TL;DR: In 2016, Hybrid IT arrives on Containers’ wings.
Considering my expectations below, I think it’s time to accept that all IT is heterogeneous and stop trying to box everything into a mono-cloud. Accepting hybrid as current state unblocks many IT decisions that are waiting for things to settle down.
Here’s the memo: “Stop waiting. It’s not going to converge.”
Container Adoption Seen As Two Stages: We will finally accept that containers have strengths for both infrastructure (first-stage adoption) and application life-cycle (second-stage adoption) transformation. Stage one offers value on its own, so we will start talking about legacy migration into containers without shaming teams that are not also rewriting apps as immutable microservice unicorns.
OpenStack continues to bump and grow. Adoption is up and open alternatives are disappearing. For dedicated/private IaaS, OpenStack will continue to gain in 2016 for basic VM management. Both competitive and internal pressures continue to threaten the project, but I believe they will not come to a head in 2016. Here’s my complete OpenStack 2016 post.
Amazon, GCE and Azure make everything else questionable. These services are so deep and rich that I’d question anyone who is not using them. At least one of them simply has to be part of everyone’s IT strategy for financial, talent and technical reasons.
Cloud API becomes irrelevant. Cloud API is so 2011! There are now so many reasonable clients to abstract various infrastructures that Cloud APIs are less relevant. Capability, interoperability and consistency remain critical factors, but the APIs themselves are not interesting.
I’m planning posts about all these key container ecosystems for 2016. I think they are all significant contributors to the emerging application life-cycle paradigm.
Service Containers (& VMs): There’s an emerging pattern of infrastructure-managed containers that provide critical host services like networking, logging, and monitoring. I believe this pattern will provide significant value and generate its own ecosystem.
Networking & Storage Services: Gaps in networking and storage for containers need to get solved in a consistent way. Expect a lot of thrash and innovation here.
Container Orchestration Services: This is the current battleground for container mind share. Kubernetes, Mesos and Docker Swarm get headlines but there are other interesting alternatives.
Containers on Metal: Removing the virtualization layer reduces complexity, overhead and cost. Container workloads are good choices to re-purpose older servers that have too little CPU or RAM to serve as VM hosts. Who can say no to free infrastructure?! While an obvious win to many, we’ll need to make progress on standardized scale and upgrade operations first.
Immutable Infrastructure: Even as this term wins the “most confusing” concept in cloud award, it is an important one for container designers to understand. The unfortunate naming paradox is that immutable infrastructure drives disciplines that allow fast turnover, better security and more dynamic management.
Microservices: The latest generation of service-oriented architecture (SOA) benefits from a new class of distributed service registration platforms (etcd and Consul) that bring new life into SOA.
Paywall Registries: The importance of container registries is easy to overlook because they seem to be version 2.0 of package caches; however, container layering makes these services much more dynamic and central than many realize. (Bernard Golden and I have already posted about this.)
What two items did not make the 2016 cut? 1) Special-purpose container-focused operating systems like CoreOS or RancherOS. While interesting, I don’t think these deployment technologies have architectural-level influence. 2) Container security via VMs. That idea is FUD created by people with a vested interest in virtualization; I’m seeing patterns where containers may actually be more secure than VMs.
Did I miss something? I’d love to know what you think I got right or wrong!
Nearly 10 TIMES faster system resets – that’s the result of fully enabling a multi-container immutable deployment on Digital Rebar.
I’ve been having a “containers all the way down” month since we launched Digital Rebar deployment using Docker Compose. I don’t want to imply that we rubbed Docker on the platform and magic happened. The RackN team spent nearly a year building up the Consul integration and service wrappers for our platform before we were ready to fully migrate.
During the Digital Rebar migration, we took our already service-oriented code base and broke it into microservices. Specifically, the Digital Rebar parts (the API and engine) now run in their own container and each service (DNS, DHCP, Provisioning, Logging, NTP, etc) also has a dedicated container. Likewise, supporting items like Consul and PostgreSQL are, surprise, managed in dedicated containers too. All together, that’s over nine containers and we continue to partition out services.
We use Docker Compose to coordinate the start-up and Consul to wire everything together. Both play a role, but Consul is the critical glue that allows Digital Rebar components to find each other. These were not random choices. We’ve been using a Docker package for over two years and using Consul service registration as an architectural choice for over a year.
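For context, a compose file for this kind of layout looks roughly like the sketch below. This is illustrative only: the service names and images are my assumptions, not the actual Digital Rebar compose file.

```yaml
# Illustrative sketch -- not the real Digital Rebar compose file.
# Consul comes up first; each service container finds its peers
# through Consul registration rather than hard-coded addresses.
version: '2'
services:
  consul:
    image: consul
    ports:
      - "8500:8500"
  database:
    image: postgres
    depends_on:
      - consul
  dns:
    image: example/dns-service      # hypothetical service container
    depends_on:
      - consul
  api:
    image: example/rebar-api        # hypothetical API/engine container
    depends_on:
      - consul
      - database
```

Compose only sequences start-up; the actual wiring between containers happens at runtime through Consul lookups.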
Service registration plays a major role in the functional ops design because we’ve been wrapping datacenter services like DNS with APIs. Consul provides a separation between providing and consuming the service. Our previous design required us to track the running service ourselves. This worked until customers asked for pluggable services (and every customer needs pluggable services as they scale).
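To make the provider/consumer separation concrete, here is a minimal sketch against Consul’s HTTP API. The service names, port numbers, and local agent address are my illustrative assumptions, not Digital Rebar’s actual code.

```python
import json
import urllib.request

CONSUL_URL = "http://localhost:8500"  # assumes a local Consul agent


def registration_payload(name, port, tags=None):
    """Build the JSON body for Consul's /v1/agent/service/register endpoint."""
    return {"Name": name, "Port": port, "Tags": tags or []}


def register_service(name, port, tags=None):
    """Register a service with the local Consul agent.

    Once registered, consumers can discover it via
    GET /v1/catalog/service/<name> instead of tracking
    host/port pairs themselves -- that is the separation
    between providing and consuming a service.
    """
    body = json.dumps(registration_payload(name, port, tags)).encode()
    req = urllib.request.Request(
        CONSUL_URL + "/v1/agent/service/register",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)


# Example (requires a running Consul agent):
# register_service("dns", 53, tags=["infrastructure"])
```

Because the provider registers itself, a replacement implementation only needs to register under the same name and honor the same API.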
Besides faster environment resets, there are several additional wins:
more transparent in how it operates – it’s obvious which containers provide each service and easy to monitor them as individuals.
easier to distribute services in the environment – we can find where the service runs because of the Consul registration, so we don’t have to manage it.
possible to have redundant services – it’s easy to spin up additional instances, even on the same system.
make services pluggable – as long as the service registers and there’s an API, we can replace the implementation.
no concern about which distribution is used – all our containers are Ubuntu user space but the host can be anything.
changes to components are more isolated – changing one service does not require a lot of downloading.
Docker and microservices are not magic but the benefits are real. Be prepared to make architectural investments to realize the gains.
Progress and investment have been substantial and, happily, organic. Like many platforms, its success relies on a reasonable balance between strong opinions about “right” patterns and enough flexibility to accommodate exceptions.
From a well patterned foundation, development teams find acceleration. This seems to be helping CloudFoundry win some high-profile enterprise adopters.
The interesting challenge ahead of the project comes from building more complex autonomous deployments. With the challenge of horizontal scale arguably behind them, CF users are starting to build more complex architectures. This includes dynamic provisioning of the providers (like databases, object stores and other persistent adjacent services) and connecting to containerized “microservices.” (see Matt Stine’s preso)
While this is a natural evolution, it adds an order of magnitude more complexity because the contracts between previously isolated layers are suddenly not reliable.
For example, what happens to a CF deployment when the database provider is field-upgraded to a new version? That could introduce breaking changes in dependent applications that are completely opaque to the data provider. These are hard problems to solve.
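One hedged illustration of a partial mitigation (my own sketch, not a CloudFoundry or Digital Rebar mechanism): if providers advertise a contract version when they register, consumers can at least detect the mismatch instead of failing opaquely. Using version tags in a Consul-style catalog, the catalog entries and the lookup helper below are hypothetical:

```python
def find_provider(services, name, required_major):
    """Return providers of `name` whose advertised major version matches.

    `services` mimics a Consul catalog response: a list of dicts
    with "Service", "Port", and "Tags" (e.g. ["v2.1"]) keys.
    """
    matches = []
    for svc in services:
        if svc["Service"] != name:
            continue
        for tag in svc.get("Tags", []):
            # Tags like "v2.1" encode the provider's contract version.
            if tag.startswith("v") and tag[1:].split(".")[0] == str(required_major):
                matches.append(svc)
                break
    return matches


catalog = [
    {"Service": "database", "Port": 5432, "Tags": ["v2.1"]},
    {"Service": "database", "Port": 5433, "Tags": ["v3.0"]},  # field-upgraded
]
# A consumer built against the v2 contract only binds to the v2 provider.
compatible = find_provider(catalog, "database", 2)
```

This doesn’t solve the upgrade problem, but it moves the breakage from a runtime surprise to a discoverable mismatch.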
Happily, that’s exactly the discussions that we’re starting to have with container orchestration systems. It’s also part of the dialog that I’ve been trying to drive with Functional Operations (FuncOps Preso) on the physical automation side. I’m optimistic that CloudFoundry patterns will help make this problem more tractable.
OpenCrowbar has been using Consul more and more deeply. We’ve reached the point where we must register services on Consul to pass automated tests.
Consequently, I had to write a little Consul client in Erlang.
The client is very basic, but it seems to perform all of the required functions. It relies on some other libraries in OpenCrowbar’s BDD, but they are relatively self-contained. Pull requests welcome if you’d like to help build this out.
After writing pages of notes about the impact of Docker, microservice architectures, mainstreaming of Ops Automation, software defined networking, exponential data growth and the explosion of alternative hardware architecture, I realized that it all boils down to the death of cloud as we know it.
OK, we’re not killing cloud per se this year. It’s more that we’ve put 10 pounds of cloud into a 5 pound bag so it’s just not working in 2015 to call it cloud.
Cloud was happily misunderstood back in 2012 as virtualized infrastructure wrapped in an API beside some platform services (like object storage).
That illusion will be shattered in 2015 as we fully digest the extent of the beautiful and complex mess that we’ve created in the search for better scale economics and faster delivery pipelines. 2015 is going to cause a lot of indigestion for CIOs, analysts and wandering technology executives. No one can pick the winners with Decisive Leadership™ alone because there are simply too many possible right ways to solve problems.
Here’s my list of the seven cloud disrupting technologies and frameworks that will gain even greater momentum in 2015:
Docker – I think that Docker is the face of a larger disruption around containers and packaging. I’m sure Docker alone is not the whole story; there’s a fleet of related technologies and Docker alternatives. However, there’s no doubt that it’s leading a timely rethinking of application life-cycle delivery.
New languages and frameworks – it’s not just the rapid maturity of Node.js and Go, but the frameworks and services that we’re building (like Cloud Foundry or Apache Spark) that change the way we use traditional languages.
Microservice architectures – this is more than containers; it’s really Functional Programming for Ops (aka FuncOps), a new generation of service-oriented architecture that is being empowered by container orchestration systems (like Brooklyn or Fleet). Using microservices well seems to redefine how we use traditional cloud.
Mainstreaming of Ops Automation – We’re past “if DevOps” and into the how. Ops automation, not cloud, is the real puppies vs cattle battle ground. As IT creates automation to better use clouds, we create application portability that makes cloud disappear. This freedom translates into new choices (like PaaS, containers or hardware) for operators.
Software defined networking – SDN means different things but the impacts are all the same: we are automating networking and integrating it into our deployments. The days of networking and compute silos are ending and that’s going to change how we think about cloud and the supporting infrastructure.
Exponential data growth – you cannot build applications or infrastructure without considering how your storage needs will grow as we absorb more data streams and internet of things sources.
Explosion of alternative hardware architecture – In 2010, infrastructure was basically a pizza box or blade from a handful of vendors. Today, I’m seeing a rising tide of alternative architectures including ARM, converged, and storage-focused designs from an increasing cadre of sources, including vendors sharing open designs (OCP). With improved automation, these new “non-cloud” options become part of the dynamic infrastructure spectrum.
Today these seven items create complexity and confusion as we work to balance the new concepts and technologies. I can see a path forward that redefines IT to be both more flexible and dynamic while also being stable and performant.
Want more 2015 predictions? Here’s my OpenStack EOY post about limiting/expanding the project scope.