Podcast – Antonio Pellegrino on Infrastructure as Software at the Edge

Joining us this week is Antonio Pellegrino, Founder & CEO at Mutable.

About Mutable
Mutable helps software developers create scalable and fast web services by automating DevOps and providing edge technology all around the world.

Highlights

  • 0 min 35 sec: Introduction of Guest
  • 1 min 09 sec: Are your products edge-focused or just a market to promote to?
    • Building products to meet customers’ needs for containers, including edge
  • 2 min 58 sec: Sounds like a platform?
    • Developers create policies on latency, hardware needs, etc. (see the policy sketch after this list)
  • 5 min 51 sec: Be aware of the location and structure of deployment hardware
    • Includes management and tracking of network traffic
    • Services see everything as one network even though it isn’t
  • 7 min 49 sec: Why not Kubernetes instead?
    • Kubernetes built for single network
  • 9 min 16 sec: Edge vs Core
    • Temporary services are a big factor at the edge
    • HBO and Game of Thrones example for supporting access to the feed
  • 13 min 45 sec: Edge is not infinitely elastic
    • Facial recognition example on iPhone
  • 17 min 53 sec: Algorithms to spin up services when needed are not always available
  • 19 min 52 sec: How does software learn what to spin up and down?
  • 21 min 22 sec: How do you manage storage?
    • Follows standard S3 storage model
  • 24 min 15 sec: NICs are a core component ~ a language for packet management
    • Sounds like a micro-kernel
    • Why don’t people leverage this concept more often?
  • 28 min 32 sec: Manipulate and manage networking at application layer
    • Mutable is building a full stack
    • Operator approach vs developer approach on networking
  • 33 min 21 sec: Control Plane challenges
  • 34 min 27 sec: What is the starting experience with Mutable like?
  • 35 min 55 sec: How is the cable industry responding to edge computing?
    • Opportunity to deploy 5G
  • 39 min 23 sec: Wrap-Up
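
The policy idea from the 2:58 segment is easy to picture in code. Below is a purely hypothetical sketch in Python; the policy fields, the `satisfies` helper, and the site data are all invented for illustration (Mutable’s real policy format isn’t shown in the episode). The point is that the developer declares constraints and the platform picks the edge locations that satisfy them:

```python
# Hypothetical developer-declared placement policy (not Mutable's actual format).
policy = {
    "service": "video-transcoder",
    "max_latency_ms": 20,                     # place close enough to meet this bound
    "hardware": {"gpu": True, "min_memory_gb": 8},
}

def satisfies(site, policy):
    """True if an edge site can host the service under the policy."""
    hw = policy["hardware"]
    return (site["latency_ms"] <= policy["max_latency_ms"]
            and (site["gpu"] or not hw["gpu"])
            and site["memory_gb"] >= hw["min_memory_gb"])

# Illustrative candidate sites: a nearby point of presence vs. a distant core region.
sites = [
    {"name": "pop-nyc", "latency_ms": 8, "gpu": True, "memory_gb": 16},
    {"name": "region-us-east", "latency_ms": 45, "gpu": True, "memory_gb": 64},
]
print([s["name"] for s in sites if satisfies(s, policy)])  # -> ['pop-nyc']
```

Only the edge point of presence meets the 20 ms bound; the core region, despite more capacity, is filtered out by the latency constraint.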

Podcast Guest: Antonio Pellegrino, CEO and founder of Mutable

Antonio is the CEO and founder of Mutable, the next-generation platform as a service for microservices and distributed computing. He is a serial entrepreneur and ran one of the largest e-sports streaming companies of its time. Antonio has built startups centered around developer tools for the past 8 years, saving their customers millions of dollars. Mutable has been a driver in microservices and distributed computing as we know it, allowing developers to push applications to the edge.

The 451 Group Cloudscape report strikes a chord, misses harmony (DevOps, Hybrid Cloud, Orchestration)

It’s impossible to resist posting about this month’s 451 Group Cloudscape report when it calls me out by name as a leading cloud innovator:

… ProTier founders Dave McCrory and Rob Hirschfeld. ProTier [note: now part of Quest] was, indeed, the first VMware ecosystem vendor to be tracked by The 451 Group. In the face of a skeptical world, these entrepreneurs argued that virtualization needed automation in order to realize its full potential, and that the test lab was the low-hanging fruit. Subsequent events have more than vindicated their view (pg. 33).

It’s even better when the report is worth reading and offers insights into forces shaping the industry.  It’s nice to be “more than vindicated” on an amazing journey we started over 10 years ago!

Rather than recite 451’s points (hybrid cloud = automation + orchestration + devops + pixie dust), I’d rather look at the problem a different way as a counterpoint.

The problem is “how do we deal with applications that are scattered over multiple data centers?”

I do not think orchestration is the complete answer.  Current orchestration is too focused on moving around virtual machines (aka workloads).

Ultimately, the solution lies in application architecture; however, I feel that is also a misdirection because cloud is redefining what an “application architecture” means.

Applications are a dynamic mix of compute, storage, and connectivity.

We’re entering an age when all of these ingredients will be delivered as elastic services that will be managed by the applications themselves.  The concept of self management is an extension of DevOps principles that fuse application function and deployment. There are missing pieces, but I’m seeing the innovation moving to fill those gaps.
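To make that concrete, here is a minimal sketch of an application that manages its own elasticity. The `cloud.scale()` call is a hypothetical stand-in for whatever elastic-compute API a provider exposes; the fused function-plus-deployment shape is the point:

```python
import statistics

class SelfManagingService:
    """A service that owns its own ops: it watches its latency SLO and
    scales itself through an (assumed) elastic-compute API handle."""

    def __init__(self, cloud, replicas=2):
        self.cloud = cloud          # hypothetical handle, e.g. cloud.scale(name, n)
        self.replicas = replicas
        self.samples = []

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def reconcile(self):
        # The application, not an operator, decides when to grow.
        if len(self.samples) < 2:
            return
        p95 = statistics.quantiles(self.samples, n=20)[-1]
        if p95 > 100:               # the SLO the app enforces on itself
            self.replicas += 1
            self.cloud.scale("web", self.replicas)
        self.samples.clear()
```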

If you want to see the future of cloud applications then look at the network and storage services that are emerging.  They will tell you more about the future than orchestration.


Why cloud compute will be free

Today at Dell, I was presenting to our storage teams about cloud storage (aka the “storage banana”) and Dave “Data Gravity” McCrory reminded me that I had not yet posted my epiphany explaining “why cloud compute will be free.”  This realization derives from other topics that he and I have blogged about, but never stated so simply.

Overlooking the fact that compute is already free at Google and Amazon, you must understand that it’s a cloud-eat-cloud world out there where losing a customer places your cloud in jeopardy.  Speaking of Jeopardy…

Answer: Something sought by cloud hosts to make profits (and further the agenda of our AI overlords).

Question: What is lock-in?

Hopefully, it’s already obvious to you that clouds are all about data.  Cloud data takes three primary forms:

  1. Data in transformation (compute)
  2. Data in motion (network)
  3. Data at rest (storage)

These three forms combine to create cloud application architectures (service-oriented, with externalized state).

The challenge is to find a compelling charge model that both:

  1. Makes it hard to leave your cloud AND
  2. Encourages customers to use your resources effectively (see #1 in Azure Top 20 post)

While compute demands are relatively elastic, storage demand is very consistent, predictable, and constantly growing.  Data is easily measured and difficult to move.  In this way, data represents the perfect anchor for cloud customers (model rule #1).  A host with a growing data-consumption footprint will have a long-term predictable revenue base.

However, storage consumption alone does not encourage model rule #2.  Since storage is the foundation of the cloud, hosts can fairly judge resource use by measuring data egress, ingress and sidegress (attrib @mccrory 2/20/11).  This means tracking not only data in and out of the cloud, but also data transacted between the provider’s own cloud services.  For example, Azure charges for both data at rest ($0.15/GB/mo) and data in motion ($0.01/10K).
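Plugging those two quoted rates into a back-of-envelope model shows how a bill splits between the at-rest annuity and the metered motion (the 500 GB / 50M-transaction workload below is invented for illustration):

```python
# Rates quoted above for Azure storage.
STORAGE_PER_GB_MONTH = 0.15    # $/GB/month, data at rest
PER_10K_TRANSACTIONS = 0.01    # $/10,000 operations, data in motion

def monthly_bill(stored_gb, transactions):
    at_rest = stored_gb * STORAGE_PER_GB_MONTH
    in_motion = (transactions / 10_000) * PER_10K_TRANSACTIONS
    return at_rest + in_motion

# Illustrative workload: 500 GB stored, 50 million transactions in a month.
print(f"${monthly_bill(500, 50_000_000):,.2f}")  # -> $125.00 ($75 rest + $50 motion)
```

The at-rest line only grows as the customer deposits more data, which is exactly why it makes such an effective anchor.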

Consequently, the financially healthiest providers are the ones with most customer data.

If hosting success is all about building a larger, persistent storage footprint then service providers will give away services that drive data at rest and/or in motion.  Giving away compute means eliminating the barrier for customers to set up web sites, develop applications, and build their business.  As these accounts grow, they will deposit data in the cloud’s data bank and ultimately deposit dollars in their piggy bank.

However, there is a no-free-lunch caveat:  free compute will not carry a meaningful service level agreement (SLA).  The host will continue to charge customers who need their applications to operate consistently.  I expect that we’ll see free compute (or “spare compute” from the cloud provider’s perspective) heavily used for early life-cycle (development, test, proof-of-concept) and background analytic applications.

The market is starting to wake up to the idea that cloud is not about IaaS – it’s about who has the data and the networks.

Oh, dem golden spindles!  Oh, dem golden spindles!

Exploding the Cloud Storage Banana

Storage Banana shows how cloud persistence is functionally diverse and optimized

Internally, my group (specifically Dave McCrory & Greg Althaus) has been kicking around some new ways of expressing clouds in an effort to help reconcile Dell’s traditional and cloud-focused businesses.  We’ve found it challenging to translate CAP theorem and externalized application state into more enterprise-ready concepts.

Our latest effort led to a pleasantly succinct explanation of why cloud storage is different from enterprise storage.  Ultimately, it’s a matter of control and optimization.  Cloud persistence (Cache, Queue, Tables, Objects) is functionally diverse in order to optimize for price and performance while enterprise storage (SAN, NAS, SQL) is control and centralization driven.  Unfortunately for enterprises, the data genie is out of the Pandora’s box with respect to architectures that drive much lower cost and higher performance.

The background on this irresistible transformation begins with seeing storage as a spectrum of services as per the table below.

| Category | Consistency model | Type (protocols) | Examples |
| --- | --- | --- | --- |
| Enterprise | Consistent | Block (SAN): iSCSI, InfiniBand | Amazon EBS, EqualLogic, EMC Symmetrix |
| Enterprise | Consistent | File (NAS): NFS, CIFS | NetApp, PowerVault, EMC CLARiiON |
| Enterprise | Consistent | Database (ACID) | MS SQL, Oracle 11g, MySQL, Postgres |
| Cloud | Distributed, partitioned | Object | DX/Caringo, OpenStack Swift, EMC Atmos |
| Cloud | Distributed, partitioned | Map/Reduce | Hadoop DFS |
| Cloud | Distributed, partitioned | Key-value | Cassandra, CouchDB, Riak, Redis, Mongo |
| Cloud | Distributed, partitioned | Queue (bus) | RabbitMQ, ActiveMQ, ZeroMQ, OpenMQ, Celery |
| Cloud | Transitory | Messaging | AMQP, MSMQ (.NET) |
| Cloud | Transitory | Shared RAM | Memcached, Tokyo Cabinet |
From this table, I approximated the relative price and performance for each component in the storage spectrum.

The result was the “cloud storage banana” graph.  In this graph, enterprise storage clusters in the “compromise” quadrant, where there’s a high price for relatively low performance.  Cloud persistence refuses to be clustered at all.  To save cost and enable distributed data, applications will use cheap but slow object storage.  That drives the need for high-speed RAM-based caches and distributed buses.  These approaches are required when developers build fault tolerance at the application level.
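As a minimal sketch of that pattern, consider one request path that touches all three tiers from the table above. The client libraries (boto3 for the object store, pymemcache for the RAM cache, pika for the bus) are illustrative choices, and running services are assumed; the point is that the application code, not the storage array, stitches the tiers together and absorbs faults:

```python
import boto3
import pika
from pymemcache.client.base import Client as Memcache

s3 = boto3.client("s3")                      # data at rest: cheap, durable, slow
cache = Memcache(("localhost", 11211))       # shared RAM: fast, transitory
mq = pika.BlockingConnection(pika.ConnectionParameters("localhost")).channel()
mq.queue_declare(queue="thumbnails")         # bus: decouples background work

def fetch_profile(user_id):
    blob = cache.get(user_id)                # try the fast tier first
    if blob is None:                         # cache miss is normal, not an error
        obj = s3.get_object(Bucket="profiles", Key=user_id)
        blob = obj["Body"].read()
        cache.set(user_id, blob, expire=300) # repopulate; the app tolerates loss
        mq.basic_publish(exchange="", routing_key="thumbnails", body=user_id)
    return blob
```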

Enterprises have enjoyed the false luxury of perceived hardware reliability.  When those assumptions are removed, applications are freed to scale more gracefully and consider resource cost in their consumption plans.

When we compare the enterprise Pandora’s-box storage to the cloud persistence banana, a more general pattern emerges.  The cloud persistence pattern represents a fragmentation of monolithic, IT-controlled services into a more function-driven architecture.  In this case, we see the desire for speed, distribution, and cost forcing change to application design patterns.

We also see similar dispersion patterns driving changes in compute and networking conventions.

So next time your corporate IT refuses to deploy RabbitMQ or Memcached, just remember my mother’s sage advice for cloud architects: “time flies like an arrow, fruit flies like a banana.”