January 19: Weekly Review of Digital Rebar and RackN with DevOps and Edge News

Welcome to the weekly post of the RackN blog recap of all things Digital Rebar, RackN, Edge Computing, and DevOps. If you have any ideas for this recap or would like to include content, please contact us at info@rackn.com or tweet RackN (@rackngo).

Items of the Week

Industry News

In 2017, more companies than ever before decided to start their DevOps journey. As with anything new, there’s a learning curve: The trick is identifying missteps before they become bad habits, because habits can be hard to break.

As you refine your DevOps strategies for the new year, it’s important to take a critical look back and seek out these troublemakers. These issues may not be obvious – so we asked business leaders and DevOps practitioners to help by sharing their wisdom on the worst DevOps behaviors standing in the way of success.

Read on for the top 10 offenders. If you’re guilty of any of these, now is the time to kick these bad habits to the curb and maximize DevOps success in 2018.

Connected devices now regularly double as digital hoovers: equipped with a clutch of sensors, they suck in all kinds of information and send it to their maker for analysis. Not so the wireless earbuds developed by Bragi, a startup from Munich. They keep most of what they collect, such as the wearers’ vital signs, and crunch the data locally. “The devices are getting smarter as they are used,” says Nikolaj Hviid, Bragi’s chief executive.

Bragi’s earplugs are at the forefront of a big shift in the tech industry. In recent years ever more computing has been pushed into the “cloud”, meaning networks of big data centres. But the pendulum has already started to swing: computing is moving back to the “edge” of local networks and intelligent devices.

There are so many terms floating around the IT world today. Just as you start to figure out DevOps, DevSecOps or Secure DevOps jumps onto your radar. It’s certainly not a new term by today’s standard of “new,” but it doesn’t have the same recognition that DevOps has.

DevSecOps is as simple as it sounds: it is the conscious integration of security into the DevOps process. With the news about Meltdown and Spectre, having the most efficient security processes is critical. The mindset of both DevOps and DevSecOps is essentially the same: increase collaboration and efficiency. One question you might be asking is: what is the benefit of DevSecOps versus DevOps alone?

Digital Rebar

RackN

An interesting paradox in technology is our desire to obsess over the latest shiny object promising the moon (note our L8ist Sh9y Podcast); however, we tend to hold on to our reliable, dependable solutions long after they become outdated. A great example of this reliance on outdated technology is the well-known Linux provisioning tool Cobbler.

Cobbler was built specifically for Linux in the pre-cloud days, with version 2.2.3-1 released in June 2012. The product continues on a schedule of two releases a year, with the last update in September 2017. There is no commercial support, minimal development, and hardly anyone keeping the lights on. In today’s security landscape, that’s not a safe place for a critical infrastructure service.

The Digital Rebar community has taken the lessons learned from the Cobbler community to heart.

We’ve built a SaaS-based platform that brings the efficiency and automation of the cloud into your existing infrastructure. It’s called RackN – making provisioning, control, and orchestration simple. We built it to give organizations like yours the benefits you see others getting through public clouds like AWS and Google. Things like compliance, repeatability, scalability, security, and speed. It’s a platform made to overcome the difficult operational challenges of physical infrastructure.

Obtain access to the latest RackN technology with support and training from the RackN team. Additional services for customized engagements are available. Start your 30-day trial of RackN software today.

L8ist Sh9y Podcast

In this week’s podcast, we speak with Dave McCrory, VP of Engineering for Machine Learning at GE Digital. He focuses on several interesting topics:

• Data Gravity Overview
• Data “Training” – Monetization – Application Usage in Edge
• Multi-Tenancy in Edge?

UPCOMING EVENTS

Follow the latest info on RackN and Digital Rebar events at www.rackn.com/events

Podcast: Dave McCrory on Data Gravity, Data Inertia, and Edge

In this week’s podcast, we speak with Dave McCrory, VP of Engineering for Machine Learning at GE Digital. He focuses on several interesting topics:

  • Data Gravity Overview
  • Data “Training” – Monetization – Application Usage in Edge
  • Multi-Tenancy in Edge?

Topic                                          Time (Minutes.Seconds)

Introduction                                   0.00 – 0.33
Data Gravity                                   0.33 – 4.36  (CTO Advisor Podcast)
Latency vs Volume of Data                      4.36 – 9.00  (Data Gravity Mathematics)
Day Job at GE                                  9.00 – 11.25
Training the Data in the Field                 11.25 – 14.38
Core Data Centers                              14.38 – 18.03
Half-Life on a Data Model                      18.03 – 19.27
Keep the Data? Plane Example                   19.27 – 24.58 (Data Inertia)
Monetize Data in Motion                        24.58 – 29.45 (Uber Credit Card)
Data at the Edge for App Usage                 29.45 – 36.40 (Augmented Reality Example)
Portability of Processing and Platforms        36.40 – 41.45
Scale Needs Multi-Tenant                       41.45 – 46.00
Wrap-Up                                        46.00 – END

Podcast Guest
Dave McCrory, VP of Engineering for Machine Learning at GE Digital

Currently I’m the VP of Engineering for the ML division of GE Digital. Our group creates scalable, production-ready solutions for the internal business units of GE. We focus on solving complex Industrial IoT problems using Machine Learning in industries such as Aviation, Energy, Healthcare, and Oil & Gas, to name a few.

Follow Dave at https://blog.mccrory.me/

Why cloud compute will be free

Today at Dell, I was presenting to our storage teams about cloud storage (aka the “storage banana”) and Dave “Data Gravity” McCrory reminded me that I had not yet posted my epiphany explaining “why cloud compute will be free.” This realization derives from other topics that he and I have blogged about but not stated so simply.

Overlooking the fact that compute is already free at Google and Amazon, you must understand that it’s a cloud-eat-cloud world out there where losing a customer places your cloud in jeopardy. Speaking of Jeopardy…

Answer: Something sought by cloud hosts to make profits (and further the agenda of our AI overlords).

Question: What is lock-in?

Hopefully, it’s already obvious to you that clouds are all about data.  Cloud data takes three primary forms:

  1. Data in transformation (compute)
  2. Data in motion (network)
  3. Data at rest (storage)

These three forms combine to create cloud architecture applications (service oriented, externalized state).
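To make “externalized state” concrete, here is a minimal sketch (not RackN or Azure code) of the pattern: the service keeps no state of its own and reads and writes everything through an external store. The StateStore and handle_request names are invented for illustration; the store stands in for any cloud storage service.

    # Illustrative sketch: a stateless, service-oriented handler whose state lives
    # entirely in an external store (data at rest). The handler only transforms
    # data (compute) and moves it over the network.

    class StateStore:
        """Stand-in for an external storage service (object store, KV store, database)."""
        def __init__(self):
            self._data = {}

        def get(self, key, default=None):
            return self._data.get(key, default)

        def put(self, key, value):
            self._data[key] = value


    def handle_request(store, user_id):
        """Stateless handler: all state comes from and returns to the external store,
        so any instance anywhere can serve the call."""
        visits = store.get("visits:" + user_id, 0) + 1
        store.put("visits:" + user_id, visits)
        return visits


    store = StateStore()
    for _ in range(3):
        count = handle_request(store, "alice")
    print(count)  # 3 -- the handler itself kept no state between calls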

The challenge is to find a compelling charge model that both:

  1. Makes it hard to leave your cloud AND
  2. Encourages customers to use your resources effectively (see #1 in Azure Top 20 post)

While compute demands are relatively elastic, storage demand is very consistent, predictable, and constantly growing. Data is easily measured and difficult to move. In this way, data represents the perfect anchor for cloud customers (model rule #1). A host with a growing data consumption footprint will have a long-term predictable revenue base.
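A back-of-the-envelope calculation shows why data is “difficult to move” and therefore such an effective anchor. The footprint sizes, link speed, and efficiency below are illustrative assumptions, not figures from the post:

    # Illustrative arithmetic: how long it takes to move a data footprint over a
    # dedicated link. Sizes, link speed, and efficiency are assumed for the example.

    def transfer_days(terabytes, link_gbps, efficiency=0.8):
        """Days to move `terabytes` over a `link_gbps` link at the given efficiency."""
        bits = terabytes * 1e12 * 8                      # TB -> bits
        seconds = bits / (link_gbps * 1e9 * efficiency)  # effective bits per second
        return seconds / 86400

    for tb in (10, 100, 1000):
        print(f"{tb:>5} TB over 1 Gbps: ~{transfer_days(tb, 1.0):.1f} days")
    # Roughly 1.2, 11.6, and 115.7 days -- the bigger the footprint, the stickier the customer.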

However, storage consumption alone does not encourage model rule #2. Since storage is the foundation for the cloud, hosts can fairly judge resource use by measuring data egress, ingress, and sidegress (attrib @mccrory 2/20/11). This means tracking not only data in and out of the cloud, but also data transacted between the provider’s own cloud services. For example, Azure charges for both data at rest ($0.15/GB/mo) and data in motion ($0.01/10K).
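As a rough sketch of how that charge model adds up, the toy bill below combines the two quoted rates; reading “$0.01/10K” as $0.01 per 10,000 storage transactions is my assumption, and the usage figures are invented for illustration:

    # Toy monthly bill under the quoted model: $0.15/GB/month for data at rest plus
    # $0.01 per 10,000 transactions (my reading of "$0.01/10K"). Usage is invented.

    RATE_AT_REST_PER_GB = 0.15        # $ per GB per month
    RATE_PER_10K_TRANSACTIONS = 0.01  # $ per 10,000 transactions

    def monthly_bill(gb_stored, transactions):
        storage = gb_stored * RATE_AT_REST_PER_GB
        motion = (transactions / 10_000) * RATE_PER_10K_TRANSACTIONS
        return storage + motion

    # 1,000 GB at rest and 10 million transactions: $150 + $10 = $160.
    # The at-rest charge dominates, which is why data at rest is the anchor.
    print(monthly_bill(gb_stored=1_000, transactions=10_000_000))  # 160.0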

Consequently, the financially healthiest providers are the ones with most customer data.

If hosting success is all about building a larger, persistent storage footprint, then service providers will give away services that drive data at rest and/or in motion. Giving away compute means eliminating the barrier for customers to set up web sites, develop applications, and build their business. As these accounts grow, they will deposit data in the cloud’s data bank and ultimately deposit dollars in their piggy bank.

However, there is a no-free-lunch caveat: free compute will not have a meaningful service level agreement (SLA). The host will continue to charge customers who need their applications to operate consistently. I expect that we’ll see free compute (or “spare compute” from the cloud provider’s perspective) heavily used for early life-cycle (development, test, proof-of-concept) and background analytic applications.

The market is starting to wake up to the idea that cloud is not about IaaS – it’s about who has the data and the networks.

Oh, dem golden spindles!  Oh, dem golden spindles!

Cloud Gravity – launching apps into the clouds

Dave McCrory’s Cloud Gravity series (Data Gravity & Escape Velocity) brings up some really interesting concepts and has led to some spirited airplane discussions while Dell shuttled us to an end-of-year strategy meeting. Note: whoever was on American 34, seats 22A/C – we apologize if we were too geek-rowdy for you.

Dave’s Cloud Gravity is the latest unfolding of how clouds are evolving as application architectures become more platform capable. I’ve explored these concepts in previous posts (Storage Banana, PaaS vs IaaS, CAP Chasm) to show how cloud applications are using services differently than traditional applications.

Dave’s Escape Velocity post got me thinking about how cleanly Data Gravity fits with cloud architecture change and the CAP theorem.

My first sketch shows how traditional applications are tightly coupled with the data they manipulate. For example, most apps work directly on files or over a direct database connection. These apps rely on very consistent and available data access. They are effectively in direct contact with their data, much like a building resting on its foundation. That works great until your building is too small (or too large). In that case, you’re looking at a substantial time delay before you can expand your capacity.

Cloud applications have broken into orbit around their data. They still have close proximity to the data, but they do their work via more generic network connections. These connections add some latency, but allow much more flexible and dynamic applications. Working within the orbit analogy, it’s much, much easier to realign assets in orbit (cloud servers) to help do work than to move buildings around on the surface.

In the cloud application orbital analogy, components of applications may be located in close proximity if they need fast access to the data.  Other components may be located farther away depending on resource availability, price or security.  The larger (or more valuable) the data, the more likely it will pull applications into tight orbits.

My second sketch extends the analogy to show that our cloud universe is not simply point apps and data sources. There is truly a universe of data on the internet, with huge sources (Facebook, Twitter, New York Stock Exchange, my blog, etc.) creating gravitational pull that brings other data into orbit around them. Once again, applications can work effectively on data at stellar distances but benefit from proximity (“location does not matter, but proximity does”).
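To put rough numbers on “location does not matter, but proximity does,” here is an illustrative sketch; the round-trip times, request count, and bandwidth are assumptions chosen only to show the shape of the effect:

    # Illustrative sketch of why a tight "orbit" around the data wins: a chatty
    # workload's total time is dominated by round trips, not bandwidth.
    # RTTs, request count, and bandwidth are assumed for the example.

    def workload_seconds(requests, rtt_ms, mb_per_request, bandwidth_mbps=1000.0):
        """Total seconds for `requests` calls: one round trip plus transfer time each."""
        per_request = rtt_ms / 1000.0 + (mb_per_request * 8) / bandwidth_mbps
        return requests * per_request

    chatty_requests = 100_000  # many small reads against the data source
    print("same data center:", workload_seconds(chatty_requests, rtt_ms=0.5, mb_per_request=0.01))
    print("cross-continent :", workload_seconds(chatty_requests, rtt_ms=80.0, mb_per_request=0.01))
    # ~58 s vs ~8,008 s: at a distance the app spends nearly all of its time waiting on latency.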

Looking at data gravity in this light leads me to expect a data race where clouds (PaaS and SaaS) seek to capture as much data as possible.