Podcast – Syed Zaeem Hosain on Edge, IoT, and Reality

Joining us this week is Syed Zaeem Hosain, CTO and Founder of Aeris from the KeyBanc Emerging Tech Summit.

About Aeris
Aeris is a technology partner with a proven history of helping companies unlock the value of IoT. For more than a decade, we’ve powered critical projects for some of the most demanding customers of IoT services. Aeris strives to fundamentally improve businesses by dramatically reducing costs, accelerating time-to-market, and enabling new revenue streams. Built from the ground up for IoT and globally tested at scale, Aeris IoT Services are based on the broadest technology stack in the industry, spanning connectivity up to vertical solutions. As veterans of the industry, we know that implementing an IoT solution can be complex, and we pride ourselves on making it simpler.

Highlights

  • 0 min 48 sec: Introduction of Guest
  • 1 min 27 sec: Edge is already here for Aeris – Mobile Data Presence
    • Support customers who have a need for long distance data transport over cellular
    • Focused on device connectivity
    • Edge devices will have processing power of their own
  • 4 min 31 sec: Car as Edge Data Center Issues
    • Better to move processing off the car? Cost issue for sending data via cellular
    • Tire Pressure System Example
    • 5G Cost may not be dramatically lower as people expect
  • 7 min 24 sec: Can’t Send all Data Back ~ Need Local Machine Learning
    • Great deal of irrelevant data (e.g. Tire pressure)
    • Can send lots of data to train models as well – Airplane example
  • 12 min 04 sec: Dave McCrory Podcast on Airplane Use Case / Data Gravity
    • Security in Data Gathering Algorithms – Must validate the source of data
    • Use of aggregated data to monitor data validity
  • 17 min 11 sec: Sharing of Data in Edge Models
    • Issues with Security, Ownership, etc of data
    • Windshield wipers on cars for weather info
    • Source of data – how participate in money chain?
  • 21 min 45 sec: Billing for Pennies is a Problem
    • Billing systems are an issue for tracking revenue
    • ROI in IoT space is an open issue
  • 23 min 36 sec: Blockchain can help here?
  • 25 min 21 sec: What is the ROI for adding more devices into the IoT model?
    • Medical sensors (skin monitoring, pressure points in eye for monitoring)
    • Human privacy is a massive issue in this space
  • 34 min 02 sec: BOOK – Definitive Guide to IoT for Business (Free)
  • 35 min 37 sec: Wrap-Up

Podcast Guest: Syed Zaeem Hosain, CTO and Founder of Aeris

Mr. Hosain is responsible for the architecture and future direction of Aeris’ networks and technology strategy. He joined Aeris in 1996 as Vice President, Engineering and is a member of the founding executive team of Aeris. Mr. Hosain has more than 38 years of experience in the semiconductor, computer, and telecommunications industries, including product development, architecture design, and technical management. Prior to joining Aeris, he held senior engineering and management positions at Analog Devices, Cypress Semiconductor, CAD National, and ESS Technology. Mr. Hosain is Chairman of the International Forum on ANSI‐41 Standards Technology (IFAST) and Chairman of the IoT M2M Council (IMC). He holds a Bachelor of Science degree in Computer Science and Engineering from the Massachusetts Institute of Technology, Cambridge, MA.

Podcast – Val Bercovici on why Lawyers and Insurance Companies drive good IT Practices

Joining us this week is Val Bercovici, Founder & CEO of PencilDATA at KeyBanc Emerging Tech Summit.

About PencilDATA

PencilDATA is a software-as-a-service startup focused on data governance, allowing users to see and manipulate data easily. It achieves this with a blockchain ledger that accounts for all data activity.

Highlights

  • 0 min 48 sec: Introduction from Val Bercovici
  • 4 min 47 sec: What do you use Blockchain for?
    • Core to the value but is just an enabler
  • 5 min 52 sec: Don’t confuse Blockchain with ICO/Crypto-currency
    • Blockchain rollouts in 2018 have paused ~ Gartner quote
    • HOMEWORK – Podcast with BlockChain Technology Partners
    • PencilDATA has built the S3 of Blockchains
  • 10 min 08 sec: Blockchain Service or Acting on BC with distributed service?
    • Useful Zero Trust Solution
    • Autonomous Vehicle Taxi Service Use Case
    • Same workflow and processes needed for medical equipment
  • 15 min 59 sec: Why Distributed Ledgers for these Use Cases?
    • Centralized authorities can be a single point of failure
  • 19 min 51 sec: Data security issues around Veracity & Authentication
    • Data poison can train AI poorly
    • Explainable AI ~ Data Reproducibility; prove data is valid
  • 25 min 54 sec: Salesforce Use Cases
    • Live on the Salesforce App Exchange
    • Launching at Dreamforce 2018 with Customer References
  • 27 min 18 sec: How does PencilDATA provide value to Salesforce Customers?
    • SaaS Turnkey Solution that hides the behind the scenes work
    • Costs for Ethereum usage
  • 37 min 14 sec: Industrial IoT Use Case
  • 37 min 31 sec: Wrap-Up

Podcast Guest: Val Bercovici, Founder & CEO of PencilDATA

Valentin (Val) Bercovici is a longtime NetApp executive, former SolidFire CTO, and co-founder of Peritus.ai. Val is now Founder & CEO of PencilDATA – an early leader in Tamper-Proofing Digital Transformation.

Previously, Bercovici led teams driving change across his 19 years with NetApp/SolidFire. Val’s teams have played an integral role in successfully growing the company beyond a pure storage play into the Cloud, Analytics & DevOps eras. Bercovici, a pioneer in the Cloud industry, introduced the first International Cloud Standard to the marketplace as CDMI (ISO INCITS 17826) in 2012 and has several patents granted & pending around data center applications of augmented reality.

Podcast – Ash Young talks Everything in your PC is IoT

Joining us this week is Ash Young, Chief Evangelist of Cachengo and OPNFV Ambassador. Cachengo builds smart, predictive storage for machine learning.

NOTE – We had a microphone problem that is solved at the 9 minute 19 second mark of the podcast. Start there if you find the clicking noise an issue.

Highlights

  • 1 min 34 sec: Time to Change Basic Storage Architecture
    • Converged Protocol Appliances & Nothing has changed from the early 90s
  • 7 min 8 sec: Sounds like Hadoop?
    • Underlying hardware still used proprietary protocols
  • 9 min 19 sec: Single Drive Cluster – it’s built?
    • 24 Servers and 24 Drives in a 1U; has done 48 drives
    • Working on a new design for 96 drives in a 1U
  • 11 min 52 sec: Truly a Distributed Storage Array
    • Storage focused microservers
  • 13 min 24 sec: Limitations in Operations with Hardware
    • Hinders Innovation
  • 15 min 40 sec: Lessons Learned on Managing Devices
    • Over-dependence on tunneling protocols requiring full networking (e.g. VPN)
    • Move to peer-to-peer network slicing
  • 17 min 28 sec: Software Defined Networking Topology
    • Introduce devices to each other and get out of the way
  • 18 min 33 sec: Every Storage Node is Part of the Network
    • Moves into a world of networking challenges
    • IPv4 cannot support this model
  • 21 min 06 sec: Networking Magic in the Model
    • Peer to Peer w/ Broker Introduction and then Removal from Traffic
    • Scale out for Edge Computing Requires this New Model
    • 5G Energy Cost Savings are a Must
  • 27 min 28 sec: Issues of Powering On/Off Machines to Save Money
    • Creating a massive array of smaller GPUs for Machine Learning
    • Build a fast, cheap, lower power storage system to get started in the model
  • 34 min 09 sec: Doesn’t fit the model that Edge infrastructure will be Cloud patterned
    • Rob makes a point to listeners to consider various ideas in future Edge infrastructure
  • 36 min 48 sec: State of Open Source?
    • Consortiums and open source standards
    • Creating the lowest common denominator free thing so competitors can build differentiation on top of it for revenue
    • Not a fan of open core models
  • 41 min 44 sec: Does Open Source include Supporting Implementation?
    • Look at the old WINE project financing
    • You can’t just deploy people onsite for free
  • 48 min 24 sec: Wrap-Up

Podcast Guest: Ash Young, Chief Evangelist of Cachengo

Technology leader with over 20 years of experience, primarily in storage. Created the first open source NAS (network attached storage) stack, the first unified block/file storage stack for Linux, the first storage management software, and the list goes on.

Since 2012, I have been heavily involved in NFV (Network Functions Virtualization). I wrote a bunch of the standards and was editor for the Compute/Storage Domain in the Infrastructure Working Group for NFV. And then I started up the open source effort to close the gaps for achieving our vision of the NFVI. This was the precursor to OPNFV.

The best way to understand what I do is to imagine being a high-level marketing exec who comes up with a whiz bang product and business idea, including business plan, competitive analysis, MRD, everything, but now comes the hand-off with your engineering organization, only to hear a litany of nos. Well, I got tired of being told “No, it can’t be done” or “No, we don’t know how to do it”, so I started doing it myself. I call this skill “Rapid Prototyping”, and over the years I have found it to be a very missing gap in the product development process. When Marketing comes up with ideas, we need a way to very efficiently validate the technology and business concepts before we commit to a lengthy engineering cycle.

I’m just one person, working in a company of over 180,000 people and in a very dynamic industry. Getting creative and influencing businesses means there is never a dull moment; and I will probably be 100 years old and still writing open source software.

Podcast – Aaron Delp on Focus of Data, IoT, and Open Source

Joining us this week is Aaron Delp, Director of Technical Solutions, Cohesity. Aaron and Brian Gracely manage the well-known podcast, The Cloudcast, with over 340 podcasts.

Highlights

  • Data returns to the data center in data transformation
  • Best of breed world and impact of refresh cycles on hardware and software
  • Data in the edge and hardware processors at the edge
  • Latency issues for long haul data center(s) storage & Metadata about location
  • Fragmented market coming for multi-vendor IoT processing?
  • Is open source a good model for vendors? Issue on monetization of open source
  • Commercial drivers impact on open source sustainability
  • Community vs Ecosystem

Topic                                                                                   Time (Minutes.Seconds)
Introduction                                                                        0.0 – 2.15
Cohesity                                                                              2.15 – 2.51
Cloudcast                                                                           2.51 – 4.13 (Over 340 Podcasts)
Data Side of Data Center                                                 4.13 – 7.05
Data Transformation                                                         7.05 – 9.01
Complexity is Enemy of Operations                              9.01 – 10.51
Best of Breed World                                                         10.51 – 15.06 (Refresh Cycle is Over)
Where Place Data? Edge                                                 15.06 – 20.01 (IoT & Edge Podcast)
All about Latency and Long Haul to Data Center       20.01 – 24.44 (Localized Metadata)
Multi-Vendor IoT Processing                                           24.44 – 30.09
Is Open Source good for Vendors/Users                     30.09 – 37.02 (Docker & John Willis Podcast)
Open Source Sustainability                                             37.02 – 41.11
Open Source Focus on Small Core w/ Ecosystem     41.11 – 43.13
How does Open Source help Cohesity?                        43.13 – 46.17
Wrap Up                                                                              47.17 – END

Podcast Guest: Aaron Delp, Director of Technical Solutions, Cohesity

Aaron Delp leads the Technical Solutions Marketing team for Cohesity, which is responsible for building industry leading reference architectures and solutions around Cohesity’s hyperconverged secondary storage platform. Prior to Cohesity, Aaron led solutions teams to launch multiple infrastructure platforms into the market and developed the solutions ecosystem around each.

In his free time Aaron enjoys running, rock climbing (when his elbow isn’t acting up) and publishes a top 100 Technology podcast on iTunes, The Cloudcast, covering all things cloud computing.


Getting Edge-y at OpenStack Summit – 5 ways it’s an easy concept with hard delivery

The 2018 Vancouver OpenStack Summit is very focused on IT infrastructure at the Edge. It’s a fitting topic considering the telcos’ embrace of the project; however, building the highly distributed, small-footprint management needed for these environments is very different from OpenStack’s architectural priorities. There is a significant risk that the community’s bias towards its current code base (which still needs work to serve hyper-scale and enterprise data centers) will undermine progress in building suitable Edge IT solutions.

There are five significant ways that Edge is different from the “traditional” datacenter.  We often discuss this on our L8istSh9y podcast, and it’s time to summarize them in a blog post.

IT infrastructure at the Edge is different than “edge” in general. Edge is often used as a superset of Internet of Things (IoT), personal devices (phones) and other emerging smart devices. Our interest here is not the devices but the services that are the next hop back supporting data storage, processing, aggregation and sharing. To scale, these services need to move from homes to controlled environments in shared locations like 5G towers, POP and regional data centers.

Unlike built-to-purpose edge devices, the edge infrastructure will be built on generic commodity hardware.

Here are five key ways that managing IT infrastructure at the edge is distinct from anything we’ve built so far:

  • Highly Distributed – Even at hyper-scale, we’re used to building cloud platforms in terms of tens of data centers; however, edge infrastructure sites will number in the thousands and millions!  That’s distinct management sites, not servers or cores. Since the sites will not have homogeneous hardware specifications, they require zero-touch management that is vendor neutral, resilient, and secure.
  • Low Latency Applications – Latency is the reason why Edge needs to be highly distributed.  Edge applications like A/R, V/R, autonomous robotics and even voice controls interact with humans (and other apps) in ways that require microsecond response times.  This speed of light limitation means that we cannot rely on hyper-scale data centers to consolidate infrastructure; instead, we have to push that infrastructure into the latency range of the users and devices.
  • Decentralized Data – A lot of data comes from all of these interactive edge devices.  In our multi-vendor innovative market, data from each location could end up being sprayed all over the planet.  Shared edge infrastructure provides an opportunity to aggregate this data locally where it can be shared and (maybe?) controlled. This is a very hard technical and business problem to solve.  While it’s easy to inject blockchain as a possible solution, the actual requirements are still evolving.
  • Remote, In-Environment Infrastructure – To make matters even harder, the sites are not traditional raised floor data centers with 24×7 attendants: most will be small, remote and unstaffed sites that require a truck roll for services.  Imagine an IT shed at the base of a vacant lot cell tower behind rusted chain link fences guarded by angry squirrels and monitored by underfunded FCC regulators.
  • Multi-Tenant and Trusted – Edge infrastructure will be a multi-tenant environment because simple economics drives as-a-Service style resource sharing. Unlike buy-on-credit-card public clouds, the participants in the edge will have deeper, trusted relationships with the service providers.  A high degree of trust is required because distributed application and data management must be coordinated between the Edge infrastructure manager and the application authors.  This level of integration requires deeper trust and inspection than current public clouds require.

These are hard problems!  Solving them requires new thinking and tools that while cloud native in design, are not cloud tools.  We should not expect to lift-and-shift cloud patterns directly into edge because the requirements are fundamentally different.  This next wave of innovation requires building for an even more distributed and automated architecture.

I hope you’re as excited as we are about helping build infrastructure at the edge.  What do you think the challenges are? We’d like to hear from you!

Podcast – Chetan Venkatesh talks Edge, IoT, and Dishwashers as a Service

Joining us this week is Chetan Venkatesh, CEO/President of Macrometa, a stealth startup. Chetan is actively engaged in the data issues for edge computing and provides insight into the reality of edge computing and its changes in application development and delivery.

Highlights

  • Overview of Edge Computing and Chetan’s 3 Edges
  • Internet of things, gateways and data aggregation
  • Can Telcos compete against cloud providers?
  • How apps handle massive scale? Developer’s support? Distributed architecture?
  • Multi-tenancy impact on edge infrastructure?
  • Re-think where data resides to support user location
  • What is possible with IoT is unknown
  • What are the first movers in the edge computing space?

Topic                                                                                   Time (Minutes.Seconds)

Introduction                                                                                 0.0 – 1.20
Background Info (Journey to Edge)                                         1.20 – 4.18
Edge Computing Definition (3 Edges)                                     4.18 – 7.28
Car as IoT?                                                                                    7.28 – 9.05
IoT Gateways                                                                                9.05 – 10.57
Data Aggregation                                                                        10.57 – 14.51
Can Telcos Build the Infrastructure like Cloud Providers?   14.51 – 19.03
App Management / Data State Problem                                19.03 – 29.00 (Redis Lab Podcast)
Distributed Edge Infrastructure – Multi-Tenant?                    29.00 – 32.16
Where Data Reside for Edge? (Not Blockchain)                     32.16 – 37.59
Value of Data & IoT Possibilities                                                37.59 – 40.14
First Movers in this Space                                                          40.14 – 44.33 (Dishwasher as a Service)
Wrap Up                                                                                        44.33 – END

Podcast Guest: Chetan Venkatesh, CEO/President Macrometa

Founder and executive focused on enterprise data center, cloud infrastructure, and software products/companies. Strong competency in helping early-stage teams find & exploit product-market fit, win early customers, build repeat sales, and scale startups from pre-revenue to growth stage and profitability. Experienced in building strong and productive teams in sales, product development, and product management; gaining early lighthouse customer wins; stage-centric positioning with customers & partners; venture scaling; and capital raises (equity and debt). Chetan Venkatesh was the Founder, President, and Chief Executive Officer of Atlantis.

RackN and Digital Rebar Philosophy of Provisioning

Re-defining physical automation to make it highly repeatable and widely consumable while also meeting the necessarily complex and evolving heterogeneous data center environment is the challenge the RackN team is solving. To meet this challenge, we have developed a unique philosophy in how we build our technology; both open source Digital Rebar and the additional RackN packages.

  • Stand-alone Provisioning
  • Building Software from the API
  • Single Golang Executable
  • Modular Components – Composable Content
  • Operator Defined Workflows
  • Immutable Infrastructure
  • Distributed or Consolidated Architectures

Stand-alone Provisioning

It is critical that Digital Rebar Provision (DRP) provides operators the maximum flexibility in terms of where to run the service (Server, Top-of-Rack Switch, ARM, Intel, etc.) as well as removal of any dependencies that might restrict its deployment.  Each environment has its own unique Infrastructure DNA: the hardware, operating systems, and application stacks that drive the Infrastructure underlay.

Building Software from the API

The Digital Rebar Provision solution is built with an API-first mentality.  Features and enhancements are implemented as an API (making it a first-class citizen), and the CLI is dynamically generated from the API, which ensures 100% coverage of the API implementation within the CLI.

This methodology also allows for the CLI to directly follow the structure and syntax of the API, making it easy for an Operator or Developer to understand and flexibly interchange the API and CLI syntax.  
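The benefit of deriving the CLI from the API can be sketched in a few lines. The following is a hypothetical Python illustration (not the actual DRP implementation, which is written in Go); the resource names and the `/api/v3` path scheme here are assumptions for the example. Because the CLI command table is generated mechanically from the API route definitions, every API operation gets a CLI command by construction:

```python
# Hypothetical API route table: each resource exposes a set of verbs.
API_ROUTES = {
    "machines": ["list", "show", "create", "destroy"],
    "workflows": ["list", "show"],
}

def generate_cli(routes):
    """Derive the CLI command table directly from the API definition.

    Since commands are generated rather than hand-written, the CLI cannot
    drift out of sync with the API: 100% coverage is guaranteed.
    """
    cli = {}
    for resource, verbs in routes.items():
        for verb in verbs:
            # Each CLI command maps back to an API endpoint (illustrative path).
            cli[f"{resource} {verb}"] = (verb, f"/api/v3/{resource}")
    return cli

cli = generate_cli(API_ROUTES)
# Every API operation is reachable from the CLI, e.g. "machines create".
assert "machines create" in cli
```

The same property gives the syntax mirroring described above: because the CLI is a projection of the API, an operator who knows one can mechanically translate to the other.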

At RackN we believe strongly in the 12-Factor App methodology for designing modern software.  DRP is a direct reflection of these principles.

Single Golang Executable

DRP is built with Golang, a modern procedural language that is easily cross-compiled for multiple operating systems and processor architectures.  As a benefit, the DRP service and CLI tool (dr-provision and drpcli, respectively) can run on platforms ranging from small Raspberry Pi embedded systems and Top-of-Rack network switches to huge Hyper Converged Infrastructure (HCI) servers, and everything in between.  It is currently compiled for and runs on Linux (ARM and Intel, 32-bit and 64-bit), Mac OS X (64-bit), and Windows (64-bit).

The dr-provision binary is very small and lightweight, requiring almost zero external dependencies.  The current external dependencies are unzip, p7zip, and bsdtar, and these should be removed in a future version.  At only 30 MB in size, it requires fairly few resources to run.

Modular Components – Composable Content

Modular architecture allows us to create complex solutions from a set of simple, well-tested building blocks. Breaking complex problems down into small components, and then applying strong templating capabilities, creates a structure that allows for strong reuse patterns.  This approach permeates all of the “Content” components that form the foundational building blocks for composable provisioning activities.
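As a concept sketch, composable content means small reusable fragments are rendered with environment-specific parameters and assembled into a larger provisioning artifact. The following Python example is a simplified illustration only (DRP’s real templates use Go templating, and the fragment names and parameters here are invented):

```python
# Hypothetical sketch of composable content: small, reusable template
# fragments are combined into one provisioning artifact.
from string import Template

# Each fragment is a small, independently testable building block.
FRAGMENTS = {
    "network": Template("ip=$ip gateway=$gw"),
    "os": Template("install $os_version"),
}

def compose(order, params):
    """Render the named fragments in order to build one provisioning script."""
    return "\n".join(FRAGMENTS[name].substitute(params) for name in order)

# The same fragments can be recombined with different parameters for
# each environment, which is where the reuse pattern comes from.
script = compose(
    ["os", "network"],
    {"ip": "10.0.0.5", "gw": "10.0.0.1", "os_version": "ubuntu-18.04"},
)
```

Swapping a fragment or a parameter set changes the output without touching the other building blocks, which is the reuse property the paragraph above describes.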

Operator Defined Workflows

Each environment has a unique set of services, applications, tooling, and practices for managing the Infrastructure.  Taking the concepts of Composable Content, we give an operator or developer a flexible structure in which they control how loosely or tightly to integrate the DRP provisioning services into their environment.  Every customer environment has a unique set of tools, and this methodology allows for smooth integration with those operational principles.

Immutable Infrastructure    

Maintaining hardware and software in a massive data center or cloud is a significant challenge without the additional overhead of ensuring that patches are properly applied. Any changes to an active solution can introduce complications on a live system which is a major barrier to having security updates and other patches completed in a timely manner.

A better method is to deploy only a “golden image” to the live system and, rather than patch each individual instance, simply tear down the instance and replace it with a new copy of the “golden image.”  All patches can be applied and tested to create a new golden image, which is then easily rolled out in the create/destroy/re-create model of immutability.
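The create/destroy/re-create pattern can be sketched as follows. This is a hypothetical Python illustration of the rollout logic, not DRP code; the instance records and image names are invented:

```python
# Hypothetical sketch of immutable rollout: instances are never patched
# in place; each one is destroyed and re-created from the new golden image.
def roll_out(instances, new_image):
    """Replace every instance with a fresh copy of the new golden image."""
    return [{"id": inst["id"], "image": new_image} for inst in instances]

fleet = [
    {"id": 1, "image": "golden-v1"},
    {"id": 2, "image": "golden-v1"},
]

# Patches are applied and tested once, producing golden-v2; the fleet is
# then rebuilt from it, so no live system is ever modified in place.
fleet = roll_out(fleet, "golden-v2")
```

Because every instance is a disposable copy of a tested image, drift between live systems and the patched baseline cannot accumulate.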

Distributed or Consolidated Architectures

Traditional data center and lab environments utilize centralized provisioning services.  While DRP has strong support for this scale-up or consolidated model, shifting patterns in application and service deployment topology dictate an evolving provisioning service solution.  Current Internet-of-Things (IoT), Edge, and Fog architectures distribute resources across dispersed environments.

In the traditional model, a large-scale operator might support a handful of datacenters with tens of thousands of hosts in each facility.   These new trending architecture patterns can encompass thousands of different locations, each hosting a few dozen to a few hundred hosts.  This shift creates a significant burden on operational and infrastructure management tooling to support the complexities of these scale-out designs.

With strong multi-endpoint management tooling, the RackN portal can easily support both models for provisioning.  Long-lived scale-up environments with a service that is updated, upgraded, managed, loved, and cared for can exist seamlessly alongside environments with a create/destroy pattern that treats thousands of provisioning endpoints as disposable assets.