Podcast – Ash Young talks Everything in your PC is IoT

Joining us this week is Ash Young, Chief Evangelist of Cachengo and OPNFV Ambassador. Cachengo builds smart, predictive storage for machine learning.

NOTE – We had a microphone problem that is resolved at the 9 minute 19 second mark of the podcast. Start there if you find the clicking noise an issue.

Highlights

  • 1 min 34 sec: Time to Change Basic Storage Architecture
    • Converged Protocol Appliances & nothing has changed from the early 90s
  • 7 min 8 sec: Sounds like Hadoop?
    • Underlying hardware still used proprietary protocols
  • 9 min 19 sec: Single Drive Cluster – it’s built?
    • 24 Servers and 24 Drives in a 1U; has done 48 drives
    • Working on a new design for 96 drives in a 1U
  • 11 min 52 sec: Truly a Distributed Storage Array
    • Storage focused microservers
  • 13 min 24 sec: Limitations in Operations with Hardware
    • Hinders Innovation
  • 15 min 40 sec: Lessons Learned on Managing Devices
    • Over-dependence on tunneling protocols requiring full networking (e.g. VPN)
    • Move to peer-to-peer network slicing
  • 17 min 28 sec: Software Defined Networking Topology
    • Introduce devices to each other and get out of the way
  • 18 min 33 sec: Every Storage Node is Part of the Network
    • Moves into a world of networking challenges
    • IPv4 cannot support this model
  • 21 min 06 sec: Networking Magic in the Model
    • Peer to Peer w/ Broker Introduction and then Removal from Traffic (sketched after this list)
    • Scale out for Edge Computing Requires this New Model
    • 5G Energy Cost Savings are a Must
  • 27 min 28 sec: Issues of Powering On/Off Machines to Save Money
    • Creating a massive array of smaller GPUs for Machine Learning
    • Build a fast, cheap, lower power storage system to get started in the model
  • 34 min 09 sec: Doesn’t fit the model that Edge infrastructure will be Cloud patterned
    • Rob makes a point to listeners to consider various ideas in future Edge infrastructure
  • 36 min 48 sec: State of Open Source?
    • Consortiums and open source standards
    • Creating the lowest common denominator free thing so competitors can build differentiation on top of it for revenue
    • Not a fan of open core models
  • 41 min 44 sec: Does Open Source include Supporting Implementation?
    • Look at the old WINE project financing
    • You can’t just deploy people onsite for free
  • 48 min 24 sec: Wrap-Up
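
The “introduce and get out of the way” broker pattern from the 21:06 segment is worth a quick sketch. Below is a minimal, purely illustrative Python example (my own, not Cachengo’s implementation): the broker’s only job is to hand two peers each other’s identity, after which all traffic flows directly between them.

```python
# Minimal sketch of broker-introduced peer-to-peer messaging (illustrative,
# not Cachengo's implementation). The broker exchanges peer identities and
# then drops out of the data path entirely.

class Peer:
    def __init__(self, name):
        self.name = name
        self.partner = None  # filled in by the broker's introduction

    def meet(self, partner):
        self.partner = partner

    def send(self, message):
        # Direct peer-to-peer exchange; the broker is not involved here.
        print(f"{self.name} -> {self.partner.name}: {message}")

class Broker:
    """Introduces two peers, then is no longer needed."""
    @staticmethod
    def introduce(a, b):
        a.meet(b)
        b.meet(a)

node1, node2 = Peer("drive-01"), Peer("drive-02")
Broker.introduce(node1, node2)    # broker steps out after the handshake
node1.send("replicate block 42")  # traffic is now strictly peer to peer
node2.send("ack block 42")
```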

Podcast Guest: Ash Young, Chief Evangelist of Cachengo

Technology leader with over 20 years’ experience, primarily in storage. Created the first open source NAS (network attached storage) stack, the first unified block/file storage stack for Linux, the first storage management software, and the list goes on.

Since 2012, I have been heavily involved in NFV (Network Functions Virtualization). I wrote a bunch of the standards and was editor for the Compute/Storage Domain in the Infrastructure Working Group for NFV. And then I started up the open source effort to close the gaps for achieving our vision of the NFVI. This was the precursor to OPNFV.

The best way to understand what I do is to imagine being a high-level marketing exec who comes up with a whiz-bang product and business idea, including the business plan, competitive analysis, MRD, everything. But then comes the hand-off to your engineering organization, only to hear a litany of nos. Well, I got tired of being told “No, it can’t be done” or “No, we don’t know how to do it”, so I started doing it myself. I call this skill “Rapid Prototyping”, and over the years I have found it to be a sorely missing piece of the product development process. When Marketing comes up with ideas, we need a way to very efficiently validate the technology and business concepts before we commit to a lengthy engineering cycle.

I’m just one person, working in a company of over 180,000 people and in a very dynamic industry. Getting creative and finding ways to influence the business means there is never a dull moment; and I will probably be 100 years old and still writing open source software.

A Ready State analogy: “roughed in” brings it home for non-ops-nerds

I’ve been seeing great acceptance of the concept of an ops Ready State.  Technologists from both ops and dev immediately understand the need to “draw a line in the sand” between system prep and installation.  We also admit that getting physical infrastructure to Ready State is largely taken for granted; however, it often takes multiple attempts to get it right, and even small application changes can require a full system rebuild.

Since even small changes can redefine the Ready State requirements, changing Ready State can feel like being told to tear down your house so you can remodel the kitchen.

A friend asked me to explain “Ready State” in non-technical terms.  So far, the best analogy that I’ve found is when a house is “roughed in.”  It’s helpful if you’ve ever been part of house construction but may not be universally accessible, so I’ll explain.

Getting to rough in means that all of the basic infrastructure of the house is in place but nothing is finished.  The foundation is poured, the plumbing lines are placed, the electrical mains are ready, the roof is on and the walls are up.  The house is built according to architectural plans, and the major decisions have been made, like how many rooms there are and the function of each room (bathroom, kitchen, great room, etc.).  For Ready State, that’s like having the servers racked and set up with disk, BIOS, and network configured.

While we’ve built a lot, rough in is a relatively early milestone in construction.  Even major items like the type of roof, siding and windows can still be changed.  Speaking of windows, this is like installing an operating system in Ready State.  We want to consider this as a distinct milestone because there’s still room to make changes.  Once the roof and exteriors are added, changes become much more disruptive and expensive to make.

Once the house is roughed in, the finishing work begins.  Almost nothing from rough in will be visible to the people living in the house.  Like a Ready State setup, the users interact with what gets laid on top of the infrastructure.  For homes, it’s the walls, counters, fixtures and finishes.  For operators, it’s applications like Hadoop, OpenStack or CloudFoundry.

Taking this analogy back to where we started, what if we could make rebuilding an entire house take just a day?!  In construction, that’s simply not practical; however, we’re getting to a place in Ops where automation makes it possible to reconstruct the infrastructure configuration much faster.

While we can’t re-pour the foundation (aka swap out physical gear) instantly, we should be able to build up from there to Ready State in a much more repeatable way.
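
To make the milestone idea concrete, here’s a toy sketch (the stage names are my own illustration, not a product feature): rebuilding to Ready State means re-running only the stages above the foundation, never re-pouring it.

```python
# Toy sketch of the "roughed in" milestones (stage names are illustrative).
# Rebuilding to Ready State re-runs only the stages above the foundation.

STAGES = [
    ("foundation", "rack servers (physical gear)"),       # can't redo quickly
    ("rough_in",   "configure BIOS, RAID, and network"),  # the Ready State line
    ("exterior",   "install the operating system"),
    ("finish",     "deploy apps: Hadoop, OpenStack, ..."),
]

def rebuild(from_stage):
    """Re-run every stage at or after from_stage, in order."""
    start = [name for name, _ in STAGES].index(from_stage)
    for name, action in STAGES[start:]:
        print(f"{name}: {action}")

# A "kitchen remodel" should not tear down the house: rebuild from rough_in
# and leave the physical foundation untouched.
rebuild("rough_in")
```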

Crowbar lays it all out: RAID & BIOS configs officially open sourced

Today, Dell (my employer) announced a plethora of updates to our open source derived solutions (OpenStack and Hadoop). These solutions include the latest bits (Grizzly and Cloudera) for each project. And there’s another important notice for people tracking the Crowbar project: we’ve opened the remainder of its provisioning capability.

Yes, you can now build the open version of Crowbar and it has the code to configure a bare metal server.

Let me be very specific about this… my team at Dell tests Crowbar on a limited set of hardware configurations. Specifically, Dell server versions R720 + R720XD (using WSMAN and iDRAC) and C6220 + C8000 (using open tools). Even on those servers, we have a limited RAID and NIC matrix; consequently, we are not positioned to duplicate other field configurations in our lab. So, while we’re excited to work with the community, caveat emptor open source.

Another thing about RAID and BIOS is that it’s REALLY HARD to get right. I know this because our team spends a lot of time testing and tweaking these, now open, parts of Crowbar. I’ve learned that doing hard things creates value; however, it also means that contributors to these barclamps need to be prepared to get some silicon under their fingernails.
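
To give a feel for why this is hard, here is a tiny, hypothetical validation sketch. The controller names, supported levels, and field names below are invented for illustration; they are not Crowbar’s actual barclamp schema.

```python
# Hypothetical illustration of why RAID config is hard to get right: the
# desired state must be validated against what each controller actually
# supports. Names and capabilities here are made up, not Crowbar's schema.

SUPPORTED = {"PERC H710": {0, 1, 5, 10}, "LSI 2008": {0, 1, 10}}

desired = {
    "controller": "PERC H710",
    "volumes": [
        {"raid_level": 1,  "drives": 2},   # OS mirror
        {"raid_level": 10, "drives": 6},   # data volume
    ],
}

def validate(config):
    levels = SUPPORTED.get(config["controller"])
    if levels is None:
        raise ValueError(f"unknown controller {config['controller']!r}")
    for vol in config["volumes"]:
        if vol["raid_level"] not in levels:
            raise ValueError(f"RAID {vol['raid_level']} unsupported")
        if vol["raid_level"] == 10 and vol["drives"] % 2:
            raise ValueError("RAID 10 needs an even drive count")
    return True

validate(desired)  # passes; ask the LSI 2008 for RAID 5 and it won't
```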

I’m proud that we’ve reached this critical milestone and I hope that it encourages you to play along.

PS: It’s worth noting that community activity on Crowbar has really increased. I’m excited to see all the energy.

In scale-out infrastructure, tools & automation matter

Scale out platforms like Hadoop have different operating rules.  I heard an interesting story today in which the performance of the overall system improved 300% (a run went from 15 mins down to 5 mins) after the removal of a single node.

In a distributed system that coordinates work between multiple nodes, it only takes one bad node to dramatically impact the overall performance of the entire system.

Finding and correcting this type of failure can be difficult.  While natural variability, hardware faults or bugs cause some issues, the human element is by far the most likely cause.  If you can turn down the noise injected by human error, then you’ve got a chance to find the real system-related issues.
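
A back-of-envelope model shows how much leverage one straggler has. The numbers below are chosen to mirror the 15-to-5-minute story above; the model itself is just illustrative.

```python
# Back-of-envelope model: a distributed run finishes only when its slowest
# node does, so one bad node gates the whole job.

def job_minutes(node_speeds, total_work=120.0):
    """Work is split evenly; the run lasts as long as the slowest share."""
    share = total_work / len(node_speeds)
    return max(share / speed for speed in node_speeds)

degraded = [1.0] * 23 + [0.33]   # 24 nodes, one running at a third speed
fixed = [1.0] * 23               # same cluster with the bad node removed

print(round(job_minutes(degraded), 1))  # ~15.2 min with the straggler
print(round(job_minutes(fixed), 1))     # ~5.2 min without it
```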

Consequently, I’ve found that management tooling and automation are essential for success.  Management tools help diagnose the cause of the issue and automation creates repeatable configurations that reduce the risk of human injected variability.

I’d also like to give a shout out to benchmarks as part of your tooling suite.  Without having a reasonable benchmark it would be impossible to actually know that your changes improved performance.

Teaming Related Post Script: In considering the concept of system performance, I realized that distributed human systems (aka teams) have a very similar characteristic.  A single person can have a disproportionate impact on overall team performance.

Crowbar cuts OpenStack Grizzly (“pebbles”) branch & seeks community testing

The Crowbar team (I work for Dell) continues to drive towards “zero day” deployment readiness. Our Hadoop deployments are tracking Dell | Cloudera Hadoop-powered releases within a month and our OpenStack releases harden within three months.

During the OpenStack summit, we cut our Grizzly branch (aka “pebbles”) and switched over to the release packages. Just a reminder: we basically skipped Folsom. While we’re still tuning out issues in the OpenStack Networking (OVS+GRE) setup, we’re also looking for the community to start testing and tuning the Chef deployment recipes.

We’re just sprints from release; consequently, it’s time for the Crowbar/OpenStack community to come and play! You can learn Grizzly and help tune the open source Ops scripts.

While the Crowbar team has been generating a lot of noise around our Crowbar 2.0 work, we have not neglected progress on OpenStack Grizzly.  We’ve been building Grizzly deploys on the 1.x code base using pull-from-source to ensure that we’d be ready for the release. For continuity, these same cookbooks will be the foundation of our CB2 deployment.

Features of Crowbar’s OpenStack Grizzly Deployments

  • We’ve had Nova Compute, Glance Image, Keystone Identity, Horizon Dashboard, Swift Object and Tempest for a long time. Those, of course, have been updated to Grizzly.
  • Added Block Storage
    • importable Ceph Barclamp & OpenStack Block Plug-in
    • EqualLogic OpenStack Block Plug-in
  • Added Quantum OpenStack Network Barclamp
    • Uses OVS + GRE for deployment
  • 10 GbE networking configuration
  • RabbitMQ as its own barclamp
  • Swift Object Barclamps made a lot of progress in Folsom that translates to Grizzly
    • Apache Web Service
    • Rack awareness
    • HA configuration
    • Distribution Report
  • “Under the covers” improvements for Crowbar 1.x
    • Substantial improvements in how we configure host networking
    • Numerous bug fixes and tweaks
  • Pull from Source via the Git barclamp
    • Grizzly branch was switched to use Ubuntu & SUSE packages

We’ve made substantial progress, but there are still gaps. We do not have upgrade paths from Essex or Folsom. While we’ve been adding fault-tolerance features, full automatic HA deployments are not included.

Please build your own Crowbar ISO or check our new SourceForge download site, then join the Crowbar List and IRC to collaborate with us on OpenStack (or Hadoop or Crowbar 2). Together, we will make this awesome.

Big Data to tame Big Government? The answer is the Question.

Today my boss at Dell, John Igoe, is part of announcing the report from the TechAmerica Federal Big Data Commission (direct pdf). I was fully expecting the report to be a real snoozer brimming with corporate synergies and win-win externalities. Instead, I found myself reading a practical guide to applying Big Data to government. Flipping past the short obligatory “what is…” section, the report drives right into a survey of practical applications for big data spanning nearly every governmental service. Over half of the report is dedicated to case studies with specific recommendations and buying criteria.

Ultimately, the report calls for agencies to treat data as an asset. An asset that can improve how government operates.

There are a few items that stand out in this report:

  1. Need for standards on privacy and governance. The report calls out a review and standardization of cross-agency privacy policy (pg 35) and a Chief Data Officer position in each agency (pg 37).
  2. Clear tables of case studies on page 16 and characteristics on page 11 that help pinpoint a path through the options.
  3. Definitive advice to focus on a single data vector (velocity, volume or variety) for initial success on page 28 (and elsewhere).

I strongly agree with one repeated point in the report: although there is more data available, our ability to comprehend this data is reduced. The sheer volume of examples the report cites is proof enough that agencies are, and will continue to be, inundated with data.

One shortcoming of this report is that it does not flag the extreme shortage of data scientists. Many of the cases discussed assume a ready army of engineers to implement these solutions; however, I’m uncertain how the government will fill those positions in a very tight labor market. Ultimately, I think we will have to simply open the data for citizen & non-governmental analysis because, as the report clearly states, data is growing faster than our capability to use it.

I commend the TechAmerica commission for their Big Data clarity: success comes from starting with a narrow scope. So the answer, ironically, is in knowing which questions we want to ask.

Do Be Dense! Dell C8000 unit merges best of bladed and rackable servers

“Double wide” is not a term I’ve commonly applied to servers, but that’s one of the cool things about this new class of servers that Dell, my employer, started shipping today.

My team has been itching for the chance to build cloud and big data reference architectures using this super dense and flexible chassis. You’ll see it included in our next Apache Hadoop release, and we’ve already got customers who are making it the foundation of their deployments (Texas Adv Computing Center case study).

If you’re tracking the latest big data & cloud hardware then the Dell PowerEdge C8000 is worth some investigation.

Basically, the Dell C8000 is a chassis that holds a flexible configuration of compute or storage sleds. It’s not a blade frame because the sleds minimize shared infrastructure. In our experience, cloud customers like the dedicated i/o and independence of sleds (as per the Bootstrapping clouds white paper). Those attributes are especially well suited for Hadoop and OpenStack because they support a “flat edges,” scale-out design. While i/o independence is valued, we also want shared power infrastructure and density for efficiency reasons. Using a chassis design seems to capture the best of both worlds.

The novelty for the Dell PowerEdge C8000 is that the chassis are scary flexible. You are not locked into a pre-loaded server mix.

There is a plethora of sled choices so that you can mix options for power, compute density and spindle counts. That includes double-wide sleds positively brimming with drives and expanded GPU processors. Drive density is important for big data configurations that are disk i/o hungry; however, our experience is that customer deployments vary widely based on the planned workload. There are also significant big data trends towards compute, network, and balanced hardware configurations. Using the C8000 as a foundation is powerful because it can cater to all of these use-case mixes.
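
As a rough illustration of that flexibility, here’s a hypothetical sled-mix calculator. The slot counts and per-sled specs are invented for the example and are not Dell’s published numbers; the point is that one chassis can be tuned toward drives or compute.

```python
# Hypothetical sled-mix planner for a C8000-class chassis (slot count and
# sled specs are illustrative, not Dell's published numbers).

SLEDS = {
    # name: (slots used, drives, compute units)
    "compute":        (1, 2,  2),
    "storage_double": (2, 12, 1),   # double-wide, drive-dense
    "gpu_double":     (2, 0,  4),   # double-wide, GPU-heavy
}

def totals(mix, chassis_slots=8):
    """Return (drives, compute) for a sled mix that fits the chassis."""
    used = sum(SLEDS[name][0] * count for name, count in mix.items())
    assert used <= chassis_slots, "sled mix does not fit the chassis"
    drives = sum(SLEDS[name][1] * count for name, count in mix.items())
    compute = sum(SLEDS[name][2] * count for name, count in mix.items())
    return drives, compute

# A disk-i/o-hungry Hadoop mix vs. a compute-heavy mix in the same chassis:
print(totals({"compute": 4, "storage_double": 2}))  # (32, 10)
print(totals({"compute": 4, "gpu_double": 2}))      # (8, 16)
```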

That reminds me! Mike Pittaro (our team’s Hadoop lead architect) did an excellent Deploy Hadoop using Crowbar video.

Interested in more opinions about the C8000? Check out Barton George & David Meyer.

Crowbar’s early twins: Cloudera Hadoop & OpenStack Essex

I’m proud to see my team announce the twin arrival of the Dell | Cloudera Apache Hadoop (Manager v4) and Dell OpenStack-Powered Cloud (Essex) solutions.

Not only are we simultaneously releasing both of these solutions, they reflect a significant acceleration in our pace of delivery.  Both solutions had beta support for their core technologies (Cloudera 4 & OpenStack Essex) when those components were released, and we have dramatically reduced the lag from component RC to solution release compared to past (3.7 & Diablo) milestones.

As before, the core deployment logic of these open source based solutions was developed in the open on Crowbar’s github.  You are invited to download and try these solutions yourself.   For Dell solutions, we include validated reference architectures, hardware configuration extensions for Crowbar, services and support.

The latest versions of Hadoop and OpenStack represent great strides for both solutions.  It’s great to have made them more deployable and faster to evaluate and manage.

Crowbar Celebrates 1st Anniversary

Nearly a year ago at OSCON 2011, my team at Dell open sourced “Crowbar, an OpenStack installer.” That first Github commit was a much more limited project than Crowbar today: there was no separation into barclamps, no distinct network configuration, only one operating system option, and the default passwords were all “openstack.” We simply did not know if our effort would create any interest.

The response to Crowbar has been exciting and humbling. I most appreciate those who looked at Crowbar and saw more than a bare metal installer. They are the ones who recognized that we are trying to solve a bigger problem: it has been too difficult to cope with change in IT operations.

During this year, we have made many changes. Many have been driven by customer, user and partner feedback while others support Dell product delivery needs. Happily, these inputs are well aligned in intent if not always in timing.

  • Introduction of barclamps as modular components
  • Expansion into multiple applications (most notably OpenStack and Apache Hadoop)
  • Multi-Operating System
  • Working in the open (with public commits)
  • Collaborative License Agreements

Dell’s understanding of open source and open development has made a similar transformation. Crowbar was originally open sourced under Apache 2 because we imagined it becoming part of the OpenStack project. While that ambition has faded, the practical benefits of open collaboration have proven to be substantial.

The results from this first year are compelling:

  • For OpenStack Diablo, coordination with the Rackspace Cloud Builder team enabled Crowbar to include the Keystone and Dashboard projects in Dell’s solution
  • For OpenStack Essex, the community-focused work we did for the March Essex Hackday is directly linked to our ability to deliver Dell’s OpenStack-Powered Essex solution over two months earlier than originally planned.
  • For Apache Hadoop, we delivered distributions for 3.x and 4.x with implementations of Cloudera Manager and ecosystem components.
  • We’ve amassed hundreds of mailing list subscribers and Github followers
  • Support for multiple releases of RHEL, CentOS & Ubuntu, including Ubuntu 12.04 while it was still in beta.
  • SUSE did their own port of Crowbar, with important advances in Crowbar’s install model (from ISO to package).

We stand on the edge of many exciting transformations in Crowbar’s second year. Based on the amount of change this year, I’m hesitant to make long-term predictions. Yet, just within the next few months there are significant plans based on the Crowbar 2.0 refactor. We have line of sight to changes that expand our tool choices, improve networking, add operating systems and become even more production-ops capable.

That’s quite a busy year!

Crowbar deploying Dell | Cloudera 4 | Apache Hadoop

Hopefully you wrote “Cloudera 3.7” in pencil on your to-do list, because the Dell Crowbar team has moved to CDH4 & Cloudera Enterprise 4.0. This aligns with the Cloudera GA announcement on Tuesday 6/5 and continues our drive to keep Crowbar deployments both fresh and spicy.

With the GA drop, the Crowbar Cloudera Barclamps are effectively at release candidate state (ISO). The Cloudera Barclamps include a freemium version of Cloudera Enterprise 4 that supports up to 50 nodes.

I’m excited about this release because it addresses concerns around fault tolerance, multi-tenancy and upgrades.

These tools are solving real-world problems ranging from data archival to ad hoc and click stream analysis. We’ve invested a lot of Crowbar development effort in making it fast and easy to build a Hadoop cluster. Now, Cloudera makes it even easier to manage and maintain.