Big Data to tame Big Government? The answer is the Question.

Today my boss at Dell, John Igoe, is part of announcing the report from the TechAmerica Federal Big Data Commission (direct pdf). I was fully expecting the report to be a real snoozer brimming with corporate synergies and win-win externalities. Instead, I found myself reading a practical guide to applying Big Data to government. Flipping past the short obligatory “what is…” section, the report drives right into a survey of practical applications for big data spanning nearly every governmental service. Over half of the report is dedicated to case studies with specific recommendations and buying criteria.

Ultimately, the report calls for agencies to treat data as an asset: an asset that can improve how government operates.

There are a few items that stand out in this report:

  1. Need for standards on privacy and governance. The report calls for a review and standardization of cross-agency privacy policy (pg 35) and a Chief Data Officer position in each agency (pg 37).
  2. Clear tables of case studies on page 16 and characteristics on page 11 that help pinpoint a path through the options.
  3. Definitive advice to focus on a single data vector (velocity, volume, or variety) for initial success on page 28 (and elsewhere).

I strongly agree with one repeated point in the report: although there is more data available, our ability to comprehend this data is reduced. The sheer volume of examples the report cites is proof enough that agencies are, and will continue to be, inundated with data.

One shortcoming of this report is that it does not flag the extreme shortage of data scientists. Many of the cases discussed assume a ready army of engineers to implement these solutions; however, I’m uncertain how the government will fill those positions in a very tight labor market. Ultimately, I think we will have to simply open the data for citizen & non-governmental analysis because, as the report clearly states, data is growing faster than our capability to use it.

I commend the TechAmerica commission for their Big Data clarity: success comes from starting with a narrow scope. So the answer, ironically, is in knowing which questions we want to ask.

Analyze This! Big Data | Apache Hadoop | Dell | Cloudera | Crowbar

This article about Target using buying patterns to discover that a teen was pregnant before she told her parents puts big data analysis into everyday terms better than the following 555 words (of course, I recommend that you read both).

Recently, I had the pleasure of being part of the team presenting Dell’s BIG DATA story at an internal conference. From the questions and buzz, it’s clear that big data is big news this year. My team is at the center of that storm because we are responsible for the Dell | Cloudera Apache™ Hadoop™ solution. The solution is significant because we’ve integrated the many pieces necessary to build and sustain a Hadoop cluster: Dell servers, the Cloudera Hadoop distribution, the Crowbar framework, and services to make it all useful.

Big Data Analytics spins data straws into information gold.

Before I jump into technical details, it’s worth stating the big data analytics value proposition. The problem is that we are awash in a tsunami of data: we’ve grown beyond the neat rows and columns of application databases. Data today ranges from website click logs and emails to call records and cash register receipts to social media tweets and posts. While much of this data is unstructured noise, there is also incredibly valuable information buried in it. (video of my Hadoop “escalator pitch”)

Value is not just hidden inside the bulk data; it lies in the correlations between data sets.

The big data analytics value proposition is to provide a system to hold a lot of loosely structured information (thus “big data”) and then sift and correlate the information (thus “analytics”). The result is a technology that helps us make data-driven decisions. In many applications, the analysis is fed directly back into applications so they can alter behavior in near real-time. For example, an online retail store could offer you purple bunny slippers as you browse for crowbars in the hardware section, knowing that you’re reading this post. That is the kind of correlation across disparate data sets that I’m talking about.
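
As a toy-scale illustration of that idea (plain Python with made-up data; nothing here is the actual Dell solution), correlating a browsing log against purchase history to pick an offer:

```python
# Toy correlation across two disparate data sets: a click log and a
# purchase history. All names and records here are made up.
from collections import defaultdict

page_views = [("alice", "hardware/crowbars"), ("bob", "garden/hoses")]
purchases = [("alice", "purple bunny slippers")]

history = defaultdict(set)
for user, item in purchases:
    history[user].add(item)

# Correlate: when a known buyer browses, use what their purchase
# history says about them to choose an offer.
for user, page in page_views:
    for item in history[user]:
        print(f"{user} browsing {page}: offer something like '{item}'")
```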

This is really two problems: storing a lot of data and then computing over it.

Hadoop, the leading open source big data analytics project, is a suite of applications that implement and extend two core capabilities: a distributed file system (HDFS) and the map-reduce (M-R) algorithm. My point is not to define Hadoop (others have done that better); instead, I want to highlight that big data analysis is a merger of storage and compute. When learning about any big data analysis solution, you cannot decouple how the data is stored from how the data is analyzed – storage and compute are fundamentally linked.
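
To make that merger concrete, here is a minimal sketch of the map-reduce half in plain, single-process Python (illustration only; Hadoop runs the same two phases in parallel across the nodes that hold the HDFS blocks):

```python
# Minimal single-process sketch of the map-reduce algorithm: the same
# two phases Hadoop distributes across a cluster.
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit (key, value) pairs; here, a count per word.
    for word in line.split():
        yield (word.lower(), 1)

def reducer(key, values):
    # Reduce phase: combine every value that shares a key.
    return (key, sum(values))

lines = ["big data is big", "data wants analysis"]
pairs = sorted(p for line in lines for p in mapper(line))  # the "shuffle/sort"
results = [reducer(key, (v for _, v in group))
           for key, group in groupby(pairs, key=itemgetter(0))]
print(results)  # [('analysis', 1), ('big', 2), ('data', 2), ('is', 1), ('wants', 1)]
```

The sort between the two phases is the shuffle, and it is exactly where storage placement matters: Hadoop schedules each map task on a node that already holds the data block, so the heavy reads stay local.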

For that reason, the architecture of a Hadoop cluster is different from either a traditional database cluster or a compute cluster. The IO and resiliency patterns are different. Since Hadoop is a distributed system, hardware redundancy is less important and eliminating IO bottlenecks is paramount. For this reason, our Hadoop clusters use many local, non-RAID drives with a target of delivering a 1:1 CPU core to spindle ratio (ratios are tuned based on planned loads).
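
For a back-of-the-envelope feel for that ratio, here is a hypothetical sizing helper (the node sizes and the 1:1 default below are illustrative, not a Dell reference configuration):

```python
# Hypothetical sizing helper for the core-to-spindle guideline above;
# real ratios are tuned to the planned workload.
import math

def spindles_for_node(cpu_cores: int, cores_per_spindle: float = 1.0) -> int:
    """Local (non-RAID) data drives needed to keep the cores fed with IO."""
    return math.ceil(cpu_cores / cores_per_spindle)

print(spindles_for_node(12))       # 1:1 target -> 12 local drives
print(spindles_for_node(12, 1.5))  # IO-lighter load tuned to 1.5:1 -> 8 drives
```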

Imagine that you are looking for correlations in web click data. To do that analysis, Hadoop needs to spend a lot of time cracking open log files, sifting for specific data, and then reporting back its results. That process involves thousands of tasks, each doing disk IO, consuming CPU and RAM, and then transferring results over the network; consequently, contention between network and disk demands reduces performance.
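
Hadoop Streaming makes that sifting step concrete: a mapper can be any script that reads records on stdin and emits key/value pairs on stdout. A sketch, assuming a hypothetical tab-separated (user, url, timestamp) click-log format:

```python
#!/usr/bin/env python
# Sketch of a Hadoop Streaming mapper that cracks open click logs.
# The three-field tab-separated format is an assumption for illustration.
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) != 3:
        continue  # skip malformed records rather than failing the task
    user, url, ts = fields
    # Emit user as the key so the shuffle groups each visitor's clicks
    # onto one reducer, where the correlation work happens.
    print(f"{user}\t{url}")
```

Thousands of these tasks run in parallel, each reading its local blocks; the emitted pairs are what cross the network in the shuffle, which is why disk and network contention dominate cluster performance.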

Wow… that’s a lot of description, and it just scratches the surface of Big Data Analytics. I’m going to have to cover the technical details of the Dell solution architecture (hardware) and software components (Cloudera & Crowbar) in another post.

2/9 Webcast about mixing GPUs & Big Data Analysis

Here’s something from my employer (Dell) that may be interesting to you: it’s about using GPUs for Big Data Analytics. I meant to discuss/post this earlier, but… oh well. Here’s the information:

Premieres LIVE: 2 PM EST (11 AM PST) TODAY. Free – Register Now!

What You’ll Learn:

  • Not just for video games anymore: GPUs for simulation and parallel processing
  • Impact on business workflows in seismic processing, interpretation and reservoir modeling
  • ROI: 5x performance in 5 days
  • Cost-effective and flexible cluster configurations
  • Show me the metrics: Tangible results from a variety of customers

Need More Details?

Cloudcast interview with Dell Cloud Solutions Team (quotes with time stamps)

TheCloudcast.net – Thank you for such a great series of questions. Wow, 36+ minutes of cloudicious interview about the work my team at Dell is doing!

Thanks to our hosts for putting together a great series!

Highlights from Episode 16: Dell, Dude you’re getting a cloud

  • 3:40 – JBG: “we are listening to our customers tell us what they want to accomplish”
  • 4:40 – RAH: “humility is part of [what we’re] doing … cloud is about learning and collaboration”
  • 6:40 – RAH: “OpenStack filled a niche. It was the first open source community cloud. … Not just open source, it’s open community.”
  • 7:15 – RAH: “We’re beyond critical mass. We’re seeing acceleration… we are transitioning into a community development.”
  • 7:30 – RAH: “It’s accelerating. It’s happening so fast.”
  • 8:00 – RAH: “We felt it was really important for people to be able to use it. We felt that it was important to get away from just people developing into people using.”
  • 8:57 – RAH: “Cloud is not just one thing. You have to have all the pieces.”
  • 10:15 – RAH: “Cloud is always ready, never finished”
  • 10:50 – RAH: “OpenStack is an alternative to public cloud, including hosting providers seeking to offer their own cloud”
  • 12:40 – AD: “Dell has been in the Big Data space for many years now”
  • 20:15 – JBG: “There’s a legacy of great partnerships that we leverage”
  • 20:48 – JBG: “Conflicts have not come up because we are focused on the customer”
  • 21:30 – RAH: “Shout out to Greg Althaus for solving these problems in such an elegant way. And we rewrote it 3 times”
  • 22:02 – RAH: “Crowbar started from our frustration of bringing up a cloud quickly … so we took a DevOps approach.”
  • 22:41 – RAH: “You had to have a system view AND a bootstrapping view simultaneously”
  • 23:50 – JBG: “Crowbar was born out of necessity because we were setting up and blowing away our clouds over and over and over again.”
  • 24:40 – JBG: “We realized there were not many people thinking about all the pieces before OpenStack was installed”
  • 25:20 – RAH: “We don’t think customers have all the answers before we show up. This is not unique to OpenStack.”
  • 28:20 – JBG: “We’re seeing the community pick up Crowbar as a way to deploy”

Big Questions? Big Answers with the Dell Big Data solution (plus Crowbar gets RHEL)

In my enthusiasm for all things Dell + OpenStack, I have neglected to talk about my team’s interesting Big Data work with Apache Hadoop.  Hadoop is a suite of open source projects for analyzing large sets of unstructured data.  Initially, Hadoop centered on the map-reduce algorithm; however, it has grown well beyond that as the community has worked to solve problems related to data storage, discovery, and scheduling.

Big Data clouds are well suited to my team because of the model (non-redundant/cloud) and scale (hyper) of their deployments.  It should be no surprise that builders of analysis clouds have the same goals (maximizing operational ROI per compute unit) as builders of other types of clouds.

Our Hadoop solution relies on the same core principles (CloudOps) and technologies (Crowbar) as our OpenStack solution.  Like our other cloud solutions, we are working closely with a proven leader: Cloudera.  Now that we’ve formally announced our solution and partnership, I can talk about what we’re doing on the Big Data front.

One extra thing that I’m proud to announce: we’ll be adding Red Hat Enterprise Linux (RHEL) support to Crowbar to support our Hadoop solution.  This support is not just at the node level: we are making the Crowbar admin node run on either platform too!  This is significant for two reasons:

  1. It expands the number of platforms and support options for Crowbar users
  2. It provides the framework to support more varieties of node operating environments (e.g., XenServer, BSD, DRDOS)

For more information, check out: