Podcast – Rob Lalonde on HPC in the Cloud, Machine Learning and Autonomous Cars

Joining us this week is Rob Lalonde, VP & General Manager, Navops at Univa.

About Univa

Univa is the leading independent provider of software-defined computing infrastructure and workload orchestration solutions.

Univa’s intelligent cluster management software increases efficiency while accelerating enterprise migration to hybrid clouds. We help hundreds of companies manage thousands of applications and run billions of tasks every day.

Highlights

  • 1 min 6 sec: Introduction of Guest
  • 1 min 43 sec: HPC in the Cloud?
    • Huge migration of workloads to public clouds for additional capacity
    • Specialized resources like GPUs, massive memory machines, …
  • 3 min 29 sec: Cost perspective of cloud vs local HPC hardware
    • Primarily a burst to cloud model today
  • 5 min 10 sec: Good for machine learning or analytics?
  • 5 min 40 sec: What do Univa and Navops do?
    • Cloud cluster automation
  • 7 min 35 sec: Role of Scheduling
    • Job layer & infrastructure layer ~ see the sketch after this list
    • Diversity of jobs across organizations
  • 9 min 30 sec: Machine learning impact on HPC
    • Survey of Users ~ Results
      • Machine learning not yet in production ~ still research
      • HPC very much linked to machine learning
      • Cloud and Hybrid cloud usage is very high
      • GPU usage for machine learning
  • 15 min 09 sec: GPU discussion
    • Similar to early cloud stories
  • 16 min 00 sec: Concurrency in operations in HPC & machine learning
    • Workload dependency ~ weather modeling
  • 18 min 12 sec: Do people bring workloads back in-house after running in the cloud?
    • Growing sophistication about which workloads run best where
    • HPC is very efficient ~ 1 million cores on Amazon: you know it's working when AWS calls about taking resources back for other customers 🙂
  • 23 min 56 sec: Autonomous cars discussion
    • Processing in the car or offloaded?
    • Oil and gas exploration example (edge infrastructure)
      • Pre-process data on the ship, then upload the required information via satellite
  • 29 min 12 sec: Is Kubernetes in the HPC / machine learning world?
    • KubeFlow project
  • 35 min 8 sec: Wrap-Up
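
The "job layer & infrastructure layer" split mentioned at 7:35 is easy to picture in code. Below is a minimal, hypothetical Python sketch (my illustration, not Univa's actual implementation): an autoscale() step stands in for the infrastructure layer that grows a cloud cluster to match the queue, and a schedule() step stands in for the job layer that binds queued work to nodes. All class and function names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Node:
    cores: int
    used: int = 0

    @property
    def free(self) -> int:
        return self.cores - self.used

@dataclass
class Job:
    name: str
    cores: int

class Cluster:
    """Toy model of the two scheduling layers discussed in the episode."""

    def __init__(self, cores_per_node: int = 8):
        self.cores_per_node = cores_per_node
        self.nodes: list[Node] = []
        self.queue: list[Job] = []

    # Infrastructure layer: provision a new (cloud) node whenever a queued
    # job cannot fit anywhere -- the role cluster automation plays.
    def autoscale(self) -> None:
        for job in self.queue:
            if not any(n.free >= job.cores for n in self.nodes):
                self.nodes.append(Node(self.cores_per_node))

    # Job layer: bind queued jobs to nodes with enough free cores
    # (first-fit here; real schedulers also weigh priority, GPUs, memory).
    def schedule(self) -> None:
        for job in list(self.queue):
            for node in self.nodes:
                if node.free >= job.cores:
                    node.used += job.cores
                    self.queue.remove(job)
                    print(f"placed {job.name} ({job.cores} cores)")
                    break

cluster = Cluster()
cluster.queue = [Job(f"sim-{i}", cores=6) for i in range(5)]
while cluster.queue:
    cluster.autoscale()   # infrastructure layer reacts to the queue
    cluster.schedule()    # job layer places whatever now fits
print(f"nodes provisioned: {len(cluster.nodes)}")
```

The point of splitting the loop in two is the same one made in the episode: the job layer decides where work runs, the infrastructure layer decides how much capacity exists, and in a cloud or hybrid cluster those decisions are made by different machinery.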

Podcast Guest: Rob Lalonde, VP & General Manager, Navops

Rob Lalonde brings over 25 years of executive management experience to lead Univa’s accelerating growth and entry into new markets. Rob has held executive positions in multiple successful high-tech companies and startups. He possesses a unique, multi-disciplined set of skills, having held positions in Sales, Marketing, and Business Development, as well as CEO and board roles. Rob completed MBA studies at York University’s Schulich School of Business and holds a degree in computer science from Laurentian University.

Are Clouds using Dark Cycles?

Or “Darth Vader vs Godzilla”

Way, way back in January, I’d heard loud and clear that companies were not expecting to mix cloud computing loads. I was treated like a three-eyed Japanese tree slug for suggesting that we could mix HPC and analytics loads with business applications in the same clouds. The consensus was that companies would stand up independent clouds for each workload: the analysis work was too important to interrupt, and the business applications too critical to risk.

It has always rankled me that all those unused compute cycles (“the dark cycles”) go to waste when they could be put to good use. It appeals to my eco-geek side to make the best possible use of all those idle servers. Dave McCrory and I even wrote some cloud patents around this.

However, I succumbed to the scorn and accepted the separation.

Now, all of a sudden, this idea seems to be playing Godzilla to a Tokyo-shaped cloud data center. I see several forces merging to resurrect mixed workloads.

  1. Hadoop (and other map-reduce Analytics) are becoming required business tools
  2. Public clouds are making it possible to quickly (if not cheaply) set up analytic clouds
  3. Governance of virtualization is getting better
  4. Companies want to save some $$$

This trend will only continue as Moore’s Law improves hardware compute density. Since our designs are trending toward scale-out architectures that distribute applications over multiple nodes, it is not practical to expect a single application to consume all the power of a single computer.

That leaves a lot of lonely dark cycles looking for work.
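
To make the dark-cycles idea concrete, here is a minimal, hypothetical Python sketch (my illustration, not anyone’s product): business applications reserve cores at absolute priority, and preemptible analytics jobs backfill whatever sits idle, getting evicted the moment the business load returns. The Host class and the numbers are invented for the example.

```python
class Host:
    """One server whose idle cores ("dark cycles") get backfilled."""

    def __init__(self, cores: int):
        self.cores = cores
        self.business = 0                # cores pinned by business apps
        self.analytics: list[int] = []   # core counts of backfill jobs

    @property
    def idle(self) -> int:
        return self.cores - self.business - sum(self.analytics)

    def set_business_load(self, cores: int) -> None:
        """Business demand changes; evict analytics until it fits again."""
        self.business = cores
        while self.idle < 0 and self.analytics:
            evicted = self.analytics.pop()
            print(f"evicting a {evicted}-core analytics job")

    def backfill(self, job_cores: int) -> bool:
        """Place a preemptible analytics job if there are idle cores."""
        if self.idle >= job_cores:
            self.analytics.append(job_cores)
            return True
        return False                     # stays queued for a quieter moment

host = Host(64)
host.set_business_load(16)               # daytime business traffic
for _ in range(6):
    host.backfill(8)                     # analytics soaks up the dark cycles
print(f"idle cores after backfill: {host.idle}")
host.set_business_load(56)               # business spike reclaims the host
print(f"idle cores after the spike: {host.idle}")
```

Nothing here is clever, and that is the point: the whole policy is one while-loop in which the business load always wins and analytics only ever borrows.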
