Week in Review: Plugin Model for Digital Rebar Provision Example

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Enhancing Digital Rebar Potential with Plugins – Honeycomb Example

The open source Digital Rebar Provision (DRP) solution provides a basic set of features that are enhanced with plugins offering additional services to customers. These plugins are provided by the Digital Rebar community, customers, partners, and RackN, delivering significant value over and above the base provisioning capability of DRP.

RackN and Honeycomb developed a unique plugin during SRECon Americas a few weeks back that makes DRP events visible within the Honeycomb tool. Offering partners like Honeycomb an opportunity to integrate with DRP gives them a methodology for offering their services to the Digital Rebar community. For the community, a simple plugin capability allows the use of pre-existing infrastructure tools.

Full Post

Immutable Deployments Hands-On Lab from Interop ITX

Greg Althaus presented a hands-on lab to 50 attendees on using Digital Rebar Provision and the RackN Portal to provision a pre-built Linux image for rapid node availability.


News

RackN

Digital Rebar Community

L8ist Sh9y Podcast

Social Media

Enhancing Digital Rebar Potential with Plugins – Honeycomb Example

The open source Digital Rebar Provision (DRP) solution provides a basic set of features that are enhanced with plugins offering additional services to customers. These plugins are provided by the Digital Rebar community, customers, partners, and RackN, delivering significant value over and above the base provisioning capability of DRP.

RackN and Honeycomb developed a unique plugin during SRECon Americas a few weeks back that makes DRP events visible within the Honeycomb tool. Offering partners like Honeycomb an opportunity to integrate with DRP gives them a methodology for offering their services to the Digital Rebar community. For the community, a simple plugin capability allows the use of pre-existing infrastructure tools.

In this video example, we install the Honeycomb Plugin into a Digital Rebar Provision endpoint and activate the plugin to record and transfer events to the Honeycomb system. This demonstration also shows the process to add the plugin from the catalog and install it.
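In concept, the plugin acts as a small bridge: it listens for DRP events and forwards them as structured payloads that an observability tool can ingest. The sketch below is illustrative only, not the actual RackN/Honeycomb plugin code; the event field names (Type, Action, Key) are assumptions modeled loosely on DRP's event object, and the send parameter stands in for a real Honeycomb client.

```python
# Illustrative sketch only -- not the actual RackN/Honeycomb plugin.
# It shows the general shape of the bridge: flatten a DRP-style event
# into a structured payload that an observability tool could ingest.
# The field names (Type, Action, Key) are assumptions modeled on DRP's
# event object; `send` stands in for a real Honeycomb client.

def drp_event_to_payload(event):
    """Flatten a DRP-style event dict into a flat, structured payload."""
    return {
        "drp.type": event.get("Type", "unknown"),
        "drp.action": event.get("Action", "unknown"),
        "drp.key": event.get("Key", ""),
    }

def forward(event, send=print):
    """Forward one event to the configured sink (here: just print it)."""
    send(drp_event_to_payload(event))

# Example: a machine update event as the plugin might observe it.
forward({"Type": "machines", "Action": "update", "Key": "mach-01"})
```

Activating the real plugin from the catalog, as shown in the video, wires this kind of event forwarding up automatically; no hand-written code is required.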

We encourage all partners interested in developing a plugin to contact RackN for discussions on joint development.  For operators, register for a new account on the RackN Portal to deploy a DRP endpoint and begin a modern cloud-native approach to provisioning.

For more information:

Podcast – Baruch Sadogursky on Pipeline, Immutability, and Edge

Joining us this week is Baruch Sadogursky, Head of Developer Relations at JFrog. Baruch is an industry veteran in the management of complex software and a fantastic event speaker; I highly recommend attending his sessions at a future event. He also gives a short promotion for JFrog Swamp Up (May 16 – 18, 2018).

Highlights

  • Short overview of JFrog and its relationship to CI/CD pipelines
  • Discussion of immutability (shifting left) in deployment paradigms
  • Metadata and the impact of scale (Toyota Manufacturing Model)
  • How can I update software components with confidence?
  • Distributed programming and impact of edge computing

Topic                                                                                        Time (Minutes.Seconds)

Introduction                                                                            0.0 – 2.17
JFrog Artifactory                                                                    2.17 – 3.26
Pipeline (Starting)                                                                  3.26 – 7.53
Immutability (Shifting Left)                                                  7.53 – 11.51
Metadata (Surrounds the Artifact)                                     11.51 – 16.45
Impact of Scale (Excessive File Names)                           16.45 – 23.30 (Toyota Model)
Updating Software Components                                       23.30 – 28.32 (Pain is Instructional)
Edge Computing (IoT is Next Frontier)                              28.32 – 38.01
Wrap Up                                                                                 38.01 – END

Podcast Guest: Baruch Sadogursky

Baruch Sadogursky (a.k.a. JBaruch) is the Developer Advocate at JFrog. His passion is speaking about technology. Well, speaking in general, but doing it about technology makes him look smart, and 17 years of hi-tech experience sure helps. When he’s not on stage (or on a plane to get there), he learns about technology, people, and how they work, or more precisely, don’t work together.

He is a CNCF ambassador, Developer Champion, and a professional conference speaker on DevOps, Java and Groovy topics, and is a regular at the industry’s most prestigious events including JavaOne (where he was awarded a Rock Star award), DockerCon, Devoxx, DevOps Days, OSCON, Qcon and many others. His full speaker history is available on Lanyrd: http://lanyrd.com/profile/jbaruch/sessions/

You can follow him @jbaruch on Twitter.

Week in Review: Operational Paralysis is Real

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Mobilize your Ops Team Against Operational Paralysis  

Many IT departments struggle with keeping “the lights on” as legacy hardware and software consume significant resources, preventing the team from taking advantage of new technologies to modernize their infrastructure. These legacy systems not only consume resources but also make it difficult to find qualified experts to keep them operational; the older the technology, the harder it is to find experienced support.

Freezing older technology in place without capable support or an understanding of how the product works is certainly not an industry best practice; however, it is commonly accepted in many large IT organizations. RackN has built a single, open source platform to manage not just new technologies but also legacy services allowing IT teams to actively engage the older technology without fear.

Full Post


News

RackN

Digital Rebar Community

L8ist Sh9y Podcast

Social Media

Mobilize your Ops Team Against Operational Paralysis

Many IT departments struggle with keeping “the lights on” as legacy hardware and software consume significant resources, preventing the team from taking advantage of new technologies to modernize their infrastructure. These legacy systems not only consume resources but also make it difficult to find qualified experts to keep them operational; the older the technology, the harder it is to find experienced support. Even worse, new employees are typically not interested in working on old technology while the IT press obsesses over what comes next.

Freezing older technology in place without capable support or an understanding of how the product works is certainly not an industry best practice; however, it is commonly accepted in many large IT organizations. RackN has built a single, open source platform to manage not just new technologies but also legacy services allowing IT teams to actively engage the older technology without fear.

Issue: Expertise & the Unknown

  • Existing Infrastructure – legacy technology abounds in modern enterprise infrastructure, with few employees capable of maintaining it
  • State of the Art vs the Past – new employees are experienced in the latest technology and not interested in working on legacy solutions

Impact: Left Behind

  • Stuck in the Past – IT teams are unwilling to touch old technology that just works
  • Employee Exodus – limited future for employees maintaining the past

RackN Solution: Stagnation to Action

  • Operations Excellence – RackN’s foundational management ensures IT can operate services regardless of platform (e.g. data center, public cloud, etc.)
  • Operational Paralysis – RackN delivers a single platform capable of supporting existing solutions and newly arriving technologies, as well as preparing for future innovation down the road

The RackN team is ready to unlock your operational potential by preventing paralysis:

Get Ready, RackN is heading to Interop ITX

Next week our co-founders are headed to Las Vegas for Interop ITX. Both are speaking and are available to meet to discuss our technology, DevOps, etc. If you are interested in meeting, please contact me to set up a time. I would also like to acknowledge Rob Hirschfeld’s role as a Review Board member for the DevOps track.

Rob Hirschfeld is participating in a discussion panel and an individual talk, and Greg Althaus is running a 90-minute hands-on lab on immutable deployments.

TALKS

DevOps vs SRE vs Cloud Native

Speaker:                       Rob Hirschfeld
Date and Time:           Wednesday, May 2 from 1:00 – 1:50 pm
Location:                      Grand Ballroom G
Track:                            DevOps

DevOps is under attack because developers don’t want to mess with infrastructure. They will happily own their code into production but want to use platforms instead of raw automation. That’s changing the landscape that we understand as DevOps with both architecture concepts (CloudNative) and process redefinition (SRE).

Our speaker has been creating leading edge infrastructure and cloud automation platforms for over 15 years. His recent work in Kubernetes operations has led to the conclusion that containers and related platforms have changed the way we should be thinking about DevOps and controlling infrastructure. The rise of Site Reliability Engineering (SRE) is part of that redefinition of operations vs development roles in organizations.

In this talk, we’ll explore this trend and discuss concrete ways to cope with the coming changes. We’ll look at the reasons why SRE is attractive and get specific about ways that teams can bootstrap their efforts and keep their DevOps Fu strong.

Immutable Deployments: Taking Physical Infrastructure from Automation to Autonomy

Speaker:                                 Greg Althaus
Date and Time:                      Wednesday, May 2 from 3:00 – 4:30 pm
Location:                                 Montego C
Track:                                       Infrastructure
Format:                                    Hands-On-Session

Physical Infrastructure is the critical underpinning of every data center; however, it’s been very difficult to automate and manage. In this hands-on session, we’ll review the latest in physical layer automation that uses Cloud Native DevOps processes and tooling to make server (re)provisioning fully automatic.

Attendees will be guided through a full automated provisioning cycle using a mix of technologies including Terraform and Digital Rebar. We’ll use cloud-based physical servers from Packet.net for the test cycle so that attendees get to work with real infrastructure during the session.

By the end of the session, you’ll be able to set up your own data center provisioning infrastructure, create a pool of deployed servers, and allocate those servers using Infrastructure as Code processes. Advanced students may be able to create and deploy complete systems using locally captured images.

This session has a limited amount of seating, so an RSVP is required. Please RSVP here.

From What to How: Getting Started and Making Progress with DevOps

Speakers:                                Damon Edwards, Jayne Groll, Rob Hirschfeld, Mandy Hubbard
Date and Time:                      Thursday, May 3 from 1:00 – 1:50 pm
Location:                                 Grand Ballroom G
Track:                                       DevOps

Organizations are recognizing the benefits of DevOps, but making strides toward implementation and meeting goals may be more difficult than it seems. This panel discussion with multiple DevOps experts and practitioners will explore practices and principles attendees can apply to their own DevOps implementations, as well as metrics to put into place to track success. Track Chair Jayne Groll will moderate the discussion.


Podcast – Mark Imbriaco on SRE, Edge, and Open Source Sustainability

Joining us this week is Mark Imbriaco, Global CTO DevOps, Pivotal. Mark’s view of ops and open source from a platform perspective as it relates to SRE offers listeners a high-level approach to these concepts that is not often heard.

Highlights

  • Site Reliability Engineering – Introduction and Advanced Discussion
  • Edge Computing from Platform View
  • Open Source Projects vs Products and Sustainability
  • Monetization of Open Source Matters

Topic                                                                           Time (Minutes.Seconds)

Introduction                                                                0.0 – 3.27
Platform and Value of Platforms                            3.27 – 3.59
SRE Definition & Model                                            3.59 – 10.30 (Go Read Google Book)
SRE is not a Rebadging of Ops                               10.30 – 12.16
Why are Platforms Essential?                                 12.16 – 14.59
Edge Definition and Platform Concept                 14.59 – 21.55
Car Compute at Traffic Intersections                     21.55 – 25.29
Open Source Projects vs Products                       25.29 – 38.18
Open Source Monetization vs Free                       38.18 – 45.33 (Support Vampires)
SRE to Edge to Open Source                                 45.33 – 47.03 (3 Scenarios)
Wrap Up                                                                    47.03 – END

Podcast Guest:  Mark Imbriaco, Global CTO DevOps at Pivotal

Mark Imbriaco is currently Global CTO DevOps at Pivotal. Prior to that Mark Imbriaco was VP, Technical Operations at DigitalOcean.

He is a Technical Operations and Software Development leader with 20 years of experience at some of the most innovative companies in the industry; his experience runs the gamut from service provider to software-as-a-service to cloud infrastructure and platforms.

Week in Review: Data Center 2020 Blog Series Update on Data Centric Computing

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

DC2020: Putting the Data Back in the Data Center

For the past two decades, data centers have been more about compute than data, but the machine learning and IoT revolutions are changing that focus for the 2020 Data Center (aka DC2020). My experience at IBM Think 2018 suggests that we should be challenging our compute-centric view of a data center; instead, we should be considering the flow and processing of data. Since data is not localized, that reinforces our concept of DC2020 as a distributed and integrated environment.

As an industry, we are rethinking management automation from declarative (“start this”) to intent (“maintain this”) focused systems.  This is the simplest way to express the difference between OpenStack and Kubernetes. That change is required to create autonomous infrastructure designs; however, it also means that we need to change our thinking about infrastructure as something that follows data instead of leads it.

Read Post and Full DC2020 Blog Series


News

RackN

Digital Rebar Community 

L8ist Sh9y Podcast

Social Media

DC2020: Putting the Data back in the Data Center

For the past two decades, data centers have been more about compute than data, but the machine learning and IoT revolutions are changing that focus for the 2020 Data Center (aka DC2020). My experience at IBM Think 2018 suggests that we should be challenging our compute-centric view of a data center; instead, we should be considering the flow and processing of data. Since data is not localized, that reinforces our concept of DC2020 as a distributed and integrated environment.

We have defined data centers by the compute infrastructure stored there. Cloud (especially equated with virtualized machines) has been an infrastructure as a service (IaaS) story. Even big data “lakes” are primarily compute clusters with distributed storage. This model dominates because, with data sources locked in application silos, control of the compute translates directly to control of the data.

What if control of data is being decoupled from applications? Data is becoming its own thing with new technologies like machine learning, IoT, blockchain, and other distributed sourcing.

In a data-centric model, we are more concerned with movement of and access to data than with building applications to control it. Think of event-driven (serverless) and microservice platforms that effectively operate on data-in-flight. It will become impossible to actually know all the ways that data is manipulated as function-as-a-service progresses because there are no longer boundaries for applications.

This data-centric, distributed architecture model will be even more pronounced as processing moves out of data centers and into the edge. IT infrastructure at the edge will be used for handling latency critical data and aggregating data for centralization. These operations will not look like traditional application stacks: they will be data processing microservices and functions.

This data centric approach relegates infrastructure services to a subordinate role. We should not care about servers or machines except as they support platforms driving data flows.

I am not abandoning making infrastructure simple and easy – we need to do that more than ever! However, it’s easy to underestimate the coming transformation of application architectures based on advanced data processing and sharing technologies. The amount and sources of data have already grown beyond human comprehension because we still think of applications in a client-server mindset.

We’re only at the start of really embedding connected sensors and devices into our environment. As devices from many sources and vendors proliferate, they also need to coordinate. That means we’re reaching a point where devices will start talking to each other locally instead of via our centralized systems. It’s part of the coming data avalanche.

Current management systems will not survive explosive growth.  We’re entering a phase where control and management paradigms cannot keep up.

As an industry, we are rethinking management automation from declarative (“start this”) to intent (“maintain this”) focused systems.  This is the simplest way to express the difference between OpenStack and Kubernetes. That change is required to create autonomous infrastructure designs; however, it also means that we need to change our thinking about infrastructure as something that follows data instead of leads it.
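The declarative-versus-intent distinction can be sketched in a few lines of code: a declarative system issues "start this" once and stops watching, while an intent-driven system runs a reconciliation loop that keeps converging observed state toward desired state. This is a toy illustration of the pattern, not real OpenStack or Kubernetes code.

```python
# Toy illustration of "start this" (declarative) vs "maintain this"
# (intent) -- not real OpenStack or Kubernetes code.

def declarative_start(actual, desired):
    """Declarative: issue the request once, then stop watching."""
    if actual["replicas"] < desired["replicas"]:
        actual["replicas"] = desired["replicas"]  # one-shot action
    return actual

def reconcile(actual, desired):
    """Intent: converge observed state toward desired state, step by step."""
    while actual["replicas"] != desired["replicas"]:
        if actual["replicas"] < desired["replicas"]:
            actual["replicas"] += 1  # start a missing instance
        else:
            actual["replicas"] -= 1  # stop a surplus instance
    return actual

# An intent-driven system reruns reconcile() on every observed change,
# so if an instance crashes, the system replaces it automatically.
state = reconcile({"replicas": 1}, {"replicas": 3})
```

The key property is that reconcile() can be rerun at any time from any starting state, which is what makes autonomous, self-maintaining infrastructure designs possible.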

That’s exactly what RackN has solved with Digital Rebar Provision. Deeply composable, simple APIs and extensible workflows are essential components for integrated automation in DC2020 to put the data back in the data center.

Podcast – John Willis on Docker, Open Source Financing Challenges and Industry Failures

Joining us this week is John Willis, VP DevOps and Digital Practices at SJ Technologies, known for many things, including being at the initial DevOps meeting in Europe, co-founding the DevOpsDays events, and the DevOps Café podcast.

Highlights

  • Introduction to the Phoenix Project and the new audio Beyond the Phoenix Project
  • Docker discussion and the issues around its success being tied to its ecosystem’s success
  • Developer vs operations split for two different audiences
  • Issue of sustaining open source technology and lack of financing to support this
  • Revenue arc vs viral adoption for open source model
  • Three reasons to choose open source model for software

Topic                                                                                        Time (Minutes.Seconds)

Introduction                                                                             0.0 – 3.10
Beyond the Phoenix Project & DevOps Café                     3.10 – 6.50
Docker Discussion & Ecosystem Success                         6.50 – 17.55 (Moby Project & Community)
Developer vs Operation Split                                               17.55 – 20.31 (Docker is pdf of Containers)
Free Software vs Open Software / Pay for Sustaining    20.31 – 44.29 (VC Funding Issues)
Three Reasons for Open Source                                         44.29 – 48.01
Goal is Not to Hurt People                                                    48.01 – 54.13 (Toyota Example)
Wrap-Up                                                                                  54.13 – END

Podcast Guest: John Willis, VP DevOps and Digital Practices, SJ Technologies

John Willis is Vice President of DevOps and Digital Practices at SJ Technologies. Prior to SJ Technologies he was the Director of Ecosystem Development for Docker, which he joined after the company he co-founded (SocketPlane, which focused on SDN for containers) was acquired by Docker in March 2015. Previous to founding SocketPlane in Fall 2014, John was the Chief DevOps Evangelist at Dell, which he joined following the Enstratius acquisition in May 2013. He has also held past executive roles at Opscode/Chef and Canonical/Ubuntu. John is the author of 7 IBM Redbooks and is co-author of “The DevOps Handbook” along with authors Gene Kim, Jez Humble, and Patrick Debois. The best way to reach John is through his Twitter handle @botchagalupe.