
About spector13

Cloud Evangelist at Dell and, until recently, open source community manager for OpenStack and Xen.org.

Mobilize your Ops Team Against Operational Paralysis

Many IT departments struggle just to keep “the lights on”: legacy hardware and software consume significant resources, preventing the team from adopting new technologies to modernize their infrastructure. These legacy systems not only consume resources but are also hard to staff, since the older the technology, the harder it is to find experienced support. Even worse, new employees are typically not interested in working on old technology while the IT press obsesses over what comes next.

Freezing older technology in place without capable support or an understanding of how the product works is certainly not an industry best practice; however, it is commonly accepted in many large IT organizations. RackN has built a single, open source platform to manage not just new technologies but also legacy services, allowing IT teams to actively engage older technology without fear.

Issue: Expertise & the Unknown

  • Existing Infrastructure – legacy technology abounds in modern enterprise infrastructure, with few employees capable of maintaining it
  • State of the Art vs the Past – new employees are experienced in the latest technology and not interested in working on legacy solutions

Impact: Left Behind

  • Stuck in the Past – IT teams are unwilling to touch old technology that just works
  • Employee Exodus – limited future for employees maintaining the past

RackN Solution: Stagnation to Action

  • Operations Excellence – RackN’s foundational management ensures IT can operate services regardless of platform (e.g. data center, public cloud, etc)
  • Operational Paralysis – RackN delivers a single platform capable of supporting existing solutions and newly arriving technologies, as well as preparing for future innovation down the road

The RackN team is ready to unlock your operational potential by preventing paralysis.

Get Ready, RackN is heading to Interop ITX

Next week our co-founders are headed to Las Vegas for Interop ITX. Both are speaking and are available to meet to discuss our technology, DevOps, and more; if you are interested in meeting, please contact me to set up a time. I would also like to acknowledge Rob Hirschfeld’s role as a Review Board member for the DevOps track.

Rob Hirschfeld is participating in a discussion panel and giving an individual talk, and Greg Althaus is running a 90-minute hands-on lab on immutable deployments.

TALKS

DevOps vs SRE vs Cloud Native

Speaker: Rob Hirschfeld
Date and Time: Wednesday, May 2 from 1:00 – 1:50 pm
Location: Grand Ballroom G
Track: DevOps

DevOps is under attack because developers don’t want to mess with infrastructure. They will happily own their code into production but want to use platforms instead of raw automation. That’s changing the landscape that we understand as DevOps with both architecture concepts (CloudNative) and process redefinition (SRE).

Our speaker has been creating leading-edge infrastructure and cloud automation platforms for over 15 years. His recent work in Kubernetes operations has led to the conclusion that containers and related platforms have changed the way we should be thinking about DevOps and controlling infrastructure. The rise of Site Reliability Engineering (SRE) is part of that redefinition of operations vs development roles in organizations.

In this talk, we’ll explore this trend and discuss concrete ways to cope with the coming changes. We’ll look at the reasons why SRE is attractive and get specific about ways that teams can bootstrap their efforts and keep their DevOps Fu strong.

Immutable Deployments: Taking Physical Infrastructure from Automation to Autonomy

Speaker: Greg Althaus
Date and Time: Wednesday, May 2 from 3:00 – 4:30 pm
Location: Montego C
Track: Infrastructure
Format: Hands-On Session

Physical Infrastructure is the critical underpinning of every data center; however, it’s been very difficult to automate and manage. In this hands-on session, we’ll review the latest in physical layer automation that uses Cloud Native DevOps processes and tooling to make server (re)provisioning fully automatic.

Attendees will be guided through a full automation provisioning cycle using a mix of technologies including Terraform and Digital Rebar. We’ll use cloud-based physical servers from Packet.net for the test cycle so that attendees get to work with real infrastructure even during the session.

By the end of the session, you’ll be able to set up your own data center provisioning infrastructure, create a pool of deployed servers, and allocate those servers using Infrastructure-as-Code processes. Advanced students may be able to create and deploy complete images using locally captured images.
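The lab supplies its own materials; as a rough sketch of the Infrastructure-as-Code loop it describes (the plan file name below is arbitrary, and the Terraform configuration for the Packet servers comes from the lab itself):

# Review and apply a Terraform configuration that requests the Packet
# bare-metal servers used in the lab; Digital Rebar then handles the
# OS (re)provisioning of those servers.
terraform init                 # download provider plugins
terraform plan -out=lab.plan   # preview the servers to be created
terraform apply lab.plan       # create them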

Seating for this session is limited, so an RSVP is required.

From What to How: Getting Started and Making Progress with DevOps

Speakers: Damon Edwards, Jayne Groll, Rob Hirschfeld, Mandy Hubbard
Date and Time: Thursday, May 3 from 1:00 – 1:50 pm
Location: Grand Ballroom G
Track: DevOps

Organizations are recognizing the benefits of DevOps, but making strides toward implementation and meeting goals may be more difficult than it seems. This panel discussion with multiple DevOps experts and practitioners will explore practices and principles attendees can apply to their own DevOps implementations, as well as metrics to put into place to track success. Track Chair Jayne Groll will moderate the discussion.

 

Podcast – Mark Imbriaco on SRE, Edge, and Open Source Sustainability

Joining us this week is Mark Imbriaco, Global CTO, DevOps at Pivotal. Mark’s platform-centric view of operations and open source as they relate to SRE offers listeners a perspective on these concepts that is not often heard.

Highlights

  • Site Reliability Engineering – Introduction and Advanced Discussion
  • Edge Computing from Platform View
  • Open Source Projects vs Products and Sustainability
  • Monetization of Open Source Matters

Topic (Time in Minutes.Seconds)

  • Introduction: 0.0 – 3.27
  • Platform and Value of Platforms: 3.27 – 3.59
  • SRE Definition & Model: 3.59 – 10.30 (Go Read Google Book)
  • SRE is not a Rebadging of Ops: 10.30 – 12.16
  • Why are Platforms Essential?: 12.16 – 14.59
  • Edge Definition and Platform Concept: 14.59 – 21.55
  • Car Compute at Traffic Intersections: 21.55 – 25.29
  • Open Source Projects vs Products: 25.29 – 38.18
  • Open Source Monetization vs Free: 38.18 – 45.33 (Support Vampires)
  • SRE to Edge to Open Source: 45.33 – 47.03 (3 Scenarios)
  • Wrap Up: 47.03 – END

Podcast Guest: Mark Imbriaco, Global CTO, DevOps at Pivotal

Mark Imbriaco is currently Global CTO, DevOps at Pivotal; prior to that he was VP of Technical Operations at DigitalOcean.

He is a technical operations and software development leader with 20 years of experience at some of the most innovative companies in the industry, with a background that runs the gamut from service providers to software-as-a-service to cloud infrastructure and platforms.

Week in Review: Data Center 2020 Blog Series Update on Data Centric Computing

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

DC2020: Putting the Data Back in the Data Center

For the past two decades, data centers have been more about compute than data, but the machine learning and IoT revolutions are changing that focus for the 2020 Data Center (aka DC2020). My experience at IBM Think 2018 suggests that we should be challenging our compute centric view of a data center; instead, we should be considering the flow and processing of data. Since data is not localized, that reinforces our concept of DC2020 as a distributed and integrated environment.

As an industry, we are rethinking management automation from declarative (“start this”) to intent (“maintain this”) focused systems. This is the simplest way to express the difference between OpenStack and Kubernetes. That change is required to create autonomous infrastructure designs; however, it also means that we need to change our thinking about infrastructure as something that follows data instead of leading it.
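To make that contrast concrete, here is a minimal sketch (the image, flavor, and replica values are placeholders): an OpenStack request starts a server once, while a Kubernetes Deployment declares a state that the control plane keeps maintaining.

# "Start this": a one-shot request; nothing restores the server if it later dies.
openstack server create --image centos7 --flavor m1.small web-01

# "Maintain this": declare the desired state; the Kubernetes controllers
# continuously reconcile toward it, replacing failed pods automatically.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.15
EOF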

Read Post and Full DC2020 Blog Series



Podcast – John Willis on Docker, Open Source Financing Challenges and Industry Failures

Joining us this week is John Willis, VP of DevOps and Digital Practices at SJ Technologies, known for many things including being at the initial DevOps meeting in Europe, co-founding the DevOpsDays events, and co-hosting the DevOps Café podcast.

Highlights

  • Introduction to The Phoenix Project and the new audio production Beyond the Phoenix Project
  • Docker, and the issues around its success being tied to the success of its ecosystem
  • The developer vs operations split across two different audiences
  • The challenge of sustaining open source technology and the lack of financing to support it
  • Revenue arc vs viral adoption in open source models
  • Three reasons to choose an open source model for software

Topic (Time in Minutes.Seconds)

  • Introduction: 0.0 – 3.10
  • Beyond the Phoenix Project & DevOps Café: 3.10 – 6.50
  • Docker Discussion & Ecosystem Success: 6.50 – 17.55 (Moby Project & Community)
  • Developer vs Operations Split: 17.55 – 20.31 (Docker is the PDF of Containers)
  • Free Software vs Open Software / Pay for Sustaining: 20.31 – 44.29 (VC Funding Issues)
  • Three Reasons for Open Source: 44.29 – 48.01
  • Goal is Not to Hurt People: 48.01 – 54.13 (Toyota Example)
  • Wrap-Up: 54.13 – END

Podcast Guest: John Willis, VP DevOps and Digital Practices, SJ Technologies

John Willis is Vice President of DevOps and Digital Practices at SJ Technologies. Prior to SJ Technologies he was the Director of Ecosystem Development for Docker, which he joined after the company he co-founded (SocketPlane, which focused on SDN for containers) was acquired by Docker in March 2015. Prior to founding SocketPlane in fall 2014, John was the Chief DevOps Evangelist at Dell, which he joined following the Enstratius acquisition in May 2013. He has also held executive roles at Opscode/Chef and Canonical/Ubuntu. John is the author of 7 IBM Redbooks and co-author of “The DevOps Handbook” along with Gene Kim, Jez Humble, and Patrick Debois. The best way to reach John is through his Twitter handle @botchagalupe.

Week in Review: Digital Rebar Provision v3.8.0 Release

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

Digital Rebar Provision Announces v3.8.0 – Workflows

Workflows are now a first-class element of the system: they have their own API endpoint and a field on the machine object, and they simplify all the math that used to live in the change-stage/map. (A rough CLI sketch follows the feature list below.)

Key Features in the Release:

  • Workflows
    • Create a Workflow object that replaces the change-stage/map method for changing stages on machines
    • Maintain backwards compatibility with the change-stage/map system
    • Update the Machine object to have workflow as a first-class field
    • Update validations to properly control workflow states
    • Allow events to be published but not propagated, enabling local log-file logging of events without loops
    • Add a Windows-based drpcli to DRP
    • And more
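As a rough sketch of the new model (the workflow name, stage list, and machine UUID below are placeholders; consult the v3.8.0 release notes and drpcli documentation for the exact commands and fields):

# Create a Workflow object: an ordered list of stages that replaces the
# old change-stage/map bookkeeping.
drpcli workflows create '{ "Name": "centos-base", "Stages": ["centos-7-install", "runner-service", "complete"] }'

# Set the machine's new first-class Workflow field; DRP then walks the
# machine through the stages in order.
drpcli machines update 3945838b-e822-4c9c-9c65-a1b9e6e48d12 '{ "Workflow": "centos-base" }'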


Immutable Image Deployment from Digital Rebar Mastered Golden Image

Shane Gibson, Sr. Architect and Community Evangelist at RackN, created a new Digital Rebar Provision (DRP) video highlighting immutable provisioning from a “golden image” as well as the ability to create that “golden image” from within Digital Rebar Provision.

Highlights:

  • Immutable Image Deployment Solution to 20 Target Bare Metal Machines
  • Creation of a “Golden Image” in Digital Rebar Provision
  • Detailed Overview of the RackN Portal UX to Support this Demo

More information is available from the Digital Rebar community and the Digital Rebar Provision documentation.

Podcast – Erica Windisch on Observability of Serverless, Edge Computing, and Abstraction Boundaries

Joining us this week is Erica Windisch, Founder/CTO at IOpipe, a high-fidelity metrics and monitoring service that allows you to see inside AWS Lambda functions for better insight into the daily operations and development of serverless applications.

Highlights

  • Intro of AWS Lambda and IOpipe
  • Discussion of Observability and Opaqueness of Serverless
  • Edge Computing Definition and Vision
  • End of Operating Systems and Abstraction Boundaries

Topic (Time in Minutes.Seconds)

  • Introduction: 0.0 – 1.16
  • Vision of the technology future: 1.16 – 3.04 (Containers ~ Docker)
  • Complexity of initial experience with new tech: 3.04 – 5.38 (Devs don’t go deep in OS)
  • Why Lambda?: 5.38 – 8.14 (Deploy functions)
  • What does IOpipe do?: 8.14 – 10.54 (Observability for calls)
  • Lambda and Integration into IOpipe: 10.54 – 13.48 (Overhead)
  • Observability definition: 13.48 – 17.25
  • Opaque systems with Lambda: 17.25 – 21.13
  • Serverless frameworks still need tools to see inside: 21.13 – 24.20 (Distributed Issues Day 1)
  • Edge computing definition: 24.20 – 26.56 (Microprocessor in Everything)
  • Edge infrastructure vision: 26.56 – 29.32 (TensorFlow example)
  • Portability of containers vs functions: 29.32 – 31.00 (Linux is Dying)
  • Abstraction boundaries: 31.00 – 33.50 (Immutable Infra Panel)
  • Is Serverless the portability unit for abstraction?: 33.50 – 39.46 (Amazon Greengrass)
  • Wrap Up: 39.46 – END

 

Podcast Guest: Erica Windisch, Founder/CTO at IOpipe

Erica Windisch is the founder and CTO of IOpipe, a company that builds, connects, and scales code. She was previously a software and security engineer at Docker. Before joining Docker, she worked as a principal engineer at Cloudscaling. She studied at the Florida Institute of Technology.

 

Week in Review: Provision Physical and Virtual from a Single Platform

Welcome to the RackN and Digital Rebar Weekly Review. You will find the latest news related to Edge, DevOps, SRE and other relevant topics.

RackN Now Provisions Virtual Machines, Not Just Physical Machines

This expansion allows Digital Rebar Provision (DRP) users to provision not only physical infrastructure but virtual machines as well, both locally and in clouds. In this simple demo video we show how to connect a virtual platform to DRP and provision virtual machines alongside your bare metal infrastructure.

Learn More



Create your first CentOS 7 Machine on RackN Portal with Digital Rebar Provision

This is the third blog in a series demonstrating the steps required to complete a series of tasks in the RackN Portal using Digital Rebar Provision.

Prerequisite

You will need an account on the RackN Portal with an active Digital Rebar Provision endpoint running. In this How To, I am using Packet.net for my infrastructure as I have no local hardware available to build a local system.
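Before starting, it is worth confirming that the endpoint answers from wherever you will run drpcli. A minimal check, assuming the default DRP API port (8092) and the stock credentials (substitute your own if you have changed them):

# drpcli reads the endpoint and credentials from these environment variables.
export RS_ENDPOINT="https://<your-drp-ip>:8092"
export RS_KEY="rocketskates:r0cketsk8ts"   # default username:password

drpcli info get        # should return endpoint version and feature info as JSON
drpcli machines list   # an empty list is fine at this point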

For information on creating a Digital Rebar Provision endpoint and connecting it to the RackN Portal, please see the two prior How To blogs in this series.

Step 1: Create a new Machine on Packet.net

The RackN Portal needs a physical machine for Digital Rebar Provision (DRP) to discover and track in the Machine section of the UX. I am providing steps to create that machine on my Packet.net account:

  • Log in to your Packet.net account

In the image above, I show my DRP endpoint (spectordemo-drp-ewr1-00) and a machine (spectordemo-machines-ewr1-01) I created during the Deploy and Test DRP in less than 10 Minutes How To guide. Note – my machines are Type 0, which costs about $0.07 an hour to run, and they are located in the EWR1 Packet.net data center.

  • Select +Add New to create a new physical machine on Packet.net

Enter the following information for the entry fields on the “Deploy on Demand” page:

  • Hostname: Enter anything you want with a .com (e.g. spectortest.com)
  • Location: Choose the same location as your endpoint – see the screen above (e.g. EWR1)
  • Type: Type 0 (the cheapest machine, ~$0.07 per hour)
  • OS: Custom iPXE; a new field will appear below that selection area after choosing Custom iPXE
    • Enter the http address of your endpoint with “/default.ipxe” at the end so you get “http://#.#.#.#:8091/default.ipxe” (NOTE – the RackN Portal address uses :8092; be sure to switch to :8091 here). A quick way to check this URL is sketched after this list.
  • Select the “User Data” button and a new pop-up screen will appear; select SAVE
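Before booting the machine, you can confirm the endpoint is actually serving that iPXE script. A quick check from any host that can reach the endpoint, assuming the default DRP file-server port of 8091:

# Fetch the boot script that Packet's Custom iPXE option will chain to.
# Expect an iPXE script (first line "#!ipxe"), not a 404 or a connection error.
curl -s http://<your-drp-ip>:8091/default.ipxe | head -5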

Packet will then show the new machine as it is set up, with its status color going from yellow to green. If you click “View Progress” you can monitor the machine start.

Within a few minutes, the machine will switch from yellow to green, at which point you will have created a new physical machine to provision with DRP.

Step 2: Provision a new CentOS 7 Machine from within the RackN Portal

  • Prepare the Global Workflow

The default Workflow available needs to be removed if you are working with Packet.net machines. If your screen does not look like the final Workflow image shown below, take the following steps:

  1. Delete the Workflow by clicking “Remove” on each step until it is removed
  2. Click the Workflow Wizard to create the 3 Stages shown below

The final Workflow page should look like the image below with three separate Stages and follow-on steps for processing.

  • Confirm new Machine is Visible to RackN Portal

The newly created machine on Packet.net should now be visible in your Bulk Actions page as shown below. The Stage will be set to “sledgehammer-wait” and the BootEnv to “sledgehammer.”

If the Stage for the new machine is not correct, reboot the machine using the Plugin Action -> powercycle option. The machine should then be set to the proper Stage and BootEnv as shown above.

  • Change the Stage and BootEnv to CentOS 7 Settings

Before this final step, check the machine’s settings in Packet.net to be sure PXE Boot is set to YES/ON.

On the Bulk Actions page, you can change the Stage and BootEnv settings. Select the newly created machine, set the Stage to “centos-7-install” as shown below, and then click the four-arrow button.

Once complete you will see the following setup on the Bulk Action page.
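If you prefer the command line to the Bulk Actions page, the same change can be made with drpcli. This is only a sketch: the UUID is a placeholder (find yours with “drpcli machines list”), and the stage’s associated BootEnv is normally applied along with the stage.

# Move the machine to the CentOS 7 install stage.
drpcli machines update 3945838b-e822-4c9c-9c65-a1b9e6e48d12 '{ "Stage": "centos-7-install" }'

# Verify the Stage and BootEnv now show the CentOS 7 settings.
drpcli machines show 3945838b-e822-4c9c-9c65-a1b9e6e48d12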

  • Reboot the new Machine in Packet.net

The final step to provision this new machine from DRP is to change the Plugin Action option to “powercycle” and press the hand-with-finger-pointing-down button. Of course, make sure your machine is selected as shown in the image above.

Step 3: Monitor the Installation of CentOS 7 on the new Machine

To monitor the activity on your new machine you will need to ssh into it from a terminal window. To get the SOS console address, I selected the new machine in the RackN Portal and grabbed the content from the >_packet/sos: line. In this case I used 9a17d7d1-fa74-4757-8683-82b57e8e3ed2@sos.sjc1.packet.net.

In the same directory where you ran the “pkt-demo” How To from the first blog, you will see a file like “spectordemo-machines-ssh-key” (the name depends on what you chose in that blog). Run this command:

ssh -i spectordemo-machines-ssh-key 9a17d7d1-fa74-4757-8683-82b57e8e3ed2@sos.sjc1.packet.net

This will connect you to the new machine’s console so you can see activity. While the machine is waiting at sledgehammer-wait, you will see it sitting idle in the Sledgehammer discovery environment.

Once the reboot from the previous step (Reboot the new Machine in Packet.net) is executed, you will see the machine shut down and your session disconnect. Run the same ssh command again and you can watch the console while the machine reboots.

The machine will then move into the CentOS 7 install, and you will see a stream of Linux install output scroll by.

This completes the provisioning of a new machine on Packet.net using the RackN Portal Workflow process.