Provision Virtual Machines with an Open Source Physical Infrastructure Solution

Rob Hirschfeld, CEO/Co-Founder of RackN, created a new Digital Rebar Provision (DRP) video highlighting the creation of virtual machines within the standard automation process. Highlights:

  • Create a New Virtual Machine from the Physical Provisioning Tool – DRP
  • VirtualBox IPMI Plugin – Preview of Pre-Release Tool
  • RackN Portal will inventory virtual machines available on the network for management
  • Packet IPMI Plugin – enables creation of VMs on Packet cloud hardware

This expansion allows DRP users to provision not only physical infrastructure but also virtual machines, both locally and in clouds.

More information on the Digital Rebar community and Digital Rebar Provision is available from the project site and documentation.

RackN Portal Management Connection to the 10 Minute Demo

In my previous blog, I provided step-by-step directions to install Digital Rebar Provision on a new endpoint and create a new node using Packet.net for users without a local hardware setup. (Demo Tool on GitHub) In this blog, I will introduce the RackN Portal and connect it to the active setup running on Packet.net at the end of the demo process.

NOTE – You will need to run the demo process again to have both the DRP installation and endpoint active on Packet.net.

Current Status

There will be two machines running in Packet:

  • Digital Rebar Provision running on an Endpoint
  • A new physical node provisioned by DRP

To have run the process in the previous blog, you will already have created a RackN Portal account to get the RackN code added to the Secrets file.

Steps to Connect RackN Portal

When you first go to the RackN Portal you will see the following screen:

The first step is to enter the Endpoint Address, which comes from the Packet.net Endpoint server set up in the previous blog. To get the address, go to the “Configure DRP” step; you will see the following output, which contains the Endpoint HTTPS address:

running ACTION:  drp-setup-demo
+ set +x
+ drpcli --endpoint=https://147.##.##.63:8092 bootenvs uploadiso centos-7-install
{
  "Path": "CentOS-7-x86_64-Minimal-1708.iso",
  "Size": 830472192
}
+ set +x
{
  "centos-7-install": "packet-ssh-keys:Success",
  "discover": "packet-discover:Success",
  "packet-discover": "centos-7-install:Reboot",
  "packet-ssh-keys": "complete-nowait:Success"
}

Enter the HTTPS address shown there (https://147.##.##.63:8092 in this example) into the Endpoint Address field and press the blue arrow. You will then be taken to the login screen where you enter the standard login info:

Select “Defaults” to have the system fill in the Login information. If you need more information on this screen, please review the Install Guide.
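If you prefer to double-check from the command line before using the Portal, you can point drpcli at the same endpoint. This is a minimal sketch: the address is the masked example above, and the username/password values assume the standard Digital Rebar Provision install defaults (the same values the “Defaults” button fills in); adjust for your setup.

# Sketch: confirm the endpoint responds and the credentials work
drpcli --endpoint=https://147.##.##.63:8092 --username=rocketskates --password=r0cketsk8ts info get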

RackN Portal Tour

After completing the login your RackN Portal screen will look like this:

At this point, we want to see the new node that was created in the final step of our demo process. Select “Machines” in the left-hand navigation below SYSTEM and you will see the new machine that was created. NOTE – The red X next to Subnets is expected for Packet.net infrastructure.

You can confirm this machine name against the name of the machine shown in the last stage of the demo process. Both the RackN Portal and the data below indicate that I have created a new node called “spectordemo-machines-ewr1-01”.
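You can also confirm the name from the command line by listing machines with drpcli; a quick sketch using the same assumed endpoint address and default credentials as above:

# Sketch: list machines registered with the endpoint and check the new node's Name
drpcli --endpoint=https://147.##.##.63:8092 --username=rocketskates --password=r0cketsk8ts machines list | grep '"Name"'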

Selecting the newly created machine, you will see the following information:

In the next blog, we will use the RackN Portal to create a second node and look at the Workflow process to install an operating system on both nodes.

If you have any questions or would like to get started learning more about Digital Rebar Provision and RackN please join the Slack community.

Deploy and Test Digital Rebar Provision in less than 10 Minutes : How To Guide

Part 1 of 3 in Digital Rebar Provision How To Blog Series

For operators looking to better understand Digital Rebar Provision (DRP), RackN has developed an easy-to-follow process leveraging Packet.net for physical device creation. This process allows new users to create a physical DRP endpoint and then provision a new physical node on Packet. Information and code to run this guide is available at https://github.com/digitalrebar/provision/tree/master/examples/pkt-demo.
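If you want to pull the guide down ahead of time, cloning it looks roughly like this (a sketch only; the directory path is inferred from the repository URL above, and the clone itself is repeated as the first step of the PROCESS section):

git clone https://github.com/digitalrebar/provision.git
cd provision/examples/pkt-demo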

In this blog, I will take the reader through the process with images captured while running it on my Mac.

SETUP

  • You will need an account on Packet at https://www.packet.net/. I created a personal account and entered a credit card to pay for the services used. The cost on Packet to run this is minimal.
    • From your Packet.net account you will need to create a NEW Project and an API Key. The API key will look like 7DE1Be6NLjGP6KUH4mbUAbysjwOx9kHo and the Project will look like b5d29881-8561-4f3b-8efb-2d61003fe2e7. NOTE – The values shown are changed and will not work in Packet.
  • You will need an account on the RackN Portal via https://portal.rackn.io. From this account you will need your Username which looks like t98743fk-3865-4315-8d11-11127p9e41bd. NOTE – The value shown is not a valid Username.
  • Mac Users – I needed to have Homebrew installed on my machine to run this demo script. Run the 2 steps below…

PROCESS

  • Git Clone the guide (DO NOT run w/ “sudo”)
  • Edit the Secrets file with Packet and RackN Portal info from Setup
    • vi private-content/secrets

# specify your API KEY that has access to PROJECT ID below
API="insert_api_key_here"
# specify the PROJECT ID that API KEY has access to
PROJECT="insert_project_id_here"
# RackN Username – necessary to download registered (but free) content packs
USERNAME="insert username here"

  • Run the demo-run.sh Script
    • ./demo-run.sh : this will launch the guide and you will see the Digital Rebar bear along with a request to run the next step

  • <RETURN> “Install Terraform”

  • <RETURN> “Install Secrets”

  • <RETURN> “Generate Public/Private RSA Keys”

  • <RETURN> “Packet SSH Key”

  • <RETURN> “2nd Packet SSH Key”

  • <RETURN> Creating the DRP Endpoint on Packet

  • <RETURN> Create a Terraform Plan

  • <RETURN> Download DRP to Endpoint

  • <RETURN> SSH Keygen

  • <RETURN> SSH Keyscan

  • <RETURN> Install DRP onto Packet Host Endpoint

Additional Installation Content Not Shown

  • <RETURN> Configure DRP

Additional Configuration Content Not Shown

NOTE – Getting a FAILED at this stage is expected and you should continue

  • <RETURN> Setup DRP Endpoint

  • <RETURN> Create new Packet Physical Node from DRP Endpoint

At this point you will have 2 machines running in Packet:

  • Digital Rebar Provision running on an Endpoint
  • A new physical node provisioned by DRP
  • To clean up this process and shut down the 2 Packet machines, run the following command: ./bin/control.sh cleanup
    • It will clean up Packet as well as reset all files back to the original state from when the repository was cloned from GitHub.

In my next blog, I will introduce the process to connect your Packet Endpoint machine to the RackN Portal so you can see the newly created node and begin working with it from the RackN Portal.

If you have any questions, please leverage the RackN Slack #Community channel where Digital Rebar community members and RackN engineers are available to assist.

Terraform Bare Metal – A Leap forward for SDx

Software Defined Infrastructure (SDx) allows operators to manage data centers in a more consistent and controlled way. It allows teams to define their environment as code and use automation to execute that definition in practice. To deliver this capability for physical (aka bare metal) servers, RackN has created a Digital Rebar provider for Terraform. The provider is a simple addition that takes just seconds to enable. (Video Demonstrations at End of Blog)

The Terraform Bare Metal provider allows plans to provision and recover servers using a node resource.

The operation of this provider is simple and relies on standard workflow stages in Digital Rebar. Adding the Terraform Content Package installs a new stage that adds Terraform parameters. Including this stage in the global workflow will automatically register machines as available for Terraform. The integration uses two parameters to manage the server pool: Terraform Managed and Terraform Assigned.

When the Terraform provider asks for a node resource, it queries the Digital Rebar API for machines that are managed (true) and not assigned (false), plus whatever additional filters were specified in the plan. The provider then uses the API to set assigned to true and the requested Stage (e.g. centos-install), and polls until the node enters the Complete stage. The destroy action reverses the process to release the node. Digital Rebar uses the stage changes as a trigger to restart the machine workflow.

Using a Terraform plan with Digital Rebar, operators can manage complex data center layouts from a single command line.
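From the operator's point of view, that single command line is just an ordinary Terraform run. Here is a sketch of the flow, assuming a plan file (not shown here) that declares Digital Rebar node resources as described above:

# Sketch: driving bare metal through the Digital Rebar provider
terraform init       # initialize the working directory and providers
terraform plan       # preview which managed, unassigned machines would be claimed
terraform apply      # marks nodes assigned, sets the requested stage, polls until Complete
terraform destroy    # reverses the process and returns nodes to the pool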

For users, all of the above steps are completely hidden. Operators can monitor the request using the Digital Rebar UX to ensure the plan is executing. In addition, plan metadata can set user or identification values on the machines when they are reserved to help track allocations. In this way, administrators can easily track and account for machines reserved via Terraform.

For full out-of-band control, users should add the RackN IPMI plugin. This adds the ability to force power states during plan execution. The provider does not require out-of-band management to function. RackN also maintains Packet.net and VirtualBox plugins with the same API as the IPMI plugin. This allows developers to easily test plans against virtual or cloud resources.

RackN customers are making big plans to use this simple and powerful integration to manage their own SDx roadmap. We’re excited to hear about new ways to improve data center operations, especially new edge ideas. Let us know what you are thinking!

Demonstration of Terraform Bare Metal Provisioning with Digital Rebar Provision V3.2

Setting up the Environment to run Digital Rebar Provision V3.2 for Terraform

Podcast with Zach Smith talking Bare Metal and AWS Training Wheels

Joining this week’s L8ist Sh9y Podcast is Zach Smith, CEO of Packet and long-time champion of bare metal hardware. Rob Hirschfeld and Zach discuss the trends in bare metal, the impact of AWS changing the way developers view infrastructure, and issues between networking and server groups in IT organizations.

Topic                                                Time (Minutes.Seconds)

Introduction                                         0.00 – 0.43
History of Packet                                    0.43 – 1.38
Why Public Cloud Bare Metal                          1.38 – 2.10
Price Points Metal vs VM                             2.10 – 3.08
Intro Compute to Non-Data Center People              3.08 – 4.27
RackN Early Customer                                 4.27 – 5.41
Managing Non-Enterprise Hardware                     5.41 – 7.45
Cloud Has Forever Changed IT Ops                     7.45 – 10.20
Making Hardware Easier                               10.20 – 12.35
Continuous Integration (CI)                          12.35 – 14.37
Customer Story w/ Terraform                          14.47 – 16.08
SRE, DevOps and Engineering Thinking                 16.08 – 16.49
Most Extreme Metal Pipelines                         16.49 – 18.02
Coolest New Hardware in Use                          18.02 – 19.28
How to Order Metal and Add It to a Data Center       19.28 – 22.47
RackN and the Switch                                 22.47 – 24.39
Edge Computing Breaks Enterprise IT                  24.39 – 25.16
DevOps Highlights for Today                          25.16 – 27.01
Post Provision Control in Open Source                27.01 – 30.03
Data Centers in the Early 2000s                      30.03 – 31.27
Nov 1 in NYC: Cloud Native in DataCenter             31.27 – END

Podcast Guest: Zach Smith, CEO Packet

Zachary has spent the last 16 years building, running and fixing public cloud infrastructure platforms.  As the CEO of Packet, Zachary is responsible for the company’s strategic product roadmap and is most passionate about helping customers and partners take advantage of fundamental compute and avoid vendor lock-in.  Prior to founding Packet, Zachary was an early member of the management team at Voxel, a NY-based cloud hosting company that built software to automate all aspects of hosting datacenters and was sold to Internap in 2011.  He lives in New York City with his wife and 2 young children. Twitter: @zsmithnyc

Data Center Bacon: Terraform to Metal with Digital Rebar

TL;DR: We’ve built a buttery smooth Terraform provider for Bare Metal that runs equally well on physical servers, Packet.net servers, or VirtualBox VMs.  If you like Hashicorp Terraform and want it to own your data center too, then read on.

Deep into the Digital Rebar Provision (DRP) release plan, a customer asked the RackN team to build a Terraform provider for DRP.  They had some very specific requirements that would stress all the new workflows and out-of-band management features in the release: in many ways, this integration is the ultimate proof point for DRP v3.1 because it drives DRP autonomously.

The primary goal was simple: run a data center as a resource pool for Terraform.

Here is our CTO, Greg Althaus, giving a short demo of the integration.

Of course, it is not that simple.  Operators need to be able to provide plans that pick the correct nodes from resource pools.  Also, the customer request was to deploy both Linux and Windows images on Packet.  That meant that the system needed both direct-to-disk image writing and cloud-init style post-configuration.  The result is deployments that are blazingly fast (sub 5 minutes) and highly portable.

An additional challenge in building the Terraform Provider is that no one wants to practice building plans against actual servers.  They are way too slow.  We need to be able to build and test the Terraform provider and plans quickly on a laptop or cloud infrastructure like Packet.net.  Our solution was to build parallel out-of-band IPMI type plugins for all three platforms so that the Terraform provider could interact with Digital Rebar Provision consistently regardless of the backing infrastructure.

We were able to build a full fidelity CI/CD pipeline for plans without committing dedicated infrastructure at the dev or test phases.  That is a significant breakthrough.

Terraform is kicking aaS for cluster deployments on cloud, and we’re getting some very enthusiastic responses when we describe both the depth and simplicity of the integration with Digital Rebar Provision.  We’re still actively collecting feedback and testing both new DRP features and the Terraform integration, so it’s not yet available for open consumption; however, we very much want to find operators interested in field trials.

Please contact us if Terraform on Metal is interesting.  We’d be happy to show you how it works and discuss our next steps.

Further Listening?  Our Latest Shiny (L8ist Sh9y) podcast with Greg Althaus and Stephen Spector covers the work.

July 14 – Weekly Recap of All Things Site Reliability Engineering (SRE)

Welcome to the weekly post of the RackN blog recap of all things SRE. If you have any ideas for this recap or would like to include content, please contact us at info@rackn.com or tweet Rob (@zehicle) or RackN (@rackngo).

SRE Items of the Week

Teradata Acquires San Diego-based Start-up StackIQ to Strengthen Teradata Everywhere and IntelliCloud Capabilities
http://prn.to/2vicpUb

SAN DIEGO, July 13, 2017 /PRNewswire/ — Teradata (NYSE:  TDC), the leading data and analytics company, today announced the acquisition of StackIQ, developers of one of the industry’s fastest bare metal software provisioning platforms which has managed the deployment of cloud and analytics software at millions of servers in data centers around the globe. The deal will leverage StackIQ’s expertise in open source software and large cluster provisioning to simplify and automate the deployment of Teradata Everywhere. Offering customers the speed and flexibility to deploy Teradata solutions across hybrid cloud environments, allows them to innovate quickly and build new analytical applications for their business.

How Platforms and SREs Change the DevOps Contract on  CapitalOne DevExchange
http://bit.ly/2uVXekf

DevOps struggles under a “fully shared responsibility” contract for Developers and Operations that drives a futile search for elusive “full-stack engineers.” It’s time to revisit how Dev and Ops are going to collaborate because these jobs often have different priorities.
READ MORE

RackN Introduction Video
Rob Hirschfeld, CEO and Co-Founder introduces RackN in 48 seconds

Kubernauts Worldwide Meetup
This video is from our first Kubernauts Worldwide Meetup, covering the new features in Kubernetes 1.7 (presented by Ihor Dvoretskyi), Kubernetes Pain Points and Upgrades (presented by Rob Hirschfeld), and Kubernauts Training (presented by Des Drury). Arash Kaffamanesh moderated the online meetup and provided a short overview of what Kubernauts is about.

Rob starts at 38 minutes, 50 seconds

Video Series w/ Packet.net
Three videos showing how to use the Packet.net custom iPXE option with Digital Rebar iPXE provisioning:

http://bit.ly/2t54J65      (Video 1 of 3)
http://bit.ly/2tO5WCy   (Video 2 of 3)
http://bit.ly/2vi5dXZ     (Video 3 of 3)

Let’s DevOps IRL: My SRE Postings on RackN by Rob Hirschfeld
http://bit.ly/2tzCvnj  

I’m investing in these Site Reliability Engineering (SRE) discussions because I believe operations (and by extension DevOps) is facing a significant challenge in keeping up with development tooling. The links below have been getting a lot of interest on Twitter and driving some good discussion. READ MORE


Subscribe to our new daily DevOps, SRE, & Operations Newsletter https://paper.li/e-1498071701#/
_____________

UPCOMING EVENTS

Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events please email info@rackn.com.


Got some change? Build a datacenter ops lab on your coffee break [with Packet.net MaaS]

We’re using Packet.net hosted metal to test automation for private metal (video).  You can use discount code “RACKN100” to get a credit on Packet and try it yourself.

At RackN, we’ve been shrinking our scale deployment platform down to run faithfully on a desktop class system. Since we abstract the network and hardware complexity, you can build automation that scales to physical from as little as 16 GB of RAM (the same size as Packet’s smaller server). That allows the exact same logic we use for an 80-node Ceph or Kubernetes cluster to work on my 14” laptop.

In fact, we’ve been getting a bit obsessed with making a clean restart small and fast using containers, VMs and bootstrapping scripts.

Creating a remote test lab is part of this obsession because many rehearsals make great performances.  We wanted to eliminate the setup time and process for users who just want to experiment with a production grade deployment. Using Packet.net hosted metal and some Ansible scripts, we can build a complete HA Kubernetes cluster in about 15 minutes using VMs. This lets us iterate on Kubernetes best practices virtually since the “setup metal part” is handled abstractly by Digital Rebar.

Yawn. You could do the same in AWS. Why is that exciting?

The process for the lab system we build in Packet.net can then be used to provision a complete private infrastructure on metal including RAID, BIOS and server networking. Even though the lab uses VMs, we still do real networking, storage and configuration. For example, we can iterate building real software defined networking (SDN) overlays in this environment and then scale the work up to physical gear.

The provision and deploy time is so fast (generally, under 15 minutes) that we are using it as a clean environment for Dev and QA cycles on new automation. It’s also a very practical demo environment for these platforms because of the fidelity between this environment and an actual pilot. For me, that means spending $0.40 so I don’t have to sweat losing my work in process, battery life or my wifi connection to crank out a demo.

BTW… Packet.net servers are SUPER FAST. Even the small 16 GB RAM machine is packed with SSDs and great connectivity.

If you are exploring any of the several workloads that we’ve been building (Docker Swarm, Kubernetes, Mesos, CloudFoundry, Ceph and OpenStack) or just playing around with API driven physical provisioning, we just made that work a little easier and a lot faster.