The OpenCrowbar Drill release (likely to become v2.3) wraps up in the next few weeks, and it’s been amazing to watch the RackN team validate our designs by pumping out workloads and API integrations (list below).
I’ve posted about the acceleration that comes from having a ready state operations base, and we’re proving that out. An automated platform is critical for metal deployment because substantial tuning and iteration are needed to complete installations in the field.
Getting software set up once is not a victory: that’s called a snowflake.
Real success is tearing it down and having it work the second, third and nth times. That’s because scale ops is not about being able to install platforms; it’s about operationalizing them.
Integration: the difference between install and operationalization.
When we build a workload, we build up the environment one layer at a time. For OpenCrowbar, that starts with a hardware inventory and works up through RAID/BIOS and O/S configuration. After the OS is ready, we connect into the operational environment (SSH keys, NTP, DNS, proxy, etc.) and build real multi-switch/layer-2 topologies. Next we coordinate multi-node actions like creating Ceph, Consul and etcd clusters so that the install is demonstrably correct across nodes and repeatable at every stage. If something has to change, you can repeat the whole install or just the impacted layers. That is what I consider integrated operation.
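To make the layering concrete, here’s a minimal sketch of what driving that kind of install looks like against a REST API. The endpoint paths, role names and convergence states are illustrative assumptions, not OpenCrowbar’s documented API:

```python
# Hedged sketch of a layered, repeatable install driver.
# Endpoint paths, role names and states below are placeholders,
# not OpenCrowbar's documented API.
import time
import requests

CROWBAR = "http://crowbar.example.com:3000"  # hypothetical endpoint

LAYERS = [
    "hardware-inventory",   # discover what the node actually is
    "raid-bios-config",     # firmware/RAID settings driven by the inventory
    "os-install",           # lay down the operating system
    "ops-environment",      # SSH keys, NTP, DNS, proxy wiring
    "cluster-build",        # multi-node steps (Ceph/Consul/etcd)
]

def apply_layer(node_id, layer):
    """Bind one layer (role) to a node and wait for it to converge."""
    r = requests.post(f"{CROWBAR}/api/v2/node_roles",
                      json={"node": node_id, "role": layer})
    r.raise_for_status()
    nr_id = r.json()["id"]
    while True:
        state = requests.get(f"{CROWBAR}/api/v2/node_roles/{nr_id}").json()["state"]
        if state == "active":
            return
        if state == "error":
            raise RuntimeError(f"{layer} failed on node {node_id}")
        time.sleep(10)

def build_node(node_id, start_at=0):
    # Because layers converge independently, a change low in the stack
    # means re-running from that layer, not reinstalling from scratch.
    for layer in LAYERS[start_at:]:
        apply_layer(node_id, layer)
```

The point of the sketch is the last function: being able to re-enter the stack at any layer is what separates a repeatable install from a snowflake.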
It’s not just automating a complex install: we design the workloads to be repeatable site to site.
Here’s the list of workloads we’ve built on OpenCrowbar and for RackN in the last few weeks:
- Ceph (OpenCrowbar) with advanced hardware optimization and networking that synchronizes changes in monitors.
- Docker Swarm (RackN) (or DIY with Docker Machine on Metal)
- StackEngine (RackN) builds a multi-master cluster and connects all systems together.
- Kubernetes (RackN) that includes automatic highly available DNS configuration, flannel networking and etcd cluster building (see the etcd sketch after this list).
- CloudFoundry on Metal via BOSH (RackN) uses pools of hardware that are lifecycle managed by OpenCrowbar, including powering off idle systems.
- I don’t count the existing OpenStack via Packstack (RackN) workload because it does not directly leverage OpenCrowbar clustering or networking. It could if someone wanted to help build it.
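To unpack the etcd cluster building mentioned in the Kubernetes workload: every member must start with an identical bootstrap map, so the automation is largely a rendering problem over the node inventory. A minimal sketch (the node names and IPs are made up; the flags are standard etcd options):

```python
# Sketch of the "etcd cluster building" step: every member is started
# with the same --initial-cluster map, so the tooling's job is mostly
# rendering that map from the node inventory.
# Node names and IPs are placeholders; the flags are standard etcd options.

NODES = {"etcd-0": "10.0.0.10", "etcd-1": "10.0.0.11", "etcd-2": "10.0.0.12"}

def etcd_bootstrap_args(name, ip):
    peers = ",".join(f"{n}=http://{a}:2380" for n, a in NODES.items())
    return [
        f"--name={name}",
        f"--initial-advertise-peer-urls=http://{ip}:2380",
        f"--listen-peer-urls=http://{ip}:2380",
        f"--listen-client-urls=http://{ip}:2379,http://127.0.0.1:2379",
        f"--advertise-client-urls=http://{ip}:2379",
        "--initial-cluster-token=k8s-etcd",
        f"--initial-cluster={peers}",
        "--initial-cluster-state=new",
    ]

# Render the launch command for every member of the cluster.
for name, ip in NODES.items():
    print("etcd " + " ".join(etcd_bootstrap_args(name, ip)))
```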
And… we also added a major DNS automation feature and updated the network mapping logic to work in environments where Crowbar does not manage the administrative networks (like inside clouds). We’ve also been integrating deeply with HashiCorp Consul to enable true “ops service discovery.”
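For flavor, “ops service discovery” means infrastructure services like DNS register themselves in Consul so later layers can look them up by name instead of by hard-coded address. A minimal sketch against Consul’s standard agent and catalog HTTP API (the service name, address and port are illustrative):

```python
# Registering an ops service (here, a DNS endpoint) with a local Consul
# agent so downstream layers can discover it by name.
# The service details are illustrative; /v1/agent/service/register and
# /v1/catalog/service are Consul's standard HTTP API.
import requests

CONSUL = "http://127.0.0.1:8500"

def register_dns(address="10.0.0.5", port=53):
    payload = {
        "Name": "dns",
        "Address": address,
        "Port": port,
        # Health check so Consul drops the service if DNS stops answering.
        "Check": {"TCP": f"{address}:{port}", "Interval": "30s"},
    }
    requests.put(f"{CONSUL}/v1/agent/service/register", json=payload).raise_for_status()

def discover(service):
    # Consumers resolve the service via the catalog instead of config files.
    r = requests.get(f"{CONSUL}/v1/catalog/service/{service}")
    r.raise_for_status()
    return [(e["ServiceAddress"], e["ServicePort"]) for e in r.json()]
```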