Since I’ve already bragged about how this workload validates OpenCrowbar’s deep ops impact, I can get right down to the nuts and bolts of what RackN CTO Greg Althaus managed to pack into this workload.
Like any scale install, once you’ve got a solid foundation, the actual installation goes pretty quickly. In Kubernetes’ case, that means getting the networking and etcd configuration right.
Here’s a 30-minute video showing the complete process from O/S install to working Kubernetes:
Here are the details:
Clustered etcd – distributed key-value store
etcd is the central data service that maintains the state for the Kubernetes deployment. The strength of the installation rests on the correctness of etcd. The workload builds an etcd cluster and synchronizes all the instances as nodes are added.
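To make the bootstrapping concrete, here’s a minimal sketch of static etcd clustering, the general mechanism the workload automates. The node names and IPs are hypothetical placeholders; the workload derives the real ones from node discovery as machines are added.

```python
# Sketch of static etcd cluster bootstrapping. Node names and IPs
# below are hypothetical examples, not the workload's actual values.
ETCD_NODES = {
    "node0": "192.168.124.81",
    "node1": "192.168.124.82",
    "node2": "192.168.124.83",
}

# Every member gets the same --initial-cluster map, which is what lets
# the instances find each other and synchronize as they come up.
initial_cluster = ",".join(
    "{}=http://{}:2380".format(name, ip) for name, ip in ETCD_NODES.items()
)

def etcd_args(name, ip):
    """Build the etcd command line for one cluster member."""
    return [
        "etcd",
        "--name", name,
        "--initial-advertise-peer-urls", "http://{}:2380".format(ip),
        "--listen-peer-urls", "http://{}:2380".format(ip),
        "--listen-client-urls", "http://{}:2379,http://127.0.0.1:2379".format(ip),
        "--advertise-client-urls", "http://{}:2379".format(ip),
        "--initial-cluster", initial_cluster,
        "--initial-cluster-state", "new",
    ]

for name, ip in ETCD_NODES.items():
    print(" ".join(etcd_args(name, ip)))
```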
Networking with Flannel and Proxy
Flannel is the default overlay network for Kubernetes; it handles IP assignment and inter-container communication with UDP encapsulation. The workload configures Flannel for networking with etcd as the backing store.
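For illustration, this is roughly how Flannel’s network layout lands in etcd: Flannel reads its configuration from the well-known /coreos.com/network/config key via the etcd v2 keys API. The endpoint and CIDR below are assumptions for the sketch, not the workload’s actual values.

```python
import json
import urllib.parse
import urllib.request

# Flannel reads its network layout from a well-known key in etcd.
# The 10.1.0.0/16 CIDR is a hypothetical example; "udp" matches the
# default UDP encapsulation described above.
config = {"Network": "10.1.0.0/16", "Backend": {"Type": "udp"}}

# Write the key through the etcd v2 keys API (assumed local endpoint).
data = urllib.parse.urlencode({"value": json.dumps(config)}).encode()
req = urllib.request.Request(
    "http://127.0.0.1:2379/v2/keys/coreos.com/network/config",
    data=data,
    method="PUT",
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())
```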
An important part of the overall networking setup is configuring a proxy so that the nodes can get external access to Docker image repos.
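One common way to wire that up on systemd-based hosts is a Docker unit drop-in, sketched below. The proxy URL is a hypothetical placeholder; the workload plugs in site-specific values.

```python
import pathlib

# Hypothetical proxy endpoint; substitute the site's real proxy.
PROXY = "http://proxy.example.com:3128"

# On systemd hosts, Docker picks up proxy settings from a unit drop-in,
# which lets image pulls reach external registries from behind the proxy.
dropin = pathlib.Path("/etc/systemd/system/docker.service.d")
dropin.mkdir(parents=True, exist_ok=True)
(dropin / "http-proxy.conf").write_text(
    "[Service]\n"
    'Environment="HTTP_PROXY={0}"\n'
    'Environment="HTTPS_PROXY={0}"\n'
    'Environment="NO_PROXY=localhost,127.0.0.1"\n'.format(PROXY)
)
# Then: systemctl daemon-reload && systemctl restart docker
```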
Docker Setup
We install the latest Docker on the system. That may not sound very exciting; however, Docker iterates faster than most Linux distributions package it, so it’s important that we keep you current.
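As a quick sanity check, something like this confirms the installed Docker meets a version floor; the 1.6 threshold is an arbitrary example, not the workload’s actual requirement.

```python
import re
import subprocess

# Hypothetical minimum version for illustration only.
MINIMUM = (1, 6)

# "docker --version" prints e.g. "Docker version 1.6.2, build 7c8fca2".
out = subprocess.run(["docker", "--version"],
                     capture_output=True, text=True).stdout
match = re.search(r"(\d+)\.(\d+)", out)
if match is None:
    raise SystemExit("could not parse docker version output")
installed = tuple(int(n) for n in match.groups())
status = "OK" if installed >= MINIMUM else "too old"
print("Docker %d.%d %s" % (installed + (status,)))
```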
Master & Minion Kubernetes Nodes
Using etcd as a backend, the workload sets up one (or more) master nodes with the API server and other master services. When the minions are configured, they are pointed to the master API server(s). You get to choose how many masters and which systems become masters. If you did not choose correctly, it’s easy to rinse and repeat.
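In rough terms, the wiring looks like the sketch below: the API server points at etcd, and each minion’s kubelet points at the master(s). The flag names are from Kubernetes releases of that era and vary by version; the hostnames and CIDR are hypothetical.

```python
# Rough sketch of how the pieces point at each other; flag names are
# era-specific and the endpoints below are hypothetical placeholders.
ETCD = "http://etcd.example.local:2379"
MASTERS_DNS = "k8s-master.example.local"  # shared round-robin name

apiserver = [
    "kube-apiserver",
    "--etcd-servers={}".format(ETCD),          # etcd is the backing store
    "--service-cluster-ip-range=10.2.0.0/16",  # example service CIDR
]

kubelet = [
    "kubelet",
    "--api-servers=https://{}".format(MASTERS_DNS),  # minions find masters by DNS
]

print(" ".join(apiserver))
print(" ".join(kubelet))
```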
Highly Available using DNS Round Robin
As the workload configures API servers, it also adds them to a DNS round robin pool (made possible by [new DNS integrations]). Minions are configured to use the shared DNS name so that they automatically round-robin all the available API servers. This ensures both load balancing and high availability. The pool is automatically updated when you add or remove servers.
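The effect is easy to see from any client: the shared name resolves to every API server in the pool, and resolvers rotate the record order between queries. The name below is a hypothetical stand-in.

```python
import socket

# Hypothetical shared name for the API server pool.
NAME = "k8s-master.example.local"

# Collect every A record behind the shared name; resolvers typically
# rotate the order between queries, which gives round-robin balancing.
addrs = {info[4][0]
         for info in socket.getaddrinfo(NAME, 443, proto=socket.IPPROTO_TCP)}
print(NAME, "->", sorted(addrs))
```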
Installed on Real Metal
It’s worth noting that we’ve done cluster deployments of 20 physical nodes (with 80 in process!). Since the OpenCrowbar architecture abstracts the vendor hardware, the configuration is multi-vendor and heterogeneous. That means that this workload (and our others) delivers tangible scale implementations quickly and reliably.
Future Work for Advanced Networking
Flannel is a very basic SDN. We’d like to see additional networking integrations, including OpenContrail per Pedro Marques’ work.
At this time, we are not securing communication with etcd. That requires key management and is a more advanced topic.
Why is RackN building this? We are a physical ops automation company.
We are seeking to advance the state of data center operations by helping get complex scale platforms operationalized. We want to work with the relevant communities to deliver repeatable best practices around next-generation platforms like Kubernetes. Our specialty is in creating a general environment for ops success: we work with partners who are experts on using the platforms.
We want to engage with potential users before we turn this into an open community project; however, we’ve chosen to make the code public. Please get involved (community forum)! You’ll need a working OpenCrowbar or RackN Enterprise install as a prerequisite, and we want to help you be successful.