Who’s in charge here anyway? We need to start uncovering OpenStack’s Hidden Influencers

After the summit (#afterstack), a few of us compared notes and found a common theme in an underserved but critical part of the OpenStack community.  Sean Roberts (his post), Allison Randal (her post), and I committed to expand our discussion to the broader community.

Lack of Product Management was a common theme at the Atlanta OpenStack summit.  That effectively adds fuel to the smoldering “lacking a benevolent dictator” commentary that lingers like smog at summits.  While I’ve come to think this criticism has merit, I think that it’s a dramatic oversimplification of the leadership dynamic.  We have plenty of leaders in OpenStack but we don’t do enough to coordinate them because they are hidden.

One reason to reject “missing product management” as a description is that there are LOTS of PMs in OpenStack.  It’s simply that they all work for competing companies.  While we spend a lot of time coordinating developers from competing companies, we have minimal integration between their direct engineering managers or product managers.

We spend a lot of time getting engineers talking together, but we do not formally engage their product or line managers in those discussions.  In fact, those managers are likely encouraged to send their engineers to summits instead of attending themselves; consequently, we may not even know who those influencers are!

When the managers are omitted, the commitments made by engineers to projects are empty promises.

At best, this results in a discrepancy between expected and actual velocity.  At worst, work may be waiting on deliveries that have been silently deprioritized by managers who do not directly participate or simply felt excluded from the technical discussion.

We need to recognize that OpenStack work is largely corporate sponsored.  These managers directly control the engineers’ priorities so they have a huge influence on what features really get delivered.

To make matters worse (yes, they get worse), these influencers are often invisible.  Our tracking systems focus on code committers and completely miss the managers who direct those contributors.  Even if they had the needed leverage to set priorities, OpenStack technical and governance leaders may not know who to contact to resolve conflicts.

We’ve each been working with these “hidden influencers” at our own companies and they aren’t a shadowy spy-v-spy lot, they’re just human beings.  They are every bit as enthusiastic about OpenStack as the developers, users and operators!  They are frequently the loudest voices saying “Could you please get us just one or two more headcount for the team, we want X and Y to be able to spend full-time on upstream contribution, but we’re stretched too thin to spare them at the moment”.

So it’s not by intent but by omission that the OpenStack project has failed to engage managers as a class of contributors. We have clear avenues for developers to participate, but pretty much entirely ignore the managers. We say that with a note of caution, because we don’t want to bring the managers in to “manage OpenStack”.

We should provide avenues for collaboration so that as they’re managing their team of devs at their company, they are also communicating with the managers of similar teams at other companies.

This is potentially beneficial for developers, managers and their companies: they can gain access to resources across company lines. Instead of being solely responsible for some initiative to work on a feature for OpenStack, they can share initiatives across teams at multiple companies. This does happen now, but the coordination for it is quite limited.

We don’t think OpenStack needs more management; instead, we think we need to connect the hidden influencers.  Transparency and dialog will resolve these concerns more directly than adding additional process or controls.


Understanding OpenStack Designated Code Sections – Three critical questions

A collaboration with Michael Still (TC Member from Rackspace) & Joshua McKenty, cross-posted by Rackspace.

After nearly a year of discussion, the OpenStack board launched the DefCore process with 10 principles that set us on a path towards a validated interoperability standard.  We created the concept of “designated sections” to address concerns that using API tests to determine core would undermine commercial and community investment in a working, shared upstream implementation.

Designated sections provide the “you must include this” part of the core definition.  Having common code as part of core is a central part of how DefCore is driving OpenStack interoperability.

So, why do we need this?

From our very formation, OpenStack has valued implementation over specification; consequently, there is a fairly strong community bias to ensure contributions are upstreamed. This bias is codified into the very structure of the GNU General Public License (GPL) but intentionally missing from the Apache License v2 that OpenStack follows.  The choice of Apache2 was important for OpenStack to attract commercial interests, who often consider the GPL a “poison pill” because of its upstream requirements.

Nothing in the Apache license requires consumers of the code to share their changes; however, the OpenStack Foundation does have control of how the OpenStack™ brand is used.   Thus it’s possible for someone to fork and reuse OpenStack code without permission, but they cannot call it “OpenStack” code.  This restriction only has strength if the OpenStack brand has value (protecting that value is the primary duty of the Foundation).

This intersection between License and Brand is the essence of why the Board has created the DefCore process.

Ok, how are we going to pick the designated code?

Figuring out which code should be designated is highly project specific and ultimately subjective; however, it’s also important to the community that we have a consistent and predictable strategy.  While the work falls to the project technical leads (with ratification by the Technical Committee), the DefCore and Technical committees worked together to define a set of principles to guide the selection.

This Technical Committee resolution formally approves the general selection principles for “designated sections” of code, as part of the DefCore effort.  We’ve taken the liberty to create a graphical representation (above) that visualizes this table using white for designated and black for non-designated sections.  We’ve also included the DefCore principle of having an official “reference implementation.”

Here is the text from the resolution presented as a table:

Should be DESIGNATED:
  • code provides the project external REST API, or
  • code is shared and provides common functionality for all options, or
  • code implements logic that is critical for cross-platform operation

Should NOT be DESIGNATED:
  • code interfaces to vendor-specific functions, or
  • project design explicitly intended this section to be replaceable, or
  • code extends the project external REST API in a new or different way, or
  • code is being deprecated

The resolution includes the expectation that “code that is not clearly designated is assumed to be designated unless determined otherwise. The default assumption will be to consider code designated.”

This definition is a starting point.  Our next step is to apply these rules to projects and make sure that they provide meaningful results.
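To make the selection principles concrete, here is a minimal sketch of how the criteria above (plus the default-designated rule) could be applied to one section of project code.  The function, the flag names, and the precedence between the two rule sets are purely illustrative assumptions on my part, not part of any official DefCore tooling.

```python
# Illustrative sketch only: encodes the resolution's selection principles as a
# simple decision helper. The flag names, the function, and the precedence
# between the two rule sets are hypothetical, not part of any DefCore tooling.

def is_designated(section: dict) -> bool:
    """Apply the TC resolution's principles to one section of project code."""
    # Reasons a section should NOT be designated.
    if (section.get("vendor_specific")                # interfaces to vendor-specific functions
            or section.get("replaceable_by_design")   # explicitly intended to be replaceable
            or section.get("extends_external_api")    # extends the external REST API differently
            or section.get("deprecated")):            # being deprecated
        return False

    # Reasons a section SHOULD be designated.
    if (section.get("provides_external_api")          # provides the project external REST API
            or section.get("shared_common_code")      # shared, common functionality for all options
            or section.get("cross_platform_critical")):  # critical for cross-platform operation
        return True

    # Default from the resolution: code that is not clearly designated is assumed designated.
    return True


# Example: a vendor driver is not designated; a shared API layer is; unclear code defaults in.
print(is_designated({"vendor_specific": True}))        # False
print(is_designated({"provides_external_api": True}))  # True
print(is_designated({}))                               # True
```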

Wow, isn’t that a lot of code?

Not really.  It’s important to remember that designated sections alone do not define core: the must-pass tests are also a critical component.   Consequently, designated code in projects that do not have must-pass tests is not actually required for an OpenStack licensed implementation.

Just for fun, putting themes to OpenStack Conferences

I’ve been to every OpenStack summit and, in retrospect, each one has a different theme.  I see these as community themes beyond the release train that cover how the OpenStack ecosystem has changed.

The themes are, of course, highly subjective and intended to spark reflection and discussion.

City | Release | Theme | My Commentary
ATL | Icehouse | It’s my sandbox! | The new marketplace is great and there are also a lot of vendors who want to differentiate their offering and are not sure where to play.
HK | Havana | Project land grab | It felt like a PTL gold rush as lots of new projects were tossed into the ecosystem mix.  I’m wary of perceived “anointed” projects that define “the way” to do things.
PDX | Grizzly | Shiny new things | We went from having a defined core set of projects to a much richer and more varied set of platforms, environments and solutions.
SD | Folsom | Breaking up is hard to do | Nova began to fragment (Cinder & Quantum, now Neutron).
SF | Essex | New kids are here | Move over Rackspace.  Lots of new operating systems, providers, consulting and hosting companies participating.  Stackalytics makes it into a real commit race.
BOS | Diablo | Race to be the first | Everyone was trying to show that OpenStack could be used for real work.  Lots of startups launched.
SJC | Cactus | Oh, you like us! We need some process | This is real, so everyone was exploring OpenStack.  We clearly needed to figure out how to work together.  This is where we migrated to git.
SA | Bexar | We’re going to take over the world | We handed out rose-colored glasses that mostly turned out pretty accurate; however, some top names from that time are not in the community now (Citrix, NASA, Accenture, and others).
ATX | Austin | We choose “none of the above” | There was a building sense of potential energy while companies figured out that 1) there was a gap and 2) they wanted to fill it together.

OpenStack ATL Recap to the 11s: the danger of drama + 5 challenges & 5 successes

I’ve come to accept that the “Hallway Track” is my primary session at OpenStack events.  I want to thank the many people in the community who make that the best track.  It’s not only full of deep technical content; there are also healthy doses of intrigue, politics and “let’s fix that” in the halls.

I think honest reflection is critical to OpenStack growth (reflections from last year).  My role as a Board member must not translate into being a pom-pom-waving robot cheerleader.

 

What I heard that’s working:

  1. The Foundation event team did a great job on the logistics and many appreciate the user and operator focus.  There is no doubt that OpenStack is being deployed at scale and helping transform cloud infrastructure.  I think that’s a great message.
  2. DefCore criteria were approved by the Board.  The overall process and impact was talked about positively at the summit.  To accelerate, we need +1s and feedback because “crickets” means we need to go slower.  I’ll have to dedicate a future post to next steps and “designated sections.”
  3. Marketplace!  Great turnout by vendors of all types, but I’m not hearing about them making a lot of money from OpenStack (which is needed for them to survive).  I like the diversity of the marketplace: consulting, aaServices, installers, networking, more networking, new distros, and ecosystem tools.
  4. There’s some real growth in aaS services for OpenStack (database, load balancer, DNS, etc.).   This is the ecosystem that many want OpenStack to drive because it helps displace the Amazon cloud.  I also heard concerns about making sure these services are pluggable so companies can compete on implementation.
  5. Lots of process changes to adapt to growing pains.  People felt that the community is adapting (yeah!) but were concerned about having to re-invent tooling (meh).

There are also challenges that people brought to me:

  1. Our #1 danger is drama.  Users and operators want collaboration and friendly competition.  They are turned off by vendor conflict or strong-arming in the community (e.g.: the WSJ Red Hat article and fallout).  I’d encourage everyone to breathe more and react less.
  2. Lack of product management is risking a tragedy of the commons.  Helping companies work together and across projects is needed for our collaboration processes to work.  I’ll be exploring this with Sean Roberts in future posts.
  3. Making sure there’s profit being generated from shared code.  We need to remember that most of the development is corporate funded so we need to make sure that companies generate revenue.  The trend of everyone creating unique distros may indicate a problem.
  4. We need to be more operator friendly.  I know we’re trying but we create distance with operators when we insist on creating new tools instead of using the existing ecosystem.  That also slows down dealing with upgrades, resilient architecture and other operational concerns.
  5. Anointed projects concerns have expanded since Hong Kong.  There’s a perception that Heat (orchestration), TripleO (provisioning), and Solum (platform) are considered THE only way OpenStack solves those problems and that other approaches are not welcome.  While that encourages collaboration, it also chills competition and discussion.
  6. There’s a lot of whispering about the status of challenged projects: Neutron (works with proprietary backends but not open ones, may not stay integrated) and OpenStack boot-strapping (the state of the TripleO/Ironic/Heat mix).  The issue here is NOT whether they are challenged but finding ways to discuss concerns openly (see the anointed projects concern).

I’d enjoy hearing more about success and deeper discussion around concerns.  I use community feedback to influence my work in the community and on the board.  If you think I’ve got it right or wrong then please let me know.

Hugs & Rants Welcome: OpenStack reaching out with “community IRC office hours”

This is a great move by the OpenStack community managers that I feel is worth amplifying.  Copied from community email:

Hello folks

one of the requests in Atlanta was to setup carefully listening ears for developers and users alike so they can highlight roadblocks, vent frustration and hopefully also give kudos to people, suggest solutions, etc.

I and Tom have added two 1 hour slots to the OpenStack Meetings calendar

    •  Tuesdays at 0800 UTC on #openstack-community (hosted by Tom)
    • Fridays at 1800 UTC on #openstack-community (hosted by Stefano)

so if you have anything you’d like the Foundation to be aware of please hop on the channel and talk to us. If you don’t/can’t use IRC, send us an email and we’ll use something else: just talk to us.

Regards,

Stef

https://wiki.openstack.org/wiki/Meetings/Community

How DefCore is going to change your world: three advisory cases

The first release of the DefCore Core Capabilities Matrix (DCCM) was revealed at the Atlanta summit.  At the Summit, Joshua and I had a session which examined what this means for the various members of the OpenStack community.   This rather lengthy post reviews the same advisory material.

DefCore sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products labeled “OpenStack.”

As a refresher, there are three uses of the OpenStack mark:

  • Community: The non-commercial use of the word OpenStack by the OpenStack community to describe themselves and their activities. (like community tweets, meetups and blog posts)
  • Code: The non-commercial use of the word OpenStack to refer to components of the OpenStack framework integrated release (as in OpenStack Compute Project Nova)
  • Commerce: The commercial use of the word OpenStack to refer to products and services as governed by the OpenStack trademark policy. This is where DefCore is focused.

In the DefCore/Commerce use, properly licensed vendors have three basic obligations to meet:

  1. Pass the required Refstack tests for the capabilities matrix in the version of OpenStack that they use. Vendors are expected (not required) to share their results.
  2. Run and include the “designated sections” of code for the OpenStack components that they include.
  3. Meet the other basic obligations in their license agreement, like being a currently paid-up corporate sponsor or Foundation member, etc.

If they meet these conditions, vendors can use the OpenStack mark in their product names and descriptions.
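As a rough illustration of how those three obligations combine, here is a small sketch that treats them as a simple checklist.  The record layout, the capability/test names and the function itself are hypothetical; real verification happens through Refstack results and the Foundation’s trademark license agreement, not code like this.

```python
# Hypothetical checklist version of the three obligations above. The record
# layout, capability/test names, and the function itself are made up for
# illustration; real verification happens via Refstack results and the
# Foundation's trademark license agreement.

def may_use_openstack_mark(vendor: dict) -> bool:
    """Return True if a fictional vendor record meets all three basic obligations."""
    passes_required_tests = all(                       # 1. pass the must-pass Refstack tests
        vendor["test_results"].get(test, False) for test in vendor["required_tests"])
    runs_designated_code = all(                        # 2. ship designated code for included components
        vendor["ships_designated"].get(component, False)
        for component in vendor["included_components"])
    meets_license_terms = vendor["sponsor_in_good_standing"]  # 3. other license obligations
    return passes_required_tests and runs_designated_code and meets_license_terms


banana_cloud = {
    "required_tests": ["compute-servers-list", "identity-get-token"],   # illustrative IDs only
    "test_results": {"compute-servers-list": True, "identity-get-token": True},
    "included_components": ["nova"],
    "ships_designated": {"nova": True},
    "sponsor_in_good_standing": True,
}
print(may_use_openstack_mark(banana_cloud))  # True -> may use the mark
```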

Enough preamble!  Let’s see the three Advisory Cases

MANDATORY DISCLAIMER: These conditions apply to fictional public, private and client use cases.  Any resemblance to actual companies is a function of the need to describe real use-cases.  These cases are advisory for illustration use only and are not to be considered definitive guidance because DefCore is still evolving.

Public Cloud: Service Provider “BananaCloud”

A popular public cloud operator, BananaCloud has been offering OpenStack-based IaaS since the Diablo release. However, they don’t use the Keystone component. Since they also offer traditional colocation and managed services, they have an existing identity management system that they use. They made a similar choice for Horizon in favor of their own cloud portal.


  1. They use Nova with a custom scheduler and pass all the Nova tests. This is the simplest case since they use the code and pass the tests.
  2. In the Havana DCCM, the Keystone capabilities are must-pass tests; however, there are no designated sections of code for Keystone. So BananaCloud must implement a Keystone-compatible API on their IaaS environment (an effort they had underway already) that will pass Refstack, and they’re good to go.
  3. There are no must-pass tests for Horizon, so they have no requirements to include those features or code. They can still be OpenStack without Horizon.
  4. There are no must-pass tests for Trove, so there are no brand requirements to include those features or code; however, by using Trove and promoting its use, they increase the likelihood of its capabilities becoming must-pass features.

BananaCloud also offers some advanced OpenStack capabilities, including Marconi and Trove. Since there are no must-pass capabilities from these components in the Havana DCCM, this has no impact on their offering additional services. DefCore defines the minimum requirements and encourages vendors to share their full test results of additional capabilities because that is how OpenStack identifies new must-pass candidates.

Note: The DefCore DCCM is advisory for the Havana release, so if BananaCloud is late getting their Keystone-compatibility work done there won’t be any commercial impact. But it will be a binding part of the trademark license agreement by the Juno release, which is only 6 months away.

Private Cloud: SpRocket Small-Business OpenStack Software

SpRocket is a new OpenStack software vendor, specializing in selling a Windows-powered version of OpenStack with tight integration to SharePoint and AzurePack. In their feature set, they only need part of Nova and they provide an alternative object store to Swift that implements a version of the Swift API. They do use Heat as part of their implementation to set up applications backed by SharePoint and AzurePack.

  1. For Nova, they already use the code and have already implemented all required capabilities except for the key-store. To comply with the DefCore requirement, they must enable the key-store capability.
  2. While their implementation of Swift passes the tests, we are still working to resolve the final disposition of Swift, so there are several possible outcomes:
    1. If Swift is 0% designated then they are OK (that’s illustrated here)
    2. If Swift is 100% designated then they cannot claim to be OpenStack.
    3. If Swift is partially designated then they have to adapt their deployment to include the required code.
  3. Their use of Heat is encouraged since it is an integrated project; however, it has no required capabilities and does not influence their ability to use the mark.
  4. They use the trunk version of the Windows Hyper-V drivers, which are not designated and have no specific tests.

Ecosystem Client: “Mist” OpenStack-consuming Client Library

Mist is a client library for programmers working on applications that use the OpenStack APIs. While it’s an open source project, there are many commercial applications that use the library. Unlike a “pure” OpenStack program, it also supports other cloud APIs.

Since the Mist library does not ship or implement the OpenStack code base, the DefCore process does not apply to their effort; however, there are several important intersections between Mist, OpenStack and Core.

  • First, it is very important for the DefCore process that Mist map their use of the OpenStack APIs to the capabilities matrix (a rough sketch of such a mapping follows this list). They are asked to help with this process because they are the best group to answer the “works with clients” criteria.
  • Second, if there are APIs used by Mist that are not currently tested then the OpenStack community should work with the Mist community to close those test gaps.
  • Third, if Mist relies on an API that is not must-pass they are encouraged to help identify those capabilities as core candidates in the community.
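Here is the rough sketch mentioned above of how a client library like Mist might map its API usage onto the capabilities matrix and surface gaps.  Every identifier below (the call names, capability IDs and the must-pass/tested sets) is invented for illustration; the real IDs come from the DefCore capabilities matrix.

```python
# Everything here is invented for illustration: the client call names, the
# capability IDs, and the must-pass/tested sets. Real capability IDs come from
# the DefCore capabilities matrix.

CLIENT_CALLS_TO_CAPABILITIES = {
    "servers.create": "compute-servers-create",
    "servers.list": "compute-servers-list",
    "volumes.attach": "volumes-attach",        # example of a call that may lack test coverage
}

MUST_PASS_CAPABILITIES = {"compute-servers-create", "compute-servers-list"}
TESTED_CAPABILITIES = {"compute-servers-create", "compute-servers-list"}

used = set(CLIENT_CALLS_TO_CAPABILITIES.values())
untested = used - TESTED_CAPABILITIES            # gaps to close with the Mist community
core_candidates = used - MUST_PASS_CAPABILITIES  # capabilities Mist could champion as core

print("Capabilities used but untested:", untested)
print("Capabilities Mist could advocate as core candidates:", core_candidates)
```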

OpenStack DefCore Matrix Cheat Sheet

DefCore sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating minimum standards for products labeled “OpenStack.”

In the last week, the DefCore committee released the results of 6 months of work.  We chose getting input over cleanups and polish, so please be patient if some of the data is overwhelming.

We’ve got enough feedback to put together this capabilities matrix cheat sheet to help interpret all the colors and data on the page (headers link).

[Image: capabilities_matrix_explained]

DefCore Capabilities Scorecard & Core Identification Matrix [REVIEW TIME!]

Attribution Note: This post was collaboratively edited by members of the DefCore committee and cross-posted with DefCore co-chair Joshua McKenty of Piston Cloud.

DefCore sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating minimum standards for products labeled “OpenStack.”

The OpenStack Core definition process (aka DefCore) is moving steadily along and we’re looking for feedback from the community as we move into the next phase.  Until now, we’ve been mostly working out the principles, criteria and processes that we will use to answer “what is core” in OpenStack.  Now we are applying those processes and actually picking which capabilities will be used to identify Core.

TL;DR! We are now RUNNING WITH SCISSORS because we’ve reached the point where you can review early thoughts about what’s going to be considered Core (and what’s not).  We now have a tangible draft list for community review.

While you will want to jump directly to the review draft matrix (red means needs input), it is important to understand how we got here because that’s how DefCore will resolve the inevitable conflicts.  The very nature of defining core means that we have to say “not in” to a lot of capabilities.  Since community consensus seems to favor a “small core” in principle, that means many capabilities that people consider important are not included.

The Core Capabilities Matrix attempts to find the right balance between quantitative detail and too much information.  Each row represents an “OpenStack Capability” that is reflected by one or more individual tests.  We scored each capability equally on a 100 point scale using 12 different criteria.  These criteria were selected to respect different viewpoints and needs of the community, ranging from popularity to technical longevity and quality of documentation.

While we’ve made the process more analytical, there’s still room for judgement.  Eventually, we expect to weight some criteria more heavily than others.  We will also be adjusting the score cut-off.  Our goal is not to create a perfect evaluation tool – it should inform the board and facilitate discussion.  In practice, we’ve found this approach to bring needed objectivity to the selection process.
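A toy example may help show how the scoring mechanics work: twelve weighted criteria summing to 100 points, a score per capability, and a cut-off the Board can adjust.  The criteria names, weights and cut-off below are stand-ins I made up for illustration; the committee’s actual worksheet defines the real ones.

```python
# Toy illustration of the scoring mechanics: 12 weighted criteria summing to
# 100 points, a score per capability, and an adjustable cut-off. The criteria
# names, weights and cut-off below are invented; the committee's worksheet
# defines the real ones.

CRITERIA_WEIGHTS = {
    "widely_deployed": 8, "used_by_tools": 8, "used_by_clients": 8,
    "stable": 9, "future_direction": 8, "complete": 8,
    "documented": 8, "core_in_last_release": 9, "foundation_board": 8,
    "atomic": 9, "proximity_to_core": 9, "tested": 8,
}   # 12 criteria, weights sum to 100
CUTOFF = 70  # adjustable threshold for proposing a capability as core

def score(capability_flags: dict) -> int:
    """Sum the weights of every criterion the capability satisfies."""
    return sum(weight for name, weight in CRITERIA_WEIGHTS.items()
               if capability_flags.get(name, False))

example = {name: True for name in CRITERIA_WEIGHTS}  # satisfies everything...
example["documented"] = False                        # ...except documentation
print(score(example), score(example) >= CUTOFF)      # 92 True -> proposed as core
```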

So, where does this take us?  The first matrix is, by design, old news.  We focused on getting a score for Havana to give us a stable and known quantity; however, much of that effort will translate forward.  Using Havana as the base, we are hoping to score Icehouse ninety days after the Juno summit and score Juno at the K summit in Paris.

These are ambitious goals and there are challenges ahead of us.  Since every journey starts with small steps, we’ve put our feet on the path while keeping our eyes on the horizon.

Specifically, we know there are gaps in OpenStack test coverage.  Important capabilities do not have tests and will not be included.  Further, starting with a small core means that OpenStack will be enforcing an interoperability target that is relatively permissive and minimal.  Universally, the community has expressed that including short-term or incomplete items is undesirable.  It’s vital to remember that we are looking for evolutionary progress that accelerates our developer, user, operator and ecosystem communities.

How can you get involved?  We are looking for community feedback on the DefCore list on this 1st pass – we do not think we have the scores 100% right.  Of course, we’re happy to hear from you however you want to engage: we intentionally named the committee “defcore” to make it easier to cross-reference and search.

We will eventually use Refstack to collect voting/feedback on capabilities directly from OpenStack community members.

Open Operations [4/4 series on Operating Open Source Infrastructure]

This post is the final in a 4 part series about Success factors for Operating Open Source Infrastructure.

tl;dr Note: This is really TWO tightly related posts: 
  part 1 is OpenOps background. 
  part 2 is about OpenStack, Tempest and DefCore.

One of the substantial challenges of large-scale deployments of open source software is that it is very difficult to come up with a best practice, or a reference implementation that can be widely explained or described by the community.

Having a best practice deployment is essential for the growth of the community because it enables multiple people to deploy the software in a repeatable, stable way. This, in turn, fosters community growth so that more people can adopt software in a consistent way. It does little good if operators have no consistent pattern for deployment, because that undermines the developers’ abilities to extend, the testers’ abilities to ensure quality, and users’ ability to repeat the success of others.

Fundamentally, the goal of an open source project, from a user’s perspective, is that they can quickly achieve and repeat the success of other people in the community.

When we look at these large-scale projects we really try to create a pattern of success that can be repeated over and over again. This ensures growth of the user base, and it also helps the developer reduce time spent troubleshooting problems.

That does not mean that every single deployment should be identical, but there is substantial value in having a limited number of success patterns. Customers can then be assured not only of quick time to value with these projects; they can also get help without having everybody else in the community attempt to untangle how one person created a site-specific deployment. This is especially problematic if someone created an unnecessarily unique scenario. That simply creates noise and confusion in the environment. Noise is a huge cost for the community, and needs to be eliminated for an open source project to flourish.

This isn’t any different from proprietary software, but there most of these activities are hidden. A proprietary vendor can make much stronger recommendations and install guidance because they are the only source of truth for that project. In an open source project, there are multiple sources of truth, and there are very few people who are willing to publish their exact reference implementation or test patterns. Consequently, my team has taken a strong position on creating a repeatable reference implementation for OpenStack deployments, based on extensive testing. We have found that our test patterns and practices are grounded in successful customer deployments and actual, physical infrastructure deployments. So, they are very pragmatic, repeatable, and sustained.

We found that this type of testing, while expensive, is also a significant value to our customers, and something that they appreciate and have been willing to pay for.

OpenStack as an Example: Tempest for Reference Validation

The Crowbar project incorporated the OpenStack Tempest project as an essential part of every OpenStack deployment. From the earliest introduction of the Tempest suite, we have understood the value of a baselining test suite for OpenStack. We believe that running the same tests the developers use as a single-node gate for code acceptance against a multi-node deployment creates significant value both for our customers and for the OpenStack project as a whole.  This was part of why I embraced the suggestion of basing DefCore on tests.

While it is important to have developer tests that gate code check-ins, the ultimate goal for OpenStack is to create scale-out multi-node deployments. This is a fundamental design objective for OpenStack.

With developers and operators using the same test suite, we are able to proactively measure the success of the code in scale deployments in a way that provides quick feedback for the developers. If Tempest tests do not pass in a multi-node environment, they are not providing significant value for developers to ensure that their code is operating against best practice scenarios. Our objective is to continue to extend the Tempest suite of tests so that they are an accurate reflection of the use cases that are encountered in a best practice, reference deployment.

Along these lines, we expect that the community will continue to expand the Tempest test suite to match actual deployment scenarios reflected in scale and multi-node configurations. Having developers be responsible for passing these tests as part of their day-to-day activities ensures that development activities do not disrupt scale operations. Ultimately, making proactive gating tests ensures that we are creating scenarios in which code quality is continually increasing, as is our ability to respond and deploy the OpenStack infrastructure.
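As a sketch of what “developers and operators using the same test suite” can look like in a deployment pipeline, the snippet below shells out to Tempest against a deployed multi-node cloud and fails the gate on any test failure.  The exact Tempest invocation and the use of the TEMPEST_CONFIG environment variable are assumptions on my part; substitute whatever your Tempest installation actually expects.

```python
# Sketch of a deployment gate: run the Tempest suite against a deployed
# multi-node cloud and refuse to promote the build if anything fails. The
# command line and the TEMPEST_CONFIG variable are placeholders/assumptions.

import os
import subprocess
import sys

TEMPEST_COMMAND = ["tox", "-e", "all", "--", "smoke"]   # placeholder invocation; adjust to your setup
TEMPEST_CONFIG = "/etc/tempest/tempest.conf"            # config pointing at the multi-node cloud

def gate_deployment() -> int:
    """Run the Tempest suite against the deployed cloud and return its exit code."""
    env = dict(os.environ, TEMPEST_CONFIG=TEMPEST_CONFIG)  # assumption: Tempest reads this variable
    result = subprocess.run(TEMPEST_COMMAND, env=env, check=False)
    return result.returncode

if __name__ == "__main__":
    rc = gate_deployment()
    if rc != 0:
        print("Deployment failed reference validation; do not promote this build.")
    sys.exit(rc)
```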

I am very excited and optimistic that expanding the Tempest suite holds the key to making OpenStack the most stable, reliable, performant cloud implementation available in the market. The fact that this test suite can be extended in the community, and contributed to by a broad range of implementations, only makes that test suite more valuable and more likely to fully encompass all use cases necessary for reference implementations.

Networking in Cloud Environments, SDN, NFV, and why it matters [part 2 of 2]

Scott Jensen is an Engineering Director and colleague of mine from Dell with deep networking and operations experience.  He has first-hand experience deploying OpenStack and Hadoop and has a critical role in defining Dell’s Reference Architectures in those areas.  When I saw this writeup about cloud networking (first post), I asked if it would be OK to post it here and share it with you.

GUEST POST 2 OF 2 BY SCOTT JENSEN:

So what is different about Cloud and how does it impact the network?

In a traditional data center this was not all that difficult (relatively).  You knew what was going to be running on what system (physically) and could plan your infrastructure accordingly.  The majority of the traffic moved in a North/South direction: basically, from outside the infrastructure (the internet, for example) to inside, and then back out in response.  You knew that if you had to design a communication channel from an application server to a database server, this could be isolated from the other traffic since they did not usually reside on the same system.


Virtualization made this more difficult.  In this model you are sharing system resources among different applications.  From the network’s point of view, there are a large number of systems available behind a couple of links.  Live Migration puts another wrinkle in the design as you now have to deal with a specific system moving from one physical server to another.  Network Virtualization helps out a lot with this: you can now move virtual ports from one physical server to another to ensure that when a virtual machine moves between physical servers, the network is still available.  In many cases you managed these virtual networks the same way you managed your physical network.  As a matter of fact, they were designed to emulate the physical as much as possible.  The virtual machines still looked a lot like the physical ones they replaced and can be treated in very much the same way from a traffic flow perspective.  The traffic is still primarily a North/South pattern.

Cloud, however, is a different ball of wax.  Think about the characteristics of the Cattle described above.  A cloud application is smaller and purpose built.  The majority of its traffic is between VMs, as different tiers which were traditionally on the same system or in the same VM are now spread across multiple VMs.  Therefore its traffic patterns are primarily East/West.  You cannot forget that there is still a North/South pattern, the same as in the other models, which is typically user interaction.  The application is stateless so that many copies of itself can run in tandem, allowing it to elastically scale up and down based on need, and as such VMs are appearing and disappearing from the network.  As these VMs are spawned on the system they may be right next to each other, on different servers, or potentially in different data centers.  But it gets even better.

Cloud architectures are typically multi-tenant.  This means that multiple customers will utilize the infrastructure and need to be isolated from each other.  And of course Clouds are self-service.  Users/developers can design, build and deploy whenever they want, including designing the network interconnects that their applications need to function.  All of this will cause overlapping IP address domains, multiple virtual networks (both L2 and L3), and requirements for dynamically configuring QoS, Load Balancers and Firewalls.  The last headache on our list is not the least: Cloud systems tend to breed like rabbits or multiply like coat hangers in the closet.  There are more and more systems as 10 servers become 40, which become 100, then 1000 and so on.

 

So what is a poor Network Engineer to do?

First, get a handle on what this Cloud thing is supposed to be for.  If you are one of the lucky ones who can dictate the use of the infrastructure then rock on!  Unfortunately, that does not seem to be the way it goes for many.  In the case where you just cannot predict how the infrastructure will be used, I am reminded of the phrase “there is no replacement for displacement”.  Fast links, non-blocking switches and network fabrics are all necessary for the physical network but will not get you there.  Since, as a network administrator, you cannot predict the traffic patterns, who can?  Well, the developer and the application itself.

This is what SDN is all about.  It allows a programmatic interface to what is called an overlay network: a series of tunnels/flows which can build virtual networks on top of the physical network, giving that pesky application what it was looking for.  In some cases you may want to make changes to the physical infrastructure, for example to change the configuration of the Firewall, Load Balancer or other network equipment.  SDN vendors are creating plug-ins that can make those types of configurations.

But if this is not good enough for you, there is NFV.  The basic idea here is: why have specialized hardware for your core network infrastructure when we can run those functions virtualized as well?  Let’s run them in VMs too, hook them into the virtual network, use SDN to configure them, and now we can virtualize the routers, load balancers, firewalls and switches.  These technologies are very much in a state of flux right now, but they are promising nonetheless.  Now if we could just virtualize the monitoring and troubleshooting of these environments, I’d be happy.
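To make the overlay idea a bit more tangible, here is a purely illustrative sketch of an application asking a hypothetical SDN controller for an isolated tenant overlay network.  The controller URL, endpoint and payload fields are all invented; a real deployment would use its SDN vendor’s API or Neutron.

```python
# Purely illustrative: a (hypothetical) SDN controller builds an isolated,
# tunnel-based overlay network on request. URL, endpoint and payload fields
# are invented; use your SDN vendor's or Neutron's real API instead.

import json
import urllib.request

CONTROLLER = "http://sdn-controller.example.com:8181"   # hypothetical controller endpoint

def create_overlay_network(tenant_id: str, name: str, cidr: str) -> dict:
    """Ask the controller to build an isolated overlay network for one tenant."""
    payload = json.dumps({
        "tenant_id": tenant_id,    # keeps overlapping CIDRs from different tenants apart
        "name": name,
        "cidr": cidr,
        "type": "vxlan",           # overlay encapsulation, independent of physical topology
    }).encode()
    request = urllib.request.Request(
        f"{CONTROLLER}/v1/overlays", data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# East/West traffic between the app's VMs now rides the overlay, regardless of
# which physical servers (or data centers) the scheduler places them on.
network = create_overlay_network("tenant-42", "app-tier-net", "10.0.0.0/24")
print(network)
```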