How to simulate an OpenStack Infra Slave

Situation: You’ve committed your code, you’ve submitted a patch, and yet, regardless of the number of rechecks, your tests simply won’t pass the gate. How can you reproduce the gate locally to triage what’s happening? By creating a local slave VM.

Prerequisites

To complete this tutorial, you will need the following:

  • Vagrant
  • VirtualBox
  • A local clone of OpenStack’s system-config repository: git clone git://git.openstack.org/openstack-infra/system-config

Create a local.pp manifest

A quick look at the .gitignore file at the root of the system-config project reveals that both ./manifests/local.pp and Vagrantfile are ignored. With that in mind, let us start by creating a simple local puppet manifest which describes our node:

# path: ./manifests/local.pp
# Any node with hostname "slave-.*" will match.
node /slave-.*/ {
  class { 'openstack_project::single_use_slave':
    sudo => true,
    thin => false,
  }
}

The openstack_project::single_use_slave manifest is used by nodepool – or rather, by disk-image-builder on behalf of nodepool – to build the virtual machine image used in OpenStack’s gate. This happens once a day, so any changes made in system-config will take at least 24 hours to propagate to the build queue.

Create a Vagrantfile

Next, we create a Vagrantfile that invokes the above manifest. Note that I am explicitly setting the hostname on each node – this is what determines which node definition in local.pp will be applied to our guest.

# path: ./Vagrantfile
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  # Create a new trusty slave: `vagrant up slave-trusty`
  config.vm.define "slave-trusty" do |trusty|
    trusty.vm.box = "ubuntu/trusty64"
    trusty.vm.network 'private_network', ip: '192.168.99.10'
    trusty.vm.hostname = 'slave-trusty' # Use this to control local.pp
  end

  # Create a new xenial slave: `vagrant up slave-xenial`
  # Will only work in vagrant > 1.8.1
  config.vm.define "slave-xenial" do |xenial|
    xenial.vm.box = "ubuntu/xenial64"
    xenial.vm.network 'private_network', ip: '192.168.99.11'
    xenial.vm.hostname = 'slave-xenial' # Use this to control local.pp
  end

  # Increase the memory for the VM. If you need to run devstack, this needs
  # to be at least 8192
  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
  end

  # Install infra's supported version of puppet.
  config.vm.provision "shell",
      inline: "if [ ! -f '/etc/apt/preferences.d/00-puppet.pref' ]; then /vagrant/install_puppet.sh; fi"

  # Install all puppet modules required by openstack_project
  config.vm.provision "shell",
      inline: "if [ ! -d '/etc/puppet/modules/stdlib' ]; then /vagrant/install_modules.sh; fi"

  # Symlink the module in system_config into /etc/puppet/modules
  config.vm.provision "shell",
      inline: "if [ ! -d '/etc/puppet/modules/openstack_project' ]; then ln -s /vagrant/modules/openstack_project /etc/puppet/modules/openstack_project; fi"

  config.vm.provision :puppet do |puppet|
    puppet.manifest_file  = "local.pp"
  end
end

IMPORTANT NOTE: As of Vagrant 1.8.3, the slave-xenial node declared above will fail to boot properly, because the published ubuntu/xenial64 image does not yet contain the VirtualBox guest additions, which must be installed manually. For specifics on how to do this, please examine this launchpad issue.
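One workaround, assuming the vagrant-vbguest plugin plays well with your VirtualBox version, is to let it build and install the guest additions automatically at boot:

# Hedged sketch: vagrant-vbguest attempts to install matching guest
# additions inside the guest the first time it boots.
vagrant plugin install vagrant-vbguest
vagrant up slave-xenial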

Vagrant up!

Last step: Execute vagrant up slave-trusty. With luck, and a little patience, this will create a brand new, clean, running jenkins-slave for you to test your build in.
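For reference, a typical session looks like this; the provision step can be re-run whenever you modify local.pp:

vagrant up slave-trusty         # boot and provision the guest
vagrant ssh slave-trusty        # log in and inspect the result
vagrant provision slave-trusty  # re-apply puppet after manifest changes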

Where next?

From this point, you should take a look at the project-config repository and determine which additional VM configuration steps are being executed by your job, so you can create an environment specific to the problem you’re trying to triage. Alternatively, you can explore some of the other nodes in ./manifests/site.pp, and perhaps extend the Vagrantfile above to instantiate a VM for one of infra’s services, such as StoryBoard or Grafana. Using the above template, you should be able to construct test instances of any infra component.

Update (June 27th, 2016)

The above method may also be used to simulate a regular OpenStack Infra server, with a few modifications. For this example, we’ll try to simulate an OpenStack Mirror. Add the following to your local puppet manifest:

# path: ./manifests/local.pp
node mirror {
  # This module is included on all infra servers. It sets up accounts, public keys, and the like.
  class { 'openstack_project::server':
    iptables_public_tcp_ports => [22, 80],
    sysadmins                 => hiera('sysadmins', [])
  }
  
  # This module includes functionality specific to this server.
  class { 'openstack_project::mirror':
    vhost_name => $::ipaddress,
    require    => Class['Openstack_project::Server'],
  }
}

After doing so, add this node to your Vagrantfile:

# path: ./Vagrantfile
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  # Create a new mirror slave: `vagrant up mirror`
  config.vm.define "mirror" do |mirror|
    mirror.vm.box = "ubuntu/trusty64"
    mirror.vm.network 'private_network', ip: '192.168.99.22'
    mirror.vm.hostname = 'mirror' # Use this to control local.pp
  end

... # Continue from example above.

And done! Now you can invoke vagrant up mirror and watch as your openstack-infra mirror server is provisioned. There are a few caveats:

  1. If you want to add a new puppet module, you’ll want to add it to modules.env. Doing so will only trigger an automatic install if you’re starting from a fresh guest, so you’ll either have to install it manually (see the sketch below) or recreate your guest.
  2. Some manifests require a hostname. In this case, I usually reference the host’s IP address, as managing DNS is too much effort for most test scenarios: vhost_name => $::ipaddress
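For the first caveat, a minimal sketch of installing a module by hand on a running guest (the module name here is purely illustrative):

vagrant ssh mirror -c "sudo puppet module install puppetlabs-apache"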

JavaScript RoadMap for OpenStack Newton

This post contains the current working draft of the OpenStack JavaScript roadmap. It’s a big list, and we need help to land it during the Newton cycle. Overall themes for this cycle are Consistency, Interoperability, and engaging with the JavaScript community at large, all topics which I’ve written about at length. Our end goal is to build the foundations of a JavaScript ecosystem, which permits the creation of entirely custom interfaces.

Note: We are not trying to replace Horizon; we are aiming to help those downstream who need something more than “Vanilla OpenStack”, and thus maintain their own code. The vast majority of modern UI and UX development happens in JavaScript, and there are many use cases that have not yet been met.

OpenStack Projects

These projects are part of the big tent, and will see significant development during the Newton Cycle.

ironic-webclient

The ironic-webclient will release its first version during the Newton cycle. We’re awfully close to having the basic set of features supported, and with some excellent feedback from the OpenStack UX team, will also have a sexy new user interface that’s currently in the review queue. Once this work is complete, we will begin extracting common components into a new project, named…

js-openstacklib

This new project will be incubated as a single, gate-tested JavaScript API client library for the OpenStack APIs. Its audience is software engineers who wish to build their own user interface using modern JavaScript tools. As we cannot predict downstream use cases, special care will be taken to ensure the project’s release artifacts can eventually support both browser- and server-based applications.

Philosophically, we will be taking a page from the python-openstackclient book and avoiding the creation of a new project for each of OpenStack’s services. We can make sure our release artifacts can be used piecemeal; however, trying to maintain code consistency across many different projects is a hard lesson that others have already learned for us. Let’s not do that again.

Infrastructure Projects

These projects belong to OpenStack’s Infrastructure and/or QA team. They’re used to support the building of JavaScript projects in OpenStack.

js-generator-openstack

Yeoman is JavaScript’s equivalent of cookiecutter, providing a scaffolding engine which can rapidly set up, and maintain, new projects. Creating and maintaining a yeoman generator will be a critical part of engaging with the JavaScript community, and can drive adoption and consistency across OpenStack as well. Furthermore, it is sophisticated enough that it could also support many things that exist in today’s Python toolchain, such as dependency management, and common tooling maintenance.
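As a sketch of the developer experience we are aiming for – the generator name and prompts are assumptions, since the project is still taking shape:

npm install -g yo                    # Yeoman's generator runner
npm install -g generator-openstack   # hypothetical published name
mkdir my-project && cd my-project
yo openstack                         # scaffold a new OpenStack JS project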

Development of the yeoman generator will draw in lessons learned from OpenStack’s current UI Projects, including Fuel, StoryBoard, Ironic, Horizon, Refstack, and Health Dashboard, and attempt to converge on common practices across projects.

js-npm-publish-xstatic

This project aims to bridge the gap between our JavaScript projects, and Horizon’s measured migration to AngularJS. We don’t believe in duplicating work, so if it is feasible to publish our libraries in a way that Horizon may consume (via the existing xstatic toolchain), then we certainly should pursue that. The notable difference is that our own projects, such as js-openstacklib, don’t have to go through the repackaging step that our current xstatic packages do; thus, if it is possible for us to publish to npm and to xstatic/pypi at the same time, that would be best.

Xenial Build Nodes

As of two weeks ago, OpenStack’s Infrastructure is running a version of Node.js and npm more recent than what is available on Trusty LTS. Ultimately, we would like to converge this version on Node4 LTS, the release version maintained by the Node foundation. The easiest way to do this is to simply piggyback on Infra’s impending adoption of Xenial build nodes, though some work is required to ensure this transition goes smoothly.

Maintained Projects

The following projects are active and considered ‘complete’, though they will require continuous maintenance throughout the Newton cycle. I’ve included all the needed work that I am aware of; if there’s something I’ve missed, please feel free to comment.

eslint-config-openstack

eslint has updated to version 2.x, and no more rule bugfixes are being landed in 1.x. eslint-config-openstack will follow in kind, updating itself to use eslint 2.x. We will release this version as eslint-config-openstack v2.0.0, and continue to track the eslint version numbers from there. Downstream projects are encouraged to adopt it, as it is unlikely that automated dependency updates for JavaScript projects will land this cycle.
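For downstream projects, adoption is a two-step process; the .eslintrc contents follow eslint’s standard shareable-config convention, which resolves “openstack” to the eslint-config-openstack package:

npm install --save-dev eslint@2 eslint-config-openstack
echo '{"extends": "openstack"}' > .eslintrc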

NPM Mirrors

We are currently synchronizing all npm packages to our AFS master disks, which should be the final step in standing up our npm mirrors. Some minor tweaking will be required to make them fully functional, and they will need to be maintained throughout the next cycle. Issues raised in the #openstack-infra channel will be promptly addressed.

This includes work on both the js-openstack-registry-hooks project and the js-afs-blob-store project, which are two custom components we use to drive our mirrors.
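Once the mirrors are live, pointing a build at one should be a single configuration change. The hostname below is illustrative, not the final mirror address:

npm config set registry http://mirror.example.openstack.org/npm/
npm install   # packages now resolve via the AFS-backed mirror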

oslo_middleware.cors

CORS landed in mitaka, and we will continue to maintain it going forward. In the Newton cycle, we have the following new features planned:

  • Automatic allowed_origin detection from Keystone (zero-config).
  • More consistent use of set_defaults.
  • Configuration maintenance as projects deprecate X-* headers in accordance with RFC 6648.

Stretch Projects

These are projects which we know need to be done, however we simply do not have enough contributors.

Docs

Documentation is important. Usable documentation is even more important. The tricky bit is that OpenStack’s documentation is all python/sphinx based, and we have not yet investigated whether it’s possible to bridge the two languages. If you have time to explore this intersection, we’d be happy to hear your findings.


That concludes it for the Newton Cycle. As you can see, there’s a lot of work to do. Can you help?

JavaScript on the Trailing Edge

The public opinion of the JavaScript community is that it moves fast. We break things, we’re hungry for the latest features, and none of us want to return to the days of slow innovation that ended with the death of IE6. This really isn’t true: there are several core JavaScript projects, such as Angular, jQuery, and React, which have solid governance and measured release cycles that would mesh well with OpenStack. It just happens that those projects are surrounded by thousands of smaller ones, run by handfuls of engineers who are volunteering their time.

However, the JavaScript community offers many benefits, from layout frameworks to new user interface paradigms, and OpenStack could easily take advantage of them. As I’ve pointed out, the user interface needs of a cloud platform vary by user, not by deployment, and it is high time that OpenStack catered to more than just the operator mindset. Some obstacles remain, however they are easily solved:

Backwards Compatibility

The first challenge we face is backwards compatibility. We must balance the rapid adoption of new developments like ES6 against downstream LTS support commitments that can last several years. We must do all this while not losing ourselves in a morass of hacks, special cases, shortcuts, and workarounds. This requires common dependency management for all our JavaScript libraries, and we can easily draw on the lessons learned in OpenStack’s requirements project to lead the way.

Complacency

Furthermore, we face a social challenge, that of complacency. The counterargument I most frequently get is “Why not use Horizon?” As my previous post on composable cloud interfaces highlights, Horizon is too narrowly focused. While it does an admirable job of supporting the Operator use case, and provides many ways to extend itself, a brief survey I performed last year revealed that two-thirds of downstream Horizon users either maintain full forks of Horizon’s source or build entirely custom user interfaces. To me, this is stark evidence that Horizon falls short of meeting the use cases of all OpenStack operators.

Funding

Lastly, we face the rather pedestrian challenge of funding. While I’ve come across broad support for a greater role of JavaScript in OpenStack’s UI development – to the level of squeefun bouncing in a hallway when I mentioned ES6 – it remains a fact of life that those corporate members with the most to gain by the strategic evolution of OpenStack are usually content to let ‘someone else’ do the work, while dedicating their own employees towards more immediate revenue sources.


It’s a Catch-22: we cannot prove the viability of JavaScript thick-client UIs without a functional alternative to Horizon, but we cannot get to that alternative without engineers willing – and able – to contribute. Personally, I feel very privileged to be one of a very small number of fully dedicated upstream engineers. To the best of my knowledge, Elizabeth Elwell and I are the only two dedicated entirely to strategically advancing user interface development in OpenStack. We are making good progress, however we do not anticipate adoption in the next cycle.

With help, Newton will contain the last pieces we need.

OpenStack Infra now uses Node.js v4 and npm v2

OpenStack’s Infrastructure is now running all of its npm- and NodeJS-based test jobs on the newer NodeJS v4, with npm 2.15. That’s pretty awesome, given that previously we were running on v0.10.25. ES6, anyone?

LTS is important

Here in OpenStack we try to stick as closely as possible to LTS packages. We do this for multiple reasons, chief of which is that many of our customers have governance and compliance constraints that prevent them from installing arbitrary bits on their hardware. Furthermore, up until recently, the only feasible test platform was Ubuntu Trusty (CentOS 7 and RHEL are recent additions), which forced us to rely on NodeJS v0.10.25 and npm v1.3.10. Neither of these is supported by the Node Foundation or npm inc., and security backports are the responsibility of the distro packaging team.

Vulnerable to Upstream Changes

These out-of-date versions leave us vulnerable to upstream changes, and last week we encountered exactly that: npm upgraded the registry to serve gzipped content, something our older npm client did not understand. The fix was slow in coming (at no fault of npm inc.’s engineers), and we were left unable to access registry packages in a reliable way, preventing us from releasing some components of Mitaka… during release week.

When npm breaks, what do we do?

We needed two things – a quick fix, and a future strategy; if possible, both at the same time. Sadly, we couldn’t just update npm: while there’s precedent (we run a non-distro version of pip), the older npm client could not reach the registry to download the packages necessary to update itself. A different approach was needed.

Last summer, the Node Foundation joined the Linux Foundation and announced plans for an LTS release schedule, which was realized in October. Shortly afterwards, Linux distro packages began appearing for those versions, and they have since graduated in both Debian and Ubuntu. While neither of these is yet available to us (Xenial Xerus releases later this month), it nevertheless gave us a clear path forward:

1. Upgrade to Node4 via the NodeSource package repository

In order to unblock our builds, we upgraded all of our jenkins slaves to use the NodeSource repository’s version of NodeJS 4, which ships with npm 2.15. While not precisely the LTS versions, they were close enough to solve the issue we were encountering. This would give us a backwards-compatible solution for all of our trusty nodes, while paving the way to the adoption of xenial.
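For the curious, the NodeSource installation on a trusty node boils down to something like this:

curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs   # ships NodeJS v4 with npm 2.x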

This work has been completed, and our gate is happy again. Woohoo!

2. Switch to Xenial as quickly as feasible

We’ve already been testing xenial nodes in OpenStack’s gate, and anticipate officially making those nodes available during the Newton cycle. Once those become available, we’ll start moving all of our javascript jobs to those nodes instead.

3. Build a static mirror on top of OpenStack’s unified mirror infrastructure

By hosting our own package mirror, we isolate ourselves from many problems with the upstream package registry. There have long been plans to build this, and the first major components (Unified Mirrors) have already landed. The only thing these recent problems (as well as similar ones, like left-pad) have done is raise the urgency.


Stay tuned for updates

I’ll be posting updates on this effort to this blog, which is syndicated to OpenStack planet. Stay tuned, and let me know if you want to help! My IRC handle is @krotscheck, and I’m always online on FreeNode’s #openstack-infra channel.

Securely publishing to NPM, the OpenStack way

The following article has been making the rounds, claiming a new worm exploit against npm. First of all, this is not a new exploit, nor is it in any way unique to npm – pip, gem, rpm, and deb have the same issue. Many may not even consider this an exploit at all: it’s a known feature, provided by package repositories, that permits compiling platform-specific code at install time. This is useful if, for instance, your package depends on a C-level library.

The exploit works something like this:

  1. If you are persistently authenticated against a package repository, and…
  2. …you install a package which compiles and runs someone else’s code, then…
  3. …an attacker can execute malicious code which can publish itself to your packages…
  4. …which will then subsequently infect anyone who fuzzily matches against your packages’ versions.

This is not news. Mitigation approaches exist. Here’s how we do it in OpenStack:

Step 1: Do not use privileged build environments

Every test, package, or other build command runs on a throwaway jenkins slave that exists only for that job, after which it is deleted. While the jenkins user begins with passwordless sudo during test setup, that privilege is almost always revoked before the tests are run. In short, even if malicious code is downloaded during npm install, it is never executed in an environment that permits a privilege-escalation attack.
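A minimal sketch of that revocation step – the sudoers file name is illustrative, and infra’s actual manifests differ:

# Remove the jenkins user's passwordless sudo grant before tests run.
sudo rm -f /etc/sudoers.d/jenkins
sudo -k                                # drop any cached credentials
sudo -n true || echo 'privilege escalation revoked'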

This approach doesn’t have to be restricted to our cloud VMs, either. You can do this with docker images, vagrant boxes, you name it.

Step 2: Build an intermediary tarball with `npm pack`

`npm pack` builds a release tarball from a given package – in our case, the current project at its release version tag. We do this on the above-mentioned throwaway slave, so that any scripts executed during the construction process cannot access any credentials. After construction, this tarball is uploaded to tarballs.openstack.org, from which anyone can retrieve it.

Step 3: Publish without scripts

OpenStack’s infrastructure contains one jenkins slave that possesses the credentials necessary to publish artifacts. Its sole purpose in life is to download a release tarball and push it to a package repository. In npm’s case, we execute `npm publish <tarball> --ignore-scripts` to ensure that none of the package’s lifecycle events are accidentally executed, further isolating us from unexpected attacks.
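Putting steps 2 and 3 together, the flow reduces to two commands run on two different machines; the tarball name is illustrative:

# On the throwaway slave (no credentials present):
npm pack                                   # emits <name>-<version>.tgz
# Later, on the single credentialed slave:
npm publish mypackage-1.2.3.tgz --ignore-scripts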

Other security measures

In addition to the above publishing flow, we also have several policies in place intended to ensure that our packages are trustworthy.

    • Only one user owns our npm packages. This prevents other owners from accidentally compromising the package.
    • Only verified, gpg-signed git tags using registered keys will trigger our publish jobs. To easily enable this, add sign-git-tag=true to your global or local .npmrc (of course, you’ll need to be able to sign a tag; see the sketch below).
    • We strongly prefer using strict version matching in our packages, which also has the benefit of making our builds deterministic. The fastest way to accomplish this yourself is to commit your shrinkwrap file to version control.
    • We don’t just publish our code via npm; If you’d prefer using a git dependency, or a tarball link, that option is open to you.
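A hedged sketch of what the signed-tag and shrinkwrap policies look like from a package maintainer’s desk (version numbers are illustrative):

npm version patch    # bumps the version; creates a signed tag when sign-git-tag is set
git tag -v v1.2.4    # verify the tag signature against your keyring
npm shrinkwrap       # pin the exact dependency tree
git add npm-shrinkwrap.json && git commit -m "Lock dependencies"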

Horizon Usage Survey

Over the past few weeks, I’ve run a survey that attempts to discover how people use OpenStack’s Horizon (aka openstack-dashboard), and I’d like to publish some preliminary results. I’ll be soliciting responses during the Vancouver Summit next week, so if you haven’t participated yet, there is still time: http://tinyurl.com/horizon-usage-survey.

Results

In two weeks, the survey gathered 36 responses. Due to the small sample size and the non-random selection of participants, this data should not be considered statistically significant – self-selected populations rarely are – however it does provide us with a window into how Horizon is used in the real world.

OpenStack Deployment Statistics

The following are charts that address the scale of our users’ OpenStack deployments.


Deployment Size


This is an indication of how many bare-metal instances comprise our users’ clouds.

OpenStack Version


Which versions are currently deployed by our users. Note that some deploy multiple clouds.

Cloud Type


The type of cloud gives us an indication of what use cases our users encounter.


Horizon Deployment

These charts represent information about Horizon usage.


What is your UI?


Whether our users use Horizon, a custom-built UI, or both.

Install Tools


Which tools our users use to install and maintain Horizon.

Host Operating System


The operating system on which Horizon is installed.


Horizon Customization

Information about the tools that are used to customize horizon, what parts of horizon are customized, and where Horizon falls short.


How did you customize?


There are many ways to customize Horizon: plugins, the customization module, creating your own Django application with Horizon as a dependency, or simply maintaining your own source fork.

What was changed?


Which parts of Horizon were customized: Templates, Behaviors, Workflows, or more?

Maintained Source


In the case of a Django application, a custom UI, or a Horizon fork, our users must maintain their own source repository.


What is the one key feature missing from horizon?

This was a free-form question, so I’ve taken the liberty of grouping the responses into different categories.

Usability and simplified experience

These responses address simplicity and usability in horizon.

  • Customer-facing features that improve and simplify the experience.
  • Masking networks that cannot be attached to an instance during the instance boot wizard.
  • A simple image panel that only shows the latest images, instead of all images.
  • Improved access to, and usability of, horizon’s metrics visualization.
  • User-friendly instance creation.

Hosted Cloud Features

These seem to be feature requests focused around hosting a cloud provider and selling it as a self-service cloud platform.

  • Self-service project management (Project Admin/Owner, etc).
  • Billing & Pricing integration.

New Features

These appear to be requests for entirely new features.

  • Approval Automation for Quotas, Tenants, and allocations.
  • Cloud Federation.
    (note: one respondent indicated that they fielded their own user interface because horizon could not talk to other clouds)

Extensibility Improvements

  • Panel Extensions are difficult to manage.
  • No uniform way to import horizon extensions, too many options.

Other

For the sake of completeness, I’ve added features here that are not easily categorized.

  • Invincibility
  • Too many to List

JavaScript Dependency Management in OpenStack

A problem that I’ve been working on this last week is JS dependency management – driven by npm and bower – inside of OpenStack. To be honest, this problem extends to JS dependency management in any application that wants to be packaged within a Linux distribution, as that is the ultimate bar that needs to be met. In this case, however, we’re just going to focus on OpenStack. And, to narrow things down even more, we’re only going to focus on front-end, bower-driven dependencies used at runtime.

To be clear: we are not talking about which tools to use. We are talking about making a javascript/html project’s source code trustworthy enough for packagers, while still providing access to the npm-driven toolchain preferred in this community.

Note: I anticipate updating this post to make more recommendations as I build out Ironic’s Webclient. Stay tuned.

Bower: Commit your dependencies

TL;DR: The ultimate recommendation to the OpenStack community is to use project-appropriate tools to resolve dependencies, but to ultimately commit them to source control. For Python, you might use something like bower.py. For npm/JavaScript projects, I personally recommend main-bower-files, as demonstrated in this gulp file.

Requirement: Builds must be deterministic

Packagers’ customers are banks. Governments. Large corporations. Entities which need to ensure that the software they’re running can be signed and verified, and there are significant dollar values rolled up in SLAs to ensure this. There are lots of policies in place for this, some of which seem so draconian as to be beyond unreasonable. If you’re curious, I recommend reading up on PCI compliance. It all makes sense once you realize that it’s possible to guess a password from the return speed of an error response.

In the world of packaging, this means that builds must be deterministic: if you run the build multiple times, the output must be exactly the same. If you can’t do that, you can’t md5 or sha1 sum the results for verification, and suddenly the packager is on the hook for the next big security breach.
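As a concrete test, a packager can run the same build twice from a clean tree and compare checksums; the build command here is illustrative:

./build.sh && sha1sum dist/release.tar.gz > /tmp/first.sum
git clean -xdf
./build.sh && sha1sum dist/release.tar.gz > /tmp/second.sum
diff /tmp/first.sum /tmp/second.sum && echo 'build is deterministic'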

Fact: Bower is not deterministic

Bower’s pretty neat. It is a registry, rather than a repository, so it only provides the address of where you can get a library, rather than providing the package itself. In the vast majority of cases, this means that bower will point you at a git repository, from which the command line client then extracts the tags as versions. This is pretty awesome, because it means that you can make github host your repository for you.

Yet… git lets you go back in time and rewrite history. While awesome, this means that bower itself does not provide a deterministic way of resolving a dependency, and therefore cannot be used by packagers. Yes, you can cache bower and the git/svn repositories that it links to; in fact, I wrote a bower-mirror puppet module that will build a server which does just that. That does not solve the problem of git being non-deterministic, though. As long as a library’s primary source is a git tag, you can’t trust it.

Solution: Use bower anyway

Wait, what? No, I’m serious. The fact is that bower is the de facto dependency registry for front-end development. We should use it, because it’s an awesome tool. We should also ensure that our builds are deterministic, which means that bower should not be run as part of a build; it should only be used to assist in resolving and committing dependencies.

There is precedent: The NPM documentation itself recommends that you commit all your dependencies, a fact that came out during the SSL Debacle of 2014. Yet even without this recommendation from the JavaScript community itself, there is precedent in OpenStack via the oslo-incubator libraries. Since they are libraries in incubation, they are directly copied and committed into a target project, rather than using pip.

How do you do this? Well, that’s up to you. If you’re a mostly-python project that wants to use the bower registry but is allergic to node, then I’d suggest something like bower.py. If instead you’re using the NPM toolchain, something like the ‘update_dependencies’ target in this gulpfile should work for you.
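For the npm-toolchain case, the shell equivalent of that gulp target looks roughly like this; the destination path is illustrative:

bower install                      # resolve bower.json into bower_components/
mkdir -p app/lib
cp -r bower_components/. app/lib/  # copy the resolved files into the tree
git add app/lib bower.json
git commit -m "Update committed bower dependencies"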

Goodbye Launchpad, Hello Storyboard

The OpenStack Infrastructure team has successfully migrated all of the openstack-infra project bugs from LaunchPad to StoryBoard. With the exception of openstack-ci bugs tracked by elastic-recheck, all bugs, tickets, and work tracked for OpenStack Infrastructure projects must now be submitted and accessed at https://storyboard.openstack.org. If you file a ticket on LaunchPad, the Infrastructure team no longer guarantees that it will be addressed. Note that only the infrastructure projects have moved; no other OpenStack projects have been migrated.

This is part of a long-term plan to migrate OpenStack from Launchpad to StoryBoard. At this point we feel that StoryBoard meets the needs of the OpenStack infrastructure team and plan to use this migration to further exercise the project while we continue its development.

As you may notice, development on StoryBoard is ongoing, and we have not yet reached feature parity with those parts of LaunchPad which are needed by the rest of OpenStack. Contributions are always welcome, and the team may be contacted in the #storyboard or #openstack-infra channels on freenode, via the openstack-dev list using the [storyboard] subject tag, or via StoryBoard itself by creating a story. Feel free to report any bugs, ask any questions, or make any improvement suggestions at: https://storyboard.openstack.org/#!/project/456

We are always looking for more contributors! If you have skill in AngularJS or Pecan, or would like to fill in some of our documentation for us, we are happy to accept patches. If your project is interested in moving to StoryBoard, please contact us directly. While we are hesitant to move new projects to StoryBoard at this point, we would love to work with you to determine which features are needed to support you.


StoryBoard Authentication and Authorization

During the OpenStack Summit in Paris this last week, we made a concerted effort to finally migrate the openstack-infra projects over to StoryBoard. This is a pretty big milestone for us, because it’s the first real set of users that we’ve had on our system – basically our beta users. Of course, the best-laid plans ran into some problems, one of which is forcing us to make a decision on how to handle user identity. What follows is my personal opinion on where we are, where I’d like to see us go, and what I feel it would take us to get there.

Problem Summary

Our original data source (pre-migration) permits duplicate user names, which results in “duplicate” user records. In some cases this is intentional, as users wish to retain supplemental identifiers (such as IRC nicks) on their accounts while clearly separating contributions made as an agent of an organization (e.g. an employer) from contributions made on their own behalf.

StoryBoard, in contrast, does not permit duplicate user names, which raises the question of what to do during data import. Should we permit duplicate user names, and risk creating zombie users? Should we prompt the admin during import to decide whether to create a user or link a user? In that case, what do we do about the extra OpenID – do we permit two different OpenIDs to log in as one user?

This, along with many other edge cases, makes this one of those hairy problems legendary for causing technical debt. Rather than trying to patch the problem right now, it behooves us to consider where we want to end up, and then take the minimum number of steps towards that goal which also solve our immediate problem.

The Long-Term Goal

My long-term goal with StoryBoard is twofold. Firstly, I want our authentication system to be n-pluggable, so that an install can permit its users to authenticate against multiple authentication providers. This is the “Log in with Facebook/Google/OpenID” story, and while I anticipate that the vast majority of users will only use one single login method, the ability to link multiple providers is critical in complex organizational structures, as well as for handling legacy auth migration cases.

Secondly, I want users to have the power to declare their own identity within the system, with as little fuss as possible. The best way to describe this is Google Auth’s multiple login, where a user may switch their identity within the same browser session.

Current Design

The current design of StoryBoard’s authentication contains two portions: Authentication and Authorization. The first, Authentication, is the to-be-pluggable system I referred to earlier, whereby a user’s identification is delegated to a trusted third party. Once we have received a user’s identity from the remote system, we attempt to resolve that user against our local database, creating a new record if necessary.

The second part, Authorization, mirrors the standard OAuth/OpenID flow of establishing a trust relationship with the browser (to the best of our ability, anyway) and issuing the user an API bearer token with which they can make queries.

The entire system right now is hard-coded to only accept a single OpenID provider, and our design has only been tested against Launchpad/Ubuntu One.

Identifying Delta

To get from ‘Where we are’ to ‘Where we want to be’, we then need to identify the necessary delta to our code, data, and functionality.

  • To enable a user to link to multiple authentication providers, we must be able to store N remote ID tokens (such as an OpenID) per user.
  • To enable multiple authentication plugins, we must use a stevedore-like plugin discovery mechanism to allow multiple plugins to be loaded. Similar code is already in place, and should be relatively easy to copy.
  • To enable a user to choose how they wish to authenticate, we must permit some form of authentication discovery, where the API informs the user what authentication options they may have.
  • In order to permit user discovery by multiple different parameters (email, irc handle, etc), we must permit a user record to reference multiple email addresses and multiple IRC handles, which MAY conflict.
  • To permit multi-session in the browser, the web client should store multiple user auth tokens, and permit a user to ‘switch’ between which one is being used.
  • In order to display the name of the actor in the UI, providing a consistent display name is necessary. While we can update this display name with whatever comes back from our remote auth providers, doing so may cause caching problems when a user’s display name changes from system to system.
  • In order to permit the normalization of user records and allow zombie removal, we must permit a user merge activity that can support two use cases: Firstly, in the case that a user retains the ability to identify as both users to be merged, a self-serve merge process by which identity control is verified and resolved. Secondly, in the case where a user record has become truly orphaned from an authentication system, an administration action that permits a brute-force user merge.

With the above “eventual” features in mind, it quickly becomes clear that the users table in StoryBoard currently contains too much data. Given a 1-to-n mapping on email addresses, user names, OpenIDs, and display names, extracting these into their own tables reduces the user table to little more than an ID and a login timestamp.

Back to the problem

With the above in mind, it becomes clear that there is no real benefit to maintaining a uniqueness constraint on the username column, as it provides no real useful data. IRC handles – one of the go-to identifiers in OpenStack – do not cleanly map 1-to-1 to actual user records, as a particular person might be acting for different agents. Thus it is actually a benefit for us to permit duplicate usernames.

By lifting the uniqueness constraint, we both fix our immediate problem and take a step in the right direction for our optimal system.