How to simulate an OpenStack Infra Slave

Situation: You’ve committed your code, you’ve submitted a patch, and yet, regardless of the number of rechecks, your tests simply won’t pass the gate. How can you test the gate locally to triage what’s happening? By creating a local slave VM.

Prerequisites

To complete this tutorial, you will need the following:

  • Vagrant
  • VirtualBox
  • A local clone of OpenStack’s system-config repository: git clone git://git.openstack.org/openstack-infra/system-config

Create a local.pp manifest

A quick look at the .gitignore file at the root of the system-config project reveals that both ./manifests/local.pp and Vagrantfile are ignored. With that in mind, let us start by creating a simple local puppet manifest which describes our node:

# path: ./manifests/local.pp
# Any node with hostname "slave-.*" will match.
node /slave-.*/ {
  class { 'openstack_project::single_use_slave':
    sudo => true,
    thin => false,
  }
}

The openstack_project::single_use_slave manifest is used by nodepool – or rather, by disk-image-builder on behalf of nodepool – to build the virtual machine image used in OpenStack’s gate. This happens once a day, so any changes made to system-config will require at least 24 hours to propagate to the build queue.

Create a Vagrantfile

Next, we create a Vagrantfile that invokes the above manifest. Note that I am explicitly setting hostname on each node – this allows us to choose specifically which manifest will be applied to our guest.

# path: ./Vagrantfile
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  # Create a new trusty slave: `vagrant up slave-trusty`
  config.vm.define "slave-trusty" do |trusty|
    trusty.vm.box = "ubuntu/trusty64"
    trusty.vm.network 'private_network', ip: '192.168.99.10'
    trusty.vm.hostname = 'slave-trusty' # Use this to control local.pp
  end

  # Create a new xenial slave: `vagrant up slave-xenial`
  # Will only work in vagrant > 1.8.1
  config.vm.define "slave-xenial" do |xenial|
    xenial.vm.box = "ubuntu/xenial64"
    xenial.vm.network 'private_network', ip: '192.168.99.11'
    xenial.vm.hostname = 'slave-xenial' # Use this to control local.pp
  end

  # Increase the memory for the VM. If you need to run devstack, this needs
  # to be at least 8192
  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
  end

  # Install infra's supported version of puppet.
  config.vm.provision "shell",
      inline: "if [ ! -f '/etc/apt/preferences.d/00-puppet.pref' ]; then /vagrant/install_puppet.sh; fi"

  # Install all puppet modules required by openstack_project
  config.vm.provision "shell",
      inline: "if [ ! -d '/etc/puppet/modules/stdlib' ]; then /vagrant/install_modules.sh; fi"

  # Symlink the module in system_config into /etc/puppet/modules
  config.vm.provision "shell",
      inline: "if [ ! -d '/etc/puppet/modules/openstack_project' ]; then ln -s /vagrant/modules/openstack_project /etc/puppet/modules/openstack_project; fi"

  config.vm.provision :puppet do |puppet|
    puppet.manifest_file  = "local.pp"
  end
end

IMPORTANT NOTE: As of Vagrant 1.8.3, the slave-xenial guest declared above will fail to boot properly. This is because, at this time, the published ubuntu/xenial64 image does not contain the VirtualBox guest additions, which must be installed manually. For specifics on how to do this, please examine this launchpad issue.
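
If you just need to get unblocked, one common workaround (offered here only as a sketch, and not necessarily the fix described in the launchpad issue) is the vagrant-vbguest plugin, which installs the guest additions into the guest for you:

# Sketch: install the vagrant-vbguest plugin on the host, then boot the guest.
# The plugin is an assumption on my part; it is not part of system-config.
vagrant plugin install vagrant-vbguest
vagrant up slave-xenial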

Vagrant up!

Last step: Execute vagrant up slave-trusty. With luck, and a little patience, this will create a brand new, clean, running jenkins-slave for you to test your build in.
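
For reference, the full loop looks something like this (slave-trusty is the node name from the Vagrantfile above; vagrant provision is only needed when you change local.pp after the first boot):

vagrant up slave-trusty        # Boot and provision the guest.
vagrant ssh slave-trusty       # Poke around inside the slave.
vagrant provision slave-trusty # Re-run puppet after editing local.pp.
vagrant destroy slave-trusty   # Throw it away and start clean.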

Where next?

From this point, you should take a look at the project-config repository and determine which additional VM configuration steps are being executed by your job, so you can create an environment specific to the problem you’re trying to triage. Alternatively, you can explore some of the other nodes in ./manifests/site.pp, and perhaps extend the Vagrantfile above to instantiate a VM for one of infra’s services, such as StoryBoard or Grafana. Using the above template, you should be able to construct test instances of any infra component.

Update (June 27th, 2016)

The above method may also be used to simulate a regular OpenStack Infra server, with a few modifications. For this example, we’ll try to simulate an OpenStack Mirror. Add the following to your local puppet manifest:

# path: ./manifests/local.pp
node mirror {
  # This module is included on all infra servers. It sets up accounts, public keys, and the like.
  class { 'openstack_project::server':
    iptables_public_tcp_ports => [22, 80],
    sysadmins                 => hiera('sysadmins', [])
  }
  
  # This module includes functionality specific to this server.
  class { 'openstack_project::mirror':
    vhost_name => $::ipaddress,
    require    => Class['Openstack_project::Server'],
  }
}

After doing so, add this node to your Vagrantfile:

# path: ./Vagrantfile
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  # Create a new mirror slave: `vagrant up mirror`
  config.vm.define "mirror" do |mirror|
    mirror.vm.box = "ubuntu/trusty64"
    mirror.vm.network 'private_network', ip: '192.168.99.22'
    mirror.vm.hostname = 'mirror' # Use this to control local.pp
  end

... # Continue from example above.

And done! Now you can invoke vagrant up mirror and watch as your openstack-infra mirror server is provisioned. There are a few caveats:

  1. If you want to add a new puppet module, you’ll want to add it to modules.env. Doing so will only trigger an automatic install if you’re starting from a fresh guest, so you’ll either have to install it manually (see the sketch below this list) or recreate your guest.
  2. Some manifests require a hostname. In this case, I usually reference the host’s IP address, as managing DNS is too much effort for most test scenarios: vhost_name => $::ipaddress
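
Regarding the first caveat, here is a quick sketch of installing an extra module by hand on an already-built guest. The module name is only an example; it is not something system-config requires:

# Install an additional puppet module inside an existing guest.
vagrant ssh mirror -c "sudo puppet module install puppetlabs-apache"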

JavaScript on the Trailing Edge

The public perception of the JavaScript community is that it moves fast: we break things, we’re hungry for the latest features, and none of us want to return to the days of slow innovation that ended with the death of IE6. This really isn’t the whole truth; there are several core JavaScript projects, such as Angular, jQuery, and React, which have solid governance and measured release cycles that would mesh well with OpenStack. It just happens that those projects are surrounded by thousands of smaller ones, run by handfuls of engineers who are volunteering their time.

However, the JavaScript community offers many benefits, from layout frameworks to new user interface paradigms, and OpenStack could easily take advantage of all of these. As I’ve pointed out, the user interface needs of a cloud platform vary by user, not by deployment, and it is high time that OpenStack catered to more than just the operator mindset. There remain some obstacles to this; however, they are easily solved:

Backwards Compatibility

The first challenge we face is backwards compatibility. We must balance the rapid adoption of new developments like ES6 with downstream LTS support commitments that can last several years. We must do all this while not losing ourselves in a morass of hacks, special cases, shortcuts, and workarounds. This requires common dependency management for all of our JavaScript libraries, and we can easily draw on the lessons learned in OpenStack’s requirements project to lead the way.

Complacency

Furthermore, we face a social challenge: complacency. The counterargument I most frequently hear is “Why not use Horizon?” As my previous post on composable cloud interfaces highlights, Horizon is too narrowly focused. While it does an admirable job of supporting the Operator use case, and provides many ways to extend itself, a brief survey I performed last year revealed that two thirds of downstream Horizon users either maintain full forks of Horizon’s source or have built entirely custom user interfaces. To me, this is stark evidence that Horizon falls short of meeting the use cases of all of our OpenStack operators.

Funding

Lastly, we face the rather pedestrian challenge of funding. While I’ve come across broad support for a greater role for JavaScript in OpenStack’s UI development – to the level of squeefun bouncing in a hallway when I mentioned ES6 – it remains a fact of life that those corporate members with the most to gain from the strategic evolution of OpenStack are usually content to let ‘someone else’ do the work, while dedicating their own employees to more immediate revenue sources.


It’s a Catch-22: we cannot prove the viability of JavaScript thick-client UIs without a functional alternative to Horizon, but we cannot get to that alternative without engineers willing – and able – to contribute. Personally, I feel very privileged to be one of a very small number of fully dedicated upstream engineers. To the best of my knowledge, Elizabeth Elwell and I are the only two entirely dedicated to strategically advancing user interface development in OpenStack. We are making good progress; however, we do not anticipate adoption in the next cycle.

With help, Newton will contain the last pieces we need.

We need a consistent OpenStack

The following is a table of some basic implementation details in OpenStack’s Mitaka API projects. It isn’t intended to shame anyone; it is intended to highlight tool and framework fragmentation in OpenStack. Here’s the data; the article follows below.

Integrated Release APIs

Project | Deployment | Framework(s) | Python | Requirements | Config generation
ceilometer | PasteDeploy | pecan,wsme | py27,py34 | global-requirements | oslo-generate-config
cinder | PasteDeploy | routes | py27 | global-requirements | oslo-generate-config
glance | PasteDeploy | routes,wsme | py27,py34 | global-requirements | oslo-generate-config
heat | PasteDeploy | routes | py27,py34 | global-requirements | oslo-generate-config
ironic | - | pecan,wsme | py27,py34 | global-requirements | -
keystone | PasteDeploy | routes | py27,py34 | global-requirements | oslo-generate-config
neutron | PasteDeploy | pecan | py27,py34 | global-requirements | oslo-generate-config
nova | PasteDeploy | routes | py27,py34 | global-requirements | oslo-generate-config
sahara | - | flask | py27,py34 | global-requirements | oslo-generate-config
swift | - | ? | py27,py34 | - | -
trove | PasteDeploy | routes | py27,py34 | global-requirements | -

Supporting API Projects

Project | Deployment | Framework(s) | Python | Requirements | Config generation
aodh | PasteDeploy | pecan,wsme | py27,py34 | - | oslo-generate-config
barbican | PasteDeploy | pecan | py27,py34 | global-requirements | -
cloudkitty | PasteDeploy | pecan,wsme | py27,py34 | - | oslo-generate-config
congress | PasteDeploy | routes | py27,py34 | global-requirements | oslo-generate-config
cue | - | pecan,wsme | py27,py34 | global-requirements | oslo-generate-config
designate | PasteDeploy | flask,pecan | py27,py34 | global-requirements | -
freezer | - | falcon | py27,py34 | global-requirements | -
fuel | - | web.py | py27,py34 | - | -
kite | - | pecan,wsme | py27,py34 | - | -
magnum | - | pecan,wsme | py27,py34 | global-requirements | oslo-generate-config
manila | PasteDeploy | routes | py27,py34 | global-requirements | oslo-generate-config
mistral | - | pecan,wsme | py27,py34 | global-requirements | oslo-generate-config
monasca-api | - | falcon | py27 | - | -
monasca-log-api | - | falcon | py27 | - | -
murano | PasteDeploy | routes | py27 | global-requirements | oslo-generate-config
searchlight | PasteDeploy | routes,wsme | py27,py34 | global-requirements | oslo-generate-config
senlin | PasteDeploy | routes | py27,py34 | global-requirements | oslo-generate-config
solum | - | pecan,wsme | py27,py34 | - | oslo-generate-config
tacker | PasteDeploy | routes | py27,py34 | global-requirements | -
zaqar | - | falcon | py27,py34 | global-requirements | oslo-generate-config

Just scratching the surface

The table above only scratches the surface of OpenStack’s tool fragmentation, as it only focuses on frameworks and configuration in API projects. It does not address other inconsistencies, such as supported image types, preferred messaging layers, testing harnesses, oslo library adoption, or a variety of other items.

I’ve already spoken about Cognitive Load and OpenStack, how the variability in our projects can trick your brain and make you less effective. Furthermore, we’ve seen a lot of discussion on how we should Choose Boring Technology, as well as hallway discussions about how OpenStack should be more opinionated in its deployments. In fact, things have gotten so bad that the shade project was created – a library whose express intent is to hide all the differences in deployed OpenStack clouds.

Variability is bad for OpenStack

The lack of consistency across OpenStack is harming us, in very specific ways.

Contributing to multiple projects is hard

Nobody wants to climb multiple learning curves. Knowledge from one project transfers directly to another if the frameworks are similar enough, reducing this learning curve. In short, differences between projects create barriers to cross-project fertilization and contribution, and one way to chip away at those barriers is to keep the projects as similar as possible.

Supporting OpenStack’s dependencies is hard

As an open source community, we have a strong ethos of helping support any projects that we depend on. Yet how do we pick which upstream project to help fix? If all projects were consistent in their use of, say, WSME, there would be a far larger pool of talent invested in its success, and discussions like this one would not happen as frequently (Note: I’m not necessarily advocating WSME here – it merely provides a very useful example).

Maintaining feature consistency is hard

There are many features which our various projects should all support. Simple things, like consistent search query parameters, consistent API version negotiation, consistent custom HTTP Header names – basically anything cross-project or from the API working group.
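
To make that last point concrete, here is a hedged illustration of the fragmentation (the hostnames, ports, token, and microversion numbers below are placeholders, not taken from any real deployment): each project invented its own microversion header name.

# Version negotiation against two different services; note the
# project-specific header names.
curl -H "X-Auth-Token: $TOKEN" \
     -H "X-OpenStack-Nova-API-Version: 2.10" \
     http://cloud.example.com:8774/v2.1/servers

curl -H "X-Auth-Token: $TOKEN" \
     -H "X-OpenStack-Ironic-API-Version: 1.9" \
     http://cloud.example.com:6385/v1/nodes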

I have personal experience with this: in Mitaka I was able to land CORS support in most of OpenStack’s APIs. Of the 23 projects that I contributed to, most required that I learn project-specific approaches to governance, launchpad usage, testing harnesses, commit-message constraints, folder structure, and more. The entire experience taught me only one thing: trying to bring a common feature to OpenStack is something I never want to do again.

Deploying/Running OpenStack is hard

Without an opinionated OpenStack install (down to the supporting services), the chances that someone else has run into the same problem as you drop significantly. Features which rely on service abstraction (messaging, for instance) depend on layers of driver abstractions which add more points of failure, and often have to provide workarounds for features supported in one service but not in another (assuming they don’t give up on that feature entirely).

Portability between clouds is hard

To quote Monty Taylor: “The existence of shade is a bug”. It turns out that OpenStack’s implied portability promise is pretty much a lie, and you will spend a significant amount of effort figuring out how this OpenStack happens to differ from That OpenStack.

We need a consistent OpenStack

We have seen the consequences of inconsistency first hand. In some cases, a complete lack of developer mobility has resulted in echo-chambers, entire projects that are completely convinced that their approach is superior to others’. Other projects are effectively code deserts, unable to recruit contributors. Deployments are difficult, feature support is inconsistent, and rather than address the problem and simplify our projects, we’ve instead built layers of abstraction so we can avoid our most grievous mistakes.

We need a consistent OpenStack. If each project takes a slightly different approach to something, it makes subsequent management and support very difficult. To that end, all of our projects should use a consistent set of tools, frameworks, and libraries.

It is time to abandon server-rendered HTML

There are many, many benefits to building your business on APIs. Easier B2B interoperability, separation of presentation and business logic, and the naturally different velocities of API and UI development are all arguments cited for why you should present some form of API for your business. APIs are a good – if not necessary – part of the competitive landscape.

Historically, however, most web application development still begins with server-rendered HTML. As evidenced by frameworks such as Express, Django, or WordPress, the ability to render data to browser-ready HTML remains a core part of our development process. This is not surprising – in the early days of the internet, most of the computing power was available only on servers, and implementing complex user interface logic in a browser was both poorly supported and immature.

We have come a long way since then. Standards – be they RFCs, WhatWG documents, or W3C standards – have evolved to chip away at the need for HTML to be provided by the server. In 2014, the CORS specification broke down the same-origin policy barrier. The maturing of HTML5, and its adoption in modern browsers, has filled in key feature gaps and converted the web browser into an application platform. And, finally, the rise of browser-based application frameworks such as AngularJS and React provides a much-needed toolkit for manipulating HTML directly in the browser.

There remains one argument in favor of server-rendered HTML: Search Engine Optimization (SEO). Web crawlers are notoriously bad at dynamic web pages, and thus a long-standing “best practice” in my industry has been that all public content must be crawlable. There were some bridging technologies – such as the ?_escaped_fragment_ contract recommended by Google – however even those pushed the onus of generating static content onto the server.

And then, on October 14th, 2015, Google deprecated their AJAX crawling API. Quote:

“Today, as long as you’re not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.”

Let’s be honest: Google is the gorilla when it comes to web crawling, and it’s only a matter of time before other search engines follow suit (if they haven’t already). Web apps, be they simple or complex, can now be fully understood by a crawler, including any content provided by an API.

So… why are we rendering HTML on the server? Static content – be it static HTML or a complex AngularJS app – requires no application servers. In fact, you can host your entire application in Amazon’s S3, point a CDN at it, and it’s available globally. The alternative – paying for regionally placed UI servers – is, by comparison, unjustifiably expensive.
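
As a back-of-the-napkin sketch (the bucket name, build command, and output directory are placeholders, and the AWS CLI is assumed), “deploying” such an application is little more than a file copy:

# Build the static app, then push it to an S3 bucket fronted by a CDN.
npm run build
aws s3 sync ./dist s3://my-angular-app --delete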

It is time for this technical approach to be retired. Treat your web page like an API client, no different than your mobile or desktop app.



OpenStack JavaScript Mitaka Recap

There aren’t that many people working on the JavaScript ecosystem in OpenStack; however, even with that in mind, we’ve made quite a bit of progress in Mitaka.

  1. CORS support in all OpenStack services
    While there are still some extant bug fixes that need to land, those appear to be on schedule for the M3 release. We still need help though, so if you have some time to assist, the relevant launchpad bug is here.
  2. A common JavaScript style guide for OpenStack
    The project is named ‘eslint-config-openstack’, and I’d like to invite everyone to keep an eye on it. The initial flurry of rule proposals seems to have settled, and at this point I’d like to move to a once-per-cycle release cadence (a quick adoption sketch follows this list).
  3. Publishing to NPM
    OpenStack’s infrastructure is now capable of publishing packages to NPM.
  4. Mirror Foundations
    While we weren’t able to set up an NPM package mirror for OpenStack’s Infrastructure, we were able to migrate our build mirrors to AFS, while adjusting the folder structure to make more room for other language mirrors. A nice side effect: We removed 20 minutes from devstack gate runs by creating space for pypi wheel mirrors.
  5. Ironic-UI Horizon Plugin
    Version 1 of the ironic horizon plugin has been built, and while it doesn’t have the bells and whistles that you may be expecting, it’s nevertheless a major splash by Elizabeth Elwell as she enters the contributor community. Stay tuned for more updates on this.
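
As promised above, here is a quick adoption sketch for the style guide (assuming npm, and a project that already runs eslint; the .eslintrc shorthand follows eslint’s usual eslint-config-<name> convention):

# Pull in the shared rules and extend them from your local eslint config.
npm install --save-dev eslint eslint-config-openstack
echo '{ "extends": "openstack" }' > .eslintrc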

Did I forget something? Often it feels like there’s actually been more that I worked on, however my memory isn’t the greatest. Leave an update in the comments, and I’ll add it!


What does “Composable OpenStack” mean anyway?

A phrase that has been making the rounds recently is “Composable OpenStack”. It’s a very seductive phrase; it suggests a buffet-like experience with OpenStack that can satisfy any appetite, while at the very same time promising nothing. This makes it easy for sales people to use, as the definition can be modified based on the audience. Yet what does it actually mean? Well, it really depends on who you are, and what you want out of a cloud.

The Megacorp

If you are a Megacorp, the term is almost synonymous with “Hybrid Cloud”, a term used frequently in the marketing and sales material of my employer, HPE. In this case, it addresses organizations whose computing resources are spread across multiple cloud providers. A shipping system may live in a data center adjacent to UPS’s servers, the billing system is on an internal cloud for compliance and auditing, while individual supply-chain management components are distributed across regional data centers across the world (via AWS, or Rackspace, or whatever is most cost-effective in the region).

Composing clouds, in this case, means consolidating the management functions (scaling, alerts, security policy, etc), no matter where that component happens to be located.

The Cloud Provider

In this case, we’re talking about an organization (be it an internal IT department or a company like Rackspace) which is offering cloud services to their customers. Here, composability speaks more to the services offered via that cloud. For example:

  • You may have your own authentication system instead of Keystone
  • You may have your own Bare Metal provisioning system.
  • You may want to provide Big Data services to some customers, but not to others.

This means building the core services of an OpenStack cloud, and either selectively layering on additional services, or replacing those for which you have an internal implementation.

The Application Company

Many companies (Etsy comes to mind) have computing needs that are very well defined. They offer some service with a specific set of requirements, and their cloud is designed for this use case. For example, an organization which offers WordPress containers may only require Bare Metal (Ironic), Container Management (Magnum), Image storage (Glance), and Authorization services (Keystone). Everything else is superfluous, and increases both maintenance and resource requirements.

These companies must be able to choose only those components of OpenStack that are required to meet their business objectives. From a practical standpoint, this means that each service in OpenStack must be able to run independently. While many of the services in OpenStack’s integrated release are still tightly coupled, there has been a movement (championed by swift, ironic, and glance) to peel this apart.

The Application Developer

At the most atomic end of the spectrum, we have the needs of those who are actively developing an application and using cloud resources to do so, but whose technical requirements are in sufficient flux that a specifically designed cloud is probably not an option. They may be customers of Cloud Providers, however the important distinction is that these individuals are not operators. They think of their work in terms of functioning components, not hosts and networks.

Try to understand the mindset: Describe an application, and its components, rather than a cloud. This application may require a web server, a database, some kind of a file storage mechanism, and a way to expose all these things to the customer in a way that will scale. An operator would look at this and automatically decompose it into services such as Load balancing, IP assignment, DNS, and more. An application developer would rather let someone else worry about those details.

To provide a ‘Composable OpenStack’ to this customer means making application components as easy to provision as possible. Details such as MySQL read-only slaves, load balancer tuning, or subnet DNS are assumed to be handled for us.



Horizon Usage Survey

Over the past few weeks, I’ve run a survey that attempts to discover how people use OpenStack’s Horizon (aka openstack-dashboard), and I’d like to publish some preliminary results. I’ll be soliciting responses during the Vancouver Summit next week, so if you haven’t participated yet, you still have time to do so. The link is here: http://tinyurl.com/horizon-usage-survey.

Results

In two weeks, the survey gathered 36 responses. Due to the small sample size and the non-random selection of participants, this data should not be considered statistically representative (self-selected populations rarely are); however, it does provide us with a window into how Horizon is used in the real world.

OpenStack Deployment Statistics

The following are charts that address the scale of our users’ OpenStack deployments.

Deployment Size

This is an indication of how many bare-metal instances comprise our users’ clouds.

OpenStack Version

Which versions are currently deployed by our users. Note that some deploy multiple clouds.

Cloud Type

The type of cloud gives us an indication of what use cases our users encounter.

Horizon Deployment

These charts represent information about Horizon usage.

What is your UI?

Whether our users use Horizon, a custom-built UI, or both.

Install Tools

What tools our users use to install and maintain Horizon.

Host Operating System

The operating system on which Horizon is installed.

Horizon Customization

Information about the tools that are used to customize Horizon, which parts of Horizon are customized, and where Horizon falls short.

How did you customize?

There are many ways to customize Horizon: plugins, the customization module, creating your own Django application with Horizon as a dependency, or simply maintaining your own source fork.

What was changed?

Which parts of Horizon were customized: templates, behaviors, workflows, or more?

Maintained Source

In the case of a Django application, custom UI, or Horizon fork, our users must maintain their own source repository.

What is the one key feature missing from horizon?

This was a free-form question, so I’ve taken the liberty to group the responses into different categories.

Usability and simplified experience

These responses address simplicity and usability in horizon.

  • Customer Facing features that improve and simplify the experience.
  • Masking Networks that cannot be attached to an instance during the instance boot wizard.
  • Simple image panel that only shows latest images, instead of all images.
  • Improved access and usability of horizon’s metrics visualization.
  • User-friendly instance creation.

Hosted Cloud Features

These seem to be feature requests focused around hosting a cloud provider and selling it as a self-service cloud platform.

  • Self-service project management (Project Admin/Owner, etc).
  • Billing & Pricing integration.

New Features

These appear to be requests for entirely new features.

  • Approval Automation for Quotas, Tenants, and allocations.
  • Cloud Federation.
    (note: one respondent indicated that they fielded their own user interface because Horizon could not talk to other clouds)

Extensibility Improvements

  • Panel Extensions are difficult to manage.
  • No uniform way to import Horizon extensions; too many options.

Other

For the sake of completeness, I’ve added features here that are not easily categorized.

  • Invincibility
  • Too many to List

A New Approach for Flash Accessibility

My colleague (aka running buddy, aka friend, aka fashionista) Andrea Hill and I had a pow-wow a few months back in anticipation of her Accessibility presentation at Spring <br/>. Personally, I thought the conversation was a perfect example of how genius occurs at the intersections of knowledge domains: we were able to take her expertise in Accessibility standards and my expertise in Flash and ActionScript and come up with a back-of-the-napkin approach to Flash Accessibility that might just fix all the headaches caused by interfacing with Assistive Technology. Note that this solution does NOT absolve you from designing for visual impairments, hearing deficiencies, and so forth – this is a way of interfacing with screen readers.

State of the Union

Flash content at this point can only interface with a few select screen readers, and only on Windows (EDITED: see comments). This is because the Flash player uses Microsoft Active Accessibility, which is, of course, only supported in and via Microsoft technologies. As a result, Accessibility is one of those “Holy Grail” problems you run into over and over again, and that everyone slaps a big price tag on because nobody really knows much about it.

Solution Overview

Now for the solution. If you really think about the problem, making the Flash player itself accessible is completely redundant. Compiled .swf’s are embedded into the DOM of a web page which, assuming the browser is reasonably up-to-date, already accommodates a broad selection of screen readers. What is really missing is a way for the Flash piece to use the browser as a bridge to communicate with them.

Some interesting developments have actually occurred in this arena recently. The first is the release of a “headless” Flash player by Adobe, which Google and Yahoo now use for SEO purposes, and which could easily be licensed for other purposes. The second is the open-sourcing of the .swf file format spec, which could allow someone to write their own ‘accessible’ Flash player. Yet both of these solutions are very resource intensive, and take control away from the developer of the application in question.

Enter WAI-ARIA: a W3C standard for Accessible Rich Internet Applications (ARIA) that was designed specifically with Ajax-based RIAs in mind. To give you a quick overview, ARIA outlines a series of attributes by which an XHTML tag (such as div, body, or table) can notify a screen reader of its semantic role, as well as of any changes that may occur to or within it. Thus a div tag or unordered list can be given the role of ‘menu’ and an aggressive ‘politeness’ level, so that any time the menu changes, the screen reader is notified.

At this point our proposed solution should be pretty clear: rather than relying on the Flash player to connect to and manage the relationship with a screen reader, we instead piggyback on the browser’s capability and let it handle our communication for us. This can be easily accomplished via the ExternalInterface class, which not only allows us to interface with the JavaScript engine, but also allows us to write that same JavaScript to the DOM from Flash, so our Accessibility solution becomes completely internalized.


Fig 1: A flash RIA overlaying a DOM abstraction.

Implementation

To fully understand the implementation of this concept, it’s important to realize that the flash portion of our application completely loses its purpose as a visual display platform, and is relegated to the role of Model and Controller, with the HTML DOM acting as the View. In essence the .swf becomes a Meta application whose job it is to accurately project its current DisplayList hierarchy into the HTML, while accepting commands from that same environment.

This requires a one-to-one mapping between DisplayObjects and HTML elements, which thankfully is fairly easy. To illustrate, take a look at the following two simple code examples. The first is an HTML representation of the DOM rendered by a browser, while the second is an MXML representation of DisplayObjects rendered by the Flash AVM.


Fig 2: XHTML and MXML representations of a similar page interface.

Look similar, right? Even though they’re both abstractions, you can get a good sense of the similar object hierarchy and inheritance, and building a bridging framework becomes a question of determination rather than of digging into the depths of the Flash Player. The tools are there, the solution is there; all we need to do is build it.


Fig 3: User Interaction flow for different use cases.

Classifying Rich Internet Applications

I had an excellent discussion with my coworker Susan today about refining certain internal processes, and one of the tangents of the conversation went off on what the definition of a Rich Internet Application actually is. As we know, anything from a banner ad to a product configurator can be considered an RIA, and the only common element seems to be that an RIA retains its functionality within the context of what the user is interacting with. In other words, if you click on a button in an RIA, the resulting action does not significantly change the page or window the user is interacting with; clicking through to a new page loses context, while using an animated accordion to display different content does not.

The similarities, though, end there. Implementation varies, technology varies, scope and location and functionality vary, and all in all it ends up being a pretty convoluted mess to describe. At best you can group them by complexity, and after a brief exercise of that nature we realized that a new breed of networked application was emerging. Well, alright, perhaps not emerging, but instead gaining momentum and acceptance in the mainstream. Here’s the scale; see if you agree with our reasoning.

Level 1: The Widget

At this level of RIA you are attempting to display information in the context of a particular page, however you don’t care about anything but the most basic user interaction. These could be things like drop-down menus, product detail pop-ups, rotation views, buttons that reveal and/or expand text content (like reviews), and so forth. They are almost always implemented in JavaScript, because using Flash or another plug-in technology would be overkill.

Level 2: The Functional/Interactive Widget

This level of RIA describes widgets that allow a user to complete a particular functional task. No longer content with simply displaying information, we’ve now added functionality or an experience that responds to user input. This could be as simple as a DHTML login form or as complex as a Flash-based page takeover, but it necessarily remains restricted to a specific, easily definable task. "Log In", "Rate This Product", "Check Convention Schedule" and so forth are good examples, as they add a richer experience that remains in context with the page itself.

Level 3: The Rich Internet Application (RIA)

The next level of complexity takes the tasks mentioned above and strings them together into a flow, or objective, thus defining an actual application. While previously you would have perhaps a few simple form fields to fill out, an actual RIA causes the context of the page to change dramatically via user input. User interaction is no longer restricted to a single action, but instead is intended to enable an activity, such as "Tracking your time", "Editing a Photo", or "Managing a Color Palette". This is where the bread and butter of RIAs lies, as well as the holy grail of Web 2.0: a fully interactive and functional application contained entirely within one browser page.

Level 4: The Rich Networked Application (RNA)

The Rich Networked Application (RNA – I’m trying to coin a term here, help me out) ceases to be bound by the browser, and instead becomes an experience that bridges, and is uniform across, all digital touch points. The service is available not only from a browser, but may also be accessed from a desktop, a mobile device, a vehicle dashboard, a kiosk, a gaming console, or any other networked or partially networked device you can imagine. The RNA reaches out to many delivery channels, and while it may provide a different experience for each, it nevertheless remains connected in context across them all. Excellent examples of this are Twitter (and all its clients), Google Maps (available on mobile, the web, etc.), Kuler (integrated into the desktop and the entire Adobe Suite), as well as upcoming games like Spore (share creatures across platforms). Implementation… well, let’s be honest, it’s a nightmare if you go into it unarmed. You have to support many different platforms, frameworks, systems and limitations, yet even so we’re starting to see toolsets emerge that address them all (most notably Adobe’s Flex & AIR, Microsoft’s DLR via WMF & Silverlight, and JavaScript libraries like SproutCore, MooTools and Prototype).

Did that make sense to you? It does to me, and I’m really excited to see how what we have today is going to start bridging the Device Divide.

Designers & Developers: Obsolete Titles in a Web-Made World

An interview question I have been frequently asked in the past is: “On the spectrum of Designer < – > Developer, where would you put yourself?”

I’ve always been bothered by that question, because not only do I have a strong background in the Fine Arts, but I also have 8 years of solid experience as a developer. The reason I don’t like it is that those of us who work on the web apply both our creative and logical skills on a daily basis, and in many cases it is our creative streak that makes us so good at what we do. Problem-solving skills and creative expression are absolutely inseparable: we learned this from Einstein, and Galileo, and Leonardo da Vinci, and Thales, and Newton, and a host of other individuals who nowadays would be called the greatest minds of their time.

And yet, every single time we present ourselves professionally, we are categorized into either a logical or creative bucket: Designer or Developer. Admittedly, there are advantages to this, since one can easily quantify compensation, career advancement and project resources based on defined tracks. Furthermore, most (if not all) web production processes work on the basis of phase signoff to protect both the client and the agent, and design necessarily comes before development. Rigid classification of skill and expertise is a boon to management and customer expectations, though it is a poor representation of reality.

Truly great projects involve all hands from kickoff, and while production can perhaps not truly begin until the ideation and proposal comps have been signed off, it is only by virtue of continuous collaborative progress that the possibilities begin to flourish and grow. In this kind of environment, the terms “Designer” and “Developer” cease to have any meaning; contribution becomes equal in value and blended along the lines of common expertise.

An example: my colleague Jeff Breckenridge (who’s a fantastic but under-appreciated designer *hack*cough*shameless plug*) has a remarkable understanding of graphic composition, but that does not mean he doesn’t understand the opportunities of object-oriented design. I know how important his expertise is, and with my background in production graphics I can bring my own skills to the table and meet him on mutual common ground. The end result is well composed, well designed, and functionally robust.


Skills: Jeff Breckenridge & Michael Krotscheck

The work we did on Practical Desktop was great not because he was the “Designer” and I was the “Developer”, but because we both mingled our skills across our particular domains of expertise. In some cases, I relied on his understanding of functional implementation, while in return he provided me with comps I could easily derive interface states from. We both brought our skills to the table, and in the middle they blended to produce something great.

I have stated before, and will state again, that true genius occurs at the intersections of knowledge domains. The only thing we are lacking is a way to describe our skills that accurately captures these domains, and that is useful in managing projects, resources, and employees to achieve the greatest level of creative impact for each implementation. The following Flash piece approaches a method by which these skills may be described, and it even assists in managing skills for a particular project; however, it may not yet be refined enough to properly define career progression and compensation.

Skills: Do It Yourself!

In the end, the point I’m really trying to make is this: The terms “Designer” and “Developer” have no meaning anymore. What matters from this point forward is the blending of skills, and we need a new way of describing these skills so they may be properly blended. What remains is the management of individuals categorized in this way. How exactly does one form a team of professionals when a particular set of skills is needed? And how in the world do you compensate them?