JS-Maven Roundup and Roadmap

Those of you who’ve been paying attention to the original project on GitHub have probably noticed a lot of changes there recently without any corresponding updates here. I’d like to take a moment to summarize what we’ve been up to, and keep everyone in the loop on where we’re going.

Inevitably, as you get into a project, you come to realize the full scope of what you’re trying to do, and as you do, the finish line moves farther and farther towards the horizon. Most of the recent changes were made in response to that realization.

js-maven-plugin is now js-maven

I renamed and moved the project because it was rapidly growing beyond the scope of a single maven plugin to include core libraries, tooling, archetypes and reports. Rather than try to hammer all these into one single java project I felt that switching over to a multi-module layout would give me the room I needed to grow. So far this seems to have been a good decision, as I now have the sense that I can (mentally) stretch my arms a little.
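The new layout is a standard Maven multi-module build. As a sketch (the module names here are illustrative, not the final list), the parent pom now aggregates something like:

```xml
<modules>
  <module>js-maven-plugin</module>
  <module>js-maven-archetypes</module>
  <module>js-maven-reports</module>
</modules>
```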

No more specific version commitments.

I’m making a concession to my own prioritization style here, which is very much a “Hey, this sounds like an interesting problem to solve” method. Needless to say, this doesn’t really work well with preplanned version releases, as there are some projects that are more fun and some that are less fun, and it resulted in me skipping around a lot. Instead, until I’ve got a few more contributors on the project, I’m simply going to outline a feature set for the various point versions and go after features as they strike me.

I understand that for an enterprise this is not the optimal approach, so I will make this suggestion: if a particular feature is business critical enough for you to write your own solution, why not contribute it while you’re at it?


Documentation

I’ve finally gotten around to setting up all the maven:site documentation for the project, and it’s now hosted on the project’s GitHub pages at http://krotscheck.github.com/js-maven/. Similarly, the README.md on GitHub has been reduced to a quick start (since we all prefer copy/paste anyway), as I didn’t feel much more was necessary.

Version Dependencies

We now support Maven 3 and Java 1.7 only. Annoying for some of you, yes; however, in keeping with my philosophy of using the latest-and-greatest whenever possible, we’re simply going to draw a line in the sand. That’s not to say we won’t ever support other versions, however those ports will have to come from the community (such as it is).
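For reference, this kind of line in the sand can be drawn in the pom itself. A sketch (the exact enforcement mechanism is up to you):

```xml
<prerequisites>
  <maven>3.0</maven>
</prerequisites>

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>
  </plugins>
</build>
```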


Licensing

I’m still debating licenses. Right now the project is under the MIT license, though I might move it over to the New BSD license. This project will never be licensed under the GPL or any similar viral/copyleft license, as those tend to greatly inhibit adoption.

Roadmap Commitments for v0.2.0

If you’ll take a look at the roadmap, you’ll see the list of features which I want to have ready for the next minor version release. Notable additions are the reporting plugins for JSLint and JSDoc, as it didn’t make much sense to me to include those tools without also making their output accessible. Here’s where we stand right now, with some details:

Plugin Documentation [branch- jsdoc-improvments]

Now that I’ve got the maven site reports running, they decided to get all unruly and hurt my feelings by insulting my coding style. The inevitable cleanup phase is in progress, which also includes a bit of judicious refactoring as the actual usability of the plugin gets a little focus.


As this is my first large maven plugin project, I’m still learning my way around. Refactoring and cleanup are necessary – for example, sourceEncoding should be handled via the global project parameter and not individually per Mojo.
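As a sketch of what that refactoring might look like (the class name here is illustrative; the @parameter default-value mechanism is standard Maven), the encoding would be declared once in a shared base Mojo:

```java
// Illustrative base class: every Mojo inherits a single sourceEncoding
// field bound to the standard project-wide Maven property.
public class BaseJSMojo {

	/**
	 * The character encoding used when reading source files.
	 *
	 * @parameter default-value="${project.build.sourceEncoding}"
	 */
	protected String sourceEncoding = "UTF-8";

	public String getSourceEncoding() {
		return sourceEncoding;
	}
}
```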

Maven Reports: JSLint and JSDoc

These are two new features I realized would be necessary as I got the site up and running. I’m not at all familiar with how report plugins like this get wired into maven as a whole, so these might go quickly or they might not.

Common JS Libraries in a repository [branch- jslib-redeploy-plugin]

Having a javascript maven framework isn’t much use if you don’t have any libraries that you can depend on, so I’m writing a plugin that’ll take a URI-styled token string and redeploy any artifacts found into a maven repository. This is proving tricky, as these libraries’ own version dependencies (jQuery UI -> jQuery, for instance) cannot be resolved easily; however, it’s coming along nicely. Once this is complete I’ll be able to provide a few more archetypes for other projects. jQuery, JavascriptMVC, you name it.
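As a rough sketch of the kind of parsing involved (the token format and class name here are hypothetical, not the plugin’s actual syntax), a URI-styled token might break down like this:

```java
// Hypothetical token format: groupId:artifactId:version:url
public class LibraryToken {

	public final String groupId;
	public final String artifactId;
	public final String version;
	public final String url;

	public LibraryToken(String token) {
		// The URL portion contains colons itself, so only split on the
		// first three delimiters.
		String[] parts = token.split(":", 4);
		if (parts.length != 4) {
			throw new IllegalArgumentException("Malformed token: " + token);
		}
		this.groupId = parts[0];
		this.artifactId = parts[1];
		this.version = parts[2];
		this.url = parts[3];
	}
}
```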

Unit Testing

I’m still evaluating the best javascript unit testing framework out there. There are many answers to the question of how to test a javascript library in different javascript engines and/or browsers, but nobody seems to have figured out the holy grail yet. This feature might get punted into its own release if it turns out that the scope is too big.

Building a JSDoc Runner for Maven

The next step in this plugin is to ensure that we can generate documentation. While it may seem like a stretch to do this so early in the process, it forces us to build a few other pieces that we’ll need later.

Our documentation generator will be JSDoc, chosen because it’s fairly mature, has a strong following, and its markup is broadly supported across IDEs. In fact, JSDoc seems to be one of the few broadly adopted standards out there, though speculating on why is beyond the scope of this post. What matters is that it’s a documentation generator written in Javascript that is run via Mozilla’s Rhino runtime. To properly run it we’ll have to resolve the dependency, unpack it into a temporary directory, and then spin up Rhino to execute the code.

I encountered a problem in the way JSDoc 2.4.0 interacts with Rhino 1.7RC3: a bug triggered by two inline function declarations without semicolons in TokenParser.js. While one might argue that this is a bug in Rhino, declaring functions inside of other functions is considered a big no-no in the javascript world. As a result, I’ve had to generate and deploy my own patched version of JSDoc.

Resolving JSDoc

All of a given mojo’s plugin artifacts are readily available as a parameter, so locating the proper version of JSDoc is fairly simple.

Full source available on github

/**
 * @parameter default-value="${plugin.artifacts}"
 */
private List<Artifact> pluginArtifacts;

public Artifact locate(String groupId, String artifactId) {

	getLog().info("Resolving " + groupId + ":" + artifactId);

	for (Artifact artifact : pluginArtifacts) {
		if (artifact.getGroupId().equals(groupId)
				&& artifact.getArtifactId().equals(artifactId)) {
			return artifact;
		}
	}

	getLog().error(String.format("Failed to locate %s:%s", groupId, artifactId));
	getLog().error("This probably means that you didn't specify it as a dependency in the pom.xml file.");

	throw new RuntimeException(String.format("Failed to locate %s:%s", groupId, artifactId));
}

Unpacking JSDoc

As all jar files are zip files, we can simply write a zip extraction routine to unpack it.

Full source available on github

ZipFile zipFile = new ZipFile(artifact.getFile(), ZipFile.OPEN_READ);
Enumeration<? extends ZipEntry> zipFileEntries = zipFile.entries();

while (zipFileEntries.hasMoreElements()) {
	ZipEntry entry = zipFileEntries.nextElement();

	// Construct the new name.
	String currentName = entry.getName();
	File destFile = new File(destinationDirectory, currentName);

	if (entry.isDirectory()) {
		// If we are a directory, create it.
		destFile.mkdirs();
	} else {
		// If we're a file, unzip it.
		destFile.getParentFile().mkdirs();

		BufferedInputStream is = new BufferedInputStream(zipFile.getInputStream(entry));
		int currentByte;

		// Establish a buffer for writing the file.
		byte[] data = new byte[BUFFER];

		// Write the current file to disk.
		FileOutputStream fos = new FileOutputStream(destFile);
		BufferedOutputStream dest = new BufferedOutputStream(fos, BUFFER);

		// Read and write until the last byte is encountered.
		while ((currentByte = is.read(data, 0, BUFFER)) != -1) {
			dest.write(data, 0, currentByte);
		}

		dest.close();
		is.close();
	}
}
zipFile.close();

Configure JSDoc

At this point we’re back in the drudgery of maven configuration. Since we’d like to keep our configurations relatively separate (in a <jsdoc> sub tag in the mojo configuration), I’ve created a separate JSDocOptions class that can accept all of our options. I’m not going to bother copying it all here as there’s quite a bit of code, but you can see the full source code here.
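To give a flavor of the idea (the field names and JSDoc flags below are illustrative, not the actual option set), such a class boils the pom configuration down to a command-line argument builder:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Illustrative options holder: each pom setting becomes a field, and
// buildArguments() turns the lot into JSDoc command-line arguments.
public class JSDocOptions {

	private String template = "jsdoc";
	private boolean includePrivate = false;

	public void setTemplate(String template) {
		this.template = template;
	}

	public void setIncludePrivate(boolean includePrivate) {
		this.includePrivate = includePrivate;
	}

	// Build the argument array for the runner: the entry script, flags,
	// and the input directory, in that order.
	public String[] buildArguments(File jsDocRoot, File inputDirectory,
			File outputDirectory) {
		List<String> args = new ArrayList<String>();
		args.add(new File(jsDocRoot, "app/run.js").getPath());
		args.add("-t=" + new File(jsDocRoot, "templates/" + template).getPath());
		args.add("-d=" + outputDirectory.getPath());
		if (includePrivate) {
			args.add("-p");
		}
		args.add(inputDirectory.getPath());
		return args.toArray(new String[args.size()]);
	}
}
```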

Running JSDoc via Rhino

Lastly, we have to execute JSDoc, which is done via Rhino. This, again, is fairly straightforward.

Full source available on github

public void run(File inputDirectory, File outputDirectory, JSDocOptions options)
		throws JSDocRunnerException {

	// Build the string arguments.
	String[] args = options.buildArguments(jsDocRoot, inputDirectory, outputDirectory);

	// Run the javascript engine.
	getLog().info("Building JSDoc...");

	// Create our logging streams.
	MojoLoggerOutputStream errorStream = new MojoLoggerOutputStream(LogLevel.ERR);
	Main.setErr(new PrintStream(errorStream));

	MojoLoggerOutputStream outputStream = new MojoLoggerOutputStream(LogLevel.WARN);
	Main.setOut(new PrintStream(outputStream));

	// Execute JSDoc via the Rhino shell.
	Main.exec(args);
}



Building a Javascript Dependency Resolver in Maven

Dependency resolution, to most javascript developers, means adding another <script> include in the header of their html page, and in most cases this is fine. In fact, for 95% of javascript applications you don’t even have to host the file yourself: just link to jQuery on Google Libraries and you won’t need to worry.

In the enterprise, however, different concerns start to take over. The project you’re working on might be dependent on an internal library built by another team, which is in turn dependent on yet another library, and so on and so forth. As those libraries are incrementally updated, managing dependencies becomes a headache. Furthermore, the true power of javascript compilers like Google Closure is realized when all of the code is available at compile time; the Advanced optimization mode effectively requires it, as whole-program analysis is what teaches Closure which code isn’t used and may thus be discarded.

Maven already has a very mature dependency resolution mechanism built into it, which is backed by a large public repository of common libraries. Javascript is still underrepresented in this arena, true, yet hopefully this plugin will eventually remedy that situation.

Choosing Scope

The maven dependency system requires a scope for each individual dependency to determine when it is needed in the build cycle. This is quite handy: you don’t, for instance, want the JUnit libraries to be part of your final project artifact, but you do want them available during the testing phase.

Similarly, Google’s Closure compiler can process an external file (dependency) in two ways: directly, where it is compiled into the final delivery source, and externally, where the file is assumed to be self-contained and included in the final product via its own script tag. These two types of dependencies map rather nicely to Maven’s “compile” and “runtime” scopes. Compiling a javascript file into a library causes complexity, however: if two libraries have the same compiled-in dependency, with different versions, perhaps even with different optimization levels, I run the risk of version and feature conflicts.

As a result, all of our dependencies will be treated as externals.
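In pom terms, the mapping looks something like this (the coordinates here are hypothetical); given the decision above, the runtime form is the one we’ll actually use:

```xml
<dependencies>
  <!-- Compiled directly into the final delivery source. -->
  <dependency>
    <groupId>com.example.js</groupId>
    <artifactId>internal-widgets</artifactId>
    <version>1.0.0</version>
    <type>js</type>
    <scope>compile</scope>
  </dependency>

  <!-- Treated as an external: assumed self-contained, included via its own script tag. -->
  <dependency>
    <groupId>com.example.js</groupId>
    <artifactId>jquery</artifactId>
    <version>1.7.1</version>
    <type>js</type>
    <scope>runtime</scope>
  </dependency>
</dependencies>
```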

Resolving a dependency graph

The first step to building dependency resolution into my plugin requires constructing the dependency graph. Maven makes this (relatively) easy.

Full source available on github: DependencyResolver.java

private DependencyNode getDependencyGraph(Artifact rootArtifact)
        throws DependencyResolverException {
        try {
                Dependency rootDependency = new Dependency(rootArtifact, "compile");
                CollectRequest collectRequest = new CollectRequest(rootDependency, projectRepos);

                // Collect all the nodes.
                CollectResult collectResult = repoSystem.collectDependencies(repoSession, collectRequest);
                return collectResult.getRoot();
        } catch (DependencyCollectionException e) {
                throw new DependencyResolverException("Cannot build project dependency graph", e);
        }
}

Filtering dependencies by scope and extension

Now that I have a graph of dependency relationships, I can visit each node and collect only those nodes that match our required scopes. Since I’ll also use this mechanism for resolving jsunit, envjs and others, it behooves me to abstract it.

Full source available on github: DependencyResolver.java

public List<Artifact> resolve(String scope, String extension)
                throws DependencyResolverException {

        getLog().debug("Resolving dependencies: [scope:" + scope + "] [extension:" + extension + "]");

        // Construct the dependency tree of our project.
        Artifact rootArtifact = new DefaultArtifact(project.getGroupId(),
                        project.getArtifactId(), project.getPackaging(),
                        project.getVersion());
        DependencyNode rootNode = this.getDependencyGraph(rootArtifact);

        // Gather dependency nodes in post order.
        PostorderNodeListGenerator postOrderVisitor = new PostorderNodeListGenerator();

        // Filter out the root node.
        DependencyVisitor visitor = new FilteringDependencyVisitor(
                        postOrderVisitor, new PatternExclusionsDependencyFilter(
                                        rootArtifact.toString()));

        // If a scope is configured, add a scope filter.
        if (null != scope) {
                visitor = new FilteringDependencyVisitor(visitor,
                                new ScopeDependencyFilter(scope));
        }

        // If a type is configured, add a pattern inclusion filter.
        if (null != extension) {
                String patternFilter = "*:*:" + extension + ":*";
                visitor = new FilteringDependencyVisitor(visitor,
                                new PatternInclusionsDependencyFilter(patternFilter));
        }

        // Run the collection.
        rootNode.accept(visitor);
        return postOrderVisitor.getArtifacts(true);
}

Resolving dependencies

As it turns out, maven dependencies may or may not be resolved. In practice this means that the system won’t bother to download a file until it’s actually needed, so we have to trigger that download manually. The new Aether dependency API in Maven 3 also makes artifacts immutable, so we have to collect the resolved copy of each.

Full source available on github: DependencyResolver.java

        List<Artifact> unresolvedArtifacts = postOrderVisitor.getArtifacts(true);
        List<Artifact> resolvedArtifacts = new ArrayList<Artifact>();

        for (Artifact artifact : unresolvedArtifacts) {
                ArtifactRequest artifactRequest = new ArtifactRequest(artifact, projectRepos, null);
                ArtifactResult artifactResult = repoSystem.resolveArtifact(repoSession, artifactRequest);

                getLog().debug(" + " + artifactResult.getArtifact().toString());
                resolvedArtifacts.add(artifactResult.getArtifact());
        }
        return resolvedArtifacts;

Including dependencies in our compiler

With a list of resolved dependency artifacts in hand, I can now include them in the compile method from the last post.

Full source available on github: JSLibraryCompilerMojo.java

        // Generate any externals that we need (using the DependencyResolver
        // built above; "js" artifacts in runtime scope are treated as externs).
        List<JSSourceFile> externs = new ArrayList<JSSourceFile>();
        getLog().info("Resolving External Includes...");
        for (Artifact artifact : dependencyResolver.resolve("runtime", "js")) {
                externs.add(JSSourceFile.fromFile(artifact.getFile()));
        }

        // Generate the list of source files to include.
        List<JSSourceFile> source = new ArrayList<JSSourceFile>();
        getLog().info("Resolving Compile Includes...");
        source.addAll(generateJavascriptSourceList());

And there we are! We are now resolving dependencies from other javascript libraries that live in your, or someone else’s, maven repository.

Building a Javascript Compiler for Maven

This post is one in a series where I slowly assemble a Maven workflow for large javascript projects. If you came here via google, I recommend starting at the beginning: Designing a Javascript Maven Plugin. Alternatively, you can just go directly to the js-maven project on github.

Since there’s no sense in rebuilding something that already exists, I decided to start with Google’s excellent Closure compiler as a baseline and simply write a maven wrapper around it. There’s nothing inherently wrong with the YUI Compressor; it’s quite a nice compressor. In the interest of supporting more use cases, however, I feel that a compiler with more optimization options is a better choice, especially since all of those options are individually configurable.

Before writing the plugin, however, there were a few things I had to take into consideration:

  1. Developers should be able to debug while developing. This means that the compiled code needs to be human readable.
  2. Optimization may impact how the code functions, so the debuggable code should already be optimized.
  3. Once packaged, the code should be minified.

In practice, this means two compiler passes over the source code. The first is to perform code optimizations, yet provide a human readable source file for debugging (in jetty, for instance). The second is to minify, but not optimize, shortly before the asset is packaged. This suggests the following steps for my Javascript Library lifecycle.
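In pom terms, the two passes would be bound to separate lifecycle phases. A sketch (the plugin coordinates and goal names here are illustrative):

```xml
<plugin>
  <groupId>com.example.maven</groupId>
  <artifactId>js-maven-plugin</artifactId>
  <executions>
    <!-- First pass: optimize, but keep the output human readable. -->
    <execution>
      <id>js-compile</id>
      <phase>compile</phase>
      <goals><goal>compile</goal></goals>
    </execution>
    <!-- Second pass: minify shortly before packaging. -->
    <execution>
      <id>js-minify</id>
      <phase>prepare-package</phase>
      <goals><goal>minify</goal></goals>
    </execution>
  </executions>
</plugin>
```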


Given that with Google Closure the two steps are practically identical, most of the functionality will be abstracted. This means I’ll effectively be running the compiler twice, which isn’t a problem unless a browser decides to throw errors on whitespace usage.

The Compiler Plugin

There are three steps to compiling using Closure. First we need to gather all of our source files, then we need to configure the compiler, and then we’ll run the compiler and output our source. There is a fourth step- gathering dependencies, which I’ll cover in a subsequent post. None of these steps are especially difficult. Here’s step one, finding all of our source code.

Full source available on github: JSLibraryCompilerMojo.java

/**
 * Location of the source directory.
 *
 * @parameter expression="${project.build.sourceDirectory}"
 * @required
 */
protected File sourceDirectory;

/**
 * This method scans the configured source directory of the Mojo and returns all
 * javascript documents, filtering by extension.
 */
protected List<JSSourceFile> generateJavascriptSourceList() {
    ArrayList<JSSourceFile> sourceList = new ArrayList<JSSourceFile>();

    String[] extensions = { "js" };
    getLog().debug("Discovering Source Files");

    Collection<File> sourceFiles = FileUtils.listFiles(sourceDirectory, extensions, true);

    for (File file : sourceFiles) {
        sourceList.add(JSSourceFile.fromFile(file));
        getLog().debug(" + " + file.getAbsolutePath());
    }

    return sourceList;
}

Here’s step two, writing our compiler configuration. Note that I consciously made the choice to reduce the complexity of what my users can alter, primarily for the sake of illustration.

Designing a Javascript Maven Plugin

As mentioned in my previous post, I believe that the tooling for enterprise grade javascript applications is simply not there yet. What is needed is a set of tools, preferably open source, that are framework agnostic and that fit into established development practice. In this and subsequent posts, I will walk you through the process of building just this toolset, using Maven as the framework to glue it all together.

Overall Philosophy

First and foremost, the toolset must be independent of the code that is written. We simply cannot have a compiler that will only work for jQuery, or a testing harness that will only work if the test artifacts are composited via Rake.

Furthermore, I would like to adhere to Java conventions as much as possible, by which I mean JSDoc, JUnit-style unit tests and so forth. Java is syntactically very close to Javascript and Java’s strong prevalence in the enterprise space will reduce engineer friction.

The project artifacts will be packaged in as close to a native file type as possible. For something like a Javascript library, a simple minified .js file will do. For a full web application with images, media, html and more, this gets trickier, as the nature of the server will favor one packaging type over another: Jetty/Tomcat will prefer a .jar file, Apache will serve up an uncompressed archive, and so forth.

When possible, we’re going to take advantage of the latest and greatest: CSS compilers, for instance.

Why choose maven?

Maven is an obvious choice for various reasons. Firstly, it enjoys broad IDE integration. Secondly, it has an extremely powerful plugin framework that allows a user to use entire groups of plugins or focus on only the ones they really need. Thirdly, its dependency resolution mechanism is very powerful and very portable, which makes your builds extremely easy to bring to other platforms. Fourthly, it is open source and well supported by the community. Lastly, it enjoys broad support among continuous integration frameworks. All of these together make it a no-brainer to use as the core engine for our javascript toolset.

What workflow steps are needed?

Since maven defines a specific artifact lifecycle, we should start there and decide what we need our plugin to do. After starting to write this out, however, I realized that trying to declare it all up front is a significant undertaking, and that it should be split up into broader concepts:

  1. Compilation
  2. Optimization
  3. Unit Testing
  4. Functional Testing
  5. Packaging and Publishing
  6. Dependency Resolution
  7. Debug Runtime
  8. Headless Runtime
  9. PMD/Linting
  10. Documentation
  11. Reporting

Should we create an Archetype?

The short answer: yes. The long answer: archetypes are absolutely critical to drive adoption, examples and acceptance, and are a very useful speed-of-workflow tool. Given the many different potential framework projects out there, however, we’re going to have to create many of these: one for jQuery, one for MooTools, one for Backbone… you get the idea. So how about we worry about that once we get more pieces of the compiler in place.

Where will the source live?

I’m glad you asked! I’ll be hosting the project on GitHub. Please feel free to contribute!

Where will the maven repository be?

This is as yet undecided. For now I recommend you clone the project and install it locally. Once there’s enough functionality on the project to warrant a full point release, we’ll revisit this.