Jeremy Kahn's Dev Blog

There's always a method to my madness.

Being Open Sourcey

I absolutely love open source. It’s something that is fascinating on many levels, and it’s also a ton of fun. Depending on who you talk to, “open source” can mean a great many things. At the core, open source is public access to computer source code. Interestingly, this access has created religion-esque alliances and discords within communities for decades. For my part, I am strongly on the side of open. I initially became interested because it’s fun to share code. Over time, I further developed my beliefs and examined my motivations. I have come to believe that open source is what’s best for the world. I think it’s important to be “open sourcey.”

What is “being open sourcey?”

Since I’m inventing a term, I should probably define it. Being open sourcey means being a developer who writes open source code while also working with the developer community to improve existing code and solve problems.

Writing code is just one fairly small component of this. Software development is heavily maintenance-oriented, and typing out code is actually a pretty trivial task. The bigger challenge is iterating and improving code over time to meet new requirements while also fulfilling the original set of requirements. This becomes exponentially more challenging over time, and it quickly becomes too much work for one person.

World changing code is never written by one person. World changing ideas may come from one person, but actual implementations are always the result of an organized and collaborative effort shared between many smart people. Linus Torvalds did not write the entirety of the Linux kernel that is in use today, and John Resig did not write every line of jQuery. It takes one person to come up with an idea, but it takes a community to execute on it.

Why be open sourcey?

There are many benefits in choosing to make code openly available. I recently came across an audio interview of Linus Torvalds where he succinctly explains why open source makes sense:

I actually see open source as science. If you don’t spread your ideas in the open, if you don’t allow other people to look at how your ideas work and verify that they work, you’re not doing science. You’re doing witchcraft. And traditional software development models where you keep things inside a company and hide what you’re doing, they’re basically witchcraft.

When your code is available to anyone that cares to read it, people can make suggestions about how to improve the code and potentially come up with something much better than the original. Granted, such suggestions may be completely wrong, so some good judgment is needed to vet people’s suggestions. However, by closing code off from the world, you eliminate any possibility that it will be passively improved.

It is highly unlikely that you will write code that is so astonishingly brilliant that no one else can write equivalently brilliant code. Programming is simply not that magical. If somebody wants to make the same time tracking app as you, but you don’t share your code, they can simply write their own. Hoarding and hiding your code is not going to effectively thwart your competition, it just wastes time. We can accomplish more as a society by sharing and iterating on our code than we can by rewriting it.

How to be open sourcey

Being open sourcey is pretty simple, really. It boils down to writing and maintaining code and empowering the open source community.

Open source can’t exist without code to share, so that’s the first step. The less obvious task is writing code that is sharable. Write code for others to read. Refine and groom it to be as approachable as possible. If your code is sloppy and hard to follow, you greatly diminish the chance that someone will just come along and improve it. Drive-by patches only occur when another developer is interested in working with your code, and shoddy code will deter a lot of people.

An important component of this is up-to-date automated tests and documentation. The lack of either indicates an unmotivated programmer, and people generally don’t want to work with people who are unmotivated. In addition to being a sign of respect for fellow coders, tests and documentation define a clear description of the code’s requirements. Testing is especially important to someone new to the project. When modifying unfamiliar code of any degree of complexity, there is a high probability that something will break. Testing is an incredibly effective tool for telling a contributor, early on, that they broke something.

The other part of being open sourcey is facilitating progress within the community. There’s a lot that goes into this, but at a high level it’s just a matter of encouraging contributions and rewarding participation. Writing a patch is not a trivial task, and people need a reason to take time out of their day to do it.

When someone files a bug or submits a patch, thank them. Praise them for being awesome and taking an interest in your work. Do what you can to reward them for their efforts. An anecdote: In my Shifty and Rekapi projects, I had no need for, nor interest in, building in AMD compatibility. For both projects, someone came along and submitted a Pull Request that added such functionality (Miller Medeiros and Franck Lecollinet, respectively). Adding this feature is not a small task. While I personally didn’t need AMD and was generally indifferent to the benefit that it added to either project, I thanked the authors and accepted their Pull Requests. I wanted to validate their hard work. It was a minor risk on my part, because adding any functionality brings some amount of technical baggage, but rewarding and encouraging my contributors was a worthy tradeoff. Both Miller and Franck went on to contribute significant features to my projects.

The point is, strangers have no reason to help you, especially on the internet. On the off chance that somebody does go out of their way to do something as considerate as write code for you - for free - reward them. All it takes is a “thank you” and a merge. Obviously you have to be judicious when accepting patches, but do whatever it takes to validate your contributors’ efforts.

It’s not about you, it’s about the project

In open source, you are not more important than your contribution. It is absolutely necessary to put progress before egos. If you submit a patch and it gets accepted, it’s because your code was awesome - not you. Likewise, if someone submits a patch that replaces large swaths of your code with better code of their own, you need to accept it. You’d be crazy not to. The only thing that matters at the end of the day is that the code was improved. Users will never know how much code was yours and how much was another person’s.

Take a look at this. It’s a graph of contributor activity on Backbone. Look at how small Jeremy Ashkenas’s individual contribution gets over time. The point of this isn’t to downplay Jeremy’s value in Backbone’s development. On the contrary, this shows that Jeremy knows a good patch when he sees it and is a good enough leader to accept it. Although he may argue with people on what should or should not be merged into the repository, at the end of the day he wants what is good for Backbone more than he wants what is good for his ego.

Better code for a better future

Open source has a lot of practical advantages over the alternative. It also happens to be incredibly fun to work with like-minded, talented people. However, participating in open source - being open sourcey - takes a lot of discipline and humility. It’s a complex challenge, but it becomes much clearer when you focus on the end goal: Producing better software to make the world a better place.

Being a CSS @keyframe Power User

Animation on the web should be done with CSS and not JavaScript.

JavaScript is a powerful and versatile tool. And that’s good, because it’s all web developers have to work with. Especially now that it’s everywhere, we’re really lucky that JavaScript is so flexible. We live in kind of a strange reality where there’s almost nothing another language can do that JavaScript can’t. However, it’s kind of like a multitool — it can do everything, but it doesn’t really excel at anything. Certain situations call for specialized tools, and animation is one of them.

At first blush, animation doesn’t seem like a “style” in the same sense that border-radius is a style. The taxonomy is a little broken, but so it goes with technology. CSS 3 gives us APIs and performance that let us bring the web to life with interfaces that would make Minority Report jealous. This article discusses how to optimize for performance and eye-catching motion.

All hail the GPU

First off, why is CSS the de facto better tool for building animations? Simply put, CSS can take advantage of a computer’s GPU. This is a game changer for performance and animation fidelity. A JavaScript animation operates by invoking a callback function many times a second. This isn’t fundamentally wrong, but it introduces quite a bit of complexity to an app. JavaScript is single threaded, so it can only do one thing at a time. If JavaScript is animating something, it’s not responding to user input or network activity, and vice versa.
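The callback model described above can be sketched in a few lines. This is an illustrative example, not code from any particular library; in a browser, the returned function would be called once per frame from a requestAnimationFrame or setTimeout callback.

```javascript
// Minimal sketch of callback-driven animation: each frame, a function
// computes the current value from elapsed time.
function makeTween(from, to, durationMs) {
  return function tween(elapsedMs) {
    // Normalize elapsed time to [0, 1] and clamp it
    var progress = Math.min(Math.max(elapsedMs / durationMs, 0), 1);
    // Linear interpolation between the start and end values
    return from + (to - from) * progress;
  };
}

var tween = makeTween(0, 300, 2000); // animate 0 -> 300 over two seconds
tween(0);    // -> 0
tween(1000); // -> 150
tween(2500); // -> 300 (clamped; the animation is over)
```

Every one of those per-frame calls runs on the same single thread as everything else, which is exactly why heavy JavaScript elsewhere makes animations stutter.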

That much is more or less common knowledge, but what isn’t discussed as much is the Garbage Collector and its impact on performance. Because JavaScript takes care of memory allocation, the Garbage Collector has to come along and clean up the mess that we’ve made from time to time. In most applications, this is fine and not noticeable. However, it is a huge problem for animations. The Garbage Collector literally stops all JavaScript from running, including animation callback functions, so there is a distinct stutter from time to time. This is especially noticeable in animations with a high frame rate, because code runs more frequently and the Garbage Collector runs accordingly.

So, what to do about stuttering JavaScript bogging down our animations? Sidestep the problem entirely: Let’s kick our animations over to the GPU and free up the JavaScript thread and Garbage Collector. With CSS @keyframes, we can allocate our resources more efficiently and let the browser optimize our animations.

The prefix problem

One CSS rule isn’t cool anymore. You know what’s cool? A billion CSS rules. That’s the current state of affairs with CSS 3, anyways. This problem extends to @keyframes:

.myAnimation {
  -moz-animation-name: myAnimation;
  -moz-animation-duration: 2000ms;
  -moz-animation-delay: 0ms;
  -moz-animation-fill-mode: forwards;
  -moz-animation-timing-function: linear;
  -ms-animation-name: myAnimation;
  -ms-animation-duration: 2000ms;
  -ms-animation-delay: 0ms;
  -ms-animation-fill-mode: forwards;
  -ms-animation-timing-function: linear;
  -o-animation-name: myAnimation;
  -o-animation-duration: 2000ms;
  -o-animation-delay: 0ms;
  -o-animation-fill-mode: forwards;
  -o-animation-timing-function: linear;
  -webkit-animation-name: myAnimation;
  -webkit-animation-duration: 2000ms;
  -webkit-animation-delay: 0ms;
  -webkit-animation-fill-mode: forwards;
  -webkit-animation-timing-function: linear;
  animation-name: myAnimation;
  animation-duration: 2000ms;
  animation-delay: 0ms;
  animation-fill-mode: forwards;
  animation-timing-function: linear;
}

This is our reality. We’re actually expected to do this. This is currently what is necessary to have our animations run in all modern browsers.

Believe it or not, nobody really wants to write all this out. Thankfully there are a number of tools to ease the pain, but the current situation is innately broken. Be kind to yourself, use these tools to automate this problem away.
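As a rough illustration of what such tools do under the hood, the prefixed declarations can be generated programmatically. This is a hypothetical helper for illustration, not any specific library:

```javascript
// Hypothetical helper: emit every vendor-prefixed variant of the given
// animation-* properties, plus the unprefixed form last.
var PREFIXES = ['-moz-', '-ms-', '-o-', '-webkit-', ''];

function prefixedAnimationRule(selector, props) {
  var lines = [];
  PREFIXES.forEach(function (prefix) {
    Object.keys(props).forEach(function (prop) {
      lines.push('  ' + prefix + 'animation-' + prop + ': ' + props[prop] + ';');
    });
  });
  return selector + ' {\n' + lines.join('\n') + '\n}';
}

var rule = prefixedAnimationRule('.myAnimation', {
  'name': 'myAnimation',
  'duration': '2000ms'
});
// rule now contains all five variants of each declaration
```

Write the two declarations you care about once, and let a loop stamp out the other eight.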

Multilayered easing

Easing formulae are a crucial component of any compelling animation. A while back I discovered something kind of cool about easing formulae: If you animate different properties synchronously with different easing formulae, you can get some very natural and intriguing motion. Most animations that we see on the web use the same easing formula for X and Y, so things move in a straight line. However, if we mix and match formulae — say, easeFrom for X and easeTo for Y — we get fun curves and transitions. I have written Shifty and Rekapi, JavaScript libraries that make it really easy to decouple animation properties from easing formulae.

Combining easing formulae is just as important for CSS animations as it is for JavaScript. Fortunately, the animation-timing-function property allows for this:

.combined-formulae {
  position: absolute;
  -webkit-animation-name: top-property-animation, left-property-animation;
  -webkit-animation-timing-function: linear, cubic-bezier(.895,.03,.685,.22);
  -webkit-animation-duration: 2000ms;
}

@-webkit-keyframes top-property-animation {
  0% { top: 0px; }
  100% { top: 300px; }
}

@-webkit-keyframes left-property-animation {
  0% { left: 0px; }
  100% { left: 300px; }
}
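The effect of mixing formulae is easier to see in numbers. Here is a sketch with two illustrative easing functions (not the cubic-bezier curve above) showing why per-axis easing bends the path:

```javascript
// Two easing formulae operating on normalized time t in [0, 1].
function linear(t) { return t; }
function easeInCubic(t) { return t * t * t; }

// Compute a 2D position with an independent easing function per axis.
function positionAt(t, easeX, easeY, distance) {
  return {
    x: easeX(t) * distance,
    y: easeY(t) * distance
  };
}

var p = positionAt(0.5, linear, easeInCubic, 300);
// Halfway through, x has covered 150px but y only 37.5px --
// the element traces a curve instead of a straight diagonal.
```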

Definitely experiment with different easing formulae combinations. I’ve built a little tool to make this easier.

Automation is awesome

Getting an animation to look just right involves a lot of iteration and tweaking. CSS is absolutely not conducive to this. Neither Chrome Dev Tools nor Firebug currently allow you to easily modify @keyframes dynamically, so we are mostly left to modify the source code and reload for every minor change. This problem is exacerbated by the vendor prefix problem that was discussed previously. There is a lot of boilerplate in CSS animations, and we shouldn’t have to deal with any of that when designing an animation.

I think it is critically important to have powerful, high-level tools for this sort of thing. Animations should be made graphically, not programmatically. There are a number of tools being developed that do exactly this, which is totally awesome. We are essentially rebuilding in HTML 5 what the Flash world has enjoyed for over a decade with their authoring environment.

I think there needs to be good open source tools for this as well, which is why I am developing Stylie. Stylie is an app that makes creating CSS 3 @keyframes animations a drag-and-drop experience. It uses Rekapi to generate optimized, cross-browser CSS that you can copy and paste into your stylesheets.

Choose the best approaches

The tools you choose to generate animations are largely irrelevant. Use whatever tool makes you most efficient, be it open or closed, free or paid. What matters is that you solve a problem with the correct strategy. In the case of web animation, CSS is often a better approach than JavaScript because of GPU performance optimization and the lack of Garbage Collector pauses. And, since actually writing an animation is such a slow and tedious task, let the computer do the work for you and iterate with a graphical tool.

A JavaScript Library Template

I really like to write JavaScript libraries. This is because low-level programming interests me a lot. Although JavaScript is an inherently high-level language, I find writing the abstractions that “just work” like magic for others to be really fascinating. I’ve written a number of libraries of varying complexity, and I’ve developed a pretty good pattern that gives me a lot of flexibility when developing them. Building a library to scale is a much different challenge than building an application to scale, and it’s not discussed very much.

Libraries versus apps

End users are the audience for applications. Libraries are different: their audience is other developers. This makes a big difference when architecting a library. Applications are often very large and have a lot of dependencies, but good libraries are small. I believe that libraries should follow the UNIX philosophy: they should “do one thing and do it well.”

To achieve this, library code must be optimized for size and performance. Dependencies, while not unreasonable to have, must be carefully considered and used sparingly. Using Backbone or CanJS to structure the library codebase is probably not a good idea, because MVC frameworks add significant bloat and make the library less portable. However, some degree of boilerplate is needed to develop a library and tie it all together.

Introducing lib-tmpl

lib-tmpl is a small project that I wrote: a skeleton project to build JavaScript libraries on top of. It takes care of several things for you, including directory structure, a module pattern, and hooks for testing, documentation, and builds.

Built to scale, built for sanity

For a project of any size, structure is paramount. It is very important to organize code in a logical way, for a number of reasons. Most importantly, code must be approachable for others. This is necessary for effective collaboration with fellow developers - successful projects are not a one-man show. The best way to start organizing code is to have a directory structure that makes sense. lib-tmpl follows the directory structure that I developed with Rekapi and Shifty, which is a hodgepodge of directory structures that I saw in various other libraries. Each directory has a README that explains what it is for.

Another way to structure a library codebase is to divide it up into modules. The pattern I follow is to have one “core” module that defines a constructor and basic utilities, and multiple non-core modules that each perform a specific task. What a module should do entirely depends on your project, but it should serve to enhance the code that exists in the core. lib-tmpl provides a very simple module pattern that clearly defines all of the sections that you should put your code (such as private/public functions and constants).
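A stripped-down sketch of that core-plus-modules layout might look like the following. The names here are invented for illustration, and a real project would split the sections into separate files:

```javascript
// Sketch of a "core" defining the constructor and utilities, with a
// non-core module section enhancing it with one specific task.
var MyLib = (function () {
  'use strict';

  // --- core: constructor and basic utilities ---
  function MyLib(opt_name) {
    this.name = opt_name || 'anonymous';
  }

  // Private helper, invisible outside the closure
  function capitalize(str) {
    return str.charAt(0).toUpperCase() + str.slice(1);
  }

  // --- non-core module: enhances the core ---
  MyLib.prototype.label = function () {
    return capitalize(this.name);
  };

  return MyLib;
})();

var lib = new MyLib('rekapi');
lib.label(); // -> 'Rekapi'
```

The closure cleanly separates private helpers from the public surface, which is exactly the kind of clearly defined section boundary the module pattern is meant to enforce.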

“Best practices”

“Best practices” is kind of a weird, nebulous concept. What seems obvious to one person is evil to another, so I tried not to get overbearing with this project. There are certain practices that most developers seem to agree upon, such as testing, documentation, and optimizing with a compiler, and lib-tmpl provides hooks for all of these. While I simply made a directory to house documentation, I did make choices regarding compiler and testing framework options. For testing, I like to use QUnit. I included it as the default testing framework, along with some basic example tests.

lib-tmpl uses UglifyJS as its build tool. This means that lib-tmpl requires NodeJS. Building a lib-tmpl library is easy, thanks to the included build.js script. The documentation explains how to use and extend the build script.

Building better libraries

I have my opinions when it comes to JavaScript style. lib-tmpl is built on those opinions, but you are encouraged to ditch them and form your own if they don’t suit you. Regardless of the style you choose, lib-tmpl’s structure and included toolset should go a long way to help you build awesome JavaScript libraries and minimize friction.

lib-tmpl lives here and is open source. Please fork and expand on the template to your liking.

Treating JavaScript Like a 30 Year Old Language

I don’t like JavaScript. I write an enormous amount of JavaScript — in fact I write it almost exclusively — but I don’t really like it as a language. I don’t want to write a giant screed as to why I don’t like the language, but suffice it to say that it lacks some really basic features (like default parameters), and its lack of clearly defined rules (semicolons, whitespace) creates lots of headaches.

Something that has helped me keep JavaScript manageable and even fun is the coding styles I have developed. I create a lot of rules for myself that result in extremely readable and descriptive code. This equates to lots of boring style constructs and boilerplate comments that add absolutely no features to what I’m writing, but go a long way to keep the code obvious and explicit. I write JavaScript as though it were C, at least to some extent. Why do I create this extra work for myself? Because reading code sucks. It will always suck, whether you are reading your own code or somebody else’s. Especially when it comes to my open source projects, I really, really want people to read and contribute to the code I write. It is absolutely worth it to take extra time to trivially polish code so that fellow humans can easily dive right in. Additionally, I read my code many more times than I write it, so I want to have mercy on my future self by writing painfully obvious and straightforward code.

Much of my style is rooted in the Google JavaScript Style Guide. This is largely due to the fact that I need to follow this style guide in my day job. However, I have fallen in love with the strictness of the Google Way, and have adapted it for my own open source projects. For said open source projects, I don’t follow Google 100% because, well, I have my own opinions on clean code. It’s needless to go over every minutia of my style, but I think there are a few basic rules that go a long way in keeping large amounts of code sane and manageable.

80 character line limits

There’s nothing more irritating when trying to grok code than scrolling around to see the entire line. It’s a mental context killer. While any text editor worth its disk space has the option to wrap text automatically, chances are your code will be read in an environment that doesn’t. This includes CLI programs (like diff and less) and web-based source viewers like Github. Don’t make people scroll, it’s mean.

At some point in computer history, somebody (arbitrarily?) created an 80 character line limit for code. It’s formally documented in the Python Style Guide, and I think it’s totally awesome. This has two benefits. First, as I mentioned, you eliminate the need to horizontally scroll. More substantively, this promotes code simplicity. When you restrict yourself to 80 characters before a line break, you’re more likely to break statements up into smaller chunks. I don’t like clever code that crams a bunch of nested function calls into lines like this:

setTransformStyles(context, buildTransformValue(this._transformOrder, _.pick(state, transformFunctionNames)));

This is painful to read. There’s so much to parse mentally, and I might even have to scroll. I’d much rather read each meaningful chunk line by line, and then see how it all comes together:

// Formatted version of the above snippet.  Taken from the Rekapi source.
var transformProperties = _.pick(state, transformFunctionNames);
var builtStyle = buildTransformValue(this._transformOrder,
    transformProperties);

setTransformStyles(context, builtStyle);

While I can add line breaks to the original one-liner to make it conform to the 80 character limit, the limit annoys me into breaking things up into shorter, more obvious statements. I want to be annoyed into this, it makes my code more readable in the long run.

Strict-ish typing with annotations

The Google Closure Compiler has a lot of rules regarding code annotations. These are necessary to allow the compiler to perform the “Advanced” compile-time optimizations, and it blows up when there is a type error. The annotations serve to clearly communicate to the compiler what the expected inputs and outputs of every function are.

It turns out that taking the time to explicitly declare the input and output types of a function for the compiler has a nice side effect: The types are also explicitly communicated to humans! Let’s take an example of some magical code:

function addNumbers (num1, num2) {
  return num1 + num2;
}

Simple enough, but what if we do this:

var sum = addNumbers('5', 10);
console.log(sum); // -> 510

Whoops. If the client of this code assumes that addNumbers will do any typecasting for them, they will get unexpected results. However, if we explicitly annotate the function, we leave very little open to interpretation:

/**
 * @param {number} num1
 * @param {number} num2
 * @return {number}
 */
function addNumbers (num1, num2) {
  return num1 + num2;
}

Much better. Very clear, very explicit. We can even take this a step further and add some documentation for the poor soul who has to read this code in the future:

/**
 * Adds two numbers.
 * @param {number} num1 The first number to add.
 * @param {number} num2 The second number to add.
 * @return {number} The result of adding num1 and num2.
 */
function addNumbers (num1, num2) {
  return num1 + num2;
}

Now, you by no means have to get this detailed with every function that you write. If a function is simple and obvious enough, I often just annotate the types and omit the documentation text. Just be pragmatic about it.

The new keyword

All of the cool kids seem to really hate the new JavaScript keyword. Apparently it’s totally old school and not at all trendy, so therefore you shouldn’t use it. Dmitry Baranovskiy seems to have a particular distaste for it.

Well, I really like the new keyword. When I see new in code, I read it as “make a new instance of the following.” Here’s an example of why I like this clarity:

var kitty = Cat();

This is simple enough, but kitty could be anything. Cat could be giving us a number, for all we know. I prefer this:

var kitty = new Cat();

You may prefer to just capitalize your constructors, but I feel that using new helps to clearly communicate that a function is actually a constructor.
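The clarity pays off mechanically, too: an object created with new is verifiably an instance of its constructor. A trivial sketch:

```javascript
function Cat() {
  this.sound = 'meow';
}

var kitty = new Cat();

// No guessing about what Cat() handed back:
kitty instanceof Cat; // -> true
kitty.sound;          // -> 'meow'
```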

Compile-time defines

Compilers are totally awesome. Generally speaking, you shouldn’t deploy production code unless it’s been compiled with tools such as the Google Closure Compiler or UglifyJS. For my open source projects, I prefer UglifyJS. I actually get better compression with Closure Compiler, but UglifyJS is easier to develop with. UglifyJS also has a few really awesome features. One that I’ve fallen in love with is code pre-processing with compile-time defines. This feature is a throwback from the C world, and probably other compiled languages that are older than I. In any case, using defines lets you tailor your compiled binaries to fulfill various requirements. For example, you can use defines to tailor a build for mobile platforms that need different code than desktop platforms.

I’m using defines for Rekapi’s testing hooks. There are some methods that I want to expose for my unit tests, but I don’t want to expose them in the compiled code that gets sent to users. It’s just wasted bandwidth and CPU cycles to parse it. Here’s how I set it up:

// At the beginning of the library
if (typeof KAPI_DEBUG === 'undefined') {
  var KAPI_DEBUG = true;
}
// Later on in the code
if (KAPI_DEBUG) {
  Kapi._private = {
    'calculateLoopPosition': calculateLoopPosition
    ,'updateToCurrentMillisecond': updateToCurrentMillisecond
    ,'tick': tick
    ,'determineCurrentLoopIteration': determineCurrentLoopIteration
    ,'calculateTimeSinceStart': calculateTimeSinceStart
    ,'isAnimationComplete': isAnimationComplete
    ,'updatePlayState': updatePlayState
  };
}

Kapi._private has references to a bunch of methods that will never be used publicly, but need to be exposed for testing. KAPI_DEBUG is a global variable (eeek!), but is only present in the source code, not the compiled binary. This is thanks to my build.js script:

var _fs = require('fs');
var uglifyJS = require('uglify-js');
var jsp = uglifyJS.parser;
var pro = uglifyJS.uglify;
var ast = jsp.parse( _fs.readFileSync(_distFileName, 'utf-8') );

ast = pro.ast_mangle(ast, {
    'defines': {
      'KAPI_DEBUG': ['name', 'false']
    }
  });

This tells UglifyJS to set KAPI_DEBUG to false when it is compiled. Because my debugging code is wrapped in a conditional that tests the boolean value of KAPI_DEBUG, it is marked as unreachable code and not included in the binary. Perfect!

Something to note: At the time of this writing, this feature is poorly documented, but a Pull Request is pending.

I code like an old man

I’ve been writing JavaScript for three-ish years at this point and have built up some strong stylistic preferences. It’s worth noting that I started out with C++ before switching to dynamic languages. While my coding style may not be everyone’s cup of tea, it is optimized for readability. I think a lot of the newer coding conventions do not lend themselves to clarity and approachability. Yes, I use semicolons, new, and love a nicely architected inheritance chain. I don’t do these things out of obstinacy, but out of practicality. I suggest that you adopt whatever coding style results in code that is readable to others.

I write code that is boring, because boring code is readable code. I would contend that readability generally has more impact on the success of a project than micro-optimizations and stylistic experimentation. If that means writing like a C coder in the 80’s, then so be it.