Jeremy Kahn's Dev Blog

There's always a method to my madness.

Open Source as a Civic Duty

I occasionally get asked why I spend so much of my free time writing software and giving it away for free. There are a number of reasons for this (I like to build things, and I use it as an excuse to practice and improve my skills), but one of my strongest motivators is that I see open source contributions as a civic duty, a moral obligation to the rest of the world.

Considering that I am an employed programmer in Silicon Valley who is generally thought not to be incompetent, you could deduce that I’m not worried about how I’m going to pay for my next meal. I’m not rich by any stretch, but I have a reasonably comfortable lifestyle, and that’s all I’d ever really want. After all, does a programmer need much more than a laptop and a cup of coffee to be content? Given that I’m not stressing about whether I’ll have a place to live and food to eat next month, I’m among the most fortunate people in the world. And if you have access to the technology needed to read this, you probably are too.

Morality is a touchy subject that varies widely among individuals and cultures, and this post is not meant to imply that my sense of morality is necessarily correct. However, I feel that people who are fortunate enough to survive on their own means have an obligation to give back to their community. I believe that some degree of self-sacrifice and civic duty is necessary to build and maintain a community that we all want to live in. This can come in many forms: giving to charity, doing volunteer work, or in my case, writing free software. It doesn’t really matter how you contribute to your community; it just matters that you do.

Granted, I’m not writing software that will deliver clean water to poor countries in Africa or help cure malaria. I tend to focus on web animation tools and other UI utilities. However, I’m doing work so that others don’t have to. My goal is to save time at scale so that other people can solve different problems that don’t yet have solutions. To take an extreme example, consider the GNU Project. As a whole, GNU has saved humanity man-centuries’ worth of work. Very little time is spent developing operating systems and basic utilities anymore, because that is mostly a solved problem. Instead, we use GNU tools that others have spent the time to build, freeing us up to pursue other challenges like statistical modeling and AIDS research. If you have doubts about the value of free software, just look at GNU.

Altruism is, unfortunately, not a terribly common value in Silicon Valley. At best, larger companies will have a branch dedicated to positive social impact, and smaller companies may have the occasional charity fundraiser. But it seems that a large portion of Silicon Valley tech companies are focused on some self-serving vision or a niche problem that only the founders and their friends have. I don’t want a culture where the only problems being solved are the ones that tech people have. I feel that writing free software on my own time is a tiny step in the right direction, even if an indirect one. My dream is that someday, a free tool I’ve written will be used to help do something truly valuable. Anyone can do this; it doesn’t require much money or time, just a sense of the bigger picture.

Setting Up a Local Development Environment in Chrome OS

Cloud-only operating systems are a great option for most people. Web apps are slowly replacing native apps, and that trend is likely to continue. However, one type of user that will have a hard time migrating to the cloud is software developers. So many of our tools are local command line utilities, and they don’t really work in the context of a web application. I tend to treat UNIX as an IDE, and this methodology is at odds with the concept of cloud-only operating systems. I can’t see myself ever giving up tools like Vim or Git, or the simple elegance of piping the GNU Coreutils. Cloud-based IDEs like Cloud9 are cool, but they don’t give me the control that I get from a UNIX-like environment.

The common workaround for this is to SSH into a remote server via the browser, but I feel that this is a fundamentally flawed way to develop software. This approach requires you to buy and maintain two machines instead of one. More importantly, your productivity is limited by the quality of your internet connection. If you are in a coffee shop with spotty wifi or just enjoying a nice day in the park with your laptop, you won’t have easy access to your tools, if any at all.

I got a Chromebook over the holidays, and one of my concerns was that I wouldn’t be able to code on it effectively. I don’t really have much use for a laptop I can’t code with, so I started looking into ways to get Ubuntu loaded so I could enjoy my Linux tools unencumbered. I tried installing a Chromebook-optimized version of Ubuntu, but it was kind of buggy and I felt like I was missing the point of the Chromebook. I removed Ubuntu and instead tried Crouton, an awesome little tool written by David Schneider. Crouton makes it painless to install a chroot Ubuntu environment on Chrome OS. In other words, it creates an Ubuntu file system inside of Chrome OS’s file system and logs you in. When logged into the chroot, you are effectively running Ubuntu. It’s not virtualization, as Ubuntu has full access to your hardware resources, which can be a bad thing if you’re not careful. If you are careful, it works amazingly well. You can even run a Unity environment (or whatever GUI you want, Crouton lets you choose) and graphical applications. You can easily toggle between the Chrome OS and Ubuntu environments with a keyboard combination. What really matters is that you can run OpenSSH and easily interact with it from Chrome, even when you are offline.

This post serves as a guide for two things: setting up the basic chroot environment with Crouton, and the extra setup I needed to perform in order to get some development tools running. A few issues had to be worked out because this is admittedly a backwards way to do things, and the Ubuntu environment installed by Crouton is a little bare-bones.

Getting started with Crouton

The first thing to do is boot the Chromebook into Developer Mode. Doing this will wipe the device, but that doesn’t really matter because everything is backed up to the cloud and gets synced back down when you log in. This is the process I had to follow for my model, and yours may be a little different - just Google for it. You don’t need to worry about setting up the firmware to boot from USB. Once you are in Developer Mode, sign in and hit Ctrl + Alt + T to fire up the crosh terminal. Type shell to be dropped into a BASH environment. At this point you need to follow the directions in Crouton’s README, but here’s a quick rundown of what you need to do in the shell we just opened:

# First, download crouton (the link is in its README); it lands in ~/Downloads.
sudo sh -e ~/Downloads/crouton -t unity

Go for a walk at this point - this will download about 700 MB of files. Once the process is complete, you will be prompted for a username and password for the chroot’s primary user. Enter those, a few other bits of user info, and you’re done! Since we installed Unity, we can fire it up with:

# The `-b` backgrounds the process.  You can omit it.
sudo startunity -b

You can switch between the Chrome OS and Ubuntu environments with ctrl + alt + shift + F1 (the “back” button) and ctrl + alt + shift + F2 (forward), respectively.

That’s it! Now you can run Ubuntu apps inside of Chrome OS.

Setting up a development environment

The previous section covered setting up an Ubuntu chroot; this section covers setting up some tools that are useful for web development.

Git

You need Git. ‘Nuff said.

sudo apt-get install git

Vim with Ruby support

Command-T, a Vim plugin I use, depends on Ruby support. Because of this, I needed to compile Vim with Ruby support enabled. The Ubuntu chroot that Crouton installed lacks a few of the dependencies that a Ruby-enabled Vim requires, so I had to install those myself:

sudo apt-get install libncurses5-dev ruby-dev

From here I followed this guide written by Krešimir Bojčić, but here’s the part that actually gets and compiles the source code into an executable:

# Vim is in a Mercurial repository.
sudo apt-get install mercurial

hg clone https://vim.googlecode.com/hg/ ~/vim
cd ~/vim
hg update -C v7-3-154
./configure --with-features=huge  --disable-largefile \
            --enable-perlinterp   --enable-pythoninterp \
            --enable-rubyinterp   --enable-gui=gtk2

make
sudo make install

Now your Vim has Ruby support!

OpenSSH

Another critical tool for me is OpenSSH, because I like to SSH into my Ubuntu environment from Chrome and not deal with Unity any more than I have to. The easiest way to get it is to install tasksel and select OpenSSH from there:

sudo apt-get install tasksel
sudo tasksel

tasksel gives you a UI to select a number of packages you’d like to install, including OpenSSH. You can also easily install a LAMP stack from this UI, if you’d like.

Node

Yup, you can run NodeJS from Chrome OS. It’s as simple as:

sudo apt-get install nodejs npm
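
To verify the install, you can spin up the canonical hello-world server (a quick sketch; depending on how the Ubuntu package names the binary, you may need to invoke nodejs instead of node):

// server.js -- run with `node server.js` (or `nodejs server.js`).
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello from a Chromebook!\n');
}).listen(8080);

console.log('Server running at http://localhost:8080/');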

Full-stack development for $250

The Chromebook is an amazing little device. By running an Ubuntu chroot, you have all the tools you need to build a web project from scratch, and probably a lot more. Keep in mind that the machine has an ARM processor, so some programs may not work (or will at least need to be compiled from source). I haven’t had any hangups that I couldn’t fix, however. Why is this worth the trouble? Personally, I just like little computers. It’s also great to have an SSD-powered laptop that has no moving parts - not even a fan. A soft benefit of having such an inexpensive device is the peace of mind of not lugging around a $2000+ laptop with you to the coffee shop. The 11-inch screen is reasonably comfortable to code on and the battery life is great. The Chromebook feels like a poor man’s MacBook Air, and with a full-featured local development environment, I can safely depend on it wherever I go.

In Response to Oscar Godson

My WebKit monoculture post garnered quite a reaction in the community. I figured my opinion would not be particularly popular on this subject, but I stand by what I said. If nothing else, it sparked a few really interesting conversations. Oscar Godson wrote a well-reasoned response blog post with some strong counterpoints, which I’d like to address.

Just because something is open source doesn’t mean the downstream browser vendors implement it the same or don’t add to it. If it were true that WebKit was used the exact same way across browsers why are there so many Safari and Chrome differences?

This is true. There is inherent fragmentation in open source, and I don’t know how to solve that problem. However, I would argue that less fragmentation is better than more. Even if a single project gets forked many times and each is heavily modified, there is still a common base from which each fork descends. As I originally said, a world without browser rendering discrepancies is unrealistic. I just think we can somewhat diminish the problem by unifying on a common codebase and branching from it.

All that this does is creates browsers with different versions of WebKit where some have features A, B, C, and some X, Y, Z and some that mix and match and some that implement B differently because they felt that B was implemented in a bad way.

Yes, but this is the nature of open source. This is what happened with Linux, but eventually leading distributions emerged (Ubuntu and Android are examples of market-chosen leaders). Because it’s all Linux, it’s not impossible for different versions to cross-pollinate. Ubuntu for Android is a great example of this happening with a degree of success. If either OS were built on top of a different kernel, such a Frankenstein would be impossible (or a lot less elegant, anyway).

Also, Gecko is open source, so why WebKit over Gecko?

This much is my own personal opinion. To be clear, the original post reflects my opinion, so take it with a grain of salt. I prefer WebKit to Gecko for a number of reasons. Anecdotally, I experience less frustration when developing with WebKit, and (also anecdotally) I find WebKit in more places than you can find Gecko. It’s already in two mainstream desktop browsers (Chrome and Safari), it is the (enforced) only option on iOS, and many less-popular browsers already use it or are switching to it. WebKit’s overwhelming market acceptance is why there’s even talk of a WebKit monoculture to begin with. The market is speaking: people want WebKit.

All three of the top browsers will be rapid release soon, so this makes WebKit no different.

True. The update problem will eventually solve itself. I mainly brought up the rapid release cycle to demonstrate how this is not IE6 all over again. My bigger concern is that Web Developers still need to support multiple rendering engines (and their vendor prefixes: -webkit, -ms, etc.), which makes us less productive. I feel that the prefix situation is ridiculous and needs to be fixed. The Web Developer community needs to decide which direction to move forward with - I think WebKit is the way to go because it is already winning.

What’s wrong with Gecko and/or Firebug?

Again, this much is my opinion. I feel that Chrome offers more powerful tools than the competition, but developers are of course free to make their own decisions.

And, the fact of the matter is, each WebKit browser has it’s own implementation of the dev tools (see point #1 again). I can’t stand Safari’s. So, you must just mean Chrome’s or do you mean Safari’s? Which is the “right” dev tools for WebKit that are the most “developer-friendly”?

My argument is that WebKit itself is a development tool because it can be scripted for automated tests. I’d bet Gecko has such capabilities, but I haven’t seen a PhantomJS equivalent for Gecko. The community does not seem to support Gecko as strongly as it does WebKit, and I favor options with the best community endorsement when selecting my toolset.

How is Gecko more buggy than WebKit? Are there stats on how many bugs and their severity? Are there more security bugs? Which is safer? I notice more bugs too in Gecko, however, I work in WebKit all day.

I actually agree with this entire part of the counterargument. And no, I don’t have metrics. I was referring to the bugs that a developer experiences during testing, not internal bugs or security issues. Perceived bugginess skews toward whichever platform you didn’t start with. I build with WebKit first, and anecdotally I find that a lot of other developers do too. Surely there is a reason for this trend. I would postulate that it’s because a lot of developers have found WebKit and its associated tools to be a better choice in day-to-day development.

In the land of unicorns and rainbows WebKit would be the only browser, innovation would continue, every fork would be a vanilla fork and would somehow work the same on Windows, OS X, Linux, iOS, WP8, and Android. Unfortunately, in the real world that’s not how competition or technology works.

The community needs ideals to pursue - this is how we unify. Perfect harmony is unrealistic, but why can’t web development be easier and faster than it is today? I think that it can, and agreeing on a core component such as a rendering engine will get us a little closer to a better ecosystem.

Footnote

I’m not trying to start a revolution. In all likelihood, I’m way off-base with some of my theories, but it is interesting to think about. The WebKit monoculture is something of an elephant in the room for the Web Development community, and I think it’s worth exposing and considering its implications rather than adhering to cultural dogma.

Sit With Your Team

I learned a very important lesson in software development recently. I’ve been on the same team for nearly a year, but for almost that entire time I was not sitting with them. I sat in other parts of the office for various reasons. I was never very far from my team - maybe a 45-second walk. I believed that IM, email, and a multitude of other digital tools would make the distance irrelevant. To be honest, I preferred the distance because it resulted in fewer random flyby interruptions. My logic was: if I always have my headphones on anyway, it shouldn’t matter whether I sit with the team, 50 feet away, or at home.

I was wrong.

I recently decided to start sitting in my team’s area. Since doing so, my productivity and overall level of happiness have increased dramatically. All due to a simple change! My teammates are respectful of my need to independently focus with my headphones on, and I of theirs. So what’s different from being across the office? Simply, I can talk to my team now. All I need to do is turn around. The 45-second walk from before encouraged me not to talk to my team, and to IM or email them instead, or simply go incommunicado. This led to less collaboration, which led to less fun and less effective problem solving.

This was not something I thought about because the distance seemed trivial. But it wasn’t. Knowing what I do now, I don’t see how working remotely on a regular basis can be effective. All of the best communication tools in the world cannot rival a face-to-face conversation when you are trying to solve a problem. And this is coming from the most introverted person you’re likely to meet.

I Support the WebKit Monoculture

Full disclosure: I work for YouTube, which is owned by Google.

Disclaimer: This post, as well as all posts on this blog, reflect my opinions and not those of my employer.

Counter to the opinion of the Web Development literati, I support a WebKit monoculture. I’ll put this in no uncertain terms: I think the web would be better off if WebKit were the only rendering engine. The common arguments against this are that it’s bad for competition and that “it’s IE6 all over again.” I feel that the former argument is invalid, and the latter is a false analogy.

WebKit is open source

One of the core evils behind Internet Explorer is that it is a closed platform. The Web is a fundamentally open platform, and anything “closed” flies in the face of what the Internet stands for. IE was a vehicle for Microsoft’s “Embrace, extend, and extinguish” strategy, and that is bad for the web.

WebKit is open. The source code is freely available to all, and anyone with a good idea is able to contribute. WebKit has no single owner or controller. In addition to being completely open, it also has strong corporate leadership and support (Apple and Google, among others). WebKit is developed by individuals who want the Web to win, not by a single corporation. WebKit is no worse for the Web than Linux is for operating systems.

Rapid release cycle

IE did a lot of good for the Web in the short term, but it left deep scars. It achieved almost complete vendor lock-in that lingers today; many users are stuck in the past because of how difficult it is to upgrade. The WebKit developers recognize this problem and have solved it with easy and frequent updates. In the case of Chrome, it is a fully automatic process. Because WebKit has such strong corporate support, it has faster iterations than any competing project, which leads to more features and bug fixes.

It’s not a platform

To say that a WebKit monoculture is “IE6 all over again” doesn’t really make sense, because WebKit and IE aren’t comparable projects. IE6 tried to be a platform - the thing that apps are built upon - with ActiveX. WebKit is a rendering engine, a single component of the many that make up a web browser. WebKit can be integrated into any project that wants to use it.

I don’t love the idea of Web apps that run in only one browser, but I don’t see an issue with apps that support one rendering engine. Focusing on the capabilities of a single rendering engine frees developers up to build great features and not worry about appealing to the lowest common denominator. This freedom is what pushes the Web forward. In a perfect world, you would only have to write and debug code once. We would be much closer to that reality if all browsers used WebKit for rendering.

It’s more developer-friendly

WebKit has a lot to offer Web Developers. Safari’s built-in developer tools are great, and Chrome’s Developer Tools are simply the best I’ve ever used. These development tools are somewhat orthogonal to WebKit itself - you just usually find them paired with one another. The real beauty of WebKit from a tooling perspective is that it is so flexible. Again, WebKit is a component of a web browser, not a browser in and of itself. PhantomJS is a headless browser that is powered by WebKit. It is fully scriptable and can be used for running automated tests - another thing that substantively pushes the Web forward.
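
To give a sense of that scriptability, here is a minimal sketch of a complete PhantomJS script (the URL is a placeholder):

// load-page.js -- run with: phantomjs load-page.js
var page = require('webpage').create();

page.open('http://example.com/', function (status) {
  console.log('Page loaded with status: ' + status);

  // A real test would inspect the page here, e.g. with page.evaluate().
  phantom.exit(status === 'success' ? 0 : 1);
});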

Just as focusing on a single rendering engine allows us to spend more time writing cutting edge features for our apps, it also allows us to build more robust tools. Software fragmentation is bad, and it’s one of the biggest things holding the Web back right now.

Stronger focus on fewer engines

WebKit is not the only open source rendering engine. The big competitor is Gecko, Mozilla’s rendering engine that is used in Firefox. Gecko was instrumental in wresting the Web from Microsoft’s grasp, but it is past its prime. Gecko is now old, buggy, and slow - at least in comparison to WebKit. Imagine if all of the Gecko developers left and joined forces with WebKit. Both projects are (imperfectly) trying to adhere to W3C standards, so fragmentation due to inconsistencies would diminish significantly. Imagine if Firefox switched to WebKit - it would free up a ton of time currently spent fixing Firefox bugs. Don’t get me wrong, a world without browser-specific bugs is a pipe dream. But standardizing on a single rendering engine would go a long way towards unifying the web.

Competition is good and necessary, but not for all things. I would love to see browser vendors focus on competing on features to benefit the user, not their own interpretation of the W3C standards.

Making Life Easier With Grunt

Grunt is a task management tool for JavaScript projects, written by Ben Alman. I’ve been playing around with it for a few weeks and have integrated it into Shifty and Rekapi. Grunt has brought nothing but benefits to both projects. While it doesn’t do anything new or particularly flashy, it does do a great deal to standardize and streamline development tasks you are already doing or seriously need to consider doing. This includes:

  • Linting source files
  • Running automated unit tests at the command line
  • Concatenating and minifying source files into distributable binaries

These are just the tasks I have used so far. Grunt does more than this out of the box, and you can extend it to perform any custom tasks you may need. The point of this tool is to painlessly automate tasks you need to do repeatedly, probably many times a day.

What I was doing wrong

The libraries I write need a build process so that users can easily grab a deployment-ready file and get to work. Back in the day, I copy/pasted source files into the Google Closure Compiler web UI and then copy/pasted the compiled code into a .min.js file. This was annoying, so I eventually automated the process with a BASH script. That script was difficult to write and maintain, and it also required me to either have an internet connection to contact the Closure API or keep a copy of the compiler locally. It was slow and cumbersome, but it worked. Eventually, with the help of Miller Medeiros, I switched to custom Node scripts to build my projects.

For linting, I took a similar approach: I copy/pasted code into the JSLint web UI, read the errors, fixed them individually, and then copy/pasted back and forth until all the errors were gone.

For testing, I just manually tested things in the browser. I then discovered the joy of unit testing to automate this, and started writing QUnit tests which, again, I ran in the browser. For larger projects, I have several testing suites, so I needed to check each one.

The consistent flaw with all of these approaches is that they are manual processes. Programmers exist to automate solutions to problems, and I was not doing that.

How I fixed it with Grunt

Grunt abstracts lint, test, and build workflows into a system that is configured by a Gruntfile. For me, the biggest benefit was being able to switch from a 150+ line custom build script to a much simpler configuration Object that achieved the same outcome. Since linting and testing are controlled by that same configuration Object, it was trivial to add those tasks. My codebases have already seen improvements from the ease of linting and testing, and I can make improvements more quickly and safely.
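
To make that concrete, here is a minimal sketch of such a Gruntfile, modeled on Grunt’s built-in lint, qunit, concat, and min tasks (the file paths are hypothetical):

// grunt.js -- the project's Gruntfile.
module.exports = function (grunt) {
  grunt.initConfig({
    // Lint every source file.
    lint: {
      all: ['src/**/*.js']
    },

    // Run the QUnit suites at the command line.
    qunit: {
      all: ['test/**/*.html']
    },

    // Concatenate the source files into a single distributable file...
    concat: {
      dist: {
        src: ['src/intro.js', 'src/core.js', 'src/outro.js'],
        dest: 'dist/project.js'
      }
    },

    // ...and minify it.
    min: {
      dist: {
        src: ['dist/project.js'],
        dest: 'dist/project.min.js'
      }
    }
  });

  // Running `grunt` with no arguments performs all of the above, in order.
  grunt.registerTask('default', 'lint qunit concat min');
};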

Don’t code tasks, configure them

The key lesson I learned is to favor task configuration over implementation. My old build script needed to be re-read and understood every time I needed to change or fix something. It was difficult to reuse for other projects because it was built specifically around the needs it was originally written for. A tool like Grunt takes care of all of the implementation details.

All build processes are fundamentally the same, and Grunt implements the concatenation and minification process for you. You only need to specify input and output files, and Grunt is smart enough to take care of the rest. This is much better than doing it yourself! Configurations are easier to maintain than implementations and are almost always faster to write at the outset.

Standardizing the web developer workflow

A soft benefit of a tool like Grunt is that it helps to unify the common workflows of web developers. It is easier to get familiar with a new codebase that uses Grunt because there is less infrastructure (building/testing/etc.) to worry about. Another way to look at it: Grunt does for workflows what jQuery does for cross-browser DOM manipulation and traversal.

The way forward

As web apps become more sophisticated, so must our tools and workflows. Other ecosystems have enjoyed powerful task and build systems (like Make or Rake) for decades, and JavaScript is finally catching up. I think that Grunt is the ideal tool to help power the next generation of web applications - it is easy to use, it is well-documented, and it has strong community support. Most importantly, it does not get in your way and restrict you to a specific way of doing things. Going forward, all of my projects will be using Grunt - definitely give it a look so you can start automating your workflow too.

Keeping It Sane: The Joy of Constants Modules

The first programming language I learned was C++. C++ has support for something called “constants”, which didn’t seem terribly interesting or valuable to me at first. The idea of a read-only variable seemed kind of pointless, and the syntactic implications felt annoying. It wasn’t until later that I saw the benefits of constant variables - they give meaning to values and don’t let a programmer modify them inadvertently. Given the complexity of pretty much every piece of software, having a guaranteed and predictable variable value is indispensable.

On recent pet projects, I’ve been using Require.js to isolate my constants into a single AMD module that I can access at-will. This has proven to be incredibly useful in keeping my code clean and decoupled, and I think that a similar pattern can benefit any project.

Data vs. logic

First of all, what should be a constant, and what shouldn’t? Simply, any value that represents immutable data should be a constant - everything else is logic. A color string, a placeholder string, anything with a literal value - these are all data. Mathematical formulas, property names (like CSS properties), and conventional components (like the leading “on” in DOM event names) are logic. Logic should be defined where it is used, and data should be decoupled and defined in a dedicated location.
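
A quick sketch of the distinction (the names here are hypothetical):

// Data: a literal value with meaning.  Isolate it as a constant.
var NAME_PLACEHOLDER_TEXT = 'Enter your name';

// Logic: a conventional transformation.  Define it where it is used.
function toDomHandlerName (eventName) {
  return 'on' + eventName;  // e.g., 'click' becomes 'onclick'
}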

DRY

The most obvious benefit to using constant variables for your data is that it helps to reduce repetition in your code. This is necessary to achieve a DRY codebase, which is almost always ideal. DRY isn’t just a buzzword, it has many real-world benefits. A DRY codebase is easier to maintain, because you don’t have to hunt around for all instances of a piece of data. For example, you may have a color that indicates the health level for a character in a game. Let’s say that “green” means “healthy”.

// Notice that the variable is named after what it represents, not the value
// itself.  i.e., it is not "GREEN."
var HEALTHY_COLOR = '#0f0';

If you decide to change this color to something else, you don’t want to dig through the entire codebase and hope that you got every instance of #0f0. It is much easier in the long run to define this value once and only read from it after that. This also makes automated find-and-replace tools more reliable.

To make life even easier, consolidate your constants into a single file. This isn’t strictly necessary, but it’s that much less searching you have to do when you want to modify a constant.

Constants as configuration

I’m of the opinion that good code reads more or less like a configuration file. That is, each line operates as a distinct component of an application and is decoupled from the code around it. This isn’t feasible in most cases, but it’s an ideal to strive for. One way to decouple code is to separate it out into modules, which is exactly what Require.js is built for.

As I mentioned, it’s best to put all of your constants into a single module file. This seems trivial, but it provides a subtle benefit - this module turns into a sort of “config file” for your entire application. This assumes that you are diligent about separating your logic and data. If you were building a physics simulation, you may want to tweak the acceleration due to gravity, or the coefficient of friction for a given material. All of your tweaking only needs to happen in one place, rather than jumping around the codebase to make changes.
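
As a sketch of how this might look (the module and values here are hypothetical), the entire “config file” can be a single AMD module that defines nothing but an Object literal:

// constants.js
define({
  'GRAVITY_ACCELERATION': 9.81
  ,'FRICTION_COEFFICIENT_WOOD': 0.4
  ,'HEALTHY_COLOR': '#0f0'
});

// Elsewhere in the app, these values are read -- never written:
// require(['constants'], function (constants) {
//   applyGravity(constants.GRAVITY_ACCELERATION);
// });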

Meaningful code

In addition to decoupled code, I strive for readable, meaningful code. In essence, I like code that reads like prose, or at least a sort of broken English. Some projects take this too far, but ultimately we don’t want to be human compilers. Code should be readable, and meaningful variable names let us do that. A standard convention for constants is to NAME_THEM_LIKE_THIS. This tells whoever is reading the code that the variable is a constant and should not be modified. It also helps to communicate your intent. The semantic intent of this jQuery code isn’t very clear:

$('.name').css('background', '#f00');

But this is:

$(NAME_FIELD_SELECTOR).css('background', REQUIRED_COLOR);

Most importantly, you want to reduce the amount of guesswork that another programmer has to do in order to figure out what you were trying to accomplish with a given piece of code.

Faith-based constants

Despite all of the virtues of constants I have extolled, JavaScript doesn’t actually support them. The only way to have “real” constants in JavaScript is to run your code through a compiler like the Google Closure Compiler (with the @const annotation). It appears that const support is coming to the language in ES6, and some browsers already support it. Assuming you are targeting a wider audience, you are left to hope that the values of your constants are not modified by an unsuspecting future developer. This is not ideal; short of using the Closure Compiler, you simply have to be diligent about not writing to constants. VARIABLE_NAMES_LIKE_THIS should go a long way toward preventing mistakes, however.
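
For illustration, here is roughly what those two approaches look like (a sketch, reusing the constant from earlier):

// The Closure Compiler will warn about any reassignment of this variable:
/** @const */
var HEALTHY_COLOR = '#0f0';

// ES6 `const`, where supported, enforces the same thing in the language itself:
// const HEALTHY_COLOR = '#0f0';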

Constant improvement

Constants make your life easier. While you won’t explicitly think about the benefits I outlined above when using them, they do have an immediate impact on the readability of your code. Constants will help you to separate logic and data and keep your code decoupled. With any luck, we’ll see constants standardized and not need to depend on tools and conventions to enjoy the benefits.

Screenshot Absolutely Everything

One of the best things about being a programmer is using powerful tools to solve problems in simple ways. One of the worst things about being a programmer is the ambiguity of feature requests and bug descriptions. Luckily, we have simple tools to solve this problem! The adage “show, don’t tell” is one that is as true today as it ever was, and developers can avoid hours of frustration and wasted time by taking and requesting screenshots.

Confirming the problem

Consider this bug description:

  1. Go to http://rekapi.com/ease.html
  2. Look at field for "Custom easing 1."
  3. Notice the right side of the input, it looks incorrect.

This is actually an exemplary bug description, but it still leaves a little bit open to interpretation. Specifically, “looks incorrect” could mean anything. This is a contrived example of a bug description, but you’ve no doubt seen bugs filed against you that read like this. Rather than having to read those steps and reproduce the issue, how about just looking at this:

[Screenshot of the misrendered “Custom easing 1” input]

This is much nicer! Showing an image like this does a number of things for you:

  • It saves on time taken to reproduce an issue.
  • It helps you and your fellow humans make associations between images and issues, which our brains are particularly good at.
  • It captures what was actually shown, rather than an individual’s perception of it. Individual perception is especially unreliable when dealing with UI issues.
  • Since bugs are typically tracked indefinitely, an image gives you something concrete to refer to long after the issue was fixed and forgotten about.
  • It clearly demonstrates the bug regardless of whether it can be reliably reproduced, which is often not the case.

Screenshots are the best tool for showing an issue and keeping the entire team on the same page.

Confirming the solution

Just as bug descriptions are open to interpretation, so are their resolutions. What is “fixed” to a developer is “off by a pixel” to a designer. Before fixing a bug, committing the code, and moving on, send a screenshot over to whoever needs to confirm the fix and save yourself the runaround of having to fix your fix.

This goes beyond simple UI bugs. I recently filed a Pull Request on GitHub which amounted to a very simple two-line change (I updated the version of jQuery). However, it being an open source project with huge visibility, I wanted to make very clear to others that my fix worked. So, I attached a screenshot simply showing that the version was indeed updated correctly on the site. Now the other developers don’t have to sanity check my fix and can merge the Pull Request worry-free. Proving your solutions with screenshots makes life better for everyone on your team.

Taking screenshots is easy

Taking and sharing screenshots used to be difficult and slow. Luckily, we now have a myriad of tools to optimize this. I have my own roundabout way of taking and sharing screenshots, but there are countless options out there. Taking screenshots in Mac OS X is particularly easy, as there are screenshot-taking utilities built right into the OS (either use the “Grab” application or take a look at Preferences > Keyboard > Keyboard Shortcuts > Screen Shots). Sharing is also easier than ever. You can use Dropbox, Droplr, CloudApp, or any one of the many hosting services available to you. Many of them are free.

Absolutely everything

Screenshots help more than just software developers. They are useful for debugging your family’s IT woes. They are useful for sharing text that can’t be selected for copy/paste. They are useful for showing progress on a design. A picture is worth a thousand words, so save yourself some keystrokes and take screenshots instead!

Developers vs. Engineers vs. Scientists

I’ve been programming professionally for about 3 years at this point, and I’ve noticed some interesting patterns in other programmers I’ve worked with. One of the key differentiators among programmers is motivation. I’m not referring to an individual’s passion to simply be successful in their career, but rather the type of work they want to pursue. The thing they want to do with computers every day, the types of problems they are interested in solving.

The programmers I have observed generally fall into one of three categories: Developers, engineers, and computer scientists (or just “scientists”). These are not silos. They are ranges on a spectrum, and programmers may find themselves oscillating all over this spectrum throughout the course of a day. However, individuals are usually more comfortable in one of these ranges than the others.

It’s important to make something very clear at the outset: the categories I describe here do not imply differing levels of aptitude or intelligence. A developer can be every bit as smart as a computer scientist - it’s the interest area that sets programmers apart. It’s also worth noting that the definitions here are not official in any capacity; this is strictly my opinion.

Developers

Developers want to get things done. They are the prototypers, the guns for hire, the guys you can trust to get something workable by the end of the day. They are also reliable for driving a project to completion. Developers love to keep up on the latest jQuery plugins and Ruby Gems. They are always learning the newest tools that will solve their problems faster. Developers are especially useful for building websites in deadline-driven environments. From a business perspective, it is valuable to have someone like this who can hack something together without getting bogged down in technical minutiae. As for starting a programming career, “developers” face the lowest barrier to entry.

Developers run into trouble when their tools stop working. When the third-party JavaScript library they’ve been using suddenly doesn’t meet project requirements exactly, they often have to resort to strange workarounds and code comments like /* DON'T TOUCH THIS!! */, rather than gaining a comprehensive understanding of the problem and why their solution works. At least as far as this definition is concerned, a developer’s knowledge area is a mile wide and a meter deep.

Engineers

Engineers are interested in the mastery of a problem domain. Engineers don’t settle for solutions that “just work,” they dig until they have a holistic understanding of the big picture. They focus on reusable solutions that scale, elegant architectures, and building tools to automate work. Engineers are adept at designing unique solutions for unique problems. Engineers prefer to write and maintain the Ruby Gems that developers find themselves learning and using.

Engineers are sometimes more interested in building solutions than they are in actually solving problems. Rookie engineers in particular will spend a lot of time solving problems they may never actually have, because it is an interesting challenge for them. If you are managing engineers, it is important to keep them focused on practical tasks, rather than unrestrained experimentation and exploration. That’s what their GitHub hobby projects are for.

Scientists

Computer Scientists live in a world of theory and are the most likely to advance the state of the art. These are the algorithmists, the mathematicians, the statistical analysts. Depending on what you are trying to build, scientists are indispensable. They build solutions for the problems that most people don’t even understand.

At least in my experience, computer scientists aren’t the best builders. While they construct the most elegant and efficient theoretical solutions, they may not write the most readable code or maintainable systems. Since they are in the minority that actually understands certain complex problems, knowledge transfer becomes an issue. Even with good documentation, it can be difficult to get inside the head of a scientist that has been working out a solution for several weeks.

The skill set of a scientist has a lot of overlap with that of an engineer. I would argue that these groups are differentiated by a focus on theory versus implementation. Scientists focus on theory, and engineers focus on actual implementations. The programmer with a good mix of science and engineering is a true phenom and an invaluable asset. This type of programmer is also very rare.

It takes all kinds

A truly effective team consists of all three of these types of programmers. A good baseball team leverages the strengths of each of its players, and a development team is no different. For my own part, I find myself most comfortable near the “engineer” range of the programmer spectrum, but I often venture into the “developer” range for various tasks. I identify myself as a “scientist” less often, but it becomes necessary from time to time. The point is, programmers flit all around the spectrum, but tend to gravitate towards one range more than others. Balance your team according to what you want to build.

Keeping It Sane: Backbone Views and Require.js

I love experimenting with application architecture and organization. I build applications for fun, and each one gives me a chance to try out a new method for keeping a growing codebase manageable and readable. I try to mix it up a bit with each new project, but a number of consistent patterns have emerged and proven themselves useful time and again.

The same is true of my toolset. I have come to really enjoy two libraries in particular - Backbone and Require.js. Each does one thing and does it well: Backbone organizes an application, and Require.js gets code onto the page. What makes these libraries great is that they are not heavy-handed solutions - as long as you know how to use them, they don’t lock you into some weird pattern that makes your code inflexible. Additionally, they work very well together.

That being said, knowing how to use Backbone and Require.js is a discipline. Without a consistent pattern to follow, even a small Backbone app can grow unmanageable and hard to build upon. The patterns I’ll discuss here are designed to grow with your UI and make your life easier.

I have built a sample project that puts the patterns I’ll discuss into use. You can find the code here.

Keeping it real small

Whether you use the patterns or tools I’ll be going over, one concept that applies to nearly all projects is this: Many smaller files are better than fewer large files. It is much easier to read and comprehend a file that is 200 lines long than one that is 2000 lines. Some files cannot be broken up into smaller chunks for any number of reasons, but often that can be a red flag that the code in that file is too tightly coupled and assumes too much about how it is being used. Think of it like building a house: You can create any number of floor plans with a large collection of bricks. When you’re working with an assortment of pre-assembled walls, it becomes harder to create a floor plan that was previously not considered.

While breaking up code across many files will result in a more complex directory structure, the tradeoff is worth it because you increase your chances of reusing certain bits of functionality in future projects. However, you do wind up with a lot of files to work with and load. It just so happens that Require.js is a very powerful tool for working with many files.

Required bootstrapping

You don’t need much Require.js code to glue your app together. Require.js’s only footprints in your project should be the require or define function calls that wrap the contents of each file:

// A View module.  Use the view.viewName.js file naming convention.  This file
// would be called view.button.js.
define(['exports'], function (button) {
  button.View = Backbone.View.extend({});
});

I like to use the exports syntax for defining modules. It is syntactic sugar and is not required for this pattern, but it saves me from having to return the module from somewhere in the middle of the file.
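
For comparison, here is the same module without the exports dependency; the module Object has to be created and returned by hand:

// view.button.js, without the `exports` syntax.
define([], function () {
  var button = {};
  button.View = Backbone.View.extend({});

  // Whatever is returned here becomes the module's public interface.
  return button;
});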

It helps to put all of your View files in a directory called view. The rest of this tutorial assumes this convention. In addition to multiple View files, I like to have a single init.js file that serves as the entry point for the app. Place this somewhere higher up in your directory structure (it is at the top level in this example), because there should only be one of these and it should be differentiated from other files.

// init.js
require(['view/view.button', 'view/view.slider'], function (button, slider) {
  // App initialization code goes here.  When this code runs, we can assume that
  // the files `./view/view.button.js` and `./view/view.slider.js` are loaded
  // and ready to use as `button` and `slider`.
  var buttonView = new button.View();
});

One really great feature of Require.js is the optimization tool. Having a lot of files equates to many HTTP requests, which is slow. Require.js makes it very easy to build a single binary out of all of your modules, making for a much quicker download.
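
The build configuration itself can be tiny. Here is a minimal sketch (with hypothetical paths) that uses init.js from above as the entry point; the optimizer traces its dependency graph and concatenates and minifies everything it finds:

// build.js -- run with: node r.js -o build.js
({
  baseUrl: '.',
  name: 'init',
  out: 'dist/app.min.js'
})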

Backbone boilerplate

Some amount of boilerplate code is needed to build out a Backbone View, but luckily we don’t need much. The pattern I use for Backbone Views more or less echoes the pattern I use for writing from-scratch JavaScript libraries, because it’s flexible and lends itself to modularity and sane public/private APIs. You can see a working example of this pattern in the companion project I built for this tutorial, but the basic pattern is this:

define(['exports'], function (exportedObject) {

  // PRIVATE UTILITY FUNCTIONS
  //

  function noop () {}

  // This is where we define the View.  Every property on this Object is
  // exposed publicly.
  exportedObject.View = Backbone.View.extend({

    'events': {
      'click': 'onClick'
    }

    /**
     * @param {Object} opts
     */
    ,'initialize': function (opts) { }

    /**
     * @param {jQuery.Event} evt
     */
    ,'onClick': function (evt) { }
  });
});

There are a few things to keep in mind with this approach. First, try to keep your View’s public API as small as possible. Expose only the methods that your View needs to do its job, as well as event handlers. It is easier to work with a View that has a small interface, because it helps to abstract away the implementation. Client code should never have to know about your View’s implementation (if it does, that’s called a leaky abstraction). If your View needs to do a complex computation, consider moving that logic into a helper function (inside of the area marked as PRIVATE UTILITY FUNCTIONS) that is called from a public method:

define(['exports'], function (exportedObject) {

  // PRIVATE UTILITY FUNCTIONS
  //

  function longComplexCalculationImplementation () {
    // Implementation code goes here
  }

  exportedObject.View = Backbone.View.extend({

    // Remember, this is a public API.
    'performLongComplexCalculation': function () {
      longComplexCalculationImplementation();
    }
  });
});

The point of this is to move as much implementation code out of the View as possible. A View should have a simple structure - use it for wiring up events to logic and building out APIs. This helps to isolate complexity into discrete components.

Another thing to keep in mind with this pattern is the organization of your View’s methods. Try to group them by their purpose. The order of the groups is up to you, but I prefer to go with something like this:

  1. events map
  2. initialize
  3. Event handlers
  4. Non-event handler public APIs

Here’s an example:

Backbone.View.extend({

  'events': {
    'click': 'onClick'
    ,'click button': 'onButtonClick'
  }

  ,'initialize': function () {}

  ,'onClick': function () {}

  ,'onButtonClick': function () {}

  ,'aPublicMethod': function () {}

  ,'anotherPublicMethod': function () {}
});

The goal is to build towards a consistent View organization.

Narrow Views

My organizational approaches may or may not be your cup of tea, as patterns are always subjective. What appeals to one person is weird and ugly to another. In any case, there are some design patterns that are a little more universal. One such pattern is keeping Views narrowly focused. In other words, any given View should be concerned with as little as practical. The trick is finding a balance between a View that is overly granular and one that is too heavy-handed. In my example project, I have a UIMischief View that just manages some buttons that grow and shrink. I could have taken it a step further and made a View to represent each button, but that would have been unnecessarily abstract and would not have added much to the structure of the app. One of the advantages of the pattern I use is that it’s fairly easy to refactor Views into larger or smaller Views if needed. As long as your code is modular and complexity is isolated into small functions, it should not be hard to move things around.

One way to gauge how “narrow” a View should be is to think about components that will be duplicated in your app. Any UI component that may have multiple instances warrants having its own View.

Keeping the peace

Regardless of how you choose to structure your application, pick a method that is consistent and scalable. The pattern I use has helped me build Stylie and my in-progress Pine UI. I strongly recommend that you use some kind of open source framework such as Backbone or CanJS, because a lot of people have already solved many of the problems you will run into when trying to build an app. As a project grows, managing complexity becomes far more difficult than writing new code. Research a solution that works for you.