
Experimenting with node.js/mite

June 03, 2010 by alex

Recently on one of my research fridays I decided to work on a problem that’s been bugging us for a long time. We are big fans of the timetracking service mite and use it for all our projects. The problem is that mite is designed around accounts. You can have many users per account, and then you can run reports across all the users that worked on specific projects. On most projects we’re working with independents who have their own mite accounts. Our problem is that we can’t run reports across multiple mite accounts.

Mite does have a nice JSON API though, so there must be a way to fix this. The result of my work is mite.enterprise, which allows you to enter multiple mite accounts and then report on them in one go. You can find the source on GitHub, and it’s running at mite-e.upstre.am, so you can use it straight away.

While the app itself is relatively simple, it posed some challenges I want to share here:

Making multiple API requests and still being fast

The way the app works is that every time it needs data from mite it makes all the necessary queries to the mite API and immediately delivers the results to the browser. It doesn’t have its own persistence or cache layer for holding data. In order to keep things fast, these requests to the mite API need to run in parallel. While I could have written a standard Rails app and spawned a new thread for every request, I decided to try something new: node.js. Node is the new cool kid on the block, a server-side JavaScript framework where everything is handled asynchronously, similar to Ruby’s EventMachine.

In order to run the API requests in parallel I wrote a JavaScript function that fires off the requests and collects all the data in an asynchronous fashion:

// collects the results of no_of_requests asynchronous calls and
// fires the final callback once all of them have returned
function DataCollector(no_of_requests, callback) {
  var datas = [];
  return {
    // each call creates a fresh callback; all of them share the
    // datas array through the closure
    collector_callback: function() {
      return function(data) {
        datas.push(data);
        if(datas.length == no_of_requests) {
          callback(datas);
        }
      };
    }
  };
}

var mite_client = {
  time_entries: function(params, callback) {
    // go to the mite api and pass the data to the callback
  }
};

var project_ids = [1, 2, 3, 4, 5],
  data_collector = DataCollector(project_ids.length, function(datas) {
    // do something with the collected data, e.g. send it to the browser
  });

project_ids.forEach(function(project_id) {
  mite_client.time_entries({project_id: project_id}, data_collector.collector_callback());
});

I think it’s best to read this from bottom to top. At the bottom, for every project id that was given to the app (I simplified the code for readability) I ask the mite client for all the time entries for that project. The mite client takes a callback, which in turn gets passed the actual data. Since I don’t want to act on the data immediately but wait until all the requests have returned, I ask the DataCollector instance for a callback. DataCollector uses a closure to collect all the returned data: every call to collector_callback creates a fresh function that I pass to the mite client, and because all these functions have access to the shared datas array, they can push their data onto it. Only when enough data has been collected is the final callback invoked to process it all.

While this might seem a little complicated and confusing at first, I think it actually is pretty cool. Because of all the callbacks the server app never waits for any I/O operation to complete, hence it is lightning fast and can probably handle thousands of concurrent clients.
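The time_entries function above is just a stub. Here is a sketch of what it might look like using node’s http module; the hostname, path, and API key header are my assumptions about the mite API, so treat them as illustration rather than gospel:

var http = require('http');

var mite_client = {
  // params is assumed to hold the account subdomain, its API key and
  // the project id; adjust to however you keep your account data
  time_entries: function(params, callback) {
    http.get({
      host: params.account + '.mite.yo.lk',               // assumption: mite account subdomain
      path: '/time_entries.json?project_id=' + params.project_id,
      headers: { 'X-MiteApiKey': params.api_key }         // assumption: mite's auth header
    }, function(res) {
      var body = '';
      res.on('data', function(chunk) { body += chunk; }); // collect the response chunks
      res.on('end', function() {
        callback(JSON.parse(body));                       // hand the parsed JSON to the collector
      });
    });
  }
};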

Storing account data securely

In order to talk to the mite API we need the accounts’ API keys. I didn’t want to have to worry too much about security and how to store this data on the server, so I chose a different approach: I’m not storing any data on the server at all. Instead, I’m using the browser’s local storage, a small key-value store that most modern browsers support. The data is stored on the client’s hard disk, hence there’s no danger that anyone will steal all the API keys from the server, because they are distributed all across the web.
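In code this amounts to a few lines; here is a minimal sketch (the 'accounts' key and the data layout are made up for illustration):

// localStorage only holds strings, so the account list is
// serialized to JSON before it is stored
function saveAccounts(accounts) {
  localStorage.setItem('accounts', JSON.stringify(accounts));
}

// returns an empty list on a first visit, when nothing is stored yet
function loadAccounts() {
  var json = localStorage.getItem('accounts');
  return json ? JSON.parse(json) : [];
}

saveAccounts([{ account: 'upstream', api_key: 'secret-api-key' }]);
loadAccounts(); // => [{ account: 'upstream', api_key: 'secret-api-key' }]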

Another advantage of this approach is that I don’t need a signup/login process for the app. When users go to the mite.enterprise site they can start using it immediately. No email address, no password. The app just looks into the local storage and loads the corresponding data.

Nice and fast GUI

I wanted to keep things small and simple, so I decided to keep all the HTML/CSS out of the server. My node.js server only serves JSON and static files. For handling user interactions and rendering templates I’m using the excellent Sammy framework in combination with mustache.js.

Sammy is like Sinatra, but implemented in JavaScript and running in the browser. You can map URLs to actions, but instead of loading a new page for every request you make a few AJAX requests (or not even that) and then replace whatever parts of the page you want using jQuery.
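To give you an idea, here is a minimal sketch of a Sammy route that fetches JSON from the node backend and renders it with mustache.js; the URL, template and element selector are made up for this example:

var app = $.sammy('#main', function() {
  // runs whenever the browser navigates to #/projects
  this.get('#/projects', function(context) {
    $.getJSON('/projects.json', function(projects) {
      var template = '<ul>{{#projects}}<li>{{name}}</li>{{/projects}}</ul>';
      // render client-side and swap the result into the page
      context.$element().html(Mustache.to_html(template, { projects: projects }));
    });
  });
});

$(function() { app.run('#/projects'); });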

Conclusions

Node.js is an awesome framework. First, it lets me implement servers that are fast as hell; second, I can implement them in the same language I use for the frontend. With local storage I don’t have to implement a signup/login process. Instead, users can use the app immediately and their data is stored on their own computers. With Sammy I can quickly put a rich and responsive GUI on top of my servers, which only have to deliver JSON. I guess it’s the future :)

Oh, and mite just became much more useful for us, too.
