We’ve recently launched a completely new site design, simplified our pricing and created a brand new cross-platform desktop app. It’s the last of these I’d like to focus on today; specifically the technology behind it and a few things learned while building it. The following writeup is fairly technical—consider yourself warned!

The motivation

Before we get in too deep let’s evaluate why we felt a desktop (as opposed to a command-line) app was needed:

  1. Simplicity: Finch should ‘just work’ for as many people as possible; the rich interface offered by a desktop versus command-line app gives us a better crack at making this happen
  2. Compatibility: Although Node.js and npm (both required to install the command-line app) are free, open-source and widely compatible, installation often proved troublesome for users less experienced with terminal interfaces
  3. Reach: A by-product of the previous factors; by making the entrypoint simpler and more accessible we’d hope to attract, and interest, a far broader audience.

After a lot of blood, sweat and tears, here’s a screenshot of the result:

The landscape

Attempting to build a cross-platform application—that is, one which will run on multiple operating systems using the same underlying code—presents a number of challenges, especially to a small team such as ours where we simply can’t afford huge amounts of time spent in Research & Development.

The cross-platform / small team constraint is actually a help here since it immediately precludes a number of options (such as a native app written for each supported OS). The shortlist was very quickly whittled down to:

  • Go with an unknown UI toolkit
  • Java with Swing or similar
  • C++ using the QT framework
  • Node.js with Node Webkit (now NW.js)
  • Node.js with Atom Shell (now Electron)

In truth that’s more of a longlist: it didn’t take long to discount Go (no cross-platform GUI solution), Java (horrible cross-platform UI solution) and C++ (lack of familiarity), especially since the command-line app was already written in Node.js, meaning plenty of scope for code reuse (which eventually led to teasing out a standalone package, finch-core, upon which both the command-line and desktop apps now depend). A two-horse race was a much more manageable one, so I rolled up my sleeves and dived in.

NW.js or Electron?

Before answering this question it’s worth establishing exactly what problems these projects aim to solve. I’ve found no better explanation than Electron’s very own introduction, so I’ll simply quote it here:

Electron enables you to create desktop applications with pure JavaScript by providing a runtime with rich native (operating system) APIs. You could see it as a variant of the Node.js runtime that is focused on desktop applications instead of web servers.

This doesn't mean Electron is a JavaScript binding to graphical user interface (GUI) libraries. Instead, Electron uses web pages as its GUI, so you could also see it as a minimal Chromium browser, controlled by JavaScript.

I’ll add my own ham-fisted definition into the mix too:

Electron allows you to create desktop Node.js applications with a Graphical User Interface driven by web technologies—HTML, CSS, JavaScript et al—while also providing access to OS-native capabilities like menu creation, window manipulation and clipboard access.

Note that anywhere you read ‘Electron’ above you can substitute ‘NW.js’—they have architectural and implementation differences, but these make little difference from a development point of view once their respective boilerplate setup is out of the way.

The Finch desktop app was initially prototyped using both projects; at the time Electron was in its infancy and lacked documentation, so on maturity alone NW.js won the day. Given the enormous development effort behind Electron, its rapid release cycle and superb website (particularly compared to the rather woeful NW.js equivalent), I’d probably choose Electron if I was starting afresh, though in reality you can achieve the same end result whichever you use. Both are backed by technology giants (Intel sponsors NW.js, while Electron is a GitHub project), neither looks to be going anywhere any time soon, and NW.js has a major new release on the not-too-distant horizon. Whichever you pick, you’re probably in safe hands.

Getting started with NW.js

Getting started with NW.js is pretty straightforward; download the correct version for your operating system and follow the quick start guide. The result will be a fairly underwhelming window which reeks of a web page, but you’re up and running!
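If you’d like a feel for what the quick start involves before clicking through, it essentially boils down to a package.json manifest telling NW.js where your entry HTML file lives. A minimal sketch (the name and window values here are purely illustrative):

```json
{
  "name": "my-app",
  "main": "index.html",
  "window": {
    "title": "My App",
    "width": 800,
    "height": 600
  }
}
```

Drop that alongside your index.html, point the nw executable at the directory, and the window should appear.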

Incorporating React and Flux

React and Flux are complementary libraries developed by Facebook which aim to simplify JavaScript application development—a goal they achieve spectacularly well. I’ve tried a huge raft of JS libraries over the years and nothing comes close to the development speed, performance and robustness of React, especially when backed by the architecture encouraged by Flux. This isn’t a tutorial on using either library, but if you’d like to see one by all means let us know.

Getting React and Flux integrated into NW.js is fairly straightforward, although a couple of slight shims are needed to get things working smoothly. This is, more or less, the index.html file used to bootstrap the current version of the desktop app:

<!doctype html>
<html>
  <head>
    <script>document.querySelector("html").className = "html--" + process.platform;</script>
    <link type="text/css" rel="stylesheet" href="assets/dest/css/main.css" />
  </head>
  <body>
    <div id="view"></div>
    <script>
      if (!process.env.NODE_ENV) {
        process.env.NODE_ENV = "production";
      }

      var Bootstrap = require("./lib/Bootstrap");

      // without these React blows up
      global.document = window.document;
      global.navigator = window.navigator;

      var React = require("react");
      var Wrapper = require("./components/Wrapper");

      // we have to set up some initial state before calling
      // React.render otherwise we get a race condition around
      // network connectivity
      Bootstrap(function() {
        React.render(React.createElement(Wrapper), document.getElementById("view"));
      });
    </script>
  </body>
</html>
The NODE_ENV guard serves two purposes: it allows us to test the app in different modes, and it ensures that when started normally the value is set to ‘production’—important because React and Flux look for it to optimise certain behaviour and remove some debug output.

React wasn’t written with NW.js in mind and assumes a couple of globals are available without explicitly accessing them via the window object, so we give it a helping hand before requiring it by attaching them to Node’s global object.

Lastly, note how we try to load React as soon as possible and keep our HTML to a minimum; the sooner we can get out of our initial HTML file (where there’s enormous scope for context confusion—a quick read of this nw.js Wiki page will show you why) and into pure JS-only code on the Node side of things, the better. React complements this JS-only approach nicely since all components (and thus views) are written in JS or JSX, meaning minimal mental context switching when developing your app.

Bridging over the ‘Uncanny valley’

The trouble with a user interface which is essentially a web page is that it looks like a web page. If someone’s just opened their web browser that’s fine, because it’s what they expected to see. If they’ve just started an application then it definitely isn’t—and the experience will be jarring. At this stage you’re at risk of sliding into the Uncanny Valley, where the app is almost good enough to pass for a native one but exhibits tell-tale behaviours which give the game away that it isn’t. The result is a less absorbing overall experience and a more negative reaction than if the app hadn’t masqueraded as native in the first place.

The following is an incomplete and heavily subjective list of things you can try (with both Electron and NW.js) to attempt to get across the Uncanny Valley and make it safely to the ‘native app’ side.

Go frameless

While a window frame doesn’t indicate that an app isn’t native, removing it goes a long way to making it feel like it is. You’ll need to set window.frame and window.toolbar to false to achieve this with NW.js.
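In NW.js manifest terms that’s a couple of boolean flags; a fragment along these lines:

```json
{
  "window": {
    "frame": false,
    "toolbar": false
  }
}
```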

Making your window frameless means you’ll lose your operating system’s default controls (close, minimise, maximise), so you’ll need to re-implement these behaviours yourself. Users won’t be able to drag your app either until you add the relevant CSS rules instructing webkit exactly which elements should be draggable. This means you need to give some thought to creating a clearly defined toolbar for your app, but the extra control and styling gained are more than worth it.
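The drag rules themselves are plain (webkit-only) CSS. A sketch, assuming a hypothetical .toolbar element serves as the drag handle:

```css
/* let the custom toolbar act like a native title bar */
.toolbar {
  -webkit-app-region: drag;
}

/* controls inside the drag region must opt back out,
   otherwise they can't be clicked at all */
.toolbar button {
  -webkit-app-region: no-drag;
}
```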

Control your window’s display lifecycle

By default NW.js will show your window as soon as it can, often resulting in a flash of empty content before your app can take over and render something meaningful. Little details like this are big giveaways and can easily break the spell, especially on slower computers. Setting and window.show_in_taskbar to false in your package manifest allows you to show your application programmatically, when you’re good and ready. The same advantage applies in reverse too; any application cleanup you need to do when a user instructs your app to quit can be done after you’ve hidden the window, making the exit process feel instantaneous even when it isn’t.
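The manifest side of this is just a couple of flags:

```json
{
  "window": {
    "show": false,
    "show_in_taskbar": false
  }
}
```

Once your first render has completed you can reveal the window yourself via NW.js’s gui.Window.get().show(), and hide() it again before doing any slow teardown on exit.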

Lose the scrollbar

If you’re developing your application on OS X you won’t even notice you’ve got one, but as soon as you test it on Windows or Linux you’re going to see an ugly scrollbar—giveaway alert!

Luckily webkit lets you style the scrollbar via CSS making it easy to get rid of across the board:

::-webkit-scrollbar {
  width: 0 !important;
}

Add a tray icon

Mostly relevant to OS X applications and by no means an absolute necessity, a tray icon can play a supporting role to your application’s main window (or replace it completely in some instances). We make use of this on Finch to allow users to close the main window altogether once they’re set up and sharing some sites; having a tray icon is a gentle reminder that the app is still running while keeping it out of your way.

The tray behaviour is by no means perfect on Windows and Linux so it’s wise to offer it as a configuration option to your users. We experimented with sensible defaults while developing the app and in the end only turned on ‘Close to tray’ (what happens when the user clicks the close icon in the main window) on OS X by default. Users can change this, but this approach felt like the most sensible option.

Native controls like context and tray menus help make the app feel more at home

Don’t use <a> tags for in-app navigation

This takes a bit of getting used to since you’ll naturally lean towards anchor links for navigation—but don’t! All links in webkit can be clicked and dragged, which reveals their destination:

Talk about a bubble burster. You can use any element you like instead of anchors by taking advantage of the onclick event—something you’ll probably be doing anyway if you’re using React.

Get rid of cursor:pointer

If you’re not using anchors you’ll probably dodge this anyway, but don’t be tempted to make your substitute links act too much like web links. The pointer cursor indicating clickability is a web behaviour and should stay there—check any of your native applications and they’ll almost certainly convey clickability by hover state alone.
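If a stray cursor: pointer does creep in from a reset or a shared stylesheet, stamping it out is a one-liner (the class name here is illustrative):

```css
/* native apps convey clickability via hover states, not the hand cursor */
.link {
  cursor: default;
}
```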

Take advantage of only needing to support webkit

Web developers have cross-browser compatibility drilled into them, but it’s a concern you can throw out of the window when developing NW.js apps. Your app is only ever rendered by webkit, so take advantage of what it can do. If your app is a fixed size you can take this even further by getting things pixel-perfect—throw those ems and rems in the bin if you need to!

Tweak things per OS (if you really have to)

Eagle-eyed observers may have noticed the inline script in our app’s bootstrap HTML:

<script>document.querySelector("html").className = "html--" + process.platform;</script>

Although something of a last resort, this does allow fine-tuning to compensate for any rendering discrepancies between operating systems. In truth we only found one issue bad enough to warrant fixing: while OS X adds its own nice border and drop shadow around every app, Windows and Linux don’t, resulting in a white app which completely blends in if the background also happens to be white. It doesn’t sound like much, but it felt pretty disconcerting. Manually adding a CSS border across the board fixed things everywhere else but made the app look coarse and doubly framed on OS X. Being able to target each OS individually made this trivial to solve.

Where’s Finch? No border on Linux or Windows just looks downright weird.

Throw in some delighters

Only having to worry about webkit means you can take advantage of transitions, transforms and animations confident in the knowledge that your users will see them exactly as you do. Use this to your advantage to create subtle UI flourishes—you don’t need to go overboard of course, but small interaction animations breathe a lot of life into your application’s interface.

Use the fork

One of the things we struggled with most during development was maintaining buttery-smooth animations and transitions throughout the app. The most notable offender was the blue/green connecting animation when sharing a site (note that the animation below is far less smooth than in the app itself):

It doesn’t seem like much, but there’s a lot of work going on to establish a secure connection when you click that button. The single-threaded nature of JavaScript hurt us here, causing stuttering on slower machines and instantly breaking the native-app spell; no native application should struggle to render a fairly innocuous animation like that.

This was by far the most difficult problem to solve and led to much time spent staring at Chrome’s various debugging tools, trying awkward CSS animation hacks, and even investigating Web Workers and Service Workers (neither of which NW.js supports). Stuttering animations would have been a show-stopper, but in the end the solution was reasonably simple: fork a child process to handle all the heavy lifting and leave the main thread for the UI.

Deciding what to fork

In our case this was pretty easy; all of the computationally expensive logic sits in the finch-core Node.js package, so it made sense to create a drop-in replacement (i.e. with the same, or as-close-as-we-can-get, interface) which forks a new process which in turn requires the real finch-core. Doing so meant that anywhere in the app which looked like this:

var finch = require("finch-core");

simply changed to:

var finch = require("./lib/forked-finch-core");

All forked-finch-core has to do is start a child process and then marshal data to and from it when requested. The following is a stripped-down (and non-functional) example of what this looks like:

var events = require("events");  // retained from the full module, which emits session events
var util = require("util");
var _ = require("lodash");
var child_process = require("child_process");

// callers' callbacks, keyed by type and session identifier
var callbacks = {};

var fork = child_process.fork(__dirname + "/fork.js", [], {
  env: _.assign({}, process.env)
});

// add a callback to an object stack, keyed by type and an identifier
function queueCallback(id, type, callback) {
  callbacks[type + ":" + id] = callback;
}

// invoke a callback when the child process has something to tell us
function invokeCallback(id, type, args) {
  var key = type + ":" + id;
  if (!callbacks[key]) {
    // spit out some debug, handle gracefully
    return;
  }
  callbacks[key].apply(null, args);
  callbacks[key] = null;
}

fork.on("message", function(m) {

  // all messages received back up from the child process adhere to
  // a custom object structure we've defined. As such, we can interrogate 'm'
  // to work out how to propagate any relevant information back out to our caller

  var id =;

  switch (m.type) {
    case "callback":
      invokeCallback(id, m.cb, m.args || []);
      break;

    case "connect":
    case "ready":
    case "closing":
    case "idle":
    case "data":
    case "start":
    case "close":
      // handle various session states here by emitting the event
      // and any associated data, allowing callers to tune in to
      // session lifecycle events
      break;
  }
});

// our exported object simply mirrors that offered by the real 'finch-core'
// package. The caller never has to know that we're actually just marshalling
// data to and from a child process

module.exports = {
  forward: function(id, params, callback) {
    fork.send({
      type: "forward",
      id: id,
      params: params
    });

    queueCallback(id, "forward", callback);
  },

  close: function(id, callback) {
    fork.send({
      type: "close",
      id: id
    });

    queueCallback(id, "close", callback);
  }

  // etc - all methods exposed by `finch-core` need to be proxied
};

The child process itself, fork.js, then looks a bit like this:

var finch = require("finch-core");

// live sessions, keyed by identifier, so they can be closed later
var sessions = {};

process.on("message", function(m) {
  switch (m.type) {
    case "forward":
      forward(, m.params);
      break;

    case "close":
      close(;
      break;

    // etc - all methods exposed by `finch-core` need to be proxied
  }
});

function forward(id, params) {
  var session = finch.forward(params, function(err, response) {

    // tell our parent process that the callback has been invoked and
    // pass it the callback's arguments
    process.send({
      type: "callback",
      cb: "forward",
      id: id,
      args: [err, response]
    });
  });

  sessions[id] = session;

  // relay each session lifecycle event back up to the parent
  session.on("start", function() {
    process.send({ type: "start", id: id });
  });

  session.on("connect", function() {
    process.send({ type: "connect", id: id });
  });

  session.on("ready", function() {
    process.send({ type: "ready", id: id });
  });

  session.on("close", function(info) {
    process.send({ type: "close", id: id, info: info });
  });

  session.on("closing", function() {
    process.send({ type: "closing", id: id });
  });

  session.on("idle", function() {
    process.send({ type: "idle", id: id });
  });

  session.on("data", function() {
    process.send({ type: "data", id: id });
  });
}

function close(id) {
  finch.close(sessions[id], function(err) {
    process.send({
      type: "callback",
      cb: "close",
      id: id,
      args: [err]
    });
  });
}

This process isn’t perfect; you’re limited to passing simple data structures between the processes and of course have to manually proxy the forked module’s API, but it’s pragmatic, fast and reliable. In our case it rescued our animations, and the app as a whole, so the added complexity was a small price to pay indeed.

Packaging things up: building installers

A native-feeling app is one thing, but it’ll be sunk if the installation mechanism isn’t familiar to users of a particular operating system. That means some sort of setup wizard for Windows, a DMG installer for OS X, and a .deb package for Debian-based Linux distributions (namely Ubuntu—the only one we’re supporting in practice at the moment).

Creating a .dmg for OS X

Creating an OS X DMG file is easy enough and plenty of tools are available to help you do so. I used the superb appdmg since it’s Node.js based and could be used programmatically if required (the command-line app was good enough in the end).
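For reference, appdmg is driven by a small JSON spec describing the DMG’s appearance and contents; a rough sketch (paths, names and coordinates are illustrative):

```json
{
  "title": "Finch",
  "icon": "assets/finch.icns",
  "contents": [
    { "x": 448, "y": 344, "type": "link", "path": "/Applications" },
    { "x": 192, "y": 344, "type": "file", "path": "build/osx/" }
  ]
}
```

Running appdmg spec.json Finch.dmg then produces the installer.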

While creating the DMG was simple enough, code signing it—to ensure users weren’t blocked from installing it without control-clicking the installer—took a little more effort. Code signing is beyond the scope of this article but be aware that you’ll need to sign up for an Apple Developer account which has an associated annual cost.

Creating a .deb for Linux

Unsurprisingly, I couldn’t find much conclusive evidence of ‘the’ way to package things for Linux. In the end I used dpkg-deb in tandem with the prerequisite DEBIAN folder it requires, which holds package metadata displayed to the user at installation time.

Generating the Debian installer was relatively painless, albeit involving a lot of trial and error; understanding exactly what was and wasn’t required inside the DEBIAN directory took a little practice.
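For anyone treading the same path, here’s a sketch of the sort of layout that works (package name, paths and metadata are all illustrative):

```shell
# minimal Debian package layout; the DEBIAN directory holds installer metadata
mkdir -p finch-pkg/DEBIAN finch-pkg/opt/finch

# 'control' is the one file dpkg-deb strictly requires
cat > finch-pkg/DEBIAN/control <<'EOF'
Package: finch
Version: 1.0.0
Architecture: amd64
Maintainer: Example Maintainer <>
Description: Finch desktop app
 Test development sites on multiple devices.
EOF

# application files go wherever they should land on the target system,
# e.g. cp -r build/linux/* finch-pkg/opt/finch/

# finally, build the installer:
# dpkg-deb --build finch-pkg finch_1.0.0_amd64.deb
```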

Creating a .exe for Windows

A bit of Googling turned up an excellent install packager for Windows, Inno Setup. Don’t let the modest-looking website fool you; Inno Setup has you covered. Again a little trial and error was needed here, but the Inno Setup GUI is very good at guiding you through any issues encountered building your installer, and it also runs perfectly happily via the command prompt for automated builds.
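To give a flavour, an Inno Setup script is a plain .iss file of INI-style sections; a stripped-down sketch (names and paths are illustrative):

```ini
; minimal Inno Setup script
[Setup]
AppName=Finch
AppVersion=1.0.0
DefaultDirName={pf}\Finch
OutputBaseFilename=finch-setup

[Files]
Source: "build\win64\*"; DestDir: "{app}"; Flags: recursesubdirs

[Icons]
Name: "{group}\Finch"; Filename: "{app}\finch.exe"
```

Compiling it (via the GUI, or iscc.exe for automated builds) produces the setup executable.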

By far the most painful part of creating the Windows installer was purchasing and installing a valid code-signing certificate. Apple’s walled garden may have its pitfalls but it does at least mean that these sorts of processes are well documented and relatively easy to follow. The Windows code-signing ecosystem is positively wild-west by comparison, and it took a great deal of time to understand what type of certificate was required, where to get one and whether it would even work once purchased. For reference, here’s what Windows 8 makes of an unsigned executable file:

Ouch. You can click the almost hidden ‘More Info’ link to proceed but frankly, nobody’s going to.

Note that even with the certificate installed and the installer code-signed, the app still received untrusted-publisher warnings; thankfully these are far less aggressive dialogs and they stopped appearing over time as the beta version of the app became more widely trusted (although what exactly ‘trust’ entails is fairly opaque). Presumably this can be mitigated by purchasing a more thorough (read: more expensive) certificate and/or taking additional steps to verify one’s identity with Microsoft, but it’s worth being aware of for smaller, resource-constrained developers.

This was by far the most painful step in publishing the app, and the lack of any coherent documentation on the topic was at times a bit alarming. In the end we went with a certificate from K Software, who simply resell Comodo certificates at a far more reasonable rate than Comodo themselves. Be warned that the process of verifying your identity as a publisher is extremely tedious and requires a telephone call, a precursor to which is having your company listed on a ‘reputable’ directory site, a precursor to which is a lot of form filling and (gasp) another phone call, plus a series of bombardments afterwards chasing you about your listing. It’s not pleasant and takes time, but it’s a necessary evil and does eventually work—so hang in there.

Automating the build process

With different versions of the app packaged up for each major OS we wanted to support, the last piece of the puzzle is of course automating the process of building a new release. There are various solutions for this within the NW.js ecosystem, but at the time the application was prototyped they didn’t quite offer enough fine-grained control over both the packaging of the application’s assets and the code signing required for the OS X and Windows installers. The state of play may have evolved since, but a cursory glance at the relevant Wiki page doesn’t fill me with confidence (your mileage may vary). As such, the build process is a hand-rolled affair, configured and managed using Jenkins CI in the form of various separate jobs grouped by up/downstream relationships:

The build process shown using Jenkins’ pipeline view. It’s not entirely accurate as we only actually have one finch-gui-package-deploy job; it’s shown four times as it is a downstream job of each individual job in the middle column. It only runs once all four of the middle jobs have completed successfully.

Step one

Since all the node modules we depend on are JavaScript-only, we can get away with building one common job before splitting off to create our various installers. This initial job takes care of stripping out all development files, redundant assets and metadata, while also minifying all JS, CSS and images, presenting a pristine zip file of the application once completed.

Step two

Why a zip file? Because step two is where we create our per-operating-system installers, which run on three separate worker machines (one for each OS we support). A zip file is smaller and easier to copy using scp from the master node to each worker. These jobs run in parallel and simply automate the processes detailed previously to create a signed .dmg, a .deb and two signed .exe files (one of which targets 32-bit flavours of Windows—yes, they still exist!). Once done, each worker copies its installer back to the master Jenkins node.

Separate nodes means the step two jobs are built concurrently—a nice bonus!

Step three

Last but not least, we fingerprint each installer and plot their sizes on a graph using the crude but effective Jenkins plot plugin. We don’t obsess about installer size since a ~30MB download won’t even make most people flinch, but it’s something we keep an eye on to make sure we don’t accidentally introduce an enormous dependency or forget to compress our assets.

Each individual installer’s filesize, in bytes (this image predates the win32 build)

Once these steps are completed the installer files are copied to the main host via scp, and we’re done!


In truth there’s even more to it than has been discussed here, but the remaining pain points are best served by separate deep-dive articles in their own right. With any luck this overview will help other developers diving into the weird and wonderful world of NW.js / Electron, and hopefully the Finch app serves as some proof of what these fantastic projects can achieve with a bit of love and craft. If you haven’t already, please do download the app and let us know what you think.

If you have any questions, comments or suggestions about the app or the article, or there’s a follow-up article you’d particularly like to see, please don’t hesitate to get in touch; you can email me directly at


Written by Nick Payne
Founder & Lead Developer

Finch helps web designers and developers test their development sites on multiple devices without the need for lengthy deployments or public staging servers. Registration is free & fast.