commit 71b28afb0c5f9eb4334e577c703fc741ceaf593a
Author: James Halliday
Date: Tue Jul 14 15:31:22 2015 -0700

This writeup is cross-posted from the sudoroom blog.

serial over webaudio

At the last hardware hack night, Jake and I got a web page to transmit uart serial data to an arduino at 9600 baud using the webaudio browser api.

Sending serial data from an ordinary web page is interesting because it means you can communicate with hardware without installing any special software onto a device. This makes setting up a system faster and easier, which is really useful for annoying computers like phones where it is very difficult to install and run new software. Any device with a modern web browser will do: android, iOS, linux, windows, MacOSX, whatever!

For a quick demo, check out this web page:

Here is the final project in action:


Serial uses a protocol called UART (Universal Asynchronous Receiver/Transmitter) with a very simple data framing scheme.

UART begins with a low start bit (0), then a character (5-8 bits, configurable but just use 8 bits) with the least significant bit first, followed by 1 or more high stop bits (1s).

For example, to send the message "ABC", we would generate the binary data:

| start |    character    | stop |
| 0     | 1 0 0 0 0 0 1 0 | 1    |
| 0     | 0 1 0 0 0 0 1 0 | 1    |
| 0     | 1 1 0 0 0 0 1 0 | 1    |

Written another way, this data is the binary string: 0100000101 0010000101 0110000101.
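
To make the framing concrete, here is a tiny function of my own (not from the original writeup) that builds these bit strings; the stops argument anticipates the variable stop bits described below:

// frame one byte for UART: start bit, 8 data bits LSB-first, stop bits
function frameByte (c, stops) {
    var bits = '0'; // start bit (low)
    for (var i = 0; i < 8; i++) {
        bits += (c >> i) & 1; // least significant bit first
    }
    for (var j = 0; j < (stops || 1); j++) bits += '1'; // stop bits (high)
    return bits;
}

console.log(frameByte('A'.charCodeAt(0))); // '0100000101'
console.log(frameByte('B'.charCodeAt(0))); // '0010000101'
console.log(frameByte('C'.charCodeAt(0))); // '0110000101'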

Under UART we can send as many stop bits as we want, so if there is no data available, we can keep sending stop bits until there is more data we want to send.

For example, we can send the same "ABC" message from before with more stop bits to control the timing of A, B, and C:

| start |    character    | stop                   |
| 0     | 1 0 0 0 0 0 1 0 | 11111111111111111      |
| 0     | 0 1 0 0 0 0 1 0 | 11111                  |
| 0     | 1 1 0 0 0 0 1 0 | 1111111111111111111111 |

Written another way, this data is the binary string: 01000001011111111111111111 00100001011111 0110000101111111111111111111111.

I wrote a javascript library called uart-pack-frame to implement UART framing with .write() and .read() methods.


Now that we have a way to frame characters for UART, we can send the framed data through the browser's webaudio API.

To do this, we can use the webaudio API's script processor node:

var Context = window.AudioContext || window.webkitAudioContext;
var context = new Context;

var sp = context.createScriptProcessor(2048, 1, 1);
sp.addEventListener('audioprocess', function (ev) {
    // ...
});
sp.connect(context.destination); // the node must be connected for events to fire

Inside the 'audioprocess' event, we can populate an output buffer with the data we want to send. For example, to send an alternating pattern of -1 and 1 we can do:

sp.addEventListener('audioprocess', function (ev) {
    var output = ev.outputBuffer.getChannelData(0);
    for (var i = 0; i < output.length; i++) {
        output[i] = i % 2 ? -1 : 1;
    }
});

The output buffer expects floating point values from -1 to +1, inclusive.

These values map to the voltage coming out of the audio jack, which is exactly what we'll need to send our UART data.

The other important piece of information we need is the number of samples per second, which is available as context.sampleRate. The sample rate depends on the system, but some common values are 44100 and 48000. This sample rate sets the theoretical ceiling on how fast we can transmit: at 44100 samples per second and 9600 baud, for example, each bit only spans about 4.6 samples.

Next we'll need to pick a baud rate that will work with the capacitors that filter out frequencies that are too high or too low. In practice 9600 and 4800 baud seem to work reliably, depending on the device.

To send audio data, we divide the audio sample rate by the baud rate. This ratio defines a window size: the number of audio samples to hold each UART bit.

var baudRate = 9600;
var windowSize = context.sampleRate / baudRate;
var bits = [ 0, 1, 0, 0, 0, 0, 0, 1, 0, 1 ]; // the letter A with UART framing

// repeat each bit by the window size number of times
var nbits = [];
for (var i = 0; i < bits.length; i++) {
    for (var j = 0; j < windowSize; j++) {
        nbits.push(bits[i]);
    }
}

sp.addEventListener('audioprocess', function (ev) {
    var output = ev.outputBuffer.getChannelData(0);
    for (var i = 0; i < output.length; i++) {
        var b = nbits.shift();
        if (b === undefined) output[i] = 0;
        else output[i] = b ? -1 : 1;
    }
});

I've wrapped up all of these concepts into a library called webaudio-serial-tx that sets up the webaudio context and encodes input with uart-pack-frame.

With webaudio-serial-tx you can write a program to speak serial over the audio output in the browser:

var serial = require('webaudio-serial-tx');
var port = serial({ baud: 9600 });

port.write('HACK THE PLANET');

Save this program as serial.js, then install node.js (which comes with npm) and do:

$ sudo npm install -g browserify
$ npm install webaudio-serial-tx
$ browserify serial.js > bundle.js
$ echo '<script src=bundle.js></script>' > index.html

Then open index.html in your browser to hear a short blip of data.

You can load a simple demo of webaudio-serial-tx at:


Finally, we'll need a simple circuit to interface the audio jack with the arduino:


Circuit diagram by Jake.


With all of those pieces together, check out the final product in action:

The baud rate settings are fussy and specific to the device, but from what I can gather online, other people have the same issues sending serial over audio.

My laptop and Jake's phone worked well at 9600 baud, but my phone worked best at 4800 baud.

A good next step would be to get the browser microphone API working for serial input for full duplex serial communication using only a web page.
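
As a rough sketch of what that receive side might look like (my own guess, using the prefixed getUserMedia API of current browsers), we could sample microphone input with another script processor and decode bits at the same window size:

navigator.getUserMedia = navigator.getUserMedia
    || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

navigator.getUserMedia({ audio: true }, function (stream) {
    var source = context.createMediaStreamSource(stream);
    var rx = context.createScriptProcessor(2048, 1, 1);
    rx.addEventListener('audioprocess', function (ev) {
        var input = ev.inputBuffer.getChannelData(0);
        // watch for a low start bit, then sample the middle of each
        // windowSize-sized span of samples to recover the UART bits
    });
    source.connect(rx);
    rx.connect(context.destination);
}, function (err) { console.error(err) });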


commit 2aacc70fc0c4dced9d78811c99c3ee7e7d17a54a
Author: James Halliday
Date: Tue Jan 13 12:04:08 2015 -0800

Here are some tiny backend node modules I like to glue together to build webapps.

Check out the repo on github: substack-flavored-webapp.

var alloc = require('tcp-bind');
var minimist = require('minimist');
var argv = minimist(process.argv.slice(2), {
    alias: { p: 'port', u: 'uid', g: 'gid' },
    default: { port: require('is-root')() ? 80 : 8000 }
});
var fd = alloc(argv.port);
if (argv.gid) process.setgid(argv.gid);
if (argv.uid) process.setuid(argv.uid);

var http = require('http');
var ecstatic = require('ecstatic')(__dirname + '/static');
var body = require('body/any');
var xtend = require('xtend');
var trumpet = require('trumpet');
var through = require('through2');
var encode = require('he').encode;
var fs = require('fs');
var path = require('path');

var router = require('routes')();
router.addRoute('/', function (req, res, params) {
    read('main.html').pipe(layout(res)); // hypothetical index page in static/
});
router.addRoute('/hello/:name', function (req, res, params) {
    layout(res).end('hello there, ' + encode(;
});

router.addRoute('/submit', post(function (req, res, params) {
    layout(res).end('form submitted!');
}));

var server = http.createServer(function (req, res) {
    var m = router.match(req.url);
    if (m) m.fn(req, res, m.params);
    else ecstatic(req, res);
});
server.listen({ fd: fd }, function () {
    console.log('listening on :' + server.address().port);
});

function post (fn) {
    return function (req, res, params) {
        if (req.method !== 'POST') {
            res.statusCode = 400;
            return res.end('not a POST\n');
        }
        body(req, res, function (err, pvars) {
            fn(req, res, xtend(pvars, params));
        });
    };
}

function layout (res) {
    res.setHeader('content-type', 'text/html');
    var tr = trumpet();
    read('layout.html').pipe(tr).pipe(res);
    return tr.createWriteStream('#body');
}

function read (file) {
    return fs.createReadStream(path.join(__dirname, 'static', file));
}


  • tcp-bind - allocate a low port before dropping privileges
  • routes - organize routing
  • ecstatic - serve static files
  • body - parse incoming form data
  • trumpet - insert html into layouts


tcp-bind will allocate a socket on a port so that you can bind as root and then drop down into a non-root user after the port has been allocated:

var alloc = require('tcp-bind');
// ...
var fd = alloc(argv.port);
if (argv.gid) process.setgid(argv.gid);
if (argv.uid) process.setuid(argv.uid);

Specify the user and group to drop into with -g and -u.


The routes module is handy for decomposing the different routes of your webapp.

Create a router with:

var router = require('routes')();

then you can add routes with:

router.addRoute('/robots/:name', function (req, res, params) {
    res.end('hello ' + + '\n');
});

In the http.createServer() handler function, we can dispatch our routes using:

var m = router.match(req.url);
if (m) m.fn(req, res, m.params);

and then fall back to another handler, like ecstatic, if none of the routes matched.


To serve static assets out of static/, I use ecstatic:

var st = require('ecstatic')(__dirname + '/static');

and then in the http.createServer() function you can do:

st(req, res)


To parse form parameters, I use the body module:

body(req, res, function (err, params) { /* ... */})

or I have a nifty little wrapper function:

function post (fn) {
    return function (req, res, params) {
        if (req.method !== 'POST') {
            res.statusCode = 400;
            return res.end('not a POST\n');
        }
        body(req, res, function onbody (err, pvars) {
            fn(req, res, xtend(pvars, params));
        });
    };
}

so that I can wrap an entire route handler in a post():

router.addRoute('/submit', post(function (req, res, params) {
    layout(res).end('form submitted!');
}));

It might be handy to publish some of these little functions individually to npm but I haven't done that yet.


trumpet is a nifty module for pumping content into some html at a css selector.

I use it to make a little layout function:

function layout (res) {
    res.setHeader('content-type', 'text/html');
    var tr = trumpet();
    read('layout.html').pipe(tr).pipe(res);
    return tr.createWriteStream('#body');
}

and then in static/layout.html I have an empty element with an id of body:

    <h1>my way cool website</h1>
    <div id="body"></div>

Also check out hyperstream, which uses trumpet but lets you specify many selectors at once.
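
For example, hyperstream can fill in several parts of the layout in one pass (a sketch based on hyperstream's stream-per-selector usage; the selectors and file names here are made up):

var hyperstream = require('hyperstream');

function layout2 (res) {
    res.setHeader('content-type', 'text/html');
    var hs = hyperstream({
        '#header': read('header.html'),
        '#body': read('main.html')
    });
    read('layout.html').pipe(hs).pipe(res);
}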

as it grows

As the server file gets bigger, I start moving functions and route handlers into lib/ or lib/routes or someplace like that.

An example route file in lib/ would look like:

module.exports = function (req, res, params) {
    res.end('beep boop\n');
};

and then in the server.js I can do:

router.addRoute('/whatever', require('./lib/someroute.js'));

or if the route needs some extra information, I can return a function in the route instead:

module.exports = function (msg) {
    return function (req, res, params) {
        res.end(msg + '\n');
    };
};

and then in server.js:

router.addRoute('/whatever', require('./lib/someroute.js')('beep boop'));

I try to only pass in the information that a route directly needs, since that keeps the code less coupled to my application.

Try to avoid passing an app object around everywhere, since that makes the code very coupled and can create huge problems later when refactoring. It might be ok to pass a bus object around liberally, so long as it only handles dispatching events.
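
For example (my own illustration), a bus can just be a shared event emitter, so a route handler only needs to know about the events it dispatches:

// lib/bus.js (hypothetical):
module.exports = new (require('events').EventEmitter)();

// lib/routes/signup.js (hypothetical):
var bus = require('../bus.js');

module.exports = function (req, res, params) {
    bus.emit('signup',;
    res.end('welcome!\n');
};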

running the server

To run the server for development, do:

$ npm start

To run the server on port 80 for production, do:

$ sudo node server.js -u $USER -g $USER
commit d8282f8067e8ace850cd57a56bff80d3951cc0b8
Author: James Halliday
Date: Thu Nov 27 18:54:26 2014 -0800

offline decentralized single sign-on in the browser

Recently, browsers have just begun to implement web cryptography. This means that browsers are now capable of the same kind of passwordless decentralized authentication schemes we've had server-side with ssh and tls asymmetric keys for decades.

Imagine if you could just generate a key and sign messages with that key, proving your identity to other users and backend services without the need for a password or even creating an account with yet another web server! We can have the ease of use of signing in with twitter or facebook without any centralized servers and very minimal setup from the end user's perspective.

Even better, this authentication system can work offline. In fact, the system must work offline to be fully secure.

appcache manifesto

By default, a web page can send you whatever javascript and html payload it wants. This payload is probably what you want, at least at first, but consider what might happen if those asymmetric keys are used for commerce or for private communications. Suddenly, the webmaster could easily decide to serve a different payload, perhaps in a targeted manner, that copies private keys to third parties or performs malicious operations. Even if the webmaster is an upstanding person, a government agent could show up at any time with a court order forcing the webmaster to serve up a different payload for some users.

Imagine if whenever you ran the ssh command, your computer fetched the latest version of the ssh binary from a remote server and then executed it. This would be completely unacceptable for server programs, and browser apps that handle confidential keys and data should be no different!

Luckily, there is another relatively new feature in the browser that can protect against rogue server updates: the appcache manifest. A page can set a manifest file with:

<html manifest="page.appcache">

and then the browser will load a cache policy from page.appcache. The appcache file can be used to make some documents available offline, but can also be used to prevent the browser from fetching updates to documents. If the max-age header on the appcache file itself is set far enough in the future, the appcache file itself can be made permanent so that the server operator can't update this file either. In the future, the service worker API will provide enough hooks to do the same thing, but browser support is not widespread yet.
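
For example, a minimal page.appcache could look like this (my own sketch; the first line must literally read CACHE MANIFEST, and the comment doubles as a version stamp):

CACHE MANIFEST
# v1

CACHE:
index.html
bundle.js

NETWORK:
*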


Upgrading an application should be possible too without going into the bowels of the browser to clear the appcache. This is where hyperboot comes in to give us opt-in application upgrades for end-users. More security-minded users might even want to check with external auditing systems before upgrading.


With a versioning system in place, we can now start implementing an offline single sign-on system that exposes the web crypto methods securely without exposing private keys to random websites.

There are a few more nifty tricks with the service worker API that can give us realtime communication between tabs and iframes that works completely offline.

To give this new system a try, first open the key page in a modern browser and generate a key.

Next open up the example app in a new window or tab and paste the address of the key page into the text box. In the key window, approve the request. Now from the example app you can sign messages with your private key!

Update: if the demo gives cross-domain errors in your browser, try the alternate link.

There is still plenty to do and some unanswered questions about different threat models and how best to prevent replay attacks and domain isolation, but this proof of concept should be good enough to at least start people thinking about decentralized approaches to single sign-on and the changing role of servers and webapps as browser APIs become more capable.


commit 5e004e4de5c1da6888c302607e6556b05b354320
Author: James Halliday
Date: Sat May 17 23:15:29 2014 +0200

One of the most common objections I've heard about embracing modularity and favoring libraries that do a single thing well is that it can be difficult and time-consuming to find packages for each piece of functionality you might need for a given task.

This is certainly true at first, but over time and with practice, it is less and less of a problem as you train up your own heuristics and develop a broad working memory of useful packages and authors who tend to produce useful code that suits your own aesthetic preference.

With a bit of training and practice, you will be skimming npm search results at great speed in no time!

my heuristic

Here's my own internal heuristic for evaluating npm packages:

  • I can install it with npm

  • code snippet on the readme using require() - from a quick glance I should see how to integrate the library into what I'm presently working on

  • has a very clear, narrow idea about scope and purpose

  • knows when to delegate to other libraries - doesn't try to do too many things itself

  • written or maintained by authors whose opinions about software scope, modularity, and interfaces I generally agree with (often a faster shortcut than reading the code/docs very closely)

  • inspecting which modules depend on the library I'm evaluating - this is baked into the package page for modules published to npm

When a project tries to do too many things, parts of it will invariably become neglected as the maintenance burden is unsustainable. The more things a project tries to do, the easier it is to be completely wrong about some assumption and this can also lead to abandonment because it's very difficult to revisit assumptions later.

The best, longest-lasting libraries are small pieces of code that are very tricky to write, but can be easily verified. Highly mathematical libraries tend to be very well represented in this category, like the gamma function or an ecosystem of highly decoupled matrix manipulation modules such as ndarray.

When a library is embedded in an ecosystem of other libraries in a thoroughly decoupled way, a mutual dynamic results where the main library doesn't need to inflate its scope but gets enough attention to find subtle bugs while the dependent libraries can offer excellent interoperability and fit into a larger informal organizational structure.

not too important

Here are some things that aren't very important:

  • number of stars/forks - often this is a reverse signal because projects with overly-broad scope tend to get much more attention, but also tend to flame out and become abandoned later because they take too much effort to maintain over a long period of time. However! Some libraries are genuinely mistakes but it took writing the library to figure that out.

  • activity - at a certain point, some libraries are finished and will work as long as the ecosystem around them continues to function. Other libraries do require constant upkeep because they attack a moving problem but it's important to recognize which category of module you're dealing with when judging staleness.

  • a slick web page - this is very often (but not always) a sign of a library that put all of its time into slick marketing but has overly-broad scope. It is sometimes the case that solid modules also have good web pages, but don't be tricked by a fancy web page where a solid readme on github would do just as good a job.

The main crux of this blog post first appeared as a reddit comment.

commit d95a2849d28593758c03c0cde74175cb807db857
Author: James Halliday
Date: Sun Dec 8 16:14:47 2013 -0800

In node I use simple test libraries like tap or tape that let you run the test files directly. For code that needs to run in both the browser and node I use tape because tap doesn't run in the browser very well and the APIs are mostly interchangeable.

The simplest kind of test I might write in test/ looks like:

var test = require('tape');
var someModule = require('../');

test('fibwibblers and xyrscawlers', function (t) {
    t.plan(2);

    var x = someModule();
    t.equal(x.foo, 22); // hypothetical property, for illustration

    x.beep(function (err, res) {
        t.equal(res, 'boop');
    });
});

To run a single test file in node I just do:

node test/fibwibbler.js

And if I have multiple tests I want to run I do:

tape test/*.js

or I can just use the tap command even if I'm just using tape because tap only looks at stdout for tap output:

tap test/*.js

The best part is that since tape just uses console.log() to print its tap-formatted assertions, all I need to do is browserify my test files.

To compile a single test in the browser I can just do:

browserify test/fibwibbler.js > bundle.js

or to compile a directory full of tests I just do:

browserify test/*.js > bundle.js

Now to run the tests in a browser I can just write an index.html:

<script src="bundle.js"></script>

and xdg-open that index.html in a local browser. To shortcut that process, I can use the testling command (npm install -g testling):

browserify test/*.js | testling

which launches a browser locally and prints the console.log() statements that executed browser-side to my terminal directly. It even sets the process exit code based on whether the TAP output had any errors:

substack : defined $ browserify test/*.js | testling

TAP version 13
# defined-or
ok 1 empty arguments
ok 2 1 undefined
ok 3 2 undefined
ok 4 4 undefineds
ok 5 false[0]
ok 6 false[1]
ok 7 zero[0]
ok 8 zero[1]
ok 9 first arg
ok 10 second arg
ok 11 third arg
not ok 12 (unnamed assert)
    operator: ok
    expected: true
    actual:   false
    at: Test.ok.Test.true.Test.assert (http://localhost:47079/__testling?show=true:7772:10)
# (anonymous)
ok 13 should be equal

# tests 13
# pass  12
# fail  1
substack : defined $ echo $?
substack : defined $


bonus content: if I want code coverage, I can just sneak that into the pipeline using coverify. This is still experimental but here's how it looks:

$ browserify -t coverify test.js | testling | coverify

TAP version 13
# beep boop
ok 1 should be equal

# tests 1
# pass  1

# ok

# /tmp/example/test.js: line 7, column 16-28

  if (err) deadCode();

# /tmp/example/foo.js: line 3, column 35-48

  if (i++ === 10 || (false && neverFires())) {

or to run the tests in node, just swap testling for node:

$ browserify -t coverify test.js | node | coverify
TAP version 13
# beep boop
ok 1 should be equal

# tests 1
# pass  1

# ok

# /tmp/example/test.js: line 7, column 16-28

  if (err) deadCode();

# /tmp/example/foo.js: line 3, column 35-48

  if (i++ === 10 || (false && neverFires())) {

Update (2013-12-21): check out the covert package on npm, which gives you a covert command that runs browserify and coverify for you.

why write tests this way?

The node-tap API is pretty great because it feels asynchronous by default. Since you plan out the number of assertions ahead of time, it's much easier to catch false positives where asynchronous handlers with assertions inside didn't fire at all.
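
For example (my own minimal illustration), with t.plan() an asynchronous handler that never fires turns into a failure instead of a silent pass:

var test = require('tape');

test('async handler fires', function (t) {
    t.plan(1); // expect exactly one assertion

    setTimeout(function () {
        // if this callback never ran, the plan would not be met
        // and the test would fail rather than silently pass
        t.ok(true, 'timer fired');
    }, 10);
});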

By using simple text-based interfaces like stdout and console.log() it's easy to get tests to run in node and the browser and you can just pipe the output around to simple command-line tools. If you stick to tools that just do one thing but expose their functionality in a hackable way, it's easy to recombine the pieces however you want and swap out components to better suit your specific needs.
