Coordinated Mocha setup and teardown

Playing with Node.js, Express and WebDriver I spotted duplication in my Mocha tests. Each test started/stopped an Express application and created/destroyed a WebDriver client:

before(() => {
  startServer();
  createWebdriver();
});

after(() => {
  stopServer();
  destroyWebdriver();
});

var server, driver;

const startServer = () => server = app.listen(3000);
const stopServer = () => {
  var deferred = Q.defer();
  server.close(() => deferred.resolve());
  return deferred.promise;
};

const createWebdriver = () => {
  driver = new webdriver.Builder()
    .forBrowser('firefox')
    .build();
};
const destroyWebdriver = () => driver.quit();

Obviously, I could move the startServer/stopServer and createWebdriver/destroyWebdriver pairs to a separate module and require them in each test. This would solve the duplication only partially, since those functions would then be defined in a single place.
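
A sketch of that intermediate step (the ./lifecycle module name is made up) shows the repetition that would remain in every test file:

// the helpers now live in one module, but every test file still repeats the wiring
const {startServer, stopServer, createWebdriver, destroyWebdriver} = require('./lifecycle');

before(() => {
  startServer();
  createWebdriver();
});

after(() => {
  stopServer();
  destroyWebdriver();
});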

However, the test setup and teardown hooks (before and after) would still be duplicated in each test case. They share state (the server handle and the WebDriver client) and are sequentially coupled: each launched server instance must be shut down, and the same goes for the WebDriver client. Moreover, I cannot separate the setup and teardown steps and reuse, for example, only the server launch.

How can it be solved?

First, you can have more than one Mocha setup and teardown lifecycle function.

Second, statements in anonymous functions passed to before and after are coincidentally cohesive. They have been put together only because they need to be executed at the beginning/end of the test case. The following is equivalent to the first snippet:

before(() => startServer());
before(() => createWebdriver());

after(() => stopServer());
after(() => destroyWebdriver());

Third, functions in JavaScript are first-class citizens. They can be passed around and executed in a different context.
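
For example, nothing stops us from handing before and after to another function that registers the hooks on our behalf (a tiny, made-up illustration using the helpers from the first snippet):

const registerLifecycle = (setupFn, teardownFn) => {
  // this function knows nothing about Mocha; it just calls whatever it was given
  setupFn(() => startServer());
  teardownFn(() => stopServer());
};

registerLifecycle(before, after);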

Knowing all this, what’s the final solution?

Coordinate cohesive setup/teardown code in a module. Pass setup and teardown functions to the module and execute them when creating the module instance. Export created or initialised objects if your test methods have to use them.

And the refactored code:

// in test
require('./server')(before, after);
const {driver} = require('./driver')(before, after);

// server.js
const app = require('./app'); // the Express application under test (the path is illustrative)
const Q = require('q');

const port = 3000;

module.exports = (setupFn, teardownFn) => {
  var server;

  const startServer = () => server = app.listen(port);

  const stopServer = () => {
    var deferred = Q.defer();
    server.close(() => deferred.resolve());
    return deferred.promise;
  };

  setupFn(() => startServer());

  teardownFn(() => stopServer());
};

// driver.js
const webdriver = require('selenium-webdriver');

module.exports = (setupFn, teardownFn) => {
  var driver;

  const createWebdriver = () => {
    driver = new webdriver.Builder()
      .forBrowser('firefox')
      .build();
  };

  const destroyWebdriver = () => driver.quit();

  setupFn(() => createWebdriver());

  teardownFn(() => destroyWebdriver());

  return {
    driver: () => driver
  };
};

Notice that in the original code, driver was a WebDriver client instance (an object with properties and methods). After the refactoring, driver is a function that returns the WebDriver client from the module instance. I cannot expose the client directly (i.e. as return { driver: driver }), because at module evaluation time it is still undefined – it is only assigned later, when the setup hook runs.
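
A test can then call the accessor once the hooks have run; a minimal usage sketch (the test name and URL are made up, the port matches the earlier snippet):

require('./server')(before, after);
const {driver} = require('./driver')(before, after);

it('opens the home page', () => {
  // driver() is called inside the test, after the before() hooks have created the client
  return driver().get('http://localhost:3000/');
});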

Conclusion

Passing Mocha lifecycle functions around removed the duplication, put the sequential coupling under control and made the test setup and teardown routines composable.

Serve JSONP in Dyson

Dyson is a great fake JSON server written in JavaScript/Node.js.

In my current pet project, I have a fake Strava API server. The Strava REST API allows you to use JSONP to overcome cross-domain restrictions. You have to add a callback parameter to the query string.

To simulate it with Dyson, you have to override the render method of the Express middleware in the endpoint definition. In my solution, depending on the presence of the callback parameter, I return either the raw JSON response or the response wrapped in the provided callback:


function renderJsonp(res, callback) {
  res.append('Content-Type', 'application/javascript');
  res.send(callback + '(' + JSON.stringify(res.body) + ');');
}

function renderJson(res) {
  res.send(res.body);
}

function render(req, res) {
  var callback = req.query.callback;
  if (callback) renderJsonp(res, callback);
  else renderJson(res);
}

module.exports = {
  path: '/api/v3/athlete/activities',
  template: {
    // …
  },
  render: render
};
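
For example, a request with a callback parameter (the callback name below is arbitrary) is answered with the template wrapped in a function call, while a plain request gets raw JSON:

// GET /api/v3/athlete/activities?callback=handleActivities
//   -> Content-Type: application/javascript
//   -> handleActivities([ ... ]);

// GET /api/v3/athlete/activities
//   -> the plain JSON generated from the template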

Listen to your tests – a real example

When reading about TDD you often find sentences like “listen to your tests” or “tests give you feedback about design”. How does it look in practice?

A couple of days ago I interviewed a prospective frontend developer. He submitted the coding assignment and during the on-site interview we kindly asked him to add a new feature to the solution.

The exercise is fairly uncomplicated. You have to display a list of hotels. When a hotel is clicked, its details appear next to the list. The feature to implement was search. Quite simple too – type some text in the box and restrict the list of hotels to the matching ones.

The first unit test attempt looked like this:

var view = new SearchView();

view.fill('Sunny Palms');

// assert what ???

I was in the navigator role and asked my partner:

“How are you going to check the expected outcome? There is a lot of work to do: we have to access the DOM, extract displayed hotels and verify their names if they match the search pattern.”

It would have taken us much time to get to a failing test (the red lamp in TDD). The test was telling us that we were not on the right track. The component under development would probably have had too many responsibilities.

I proposed a different solution. An event bus was already implemented. The idea was to raise an event and let another component worry about how to handle the search.

The test was quite simple:

var eventBus = new EventBus(),
  term;
eventBus.subscribe('search', function (event, data) {
  term = data;
});

// assuming the view takes the bus in its constructor
var view = new SearchView(eventBus);
view.fill('Sunny Palms');

expect(term).toEqual('Sunny Palms');

We had a failing test. We got to green very quickly – the implementation of the fill method was straightforward.
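
For completeness, the fill implementation only has to publish the event; a sketch, assuming the view receives the bus in its constructor and the bus exposes a publish method:

function SearchView(eventBus) {
  this.eventBus = eventBus;
}

// fill only announces the search term; filtering the hotel list is somebody else's job
SearchView.prototype.fill = function (term) {
  this.eventBus.publish('search', term);
};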

I wanted to share this story because when I was starting with TDD, the advice “listen to your tests” seemed very awkward. Where do I have to put my ear? Do the tests whisper to you that you are wrong? Do they send Morse code? During the programming session I described, it was not a crystal-clear sound. It was a desperate cry for a better design.

Credits to Ky for “Listen”, licensed under CC BY 2.0.

Haskell TDD kickstart with HSpec and Guard

You’ll need:

  • Haskell Platform. It includes (among other things) Cabal – the Haskell build system.
  • HSpec – a Haskell testing library
cabal update && cabal install hspec
  • Ruby – install it through rvm, your OS package manager, or use the system-provided Ruby interpreter.
  • RubyGems
  • Guard and guard-haskell – for watching your code and executing tests on change
gem install guard-haskell
  • in your project – a Guardfile and a your-project-name.cabal build descriptor (and of course some code and its tests)

I created a sample project – hspec-kicstart – that bundles everything together.
It has a minimal Guard and Cabal configuration, more-than-trivial production code and the test.

After cloning the project,

cabal test

should tell you 1 of 1 test suites (1 of 1 test cases) passed.

guard

and then pressing Enter (which runs all tests) yields 1 example, 0 failures.

Clone it, play with it and enjoy!

Hamcrest matchers in Spock

Working with Spock, we tend to forget about a hidden gem – Hamcrest matchers. They are assertions turned into method objects. A matcher is usually parametrized with an expected value and is later executed against the actual one.

When can we use them? For example, in a table-driven specification with expected values in the table.

An expected value can be one of the following:

  • exact value
  • ‘not available’ text
  • any number

How can we express these assertions without Hamcrest?

then:
if (exactValue) {
    assert field == exactValue
} else if (notAvailable) {
    assert field == messageSource.getMessage('na', null, LocaleContextHolder.locale)
} else if (isNumber) {
    assert field.number
} else {
    assert false, 'Wrong test configuration'
}

where:
exactValue | notAvailable | isNumber
'12345'    | null         | null
null       | true         | null
null       | null         | true

Bufff…

  • there is a lot of boilerplate code
  • the specification is misleading – what does it mean that the expected exactValue is null? Is it really expected to be null? Or should this condition simply not be checked?

How can we refactor this code using Hamcrest matchers?

then:
that field, verifiedBy

where:
verifiedBy << [
    org.hamcrest.CoreMatchers.equalTo('12345'),
    notAvailable(),
    isNumber(),
]

...

def notAvailable() {
    org.hamcrest.CoreMatchers.equalTo messageSource.getMessage('na', null, LocaleContextHolder.locale)
}

def isNumber() {
    [
        matches: { actual -> actual.number },
        describeTo: { Description description -> description.appendText('should be a number') }
    ] as BaseMatcher
}

Result – shorter and more expressive feature method code.

You can find the full source code with mocked messageSource in this gist. For more information about Hamcrest matchers, check Luke Daley’s article.