Writing tests is important. One popular reason is that tests give one the ability to predict, with a high degree of accuracy, what happens in code. With tests, one can demonstrate with confidence that, all conditions being equal, code works as it is expected to work.

Another reason tests are important is that they encourage good design. If code is written in such a way that its components can be tested with minimal or no module mocks, there is a high chance that it was designed with SOLID principles in mind. It suggests that its parts are not tightly coupled, and therefore that the code is maintainable: parts can be substituted with minimal changes and still work as expected.

An Unpredictable JavaScript Function

Let’s use an example to illustrate this. Imagine that we have a function determineTruthiness as defined below:

function determineTruthiness() {
  if (Math.random() > 0.5) {
    return true;
  }
  return false;
}

By looking at it, one can tell that it returns either true or false; but how can we test it without using the testing framework to mock Math.random? How can we prove through code that determineTruthiness() will always work as expected? What if Math.random() were instead a call to an API outside our code, one that could throw errors? How do we confidently predict the output of determineTruthiness() from any given output of Math.random()?

The answer to these questions is the Dependency Inversion Principle.

Dependency Inversion Principle (DIP)

The dependency inversion principle states that

  1. High-level modules should not depend on low-level modules. Both should depend on abstractions.
  2. Abstractions should not depend on details. Details should depend on abstractions.

In this context, any module or code unit that depends on another module or code unit to work is a high-level module, and the module it depends on is the low-level module. determineTruthiness is a high-level module because it depends on another module - Math.random.
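
To make this concrete before we return to determineTruthiness, here is a minimal sketch (with hypothetical names) of the same idea: a high-level function that, instead of reaching for the fs module directly, depends on a write abstraction it receives as a parameter.

// Hypothetical example: saveReport is the high-level module, fs is the low-level one.
const fs = require("fs");

// Tightly coupled: the concrete detail (fs.writeFileSync) is baked in.
function saveReportCoupled(report) {
  fs.writeFileSync("report.txt", report);
}

// Inverted: saveReport depends on a `write` abstraction with a default implementation.
function saveReport(
  report,
  { write = (path, contents) => fs.writeFileSync(path, contents) } = {}
) {
  write("report.txt", report);
}

// Any function with the same (path, contents) interface can be passed in,
// without modifying saveReport itself.
saveReport("quarterly numbers", { write: (path, contents) => console.log(path, contents) });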

A Car-Steering Wheel Analogy

Let us use a car and its steering wheel as an analogy. The steering wheel is the tool with which drivers control the direction of a car. The steering wheel is an ‘abstraction’; that is, it doesn’t really control the direction of the car itself, but it is connected to the actual mechanism that does - the ‘concrete’ implementation. As long as drivers can rotate the steering wheel clockwise and anticlockwise to control the direction of the car, all is well. Engineers can change the actual mechanism that causes the car to steer - the concrete implementation - and drivers would not be bothered. There is no tight coupling between the driver and the car-steering mechanism. How is this a good thing?

Imagine that there’s a better car-steering mechanism and the car owner wants to upgrade. If there was tight coupling between the driver and the car-steering mechanism, the driver would have to learn how to operate the new mechanism, or a new driver who understands the mechanism would have to be hired. In engineering terms, the driver would have to change with every new mechanism.

Another scenario: imagine that only one driver in a company understands the steering mechanism of the company’s cars, and that driver leaves the company. The company will have to find another driver who understands the mechanism, or hire and train someone to operate it. Worse - if they cannot find one, the company has to change the steering mechanism of all its cars to one for which expertise is widely available. Makes everyone happy, right?

Benefits of the Dependency Inversion Principle

The interface for steering a car is the same across many cars - a wheel that can be rotated clockwise and anticlockwise - and because of this, all a driver has to know is how to rotate the steering wheel to change the car’s direction.

  1. It minimises the number of changes: In programming, many parts change frequently with new demands (bug fixes, upgrades, new features, etc.). If parts are tightly coupled, a change in one part will cascade through many others. The DIP suggests that parts should not depend strongly on one another; instead, there should be an abstraction between them, like the steering wheel. When the driver changes, we do not need to change the steering wheel, and vice versa. The DIP is one sure way to separate the things that change from the things that stay the same.

  2. It enables proper testing and, eventually, good design: Tightly-coupled parts are hard to test, because in tests we need to prove that X occurs when Y happens. If a function is tightly coupled to a concrete implementation that cannot be inspected from outside the function, there is no way to swap that (concrete) implementation for another one that proves our test cases. One would have to mock whole modules (e.g. the Math module in the determineTruthiness example) to prove what happens, as the sketch after this list shows. Mocking whole modules in tests is bad practice; in fact, it is a sign that the code’s design does not follow SOLID principles.
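
For contrast, here is a rough sketch of what testing the original, tightly-coupled determineTruthiness would look like in Jest: we have to stub Math.random on the global Math object just to control the outcome.

describe("determineTruthiness (tightly coupled)", () => {
  afterEach(() => {
    // Undo the global stub so other tests are not affected.
    jest.restoreAllMocks();
  });

  it("returns true when Math.random returns a number > 0.5", () => {
    // We have to reach into the global Math object to control the result.
    jest.spyOn(Math, "random").mockReturnValue(0.51);
    expect(determineTruthiness()).toBe(true);
  });

  it("returns false when Math.random returns a number <= 0.5", () => {
    jest.spyOn(Math, "random").mockReturnValue(0.5);
    expect(determineTruthiness()).toBe(false);
  });
});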

A Predictable JavaScript Function

If we made determineTruthiness depend on an abstraction for getting the (random) number, we would be able to swap the source of the number for another if the demand arises. We would also be able to predict whether it returns true or false based on the output of the abstraction. The setup would look like this:

function determineTruthiness({ getNumber = () => Math.random() } = {}) {
  if (getNumber() > 0.5) {
    return true;
  }
  return false;
}

What has been done here is that determineTruthiness now accepts an object as an argument. That object has a getNumber property whose default value is () => Math.random(); that is, a function that, when called, returns the value of Math.random(). If determineTruthiness is called without any argument, or without a value for getNumber, getNumber falls back to () => Math.random(), and that is it. getNumber is the abstraction over Math.random.

console.log(determineTruthiness()); // true or false, depending on getNumber()

With the abstraction provided by the getNumber parameter, we can swap () => Math.random() out in tests, without mocking Math.random, and accurately predict what happens. Assuming that we are using Jest:

describe("determineTruthiness", () => {
  it("returns true when the number is > 0.5", () => {
    const truthiness = determineTruthiness({
      getNumber: () => 0.51,
    });
    expect(truthiness).toBe(true);
  });

  it("returns false when the number is <= 0.5", () => {
    const truthiness = determineTruthiness({
      getNumber: () => 0.499,
    });
    expect(truthiness).toBe(false);
  });
});

Aside from being able to swap or mock it in tests, we can swap the value of getNumber for any function with the same interface when we execute determineTruthiness. This makes the code loosely coupled to Math.random, the concrete implementation. If we have to get the number from an API outside our code some day, or in a different part of the codebase, we would not modify the determineTruthiness function; we would simply pass in a getNumber implementation backed by the API.
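
For instance, here is a sketch of what that swap could look like; the endpoint URL and the { value } response shape are hypothetical, assumed only for illustration.

// Hypothetical: the endpoint and its { value: <number> } response shape are assumptions.
async function getNumberFromApi() {
  const response = await fetch("https://example.com/random-number");
  const { value } = await response.json();
  return value;
}

async function main() {
  // Fetch the number first, then hand determineTruthiness a synchronous
  // getNumber that returns it - determineTruthiness itself stays unchanged.
  const number = await getNumberFromApi();
  console.log(determineTruthiness({ getNumber: () => number }));
}

main();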

Conclusion

I cannot remember who said it or where I heard the phrase “Design Node.js code with tests in mind”, but following it has greatly helped my software engineering career. How? It has helped me move from being the everyday programmer who just knows a programming language to being one who is conscious of, and implements, software design principles like SOLID, testing and design patterns; and honestly, it has made me a better software developer.

I’ve heard developers say that one needs to learn SOLID to write maintainable code. I’ve found that if one focuses on writing code with tests in mind, one ends up writing SOLID code and, eventually, maintainable code. The two are intertwined - each reinforces the other.
