Stability during large refactors

Feature Change: a Node.js library to avoid introducing breaking changes


by Christian Tracy

04/10/2017

Have you ever had to decide to migrate a running application to a new version? To say that now is the time, let’s leave the old implementation behind? Does having to push a major technology change to production sound familiar?

Move fast and break nothing – Zach Holman

Our challenge: releasing a whole new version of a widely used app

This is where I stood a couple of months ago. My team and I had been working on two versions of the same application for months, waiting until we were ready to migrate to the new one. And I was worried, because I was not sure the new implementation would do the job.

Would the production environment remain stable and available? Or was I about to take a reckless risk and possibly blow everything up?

The application I’m talking about is a large Node.js app that receives tens of thousands of requests a day and manages a large data flow. It is a core, critical element of our client’s business: they use it to attract new clients and retain existing customers.

As you can imagine, there was no way we could afford to break it or have it down, even briefly. So we had to make sure the transition from the current version to the next was going to be smooth.

Finding the right tool: introducing feature change

To facilitate the move, we looked for solutions that let developers make sure their new implementation is solid and ready to go live!

This is when we remembered a talk by Damian Schenkelman at the last NodeConf in Buenos Aires. While working at Auth0, Damian and his team created an open-source library called Feature Change. It logs the differences between two implementations of the same method.

The main idea of this library is to call both the current and the new implementation of a function at the same time. As usual, and without waiting for the new version to finish, it returns the result of the current method to the original caller. But it also compares the results of both versions and logs any differences as errors.

The great advantage of using Feature Change is that you get to compare the results of both implementations without affecting the normal flow and timing of the app. It is fully transparent to the client.

Originally, Feature Change was designed to check small methods, for example when you change your database engine and want to avoid regressions.

In our case, we are using this library to test the big, critical methods of our API. We want to generate reports of errors and successes for the responses returned by our new implementation. Once we get a full match of results over a large enough number of requests, we’ll know that we’re ready to take the plunge!

A simple and quick solution to guarantee continuity in production

I can imagine what you’re thinking: “It’s a good idea, but you could solve that problem with solid tests alone”. To which I answer: you’re right.

But let’s be honest. You don’t always have a good test suite in place, and building one is time-consuming. Not only that: this method lets you confirm the stability of the new version with production data. And even if you have rock-solid tests, double-checking is never a bad option.

Especially when you have access to an open-source tool like Feature Change that takes very little time to set up.

It’s quick because it’s very simple to use:
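Here is a minimal sketch of what the wiring looks like, following our reading of the library’s README. The two search functions are hypothetical placeholders standing in for your own implementations, and the exact call signature may differ slightly between versions of the library, so double-check its README.

```js
var feature_change = require('feature-change');

// Hypothetical current and new implementations, standing in for our real API methods.
function searchUsersV1(opts, cb) { cb(null, { users: ['ada', 'grace'] }); }
function searchUsersV2(opts, cb) { cb(null, { users: ['ada', 'grace'] }); }

function searchUsers(opts, callback) {
  feature_change(
    function (cb) { searchUsersV1(opts, cb); },   // current implementation: its result is what callers get
    function (cb) { searchUsersV2(opts, cb); },   // new implementation: only used for the comparison
    function (currentResult, newResult) {
      // invoked only when the two results differ
      console.error('feature-change mismatch', { current: currentResult, new: newResult });
    },
    callback                                      // the original callback, always fed the current result
  );
}

searchUsers({ q: 'a' }, function (err, result) {
  console.log(result); // always comes from searchUsersV1
});
```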

Basically, as you can see, you only need two versions of your method and that’s it!

The plus: a custom comparator

When we started using the library, we realized that some of the logged errors were not real errors but simple differences. In the basic usage of the library, it doesn’t matter whether the difference is small or big: it’s an error. For us, the new results are sometimes a bit different but still valid.

We thought we would have to extend the library to add a custom comparator. But that was not necessary! We did a little digging into the library to see how to plug in our comparator and found out that it already supports this. You only need your own method that compares the two results and returns true or false, quite easy!
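For illustration, here is the kind of comparator we ended up with. It is just a plain predicate that receives both results and returns true when we consider them equivalent. The field names are made up for this example, and how the comparator is registered with Feature Change is described in the library’s documentation.

```js
// Illustrative comparator: the field names (users, generatedAt) are hypothetical.
// Returning true means “equivalent”, false means “log this as a mismatch”.
function resultsAreEquivalent(currentResult, newResult) {
  if (!currentResult || !newResult) {
    return currentResult === newResult;
  }
  // Ignore fields that legitimately differ between versions, such as timestamps,
  // and only compare the data we actually care about.
  return JSON.stringify(currentResult.users) === JSON.stringify(newResult.users);
}

// Quick sanity check of the comparator on its own:
console.log(resultsAreEquivalent(
  { users: ['ada'], generatedAt: '2017-04-10T10:00:00Z' },
  { users: ['ada'], generatedAt: '2017-04-10T10:00:03Z' }
)); // true: the timestamp difference is not a real error
```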

Comparing large applications: the importance of logging all results

We were pleasantly surprised by the possibilities this library offers, especially in our specific context, where we used it to run a big test on a complex application. The results were amazing and we will definitely keep using it.

The only additional thing we thought would be great to have is the ability to keep a log of successful results as well as errors. This feature is not native to the library.

In the case of large applications, it’s important to keep a report of both errors and successful comparisons, either in one file or in separate ones. This is a good way to guarantee that all the requests ran successfully and that no other kind of error occurred during the comparison process.
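In the meantime, a simple workaround is to keep our own counters around the Feature Change call: count every comparison in the final callback and every difference in the mismatch handler, then write both to a report. This is application-level bookkeeping of our own (reusing the hypothetical placeholders from the earlier sketch), not a feature of the library.

```js
// Application-level bookkeeping, not part of feature-change itself:
// we count every call and every mismatch, so successes = total - mismatches.
var report = { total: 0, mismatches: 0 };

function searchUsersWithReport(opts, callback) {
  feature_change(
    function (cb) { searchUsersV1(opts, cb); },
    function (cb) { searchUsersV2(opts, cb); },
    function (currentResult, newResult) {
      report.mismatches++;                // a difference was detected
    },
    function (err, result) {
      report.total++;                     // one more comparison completed
      callback(err, result);
    }
  );
}

// Periodically dump the report to see how close we are to a full match.
setInterval(function () {
  console.log('comparison report', {
    total: report.total,
    mismatches: report.mismatches,
    successes: report.total - report.mismatches
  });
}, 60 * 1000);
```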

We’re going to send a pull request with this new functionality very soon!
Once we do, we’ll run the test again and, hopefully, take the final step towards migrating our application, reassured that everything will be alright!
