First Impressions of npm On-Site

For the longest time we committed our entire node_modules folder here at uShip. We recently made the switch to npm On-Site. This is our journey.

How We Use npm At uShip

Here at the ‘Ship we don’t actually run a node server in production; we just use node for our front-end build system. This includes compiling our SCSS to CSS, transpiling our ECMAScript 6 to ECMAScript 5, linting, and, most recently, bundling via webpack.
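The tool names and script layout below are illustrative, not our actual configuration, but a build pipeline like the one described might be wired up through npm scripts in package.json along these lines:

```
{
  "scripts": {
    "styles": "node-sass src/scss --output dist/css",
    "transpile": "babel src/js --out-dir dist/js",
    "lint": "eslint src/js",
    "bundle": "webpack --config webpack.config.js"
  }
}
```

With something like this in place, the whole build can run on any machine with node installed, no server component required.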

Our Old Way of Doing Things

Limitations/Issues We Came Across

Intuitively, it would seem that the best approach to building our dependencies would be to run npm install as a build step, installing everything listed in our package.json. Unfortunately, if the npm registry were ever down in this scenario, our build process would fail. Thus we chose to commit our node dependencies to source control as a fail-safe.

The first problem we encountered was the length of file paths for deeply nested dependencies. Windows has problems with file paths longer than 260 characters, so if any dependency in our node_modules folder produced a path longer than that, we couldn’t commit our dependencies. Our solution was to use a package called flatten-deps, which moved all of our nested dependencies to the root of the node_modules directory. This worked well, but required us to rerun the flattening process with every change to our dependencies. We ignored this annoyance for a while, though, since there was no better solution.
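If you want to check whether your own node_modules folder would hit this limit before a commit fails on a Windows machine, a quick one-liner can flag the offenders (note that Windows applies the 260-character limit to the absolute path, so paths printed relative to the repo root understate the real length):

```shell
# List every file under node_modules whose path exceeds 260 characters,
# prefixed with its length. Silence the error if node_modules is absent.
find node_modules -type f 2>/dev/null | awk 'length($0) > 260 { print length($0), $0 }'
```

Anything this prints is a file that a default-configured Windows checkout will choke on.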

Our Transition to npm On-Site

The first step in our journey to npm On-Site was npm_lazy. By default, npm_lazy pulls from the public registry; if the public registry ever goes down, npm_lazy delivers dependencies from its local cache, preventing your build process from going down along with it. Unfortunately, this approach lacked two features that we really wanted: the performance gained from always pulling dependencies from the local cache, and the ability to prevent others from installing undesired dependencies without the proper approval process.
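For anyone curious what adopting a caching proxy like npm_lazy involves: once the proxy is running, pointing npm at it is a one-line registry change in your .npmrc (the port here is an assumption based on npm_lazy's defaults; use whatever your instance is configured to listen on):

```
; .npmrc — send all installs through the local npm_lazy proxy
; rather than straight to the public registry
registry = http://localhost:8080/
```

Every npm install on a machine with this setting then flows through the proxy, which fetches from the public registry and caches what it serves.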

npm Enterprise

As soon as I learned about npm Enterprise, I was interested, since it solved both of the above-mentioned limitations of npm_lazy: it defaulted to pulling from the local dependency cache, and it added whitelisting functionality to prevent arbitrary packages from being added without review.

We struggled to get npm Enterprise functioning properly at first because our version of npm was out of date. After we fixed that, we still ran into problems with the initialization of the registry as it scanned the entire public registry for an initial pass. We still don’t fully understand why, but it stopped in the middle and never finished. After we talked with support, they recommended that we try out a new version of npm Enterprise better suited to our needs. They call this new version npm On-Site.

How npm On-Site Is Different

npm On-Site has a few genuinely convenient features that make it much easier to manage than the original npm Enterprise. For starters, with On-Site the registry now runs within a Docker container for stability. This allowed npm to add On-Site’s main new feature: an admin interface that lets you start and stop the registry, update the version of On-Site you are running, and control the settings of the service, all from a local URL. The admin interface also provides useful graphs of memory and CPU usage, along with a few other things that make running the On-Site service much easier.

Final Impressions

My initial setup of On-Site was nerve-racking, as it took several hours to make the initial crawl over the entire npm public registry. Once I learned that this meant roughly 290k packages, it made more sense. This initial “full” crawl of the public registry only has to be done once, though, so while it makes for a slow setup, it doesn’t really affect the speed of adding new dependencies to your registry on a day-to-day basis.

Now, adding a new dependency takes only two steps. First, we ssh into our On-Site server. Second, we install the desired dependency, which in turn whitelists that dependency’s transitive dependencies. This has completely transformed how we manage our dependencies and made our process much simpler. So far it has been working great, aside from one small hiccup where a package randomly started 404’ing. Even then, it was easy enough to log into the On-Site server, add the missing package, and things went right back to normal.
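The two steps above look roughly like this in practice (the hostname and package name here are placeholders, not our real ones):

```
# 1. Connect to the machine running npm On-Site
$ ssh deploy@npm-onsite.example.internal

# 2. Install the package there; On-Site whitelists it and its
#    transitive dependencies as part of the install
$ npm install some-package
```

After that, any developer pointed at the On-Site registry can install the package as usual.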