Solving a failed npm install on the Raspberry Pi

Standard

TL;DR If npm install is failing on a package you're convinced is no longer a dependency of your current package.json, try clearing out your node_modules folder or take a look at the npm prune command.

I know this post is quite a long one, but most of the content was created in real time as I was working the problem. I'm a voracious note taker, so I figured I'd just write up my notes as a blog post as I went along.

I have a little express website running on my Raspberry Pi B+ from home. I deploy to this site using a post-receive git hook that deploys the master branch and runs npm install --production to ensure the latest dependencies are installed. Notice that I use the --production switch to ensure devDependencies are not installed.
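For reference, the hook itself looks something like this. This is a sketch only: the deploy directory and paths are assumptions for illustration, not my actual setup, and here I write the script into a temp dir just to show its shape (on the server it lives at the bare repo's hooks/post-receive and must be executable).

```shell
# Sketch of a post-receive deploy hook (paths are assumptions).
HOOKS="$(mktemp -d)"
cat > "$HOOKS/post-receive" <<'EOF'
#!/bin/sh
DEPLOY_DIR="$HOME/site"
# Check out the pushed master branch into the site folder...
GIT_WORK_TREE="$DEPLOY_DIR" git checkout -f master
# ...and install runtime dependencies only (no devDependencies).
cd "$DEPLOY_DIR" && npm install --production
EOF
chmod +x "$HOOKS/post-receive"
sh -n "$HOOKS/post-receive"   # syntax check only; we don't actually deploy here
```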

After trying to access this site following a push I found the site to be unavailable. I use forever to maintain the node process so I checked the logs and noticed a number of errors indicating new dependencies had not been installed.

I ssh'd into the server, ran npm install --production manually and saw that it was taking an unusually long time to complete, eventually failing due to a native module compilation error.

Here are the new packages my latest commit has introduced:

  • dependencies
    • passport
    • passport-local
  • devDependencies
    • browser-sync
    • gulp
    • gulp-nodemon
As you can see, passport and passport-local are the only packages that should be installed. However, tracing the failing module up the dependency tree shown below reveals that npm was trying to install browser-sync, whose dependency graph relies on native modules.

bufferutil -> ws -> engine.io -> socket.io -> http-proxy -> foxy -> browser-sync

At this point I checked online to see if there were any known issues but only came up with this github issue which is old and closed.

Let’s see what the status of the --production switch is in the current npm docs

With the --production flag (or when the NODE_ENV environment variable is set to production), npm will not install modules listed in devDependencies.

That seems consistent with my understanding. So let’s make sure we’re on the latest version of npm and see if that fixes it.

My npm version (at the time of posting) is 3.3.12, so let's update with the utter brain f*ck that is self-updating software.

sudo npm install npm -g

The update has taken us to 3.6.0 but having tried to run npm install --production again I’m getting the same result with a failed compile.

So I know that the dependency causing the problem is browser-sync. Let’s confirm this by removing it from package.json. When I now run npm install it shouldn’t matter whether it does or doesn’t install devDependencies as the package causing the problem isn’t in either section.

To my surprise this still fails! OK npm install is clearly not just running through the package.json and installing the specified packages and their dependencies as I expected.

The next logical step (yes I know, many of you may have arrived at this point long ago) is to delete my node_modules folder and rerun npm install.

Success! Everything installs as required.

What’s going on? Well, it would seem that npm install doesn’t just install packages and dependencies defined in your package.json. Instead it sees your package.json as simply the top level in the dependency graph. Once it’s completed the install for all the packages in your package.json, it must then start enumerating all of the packages under node_modules and repeat the process for their package.json files.

This behaviour makes sense and, more importantly for me, explains how I ended up with this problem. My git hook script originally ran npm install without the --production switch. Fixing that during my analysis of this issue solved the problem for future deployments, but it left my node_modules folder in an invalid state because the failing module (browser-sync) was still in there.

Further reading on this problem shows there is an npm command called prune that will clear all dangling dependencies from your node_modules folder based on your current package.json. In fact we can do the following to clear out just our devDependencies, a common requirement if we’ve inadvertently installed them on the production box…as if!

npm prune --production
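To see prune doing its thing without touching a real project, here's a throwaway demo. The project and package names are made up, and it fakes a dangling package by hand rather than installing anything, so no network is needed.

```shell
# Demo: npm prune removes packages in node_modules that package.json no longer lists.
DIR="$(mktemp -d)"
cd "$DIR"
# A project with no dependencies at all...
printf '{ "name": "demo", "version": "1.0.0" }\n' > package.json
# ...but with a leftover package lurking in node_modules.
mkdir -p node_modules/leftover
printf '{ "name": "leftover", "version": "1.0.0" }\n' > node_modules/leftover/package.json
npm prune   # "leftover" is extraneous, so prune deletes it
```

With --production added, prune also strips anything that only appears in devDependencies.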

So, problem solved.  I hope this post has been useful or at least mildly entertaining for the more initiated out there.

Making data available to all express views

Standard

In this post I’m going to show you how to make data available to all of your views in an express 4 website. I’ll be using jade in my sample code but this solution is not specific to any view engine.

Imagine we want to display the name of the logged in user in the top right hand corner of the screen on every page. One way you could do this is to include the user object in every route handler that returns a view like so:

res.render('viewname',{user:'Joe Bloggs'})

This approach is cumbersome and error prone as the developer is sure to forget to pass this data in at some point.

The logical place to put this is in the layout view which is shared by all your other views. Think _ViewStart.cshtml in an ASP.NET MVC application.

doctype html
html
  head
    title= title
  body
    div(style="float:right") Hi #{user.name}
    block content

In the code above you can see that we are referencing a user object to output the name of the currently logged in user. All views in an express application have access to an implicit variable called locals which hangs off the response object. Jade allows us to access data hanging off locals without referencing it directly.

So #{user.name} is equivalent to #{locals.user.name}

We need to load up this data for all routes and we can do this in our express startup file, app.js, index.js etc.

var app = express();
app.use(function (req, res, next) {
  // locals is available to all our views, including the layout, and because
  // this middleware is fired for all routes we are setting up the user for
  // every view.
  res.locals.user = { name: 'Joe Bloggs' };
  next(); // don't forget this or every request will hang here
});

That’s it, you now have shared data accessible from all your views.
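If you want to see the mechanics without spinning up a server, here's a stdlib-only sketch that mimics how express threads res.locals through a middleware chain and into the view. To be clear, this mini "framework" is entirely hypothetical, not express source; it just illustrates the flow.

```javascript
// Toy middleware chain mimicking how express exposes res.locals to views.
const middlewares = [];
function use(fn) { middlewares.push(fn); }

// Same shape as the express middleware in the post: set locals, call next().
use(function (req, res, next) {
  res.locals.user = { name: 'Joe Bloggs' };
  next();
});

function handle(req, render) {
  const res = { locals: {} };
  let i = 0;
  function next() {
    const fn = middlewares[i++];
    if (fn) fn(req, res, next);
    else render(res.locals); // the "view" sees locals implicitly
  }
  next();
}

// A "view" reading user.name, just like #{user.name} in the jade layout.
handle({}, function (locals) {
  console.log('Hi ' + locals.user.name); // prints "Hi Joe Bloggs"
});
```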

I hope this post has been of some use. An incredibly simple solution but if like me you’re often switching between ASP.NET MVC and Node/Express, you sometimes need to take a step back and remember to “think inside the framework”.

Digging into remote branches in git

Standard

This post assumes a basic understanding of git and the principles of commits, branches and remote repositories. I wrote up this post after playing around with remotes beyond just configuring remote tracking branches for some unrelated work.

It looks at how a remote tracking branch is set up and might be useful to anyone confused by what a remote tracking branch really is. It also helps clear up some of the confusion regarding how git "connects" your local and remote repositories.

As this post is based off some notes I was taking for some unrelated work it only covers the porcelain, it doesn’t dig into anything regarding the reflog.

So what are we going to be doing?

  1. Creating a couple of local repositories. One of them will be our remote but it’ll all be configured using local folders.
  2. Setting up one of the repositories to use the other as a remote.
  3. Setting up a remote tracking branch for master.
  4. Seeing how we can actually have a local feature branch configured to pull from the same remote branch as our master. This demonstrates that there isn’t a 1-1 relationship between master on your local machine and master on the remote server.  And what is master anyway?
  5. Changing the remote tracking branch for the local feature branch.

You can follow along by simply reading the explanatory text and executing the command given immediately beneath. In some circumstances I've included the terminal output for further discussion. These commands assume you're working in a *nix environment, but if you're on Windows you can adjust the paths accordingly. Better still, just fire up git bash, which was most likely installed as part of your git install. For terminal (command line) work, bash is so far ahead of cmd and powershell.

First let’s setup our folder structure.  The following command(s) will create the required folder structure under your home directory and navigate into it.

mkdir ~/remotes-tutorial && cd ~/remotes-tutorial
mkdir r1 && mkdir r2
cd r1

Create a new git repo in the r1 folder you're now in

git init --bare

Check the manpage for details on the --bare option.

Change to the r2 directory

cd .. && cd r2

Initialise another git repo here (notice we don't use --bare).

git init

You’ll see that we have no remotes configured.

git remote

Configure this repo to use r1 as a remote

git remote add origin ~/remotes-tutorial/r1

Let’s check what this has done

git remote show origin

The output in the terminal will show we now have the r1 repository configured as a remote, but HEAD branch: (unknown) shows we have no tracking branches yet. That is, none of our local branches have been based off of a remote commit. This is a significant part of understanding tracking branches. A branch does not exist in git in the same way as it does in Mercurial, SVN or TFS etc. A branch is merely a named commit.
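"A branch is merely a named commit" can be seen directly on disk: a local branch is just a tiny file under .git/refs/heads containing a commit SHA. Here's a quick demonstration in a throwaway repo; the temp directory, identity flags and default-branch config are my own additions so it runs cleanly anywhere.

```shell
# A branch is just a file naming a commit. Prove it in a scratch repo.
DIR="$(mktemp -d)"
cd "$DIR"
git -c init.defaultBranch=master init -q
touch f
git add .
git -c user.email=demo@example.com -c user.name=demo commit -qm 'c1'
cat .git/refs/heads/master   # a single commit SHA...
git rev-parse master         # ...identical to what git resolves master to
```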

When you "connect" your local branch to a remote branch you are merely basing your local branch (the next commit) off of a commit on the remote branch, which is reflected in your local repository as the remote tracking branch. It is this tracking branch that your local branch is based on.

When you see those messages in the terminal showing the number of commits you are ahead or behind the remote, git is merely counting the number of commits on your local branch since you merged with the local tracking branch. This number will change when you do a git fetch, as the fetch updates the tracking branch to reflect any changes made to the remote branch. A pull is different in that it will do a fetch to the local tracking branch and then immediately try to merge it into your local branch.

OK, this is going off on a tangent. Let’s get back to where we were.

So we have our remote setup but no tracking branches. That is to say our local master isn’t tracking the remote master. Let’s set that up now by pushing our local master branch to the remote.

git push origin master

An error!
error: src refspec master does not match any

Don’t panic, any errors we see along the way are intentional. What this means is that git doesn’t know what master is. Remember a branch is just a named commit and we haven’t made any commits to our repo yet so somewhat unsurprisingly master doesn’t exist. Let’s prove this.

git branch

You should see nothing coming back in the terminal because we don’t have any branches yet or more specifically we don’t have any commits.

Let’s make a quick commit.

touch file
git add .
git commit -m 'initial commit'

Now lets see what branches we have

git branch

OK, we now have master. But why master? This name is simply the default name given to the branch when the first commit is made. If we’d wanted to we could have created a branch called wibble before making the first commit and master would never exist!

OK, so now let’s get back to pushing our master branch to the remote.

git push origin master

Now let’s see what our remote looks like in relation to our master branch

git remote show origin

Fetch URL: /Users/darren/remotes-tutorial/r1
Push URL: /Users/darren/remotes-tutorial/r1
HEAD branch: master
Remote branch:
master tracked
Local ref configured for 'git push':
master pushes to master (up to date)

The key thing here is that our master is now tracking master in the remote repo. If we wanted to we could checkout the remote tracking branch. But remember, we’re checking out our local copy of the remote (The D in DVCS) and again, this is just a named commit in the repository. No changes we make here will affect the actual remote repository.

git checkout origin/master

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

git checkout -b <new-branch-name>

HEAD is now at e764596... initial commit

Detached HEAD? Sounds painful. The warning message is quite self explanatory, but essentially it tells us we aren't on a branch at all, so any commits we make here will be discarded unless we create a new branch using this commit as our starting point. A branch (and that includes the remote tracking branch origin/master) is simply a commit within the repository.

Let’s go off piste a little here now. Remember, our local master is already tracking origin/master but we can still base a new branch off this branch because it’s just another commit in the repository.

Let’s assume we want to make some changes…

git checkout origin/master -b new_feature

Branch new_feature set up to track remote branch master from origin.
Switched to a new branch 'new_feature'

This creates a new branch called new_feature which has been set up automatically to track origin/master. Let's look at how our remote is now configured

git remote show origin

Fetch URL: /Users/darren/remotes-tutorial/r1
Push URL: /Users/darren/remotes-tutorial/r1
HEAD branch: master
Remote branch:
master tracked
Local branch configured for 'git pull':
new_feature merges with remote master
Local ref configured for 'git push':
master pushes to master (up to date)

As you can see, our new_feature branch tracks origin/master when we pull but, significantly, it isn't configured to allow us to push to origin/master. Only one local branch (the ref in the message above) is configured to push to remote master, which makes sense.

This now means that whenever we do a git pull origin master it will be merged automatically into both the master and new_feature branches. Typically this isn't something you'd want to do, especially on master. However, it shows there isn't a one-to-one relationship between a remote branch and a local branch. As I've alluded to a few times in this post already, a branch is nothing more than a named commit within the repository.

Let’s make a change on new_feature and push to the remote to see what happens. Remember, we have to make the commit because, repeat after me…”A branch is just a named commit” and without any commits on new_feature it doesn’t really exist (other than in the reflog but we’ll cover that little gem in another post).

touch fileonnew_feature
git add .
git commit -m 'added new file'

git push origin new_feature

Now let's look at the remote again.

git remote show origin

Fetch URL: /Users/darren/remotes-tutorial/r1
Push URL: /Users/darren/remotes-tutorial/r1
HEAD branch: master
Remote branches:
master tracked
new_feature tracked
Local branch configured for 'git pull':
new_feature merges with remote master
Local refs configured for 'git push':
master pushes to master (up to date)
new_feature pushes to new_feature (up to date)

OK, we can see that new_feature now exists in the remote and we're tracking it. However, notice that new_feature is configured to merge from remote master when we do a git pull. This is because, if you remember, we based new_feature off of origin/master. Although there is nothing wrong with doing this, it's not what we want, so let's see how we can change this behaviour so that new_feature tracks changes to remote new_feature when we pull.

git branch new_feature --set-upstream-to origin/new_feature

Ok now let’s look at how our remotes are setup

git remote show origin

* remote origin
Fetch URL: /Users/darren/remotes-tutorial/r1
Push URL: /Users/darren/remotes-tutorial/r1
HEAD branch: master
Remote branches:
master tracked
new_feature tracked
Local branch configured for 'git pull':
new_feature merges with remote new_feature
Local refs configured for 'git push':
master pushes to master (up to date)
new_feature pushes to new_feature (up to date)

Excellent, we have new_feature tracking origin/new_feature for push and pull.
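The whole walkthrough condenses into one runnable script. I've swapped ~/remotes-tutorial for a temp directory, added inline commit identity and a default-branch setting so it runs cleanly anywhere; everything else mirrors the steps above.

```shell
# End-to-end version of the remote tracking branch walkthrough.
set -e
ROOT="$(mktemp -d)"
git -c init.defaultBranch=master init --bare -q "$ROOT/r1"   # the "remote"
git -c init.defaultBranch=master init -q "$ROOT/r2"          # our local repo
cd "$ROOT/r2"
git remote add origin "$ROOT/r1"
touch file
git add .
git -c user.email=demo@example.com -c user.name=demo commit -qm 'initial commit'
git push -q origin master
# Base a feature branch off the remote tracking branch...
git checkout -q origin/master -b new_feature
touch fileonnew_feature
git add .
git -c user.email=demo@example.com -c user.name=demo commit -qm 'added new file'
git push -q origin new_feature
# ...then repoint its upstream from origin/master to origin/new_feature.
git branch new_feature --set-upstream-to origin/new_feature
git remote show origin
```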

If there is one thing you should take away from this post it’s that branches are just named commits. I hope this post has helped you understand a little more about how remote tracking branches work and their relationship to local branches.

Did I mention, “branches are just named commits”? :-)

Have fun!

Alternative Blog

Standard

For anyone interested I also maintain another technology blog over at daxaar.github.io. It started primarily as a playground for understanding GitHub pages and Jekyll and a desire to jump ship from WordPress. I’m still undecided on where I’ll eventually end up but feel free to have a read.

Cannot access LocalDb from Visual Studio 2015 Community Edition

Standard

Today I was trying to spin up a quick Web API for an angular project I’m working on in a fresh install of Visual Studio 2015 using Entity Framework Code First.  Unfortunately after creating a simple model, when I tried to update-database to make sure all the moving parts were in a good state I was seeing the following error:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 50 – Local Database Runtime error occurred. The specified LocalDB instance does not exist.)

After some investigation, and questioning whether LocalDB was installed at all (it was), I discovered this error is caused by the connection string created by Entity Framework targeting an instance called MSSQLLocalDB, which won't exist. I believe the error occurs because VS 2015 sees that LocalDB is already installed and therefore doesn't create this new instance during setup. The fix is to simply replace the instance name MSSQLLocalDB with v11.0 in DefaultConnection in web.config.
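For reference, the corrected DefaultConnection entry ends up looking something like this. The database file and catalog names below are assumptions based on a typical EF template; only the Data Source instance name is the actual fix.

```xml
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Data Source=(LocalDb)\v11.0;AttachDbFilename=|DataDirectory|\aspnet-MyApp.mdf;Initial Catalog=aspnet-MyApp;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```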

Fixing Angular injector unpr Unknown provider

Standard

Error: [$injector:unpr] Unknown provider: SomeServiceProvider <- SomeService <- MainController

TL;DR – Assuming you’ve checked you aren’t duplicating the module definition. Make sure you’ve included the service module in the dependencies for the module containing the controller you’re unable to inject into.

angular.module('MyApp',['other.module.here']).controller(...);

A little more detail…

You find yourself staring at this error in the dev console and every answer on stackoverflow points to having declared the module incorrectly by redefining it rather than importing.

Let’s look at how we arrive at this error in the first place.

This is what you do to declare a new module and you must only do this once across your entire app.

angular.module('MyApp',[]);

Note the [] at the end. Its inclusion means you're defining a new module, along with its dependencies supplied inside the brackets. If the MyApp module has already been defined in another file it will be overwritten, thereby removing from the module any components defined in those files.
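This getter/setter distinction is easy to simulate. The toy registry below is entirely hypothetical (not angular source) but shows why redefining a module with [] silently wipes earlier registrations.

```javascript
// Toy registry mimicking angular.module's getter/setter behaviour.
const registry = {};
function module(name, deps) {
  if (deps !== undefined) {
    // Setter form: (re)define the module, discarding any previous definition.
    registry[name] = { deps: deps, components: [] };
  } else if (!registry[name]) {
    throw new Error('Module ' + name + ' is not available');
  }
  const mod = registry[name];
  mod.controller = function (n, fn) { mod.components.push(n); return mod; };
  return mod;
}

module('MyApp', []).controller('MainController', function () {});
module('MyApp', []); // oops: setter form again, earlier components are gone
console.log(module('MyApp').components.length); // prints 0
```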

So what is the correct way?

Once you've declared your module, subsequent references to it for registering services, controllers etc. must use the following form, sans dependency brackets (unless the module genuinely has dependencies, which is the key to this post!):

angular.module('MyApp');

It’s this common mistake that all my google searches suggested was the cause of my problem after creating my first service and trying to inject it into a controller. However, after much investigation and head scratching this wasn’t the case.

Here's my app.js containing my application's single controller and the only place where the module 'MyApp' was defined. Believe me, I checked this several times before concluding the duplicate module definition wasn't my problem.

angular.module('MyApp', [])
  .controller('MainController', MainController);

MainController.$inject = ['SomeService'];

function MainController(someService) {
  //...
}

And here’s my myservice.js containing the one service in my app that I wish to inject.

angular.module('another.module')
  .service('SomeService', SomeService);

function SomeService() {
  //...
}

Strangely, unit tests verifying the injected service were passing. It turns out, somewhat obviously after the fact (isn't that always the case), that I had to inject the service module into the 'MyApp' module as a dependency. So many answers online talk about removing the dependency brackets from the module definition that I couldn't see the wood for the trees.

So, to fix this issue all I had to do was:

Change this:

angular.module('MyApp',[])

to this:

angular.module('MyApp',['another.module'])

Configuring ssh keys

Standard

In this post I’ll explain how to setup ssh keys on a Mac so you no longer have to enter a password when connecting to a remote machine (generally *nix based) using ssh.

If you don’t have homebrew installed consider doing so now. To quote the authors: “It’s the missing package manager for OSX”. Whilst not entirely necessary it will allow us to easily install the prerequisites.

  1. Install ssh-copy-id, which allows us to upload the public key to the remote machine more easily.

brew install ssh-copy-id

  2. Create the public and private key pair locally, providing an optional passphrase. The pair gets stored under ~/.ssh. Specifying a filename is optional.

ssh-keygen

  3. Now we need to put the public key on the remote server. The tool we installed in step 1 allows us to do this easily.

ssh-copy-id user@serverip-or-name

  4. That's it! ssh onto the machine as the user you specified in step 3 and, if all went well, you'll connect without having to provide a password.
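For scripted setups you can run the key generation step non-interactively. The ed25519 key type, empty passphrase (-N '') and output file name below are my own choices for the demo; run interactively, plain ssh-keygen prompts you for all of this.

```shell
# Non-interactive key pair generation into a temp dir (sketch; paths are assumptions).
DIR="$(mktemp -d)"
ssh-keygen -q -t ed25519 -N '' -C 'demo key' -f "$DIR/id_demo"
ls "$DIR"   # id_demo (private key) and id_demo.pub (public key)
```

If ssh-copy-id isn't available on your platform, appending the .pub file's contents to ~/.ssh/authorized_keys on the remote machine by hand achieves the same thing.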