Obtaining TypeScript Definition Files


typings or the types npm organisation?

In this post I’ll look at the current options available to us for managing definition files (d.ts) in our TypeScript projects.

I recently started looking at the new features in TypeScript 2.0 and was surprised to find that tsd has been deprecated. The deprecation isn’t part of the 2.0 release itself, as tsd is/was just external tooling, but it quickly became a part of my refresher that I thought was worth sharing here.

As a quick recap, tsd was a command line tool that allowed you to pull type definition files for external libraries (lodash, jquery etc.) into your project.

I discovered we now have two options available to us, which initially caused me some confusion as I wasn’t sure if they were related in some way. They’re not.


typings

The typings project is a community-supported option hosted on GitHub that has been the primary replacement for tsd. It allows you to pull in definition files from a number of sources and continues to be supported. It has a solid upgrade path if you’re moving a project from tsd.

View the README on the project repository for more info.

@types (organisation on npm)

The @types organisation on npm has been created by Microsoft in response to the community’s feedback that obtaining definition files has been troublesome. At the time of writing the organisation contains 2247 separate packages. Since this is a regular npm registry you just install the definition files as regular npm packages using the @types prefix for the org:

npm install @types/lodash

The registry is populated by a publisher service that continually pulls the definitions from the DefinitelyTyped repo. You could also use the service to pull definitions into a private registry if required.

Going forward I think I’ll be using @types. However, the new angular-cli uses typings, which is how I was introduced to its existence. It’ll be interesting to see if they continue to use typings or move over to @types.

I’ll be publishing another post soon describing how to configure a new TypeScript project to use @types.

Further reading:

Microsoft Blog Post – The Future of Declaration Files

Azure Code Samples


We’re certainly not short of demo solutions and todo apps, but I was pleased to receive an email today with details of the now comprehensive list of Azure code samples available for download here.

Of particular interest to me are the identity management and authentication samples, which I’ll be hooking into Dynamics CRM 2016, and of course anything Node related will always get my attention.

What is __proto__ in JavaScript?


If nothing else it has an amusing name: it’s often pronounced “dunder proto” due to the double-underscore notation it borrows from Python.

The __proto__ property is used to access the prototype chain for an object instance. Don’t know what a prototype chain is? Go take a look here.

So where does it come from? It gets created on any new instance during construction. If construction is done using a constructor function, i.e. by using the new keyword (var a = new Person()), it will point to the prototype of the constructor function, in this instance Person.prototype. If done using object literal notation (var b = {}) it will point to Object.prototype. Note that as almost everything in JavaScript is an object, objects can follow their prototype chain back to Object.prototype.

You can also gain access to the prototype using Object.getPrototypeOf(...). MDN provides more on this. You should give this a read if you haven’t already to understand the history of this property.
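As a quick sketch of those alternatives, Object.getPrototypeOf reads the chain without touching __proto__, and Object.create lets you set a prototype explicitly at creation time:

```javascript
// Object.getPrototypeOf is the recommended alternative to the legacy
// __proto__ accessor for reading an object's prototype.
var proto = { greet: function () { return 'hello'; } };

// Object.create builds a new object with an explicit prototype,
// so we never need to touch __proto__ at all.
var obj = Object.create(proto);

Object.getPrototypeOf(obj) === proto;          // true
Object.getPrototypeOf(obj) === obj.__proto__;  // true
obj.greet();                                   // 'hello' - found via the chain

// And the chain still leads back to Object.prototype:
Object.getPrototypeOf(proto) === Object.prototype; // true
```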

The following code demonstrates the setup of two very simple object literal and constructor function prototype chains.

//Object literal
var o = {};
o.__proto__ === Object.prototype; //true
o.prototype === undefined;   //true

//Constructor function
function Shape(){}
var circle = new Shape();

circle.__proto__ === Shape.prototype;   //true
circle.__proto__.__proto__ === Object.prototype; //true
Shape.__proto__ === Function.prototype;   //true

More reading.

I’ve already linked them above, but as always the MDN docs are a good place to start. There is also a great Stack Overflow post with some really valuable insight amongst the many answers and comments.

Solving a failed npm install on the Raspberry Pi


TL;DR If npm install is failing on a package you are convinced is no longer a dependency of your current package.json, try clearing out your node_modules folder or take a look at the npm prune command.

I know this post is quite a long one, but most of the content was written in real time as I was working the problem. I’m a voracious note taker, so I figured I’d just write up my notes as a blog post as I went along.

I have a little express website running on my Raspberry Pi B+ at home. I deploy to this site using a post-receive git hook that deploys the master branch and runs npm install --production to ensure the latest dependencies are installed. Notice that I use the --production switch to ensure devDependencies are not installed.

After trying to access this site following a push I found the site to be unavailable. I use forever to maintain the node process so I checked the logs and noticed a number of errors indicating new dependencies had not been installed.

I ssh’d into the server and ran npm install --production manually and saw that it was taking an unusually long time to complete, eventually failing due to a native module compilation failure.

Here are the new packages my latest commit had introduced:

  • dependencies
      • passport
      • passport-local
  • devDependencies
      • browser-sync
      • gulp
      • gulp-nodemon

As you can see, passport and passport-local are the only packages that should be installed. However, after tracing the failed module up the dependency tree shown below, I could see that npm was trying to install browser-sync, whose dependency graph relies on native modules.

bufferutil -> ws -> engine.io -> socket.io -> http-proxy -> foxy -> browser-sync

At this point I checked online to see if there were any known issues but only came up with this GitHub issue, which is old and closed.

Let’s see what the current npm docs say about the --production switch:

With the --production flag (or when the NODE_ENV environment variable is set to production), npm will not install modules listed in devDependencies.

That seems consistent with my understanding. So let’s make sure we’re on the latest version of npm and see if that fixes it.

My npm version (at the time of posting) is 3.3.12; let’s update with the utter brain f*ck that is self-updating software.

sudo npm install npm -g

The update has taken us to 3.6.0, but having run npm install --production again I’m getting the same result with a failed compile.

So I know that the dependency causing the problem is browser-sync. Let’s confirm this by removing it from package.json. When I now run npm install it shouldn’t matter whether it does or doesn’t install devDependencies as the package causing the problem isn’t in either section.

To my surprise this still fails! OK npm install is clearly not just running through the package.json and installing the specified packages and their dependencies as I expected.

The next logical step (yes I know, many of you may have arrived at this point long ago) is to delete my node_modules folder and rerun npm install.

Success! Everything installs as required.

What’s going on? Well, it would seem that npm install doesn’t just install the packages and dependencies defined in your package.json. Instead it treats your package.json as simply the top level of the dependency graph. Once it’s completed the install for all the packages in your package.json, it then enumerates all of the packages under node_modules and repeats the process for their package.json files.

This behaviour makes sense and, more importantly for me, explains how I ended up with this problem. My git hook script did an npm install without the --production switch. Fixing that as part of my analysis solved the problem for future deployments, but it left my node_modules folder in an invalid state because the failing module (browser-sync) was still in there.

Further reading on this problem shows there is an npm command called prune that will clear all dangling dependencies from your node_modules folder based on your current package.json. In fact, we can do the following to clear out just our devDependencies, a common requirement if we’ve inadvertently installed them on the production box…as if!

npm prune --production

So, problem solved.  I hope this post has been useful or at least mildly entertaining for the more initiated out there.

Making data available to all express views


In this post I’m going to show you how to make data available to all of your views in an express 4 website. I’ll be using jade in my sample code but this solution is not specific to any view engine.

Imagine we want to display the name of the logged in user in the top right hand corner of the screen on every page. One way you could do this is to include the user object in every route handler that returns a view like so:

res.render('viewname',{user:'Joe Bloggs'})

This approach is cumbersome and error prone as the developer is sure to forget to pass this data in at some point.

The logical place to put this is in the layout view which is shared by all your other views. Think _ViewStart.cshtml in an ASP.NET MVC application.

doctype html
title= title
div(style="float:right") Hi #{user.name}
block content

In the code above you can see that we are referencing a user object to output the name of the currently logged in user. All views in an express application have access to an implicit variable called locals which hangs off the response object. Jade allows us to access data hanging off locals without referencing it directly.

So #{user.name} is equivalent to #{locals.user.name}
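Conceptually, you can think of res.render as merging res.locals with whatever data the route handler passes in. The following is a simplified model of that idea, not Express’s actual implementation:

```javascript
// Simplified model of why res.locals reaches every view: the view engine
// receives res.locals merged with the per-call render data.
// This is an illustration, not Express's real code.
function makeResponse() {
  var res = { locals: {} };
  res.render = function (view, data) {
    return { view: view, model: Object.assign({}, res.locals, data) };
  };
  return res;
}

var res = makeResponse();
res.locals.user = { name: 'Joe Bloggs' }; // set once, e.g. in middleware

var out = res.render('index', { title: 'Home' });
out.model.user.name; // 'Joe Bloggs' - the handler never passed user in
out.model.title;     // 'Home'
```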

We need to load up this data for all routes and we can do this in our express startup file, app.js, index.js etc.

var app = express();
app.use(function(req, res, next){
  // locals is available to all our views, including layout. Because this
  // middleware fires for every route, we're setting up the user for every view.
  res.locals.user = {name: 'Joe Bloggs'};
  next();
});

That’s it, you now have shared data accessible from all your views.

I hope this post has been of some use. It’s an incredibly simple solution, but if like me you’re often switching between ASP.NET MVC and Node/Express, you sometimes need to take a step back and remember to “think inside the framework”.

Digging into remote branches in git


This post assumes a basic understanding of git and the principles of commits, branches and remote repositories. I wrote it up after playing around with remotes, beyond just configuring remote tracking branches, for some unrelated work.

It looks at how a remote tracking branch is set up and might be useful to people confused by what a remote tracking branch really is. It also helps clear up some of the confusion regarding how git “connects” your local and remote repositories.

As this post is based on some notes I was taking for some unrelated work, it only covers the porcelain; it doesn’t dig into anything regarding the reflog.

So what are we going to be doing?

  1. Creating a couple of local repositories. One of them will be our remote but it’ll all be configured using local folders.
  2. Setting up one of the repositories to use the other as a remote.
  3. Setting up a remote tracking branch for master.
  4. Seeing how we can actually have a local feature branch configured to pull from the same remote branch as our master. This demonstrates that there isn’t a 1-1 relationship between master on your local machine and master on the remote server.  And what is master anyway?
  5. Changing the remote tracking branch for the local feature branch.

You can follow along by simply reading the explanatory text and executing the command given immediately beneath. In some circumstances I’ve included the output from the terminal for further discussion. These commands assume you’re working in a *nix environment, but if you’re on Windows you can adjust the paths accordingly. Better still, just fire up Git Bash, which was most likely installed as part of your git install. For terminal (command line) work, bash is so far ahead of cmd and PowerShell.

First let’s set up our folder structure. The following commands will create the required folder structure under your home directory and navigate into it.

mkdir ~/remotes-tutorial && cd ~/remotes-tutorial
mkdir r1 && mkdir r2
cd r1

Create a new git repo in the r1 folder you’re now in

git init --bare

Check the manpage for details on the --bare option.

Change to the r2 directory

cd .. && cd r2

Initialise another git repo here (notice we don’t use --bare).

git init

You’ll see that we have no remotes configured.

git remote

Configure this repo to use r1 as a remote

git remote add origin ~/remotes-tutorial/r1

Let’s check what this has done

git remote show origin

The output in the terminal will show we now have the r1 repository configured as a remote, but HEAD branch: (unknown) shows we have no tracking branches yet. That is, none of our local branches have been based off of a remote commit. This is a significant part of understanding tracking branches. A branch does not exist in git in the same way as it does in Mercurial, SVN or TFS etc. A branch is merely a named commit.

When you “connect” your local branch to a remote branch, you are merely basing your local branch (its next commit) off of a commit on the remote branch, which is reflected in your local repository as the tracking branch. It is from this tracking branch that you base your local branch.

When you see those messages in the terminal showing the number of commits you are ahead of or behind the remote, git is merely counting the number of commits on your local branch since you merged with the local tracking branch. This number will change when you do a git fetch, as you are then updating the tracking branch to reflect any changes made to the remote branch. A pull is different in that it will do a fetch to the local tracking branch and then immediately try to merge it into your local branch.

OK, this is going off on a tangent. Let’s get back to where we were.

So we have our remote setup but no tracking branches. That is to say our local master isn’t tracking the remote master. Let’s set that up now by pushing our local master branch to the remote.

git push origin master

An error!
error: src refspec master does not match any

Don’t panic, any errors we see along the way are intentional. What this means is that git doesn’t know what master is. Remember, a branch is just a named commit, and we haven’t made any commits to our repo yet, so somewhat unsurprisingly master doesn’t exist. Let’s prove this.

git branch

You should see nothing coming back in the terminal because we don’t have any branches yet or more specifically we don’t have any commits.

Let’s make a quick commit.

touch file
git add .
git commit -m 'initial commit'

Now lets see what branches we have

git branch

OK, we now have master. But why master? This name is simply the default name given to the branch when the first commit is made. If we’d wanted to we could have created a branch called wibble before making the first commit and master would never exist!

OK, so now let’s get back to pushing our master branch to the remote.

git push origin master

Now let’s see what our remote looks like in relation to our master branch

git remote show origin

Fetch URL: /Users/darren/remotes-tutorial/r1
Push URL: /Users/darren/remotes-tutorial/r1
HEAD branch: master
Remote branch:
master tracked
Local ref configured for 'git push':
master pushes to master (up to date)

The key thing here is that our master is now tracking master in the remote repo. If we wanted to, we could check out the remote tracking branch. But remember, we’re checking out our local copy of the remote (the D in DVCS), and again, this is just a named commit in the repository. No changes we make here will affect the actual remote repository.

git checkout origin/master

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

git checkout -b &lt;new-branch-name&gt;

HEAD is now at e764596... initial commit

Detached HEAD? Sounds painful. The warning message is self-explanatory, but essentially it tells us that any commits we make from here won’t belong to any branch, though we can create a new branch using this commit as our starting point. A branch (and that includes the remote tracking branch origin/master) is simply a commit within the repository.

Let’s go off-piste a little here. Remember, our local master is already tracking origin/master, but we can still base a new branch off it because it’s just another commit in the repository.

Let’s assume we want to make some changes…

git checkout origin/master -b new_feature

Branch new_feature set up to track remote branch master from origin.
Switched to a new branch 'new_feature'

This creates a new branch called new_feature which has been set up automatically to track origin/master. Let’s look at how our remote is now configured.

git remote show origin

Fetch URL: /Users/darren/remotes-tutorial/r1
Push URL: /Users/darren/remotes-tutorial/r1
HEAD branch: master
Remote branch:
master tracked
Local branch configured for 'git pull':
new_feature merges with remote master
Local ref configured for 'git push':
master pushes to master (up to date)

As you can see, our new_feature branch is tracking origin/master when we pull, but significantly it isn’t configured to allow us to push to origin/master. Only one local branch (ref in the message above) can be configured to push to a given remote branch, which makes sense.

This now means that whenever we do a git pull origin master it will be merged automatically into both the master and new_feature branches. Typically this isn’t something you’d want to do, especially on master. However, this shows there isn’t a one-to-one relationship between a remote branch and a local branch. As I’ve alluded to a few times in this post already, a branch is nothing more than a named commit within the repository.

Let’s make a change on new_feature and push to the remote to see what happens. Remember, we have to make the commit because, repeat after me…”A branch is just a named commit” and without any commits on new_feature it doesn’t really exist (other than in the reflog but we’ll cover that little gem in another post).

touch fileonnew_feature
git add .
git commit -m 'added new file'

git push origin new_feature

Now let’s look at the remote again

git remote show origin

Fetch URL: /Users/darren/remotes-tutorial/r1
Push URL: /Users/darren/remotes-tutorial/r1
HEAD branch: master
Remote branches:
master tracked
new_feature tracked
Local branch configured for 'git pull':
new_feature merges with remote master
Local refs configured for 'git push':
master pushes to master (up to date)
new_feature pushes to new_feature (up to date)

OK, we can see that new_feature now exists on the remote and we’re tracking it. However, notice that new_feature is still configured to merge from remote master when we do a git pull. This is because, if you remember, we based new_feature off of origin/master. Although there is nothing wrong with doing this, it’s not what we want, so let’s see how we can change this behaviour so new_feature tracks the remote new_feature when we pull.

git branch new_feature --set-upstream-to origin/new_feature

OK, now let’s look at how our remotes are set up

git remote show origin

* remote origin
Fetch URL: /Users/darren/remotes-tutorial/r1
Push URL: /Users/darren/remotes-tutorial/r1
HEAD branch: master
Remote branches:
master tracked
new_feature tracked
Local branch configured for 'git pull':
new_feature merges with remote new_feature
Local refs configured for 'git push':
master pushes to master (up to date)
new_feature pushes to new_feature (up to date)

Excellent, we have new_feature tracking origin/new_feature for push and pull.

If there is one thing you should take away from this post it’s that branches are just named commits. I hope this post has helped you understand a little more about how remote tracking branches work and their relationship to local branches.

Did I mention, “branches are just named commits”?🙂

Have fun!

Alternative Blog


For anyone interested I also maintain another technology blog over at daxaar.github.io. It started primarily as a playground for understanding GitHub pages and Jekyll and a desire to jump ship from WordPress. I’m still undecided on where I’ll eventually end up but feel free to have a read.