Node.js on the Road: British Gas Connected Homes: Technical Overview

Node.js on the Road is an event series aimed at sharing Node.js production user stories with the broader community. Watch for key learnings, benefits, and patterns around deploying Node.js.

A technical overview of Node.js at British Gas: the various technologies they're using and how Node.js is working out for them.

Luke Bond, Senior Consultant

My name is Luke Bond. As Rudi said, I work for YLD, and I'm working with Rudi on a project for British Gas. I'm just going to give a bit of a technical overview of how we're using Node, what other technology we're using, and that sort of thing. It was really interesting, good to hear the talks by these guys about bringing Node into big companies and things like that.

Before I worked at YLD, I worked for Sony Computer Entertainment. It was very much a Java shop, working on services for games, fairly large-scale Java services, and I sort of oversaw the introduction of Node.js into things there. So I remember that face; I've seen it many times when I've done this, people going, "What? Do not."

So I know what some of that is like. Yeah, things have gone well. So the project — I always mix up Rudi and Nuno, you know, when I say Nuny or Rudo; there you go, now I'm doing it again. The components of this project we're working on are: we have a Node API server; we have a back end, what we call a data transformation pipeline, where data comes in, data gets transformed, pipes, pipes, pipes, and goes somewhere else; and we also have mobile and web clients, and we're doing continuous deployment, as Rudi said. So I'm going to talk about each of these things briefly.

Our Node API server is fairly standard stuff for anyone who's done this kind of thing. It's a Restify server; it doesn't have a web component, so we're not using Express, we're using Restify. It has OAuth2; we're using the restify-oauth2 module. We're using Bunyan for logging, and we're using Joi, which is part of Hapi. Joi is excellent, I recommend it, it's pretty good. If you haven't used it, it's for validating, say, the JSON objects sent to your API; you can just use Joi to validate them. The database we're using behind our API server is Couchbase at the moment. We've just recently switched to that, so it's kind of new for us. We were using PouchDB before, which is also good, but it's an embedded thing, and we wanted something that scales a bit bigger, I suppose.

In our mobile app, we have a requirement for offline syncing: users on mobile need to be able to continue to do their stuff when they're offline, so we want that to work against a local database and synchronize when they come back online. We're using Couchbase Sync Gateway for that.

There is a data transformation pipeline. It's mostly Node.js, but it has a number of moving parts in it: some of it is in C, some of it is in R, some of it is in Java (I'll come back to why a bit later), and essentially it's because of where those bits of code come from that we're using them. This pipeline is all connected together with ZeroMQ, which is great fun; I don't know if you've ever played with ZeroMQ, but it's good, a lightweight sort of messaging thing. We also have some parts of the pipeline where we've needed a traditional persistent queue type thing, so we'll probably use RabbitMQ; we're tossing up between RabbitMQ and one or two other things at the moment. One of the outputs of this pipeline goes somewhere for use in data analytics and data science, and some of it is also being posted to our API server.

Most of this is JSON, except for some of the C parts, which are different, and at the moment we're using Couchbase here as well for the back end. With the clients, we're using Angular for the web clients and Apache Cordova for the HTML mobile apps, and, as we have the same offline sync requirement there, we will be using some sort of CouchDB-like thing, like PouchDB or maybe Couchbase Lite, to synchronize back to the server when the signal comes back again. And as Rudi said, we're doing continuous deployment. We're using Jenkins; I have some feelings about Jenkins, mind you, but it's solid, so it is what it is.

We're using that with the Gulp task runner to do all our testing: integration testing, unit testing, functional testing, that sort of thing, plus performance and load testing and other things that go on in there. We're using ESLint for linting; we're using Mocha, Karma, and SuperTest; JMeter for that kind of performance and load test stuff; and also Protractor for some of the UI stuff.

One of the things I think is really cool about the way we're doing this is that our Jenkins stuff runs in Docker containers, so you're able to bring up an environment and then tear it down, and that's repeatable, and you can do it on your laptop as well. You can have your whole production stack running on your laptop, which is really useful; you can just test things end to end. It's really good. We're doing that with Docker and with fig, and we're also using Nodejitsu's private npm, which is working very well for us. It's partly good to have something so that you're not relying on the public npm in case it goes down while you're trying to deploy stuff, if you're using it for deploying; we're also putting all our modules in there, stuff like that, so that's cool.
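As an illustration of that laptop-sized stack, a minimal fig configuration might look something like this. The service names, ports, and image are hypothetical, not the project's actual setup (fig was later renamed Docker Compose, with much the same file format):

```yaml
# Hypothetical fig.yml: an API container linked to a database container.
api:
  build: .
  ports:
    - "8080:8080"
  links:
    - db
  environment:
    - DB_HOST=db
db:
  image: couchbase/server
```

With a file like this, `fig up` brings the whole environment up and `fig kill` tears it down again, which is what makes the end-to-end testing on a laptop repeatable.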

Also, the new web features, like the pages for individual packages with their dependencies and licences, are quite nice. Some of the things on this slide are still a bit in flux because we're not in production yet, but the way we're going at the moment is that we're going to be using Docker. That's another thing that's possibly even more scary for the big companies than Node.js; at least it's 1.0 now, which helps that conversation. And we're going to be running on CoreOS, which, if you don't know it, is a Linux distro that's quite stripped down, because you don't necessarily need a massive Linux distro when you're going to be building up your stack in Docker anyway, along with their tools, fleet and etcd.

If you're not familiar with those, etcd is like ZooKeeper, or doozerd, or maybe Consul; it's a distributed key/value store, highly consistent. And fleet is a tool for deploying your Docker container services on a CoreOS cluster. Imagine something like systemd at a cluster level: you say, "I want to start this," and it might start here, it might start there, and it uses etcd to back it.

We're using HAProxy, and we're doing dynamic configuration on it so we can have seamless deploys. Say you've got your service behind a load balancer with three boxes on version X, and you want to bring up version Y: you bring the new ones up, then you reconfigure the load balancer to move requests across to them, and then shut the old ones down when they've finished their requests. That's quite cool.

We're also working on something on top of these things to tie it all together, because if you've used fleet before, you'll know it can be tedious: you end up writing your unit files manually, so if you want to run three instances of a server, you need three different unit files; you can't write one and run it three times. There's a layer on top that needs to be built to generate these things automatically to make that a bit smoother, so you end up with something like a Docker PaaS, but not quite. We're working towards something like that. And the great thing it gives you, with Docker and continuous deployment — this is not new, I suppose — is that if you can make your deployments as seamless, easy, and quick as possible, I think you can be more agile and have more confidence.
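A sketch of that generation layer, stamping out N unit files from one template so you don't write them by hand, could look like this. The service name, unit contents, and Docker image are all hypothetical; a real version would also submit the generated units to fleet rather than just producing the text.

```javascript
// Sketch: generate N systemd/fleet unit files from one template,
// since plain fleet needs one unit file per instance.
// The name and image below are hypothetical.
function makeUnit(name, index, image) {
  return [
    '[Unit]',
    `Description=${name} instance ${index}`,
    '',
    '[Service]',
    `ExecStart=/usr/bin/docker run --rm --name ${name}-${index} ${image}`,
    `ExecStop=/usr/bin/docker stop ${name}-${index}`
  ].join('\n');
}

function makeUnits(name, count, image) {
  const units = {};
  for (let i = 1; i <= count; i++) {
    units[`${name}@${i}.service`] = makeUnit(name, i, image);
  }
  return units;
}

// Three instances of a hypothetical api service:
const units = makeUnits('api', 3, 'example/api:latest');
console.log(Object.keys(units));
```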

It's quick to deploy, it's quick to roll back, and that breaks down the resistance to continuous deployment, so you're able to get things out quickly.

And we've got some cool monitoring stuff built by my colleague Tom Gallacher, basically monitoring all the things: CPU, memory, latency, requests, all that kind of stuff. It's sent over UDP to a service; we're using statsd, which is a Node.js daemon for collecting all that and aggregating it, and then it gets sent on to Graphite so we can look at graphs of all that stuff and get quite deeply into what's going on in all parts of the app, which is quite cool. Perhaps later we'll be using Graphene for the front end; it's a bit nicer, and that's cool too.

And lastly, a few thoughts on Node.js. A lot of this touches on what Rudi was saying earlier. As you probably guessed, because you guys are experienced with Node, a lot of the stuff we're doing is very well suited to Node.js: lots of asynchronous I/O going on in a number of data services, data being sent from one place to another, and Node.js is perfect for that, like the API services and the data pipeline type stuff.

And for the bits where it's either not so well suited, or we have other reasons not to use Node, we're just using something else; don't be afraid to do that, basically. That reason is largely political or legacy: some person's written that code, or because of where it's come from in that part of the business, it's written in what it's written in, rather than feeling you have to rewrite everything in Node. And some of it is quite CPU-intensive as well, so it's maybe best left in C or whatever it's in.

Node.js is cool and we've built stuff fast; it's easy, you guys all know that, that's why we're here. This is almost a micro-services type of architecture, sort of a buzzword, but we have lots of little bits and pieces that we can bring up and configure automatically with etcd, and they just pass data from one place to another, so it's all like you [xx] stuff. But, and this is somewhat what Rudi was saying earlier, despite Node.js being easy and fast, if you're not doing it properly and you don't understand how it works, there can be some dangerous gotchas; it's actually quite different. When you first play with it, everything's really easy, but if you don't get to the bottom of it and understand how it works, it can lead to having errors and not knowing you have them, memory leaks, strange flow-control bugs. You need to learn how to design things well, and that really comes with experience, basically.

So if you're starting out with something new and your people are new to Node, don't be afraid to get some training or whatever and get that right, because you don't want to be finding out in production that, oh, actually, you can't do that, because that causes some problems. So that's all. Thank you very much; it's great to be here on stage with TJ and these guys.

I had a great time at LXJS, thank you everyone, and thank you David for organizing it. There are my contact details; that's a zero, not an O, in my Twitter handle. There's some musician, Henry Bonds, I don't know who it is, who has that one. Anyway, cool, thanks very much.
