Node.js on the Road: Nate Fitch

Node.js on the Road is an event series aimed at sharing Node.js production user stories with the broader community. Watch for key learnings, benefits, and patterns around deploying Node.js.

Nate Fitch, Software Engineer

My name's Nate Fitch. I work for Joyent as Eric said, and I'm going to talk to you a little bit about Manta. Before I start I want to take a quick poll. How many of you run Node on Windows? You don't have to be embarrassed, it's OK. It really is. OK, how many people run it on Linux? OK, good.

Some BSD based system, you know Macs, whatever. OK, great. And then SmartOS. You guys are killing me. It's OK, it's OK, you saw the download numbers. Alright.

So Joyent is an exclusively SmartOS shop, so some of the things I'm going to show you today are only available on SmartOS, but it does work on some of the other ones, and I'll show you a compatibility matrix at the end. So a little bit about me: I work for Joyent, I actually came to Joyent to work on Manta, and by the time I joined, we needed to do a couple of things, so I worked on garbage collection and audit, and most recently I've taken over deployments for Manta. So, how many people are familiar with Manta?

Come on! OK, alright. So I'm going to get down into kind of some guts of Manta, but let me explain a little bit about what it is. So, Manta is Joyent's object store with built-in compute, and when I say built-in compute, I mean that compute runs on your data that's stored in Manta right where your data is, so you don't have to transfer your data to some other place to work on it. It is right there, and the interface that we give you to work on that data is Unix, so how many people are Unix nerds?

Alright. So, Manta, the way that you—so it's basically MapReduce bolted on, so the way that you actually form your MapReduce is literally sort, pipe, grep, uniq, and it is literally running those commands down in the zones. So I'm going to explain a little bit about what our architecture is, and from there I hope to give a small demo. Hopefully the demo gods will smile down on me tonight. OK, so what I want you to notice about the high-level architecture is how many places we have Node up there. So this is kind of your standard object store architecture: you have the front end up top that takes client requests, we cache some auth stuff to authenticate you, you go to an index to figure out where an object is on some storage node, and then the storage node all the way over there is where we actually store your object.

When you run a job in Manta to compute over your data, we have a job controller on the same physical host as the storage. And we have a whole bunch—how many people are familiar with virtual machines? VMs? Great. So what we have down on the compute nodes is a slab of Solaris zones. How many people are familiar with LXC?

Or Docker? Yeah, like that. So that's kind of Solaris's and SmartOS's version of virtualization, and so we have a slab of those compute nodes, and we literally mount your object from the storage zone into the compute zone, and we run your Unix command. Alright. And then of course we tie it all together down the pipeline.

Alright. So, I just wanted to highlight a couple of these places that—or, how we use Node. OK, so most people in this room, I assume, run Node as a server. So these are the places where we're running Node as a server; obviously our front end would be a server. How many people are familiar with Restify? So that up there, actually all of these places up here, are Restify servers.

We do that for DTrace. LDAP.js, if you know LDAP.js, that's this guy right here. So we use Node as a server. What else do we do? We use Node kind of as a controller for other things in our system. So in our auth cache, for example, we pull from LDAP and we push into Redis, so we pull down from that LDAP.js and we push into Redis. This job controller is a very interesting one: it both proxies all the data in and out of those compute zones when you're going back through the front door of Manta, and we also use it to mount the objects from the storage zone into the compute zones, which is low-level file system junk. So, sometimes I forget how close Node is to C, and sometimes it bites me in the face, and sometimes it bites them in the face.

This Node process here coordinates with ZooKeeper for master failover, so we have a Node process running next to each one of our Postgres databases that coordinates with ZooKeeper and will actually flip databases when one is detected to not be available. So those are places where we use Node as a controller. And then finally, some pieces of Manta are implemented in terms of Manta. So our ops box is where we kick off jobs down in Manta that do garbage collection; we actually run a MapReduce job to figure out what objects we can go garbage collect. So, actually, this thing is kicking off Node apps down here. I put it up here, but it's really Node all the way down to here. Anyway, we also do all our reporting and auditing through Manta as well. So those are the main ways that we use Node. We also use Node for CLIs, all of our CLIs are written in Node, the Manta deployment stuff, so actually managing our services across Manta,
we also use Node, and then of course one-off Manta jobs.

Now let's hope that the demo
gods smile on me, and I'm going to show you, all right, let me give you a little taste, and I promise I'm not here to sell Manta, although I am a little bit. All right, so what I'm going to do is—so this mfind command is going to find all the objects that are—so this is where we store all our—so Muskie—hold on, I've got to take a little side note. So Manta is a fish name that starts with an 'M', and all our internal services are also fish names that start with 'M', so Muskie is actually our front end, yeah, so that's where that Muskie comes from.

Ok, so I have 2,892 objects that I'm going to run through this compute job. Each of them was like 500 MB or something. Anyway it will, move, the screen? You just want me to maximize it? Is that what? It actually looks great on mine down here. Sorry. I just want to get, okay, better? That's fine.

So this turns out to be about 750 GB of data at the end, so let me, hopefully this still might paste, no something's lagging on my computer. Alright, so all I wanted to do was find all the 500 errors, so I'll talk a little bit about the logging that we use in just a second. So this is going to literally map, the map phase in my job is grepping for status codes that start with 50 which is all 500s, all internal errors. And then I'm going to cat as a reduce phase to just put it all in one place.
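The map phase he describes is effectively just a grep over each log object. As a rough sketch of what that filter does, in plain Node (the record shape here is invented for illustration, not Muskie's actual log format):

```javascript
// Filter Bunyan-style JSON log lines for 5xx responses, which is the
// same work the grep map phase does over each object in the job.
const lines = [
  '{"name":"muskie","res":{"statusCode":200},"msg":"handled: ok"}',
  '{"name":"muskie","res":{"statusCode":503},"msg":"handled: error"}',
  '{"name":"muskie","res":{"statusCode":500},"msg":"handled: error"}'
];

const errors = lines.filter(function (line) {
  const rec = JSON.parse(line);
  // Keep anything whose status code starts with "50", i.e. internal errors.
  return String(rec.res.statusCode).indexOf('50') === 0;
});

console.log(errors.length + ' internal errors found');
```

The reduce phase that follows is just `cat`: concatenate every map phase's matches into one output object.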

So I'm going to go ahead and kick that off, and this is the dashboard that we used to view Manta, and if the demo gods smile, you'll see this baby light up. Come on, light up. I know you want to. Come on demo gods, I know you love me. Oh, whew. Whoosh, alright, yay, alright. So what that did is, it's literally mounting all those objects in there and running those compute commands, so I'm not going to stand here and watch it the entire time.

I did it right before—actually I did it on just a day, a day's worth of logs, and it took like 20 seconds, and you can actually catch the dashboard because it only refreshes every 15 seconds. So, anyway, when I ran this before, it all took about two and a half minutes. OK. So, that's how we use Node, and a little bit of this is how we view Node.

Oh, I just wanted to point out that all of those little squares are zones. Like each one of those little squares is a zone, you can see they're resetting now, the yellow means they're resetting. There you go, so the job is basically done at this point. All right. So, most of you, I assume, are running Node as a server rather than a CLI or anything else, so I want to focus on quickly four technologies and give you another demo of these four technologies that we use to run our Node processes in prod. So Restify, Bunyan, JSON, MDB.

Restify is a Node framework for building RESTful APIs. Unlike other frameworks, our main focus in doing it was observability and control, so it has automatic DTrace, how many people are familiar with DTrace? All right, great, so DTrace; you can think of it as little probes that fire and then things on top that take them and can aggregate, and it's awesome, and it's production safe which is the important thing.

Bunyan is the logging framework. It's the logger itself, a tool for viewing logs and being able to extract the logs that you want, and also real-time log viewing with -p, which I'll demo. JSON, I only put this up here because I use it every day: all of our internal services return JSON, and so I'm always piping stuff into the json tool to extract the fields that I want.

So, for example, this first one will extract just the request ID from a stream of JSON, this next one will filter down to records where this underscore audit field is true, which is how Bunyan marks its log records, and then finally it can also set fields.
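Since the json CLI isn't installed everywhere, here is roughly what those three operations amount to, sketched in plain Node; the record contents and field names besides `_audit` are made up for illustration:

```javascript
// Roughly what the json CLI examples do: extract a field, filter on a
// condition, and set a field. Record contents are illustrative.
const records = [
  { req_id: 'abc-123', _audit: true,  msg: 'request handled' },
  { req_id: 'def-456', _audit: false, msg: 'debug chatter' }
];

// 1. Extract just one field from each record.
const ids = records.map(function (r) { return r.req_id; });

// 2. Keep only records where _audit is true, which is how Bunyan
//    marks its real request/response log entries.
const audits = records.filter(function (r) { return r._audit === true; });

// 3. Set a field on every record.
records.forEach(function (r) { r.seen = true; });

console.log(ids.join(','), audits.length);
```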

And then finally the Modular Debugger.
I'm going to demo this, but how many people are familiar with MDB?

Just think of it as a debugger. I'll show you what it does in a minute. All right, so demo time. And hopefully—so I'm going to use this Node app to demonstrate what's going on. So this is a full server, this is a full Restify server. Notice that I'm using, can people see that OK? So we're creating a logger that's going to go to /var/temp/demo.

This is on response, if I send error, it's going to error, if I send crash, it's going to process abort. Otherwise it's going to keep counters of status and total—success and total, sorry, and then this is actually where I start the server and that's it. So I'm going to use this to demo those other tools that I was showing, now I need to type.

All right, so the first thing we're going to do is start the server, obviously. Now I didn't actually print out any logs anywhere, so here, I just want to show you what a Bunyan log looks like. Now that doesn't look that great, so I'm going to pipe that through json, this is what I was talking about, bam. All right, so notice that you have a request, so you can see the parameters, user agent, headers, hosts, etc., etc., and our response was successful.

What I have is I have two agents running, one on my Desktop and one in my dev VM that they're both hitting the server. OK, so some of the things that I can do with—this is my cheat sheet. OK, so some of the things that I can do with Bunyan. Alright so let me just tail -f that log for you. Demo.log, all right, so there's a bunch of Bunyan stuff coming out.

Can you kind of see it cycle a little bit? But that's super ugly, so we can pipe that through bunyan, and we start getting pretty logs, right? So you see 200s go by, yeah, all right, so requests, responses, everything looks great. All right, but say I had something that, let me bring you back to the code for just a second.

See here, I'm debug-logging the params that come in. If I want to get at those while my service is running in production, I can do bunyan -p, no, I forgot, it requires permissions. So you can give it a specific pid, but I'm just going to do it across all Node processes. I don't know how you run your Node services in production, but we usually have an HAProxy and then many Node processes underneath it.

So it's nice to be able to aggregate all our logs in one place, so bunyan -p is really, really nice for that, and so let's just give debug logs a go. There you go. So now you see we're getting these debug logs right here, right? We can do other things like, how about the, hopefully I'm writing this right. Sorry, so how about filtering to where params is not undefined? That's only showing the debug logs, because those are the only records where that field isn't undefined, for example.
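Those bunyan conditions are just JavaScript expressions evaluated with `this` bound to each parsed record. A rough sketch of that mechanism in plain Node; the records and field values are invented for illustration:

```javascript
// Keep records for which a JavaScript condition is truthy, with `this`
// bound to the parsed record, the way bunyan's condition filtering works.
function filterRecords(records, condition) {
  // Compile the condition once, then evaluate it against each record.
  const test = new Function('return (' + condition + ');');
  return records.filter(function (rec) { return test.call(rec); });
}

const records = [
  { level: 20, params: { q: '1' }, remoteAddress: '127.0.0.1' },
  { level: 30, remoteAddress: '10.0.0.5' }
];

// Like the demo's "params is not undefined" filter.
const withParams = filterRecords(records, 'this.params !== undefined');

// Like filtering everything coming from one client IP.
const fromDev = filterRecords(records, "this.remoteAddress === '10.0.0.5'");

console.log(withParams.length, fromDev.length);
```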

Or, how about everything that's coming from remote address equals 127, how about everything that's coming from—double quotes vs single quotes—how about everything that's coming from a particular IP address? There we go, so that's everything coming from my dev VM versus my Mac. How many people have wanted to do that in production, like see all requests that are coming from a particular, oh don't worry you will.

If you don't, if you haven't wanted to do that yet, you will, you will. All right, so that's some of the stuff that you can do with bunyan. Let me just show you, how about just filtering out errors? So I'm going to go ahead and send some errors through. Notice that just the errors are coming up. We can send a couple of those, all right, so we have lots of error logs coming out.

So you can filter down just errors, all right, finally let me show you what happens when I crash. Oh, I just want to make sure, note that there's no core file there, right. All right, so we're going to go ahead and crash our process, and we should see this guy die, die. There he goes, all right, so now we should have a core there.

Great. Alright. So this is where I'm going to go and debug with MDB. So the first thing I want to do is load the V8 shared object because that's what gives me all the debugging info. My dev VM is super, super old, so I actually have a newer version in temp. OK, so I'm going to load that. Now the first thing that's awesome is I can go back and look at the stack trace.

Notice, so here is the code, here is the code for where we crashed, right? process.abort, we're in this respond right here, and there we go. Respond. So up here you see where we actually abort and stuff. Alright. What's cool is I can do a -b, and guess what? I get the code right in front of me, where it crashed.

So that's nice, and notice right here I have, you can clap, it's OK. Notice that we have arguments here to the function, and you can do something like this, jsprint, and there you go. There is the full request object that was there, that was right here when we died. So what we do in production is, every one of our production services uses this abort-on-uncaught, hold on, so this --abort-on-uncaught-exception flag, which means, how many people have dealt with uncaught exceptions in Node?

Everybody loves them. So what we do is we abort on uncaught exception, it core dumps, and then let me tell you a little bit about Thoth. Like I said, we implement some things in Manta in terms of Manta. Thoth is our debugging framework, so we have a little piece of software running in all of our nodes in production that looks for core dumps and will upload them into Manta, and then we have this command called mlogin, so all our commands start with "m", like mls, mget, stuff like that. So mlogin actually kicks off a login session all the way down onto one of these compute nodes, and actually puts you there with your object mounted in the environment that you'd have as if you were running a MapReduce job.

So what Thoth does is, we upload all of our cores into Manta. We have a process that will go through and index all of our cores, we have some things that recognize them so we can automatically assign them to tickets if we need to, and then if we need to actually go in and run MDB next to them, we can thoth in, and it actually drops us into an MDB session like right there. So that's what we do for kind of postmortem debugging in prod. Compatibility matrix, I mentioned this right at the beginning, so Windows, Linux, BSD-based systems, and SmartOS. So you can run Restify, Bunyan, and json on anything.

You just don't get some of the magic like bunyan -p; that uses DTrace under the hood, and so do the Restify probes. An interesting thing that TJ actually did recently: you can actually take a core dump on Linux, upload it into Manta, and then get debugging in Manta, or actually in any SmartOS VM, with your core.

So you can do what I just showed you with the core, and I think TJ has said that some people have actually done that to debug things that they would not have been able to debug otherwise. You can correct me if I'm wrong, TJ's done that, for Node Core?

Yes. Yes, see? Alright. So to end my talk I wanted to talk about how we engage with Node Core. I have TJ in the chatroom all the time, but we try and engage with Node Core publicly, and the way that we do that is we use GitHub issues just like everybody else, and I learned pretty quickly when I joined that, number 1, you always provide a really clear description and number 2, always give a repro in Node.

I gave a curl repro to Isaac, and he didn't like that at all. So here's probably the best example I've ever seen, by an engineer that I work with named Dave. So here is his clear description, where he describes what it should be, and finally down at the bottom he actually gives the MDB output for his stack.

So I gave some good examples, or what I think are good examples for engaging with Node Core. That's the way they like it. Alright. Finally, I have a full list of blogs and tutorials that go through everything that I just showed tonight. I'm not going to make you write all those down. You can find them all here on nfitch/node-demo.

Let me just show you what that is real quick. So this is the GitHub repo where I have my presentation, the full presentation, and also every, not every command, but pretty much an example of every command I ran as well. So if you want to go play around with it, spin up a SmartOS VM, go crash an app, you should just be able to go forth and conquer.

You know what? There's one more thing I wanted to show you that I didn't show you before, so I will end with this. Let me get back to my debugging session. One of the problems we sometimes have is finding the particular object that you wanted to find. So we actually have findjsobjects in MDB. It will actually search the entire heap for things that look like JavaScript objects, but what's even cooler is I can ask for properties, like objects that have the success property like I was talking about before.

So there's an example object; this is in the tutorial. I have to pipe again through findjsobjects, but then I can jsprint them all. No, sorry. There you go. So because it's searching the entire heap, you may have garbage objects, that's what that first guy is, but otherwise there is all the total, there's the total, and remember that I'd pressed up and enter a bunch of times with the error?

Yeah, this is why. So now we see that we had 460 successes, and 471 failures before I crashed it. So if you have any questions afterwards or during the panel, just let me know, and you can find me there. Thanks.
