NodeJS has demonstrated impressive performance potential as an HTTP server. By combining highly optimized HTTP parsing with the speedy V8 JavaScript engine and an event-based architecture, Node has posted eye-opening request handling numbers. Historically, however, Node has been unable to handle requests truly concurrently, limiting its ability to take advantage of increasingly multi-core servers. The latest release of Node introduces socket sharing functionality that can be coupled with child process spawning to achieve concurrent multi-process request handling on a single TCP socket. Multi-node exists to leverage this capability, making it simple to start a multi-process Node server.

Multi-node is very easy to use. You simply create an HTTP server object as you normally would, but rather than immediately calling listen(), you pass it to the multi-node listen function:

var server = require("http").createServer(function(request, response){
	// ... standard node request handler ...
});
var nodes = require("multi-node").listen({
	port: 80,
	nodes: 4
}, server);

As you can see, we indicate the number of “nodes” (processes) to start and the port number to listen on. Multi-node will automatically spawn the appropriate processes and pass the socket to each child process to share. The OS kernel then essentially acts as the load balancer, queuing incoming TCP connections and handing them off to processes as they are able to accept them. This can actually be more efficient than a separate load balancer, since socket connections are handled as processes request them, making the load balancing immediately dynamic. If a process is loaded down with processor-intensive requests, it won’t grab as many connections, and the other processes will pick up the load.

One decision to make is the number of processes (“nodes”) to run. The most common suggestion is to use the same number of processes as CPU cores on the machine. Indeed, this is likely to provide the most efficient utilization of the machine’s CPU resources, with minimal context switching overhead. However, there can be reasons for using more or fewer processes. If you don’t want the Node server to consume all of the machine’s resources, you might specify fewer processes to keep some CPU cores available for other tasks.

Alternatively, there can be “fairness” advantages to running more processes. Node won’t start processing another request until it finishes its current queue of events. If a process has a large queue of expensive event handlers to work through, this increases the latency of handling a new request, even a small, lightweight one. With few processes, this request processing granularity is coarse. With more processes, the granularity decreases, and smaller requests have better odds of being serviced promptly rather than ending up queued behind expensive operations in the event queue of a few processes. More processes can slightly increase context switching overhead, but this may be worth the improved fairness and reduced latency for quick requests.
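The granularity problem can be illustrated with a toy sketch: while a process runs an expensive synchronous handler, nothing else queued on that same process is serviced, no matter how cheap it is. The busy function here is purely illustrative, standing in for a CPU-heavy request handler:

```javascript
// Toy illustration of event-queue granularity: a synchronous CPU-bound
// handler blocks the whole process for its duration.
function busy(ms){
	var end = Date.now() + ms;
	while (Date.now() < end) {
		// simulate a processor-intensive request handler
	}
}

// A lightweight callback scheduled for "immediately" still waits the
// full duration of busy() if it lands on the same process.
var start = Date.now();
setTimeout(function(){
	console.log("serviced after " + (Date.now() - start) + "ms");
}, 0);
busy(50);
```

On a second process, that same setTimeout callback would have run right away, which is the fairness argument for more nodes.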

One of the core capabilities multi-node also provides is setting up fully connected inter-process sockets so that processes can easily communicate with each other. This functionality is important wherever data and messages need to be shared between processes. Real-time Comet applications like chat demonstrate this need: chat messages received on one process may need to be sent to other processes so they can deliver them to their connected clients. Even simple session management can benefit. Since browsers open multiple connections to a server, a single web application can easily end up connected to different server processes, which may need to share session data.

To use the inter-process communication, simply add a “node” listener to the object returned from the listen() call. The “node” event is fired for each new Node process, with a “stream” argument that can be used to communicate with that process:

var nodes = require("multi-node").listen(...);
nodes.addListener("node", function(stream){
	stream.addListener("data", function(data){
		// ... receiving data from this other node process ...
		stream.write(/* ... write data to other process ... */);
	});
});

This event should be fired for every other Node process (for this server) that is created, allowing you to communicate with any other process.
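Putting this together for the chat scenario described above, one could collect the stream for each peer process as the “node” events fire and relay messages to all of them. This is a minimal sketch, assuming only the addListener/write stream interface shown above; the makeBroadcaster helper and the JSON message format are illustrative, not part of multi-node:

```javascript
// Hypothetical broadcast helper: gather the stream for each peer node
// process and send every message to all of them.
function makeBroadcaster(nodes){
	var streams = [];
	nodes.addListener("node", function(stream){
		// remember each peer process as it appears
		streams.push(stream);
	});
	return function broadcast(message){
		// serialize once, then relay to every known peer
		var data = JSON.stringify(message);
		streams.forEach(function(stream){
			stream.write(data);
		});
	};
}
```

A chat handler on one process could then call the returned broadcast function when a message arrives, and each peer process would deliver it to its own connected clients from its "data" listener.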

Multi-node allows you to easily leverage Node’s new socket sharing capabilities and build truly scalable, concurrent, Node-based web application servers. While multi-node is intended to be usable on its own, it is a sub-project of Persevere, and the Persevere example wiki demonstrates how multi-node can easily be used with Persevere applications. Next we will look at how another Persevere sub-project, Tunguska, can leverage multi-node’s inter-process communication for building scalable multi-process real-time applications.