
JAVASCRIPT  Chapter 6

Topics: (Project: A Pixel Art Editor, Node.js, The node command, Modules, Installing with NPM, The file system module, The HTTP module, Streams, A file server, Project: Skill-sharing Website)

Project: A Pixel Art Editor

The material from the previous chapters gives you all the elements you need to build a basic web application. Our application will be a pixel drawing program, where you can modify a picture pixel by pixel by manipulating a zoomed-in view of it, shown as a grid of colored squares. You can use the program to open image files, scribble on them with your mouse or other pointer device, and save them. Painting on a computer is great. You don’t need to worry about materials, skill, or talent. You just start smearing.

Components

The interface for the application shows a big <canvas> element on top, with a number of form fields below it. The user draws on the picture by selecting a tool from a <select> field and then clicking, touching, or dragging across the canvas. There are tools for drawing single pixels or rectangles, for filling an area, and for picking a color from the picture.

We will structure the editor interface as a number of components, objects that are responsible for a piece of the DOM and that may contain other components inside them. The state of the application consists of the current picture, the selected tool, and the selected color. We’ll set things up so that the state lives in a single value, and the interface components always base the way they look on the current state. To see why this is important, let’s consider the alternative—distributing pieces of state throughout the interface. Up to a certain point, this is easier to program. We can just put in a color field and read its value when we need to know the current color.

But then we add the color picker—a tool that lets you click the picture to select the color of a given pixel. To keep the color field showing the correct color, that tool would have to know that it exists and update it whenever it picks a new color. If you ever add another place that makes the color visible (maybe the mouse cursor could show it), you have to update your color-changing code to keep that synchronized. In effect, this creates a problem where each part of the interface needs to know about all other parts, which is not very modular. For small applications like the one in this chapter, that may not be a problem. For bigger projects, it can turn into a real nightmare. To avoid this nightmare on principle, we’re going to be strict about data flow. There is a state, and the interface is drawn based on that state. An interface component may respond to user actions by updating the state, at which point the components get a chance to synchronize themselves with this new state.

In practice, each component is set up so that when it is given a new state, it also notifies its child components, insofar as those need to be updated. Setting this up is a bit of a hassle. Making this more convenient is the main selling point of many browser programming libraries. But for a small application like this, we can do it without such infrastructure. Updates to the state are represented as objects, which we’ll call actions. Components may create such actions and dispatch them—give them to a central state management function. That function computes the next state, after which the interface components update themselves to this new state. We’re taking the messy task of running a user interface and applying some structure to it. Though the DOM-related pieces are still full of side effects, they are held up by a conceptually simple backbone: the state update cycle. The state determines what the DOM looks like, and the only way DOM events can change the state is by dispatching actions to the state.

There are many variants of this approach, each with its own benefits and problems, but their central idea is the same: state changes should go through a single well-defined channel, not happen all over the place. Our components will be classes conforming to an interface. Their constructor is given a state—which may be the whole application state or some smaller value if it doesn’t need access to everything—and uses that to build up a dom property. This is the DOM element that represents the component. Most constructors will also take some other values that won’t change over time, such as the function they can use to dispatch an action. Each component has a syncState method that is used to synchronize it to a new state value. The method takes one argument, the state, which is of the same type as the first argument to its constructor.
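To make the shape of that interface concrete, here is a minimal sketch of a component that follows it. The class name and the DOM it builds are made up for illustration; the real components appear below.

class ColorLabel {
  constructor(state) {
    // Build the DOM element this component is responsible for.
    this.dom = document.createElement("span");
    this.syncState(state);
  }
  syncState(state) {
    // Bring the DOM in line with the new state.
    this.dom.textContent = `Current color: ${state.color}`;
  }
}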

The state

The application state will be an object with picture, tool, and color properties. The picture is itself an object that stores the width, height, and pixel content of the picture. The pixels are stored in an array, in the same way as the matrix class—row by row, from top to bottom.

class Picture {
  constructor(width, height, pixels) {
    this.width = width;
    this.height = height;
    this.pixels = pixels;
  }
  static empty(width, height, color) {
    let pixels = new Array(width * height).fill(color);
    return new Picture(width, height, pixels);
  }
  pixel(x, y) {
    return this.pixels[x + y * this.width];
  }
  draw(pixels) {
    let copy = this.pixels.slice();
    for (let {x, y, color} of pixels) {
      copy[x + y * this.width] = color;
    }
    return new Picture(this.width, this.height, copy);
  }
}

We want to be able to treat a picture as an immutable value, for reasons that we’ll get back to later. But we also sometimes need to update a whole bunch of pixels at a time. To be able to do that, the class has a draw method that expects an array of updated pixels—objects with x, y, and color properties—and creates a new picture with those pixels overwritten. This method uses slice without arguments to copy the entire pixel array—the start of the slice defaults to 0, and the end defaults to the array’s length. The empty method uses two pieces of array functionality that we haven’t seen before. The Array constructor can be called with a number to create an empty array of the given length. The fill method can then be used to fill this array with a given value. These are used to create an array in which all pixels have the same color.
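As a quick illustration of how these methods combine (the coordinates and colors here are arbitrary), a picture can be created, copied with one pixel changed, and queried like this:

let picture = Picture.empty(4, 4, "#f0f0f0");
let updated = picture.draw([{x: 1, y: 2, color: "#ff0000"}]);
console.log(updated.pixel(1, 2));
// → #ff0000
console.log(picture.pixel(1, 2)); // the original picture is unchanged
// → #f0f0f0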

Colors are stored as strings containing traditional CSS color codes made up of a hash sign (#) followed by six hexadecimal (base-16) digits—two for the red component, two for the green component, and two for the blue component. This is a somewhat cryptic and inconvenient way to write colors, but it is the format the HTML color input field uses, and it can be used in the fillColor property of a canvas drawing context, so for the ways we’ll use colors in this program, it is practical enough.

Black, where all components are zero, is written “#000000”, and bright pink looks like “#ff00ff”, where the red and blue components have the maximum value of 255, written ff in hexadecimal digits (which use a to f to represent digits 10 to 15). We’ll allow the interface to dispatch actions as objects whose properties overwrite the properties of the previous state. The color field, when the user changes it, could dispatch an object like {color: field.value}, from which this update function can compute a new state.

function updateState(state, action) {
  return Object.assign({}, state, action);
}

This rather cumbersome pattern, in which Object.assign is used to first add the properties of state to an empty object and then overwrite some of those with the properties from action, is common in JavaScript code that uses immutable objects. A more convenient notation for this, in which the triple-dot operator is used to include all properties from another object in an object expression, is in the final stages of being standardized. With that addition, you could write {...state, ...action} instead. At the time of writing, this doesn’t yet work in all browsers.
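For example, dispatching a color change produces a fresh state object while leaving the old one untouched:

let state = {tool: "draw", color: "#000000"};
let next = updateState(state, {color: "#ff00ff"});
console.log(next);
// → {tool: "draw", color: "#ff00ff"}
console.log(state.color); // the previous state object is not modified
// → #000000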

DOM building

One of the main things that interface components do is creating DOM structure. We again don’t want to directly use the verbose DOM methods for that, so here’s a slightly expanded version of the elt function:

function elt(type, props, ...children) {
  let dom = document.createElement(type);
  if (props) Object.assign(dom, props);
  for (let child of children) {
    if (typeof child != "string") dom.appendChild(child);
    else dom.appendChild(document.createTextNode(child));
  }
  return dom;
}

The main difference between this version and the one we used before is that it assigns properties to DOM nodes, not attributes. This means we can’t use it to set arbitrary attributes, but we can use it to set properties whose value isn’t a string, such as onclick, which can be set to a function to register a click event handler. This allows the following style of registering event handlers:

<body>
  <script>
    document.body.appendChild(elt("button", {
      onclick: () => console.log("click")
    }, "The button"));
  </script>
</body>

The canvas

The first component we’ll define is the part of the interface that displays the picture as a grid of colored boxes. This component is responsible for two things: showing a picture and communicating pointer events on that picture to the rest of the application. As such, we can define it as a component that knows about only the current picture, not the whole application state. Because it doesn’t know how the application as a whole works, it cannot directly dispatch actions. Rather, when responding to pointer events, it calls a callback function provided by the code that created it, which will handle the application-specific parts.

const scale = 10;

class PictureCanvas {
  constructor(picture, pointerDown) {
    this.dom = elt("canvas", {
      onmousedown: event => this.mouse(event, pointerDown),
      ontouchstart: event => this.touch(event, pointerDown)
    });
    this.syncState(picture);
  }
  syncState(picture) {
    if (this.picture == picture) return;
    this.picture = picture;
    drawPicture(this.picture, this.dom, scale);
  }
}

We draw each pixel as a 10-by-10 square, as determined by the scale constant. To avoid unnecessary work, the component keeps track of its current picture and does a redraw only when syncState is given a new picture. The actual drawing function sets the size of the canvas based on the scale and picture size and fills it with a series of squares, one for each pixel.

function drawPicture(picture, canvas, scale) {
  canvas.width = picture.width * scale;
  canvas.height = picture.height * scale;
  let cx = canvas.getContext("2d");

  for (let y = 0; y < picture.height; y++) {
    for (let x = 0; x < picture.width; x++) {
      cx.fillStyle = picture.pixel(x, y);
      cx.fillRect(x * scale, y * scale, scale, scale);
    }
  }
}

When the left mouse button is pressed while the mouse is over the picture canvas, the component calls the pointerDown callback, giving it the position of the pixel that was clicked—in picture coordinates. This will be used to implement mouse interaction with the picture. The callback may return another callback function to be notified when the pointer is moved to a different pixel while the button is held down.

PictureCanvas.prototype.mouse = function(downEvent, onDown) {
  if (downEvent.button != 0) return;
  let pos = pointerPosition(downEvent, this.dom);
  let onMove = onDown(pos);
  if (!onMove) return;
  let move = moveEvent => {
    if (moveEvent.buttons == 0) {
      this.dom.removeEventListener("mousemove", move);
    } else {
      let newPos = pointerPosition(moveEvent, this.dom);
      if (newPos.x == pos.x && newPos.y == pos.y) return;
      pos = newPos;
      onMove(newPos);
    }
  };
  this.dom.addEventListener("mousemove", move);
};

function pointerPosition(pos, domNode) {
  let rect = domNode.getBoundingClientRect();
  return {x: Math.floor((pos.clientX - rect.left) / scale),
          y: Math.floor((pos.clientY - rect.top) / scale)};
}

Since we know the size of the pixels and we can use getBoundingClientRect to find the position of the canvas on the screen, it is possible to go from mouse event coordinates (clientX and clientY) to picture coordinates. These are always rounded down so that they refer to a specific pixel. With touch events, we have to do something similar, but using different events and making sure we call preventDefault on the “touchstart” event to prevent panning.

PictureCanvas.prototype.touch = function(startEvent, onDown) {
  let pos = pointerPosition(startEvent.touches[0], this.dom);
  let onMove = onDown(pos);
  startEvent.preventDefault();
  if (!onMove) return;
  let move = moveEvent => {
    let newPos = pointerPosition(moveEvent.touches[0], this.dom);
    if (newPos.x == pos.x && newPos.y == pos.y) return;
    pos = newPos;
    onMove(newPos);
  };
  let end = () => {
    this.dom.removeEventListener("touchmove", move);
    this.dom.removeEventListener("touchend", end);
  };
  this.dom.addEventListener("touchmove", move);
  this.dom.addEventListener("touchend", end);
};

For touch events, clientX and clientY aren’t available directly on the event object, but we can use the coordinates of the first touch object in the touches property.

The application

To make it possible to build the application piece by piece, we’ll implement the main component as a shell around a picture canvas and a dynamic set of tools and controls that we pass to its constructor. The controls are the interface elements that appear below the picture. They’ll be provided as an array of component constructors.

Tools are things like drawing pixels or filling in an area. The application shows the set of available tools as a <select> field. The currently selected tool determines what happens when the user interacts with the picture with a pointer device. The set of available tools is provided as an object that maps the names that appear in the drop-down field to functions that implement the tools. Such functions get a picture position, a current application state, and a dispatch function as arguments. They may return a move handler function that gets called with a new position and a current state when the pointer moves to a different pixel.

class PixelEditor {
  constructor(state, config) {
    let {tools, controls, dispatch} = config;
    this.state = state;

    this.canvas = new PictureCanvas(state.picture, pos => {
      let tool = tools[this.state.tool];
      let onMove = tool(pos, this.state, dispatch);
      if (onMove) return pos => onMove(pos, this.state);
    });
    this.controls = controls.map(
      Control => new Control(state, config));
    this.dom = elt("div", {}, this.canvas.dom, elt("br"),
                   ...this.controls.reduce(
                     (a, c) => a.concat(" ", c.dom), []));
  }
  syncState(state) {
    this.state = state;
    this.canvas.syncState(state.picture);
    for (let ctrl of this.controls) ctrl.syncState(state);
  }
}

The pointer handler given to PictureCanvas calls the currently selected tool with the appropriate arguments and, if that returns a move handler, adapts it to also receive the state. All controls are constructed and stored in this.controls so that they can be updated when the application state changes. The call to reduce introduces spaces between the controls’ DOM elements. That way they don’t look so pressed together. The first control is the tool selection menu. It creates a <select> element with an option for each tool and sets up a “change” event handler that updates the application state when the user selects a different tool.

class ToolSelect {
  constructor(state, {tools, dispatch}) {
    this.select = elt("select", {
      onchange: () => dispatch({tool: this.select.value})
    }, ...Object.keys(tools).map(name => elt("option", {
      selected: name == state.tool
    }, name)));
    this.dom = elt("label", null, " Tool: ", this.select);
  }
  syncState(state) { this.select.value = state.tool; }
}

By wrapping the label text and the field in a <label> element, we tell the browser that the label belongs to that field so that you can, for example, click the label to focus the field. We also need to be able to change the color, so let’s add a control for that. An HTML <input> element with a type attribute of color gives us a form field that is specialized for selecting colors. Such a field’s value is always a CSS color code in “#RRGGBB” format (red, green, and blue components, two digits per color). The browser will show a color picker interface when the user interacts with it; exactly what that picker looks like depends on the browser.

This control creates such a field and wires it up to stay synchronized with the application state’s color property.

class ColorSelect {
  constructor(state, {dispatch}) {
    this.input = elt("input", {
      type: "color",
      value: state.color,
      onchange: () => dispatch({color: this.input.value})
    });
    this.dom = elt("label", null, " Color: ", this.input);
  }
  syncState(state) { this.input.value = state.color; }
}

Drawing tools

Before we can draw anything, we need to implement the tools that will control the functionality of mouse or touch events on the canvas.

The most basic tool is the draw tool, which changes any pixel you click or tap to the currently selected color. It dispatches an action that updates the picture to a version in which the pointed-at pixel is given the currently selected color.

function draw(pos, state, dispatch) {
  function drawPixel({x, y}, state) {
    let drawn = {x, y, color: state.color};
    dispatch({picture: state.picture.draw([drawn])});
  }
  drawPixel(pos, state);
  return drawPixel;
}

The function immediately calls the drawPixel function but then also returns it so that it is called again for newly touched pixels when the user drags or swipes over the picture. To draw larger shapes, it can be useful to quickly create rectangles. The rectangle tool draws a rectangle between the point where you start dragging and the point that you drag to.

function rectangle(start, state, dispatch) {
  function drawRectangle(pos) {
    let xStart = Math.min(start.x, pos.x);
    let yStart = Math.min(start.y, pos.y);
    let xEnd = Math.max(start.x, pos.x);
    let yEnd = Math.max(start.y, pos.y);
    let drawn = [];
    for (let y = yStart; y <= yEnd; y++) {
      for (let x = xStart; x <= xEnd; x++) {
        drawn.push({x, y, color: state.color});
      }
    }
    dispatch({picture: state.picture.draw(drawn)});
  }
  drawRectangle(start);
  return drawRectangle;
}

An important detail in this implementation is that when dragging, the rectangle is redrawn on the picture from the original state. That way, you can make the rectangle larger and smaller again while creating it, without the intermediate rectangles sticking around in the final picture. This is one of the reasons why immutable picture objects are useful—we’ll see another reason later. Implementing flood fill is somewhat more involved. This is a tool that fills the pixel under the pointer and all adjacent pixels that have the same color. “Adjacent” means directly horizontally or vertically adjacent, not diagonally. In other words, using the flood fill tool on a pixel colors that pixel along with the whole same-colored region connected to it.

Interestingly, the way we’ll do this looks a bit like the pathfinding code. Whereas that code searched through a graph to find a route, this code searches through a grid to find all “connected” pixels. The problem of keeping track of a branching set of possible routes is similar.

const around = [{dx: -1, dy: 0}, {dx: 1, dy: 0},
                {dx: 0, dy: -1}, {dx: 0, dy: 1}];

function fill({x, y}, state, dispatch) {
  let targetColor = state.picture.pixel(x, y);
  let drawn = [{x, y, color: state.color}];
  for (let done = 0; done < drawn.length; done++) {
    for (let {dx, dy} of around) {
      let x = drawn[done].x + dx, y = drawn[done].y + dy;
      if (x >= 0 && x < state.picture.width &&
          y >= 0 && y < state.picture.height &&
          state.picture.pixel(x, y) == targetColor &&
          !drawn.some(p => p.x == x && p.y == y)) {
        drawn.push({x, y, color: state.color});
      }
    }
  }
  dispatch({picture: state.picture.draw(drawn)});
}

The array of drawn pixels doubles as the function’s work list. For each pixel reached, we have to see whether any adjacent pixels have the same color and haven’t already been painted over. The loop counter lags behind the length of the drawn array as new pixels are added. Any pixels ahead of it still need to be explored. When it catches up with the length, no unexplored pixels remain, and the function is done. The final tool is a color picker, which allows you to point at a color in the picture to use it as the current drawing color.

function pick(pos, state, dispatch) {
  dispatch({color: state.picture.pixel(pos.x, pos.y)});
}

Saving and loading

When we’ve drawn our masterpiece, we’ll want to save it for later. We should add a button for downloading the current picture as an image file. This control provides that button:

class SaveButton {
  constructor(state) {
    this.picture = state.picture;
    this.dom = elt("button", {
      onclick: () => this.save()
    }, "💾 Save");
  }
  save() {
    let canvas = elt("canvas");
    drawPicture(this.picture, canvas, 1);
    let link = elt("a", {
      href: canvas.toDataURL(),
      download: "pixelart.png"
    });
    document.body.appendChild(link);
    link.click();
    link.remove();
  }
  syncState(state) { this.picture = state.picture; }
}

The component keeps track of the current picture so that it can access it when saving. To create the image file, it uses a <canvas> element that it draws the picture on (at a scale of one pixel per pixel). The toDataURL method on a canvas element creates a URL that starts with data:. Unlike http: and https: URLs, data URLs contain the whole resource in the URL. They are usually very long, but they allow us to create working links to arbitrary pictures, right here in the browser. To actually get the browser to download the picture, we then create a link element that points at this URL and has a download attribute. Such links, when clicked, make the browser show a file save dialog. We add that link to the document, simulate a click on it, and remove it again.

You can do a lot with browser technology, but sometimes the way to do it is rather odd. And it gets worse. We’ll also want to be able to load existing image files into our application. To do that, we again define a button component.

class LoadButton {
  constructor(_, {dispatch}) {
    this.dom = elt("button", {
      onclick: () => startLoad(dispatch)
    }, "📁 Load");
  }
  syncState() {}
}

function startLoad(dispatch) {
  let input = elt("input", {
    type: "file",
    onchange: () => finishLoad(input.files[0], dispatch)
  });
  document.body.appendChild(input);
  input.click();
  input.remove();
}

To get access to a file on the user’s computer, we need the user to select the file through a file input field. But I don’t want the load button to look like a file input field, so we create the file input when the button is clicked and then pretend that it itself was clicked. When the user has selected a file, we can use FileReader to get access to its contents, again as a data URL. That URL can be used to create an <img> element, but because we can’t get direct access to the pixels in such an image, we can’t create a Picture object from that.

function finishLoad(file, dispatch) {
  if (file == null) return;
  let reader = new FileReader();
  reader.addEventListener("load", () => {
    let image = elt("img", {
      onload: () => dispatch({
        picture: pictureFromImage(image)
      }),
      src: reader.result
    });
  });
  reader.readAsDataURL(file);
}

To get access to the pixels, we must first draw the picture to a <canvas> element. The canvas context has a getImageData method that allows a script to read its pixels. So, once the picture is on the canvas, we can access it and construct a Picture object.

function pictureFromImage(image) {
  let width = Math.min(100, image.width);
  let height = Math.min(100, image.height);
  let canvas = elt("canvas", {width, height});
  let cx = canvas.getContext("2d");
  cx.drawImage(image, 0, 0);
  let pixels = [];
  let {data} = cx.getImageData(0, 0, width, height);

  function hex(n) {
    return n.toString(16).padStart(2, "0");
  }
  for (let i = 0; i < data.length; i += 4) {
    let [r, g, b] = data.slice(i, i + 3);
    pixels.push("#" + hex(r) + hex(g) + hex(b));
  }
  return new Picture(width, height, pixels);
}

We’ll limit the size of images to 100 by 100 pixels since anything bigger will look huge on our display and might slow down the interface. The data property of the object returned by getImageData is an array of color components. For each pixel in the rectangle specified by the arguments, it contains four values, which represent the red, green, blue, and alpha components of the pixel’s color, as numbers between 0 and 255. The alpha part represents opacity—when it is zero, the pixel is fully transparent, and when it is 255, it is fully opaque. For our purpose, we can ignore it.

The two hexadecimal digits per component, as used in our color notation, correspond precisely to the 0 to 255 range—two base-16 digits can express 16² = 256 different numbers. The toString method of numbers can be given a base as argument, so n.toString(16) will produce a string representation in base 16. We have to make sure that each number takes up two digits, so the hex helper function calls padStart to add a leading zero when necessary.
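A quick console check (the values are arbitrary) shows what those two calls do:

console.log((255).toString(16));
// → ff
console.log((9).toString(16).padStart(2, "0"));
// → 09

We can load and save now! That leaves one more feature before we’re done.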

Undo history

Half of the process of editing is making little mistakes and correcting them. So an important feature in a drawing program is an undo history. To be able to undo changes, we need to store previous versions of the picture. Since it’s an immutable value, that is easy. But it does require an additional field in the application state. We’ll add a done array to keep previous versions of the picture. Maintaining this property requires a more complicated state update function that adds pictures to the array.

But we don’t want to store every change, only changes a certain amount of time apart. To be able to do that, we’ll need a second property, doneAt, tracking the time at which we last stored a picture in the history.

function historyUpdateState(state, action) {
  if (action.undo == true) {
    if (state.done.length == 0) return state;
    return Object.assign({}, state, {
      picture: state.done[0],
      done: state.done.slice(1),
      doneAt: 0
    });
  } else if (action.picture &&
             state.doneAt < Date.now() - 1000) {
    return Object.assign({}, state, action, {
      done: [state.picture, ...state.done],
      doneAt: Date.now()
    });
  } else {
    return Object.assign({}, state, action);
  }
}

When the action is an undo action, the function takes the most recent picture from the history and makes that the current picture. It sets doneAt to zero so that the next change is guaranteed to store the picture back in the history, allowing you to revert to it another time if you want. Otherwise, if the action contains a new picture and the last time we stored something is more than a second (1000 milliseconds) ago, the done and doneAt properties are updated to store the previous picture. The undo button component doesn’t do much. It dispatches undo actions when clicked and disables itself when there is nothing to undo.

class UndoButton {
  constructor(state, {dispatch}) {
    this.dom = elt("button", {
      onclick: () => dispatch({undo: true}),
      disabled: state.done.length == 0
    }, "➦ Undo");
  }
  syncState(state) {
    this.dom.disabled = state.done.length == 0;
  }
}

Let’s draw

To set up the application, we need to create a state, a set of tools, a set of controls, and a dispatch function. We can pass them to the PixelEditor constructor to create the main component. Since we’ll need to create several editors in the exercises, we first define some bindings.

const startState = {
  tool: "draw",
  color: "#000000",
  picture: Picture.empty(60, 30, "#f0f0f0"),
  done: [],
  doneAt: 0
};

const baseTools = {draw, fill, rectangle, pick};

const baseControls = [
  ToolSelect, ColorSelect, SaveButton, LoadButton, UndoButton
];

function startPixelEditor({state = startState,
                           tools = baseTools,
                           controls = baseControls}) {
  let app = new PixelEditor(state, {
    tools,
    controls,
    dispatch(action) {
      state = historyUpdateState(state, action);
      app.syncState(state);
    }
  });
  return app.dom;
}

When destructuring an object or array, you can use = after a binding name to give the binding a default value, which is used when the property is missing or holds undefined. The startPixelEditor function makes use of this to accept an object with a number of optional properties as an argument. If you don’t provide a tools property, for example, tools will be bound to baseTools.
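A small, self-contained sketch of that destructuring pattern (the function and parameter names are made up) shows how the defaults kick in:

function describe({width = 60, height = 30} = {}) {
  console.log(`${width} by ${height} pixels`);
}
describe({});            // → 60 by 30 pixels
describe({width: 100});  // → 100 by 30 pixels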

This is how we get an actual editor on the screen:

<div></div>
<script>
  document.querySelector("div")
    .appendChild(startPixelEditor({}));
</script>

Why is this so hard?

Browser technology is amazing. It provides a powerful set of interface building blocks, ways to style and manipulate them, and tools to inspect and debug your applications. The software you write for the browser can be run on almost every computer and phone on the planet. At the same time, browser technology is ridiculous. You have to learn a large number of silly tricks and obscure facts to master it, and the default programming model it provides is so problematic that most programmers prefer to cover it in several layers of abstraction rather than deal with it directly. And though the situation is definitely improving, it mostly does so in the form of more elements being added to address shortcomings—creating even more complexity. A feature used by a million websites can’t really be replaced. Even if it could, it would be hard to decide what it should be replaced with.

Technology never exists in a vacuum—we’re constrained by our tools and the social, economic, and historical factors that produced them. This can be annoying, but it is generally more productive to try to build a good understanding of how the existing technical reality works—and why it is the way it is—than to rage against it or hold out for another reality. New abstractions can be helpful. The component model and data flow convention we used in this project is a crude form of that. As mentioned, there are libraries that try to make user interface programming more pleasant. At the time of writing, React and Angular are popular choices, but there’s a whole cottage industry of such frameworks. If you’re interested in programming web applications, I recommend investigating a few of them to understand how they work and what benefits they provide.

Node.js

So far, we have used the JavaScript language in a single environment: the browser. This section will briefly introduce Node.js, a program that allows you to apply your JavaScript skills outside of the browser. With it, you can build anything from small command line tools to HTTP servers that power dynamic websites. Here we aim to teach you the main concepts that Node.js uses and to give you enough information to write useful programs for it; this is not a complete, or even a thorough, treatment of the platform. If you want to follow along and run the code, you’ll need to install Node.js version 10.1 or higher. To do so, go to https://nodejs.org and follow the installation instructions for your operating system. You can also find further documentation for Node.js there.

Background

One of the more difficult problems with writing systems that communicate over the network is managing input and output—that is, the reading and writing of data to and from the network and hard drive. Moving data around takes time, and scheduling it cleverly can make a big difference in how quickly a system responds to the user or to network requests. In such programs, asynchronous programming is often helpful. It allows the program to send and receive data from and to multiple devices at the same time without complicated thread management and synchronization.

Node was initially conceived for the purpose of making asynchronous programming easy and convenient. JavaScript lends itself well to a system like Node. It is one of the few programming languages that does not have a built-in way to do in- and output. Thus, JavaScript could be fit onto Node’s rather eccentric approach to in- and output without ending up with two inconsistent interfaces. In 2009, when Node was being designed, people were already doing callback-based programming in the browser, so the community around the language was used to an asynchronous programming style.

The node command

When Node.js is installed on a system, it provides a program called node, which is used to run JavaScript files. Say you have a file hello.js, containing this code:

let message = "Hello world";
console.log(message);

You can then run node from the command line like this to execute the program:

$ node hello.js
Hello world

The console.log method in Node does something similar to what it does in the browser. It prints out a piece of text. But in Node, the text will go to the process’s standard output stream, rather than to a browser’s JavaScript console. When running node from the command line, that means you see the logged values in your terminal. If you run node without giving it a file, it provides you with a prompt at which you can type JavaScript code and immediately see the result.

$ node
> 1 + 1
2
> [-1, -2, -3].map(Math.abs)
[1, 2, 3]
> process.exit(0)
$

The process binding, just like the console binding, is available globally in Node. It provides various ways to inspect and manipulate the current program. The exit method ends the process and can be given an exit status code, which tells the program that started node (in this case, the command line shell) whether the program completed successfully (code zero) or encountered an error (any other code). To find the command line arguments given to your script, you can read process.argv, which is an array of strings. Note that it also includes the name of the node command and your script name, so the actual arguments start at index 2. If showargv.js contains the statement console.log(process.argv), you could run it like this:

$ node showargv.js one --and two
["node", "/tmp/showargv.js", "one", "--and", "two"]

All the standard JavaScript global bindings, such as Array, Math, and JSON, are also present in Node’s environment. Browser-related functionality, such as document or prompt, is not.

Modules

Beyond the bindings we mentioned, such as console and process, Node puts few additional bindings in the global scope. If you want to access built-in functionality, you have to ask the module system for it. The CommonJS module system, based on the require function, was described in an earlier chapter. This system is built into Node and is used to load anything from built-in modules to downloaded packages to files that are part of your own program.

When require is called, Node has to resolve the given string to an actual file that it can load. Pathnames that start with /, ./, or ../ are resolved relative to the current module’s path, where . stands for the current directory, ../ for one directory up, and / for the root of the file system. So if you ask for "./graph" from the file /tmp/robot/robot.js, Node will try to load the file /tmp/robot/graph.js. The .js extension may be omitted, and Node will add it if such a file exists. If the required path refers to a directory, Node will try to load the file named index.js in that directory.

When a string that does not look like a relative or absolute path is given to require, it is assumed to refer to either a built-in module or a module installed in a node_modules directory. For example, require("fs") will give you Node’s built-in file system module. And require("robot") might try to load the library found in node_modules/robot/. A common way to install such libraries is by using NPM, which we’ll come back to in a moment. Let’s set up a small project consisting of two files. The first one, called main.js, defines a script that can be called from the command line to reverse a string.

const {reverse} = require("./reverse");

// Index 2 holds the first actual command line argument
let argument = process.argv[2];

console.log(reverse(argument));

The file reverse.js defines a library for reversing strings, which can be used both by this command line tool and by other scripts that need direct access to a string-reversing function.

exports.reverse = function(string) {
  return Array.from(string).reverse().join("");
};

Remember that adding properties to exports adds them to the interface of the module. Since Node.js treats files as CommonJS modules, main.js can take the exported reverse function from reverse.js. We can now call our tool like this:

$ node main.js JavaScript
tpircSavaJ

Installing with NPM

NPM, which was introduced earlier, is an online repository of JavaScript modules, many of which are specifically written for Node. When you install Node on your computer, you also get the npm command, which you can use to interact with this repository. NPM’s main use is downloading packages. We saw the ini package before. We can use NPM to fetch and install that package on our computer.

$ npm install ini
npm WARN enoent ENOENT: no such file or directory,
         open '/tmp/package.json'
+ ini@1.3.5
added 1 package in 0.552s

$ node
> const {parse} = require("ini");
> parse("x = 1\ny = 2");
{ x: '1', y: '2' }

After running npm install, NPM will have created a directory called node_modules. Inside that directory will be an ini directory that contains the library. You can open it and look at the code. When we call require(“ini”), this library is loaded, and we can call its parse property to parse a configuration file. By default NPM installs packages under the current directory, rather than in a central place. If you are used to other package managers, this may seem unusual, but it has advantages—it puts each application in full control of the packages it installs and makes it easier to manage versions and clean up when removing an application.

Package files

In the npm install example, you could see a warning about the fact that the package.json file did not exist. It is recommended to create such a file for each project, either manually or by running npm init. It contains some information about the project, such as its name and version, and lists its dependencies. The robot simulation, as modularized in the exercise, might have a package.json file like this:

{
  "author": "Marijn Haverbeke",
  "name": "eloquent-javascript-robot",
  "description": "Simulation of a package-delivery robot",
  "version": "1.0.0",
  "main": "run.js",
  "dependencies": {
    "dijkstrajs": "^1.0.1",
    "random-item": "^1.0.0"
  },
  "license": "ISC"
}

When you run npm install without naming a package to install, NPM will install the dependencies listed in package.json. When you install a specific package that is not already listed as a dependency, NPM will add it to package.json.

Versions

A package.json file lists both the program’s own version and versions for its dependencies. Versions are a way to deal with the fact that packages evolve separately, and code written to work with a package as it existed at one point may not work with a later, modified version of the package. NPM demands that its packages follow a schema called semantic versioning, which encodes some information about which versions are compatible (don’t break the old interface) in the version number. A semantic version consists of three numbers, separated by periods, such as 2.3.0. Every time new functionality is added, the middle number has to be incremented. Every time compatibility is broken, so that existing code that uses the package might not work with the new version, the first number has to be incremented.

A caret character (^) in front of the version number for a dependency in package.json indicates that any version compatible with the given number may be installed. So, for example, “^2.3.0” would mean that any version greater than or equal to 2.3.0 and less than 3.0.0 is allowed. The npm command is also used to publish new packages or new versions of packages. If you run npm publish in a directory that has a package.json file, it will publish a package with the name and version listed in the JSON file to the registry. Anyone can publish packages to NPM—though only under a package name that isn’t in use yet since it would be somewhat scary if random people could update existing packages. Since the npm program is a piece of software that talks to an open system— the package registry—there is nothing unique about what it does. Another program, yarn, which can be installed from the NPM registry, fills the same role as npm using a somewhat different interface and installation strategy.

The file system module

One of the most commonly used built-in modules in Node is the fs module, which stands for file system. It exports functions for working with files and directories. For example, the function called readFile reads a file and then calls a callback with the file’s contents.

let {readFile} = require("fs");

readFile("file.txt", "utf8", (error, text) => {
  if (error) throw error;
  console.log("The file contains:", text);
});

The second argument to readFile indicates the character encoding used to decode the file into a string. There are several ways in which text can be encoded to binary data, but most modern systems use UTF-8. So unless you have reasons to believe another encoding is used, pass “utf8” when reading a text file. If you do not pass an encoding, Node will assume you are interested in the binary data and will give you a Buffer object instead of a string. This is an array-like object that contains numbers representing the bytes (8-bit chunks of data) in the files.

const {readFile} = require("fs");

readFile("file.txt", (error, buffer) => {
  if (error) throw error;
  console.log("The file contained", buffer.length, "bytes.",
              "The first byte is:", buffer[0]);
});

A similar function, writeFile, is used to write a file to disk.

const {writeFile} = require("fs");

writeFile("graffiti.txt", "Node was here", err => {
  if (err) console.log(`Failed to write file: ${err}`);
  else console.log("File written.");
});

Here it was not necessary to specify the encoding—writeFile will assume that when it is given a string to write, rather than a Buffer object, it should write it out as text using its default character encoding, which is UTF-8. The fs module contains many other useful functions: readdir will return the files in a directory as an array of strings, stat will retrieve information about a file, rename will rename a file, unlink will remove one, and so on. See the documentation at https://nodejs.org for specifics. Most of these take a callback function as the last parameter, which they call either with an error (the first argument) or with a successful result (the second). There are downsides to this style of programming—the biggest one being that error handling becomes verbose and error-prone.
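As a small illustration of that error-first callback convention (the directory name here is just an example), readdir can be used like this:

const {readdir} = require("fs");

// List the entries of the current directory. The callback's first
// argument is an error (or null), the second the array of names.
readdir(".", (error, files) => {
  if (error) throw error;
  for (let name of files) console.log(name);
});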

Though promises have been part of JavaScript for a while, at the time of writing their integration into Node.js is still a work in progress. There is an object promises exported from the fs package since version 10.1 that contains most of the same functions as fs but uses promises rather than callback functions.

const {readFile} = require("fs").promises;

readFile("file.txt", "utf8")
  .then(text => console.log("The file contains:", text));

Sometimes you don’t need asynchronicity, and it just gets in the way. Many of the functions in fs also have a synchronous variant, which has the same name with Sync added to the end. For example, the synchronous version of readFile is called readFileSync.

const {readFileSync} = require("fs");

console.log("The file contains:",
            readFileSync("file.txt", "utf8"));

Do note that while such a synchronous operation is being performed, your program is stopped entirely. If it should be responding to the user or to other machines on the network, being stuck on a synchronous action might produce annoying delays.

The HTTP module

Another central module is called http. It provides functionality for running HTTP servers and making HTTP requests. This is all it takes to start an HTTP server:

const {createServer} = require("http");

let server = createServer((request, response) => {
  response.writeHead(200, {"Content-Type": "text/html"});
  response.write(`
    <h1>Hello!</h1>
    <p>You asked for <code>${request.url}</code></p>`);
  response.end();
});
server.listen(8000);
console.log("Listening! (port 8000)");

If you run this script on your own machine, you can point your web browser at http://localhost:8000/hello to make a request to your server. It will respond with a small HTML page. The function passed as argument to createServer is called every time a client connects to the server. The request and response bindings are objects representing the incoming and outgoing data. The first contains information about the request, such as its url property, which tells us to what URL the request was made.

So, when you open that page in your browser, it sends a request to your own computer. This causes the server function to run and send back a response, which you can then see in the browser. To send something back, you call methods on the response object. The first, writeHead, will write out the response headers. You give it the status code (200 for “OK” in this case) and an object that contains header values. The example sets the Content-Type header to inform the client that we’ll be sending back an HTML document. Next, the actual response body (the document itself) is sent with response.write. You are allowed to call this method multiple times if you want to send the response piece by piece, for example to stream data to the client as it becomes available. Finally, response.end signals the end of the response.

The call to server.listen causes the server to start waiting for connections on port 8000. This is why you have to connect to localhost:8000 to speak to this server, rather than just localhost, which would use the default port 80. When you run this script, the process just sits there and waits. When a script is listening for events—in this case, network connections—node will not automatically exit when it reaches the end of the script. To close it, press control-C. A real web server usually does more than the one in the example—it looks at the request’s method (the method property) to see what action the client is trying to perform and looks at the request’s URL to find out which resource this action is being performed on. We’ll see a more advanced server later. To act as an HTTP client, we can use the request function in the http module.

const {request} = require("http");

let requestStream = request({
  hostname: "eloquentjavascript.net",
  path: "/20_node.html",
  method: "GET",
  headers: {Accept: "text/html"}
}, response => {
  console.log("Server responded with status code",
              response.statusCode);
});
requestStream.end();

The first argument to request configures the request, telling Node what server to talk to, what path to request from that server, which method to use, and so on. The second argument is the function that should be called when a response comes in. It is given an object that allows us to inspect the response, for example to find out its status code.

Just like the response object we saw in the server, the object returned by request allows us to stream data into the request with the write method and finish the request with the end method. The example does not use write because GET requests should not contain data in their request body. There’s a similar request function in the https module that can be used to make requests to https: URLs. Making requests with Node’s raw functionality is rather verbose. There are much more convenient wrapper packages available on NPM. For example, node-fetch provides the promise-based fetch interface that we know from the browser.
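For instance, after installing node-fetch with NPM, the request above could be written roughly like this (a sketch; the exact API may vary between node-fetch versions):

const fetch = require("node-fetch");

fetch("http://eloquentjavascript.net/20_node.html")
  .then(response => {
    console.log("Server responded with status code",
                response.status);
  });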

Streams

We have seen two instances of writable streams in the HTTP examples—namely, the response object that the server could write to and the request object that was returned from request. Writable streams are a widely used concept in Node. Such objects have a write method that can be passed a string or a Buffer object to write something to the stream. Their end method closes the stream and optionally takes a value to write to the stream before closing. Both of these methods can also be given a callback as an additional argument, which they will call when the writing or closing has finished.

It is possible to create a writable stream that points at a file with the createWriteStream function from the fs module. Then you can use the write method on the resulting object to write the file one piece at a time, rather than in one shot as with writeFile. Readable streams are a little more involved. Both the request binding that was passed to the HTTP server’s callback and the response binding passed to the HTTP client’s callback are readable streams—a server reads requests and then writes responses, whereas a client first writes a request and then reads a response. Reading from a stream is done using event handlers, rather than methods.

Objects that emit events in Node have a method called on that is similar to the addEventListener method in the browser. You give it an event name and then a function, and it will register that function to be called whenever the given event occurs. Readable streams have “data” and “end” events. The first is fired every time data comes in, and the second is called whenever the stream is at its end. This model is most suited for streaming data that can be immediately processed, even when the whole document isn’t available yet. A file can be read as a readable stream by using the createReadStream function from fs. This code creates a server that reads request bodies and streams them back to the client as all-uppercase text:

const {createServer} = require("http");

createServer((request, response) => {
  response.writeHead(200, {"Content-Type": "text/plain"});
  request.on("data", chunk =>
    response.write(chunk.toString().toUpperCase()));
  request.on("end", () => response.end());
}).listen(8000);

The chunk value passed to the data handler will be a binary Buffer. We can convert this to a string by decoding it as UTF-8 encoded characters with its toString method. The following piece of code, when run with the uppercasing server active, will send a request to that server and write out the response it gets:

const {request} = require("http");

request({
  hostname: "localhost",
  port: 8000,
  method: "POST"
}, response => {
  response.on("data", chunk =>
    process.stdout.write(chunk.toString()));
}).end("Hello server");
// → HELLO SERVER

The example writes to process.stdout (the process’s standard output, which is a writable stream) instead of using console.log. We can’t use console.log because it adds an extra newline character after each piece of text that it writes, which isn’t appropriate here since the response may come in as multiple chunks.
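To tie these pieces together, here is a small sketch (the file names are placeholders) that copies a file by reading it as a stream and forwarding each chunk to a writable stream:

const {createReadStream, createWriteStream} = require("fs");

let input = createReadStream("source.txt");
let output = createWriteStream("copy.txt");

// Forward every chunk as it arrives, and close the output
// stream once the input has ended.
input.on("data", chunk => output.write(chunk));
input.on("end", () => output.end());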

A file server

Let’s combine our newfound knowledge about HTTP servers and working with the file system to create a bridge between the two: an HTTP server that allows remote access to a file system. Such a server has all kinds of uses—it allows web applications to store and share data, or it can give a group of people shared access to a bunch of files. When we treat files as HTTP resources, the HTTP methods GET, PUT, and DELETE can be used to read, write, and delete the files, respectively. We will interpret the path in the request as the path of the file that the request refers to.

We probably don’t want to share our whole file system, so we’ll interpret these paths as starting in the server’s working directory, which is the directory in which it was started. If we ran the server from /tmp/public/ (or C:\tmp\public\ on Windows), then a request for /file.txt should refer to /tmp/public/file.txt (or C:\tmp\public\file.txt). We’ll build the program piece by piece, using an object called methods to store the functions that handle the various HTTP methods. Method handlers are async functions that get the request object as argument and return a promise that resolves to an object that describes the response.

const {createServer} = require("http");

const methods = Object.create(null);

createServer((request, response) => {
  let handler = methods[request.method] || notAllowed;
  handler(request)
    .catch(error => {
      if (error.status != null) return error;
      return {body: String(error), status: 500};
    })
    .then(({body, status = 200, type = "text/plain"}) => {
      response.writeHead(status, {"Content-Type": type});
      if (body && body.pipe) body.pipe(response);
      else response.end(body);
    });
}).listen(8000);

async function notAllowed(request) {
  return {
    status: 405,
    body: `Method ${request.method} not allowed.`
  };
}

This starts a server that just returns 405 error responses, which is the code used to indicate that the server refuses to handle a given method. When a request handler’s promise is rejected, the catch call translates the error into a response object, if it isn’t one already, so that the server can send back an error response to inform the client that it failed to handle the request. The status field of the response description may be omitted, in which case it defaults to 200 (OK). The content type, in the type property, can also be left off, in which case the response is assumed to be plain text.

When the value of body is a readable stream, it will have a pipe method that is used to forward all content from a readable stream to a writable stream. If not, it is assumed to be either null (no body), a string, or a buffer, and it is passed directly to the response’s end method. To figure out which file path corresponds to a request URL, the urlPath function uses Node’s built-in url module to parse the URL. It takes its pathname, which will be something like "/file.txt", decodes that to get rid of the %20-style escape codes, and resolves it relative to the program’s working directory.

const {parse} = require("url");
const {resolve, sep} = require("path");

const baseDirectory = process.cwd();

function urlPath(url) {
  let {pathname} = parse(url);
  let path = resolve(decodeURIComponent(pathname).slice(1));
  if (path != baseDirectory &&
      !path.startsWith(baseDirectory + sep)) {
    throw {status: 403, body: "Forbidden"};
  }
  return path;
}

As soon as you set up a program to accept network requests, you have to start worrying about security. In this case, if we aren’t careful, it is likely that we’ll accidentally expose our whole file system to the network. File paths are strings in Node. To map such a string to an actual file, there is a nontrivial amount of interpretation going on. Paths may, for example, include ../ to refer to a parent directory. So one obvious source of problems would be requests for paths like /../secret_file.

To avoid such problems, urlPath uses the resolve function from the path module, which resolves relative paths. It then verifies that the result is below the working directory. The process.cwd function (where cwd stands for “current working directory”) can be used to find this working directory. The sep variable from the path package is the system’s path separator—a backslash on Windows and a forward slash on most other systems. When the path doesn’t start with the base directory, the function throws an error response object, using the HTTP status code indicating that access to the resource is forbidden.

We’ll set up the GET method to return a list of files when reading a directory and to return the file’s content when reading a regular file. One tricky question is what kind of Content-Type header we should set when returning a file’s content. Since these files could be anything, our server can’t simply return the same content type for all of them. NPM can help us again here. The mime package (content type indicators like text/plain are also called MIME types) knows the correct type for a large number of file extensions. The following npm command, in the directory where the server script lives, installs a specific version of mime:

$ npm install mime@2.2.0
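To get a feel for the package (this snippet is only an illustration, not part of the server), its getType function maps a file name to a content type:

const mime = require("mime");

console.log(mime.getType("file.txt"));
// → text/plain
console.log(mime.getType("picture.png"));
// → image/png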

When a requested file does not exist, the correct HTTP status code to return is 404. We’ll use the stat function, which looks up information about a file, to find out both whether the file exists and whether it is a directory.

const {createReadStream} = require("fs");
const {stat, readdir} = require("fs").promises;
const mime = require("mime");

methods.GET = async function(request) {
  let path = urlPath(request.url);
  let stats;
  try {
    stats = await stat(path);
  } catch (error) {
    if (error.code != "ENOENT") throw error;
    else return {status: 404, body: "File not found"};
  }
  if (stats.isDirectory()) {
    return {body: (await readdir(path)).join("\n")};
  } else {
    return {body: createReadStream(path),
            type: mime.getType(path)};
  }
};

Because it has to touch the disk and thus might take a while, stat is asynchronous. Since we’re using promises rather than callback style, it has to be imported from promises instead of directly from fs. When the file does not exist, stat will throw an error object with a code property of “ENOENT”. These somewhat obscure, Unix-inspired codes are how you recognize error types in Node.

The stats object returned by stat tells us a number of things about a file, such as its size (size property) and its modification date (mtime property). Here we are interested in the question of whether it is a directory or a regular file, which the isDirectory method tells us. We use readdir to read the array of files in a directory and return it to the client. For normal files, we create a readable stream with createReadStream and return that as the body, along with the content type that the mime package gives us for the file’s name. The code to handle DELETE requests is slightly simpler.

const {rmdir, unlink} = require("fs").promises;

methods.DELETE = async function(request) {
  let path = urlPath(request.url);
  let stats;
  try {
    stats = await stat(path);
  } catch (error) {
    if (error.code != "ENOENT") throw error;
    else return {status: 204};
  }
  if (stats.isDirectory()) await rmdir(path);
  else await unlink(path);
  return {status: 204};
};

When an HTTP response does not contain any data, the status code 204 (“no content”) can be used to indicate this. Since the response to deletion doesn’t need to transmit any information beyond whether the operation succeeded, that is a sensible thing to return here.

You may be wondering why trying to delete a nonexistent file returns a success status code, rather than an error. When the file that is being deleted is not there, you could say that the request’s objective is already fulfilled. The HTTP standard encourages us to make requests idempotent, which means that making the same request multiple times produces the same result as making it once. In a way, if you try to delete something that’s already gone, the effect you were trying to produce has been achieved—the thing is no longer there. This is the handler for PUT requests:

const {createWriteStream} = require("fs");

function pipeStream(from, to) {
  return new Promise((resolve, reject) => {
    from.on("error", reject);
    to.on("error", reject);
    to.on("finish", resolve);
    from.pipe(to);
  });
}

methods.PUT = async function(request) {
  let path = urlPath(request.url);
  await pipeStream(request, createWriteStream(path));
  return {status: 204};
};

We don’t need to check whether the file exists this time—if it does, we’ll just overwrite it. We again use pipe to move data from a readable stream to a writable one, in this case from the request to the file. But since pipe isn’t written to return a promise, we have to write a wrapper, pipeStream, that creates a promise around the outcome of calling pipe. When something goes wrong when opening the file, createWriteStream will still return a stream, but that stream will fire an “error” event. The output stream to the request may also fail, for example if the network goes down. So we wire up both streams’ “error” events to reject the promise. When pipe is done, it will close the output stream, which causes it to fire a “finish” event. That’s the point where we can successfully resolve the promise (returning nothing).

The command line tool curl, widely available on Unix-like systems (such as macOS and Linux), can be used to make HTTP requests. The following session briefly tests our server. The -X option is used to set the request’s method, and -d is used to include a request body.

$ curl http://localhost:8000/file.txt
File not found
$ curl -X PUT -d hello http://localhost:8000/file.txt
$ curl http://localhost:8000/file.txt
hello
$ curl -X DELETE http://localhost:8000/file.txt
$ curl http://localhost:8000/file.txt
File not found

The first request for file.txt fails since the file does not exist yet. The PUT request creates the file, and behold, the next request successfully retrieves it. After deleting it with a DELETE request, the file is again missing.

Summary

Node is a nice, small system that lets us run JavaScript in a nonbrowser context. It was originally designed for network tasks to play the role of a node in a network. But it lends itself to all kinds of scripting tasks, and if writing JavaScript is something you enjoy, automating tasks with Node works well. NPM provides packages for everything you can think of (and quite a few things you’d probably never think of), and it allows you to fetch and install those packages with the npm program. Node comes with a number of built-in modules, including the fs module for working with the file system and the http module for running HTTP servers and making HTTP requests. All input and output in Node is done asynchronously, unless you explicitly use a synchronous variant of a function, such as readFileSync. When calling such asynchronous functions, you provide callback functions, and Node will call them with an error value and (if available) a result when it is ready.
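As a reminder of that callback convention, reading a file without promises looks something like this (a minimal sketch, with a made-up file name):

const {readFile} = require("fs");

readFile("file.txt", "utf8", (error, text) => {
  if (error) throw error;
  console.log("The file contains:", text);
});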

Practice Project: Skill-Sharing Website

A skill-sharing meeting is an event where people with a shared interest come together and give small, informal presentations about things they know. At a gardening skill-sharing meeting, someone might explain how to cultivate celery. Or in a programming skill-sharing group, you could drop by and tell people about Node.js. Such meetups—also often called users’ groups when they are about computers— are a great way to broaden your horizon, learn about new developments, or simply meet people with similar interests. Many larger cities have JavaScript meetups. They are typically free to attend, and I’ve found the ones I’ve visited to be friendly and welcoming. In this final project, our goal is to set up a website for managing talks given at a skill-sharing meeting. Imagine a small group of people meeting up regularly in the office of one of the members to talk about unicycling. The previous organizer of the meetings moved to another town, and nobody stepped forward to take over this task. We want a system that will let the participants propose and discuss talks among themselves, without a central organizer.

Design

There is a server part to this project, written for Node.js, and a client part, written for the browser. The server stores the system’s data and provides it to the client. It also serves the files that implement the client-side system. The server keeps the list of talks proposed for the next meeting, and the client shows this list. Each talk has a presenter name, a title, a summary, and an array of comments associated with it. The client allows users to propose new talks (adding them to the list), delete talks, and comment on existing talks. Whenever the user makes such a change, the client makes an HTTP request to tell the server about it.

 The application will be set up to show a live view of the current proposed talks and their comments. Whenever someone, somewhere, submits a new talk or adds a comment, all people who have the page open in their browsers should immediately see the change. This poses a bit of a challenge—there is no way for a web server to open a connection to a client, nor is there a good way to know which clients are currently looking at a given website. A common solution to this problem is called long polling, which happens to be one of the motivations for Node’s design.

Long polling

To be able to immediately notify a client that something changed, we need a connection to that client. Since web browsers do not traditionally accept connections and clients are often behind routers that would block such connections anyway, having the server initiate this connection is not practical. We can arrange for the client to open the connection and keep it around so that the server can use it to send information when it needs to do so. But an HTTP request allows only a simple flow of information: the client sends a request, the server comes back with a single response, and that is it. There is a technology called WebSockets, supported by modern browsers, that makes it possible to open connections for arbitrary data exchange. But using them properly is somewhat tricky.

We use a simpler technique—long polling—where clients continuously ask the server for new information using regular HTTP requests, and the server stalls its answer when it has nothing new to report. As long as the client makes sure it constantly has a polling request open, it will receive information from the server quickly after it becomes available. For example, if Fatma has our skill-sharing application open in her browser, that browser will have made a request for updates and will be waiting for a response to that request. When Iman submits a talk on Extreme Downhill Unicycling, the server will notice that Fatma is waiting for updates and send a response containing the new talk to her pending request. Fatma’s browser will receive the data and update the screen to show the talk.

To prevent connections from timing out (being aborted because of a lack of activity), long polling techniques usually set a maximum time for each request, after which the server will respond anyway, even though it has nothing to report, after which the client will start a new request. Periodically restarting the request also makes the technique more robust, allowing clients to recover from temporary connection failures or server problems. A busy server that is using long polling may have thousands of waiting requests, and thus TCP connections, open. Node, which makes it easy to manage many connections without creating a separate thread of control for each one, is a good fit for such a system.

HTTP interface

Before we start designing either the server or the client, let’s think about the point where they touch: the HTTP interface over which they communicate. We will use JSON as the format of our request and response bodies. We’ll try to make good use of HTTP methods and headers. The interface is centered around the /talks path. Paths that do not start with /talks will be used for serving static files—the HTML and JavaScript code for the client-side system. A GET request to /talks returns a JSON document like this:

[{"title": "Unituning",
  "presenter": "Jamal",
  "summary": "Modifying your cycle for extra style",
  "comments": []}]

Creating a new talk is done by making a PUT request to a URL like /talks/Unituning, where the part after the second slash is the title of the talk. The PUT request’s body should contain a JSON object that has presenter and summary properties. Since talk titles may contain spaces and other characters that may not appear normally in a URL, title strings must be encoded with the encodeURIComponent function when building up such a URL.

console.log("/talks/" + encodeURIComponent("How to Idle"));
// → /talks/How%20to%20Idle

A request to create a talk about idling might look something like this:

PUT /talks/How%20to%20Idle HTTP/1.1
Content-Type: application/json
Content-Length: 92

{"presenter": "Maureen",
 "summary": "Standing still on a unicycle"}

Such URLs also support GET requests to retrieve the JSON representation of a talk and DELETE requests to delete a talk. Adding a comment to a talk is done with a POST request to a URL like /talks/Unituning/comments, with a JSON body that has author and message properties.

POST /talks/Unituning/comments HTTP/1.1
Content-Type: application/json
Content-Length: 72

{"author": "Iman",
 "message": "Will you talk about raising a cycle?"}

To support long polling, GET requests to /talks may include extra headers that inform the server to delay the response if no new information is available. We’ll use a pair of headers normally intended to manage caching: ETag and If-None-Match. Servers may include an ETag (“entity tag”) header in a response. Its value is a string that identifies the current version of the resource. Clients, when they later request that resource again, may make a conditional request by including an If-None-Match header whose value holds that same string. If the resource hasn’t changed, the server will respond with status code 304, which means “not modified”, telling the client that its cached version is still current. When the tag does not match, the server responds as normal.

We need something like this, where the client can tell the server which version of the list of talks it has, and the server responds only when that list has changed. But instead of immediately returning a 304 response, the server should stall the response and return only when something new is available or a given amount of time has elapsed. To distinguish long polling requests from normal conditional requests, we give them another header, Prefer: wait=90, which tells the server that the client is willing to wait up to 90 seconds for the response. The server will keep a version number that it updates every time the talks change and will use that as the ETag value. Clients can make requests like this to be notified when the talks change:

GET /talks HTTP/1.1
If-None-Match: "4"
Prefer: wait=90

(time passes)

HTTP/1.1 200 OK
Content-Type: application/json
ETag: "5"
Content-Length: 295

[....]

The protocol described here does not do any access control. Everybody can comment, modify talks, and even delete them. (Since the Internet is full of hooligans, putting such a system online without further protection probably wouldn’t end well.)

The server

Let’s start by building the server-side part of the program. The code in this section runs on Node.js.

Routing

Our server will use createServer to start an HTTP server. In the function that handles a new request, we must distinguish between the various kinds of requests (as determined by the method and the path) that we support. This can be done with a long chain of if statements, but there is a nicer way. A router is a component that helps dispatch a request to the function that can handle it. You can tell the router, for example, that PUT requests with a path that matches the regular expression /^\/talks\/([^\/]+)$/ (/talks/ followed by a talk title) can be handled by a given function. In addition, it can help extract the meaningful parts of the path (in this case the talk title), wrapped in parentheses in the regular expression, and pass them to the handler function. There are a number of good router packages on NPM, but here we’ll write one ourselves to illustrate the principle. This is router.js, which we will later require from our server module:

const {parse} = require("url");

module.exports = class Router {
  constructor() {
    this.routes = [];
  }
  add(method, url, handler) {
    this.routes.push({method, url, handler});
  }
  resolve(context, request) {
    let path = parse(request.url).pathname;

    for (let {method, url, handler} of this.routes) {
      let match = url.exec(path);
      if (!match || request.method != method) continue;
      let urlParts = match.slice(1).map(decodeURIComponent);
      return handler(context, ...urlParts, request);
    }
    return null;
  }
};

The module exports the Router class. A router object allows new handlers to be registered with the add method and can resolve requests with its resolve method. The latter will return a response when a handler was found, and null otherwise. It tries the routes one at a time (in the order in which they were defined) until a matching one is found. The handler functions are called with the context value (which will be the server instance in our case), match strings for any groups they defined in their regular expression, and the request object. The strings have to be URL-decoded since the raw URL may contain %20-style codes.
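To make that flow concrete, here is a hypothetical use of the router outside of this project (the route, handler, and request objects are made up for illustration):

const Router = require("./router");
const router = new Router();

// Register a handler for GET /hello/<name>.
router.add("GET", /^\/hello\/([^\/]+)$/, (context, name, request) => {
  return {body: `Hello, ${name}!`};
});

// resolve runs the first matching handler, or returns null.
router.resolve(null, {method: "GET", url: "/hello/Iman"});
// → {body: "Hello, Iman!"}
router.resolve(null, {method: "POST", url: "/hello/Iman"});
// → null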

Serving files

When a request matches none of the request types defined in our router, the server must interpret it as a request for a file in the public directory. It would be possible to use the file server defined earlier in this chapter to serve such files, but we neither need nor want to support PUT and DELETE requests on files, and we would like to have advanced features such as support for caching, which it lacks. So let’s use a solid, well-tested static file server from NPM instead.

I opted for ecstatic. This isn’t the only such server on NPM, but it works well and fits our purposes. The ecstatic package exports a function that can be called with a configuration object to produce a request handler function. We use the root option to tell the server where it should look for files. The handler function accepts request and response parameters and can be passed directly to createServer to create a server that serves only files. We want to first check for requests that we should handle specially, though, so we wrap it in another function.
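As an aside, a server that does nothing but serve files really is only a few lines. This is just a sketch, assuming the ecstatic package is installed and a ./public directory exists; our actual server, shown next, wraps the handler so that API requests are checked first:

const {createServer} = require("http");
const ecstatic = require("ecstatic");

// Serve everything under ./public and nothing else.
createServer(ecstatic({root: "./public"})).listen(8000);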

const {createServer} = require("http");
const Router = require("./router");
const ecstatic = require("ecstatic");

const router = new Router();
const defaultHeaders = {"Content-Type": "text/plain"};

class SkillShareServer {
  constructor(talks) {
    this.talks = talks;
    this.version = 0;
    this.waiting = [];

    let fileServer = ecstatic({root: "./public"});
    this.server = createServer((request, response) => {
      let resolved = router.resolve(this, request);
      if (resolved) {
        resolved.catch(error => {
          if (error.status != null) return error;
          return {body: String(error), status: 500};
        }).then(({body,
                  status = 200,
                  headers = defaultHeaders}) => {
          response.writeHead(status, headers);
          response.end(body);
        });
      } else {
        fileServer(request, response);
      }
    });
  }
  start(port) {
    this.server.listen(port);
  }
  stop() {
    this.server.close();
  }
}

This uses a convention for responses similar to the one used by the file server—handlers return promises that resolve to objects describing the response. It wraps the server in an object that also holds its state.

Talks as resources

The talks that have been proposed are stored in the talks property of the server, an object whose property names are the talk titles. These will be exposed as HTTP resources under /talks/[title], so we need to add handlers to our router that implement the various methods that clients can use to work with them. The handler for requests that GET a single talk must look up the talk and respond either with the talk’s JSON data or with a 404 error response.

const talkPath = /^\/talks\/([^\/]+)$/;

router.add("GET", talkPath, async (server, title) => {
  if (title in server.talks) {
    return {body: JSON.stringify(server.talks[title]),
            headers: {"Content-Type": "application/json"}};
  } else {
    return {status: 404, body: `No talk '${title}' found`};
  }
});

Deleting a talk is done by removing it from the talks object.

router.add("DELETE", talkPath, async (server, title) => {
  if (title in server.talks) {
    delete server.talks[title];
    server.updated();
  }
  return {status: 204};
});

The updated method, which we will define later, notifies waiting long polling requests about the change. To retrieve the content of a request body, we define a function called readStream, which reads all content from a readable stream and returns a promise that resolves to a string.

function readStream(stream) {
  return new Promise((resolve, reject) => {
    let data = "";
    stream.on("error", reject);
    stream.on("data", chunk => data += chunk.toString());
    stream.on("end", () => resolve(data));
  });
}

One handler that needs to read request bodies is the PUT handler, which is used to create new talks. It has to check whether the data it was given has presenter and summary properties, which are strings. Any data coming from outside the system might be nonsense, and we don’t want to corrupt our internal data model or crash when bad requests come in. If the data looks valid, the handler stores an object that represents the new talk in the talks object, possibly overwriting an existing talk with this title, and again calls updated.

router.add("PUT", talkPath,
           async (server, title, request) => {
  let requestBody = await readStream(request);
  let talk;
  try { talk = JSON.parse(requestBody); }
  catch (_) { return {status: 400, body: "Invalid JSON"}; }

  if (!talk ||
      typeof talk.presenter != "string" ||
      typeof talk.summary != "string") {
    return {status: 400, body: "Bad talk data"};
  }
  server.talks[title] = {title,
                         presenter: talk.presenter,
                         summary: talk.summary,
                         comments: []};
  server.updated();
  return {status: 204};
});

Adding a comment to a talk works similarly. We use readStream to get the content of the request, validate the resulting data, and store it as a comment when it looks valid.

router.add("POST", /^\/talks\/([^\/]+)\/comments$/,
           async (server, title, request) => {
  let requestBody = await readStream(request);
  let comment;
  try { comment = JSON.parse(requestBody); }
  catch (_) { return {status: 400, body: "Invalid JSON"}; }

  if (!comment ||
      typeof comment.author != "string" ||
      typeof comment.message != "string") {
    return {status: 400, body: "Bad comment data"};
  } else if (title in server.talks) {
    server.talks[title].comments.push(comment);
    server.updated();
    return {status: 204};
  } else {
    return {status: 404, body: `No talk '${title}' found`};
  }
});

Trying to add a comment to a nonexistent talk returns a 404 error.

Long polling support

The most interesting aspect of the server is the part that handles long polling. When a GET request comes in for /talks, it may be either a regular request or a long polling request. There will be multiple places in which we have to send an array of talks to the client, so we first define a helper method that builds up such an array and includes an ETag header in the response.

SkillShareServer.prototype.talkResponse = function() {
  let talks = [];
  for (let title of Object.keys(this.talks)) {
    talks.push(this.talks[title]);
  }
  return {
    body: JSON.stringify(talks),
    headers: {"Content-Type": "application/json",
              "ETag": `"${this.version}"`}
  };
};

The handler itself needs to look at the request headers to see whether If-None-Match and Prefer headers are present. Node stores headers, whose names are specified to be case insensitive, under their lowercase names.

router.add("GET", /^\/talks$/, async (server, request) => {
  let tag = /"(.*)"/.exec(request.headers["if-none-match"]);
  let wait = /\bwait=(\d+)/.exec(request.headers["prefer"]);
  if (!tag || tag[1] != server.version) {
    return server.talkResponse();
  } else if (!wait) {
    return {status: 304};
  } else {
    return server.waitForChanges(Number(wait[1]));
  }
});

If no tag was given or a tag was given that doesn’t match the server’s current version, the handler responds with the list of talks. If the request is conditional and the talks did not change, we consult the Prefer header to see whether we should delay the response or respond right away. Callback functions for delayed requests are stored in the server’s waiting array so that they can be notified when something happens. The waitForChanges method also immediately sets a timer to respond with a 304 status when the request has waited long enough.

SkillShareServer.prototype.waitForChanges = function(time) {
  return new Promise(resolve => {
    this.waiting.push(resolve);
    setTimeout(() => {
      if (!this.waiting.includes(resolve)) return;
      this.waiting = this.waiting.filter(r => r != resolve);
      resolve({status: 304});
    }, time * 1000);
  });
};

Registering a change with updated increases the version property and wakes up all waiting requests.

SkillShareServer.prototype.updated = function() {
  this.version++;
  let response = this.talkResponse();
  this.waiting.forEach(resolve => resolve(response));
  this.waiting = [];
};

That concludes the server code. If we create an instance of SkillShareServer and start it on port 8000, the resulting HTTP server serves files from the public subdirectory alongside a talk-managing interface under the /talks URL.

new SkillShareServer(Object.create(null)).start(8000);

The client

The client-side part of the skill-sharing website consists of three files: a tiny HTML page, a style sheet, and a JavaScript file.

HTML

It is a widely used convention for web servers to try to serve a file named index.html when a request is made directly to a path that corresponds to a directory. The file server module we use, ecstatic, supports this convention. When a request is made to the path /, the server looks for the file ./public/index.html (./public being the root we gave it) and returns that file if found. Thus, if we want a page to show up when a browser is pointed at our server, we should put it in public/index.html. This is our index file:

<!doctype html>
<meta charset="utf-8">
<title>Skill Sharing</title>
<link rel="stylesheet" href="skillsharing.css">

<h1>Skill Sharing</h1>
<script src="skillsharing_client.js"></script>

It defines the document title and includes a style sheet, which defines a few styles to, among other things, make sure there is some space between talks. At the bottom, it adds a heading at the top of the page and loads the script that contains the client-side application.

Actions

The application state consists of the list of talks and the name of the user, and we’ll store it in a {talks, user} object. We don’t allow the user interface to directly manipulate the state or send off HTTP requests. Rather, it may emit actions that describe what the user is trying to do. The handleAction function takes such an action and makes it happen. Because our state updates are so simple, state changes are handled in the same function.

function handleAction(state, action) {
  if (action.type == "setUser") {
    localStorage.setItem("userName", action.user);
    return Object.assign({}, state, {user: action.user});
  } else if (action.type == "setTalks") {
    return Object.assign({}, state, {talks: action.talks});
  } else if (action.type == "newTalk") {
    fetchOK(talkURL(action.title), {
      method: "PUT",
      headers: {"Content-Type": "application/json"},
      body: JSON.stringify({
        presenter: state.user,
        summary: action.summary
      })
    }).catch(reportError);
  } else if (action.type == "deleteTalk") {
    fetchOK(talkURL(action.talk), {method: "DELETE"})
      .catch(reportError);
  } else if (action.type == "newComment") {
    fetchOK(talkURL(action.talk) + "/comments", {
      method: "POST",
      headers: {"Content-Type": "application/json"},
      body: JSON.stringify({
        author: state.user,
        message: action.message
      })
    }).catch(reportError);
  }
  return state;
}

We’ll store the user’s name in localStorage so that it can be restored when the page is loaded. The actions that need to involve the server make network requests, using fetch, to the HTTP interface described earlier. We use a wrapper function, fetchOK, which makes sure the returned promise is rejected when the server returns an error code.

function fetchOK(url, options) {
  return fetch(url, options).then(response => {
    if (response.status < 400) return response;
    else throw new Error(response.statusText);
  });
}

This helper function is used to build up a URL for a talk with a given title.

function talkURL(title) {
  return "talks/" + encodeURIComponent(title);
}

When the request fails, we don’t want to have our page just sit there, doing nothing without explanation. So we define a function called reportError, which at least shows the user a dialog that tells them something went wrong.

function reportError(error) {
  alert(String(error));
}

Rendering components

We’ll use an approach similar to the one we saw before, splitting the application into components. But since some of the components either never need to update or are always fully redrawn when updated, we’ll define those not as classes but as functions that directly return a DOM node. For example, here is a component that shows the field where the user can enter their name:

function renderUserField(name, dispatch) {
  return elt("label", {}, "Your name: ", elt("input", {
    type: "text",
    value: name,
    onchange(event) {
      dispatch({type: "setUser", user: event.target.value});
    }
  }));
}

A similar function is used to render talks, which include a list of comments and a form for adding a new comment.

function renderTalk(talk, dispatch) {
  return elt(
    "section", {className: "talk"},
    elt("h2", null, talk.title, " ", elt("button", {
      type: "button",
      onclick() {
        dispatch({type: "deleteTalk", talk: talk.title});
      }
    }, "Delete")),
    elt("div", null, "by ",
        elt("strong", null, talk.presenter)),
    elt("p", null, talk.summary),
    ...talk.comments.map(renderComment),
    elt("form", {
      onsubmit(event) {
        event.preventDefault();
        let form = event.target;
        dispatch({type: "newComment",
                  talk: talk.title,
                  message: form.elements.comment.value});
        form.reset();
      }
    }, elt("input", {type: "text", name: "comment"}), " ",
       elt("button", {type: "submit"}, "Add comment")));
}

The “submit” event handler calls form.reset to clear the form’s content after creating a “newComment” action. When creating moderately complex pieces of DOM, this style of programming starts to look rather messy. There’s a widely used (non-standard) JavaScript extension called JSX that lets you write HTML directly in your scripts, which can make such code prettier (depending on what you consider pretty). Before you can actually run such code, you have to run a program on your script to convert the pseudo-HTML into JavaScript function calls much like the ones we use here. Comments are simpler to render.

function renderComment(comment) {
  return elt("p", {className: "comment"},
             elt("strong", null, comment.author),
             ": ", comment.message);
}
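To give an impression of the JSX style mentioned earlier, the same component might be written as shown below. This is purely illustrative: it assumes a build step (for example, Babel with its JSX pragma pointed at elt) that compiles the tags back into elt(...) calls before the code runs.

/* Hypothetical JSX version of renderComment; requires compilation. */
function renderComment(comment) {
  return <p className="comment">
    <strong>{comment.author}</strong>: {comment.message}
  </p>;
}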

Finally, the form that the user can use to create a new talk is rendered like this:

function renderTalkForm(dispatch) {
  let title = elt("input", {type: "text"});
  let summary = elt("input", {type: "text"});
  return elt("form", {
    onsubmit(event) {
      event.preventDefault();
      dispatch({type: "newTalk",
                title: title.value,
                summary: summary.value});
      event.target.reset();
    }
  }, elt("h3", null, "Submit a Talk"),
     elt("label", null, "Title: ", title),
     elt("label", null, "Summary: ", summary),
     elt("button", {type: "submit"}, "Submit"));
}

Polling

To start the app we need the current list of talks. Since the initial load is closely related to the long polling process—the ETag from the load must be used when polling—we’ll write a function that keeps polling the server for /talks and calls a callback function when a new set of talks is available.

async function pollTalks(update) {
  let tag = undefined;
  for (;;) {
    let response;
    try {
      response = await fetchOK("/talks", {
        headers: tag && {"If-None-Match": tag,
                         "Prefer": "wait=90"}
      });
    } catch (e) {
      console.log("Request failed: " + e);
      await new Promise(resolve => setTimeout(resolve, 500));
      continue;
    }
    if (response.status == 304) continue;
    tag = response.headers.get("ETag");
    update(await response.json());
  }
}

This is an async function so that looping and waiting for the request is easier. It runs an infinite loop that, on each iteration, retrieves the list of talks—either normally or, if this isn’t the first request, with the headers included that make it a long polling request. When a request fails, the function waits a moment and then tries again. This way, if your network connection goes away for a while and then comes back, the application can recover and continue updating. The promise resolved via setTimeout is a way to force the async function to wait.

When the server gives back a 304 response, that means a long polling request timed out, so the function should just immediately start the next request. If the response is a normal 200 response, its body is read as JSON and passed to the callback, and its ETag header value is stored for the next iteration.
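That setTimeout trick is worth spelling out. It is equivalent to this small hypothetical helper (not part of the project), which wraps setTimeout in a promise so that an async function can await it:

// A promise-based pause, matching the
// `await new Promise(resolve => setTimeout(resolve, 500))` line above.
function wait(milliseconds) {
  return new Promise(resolve => setTimeout(resolve, milliseconds));
}

async function retryLater() {
  console.log("Request failed, retrying soon");
  await wait(500);
  console.log("Trying again");
}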

The application

The following component ties the whole user interface together:

class SkillShareApp {
  constructor(state, dispatch) {
    this.dispatch = dispatch;
    this.talkDOM = elt("div", {className: "talks"});
    this.dom = elt("div", null,
                   renderUserField(state.user, dispatch),
                   this.talkDOM,
                   renderTalkForm(dispatch));
    this.syncState(state);
  }

  syncState(state) {
    if (state.talks != this.talks) {
      this.talkDOM.textContent = "";
      for (let talk of state.talks) {
        this.talkDOM.appendChild(
          renderTalk(talk, this.dispatch));
      }
      this.talks = state.talks;
    }
  }
}

When the talks change, this component redraws all of them. This is simple but also wasteful. We’ll get back to that in the exercises. We can start the application like this:

function runApp() {
  let user = localStorage.getItem("userName") || "Anon";
  let state, app;
  function dispatch(action) {
    state = handleAction(state, action);
    app.syncState(state);
  }

  pollTalks(talks => {
    if (!app) {
      state = {user, talks};
      app = new SkillShareApp(state, dispatch);
      document.body.appendChild(app.dom);
    } else {
      dispatch({type: "setTalks", talks});
    }
  }).catch(reportError);
}

runApp();

If you run the server and open two browser windows for http://localhost:8000 next to each other, you can see that the actions you perform in one window are immediately visible in the other.
