Deploying a Dojo App with Docker

Published July 16, 2019

So you’ve built an amazing app using Dojo and now you are ready to go live. After a bit of research, you learn that traditional deployments are challenging! Luckily, the days of FTPing files are long gone, and we can rely on Docker for fast, reliable deployments. Using Docker will not only document your build process, but it will also give you a Docker image you can easily deploy to production or run locally.

In this article, we’ll step you through how you can use Docker to build a Docker image that will serve your Dojo app. We’ll start off with a simple, though naive, approach to building our Dojo app. Then, we’ll make some improvements to our image’s file size. Finally, we’ll streamline our build process to take full advantage of the Docker build cache. We won’t go into how to deploy the Docker image, but once you have the image, running it in a production environment should be a cinch.

Building Manually

Before we start configuring our Docker build, it’s helpful to take a step back and think about the steps we perform manually to build our Dojo app. To build a Dojo app from the source we need to:

  • Install the correct version of Node.js and npm
  • Install the Dojo CLI
  • Install application dependencies
  • Build the app (dojo build)

We don’t necessarily need to perform all of these steps every time, but if we are setting up an app on a brand new environment, we’ll need to perform these steps.
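In shell form, that manual process looks roughly like this (assuming Node.js and npm are already installed, and that we run from the project root):

```shell
# Install the Dojo CLI globally
npm i -g @dojo/cli

# Install the application's dependencies from package.json
npm i

# Compile the app; production assets land in output/dist
dojo build
```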

Building with Docker

A Dockerfile gives Docker instructions on how to build your image. We’ll start off by constructing a simple Dockerfile that mirrors our manual build process, and then extend it with a few optimizations from there.

If you just want to browse, feel free to check out the example Docker + Dojo project.

Create a Dockerfile in the project directory that looks like this:

FROM nginx:1.17

RUN apt-get update && apt-get install -y curl && \
    curl -sL https://deb.nodesource.com/setup_8.x | bash - && \
    apt-get install -y build-essential nodejs && \
    npm i -g @dojo/cli

COPY . /usr/local/app

WORKDIR /usr/local/app

RUN npm i
RUN dojo build

RUN cp -R /usr/local/app/output/dist/* /usr/share/nginx/html/

Let’s walk through what each instruction in this Dockerfile does:

  1. FROM nginx:1.17 – We’re going to start from an official nginx image and apply our changes on top of it. This is great because we get a web server that is pre-built and has already been configured for us with sensible defaults. You can read more about the official nginx image on its Docker Hub page.
  2. RUN apt-get update... – This is where we install our build tooling. The official nginx image does not have Node.js installed, so we need to install it. After Node.js gets installed, we can install the Dojo CLI globally.
  3. COPY . /usr/local/app – Here we copy all of the files in our project root directory (the same location as our Dockerfile) into the Docker image. We’re copying these files to a temporary directory (I chose /usr/local/app) because we need a place to build our application.
  4. WORKDIR /usr/local/app – Set our working directory to our project root. Any RUN commands we execute will now occur within the context of this directory.
  5. RUN npm i – Install our application’s build dependencies.
  6. RUN dojo build – Build the Dojo app. When this is finished, our built files reside in /usr/local/app/output/dist.
  7. RUN cp -R /usr/local/app/output/dist/* /usr/share/nginx/html/ – Copy our compiled files into nginx’s hosting directory. In our nginx image, anything copied to /usr/share/nginx/html gets served as static content.

Now that we have our Dockerfile configured, we can build our image with a simple command:

docker build -t my-amazing-app .

This command tells Docker to execute the instructions in our Dockerfile and tag the final image with “my-amazing-app.” You don’t need to tag your image, but it makes it a bit easier to track its purpose.
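Tags can also carry a version, which makes it easier to roll back to a previous build later. For example (the version number here is arbitrary):

```shell
# Tag the same build as both a specific version and "latest"
docker build -t my-amazing-app:1.0.0 -t my-amazing-app:latest .
```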

After the image is created, you can easily try it out by running this command:

docker run -p 8080:80 my-amazing-app

The nginx image doesn’t print anything to the screen when it gets started, so you will not get much feedback. With your image running, visit http://localhost:8080 to see your application in action. You can hit CTRL+C when you are finished and want to shut down the Docker container.
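If you would rather keep your terminal free, you can run the container in the background instead. The container name below is an arbitrary choice:

```shell
# Run detached (-d) and give the container a name we can refer to
docker run -d -p 8080:80 --name my-amazing-app-1 my-amazing-app

# Watch nginx's access log to confirm requests are arriving
docker logs -f my-amazing-app-1

# Stop and remove the container when you are done
docker stop my-amazing-app-1
docker rm my-amazing-app-1
```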

Building Smarter

We’ve done it! We’ve successfully created a Docker image of our built Dojo web application. We can use this image to test in a continuous integration environment, or just spin up this image on our web server and release our app in the wild.

Although we successfully built our image, we took a somewhat naive approach. Let’s inspect our image to learn how we can do better.

docker run -it my-amazing-app du -h --max-depth=1 /usr/local/app

You should see something like this:

56K	/usr/local/app/tests
88K	/usr/local/app/src
7.0M	/usr/local/app/output
24K	/usr/local/app/.idea
450M	/usr/local/app/node_modules
458M	/usr/local/app

If you recall our application build process, we copied the app into our image, built it, and then copied the built files into the hosted directory. Looking at the output we just printed, the leftover files from that build are taking up over 450MB! We can confirm this by looking at the size of our image and comparing it to the size of the nginx image we’re using.

docker image inspect my-amazing-app --format='{{.Size}}'
844208614
docker image inspect nginx:1.17 --format='{{.Size}}'
109258867

Uh oh! It looks like the problem is much worse than expected. Our image is over 700MB larger than the base nginx image. This is because we’ve also installed Node.js, curl, and the Dojo CLI, which we only needed to build our application.
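If you prefer human-readable numbers over raw byte counts, docker image ls reports the same sizes in a friendlier format:

```shell
# The SIZE column shows each image's size in MB/GB
docker image ls my-amazing-app
docker image ls nginx
```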

Let’s re-evaluate our approach. We really have two distinct processes that we’ve munged into our single Dockerfile. First, we want to build our application (this includes installing Node.js dependencies and running the Dojo CLI to create our production-ready assets). Second, we want to configure the nginx image so that it can host our files.

We could add extra RUN commands to our Dockerfile to remove the /usr/local/app directory after we build and to uninstall Node.js and the Dojo CLI, but that would pollute our Dockerfile with extraneous commands, and each extra step adds time to every build.
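Worse, because every RUN instruction creates a new layer on top of the previous ones, cleanup commands in a separate RUN don’t actually shrink the image; the deleted files still exist in the earlier layers. A cleanup step like this sketch would only hide the files, not reclaim the space:

```dockerfile
# Each RUN is a new layer: these deletions mask the files in
# later layers, but the earlier layers still contain them,
# so the image stays just as large.
RUN npm cache clean --force && \
    rm -rf /usr/local/app && \
    apt-get purge -y nodejs build-essential curl
```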

Luckily for us, there is a better way! Docker 17.05 and newer includes a multi-stage build feature that provides exactly what we need. A multi-stage build lets us perform some build steps in one image and then switch to another image for additional build steps. The last image we switch to becomes the basis for our final Docker image.

Let’s refactor our Dockerfile to use a multi-stage build.

FROM node AS builder
RUN npm i -g @dojo/cli

COPY . /usr/local/app
WORKDIR /usr/local/app

RUN npm i
RUN dojo build

FROM nginx:1.17
COPY --from=builder /usr/local/app/output/dist/ /usr/share/nginx/html/

Our refactored Dockerfile is easier to read after converting it to a multi-stage build! We’ve taken our two phases, build and configure, and executed them in different Docker images. For the build phase, we use the official Node.js image, install the Dojo CLI, copy our project files in, and perform the build. Then, in the configure phase, we copy the files we built during the build phase (--from=builder) and put them in nginx’s static hosting directory. The build stage gets discarded, leaving us with the nginx image plus our copied files. No extra Node.js, node_modules, Dojo CLI, or anything else we only needed to build our app.
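We can spot-check that the build tooling really is gone by poking around inside the final image:

```shell
# Only our compiled assets should be in nginx's hosting directory
docker run --rm my-amazing-app ls /usr/share/nginx/html

# Node.js was never installed in the final stage, so this
# command fails with "executable file not found"
docker run --rm my-amazing-app node --version
```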

If we build our image now (same build command as before), and inspect the size of our final image, we’ll see that it’s now only marginally larger than the nginx base image.

docker image inspect my-amazing-app --format='{{.Size}}'
110671871

Building Faster

If you’re on Windows or Mac, you might have noticed that “Sending build context to Docker daemon” takes longer than we would like, especially if you are building often. By default, Docker sends our entire project directory over to the Docker virtual machine where the actual building of the image takes place. On our computers, this project directory contains things like hidden IDE directories, installed node_modules, and so on. Our Docker image doesn’t need these, and they waste time being copied over with the build context. Similar to a .gitignore file, which tells Git which files to ignore, a .dockerignore file tells Docker which files to exclude from the build context.

Our .dockerignore file will simply look like this:

node_modules/
output/

With our .dockerignore file in place, rebuild the app and you’ll notice that sending the build context now takes only a fraction of the time. You may also notice that installing our node_modules takes a lot longer! Now that we’re no longer sending our node_modules over, the npm i command has some serious work to do!

You may not be surprised to learn that Docker has a solution for this too! After each command in our Dockerfile, Docker caches the result of that command as a layer. If we run the build command twice in a row, the second run finishes very quickly, since all of the commands are already cached. However, if a file involved in a build step changes, the cache for that step is invalidated, along with every step after it. In practical terms, this means that every time we change our application, which changes the result of our COPY . /usr/local/app step, we invalidate every build step that depends on those files, like RUN npm i. This is not ideal, as our dependencies should only be reinstalled when our package.json file changes.

With some small refactors to our Dockerfile, we can use the Docker build cache to our advantage.

FROM node AS builder
RUN npm i -g @dojo/cli

COPY package.json /usr/local/app/
COPY package-lock.json /usr/local/app/

WORKDIR /usr/local/app

RUN npm i

COPY . /usr/local/app
RUN dojo build

FROM nginx:1.17
COPY --from=builder /usr/local/app/output/dist/ /usr/share/nginx/html/

The first thing we do now is copy in our package.json and package-lock.json files. With those in place, we can install our Node.js dependencies. Now, if we change one of our application source files, the npm i cache is not invalidated, because those files have not yet been copied into the image. This means we can change our application source, rebuild our Docker image, and our RUN npm i layer is still valid; Docker skips that step entirely and picks up at the COPY . /usr/local/app step, which in turn invalidates the RUN dojo build step and copies the new files into a fresh nginx image.
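You can see the cache at work by rebuilding after touching a source file (the file name below is just an example from a typical dojo create app layout):

```shell
# First build: every step executes
docker build -t my-amazing-app .

# Change a source file and rebuild: the npm i layer is reused,
# and the rebuild picks up at COPY . /usr/local/app
touch src/main.ts
docker build -t my-amazing-app .
```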

Summary

In this article, we’ve taken a Dojo application and demonstrated how to build and create a Docker image of the application. We then demonstrated how we can use a multi-stage build to heavily streamline our final Docker image. We wrapped up with some simple tweaks to our Dockerfile to show you how we can take advantage of the Docker build cache to drastically increase the performance of our repeat builds.

We did not cover how to deploy your built Docker image, but that can be as simple as pushing the image up to Docker Hub and pulling it down on your web host. With so many web hosts supporting Docker now, deploying your Dojo app has never been easier.
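As a sketch, a Docker Hub based deployment might look like this, with “myuser” standing in for your Docker Hub username:

```shell
# On your build machine: tag and push the image
docker tag my-amazing-app myuser/my-amazing-app:1.0.0
docker login
docker push myuser/my-amazing-app:1.0.0

# On your web host: pull and run it
docker pull myuser/my-amazing-app:1.0.0
docker run -d -p 80:80 myuser/my-amazing-app:1.0.0
```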

You can check out our final Dojo + Docker example project.

Learning More

Need help optimizing your application for production, defining a perfect DevOps workflow for your next JavaScript or TypeScript application, or building an application that is efficient to create, update, deploy, and maintain? If so, contact us to discuss how we can help with your next project!
