Docker from Scratch, Part 1: Installing Docker and Running Your First Container


For the last few months I’ve been on a quest to understand something new to me. Since the mid-2000s, I’ve been using virtualization technologies to run classroom environments, test software, and run local web servers for development. First it was VMware Player, then VMware Server, and today everything I virtualize runs on the open source VirtualBox. So when I heard about containerization, I was stunned.

“How can you start tens of virtualized instances in only seconds? It’s unreal!” And yet, the demos didn’t lie. I wanted to dig in myself, but having discovered Vagrant, I wanted to wait until there was a similar ecosystem for containers. Today, there’s no excuse; Docker is ready.

I’d been using Docker at work for months, but how it worked and how to build your own containers remained a mystery to me. Unlike virtualization, there’s more than one key concept to learn. So one day, I just decided to sit down and dig in.

Installing Docker

Docker is easiest to install on a machine running Linux. There are fewer dependencies, and it’s really Docker’s native environment. Consult your distro’s documentation on how to install Docker.
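If you just want something quick to try, Docker is packaged in most distros’ repositories, though the package name varies. On a Debian- or Ubuntu-based system, the steps look something like this (the package name and group setup may be different on your distro):


$ sudo apt-get install docker.io
$ sudo usermod -aG docker $USER   # optional: run docker without sudo (log out and back in first)
$ docker version                  # confirm the client can talk to the daemon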

If you’re using Docker on Mac or Windows, you’re not out of luck! You can still run Docker, but the installation process is a bit more complicated and involves a few extra steps. First, download and install VirtualBox. Then go to https://www.docker.com/ and click “Get Started”. Their documentation is well written and will get you set up.

Once that’s done, we’re ready to start running containers.

Running Your First Container

There are many containers ready to download and run online. Instead of making you hunt for them and install them yourself, Docker assumes you have an Internet connection and maintains a listing of images for use -- the Docker Registry. To download one of these pre-made images, you issue a “pull” command. It’s rather like pulling a repository from GitHub.


$ docker pull debian

You might notice Docker pulling what looks like multiple commits for the same image; those are the image’s layers. Once the pull finishes, you can confirm the image is available locally and then start it up with the “run” command.
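To check what images you have on hand, Docker provides an “images” command (the exact columns in the output depend on your Docker version):


$ docker images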


$ docker run -i -t debian /bin/bash

If everything goes well, you’ll be dropped into a bash shell. Huzzah! You’ve just run your first container. Now go ahead and poke around. One thing that’s particularly interesting is to list the contents of the root directory. When you do, you’ll notice that it looks very much like a basic Debian installation. There’s a typical Linux directory structure, and many utilities are available just as you’d expect.
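For example, from the shell inside the container (the hostname in your prompt will be your own container’s ID, and the exact files you see depend on the image version):


root@0c6142a4488a:/# ls /
root@0c6142a4488a:/# cat /etc/debian_version
root@0c6142a4488a:/# whoami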

This isn’t much different from any time you’ve installed and run a Linux VM. But it downloaded and started up faster than a normal VM, and it’s actually taking up less memory and disk space on your system. WHAT IS THIS MAGIC!?

First, let’s break down that docker command a bit. “run” instructs Docker to run a container. The “-i” means run interactively: it keeps standard input open so we can type into the container. We don’t need to run a container interactively, and often we won’t. The “-t” is related to the “-i”: it instructs Docker to allocate a terminal for the session. You can run the command without the “-t”, but the output would look a little strange. The next part, “debian”, is the image we want to run a container from -- in this case, the Debian image we pulled earlier. The last bit is the command we want to run inside the container.
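To see what a non-interactive run looks like, hand the container any command; it runs, prints its output, and the container exits (a trivial example, nothing Debian-specific about it):


$ docker run debian echo "hello from a container"
hello from a container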

Listing Containers

Go ahead and exit the interactive session like you would any shell: press Control-D or enter “exit”. Once you’re certain you’re back in your host OS, let’s take a look at all of our running Docker containers by issuing the “docker ps” command.


$ docker ps
CONTAINER ID       IMAGE        COMMAND            CREATED             STATUS

Wait, “ps”? Like the list processes command on Linux? Yeah, but with Docker it only lists running containers. That’s kind of a weird thing to call it, but we’ll explain that later. You may notice the output of the command is empty. The hell? Isn’t our container still running like any VM would? The answer is no. Let’s list all our containers, running or stopped, by using the “-a” switch on the “ps” command:


$ docker ps -a
CONTAINER ID       IMAGE        COMMAND            CREATED             STATUS                     
0c6142a4488a       debian       "/bin/bash"        8 seconds ago       Exited

There’s our container! But why did it stop? If you’ve used VMs before, you’re used to the idea that they work just like Linux running on physical hardware. You need to boot them up and shut them down. Containers aren’t like that. In fact, containers are a subtly different beast entirely from VMs.

"A Container Is Just a Process"

I’ve heard this before from others trying to explain to me just what in the heck a container is. To say a container is “just a process” only mystifies it further. Yet it gets at the key reason why containers are so much faster than VMs.

Containers aren’t VMs. When you run a VM, you are essentially emulating all the physical hardware necessary for the guest OS to run. The guest OS itself has its own kernel, background processes, and services necessary to make a functional Linux environment. This requires a lot of overhead to run, and makes VMs memory-, disk-, and CPU-intensive. Containers, on the other hand, don’t run an operating system of their own -- they share the host’s running kernel.
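On a Linux host you can see the shared kernel for yourself: ask a container for its kernel version and it reports the host’s, because there is no other kernel in play. Both commands below print the same release string (whatever your host happens to be running; on Mac or Windows the “host” is actually the VirtualBox VM Docker uses):


$ uname -r
$ docker run debian uname -r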

When we ran our first container, we specified what to run in the container -- /bin/bash, the Bash shell. This was the only thing running in that container. There’s no additional Linux kernel, no background processes, no services. Just the one process we need. Containers are better thought of not as VMs, but as applications.

A container runs a single Linux process. While the container doesn’t have a kernel of its own, it does have all the files, libraries, and utilities necessary for the process to run. It’s as if we pulled the kernel out of a Linux install and boxed up everything else as a specially packaged runtime. This is why the container stopped when we quit the shell. The container is still its own little world, in a way, but without the virtual hardware or OS baggage to slow things down. It’s also why, when we listed the root directory, it looked very much like a full Linux OS: all the files are there to support the Linux userland, but none of the kernel.
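One way to make “a container is just a process” concrete is to start a long-running container in the background and look at it from both sides. The sleep command and the five-minute duration here are arbitrary, and you’ll need to substitute your own container’s ID where indicated (on Mac or Windows, the last command would have to be run inside the VirtualBox VM that Docker uses, not on your desktop):


$ docker run -d debian sleep 300     # -d runs the container in the background
$ docker ps                          # grab the container ID from this listing
$ docker top <container-id>          # the container's entire contents: one sleep process
$ ps aux | grep "sleep 300"          # the very same process, visible from the host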

Summary

So far we’ve installed Docker and started our first container. We learned that a container runs only a single Linux process, rather than an entire OS. Next time, we’ll start building a repeatable Docker container by writing a Dockerfile.

Read Part 2.