Introduction to Docker
Máximo Martinez Soria
DevOps
- It works locally.
- It works in staging.
- Which version are you using?
- Did you install service X?
These are some of the questions and comments we usually hear within the team when everyone sets up their environment independently, and as a result, we all end up with different environments. I’m sure that if you have some experience developing software, you’ve run into this situation at least once.
Maintaining environment parity, both on each developer’s machine and across staging, production, and other environments, is extremely important to avoid these kinds of problems, save time, and reduce stress.
Virtualization
One of the solutions we have today to solve these problems is the creation of virtual machines. In simple terms, this consists of “creating” one computer inside another.
Going a bit deeper, virtualization is made possible by a hypervisor: a piece of software that allows a host machine to share its resources virtually with guest machines.
Virtual machines have several disadvantages. Among the most problematic are speed and size.
Since they are literally full computers, we are not simply running our software. We are booting an entire machine and then running our software on top of it. This means we have to wait for the operating system to start and for all required processes to finish before the machine becomes available. Only then can we run our software.
On top of that, we are duplicating many things. The operating system is probably the heaviest component, but it’s not the only one being duplicated. We are also duplicating the interpreters or compilers for the languages we need, as well as all the services required to run a project: things like Python, PHP, Node, Redis, PostgreSQL, and many others.
This makes it common to find virtual machines that we consider lightweight but still take up 2 GB.
As developers, this also means we need much more disk space on our computers to store all the virtual machines for every project we work on.
Containerization
Many years ago, before shipping containers existed, ports suffered from a lack of standardization. Whenever a cargo ship arrived, each product, each part, each spare piece was different: different size, weight, and volume. This made loading and unloading extremely difficult.
Until someone came up with the idea of creating rectangular metal structures and basing the entire logistics process around them. Ships would be designed specifically to transport them. Carts would be designed specifically to move them. And they would all have the same size, weight, and volume.
These metal structures were called containers, and they solved many problems in the logistics industry.
In software, containers follow the same idea. Instead of having many different ways to build, distribute, and run software, we have a single standardized way, designed to run anywhere.
Containers have many advantages over virtual machines. Among them:
- Lightweight. They can reuse files (image layers) that were already downloaded for other containers. This means that if we have three projects based on Ubuntu 16.04, we only need to download that base image once.
- Fast. They share the host machine’s kernel and can reuse files that already exist on it, so there is no operating system to boot before our software runs.
- Portable. They are designed to run anywhere.
- Secure. They are isolated by default. Code running inside a container does not know it is inside one and has no access to anything outside of it, unless we explicitly allow it.
What Is Docker
Docker is a platform that provides tools to build, distribute, and run software. It allows us to run applications in isolated environments, called containers, which include everything the application needs to run.
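As a first sanity check, assuming Docker is installed on your machine, you can ask it to run a tiny test image:

```shell
# Pull and run Docker's official test image; it prints a welcome
# message and exits, confirming the installation works.
docker run hello-world

# List running containers and locally stored images.
docker ps
docker images
```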
Docker images
In Docker, containers are executable instances of an image. An image is a template that contains the instructions needed to create a container.
To better understand this, let’s look at an example.
How do we use an Ubuntu instance?
We need to create a container that includes Ubuntu.
And how do we create that container?
It turns out that there is a hub, a kind of App Store for images, where we can find an Ubuntu image that we can use to create a container.
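Assuming Docker is installed, using that Ubuntu image from the hub looks like this:

```shell
# Download the official Ubuntu image from Docker Hub.
docker pull ubuntu

# Create a container from the image and open an interactive
# shell inside it (-i keeps stdin open, -t allocates a terminal).
docker run -it ubuntu bash
```

When you exit the shell, the container stops, but the image stays cached locally for the next run.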
Images can be based on other images. For example, if we look at the PHP image, we can see that it is based on the Debian image.
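We build on other images the same way. Here is a minimal, hypothetical Dockerfile for a PHP script (the file paths and the `index.php` entry point are placeholders):

```dockerfile
# Our image starts from the official PHP image,
# which is itself built on top of Debian.
FROM php:8.2-cli

# Copy our application code into the image.
COPY . /app
WORKDIR /app

# Default command when a container is created from this image.
CMD ["php", "index.php"]
```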
Docker networks
As mentioned earlier, containers are isolated environments. So much so that they do not even know anything exists outside of them. But what happens if we have multiple services, each running in its own container, that need to communicate with each other?
This is the problem that networks solve.
In simple terms, networks let containers find and talk to each other by name over a private network, while port publishing lets us expose specific container ports on the host so they can be accessed from outside.
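A sketch of how this looks with the CLI; the network name and the `my-app` image are placeholders for your own:

```shell
# Create a user-defined network.
docker network create my-network

# Start two containers attached to it; they can reach each
# other by container name (e.g. the app can connect to "db").
docker run -d --name db --network my-network \
  -e POSTGRES_PASSWORD=secret postgres
docker run -d --name app --network my-network -p 8080:80 my-app

# -p 8080:80 additionally publishes the container's port 80
# on port 8080 of the host, making it reachable from outside.
```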
Docker storage
Docker has two mechanisms for storing files on the host machine so they are not deleted when a container stops running.
There are many reasons why this is necessary, but these two examples help illustrate why this functionality exists:
- The source code files for our app live in a Git repository that is cloned onto our machine. As we work, we do not want to rebuild the image and restart the container every time we make a change.
- When we use databases, the data would be lost every time the container is removed. This would be a huge waste of time, since we would have to reload it every time we run the project.
To solve these and many similar situations, Docker provides the following mechanisms:
Bind mounts
This consists of binding a directory from our computer to a directory inside the container. This binding means both directories are mirrored. Any change made in one will be reflected in the other.
This is very convenient, but it partially breaks the idea that containers should be isolated environments. In fact, it can be somewhat unsafe. When using an image from Docker Hub that we did not create ourselves, we cannot be 100% sure what it will do. Giving it write access to a directory on our computer is not a great idea. Additionally, this forces us to have these directories created locally, which can get messy.
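A bind mount is just a flag on `docker run`; the `/app` path inside the container is an arbitrary choice here:

```shell
# Mirror the current directory into /app inside the container.
# Changes on either side are immediately visible on the other.
docker run -it -v "$(pwd)":/app ubuntu bash
```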
Volumes
Volumes are very similar to bind mounts, but safer. They are fully managed by Docker and live in a part of the host filesystem that is not meant to be touched directly; we normally read and write their contents through a container, which needs the required permissions.
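The database example from earlier, sketched with a named volume (the volume name `pgdata` and the password are placeholders):

```shell
# Create a named volume managed entirely by Docker.
docker volume create pgdata

# Mount it where PostgreSQL stores its data; the data now
# survives container removal and can be reattached later.
docker run -d --name db -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data postgres
```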
Docker compose
With all the tools we’ve seen so far, we are ready to use Docker in our applications. One problem we’ll quickly run into is that starting everything separately is very tedious. If we have our code, a database, a network, and storage, we would need to run four commands every time we want to use our application.
Docker Compose takes a YAML file, typically named docker-compose.yml, where everything needed is defined, and starts all the services for us. All we need to do is write that file and run docker-compose up.
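A hypothetical docker-compose.yml tying the earlier pieces together: one app container and one database, joined by a network Compose creates for us, with a bind mount for code and a named volume for data. The service names, ports, and password are placeholders:

```yaml
services:
  app:
    build: .
    ports:
      - "8080:80"
    volumes:
      - .:/app            # bind mount for live code changes
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

With this file in place, a single `docker-compose up` replaces the four separate commands.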
Conclusion
Docker is very complex and has many features. Even the basic topics covered in this post have much deeper and more advanced concepts behind them. If you’re interested in learning more about Docker, I recommend reading our practical guide on how to create a development environment for Laravel using Docker, where we revisit all the concepts explained here in a hands-on way.
And if you’re like me and enjoy understanding the why behind everything, I recommend reading the official Docker documentation, which includes many interesting guides on how the tool works. I’d love to hear your comments about what interesting things you discovered about Docker.