The easiest Docker development workflow for multi-system Laravel projects
Intro
While working on multi-system projects, you have to take care of various factors that influence the final goal of the system. One of the most important is time. It is a common misconception that in software development (in our case, web development) writing code makes up the bulk of the work. Naturally, this varies from project to project, but DevOps activities often eat up (sometimes needlessly) a huge chunk of the allocated time. In this post, we talk about preparation, or, as we call it, the "onboarding" phase: choosing the right tools and settings for the development environment.
What is a local development environment?
During web development, except in some rare cases, we hardly ever work directly on production code (the code accessed by customers and users). To customers, the inner logic of a system is not important; they want to see the product and the solutions. For developers, on the other hand, it is crucial to be able to reproduce the production environment on their own computers. Since the dawn of information technology, this has been one of the most researched and discussed topics of development.
How to guarantee that what works locally will work the same way in the production environment?
To address this challenge, professional developers started to map the live system onto the local environment. That meant replicating the setup of the server and the web server that runs on it, along with the structure of the database engine, file system, etc. We call the replica the development environment, and there are many solutions for its local setup and fine-tuning.
Setup steps and experiences for simple multitenant systems
We talk about multitenant systems when one codebase serves several users on a server. Webcapital typically works with such systems. In these cases, a project is made up of two, sometimes three or more cooperating components. The most common components are:
- Source code (describes the business logic)
- Database (persistent information storage)
- File-based contents (images, documents) - optional
The two biggest players in the server operating system race are Linux and Windows. Without exception, we deploy on servers running Linux. This does not mean, however, that developers cannot work on their projects on Windows or macOS machines: the most common development tools are available for all three operating systems, and which one to use is usually a matter of developer preference. In this post, we will not go into the specifics of each system, but it is worth mentioning that even in web development, problems may occur due to vendor-specific solutions.
Currently available solutions to manage multi-system projects in a local environment
With the rise of microservice architectures, it has become common for smaller subsystems to support the main system's operation. In smaller development teams, it is therefore usual for members to work on all participating codebases, which is why it is important to see at what points the subsystems influence each other's operation. In the past, managing such subsystems was difficult due to the small number of available software solutions. Nowadays, solutions like virtualization and Docker have reduced the time needed to set up a development environment. Let's take a look at the options available to a developer who has to work on a multi-system project and wants to manage all subsystems on one machine while keeping the server settings.
Operating system-specific solutions
If we take nginx and Apache as examples of popular web servers, one can install versions of both on all three mentioned operating systems. At Webcapital, we use PHP as our weapon of choice for development. Fortunately, for both web servers and all three operating systems, we can find a suitable build or installable version of the language. Developers usually work with ready-made packages that contain the web server, the programming language, the database engine, and in some cases even a mail client; good examples are Laragon or XAMPP. Their advantage is that developers can install and configure them according to their own tastes. Their disadvantage is that if a project has special environment dependencies, these must be set up again in the local environment for each new installation. Moreover, if these dependencies are not documented, developers can spend valuable hours configuring a new installation.
Virtualization
Before the rise of Docker, the solution for advanced professionals was to run complete servers in a local environment with the help of virtualization; applications designed for this use case are VirtualBox, Multipass, and Vagrant. Their advantage is that the configured systems can easily be moved and copied, but their initial setup takes a lot of time, and some edge cases require advanced knowledge, so we do not recommend them to juniors and trainees. They are also more resource-hungry, so older machines often had to be upgraded to run them without problems.
A universal solution: Docker
Docker itself would be worth countless separate posts, but we cover only two aspects:
- like the already mentioned XAMPP and Laragon packages, it can be installed on all operating systems,
- and by using it, the description of the necessary dependencies becomes part of our project. In this way, the runtime environment is documented (using the Dockerfile).
Docker also runs our application in a virtualized environment. The difference is that it uses so-called containers, smaller virtualized units, and it creates and coordinates the environmental conditions these need. An important difference compared to the previous virtualization solutions is that Docker can be set up and started with a few commands.
A simple example: we run a command using PHP 8.1 to install the dependencies of a Laravel project (the command comes from the Laravel Sail documentation):
docker run --rm \
    -u "$(id -u):$(id -g)" \
    -v "$(pwd):/var/www/html" \
    -w /var/www/html \
    laravelsail/php81-composer:latest \
    composer install --ignore-platform-reqs
By running the command, Docker provides us with all the basic programs and modules for our PHP script to work, including the operating system.
Although the operation is simpler than manually installing and configuring a virtual server, writing individual commands can be considered an advanced task.
By default, the unique environment settings of Docker-based projects are described in Dockerfiles; the coordination of these and other services is done with the docker-compose command.
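As a minimal sketch of the former (the base image and extension below are assumptions, not taken from a specific project), a Dockerfile for a PHP application might look like this:

# Base image with PHP 8.1 and the Apache web server
FROM php:8.1-apache

# Install a PHP extension the application depends on
RUN docker-php-ext-install pdo_mysql

# Copy the application source into the web root
COPY . /var/www/html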
The docker-compose tool
docker-compose is an ideal tool for reusing Docker commands and services and for making them documentable and versionable. The example above can be rewritten as a simple configuration file in YAML format. For example, running a Node command in such a docker-compose.yml file is as simple as this:
version: '3'
services:
    node:
        image: node:latest
        command: "node -e 'console.log(123)'"
The example above is of course not very detailed. It only illustrates how easy it is to run a JavaScript command even if Node.js is not installed on our machine: with the help of the docker-compose.yml file, Docker pulls a Node.js image and runs the command in a container. Most importantly for developers, the docker-compose.yml file can be part of our project from now on, and we can add it to a version control system (e.g. Git). In this way, the runtime environment of our project is documented and the entire development team can work with the same environment.
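As a minimal usage sketch, either of these standard commands will execute it (Docker pulls the node image on first run):

docker-compose up               # starts the service and prints 123
docker-compose run --rm node    # one-off run, removes the container afterwards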
Laravel Sail
Most of Webcapital's projects use the Laravel framework, so we were particularly pleased with Laravel 8, which introduced the Laravel Sail package.
Sail takes the functionality of docker-compose one step further: put simply, it provides a ready-to-run docker-compose.yml file for Laravel projects.
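For reference, getting Sail into an existing Laravel application takes only a few documented commands (assuming Composer is available on the host):

composer require laravel/sail --dev   # add Sail as a dev dependency
php artisan sail:install              # generate the docker-compose.yml
./vendor/bin/sail up -d               # start the environment in the background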
Setup steps and issues for multi-system Laravel projects with Docker
After learning about Docker and docker-compose, let's examine what problems can arise when setting up a Laravel project built from the cooperation of several subsystems.
A simple example: suppose we have a CRM product. Subscribing users can use the service as they wish, but we verify the validity of subscriptions in a separate system. In our example, for simplicity, we run the two systems on the same server, but with separate databases.
Thus, if we use docker-compose, we will have two docker-compose.yml files, each describing the environment required for its application.
This gives us a new difficulty: Docker groups services into networks and organizes their communication within them. We need to synchronize the contents of the two docker-compose.yml files so that our applications can communicate with each other while running.
In our case, the configuration needs to meet three conditions:
- The two codebases must be able to work together: scripts of one application must be able to call scripts of the other via the CLI.
- The two codebases must have access to their own databases.
- The two projects should also work independently, without having to change docker-compose.yml.
The relevant part of the two docker-compose.yml files in the default situation:
# CRM: (containing folder name: crm)
version: '3'
...
services:
    laravel.test:
        networks:
            - sail
        volumes:
            - '.:/var/www/html'
    mysql:
        networks:
            - sail
networks:
    sail:
        driver: bridge
...
# Subscriptions: (containing folder name: subs)
version: '3'
...
services:
    laravel.test:
        networks:
            - sail
        volumes:
            - '.:/var/www/html'
    mysql:
        networks:
            - sail
networks:
    sail:
        driver: bridge
...
The two configuration snippets are identical, but with these settings Docker will create two separate networks, one for each system. It will also generate the names of the created resources from the specified name and the name of the folder containing the project. You can check the existing networks with the docker network ls command:
webcapital@webcapital:~$ docker network ls
NETWORK ID     NAME        DRIVER    SCOPE
0ff495b6d149   crm_sail    bridge    local
3fa732044205   subs_sail   bridge    local
Webcapital's solution for managing a multi-system project with Laravel Sail
Based on the example above, the first condition is very easy to fulfill: Docker allows you to attach a file-system folder to two containers at the same time, and the direction of the connection does not matter here. We will connect the subscription management system to the CRM.
It is important that after every modification we restart the containers with docker-compose up, or with sail up in the case of Laravel Sail.
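In practice, that means re-running one of these commands after each edit:

docker-compose up -d      # recreate the containers in the background
./vendor/bin/sail up -d   # the same, via the Sail wrapper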
# CRM: (containing folder name: crm)
version: '3'
...
services:
    laravel.test:
        volumes:
            - '.:/var/www/html'
            - '/absolute/path/to/subs:/var/www/subs'
...
# Subscriptions: (containing folder name: subs)
version: '3'
...
services:
    laravel.test:
        volumes:
            - '.:/var/www/html'
...
In this way, the two codebases are able to communicate inside the laravel.test container of the CRM.
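For example (the artisan invocation is only an illustration, assuming the subscription project's dependencies are installed), a script of the subscriptions codebase can now be called from the CRM's running container:

docker-compose exec laravel.test php /var/www/subs/artisan list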
But the second condition is not yet fulfilled: the application that manages the subscriptions won't be able to access its own database this way, since it is on a different network. If we pack all the services into one network, we must ensure that there are no name or port collisions, and each codebase must know, for example, which database to connect to:
# CRM: (containing folder name: crm)
version: '3'
...
services:
    laravel.test-crm:
        networks:
            - sail
    mysql_crm:
        networks:
            - sail
        ports:
            - '3307:3306'
networks:
    sail:
        name: 'common-network'
...
# Subscriptions: (containing folder name: subs)
version: '3'
...
services:
    laravel.test-subs:
        networks:
            - sail
    mysql_subs:
        networks:
            - sail
        ports:
            - '3306:3306'
networks:
    sail:
        name: 'common-network'
...
With this change, each service will run under its own name. If we check the networks now, we can see that both projects share a single common network (the two networks created by the earlier setup may still be listed until removed):
webcapital@webcapital:~$ docker network ls
NETWORK ID     NAME             DRIVER    SCOPE
0ff495b6d149   crm_sail         bridge    local
3fa732044205   subs_sail        bridge    local
caed1f17fc3e   common-network   bridge    local
(We can delete the existing networks with the docker network rm command.)
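For example, the two networks left over from the original setup can be removed in one step:

docker network rm crm_sail subs_sail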
The new DB service names (mysql_crm and mysql_subs) must be set among the environment variables of our applications. With that, both sides will be able to reach their own database from whichever container they run in.
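In a Laravel application, this comes down to the standard DB_* variables. A sketch of the two .env files (values are illustrative):

# CRM project's .env
DB_HOST=mysql_crm
DB_PORT=3306

# Subscriptions project's .env
DB_HOST=mysql_subs
DB_PORT=3306

Note that inside the shared network both databases are reached on container port 3306; the 3306/3307 mappings above only matter when connecting from the host.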
We only have one task left. Staying with the above example, if we only want to work on the CRM system and the subscription application's code is not present (e.g. because a smaller task was delegated to a junior developer), docker-compose will throw an error because it cannot find the specified /absolute/path/to/subs folder.
We can overcome this with docker-compose's default values for environment variables. Laravel Sail will pass these environment variables through, so we can describe the path to the subscription application with a variable, e.g. SUBS_APP_VOLUME.
After modifying the docker-compose.yml file, the volume will fall back to a default value even if the environment variable is not present:
# CRM: (containing folder name: crm)
version: '3'
...
services:
    laravel.test:
        volumes:
            - '.:/var/www/html'
            - '${SUBS_APP_VOLUME-/dev/null}:/var/www/subs'
...
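Developers who do have the subscriptions codebase then only need to set the variable, for example in the .env file next to the CRM's docker-compose.yml (docker-compose reads this file for variable substitution):

SUBS_APP_VOLUME=/absolute/path/to/subs

Without the variable, the /dev/null fallback mounts a harmless placeholder, and the CRM project starts on its own.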
The many possible combinations of settings can add complexity to your project. That is why we recommend starting with the smallest possible set.
This post was a brief introduction to managing multi-system projects with Laravel Sail. There are much more robust solutions, but for simpler projects, docker-compose can solve even complex configuration tasks. If you have any comments or suggestions, feel free to write to contact@webcapital.hu!