Modularizing the Laravel Monolith

About the Monolith

They are straightforward. Everyone is familiar with them. They are developer-friendly. They are referred to as Monoliths.

Everyone who has experience working with popular frameworks like Laravel has built a monolithic application. The majority of these frameworks are designed to guide you in building a monolithic application. And there is nothing wrong with that. Monoliths have many advantages and are a great choice for smaller applications.

Monolithic applications operate as a cohesive entity with all their components consolidated in one location and interdependent. This simplifies the development process as no complex pre-planning is necessary. Additionally, testing and deployment of these applications become more straightforward.

But issues arise when the application grows more complex. The interdependence of system components increases, resulting in what is commonly known as "a big ball of mud". At times, certain components experience a spike in workload, and to handle the increased demand we have to scale the entire system.

Microservices to the rescue

Microservices are the architectural approach most widely adopted as an alternative to the monolith.

Microservices address the complexity problem found in monolithic applications by dividing the application into multiple smaller parts. Each part represents a significant business concept and functions as an independent self-deployable unit.

This enables us to independently scale each service. If a particular part of the system experiences a surge in traffic, we no longer have to scale the entire system.

Because every part of the system is now loosely coupled, we no longer have to be concerned about using a unified technology stack. We can have multiple teams developing services using different technologies, which can accelerate the development process.

This also means that we no longer have a single point of failure. If one part of the system fails, the other parts can still operate without any problems.

This is still not a magic solution.

Microservices have their own disadvantages. They introduce complexity when it comes to managing interactions between services. Developers have to design and maintain communication protocols, deal with network delays, and ensure data consistency.

The best of both worlds

[Image: modular monolith overview]

We've seen that both approaches have their advantages and disadvantages. So the question is: can we combine the strengths of both and find a middle-ground solution?

It would be great if we could keep the simplicity of the monolith by shipping a single deployment artifact. We don't want the complications that come with network calls between different parts of the application, and we'd like to avoid the complexity of testing and operating a distributed system. However, we still want to divide the application into modules that can be developed independently. Each module should own its data, and multiple teams should be able to work in parallel because changes in one module have limited impact on the others.

Welcome to the Modular Monolith.

Modularizing the Laravel monolith

Now, let's explore how we can modularize the Laravel Monolith. It's important to remember that Laravel is a mature framework with a well-defined structure based on the MVC architecture. While deconstructing that structure, my aim is to achieve a high level of modularity while retaining as much of Laravel's existing power as possible. This means I won't strictly adhere to all architectural guidelines when creating my modules. The goal of this post is not to implement a strict clean architecture or blindly follow Laravel's standard architecture.

Database setup

Before we begin building a modular monolith, it's important to clarify one thing: the data from each module MUST be separated!

There are two potential solutions:

  1. Utilizing separate databases for each module. 

  2. Using a single database that contains multiple schemas, with one schema designated for each module.

In my case, I chose the second approach of having a single database with several schemas. I'm using PostgreSQL because it allows for the creation of multiple schemas within a database. A helpful tip in Laravel when dealing with multiple schemas is to create a separate database connection for each schema. While this isn't required, it can prevent a lot of headaches, especially when managing migrations and validating data.

Here's an example of how we can establish a connection for a PostgreSQL schema:

// config/database.php

'user' => [
    'driver' => 'pgsql',
    'url' => env('DB_URL'),
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PORT', '5432'),
    'database' => env('DB_DATABASE', 'laravel'),
    'username' => env('DB_USERNAME', 'root'),
    'password' => env('DB_PASSWORD', ''),
    'charset' => env('DB_CHARSET', 'utf8'),
    'prefix' => '',
    'prefix_indexes' => true,
    'search_path' => 'user',
    'sslmode' => 'prefer',
],

Now in our Model or Migration classes we can say:

protected $connection = 'user';
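
For instance, here's a hedged sketch of a migration that runs on that connection and also creates the PostgreSQL schema itself, since Laravel won't do that automatically (the table layout is just an example):

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    // Run this migration against the "user" connection defined above.
    protected $connection = 'user';

    public function up(): void
    {
        // Laravel won't create the PostgreSQL schema for us, so do it explicitly.
        DB::connection($this->connection)->statement('CREATE SCHEMA IF NOT EXISTS "user"');

        // With search_path set to "user", this table lands in that schema.
        Schema::connection($this->connection)->create('users', function (Blueprint $table) {
            $table->id();
            $table->string('name');
            $table->string('email')->unique();
            $table->timestamp('email_verified_at')->nullable();
            $table->string('password');
            $table->rememberToken();
            $table->timestamps();
        });
    }

    public function down(): void
    {
        Schema::connection($this->connection)->dropIfExists('users');
    }
};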

We'll talk a bit more about data handling in the following chapters, especially in the Infrastructure chapter.

Folder structure

When we perform a fresh installation of a Laravel application, we will come across a folder structure that looks like this:

app/
├── Console/
├── Exceptions/
├── Http/
│   ├── Controllers/
│   ├── Middleware/
│   └── Requests/
├── Models/
├── Policies/
├── Providers/
├── Services/
└── ...
bootstrap/
config/
database/

In essence, the files are organized based on their types. The structure of folders primarily depends on technical aspects and terminology, rather than being influenced by features or business concepts.

If we decide to use a modular approach, our folder structure might look like this:

Modules/
├── User/
│   ├── Application/
│   ├── Domain/
│   ├── Infrastructure/
│   ├── IntegrationEvents/
│   └── Presentation/
├── Flight/
│   ├── Application/
│   ├── Domain/
│   ├── Infrastructure/
│   ├── IntegrationEvents/
│   └── Presentation/
└── Booking/
    ├── Application/
    ├── Domain/
    ├── Infrastructure/
    ├── IntegrationEvents/
    └── Presentation/
bootstrap/
config/
database/

Some developers prefer to structure modules using various architectures, such as clean architecture, vertical slice architecture, or standard Laravel architecture. Regardless of the approach, the ultimate goal remains the same: to organize code based on context instead of technical considerations.

To be able to autoload functionality from the newly created "Modules" directory, we need to update the composer.json file like this:

{
    "autoload": {
        "psr-4": {
            "App\\": "app/",
            "Modules\\": "Modules/",
            "Database\\Factories\\": "database/factories/",
            "Database\\Seeders\\": "database/seeders/"
        }
    }
}

Then we can run:

composer dump-autoload

Now, this allows us to load classes like this:

use Modules\User\Domain\Role\Models\Role;

Module organization

In the example above, I utilized clean architecture along with some principles of Domain Driven Design.

Domain

This directory includes the fundamental business logic and domain models. The primary components are referred to as Entities. Entities typically hold the data from the tables represented by the model. They handle the essential logic, can interact with other domain models, and can dispatch domain events.

This is what the structure of the Domain directory could look like:

Domain/
├── Role/
└── User/
    ├── Concerns/
    ├── Enums/
    ├── Events/
    ├── Models/
    └── ValueObjects/

In Laravel, the closest equivalent to an entity is an Eloquent Model. However, there is a significant difference. Eloquent Models are closely tied to database interaction logic, which is considered problematic from the perspective of clean architecture: it effectively means the domain layer depends on an outer layer, the infrastructure layer. Entity objects, on the other hand, focus on holding data without knowing how to retrieve it, while also handling some fundamental operations related to the data they represent.

I intentionally decided to concentrate on the domain layer first. Laravel is an MVC framework with a clear folder structure and established concepts. Although the Eloquent model clashes with certain clean architecture principles, switching it out for basic entity objects would lead to considerable loss of functionality. In this post, I plan to make the most of Laravel's features while keeping the application modular. The main aim here is not to strictly follow clean architecture but to highlight the application's modularity.

This is an example of the User model:

<?php
namespace Modules\User\Domain\User\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Relations\BelongsToMany;
use Illuminate\Foundation\Auth\User as Authenticatable;
use Illuminate\Notifications\Notifiable;
use Modules\User\Domain\Role\Models\Role;
use Modules\User\Domain\User\Concerns\HasRoles;

class User extends Authenticatable
{
    use HasFactory, Notifiable, HasRoles;

    protected $connection = 'user';

    protected $fillable = [
        'name',
        'email',
        'password',
    ];

    protected $hidden = [
        'password',
        'remember_token',
    ];

    protected function casts(): array
    {
        return [
            'email_verified_at' => 'datetime',
            'password' => 'hashed',
        ];
    }

    public function roles(): BelongsToMany
    {
        return $this->belongsToMany(Role::class);
    }
}

It's quite simple: it holds data, interacts with other domain models, and handles core business logic.
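
The ValueObjects directory holds small, immutable wrappers around primitive values. EmailValueObject and PasswordValueObject appear later in the command handler and the controller; here's a hedged sketch of what EmailValueObject could look like (only from() and toNative() are relied on by the examples):

<?php

namespace Modules\User\Domain\User\ValueObjects;

use InvalidArgumentException;

final class EmailValueObject
{
    private function __construct(
        private readonly string $value
    ) {}

    public static function from(string $value): self
    {
        // Validate at construction time so an invalid email can never exist.
        if (! filter_var($value, FILTER_VALIDATE_EMAIL)) {
            throw new InvalidArgumentException("Invalid email: {$value}");
        }

        return new self(strtolower($value));
    }

    public function toNative(): string
    {
        return $this->value;
    }
}

PasswordValueObject can follow the same pattern, exposing the toHash() method the command handler calls.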

Another important aspect to highlight in the domain directory are the domain events. These events indicate significant occurrences or changes in the state of a domain model. They serve as a way to communicate crucial updates within the system, fostering loose coupling between different components of the application.

<?php

namespace Modules\User\Domain\User\Events;

use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;

class UserRegisteredDomainEvent
{
    use Dispatchable, SerializesModels;

    public function __construct(
        public string $id,
        public string $occurredOnUtc,
        public string $name,
        public string $email,
    ) {}
}

A domain event can be fired from the domain model itself or, as in our case, from the application layer where our use cases live.

Application

The application layer context acts as the bridge between the presentation and the domain layer. Its function includes overseeing the application's use cases, coordinating different activities, and ensuring that the correct domain logic is applied.

I'm using CQRS in this example. The primary objective of CQRS is to separate concerns, which allows us to optimize the read and write models independently. If we later decide to improve performance by separating the read and write storage, this pattern will make that easier. CQRS also simplifies testing by keeping the logic for writing data apart from the logic for reading it. Another beneficial aspect is its connection to event sourcing, where write operations handle commands that produce events, while the read side rebuilds the current state from those captured events.

<?php

namespace Modules\User\Application\User;

use App\Bus\Command;
use Modules\User\Domain\User\ValueObjects\EmailValueObject;
use Modules\User\Domain\User\ValueObjects\PasswordValueObject;

class RegisterUserCommand extends Command
{
    public function __construct(
        private readonly string $name,
        private readonly EmailValueObject $email,
        private readonly PasswordValueObject $password
    ) {}

    public function getName(): string
    {
        return $this->name;
    }

    public function getEmail(): EmailValueObject
    {
        return $this->email;
    }

    public function getPassword(): PasswordValueObject
    {
        return $this->password;
    }
}

<?php

namespace Modules\User\Application\User;

use App\Bus\Query;

class FindUserQuery extends Query
{
    public function __construct(
        private readonly int $id
    ) {}

    public function getId(): int
    {
        return $this->id;
    }
}

As we can see, commands and queries are data holders in essence. The real action happens in their respective handlers:

<?php

namespace Modules\User\Application\User;

use App\Bus\CommandHandler;
use Illuminate\Support\Facades\Hash;
use Modules\User\Domain\User\Enums\RoleEnum;
use Modules\User\Domain\User\Events\UserRegisteredDomainEvent;
use Modules\User\Domain\User\Models\User;

class RegisterUserCommandHandler extends CommandHandler
{
    public function handle(RegisterUserCommand $command)
    {
        $user = User::query()->create([
            'name' => $command->getName(),
            'email' => $command->getEmail()->toNative(),
            'password' => $command->getPassword()->toHash(),
        ]);

        $user->assignRole(RoleEnum::USER);

        UserRegisteredDomainEvent::dispatch(
            $user->getKey(),
            now()->toDateTimeString(),
            $user->name,
            $user->email,
        );

        return $user->getKey();
    }
} 

Here we can see the UserRegisteredDomainEvent being dispatched. This event can be used throughout the module; if we want to share it with other modules, we have a different concept known as IntegrationEvents. We will go through this in more depth a bit later.

<?php

namespace Modules\User\Application\User;

use App\Bus\QueryHandler;
use Modules\User\Domain\User\Models\User;

class FindUserQueryHandler extends QueryHandler
{
    public function handle(FindUserQuery $query): ?User
    {
        return User::query()
            ->with('roles')
            ->findOrFail($query->getId());
    }
}

Infrastructure

The infrastructure layer primarily manages database access and third-party services. In this layer, you can find database migrations, seeders, and factories, as well as configuration files, middleware, logging, and more.

Infrastructure/
├── Storage/
├── Providers/
└── Persistence/
    ├── Factories/
    ├── Migrations/
    └── Seeders/
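
One thing worth noting about factories: because the models no longer live under App\Models, Laravel's default factory discovery won't find them in the module. Here's a hedged sketch of what a factory in Persistence/Factories could look like (the fields simply mirror the User model above):

<?php

namespace Modules\User\Infrastructure\Persistence\Factories;

use Illuminate\Database\Eloquent\Factories\Factory;
use Illuminate\Support\Str;
use Modules\User\Domain\User\Models\User;

class UserFactory extends Factory
{
    // Point the factory at the module's model explicitly.
    protected $model = User::class;

    public function definition(): array
    {
        return [
            'name' => fake()->name(),
            'email' => fake()->unique()->safeEmail(),
            'email_verified_at' => now(),
            'password' => 'password',
            'remember_token' => Str::random(10),
        ];
    }
}

On the model side, HasFactory still needs to be told about this factory, for example by overriding newFactory() to return UserFactory::new(), since the default convention only looks under the Database\Factories namespace.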

An essential component of every module is the Service Provider. This is where we handle the loading of all migrations, routes, and other resources, making them accessible within the framework. Additionally, the Service Provider connects Commands and Queries with their corresponding handlers in our system.

<?php

namespace Modules\User\Infrastructure\Providers;

use App\Bus\CommandBus;
use App\Bus\QueryBus;
use Illuminate\Support\ServiceProvider;
use Modules\User\Application\User\FindUserQuery;
use Modules\User\Application\User\FindUserQueryHandler;
use Modules\User\Application\User\RegisterUserCommand;
use Modules\User\Application\User\RegisterUserCommandHandler;

class UserServiceProvider extends ServiceProvider
{
    public function register(): void
    {
        //
    }

    public function boot(): void
    {
        $this->loadMigrationsFrom(__DIR__ . '/../Persistence/Migrations');
        $this->loadRoutesFrom(__DIR__ . '/../../Presentation/User/Routes/user.php');

        $commandBus = app(CommandBus::class);

        $commandBus->register([
            RegisterUserCommand::class => RegisterUserCommandHandler::class,
        ]);

        $queryBus = app(QueryBus::class);

        $queryBus->register([
            FindUserQuery::class => FindUserQueryHandler::class,
        ]);
    }
}
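
The App\Bus classes are not part of Laravel itself; they are small helpers this setup assumes. Here's a hedged sketch of what the CommandBus could look like, using Laravel's container to resolve handlers (the QueryBus can mirror it with ask() instead of dispatch(), and Command/Query can be simple abstract base classes):

<?php

namespace App\Bus;

use Illuminate\Contracts\Container\Container;
use RuntimeException;

class CommandBus
{
    /** @var array<class-string, class-string> map of command class => handler class */
    private array $handlers = [];

    public function __construct(
        private readonly Container $container
    ) {}

    public function register(array $map): void
    {
        $this->handlers = array_merge($this->handlers, $map);
    }

    public function dispatch(Command $command): mixed
    {
        $handlerClass = $this->handlers[$command::class]
            ?? throw new RuntimeException('No handler registered for ' . $command::class);

        // Resolve the handler from the container so its dependencies are injected.
        return $this->container->make($handlerClass)->handle($command);
    }
}

For the handler map to survive across resolutions, the buses should be bound as singletons (for example in AppServiceProvider). UserServiceProvider itself also needs to be registered with the framework, e.g. in bootstrap/providers.php on Laravel 11 or config/app.php on older versions.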

Presentation

Finally, the presentation layer exposes a public API. In this instance, we are using Laravel's controllers and routes to make the functionality accessible.

Presentation/
└── User/
    ├── Controllers/
    ├── FormRequests/
    └── Routes/
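
The Routes directory contains the module's route file that UserServiceProvider loads. A minimal sketch of what Routes/user.php could look like (the URI prefix and middleware are assumptions for this example):

<?php

use Illuminate\Support\Facades\Route;
use Modules\User\Presentation\User\Controllers\RegisterUserController;

Route::prefix('api/users')->middleware('api')->group(function () {
    Route::post('/', RegisterUserController::class);
});

A form request then validates the incoming payload before it reaches the controller:
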
<?php

namespace Modules\User\Presentation\User\FormRequests;

use Illuminate\Foundation\Http\FormRequest;

class RegisterUserRequest extends FormRequest
{
    public function rules(): array
    {
        return [
            'name' => 'required|string',
            'email' => 'required|email|unique:user.users,email',
            'password' => 'required|string|min:6',
        ];
    }
}

<?php

namespace Modules\User\Presentation\User\Controllers;

use App\Bus\CommandBus;
use App\Bus\QueryBus;
use App\Http\Controllers\Controller;
use Illuminate\Http\JsonResponse;
use Modules\User\Application\User\FindUserQuery;
use Modules\User\Application\User\RegisterUserCommand;
use Modules\User\Domain\User\ValueObjects\EmailValueObject;
use Modules\User\Domain\User\ValueObjects\PasswordValueObject;
use Modules\User\Presentation\User\FormRequests\RegisterUserRequest;

class RegisterUserController extends Controller
{
    public function __construct(
        protected CommandBus $commandBus,
        protected QueryBus $queryBus
    ) {}

    public function __invoke(RegisterUserRequest $request): JsonResponse
    {
        $id = $this->commandBus->dispatch(
            new RegisterUserCommand(
                name: $request->validated('name'),
                email: EmailValueObject::from(
                    $request->validated('email')
                ),
                password: PasswordValueObject::from(
                    $request->validated('password')
                )
            )
        );

        $user = $this->queryBus->ask(
            new FindUserQuery(
                id: $id
            )
        );

        return response()->json($user);
    }
}

Module communication

Communication between modules is entirely asynchronous and relies on integration events. These events map domain events and make them accessible to other modules. Typically, this is achieved through an Event Bus, which acts as a message broker. In this instance, I've used Laravel's built-in events system, but for production applications I would suggest a more robust setup with Redis or RabbitMQ as the broker.

<?php

namespace Modules\User\IntegrationEvents;

use Illuminate\Foundation\Events\Dispatchable;

class UserRegisteredIntegrationEvent
{
    use Dispatchable;

    public function __construct(
        public string $id,
        public string $occurredOnUtc,
        public string $name,
        public string $email,
    ) {
    }
}

This event can be dispatched from a command handler or from a domain event listener/handler:

<?php

namespace Modules\User\Application\User;

use Illuminate\Support\Facades\Event;
use Modules\User\Domain\User\Events\UserRegisteredDomainEvent;
use Modules\User\IntegrationEvents\UserRegisteredIntegrationEvent;

class UserRegisteredDomainEventListener
{
    public function __construct()
    {
        //
    }

    public function handle(UserRegisteredDomainEvent $event): void
    {
        Event::dispatch(new UserRegisteredIntegrationEvent(
            $event->id,
            $event->occurredOnUtc,
            $event->name,
            $event->email,
        ));
    }
}

When working with modules or microservices, one of the key questions is how to handle synchronous communication. It's essential to favor asynchronous communication through events and message brokers, and our modules should be designed to convert synchronous calls into asynchronous ones. For instance, imagine we have three modules: User, Flight, and Booking. There will be cases where the Flight or Booking module needs user data from the User module. Instead of querying the User model directly, those modules can listen for user events from the User module and keep that data synchronized locally.

Some may argue that this approach creates data duplication, and while that's partially true, it's not necessarily a drawback. The user object does not have to be the same in the Booking and Flight modules, nor does it need to follow the same structure. In the Booking module, the user could be viewed as the Payer or Booker, while in the Flight module, the user might be considered a Passenger.

[Image: module async communication]
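
As a hedged sketch of that idea, the Booking module could listen for the User module's integration event and maintain its own local copy of the user data it cares about (the Booker model and listener name are assumptions made up for this example):

<?php

namespace Modules\Booking\Application\Listeners;

use Illuminate\Contracts\Queue\ShouldQueue;
use Modules\Booking\Domain\Booker\Models\Booker;
use Modules\User\IntegrationEvents\UserRegisteredIntegrationEvent;

class SyncBookerOnUserRegistered implements ShouldQueue
{
    public function handle(UserRegisteredIntegrationEvent $event): void
    {
        // Store only the fields this module needs, in its own schema,
        // shaped the way the Booking context thinks about a user.
        Booker::query()->updateOrCreate(
            ['user_id' => $event->id],
            ['name' => $event->name, 'email' => $event->email],
        );
    }
}

The listener can be registered in the Booking module's service provider, e.g. Event::listen(UserRegisteredIntegrationEvent::class, SyncBookerOnUserRegistered::class); because it implements ShouldQueue, the work is pushed to the queue and handled asynchronously.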

If we really need synchronous communication, there are a few options to consider, such as gRPC calls exposed by a service in the presentation layer that acts as a public API. However, we should always prioritize asynchronous communication when we can.

Because our modules utilize asynchronous communication, we are implementing eventual consistency. For instance, when a user updates some data on a page, that data is immediately saved in one module. However, another module is also monitoring that data. Consequently, the user interface will update right away with the data from the first module, but the information from the second module won't appear until later, or until a refresh is done. There are several potential solutions, such as displaying placeholder data for the second piece of information on the user interface, introducing a delay in updating the data (after the second module has been updated), or achieving immediate consistency by merging the modules.

Key Takeaways

  1. The goal of developing a modular monolith is to keep the simplicity of a traditional monolith by using a single deployment artifact. At the same time, it should enable a logical separation that allows us to remove any module from the application when necessary and scale it independently as a microservice.

  2. Data isolation is essential. We should never store all the data in one place. Instead, we should separate it by using multiple databases or by creating a single database with various schemas.

  3. Take enough time to clearly define modules and their boundaries. If the modules interact with each other too much, it could be a sign that there are too many of them, so consider merging some.

  4. Focus on implementing asynchronous communication through the use of events and message brokers.

  5. Keep in mind eventual consistency and ensure that the user interface and experience remain unaffected by it.