By Ali Ehsan & Zahra Mahmoodzadeh
Project Overview
This project is a fullstack chat application designed for real-time communication between users. It allows users to register, log in, send and receive messages, create channels, and upload and manage files such as profile pictures. The app features channel-based conversations, token-based authentication for access control, and persistent storage of app data (users, channels, messages, and more) in MySQL, while user-uploaded files are stored in MinIO. Message broadcasting and user presence (online, idle, offline) are implemented with Server-Sent Events (SSE).
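As a rough illustration of the SSE approach, a minimal Express endpoint might look like the following (the route path, event names, and broadcast helper are hypothetical stand-ins, not the app's actual code):

```ts
import express from "express";

const app = express();

// Keep every open SSE connection so new messages can be pushed to all of them.
const clients = new Set<express.Response>();

app.get("/api/events", (req, res) => {
  res.set({
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  res.flushHeaders();
  clients.add(res);
  req.on("close", () => clients.delete(res)); // drop clients that disconnect
});

// Push a named event (e.g. "message" or "status") to every connected client.
export function broadcast(event: string, data: unknown) {
  for (const client of clients) {
    client.write(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`);
  }
}
```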
The frontend is developed with React, styled with Tailwind CSS, and built on Shadcn/ui components. It includes a Storybook setup for isolated UI component development and testing. TypeScript is used across the codebase for type safety and maintainability. Code quality is enforced with Husky pre-commit hooks, which automatically run ESLint and Prettier before each commit.
App services are containerized and orchestrated with Docker Compose, while GitHub Actions automate both testing and deployment workflows.
App Architecture
The application is structured into four main services (backend, frontend proxy, database, and object storage) that work together via Docker Compose and are built with multi-stage Docker builds to keep the production images small.
1. Node-Express (Backend)
- A Node.js Express server connects to the MySQL database via the internal Docker network using the hostname mysql.
- Uses environment variables to access MySQL and MinIO.
2. Client-Proxy (Frontend Proxy + Static File Server)
- Serves the Vite-built static frontend.
- Proxies:
- “/api/” requests to the backend (node-express:3000).
- “/files/” requests to MinIO (minio:9000) with a rewritten path (see the proxy sketch after this list).
- Built with a multi-stage Docker build: the first stage builds the client, and the second serves the output with a lightweight Node.js HTTP server.
3. MySQL (Relational Database)
- A MySQL 9.2 service that persists users, channels, messages, and other app data.
4. MinIO (Object Storage)
- A MinIO server is used to manage and serve user-uploaded files (like profile pictures).
- Accessed internally by the backend and externally through the proxy.
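As a sketch of how the client-proxy's routing could be wired: the hostnames and ports below match the internal Docker network described above, while the use of Express with the http-proxy package and the bucket path "uploads" are assumptions for illustration.

```ts
import express from "express";
import httpProxy from "http-proxy";

const app = express();
const proxy = httpProxy.createProxyServer();

// Forward API calls to the backend service over the internal Docker network.
app.use("/api", (req, res) => {
  req.url = req.originalUrl; // restore the /api prefix Express stripped off
  proxy.web(req, res, { target: "http://node-express:3000" });
});

// Forward file requests to MinIO, rewriting /files/... to the bucket path.
app.use("/files", (req, res) => {
  req.url = "/uploads" + req.url; // hypothetical bucket name "uploads"
  proxy.web(req, res, { target: "http://minio:9000" });
});

// Everything else is served from the Vite-built static bundle.
app.use(express.static("dist"));

app.listen(4173);
```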
From the outside, the application is reachable only via port 4173 (client-proxy). The other ports (3000 for the backend, 9000/9001 for MinIO, and 3306 for MySQL) are used internally between services. In local development, ports 5173 (Vite dev server) and 3001 (backend dev) are used instead.
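A minimal docker-compose.yml reflecting this layout might look roughly like the following (the build paths, credentials, and environment variable names are illustrative, not the project's actual file):

```yaml
services:
  node-express:
    build: ./api
    environment:
      MYSQL_HOST: mysql          # internal Docker hostname of the database
      MINIO_HOST: minio          # internal Docker hostname of object storage
    depends_on: [mysql, minio]

  client-proxy:
    build: ./client-proxy
    ports:
      - "4173:4173"              # the only port exposed to the outside
    depends_on: [node-express, minio]

  mysql:
    image: mysql:9.2
    environment:
      MYSQL_DATABASE: chat
      MYSQL_ROOT_PASSWORD: example

  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
```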
CI/CD with GitHub Actions
The application uses GitHub Actions to automate both the Continuous Integration (CI) and Continuous Delivery (CD) processes using Docker and GitHub Container Registry (GHCR).
Continuous Integration (CI)
On every pull request, a GitHub Actions workflow runs to:
- Detect Changes in the Project
The workflow uses the paths-filter action to detect changes in the api (backend), client (frontend), and client-proxy directories; based on the detected changes, only the relevant jobs are triggered (see the workflow sketch below).
- Test Affected Parts
It runs TypeScript compilation, linting, formatting checks, and the relevant tests for each affected part: unit tests for the api directory, a build of the client, and smoke tests for both the api and client-proxy directories.
- Run End-to-End (E2E) Tests
Playwright end-to-end tests execute only if both the frontend and backend tests succeed.
The continuous integration workflow ensures the code is consistently tested and verified before being merged.
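A trimmed-down sketch of how such change detection can be set up with the dorny/paths-filter action (the filter names, paths, and test commands are assumptions):

```yaml
on: pull_request

jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      api: ${{ steps.filter.outputs.api }}
      client: ${{ steps.filter.outputs.client }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            api:
              - 'api/**'
            client:
              - 'client/**'

  api-tests:
    needs: changes
    if: needs.changes.outputs.api == 'true'   # run only when the api changed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
        working-directory: api
```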
Continuous Delivery (CD)
When changes are pushed to the main branch, another GitHub Actions workflow is triggered to:
- Check out the latest code.
- Build Docker images for the node-express (API) and client-proxy (frontend) services.
- Push these images to GitHub Container Registry (GHCR).
This ensures that both frontend and backend services are continuously integrated, tested, containerized, and ready for deployment with the latest changes.
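A condensed sketch of such a delivery workflow using the official Docker actions (the build context and image tag shown for the API would be repeated analogously for client-proxy; exact paths and tags are assumptions):

```yaml
on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write            # required to push to GHCR
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: ./api
          push: true
          tags: ghcr.io/${{ github.repository }}/node-express:latest
```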
Automated pipeline triggered by GitHub Actions for both Continuous Integration and Continuous Delivery.
Test Suite Strategy
The backend test suites are designed to verify the logic of various routes in a fast and isolated environment.
Each test suite targets a specific group of API routes, such as authentication or messaging, and validates both access control and functional behavior, including user registration, login, posting messages, and retrieving data from channels.
The tests are written using Vitest, following a clear and consistent structure with describe, it, and expect blocks.
A separate SQLite database is used specifically for testing, providing a lightweight and efficient way to mimic the real database while ensuring isolation from production or development data. Migrations and seed files are run before the tests to prepare the necessary test data.
Each suite uses beforeAll to start the application and seed the database, guaranteeing a consistent initial state, and afterAll to close the application and destroy the database connection so every run cleans up after itself.
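Put together, a suite skeleton might look like this (the createApp and db helpers are hypothetical stand-ins, and the migration and seed calls assume a Knex-style query builder):

```ts
import { beforeAll, afterAll, describe, it, expect } from "vitest";
import request from "supertest";
// Hypothetical helpers: an Express app factory and the SQLite test database.
import { createApp } from "../src/app";
import { db } from "../src/db";

let app: ReturnType<typeof createApp>;

beforeAll(async () => {
  await db.migrate.latest(); // bring the SQLite schema up to date
  await db.seed.run();       // insert the test fixtures
  app = createApp();
});

afterAll(async () => {
  await db.destroy();        // release the connection so the run exits cleanly
});

describe("POST /api/login", () => {
  it("rejects requests without credentials", async () => {
    const res = await request(app).post("/api/login").send({});
    expect(res.status).toBe(400);
  });
});
```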
Most test environments run entirely locally without Docker or separate containers to keep the setup simple and fast. However, in specific cases like testing MinIO, containers are used—this will be discussed further in the next section.
Reliable File Uploads with MinIO: Testing Strategies, Performance Tradeoffs, and Lessons Learned
Challenge with Testing Uploads
MinIO is used in our production environment as a cloud storage service mimicking AWS S3. This adds complexity to testing file uploads, as it involves interacting with an actual external service. While we use mocks to isolate certain unit tests, we also ensure that direct integration with MinIO is tested to validate real upload behavior.
The application further depends on specific environment configurations to run these tests correctly. For example, environment variables such as MINIO_ENABLE_TEST must be set during testing to conditionally enable MinIO-related functionality.
Solution to the Testing Challenge
To address these challenges, we implemented a combination of mocking, asynchronous simulation, and tailored configuration for test environments. The solution includes logic to initialize the MinIO client only when necessary—specifically when MINIO_ENABLE_TEST is set to true—thus avoiding redundant setup during tests.
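A simplified version of that guard might look like this (the host and credential variable names are assumptions; MINIO_ENABLE_TEST matches the flag described above):

```ts
import { Client } from "minio";

let minioClient: Client | null = null;

// Create the client lazily, and only when tests have explicitly opted in.
export function getMinioClient(): Client {
  const isTest = process.env.NODE_ENV === "test";
  if (isTest && process.env.MINIO_ENABLE_TEST !== "true") {
    throw new Error("MinIO is disabled in this test environment");
  }
  if (!minioClient) {
    minioClient = new Client({
      endPoint: process.env.MINIO_HOST ?? "minio",
      port: Number(process.env.MINIO_PORT ?? 9000),
      useSSL: false,
      accessKey: process.env.MINIO_ACCESS_KEY ?? "",
      secretKey: process.env.MINIO_SECRET_KEY ?? "",
    });
  }
  return minioClient;
}
```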
Running Integration Tests Using Test Containers
For integration testing, we deliberately use Testcontainers to spin up a temporary MinIO instance. This provides a realistic, isolated environment that closely mirrors production without complex manual setup, and it ensures the integration tests validate real interactions with a MinIO service, improving test reliability and confidence.
Although the container-based approach introduces some overhead during test startup—particularly on the first run—the tradeoff is worthwhile for the authenticity and consistency it brings to our test environment. Testcontainers handle the lifecycle of the MinIO instance automatically using Docker under the hood, requiring no manual service management, even within CI pipelines such as GitHub Actions.
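In a Vitest setup, starting the throwaway instance can be as simple as the following sketch using the generic Testcontainers API (the environment variable names mirror the hypothetical client factory above):

```ts
import { GenericContainer, StartedTestContainer } from "testcontainers";
import { beforeAll, afterAll } from "vitest";

let container: StartedTestContainer;

beforeAll(async () => {
  // Start a throwaway MinIO instance; Testcontainers manages Docker for us.
  container = await new GenericContainer("minio/minio")
    .withCommand(["server", "/data"])
    .withExposedPorts(9000)
    .start();

  // Point the app at the container's dynamically mapped port.
  process.env.MINIO_ENABLE_TEST = "true";
  process.env.MINIO_HOST = container.getHost();
  process.env.MINIO_PORT = String(container.getMappedPort(9000));
}, 120_000); // allow extra time for the image pull on the first run

afterAll(async () => {
  await container.stop();
});
```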
The minioListener establishes a real-time stream to capture file upload events during both production and testing. While there is a slight latency when establishing the connection, the listener remains lightweight and robust in practice, ensuring reliable event capture without significant performance impact.
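Conceptually, the listener builds on the MinIO client's bucket-notification stream; a stripped-down version might look like this (the bucket name and the handling logic are placeholders):

```ts
import { Client } from "minio";

export function minioListener(client: Client, bucket: string) {
  // Subscribe to object-created events for the whole bucket.
  const listener = client.listenBucketNotification(bucket, "", "", [
    "s3:ObjectCreated:*",
  ]);

  listener.on("notification", (record) => {
    // e.g. notify connected clients that a new file is available
    console.log("uploaded:", record.s3.object.key);
  });

  return listener; // callers can invoke listener.stop() on shutdown
}
```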
To maintain fast and reliable unit tests, the MinIO client is conditionally initialized only when needed. In environments where MinIO interaction is unnecessary—such as pure unit tests—the setup is bypassed entirely. This isolation keeps unit tests quick and focused, while still supporting full-stack validation during integration tests when necessary.
Gotchas and Potential Pitfalls
Several potential pitfalls could impact both performance and functionality.
Error Handling in Event Listeners
The minioListener function logs errors when issues occur, but there is no mechanism for recovering from them: a network failure or API issue could cause important events to be missed. Robust error recovery, such as retries with backoff or fallback actions, is needed to handle failures effectively and avoid dropping critical events.
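One way to close that gap is to wrap the listener from the earlier sketch in a reconnect loop with exponential backoff (whether the notification stream emits an "error" event in every failure mode should be verified against the MinIO client version in use):

```ts
import { Client } from "minio";

function listenWithRetry(client: Client, bucket: string, attempt = 0): void {
  const listener = client.listenBucketNotification(bucket, "", "", [
    "s3:ObjectCreated:*",
  ]);

  listener.on("notification", (record) => {
    attempt = 0; // events are flowing again, so reset the backoff
    console.log("uploaded:", record.s3.object.key);
  });

  listener.on("error", (err) => {
    console.error("minio listener error:", err);
    listener.stop(); // tear down the failed poller before reconnecting
    const delay = Math.min(30_000, 1_000 * 2 ** attempt);
    setTimeout(() => listenWithRetry(client, bucket, attempt + 1), delay);
  });
}
```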
Conclusion
The MinIO-based file upload solution proves effective in many ways but continues to present challenges that must be addressed to optimize both performance and reliability. Performance-related issues, such as startup delays, asynchronous latency, and handling of concurrent uploads, remain critical obstacles, especially in real-time use cases. Optimizing the system to handle these challenges is key to ensuring its robustness, especially when scaling to handle larger volumes of file uploads.
Overall, through containerization with Docker Compose, cloud-like storage with MinIO, automated CI/CD workflows with GitHub Actions, and a strong testing strategy, this project lays a solid foundation for future growth, feature expansion, and production readiness.