From Hardware to Production: A Journey of Microservices Deployment

Introduction
Recently, I had the opportunity to work on a project that involved deploying microservices in a production environment. The project was a significant learning experience.
Cloud providers like AWS, Azure, and GCP offer a wide range of services that help you deploy microservices quickly and efficiently (relatively speaking, given the overhead that comes with building and deploying a complex software architecture). Cloud providers also hide the complexity of selecting the right hardware. For example, deploying MongoDB is straightforward: you can set up a MongoDB cluster with just a few clicks.
However, when I deployed the app, I realised that MongoDB versions above 4.x require a CPU with AVX support, and the app was restricted to hardware without AVX. This was challenging. The only alternative was to downgrade MongoDB to 4.4.18, a version that still supported the required features without AVX. However, running this older version of MongoDB compromises replica set support. A better solution, therefore, is to look for maintainers who support both the hardware and the database features. For example, using the `percona/percona-server-mongodb:5.0` image for MongoDB instead of the official one balances the hardware restriction against the software features this app needs.
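As a rough illustration (the image tag and mount paths here are my assumptions, not the project's actual setup), checking the host for AVX and pinning the Percona build could look like this:

# Check whether the host CPU advertises AVX (Linux; empty output means no AVX)
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u

# Run Percona Server for MongoDB 5.0, whose builds do not require AVX
docker run -d --name mongodb \
  -p 27017:27017 \
  -v mongo-data:/data/db \
  percona/percona-server-mongodb:5.0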
End-to-End Deployment Architecture
The following diagram illustrates the end-to-end deployment architecture of the microservices application:

As the diagram shows, the architecture consists of several components (a trimmed docker-compose sketch follows the list), including:
- AI Object Storage: for storing AI-related objects (not as simple as it sounds).
- AI Model: for managing AI-related activities and their versions.
- Accounts: for storing user account information.
- Call: for video and audio calls.
- Organisation: for managing departments and HR-specific organisational structures.
- Calendar: for managing events and schedules.
- Chat: for real-time messaging and communication.
- File: for file storage and management.
- Notification: for sending notifications and alerts (email and web notifications).
- Integration: for integrating with external services and APIs (GitHub, Gmail, etc.).
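To make that list concrete, here is a heavily trimmed sketch of how a few of these services could be wired together with docker-compose; the image names and network layout are hypothetical placeholders, not the project's real configuration:

# Write a minimal compose file for three of the services above
cat > docker-compose.yml <<'EOF'
services:
  accounts:
    image: registry.example.com/accounts:latest
    networks: [backend]
  chat:
    image: registry.example.com/chat:latest
    networks: [backend]
  mongodb:
    image: percona/percona-server-mongodb:5.0
    volumes: [mongo-data:/data/db]
    networks: [backend]
networks:
  backend: {}
volumes:
  mongo-data: {}
EOF

docker compose up -d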
Meeting Cybersecurity Standards
I also had to ensure that the deployment met the cybersecurity standards required by the project. This involved implementing various security measures, such as:
- OWASP Top 10: Ensuring that the application is secure against the most common vulnerabilities.
- GDPR: Ensuring that the application complies with the General Data Protection Regulation (GDPR) requirements.
- ISO 27001: Ensuring that the application meets the ISO 27001 standard for information security management.
Configuring the system end to end to encrypt data at rest and in transit was a complex task. I had to ensure that all data was encrypted using strong encryption algorithms and that the encryption keys were managed securely.
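For the at-rest side, one approach is a locally managed key file. This is only a minimal sketch: the key path is an assumption, and --enableEncryption is a Percona Server for MongoDB option rather than a stock MongoDB Community one:

# Generate a 32-byte base64 master key and lock down its permissions
openssl rand -base64 32 > /etc/mongodb/encryption.key
chmod 600 /etc/mongodb/encryption.key

# Start mongod with WiredTiger data-at-rest encryption enabled
mongod --enableEncryption --encryptionKeyFile /etc/mongodb/encryption.key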
Nginx was used as a reverse proxy to handle incoming requests and route them to the appropriate microservice. It also provided SSL termination, which ensured that all data in transit was encrypted using TLS.
Configuring Nginx to handle SSL was a bit tricky, as I had to ensure that the SSL certificates were valid and properly configured.
Nginx config snippet:
server {
    listen 443 ssl;
    server_name {{sub-domain}}.{{main-domain:majd.io}}.com;

    # TLS termination: both files must exist and match, or Nginx refuses to start
    ssl_certificate /path/to/certificate.crt;
    ssl_certificate_key /path/to/private.key;

    location / {
        # Forward decrypted traffic to the microservice listening on port 80
        proxy_pass http://localhost:80;

        # Preserve the original host and client details for the upstream service
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The locations of the SSL certificate and private key files must be specified correctly. The `proxy_pass` directive forwards requests to the appropriate microservice running on port 80.
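On the certificate-validity point, a Let's Encrypt workflow is one common way to handle it (this assumes certbot is installed, and the domain below stands in for the real one):

# Obtain a certificate and let certbot wire it into the Nginx config
sudo certbot --nginx -d app.example.com

# Verify what the server actually presents, including the expiry dates
openssl s_client -connect app.example.com:443 -servername app.example.com </dev/null \
  | openssl x509 -noout -dates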
In addition, I ran another layer of Nginx on the server to route incoming requests to the appropriate app (whose own Nginx is its entry point). This outer Nginx listens on port 443 and dispatches each request to the right microservice based on the requested domain name.
The application then takes over the request and returns the appropriate response. This setup allows for a clean separation of concerns: Nginx handles the SSL termination and routing, while the application focuses on business logic.
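As a minimal sketch of that outer layer (the domain, upstream port, and certificate paths are all hypothetical), each app gets its own server block keyed on server_name:

# Hypothetical vhost: send chat.example.com to that app's own Nginx on port 8443
cat > /etc/nginx/conf.d/chat.example.com.conf <<'EOF'
server {
    listen 443 ssl;
    server_name chat.example.com;

    ssl_certificate /etc/ssl/chat.example.com.crt;
    ssl_certificate_key /etc/ssl/chat.example.com.key;

    location / {
        proxy_pass https://127.0.0.1:8443;
    }
}
EOF

# Validate the config before reloading
nginx -t && systemctl reload nginx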
Networking and DNS Configuration
It was not necessary to use Kubernetes, but Docker was essential for running the microservices in isolated containers. Docker allowed me to package each microservice with its dependencies, ensuring that they could run consistently across different environments.
The key takeaway was that Docker is a powerful tool that supports a variety of networking drivers (bridge, host, overlay, macvlan, ipvlan, and none, plus third-party plugins), which isolate the app's network from the host while still allowing containers to communicate with each other. Learning these more advanced Docker techniques enabled me to package and run the app in both development and production environments.
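For instance, a user-defined bridge network gives the containers their own isolated subnet plus built-in DNS, so services can reach each other by container name (the service image below is a hypothetical placeholder):

# Create an isolated bridge network for the app
docker network create --driver bridge app-net

# Attach the database and a service to it
docker run -d --name mongodb --network app-net percona/percona-server-mongodb:5.0
docker run -d --name accounts --network app-net registry.example.com/accounts:latest

# Inside "accounts", MongoDB now resolves by name, e.g. mongodb://mongodb:27017
docker exec accounts getent hosts mongodb  # works if the image ships getent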
Conclusion
Deploying microservices in a production environment can be a complex task, but with the right tools and practices it can be done efficiently. The journey from hardware selection to production deployment involves several steps, including choosing hardware that matches your software's requirements, configuring security measures, and setting up reverse proxies like Nginx.
I kept this short on purpose, but if you have questions or want to know more about specific aspects of the deployment process, feel free to reach out. The experience was a great learning opportunity, and I’m happy to share more details or insights based on my journey.