I have a server with 4 vCPUs and 8 GB of RAM hosting a REST API in Node. I wanted to use the CPUs more efficiently, so since I am already evaluating Docker, I built a Node image with my API, started 4 containers, and configured my Nginx server to load balance between those four containers. The problem is that performance does not improve; in fact, it got slightly worse.
Nginx configuration:
upstream app {
    server 192.168.1.12:3000;
    server 192.168.1.12:3001;
    server 192.168.1.12:3002;
    server 192.168.1.12:3003;
}

server {
    location / {
        proxy_pass http://app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Startup of the containers (one per host port, each with a unique name):
docker run -dit --name node1 -p 3000:3000 node-api:0.4
docker run -dit --name node2 -p 3001:3000 node-api:0.4
docker run -dit --name node3 -p 3002:3000 node-api:0.4
docker run -dit --name node4 -p 3003:3000 node-api:0.4
P.S.: I tried using the Node cluster module and the performance improvement is noticeable. The servers running inside the containers do not use this module.
I recommend creating a docker-compose.yml file containing the manifest to run your Node container, and also adding an nginx container whose port 80 points to your Node containers (as you are doing now). The right way to balance these containers is to deploy with Docker Swarm, which is designed to run containers in production environments. You can add a deploy key to your usual docker-compose.yml file, telling it how many replicas of the container you want running. Example:
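A minimal sketch of such a file, assuming the image name from the question; the service names are placeholders:

```yaml
version: "3.8"
services:
  api:
    image: node-api:0.4    # the asker's image
    deploy:
      replicas: 4          # Swarm keeps 4 copies of this container running
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
```

Inside the stack, nginx can simply proxy_pass to http://api:3000; Swarm's built-in DNS load balances requests across the 4 replicas, so the manual upstream list of IP:port pairs is no longer needed.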
Deploy with the following command:
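Assuming a stack name of app (any name works) and that the host is not yet part of a swarm:

```shell
# one-time setup: turn this host into a single-node swarm
docker swarm init

# deploy the stack defined in docker-compose.yml
docker stack deploy -c docker-compose.yml app

# verify that the 4 replicas are running
docker service ls
```

Note that `docker stack deploy` reads the deploy key that plain `docker-compose up` ignores.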
And you will have 4 containers correctly load balanced.