Cheaper than built-in: a load balancer for your Heroku pipeline applications, and a way to host CPU-bound workloads.
A load balancer is a device that distributes network or application traffic across a cluster of servers. It sits between the client and the server farm, accepting incoming traffic and spreading it across multiple back-end servers. By balancing requests across multiple servers, a load balancer reduces the load on each individual server and prevents any one application server from becoming a single point of failure, improving overall application availability and responsiveness.
Heroku has a built-in load balancer:
Heroku’s HTTP routers distribute incoming requests for your application across your running web dynos, so scaling an app’s capacity to handle web traffic means scaling the number of web dynos. A random selection algorithm is used to balance HTTP requests across web dynos, and this routing handles both HTTP and HTTPS traffic.
But the built-in load balancer can't help if your application is CPU-bound.
If you are processing individual requests slowly due to CPU or other shared resource constraints (such as database), then optimizing concurrency on the dyno may not help your application’s throughput at all.
The pros of this solution:
If you buy the Performance plan for your application (meaning 4 dynos, i.e. 4 server instances), it will cost you $100 per month. With this solution, you can instead buy 4 independent applications (the Hobby plan, $7 per month per instance), set up identical software on each, and put a load balancer in front of them (also $7 per month). That costs $35 ($7 × 5), roughly 3 times cheaper,
Heroku cannot fully serve you in this case. You can buy the Performance plan, but it will not increase your CPU performance enough to justify paying a few hundred dollars for it. But if you create tens of instances running identical software and put a load balancer in front of them, it may solve your problem.
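The cost comparison works out as follows (a quick sketch using the prices quoted in this article; the Heroku plan prices themselves are taken as given, not verified here):

```shell
# Cost of the do-it-yourself setup: N Hobby back ends plus one Hobby load balancer.
HOBBY_PRICE=7    # $ per Hobby dyno per month, as quoted above
BACKENDS=4       # number of independent back-end applications

TOTAL=$(( HOBBY_PRICE * (BACKENDS + 1) ))   # +1 for the load balancer dyno
echo "Total: \$${TOTAL}/month"              # versus $100/month for 4 dynos on one app
```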
And the cons of this solution. Keep in mind that it requires multiple, technically independent applications, which do not behave as a single application:
the Heroku platform does not operate them as a single app, which could cause downtime during deployments or daily dyno cycling,
when the single load balancer's Hobby dyno cycles (restarts) each day or on deployment, the entire app will go offline temporarily,
To try it out, press the Deploy to Heroku button below.
Open the API Key section in your Heroku account settings, reveal the key, and paste it into the corresponding field. Do the same with your pipeline identifier (e.g. f64cf79b-79ba-4c45-8039-57c9af5d4508), the field marked by the red arrow at the top.
Press Deploy app. The deployment process will start immediately, as illustrated below.
Open the logs of the back-end applications (run heroku logs --tail -a application-name in the terminal), and send a request to the load balancer application. As a result, the load balancer will proxy your request to each back-end server in round-robin fashion (one by one, in order).
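Round-robin selection itself is easy to sketch. Here is a minimal bash illustration (the back-end host names are made up, and real URLs would come from the Heroku API): a counter advances on every request and wraps around the list of back ends.

```shell
# Hypothetical back-end pool; in the real project these URLs are fetched from Heroku.
BACKENDS=(app-1.herokuapp.com app-2.herokuapp.com app-3.herokuapp.com)
i=0

# Pick the next back end in order, wrapping around the list.
next_backend() {
  BACKEND="${BACKENDS[i % ${#BACKENDS[@]}]}"
  i=$(( i + 1 ))
}

next_backend; R1=$BACKEND
next_backend; R2=$BACKEND
next_backend; R3=$BACKEND
next_backend; R4=$BACKEND   # wraps back around to the first back end
echo "$R1 $R2 $R3 $R4"
```

A real proxy would forward the incoming request to $BACKEND rather than just printing it, but the selection logic is exactly this.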
The load balancer takes a pipeline identifier (the PIPELINE_IDENTIFIER environment variable) to know which pipeline's applications to balance across; using the Heroku API key (the HEROKU_API_KEY environment variable), the URLs of those applications are fetched.
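For reference, the Heroku Platform API exposes a pipeline's apps through its pipeline couplings. The commented curl calls below use the documented endpoints; the canned response and sed extraction underneath are just an offline sketch of pulling out each app's URL.

```shell
# Real endpoints (require a valid key; shown only as comments here):
#   curl -s "https://api.heroku.com/pipelines/$PIPELINE_IDENTIFIER/pipeline-couplings" \
#     -H "Accept: application/vnd.heroku+json; version=3" \
#     -H "Authorization: Bearer $HEROKU_API_KEY"
#   # ...then, for each coupling's app id, /apps/<app-id> returns the app's web_url.

# Offline sketch: extract web_url from a canned /apps/<app-id> response.
response='{"name":"backend-1","web_url":"https://backend-1.herokuapp.com/"}'
url=$(printf '%s' "$response" | sed -n 's/.*"web_url":"\([^"]*\)".*/\1/p')
echo "$url"
```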
Clone the project with the following command:
$ git clone https://github.com/dmytrostriletskyi/heroku-load-balancer.git
$ cd heroku-load-balancer
To build the project, use the following command:
$ docker build -t heroku-load-balancer . -f Dockerfile
To run the project, use the following command. It will start the server and occupy the current terminal session:
$ docker run -p 7979:7979 -v $PWD:/heroku-load-balancer \
      -e PORT=7979 \
      -e HEROKU_API_KEY='8af7dbb9-e6b8-45bd-8c0a-87787b5ae881' \
      -e PIPELINE_IDENTIFIER='f64cf79b-79ba-4c45-8039-57c9af5d4508' \
      --name heroku-load-balancer heroku-load-balancer
If you need a bash shell inside the container, use the following command:
$ docker exec -it heroku-load-balancer bash
Clean all containers with the following command:
$ docker rm $(docker ps -a -q) -f
Clean all images with the following command:
$ docker rmi $(docker images -q) -f