Scalability Blog

Scaling tips, insights, updates, culture, and more from our Server Experts.

Load Distribution with Nginx and Cloudflare

[Image: nginx-cloudflare-header]

Nginx is a popular reverse proxy that is very efficient at serving static content and forwarding requests to other webservers.  It can provide a much-needed performance boost for websites with many visitors and lots of static content like images, videos, and PDF files, while dynamic content like PHP, Python, or Ruby scripts is passed off to a backend that can interpret it.  This backend is often an Apache webserver, which receives a request for dynamic content, such as a PHP script, and renders it for the user.  When scaling these services, keep in mind that Apache uses a lot of memory to serve such requests, so optimizing content delivery is important.  This is where Nginx is very handy, as it serves static content like images very quickly with a minimal memory footprint.  By combining the two you can serve much more traffic.
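One common way to arrange this split is to let Nginx serve static files straight from disk and pass dynamic requests to the Apache backend. A minimal sketch (the Apache backend on port 8080 and the document root are assumptions, not part of the setup above):

```nginx
# Serve static assets straight from disk with Nginx.
location ~* \.(jpg|jpeg|png|gif|css|js|pdf|mp4)$ {
    root    /var/www/html;
    expires 30d;                      # let browsers cache static files
}

# Hand dynamic requests (PHP here) to an Apache backend.
location ~ \.php$ {
    proxy_pass         http://127.0.0.1:8080;
    proxy_set_header   Host       $host;
    proxy_set_header   X-Real-IP  $remote_addr;
}
```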

If you choose to use Nginx as a reverse proxy, you’ll also be able to customize where content is delivered from.  For example, you’ll be able to serve images from one cluster of servers and videos from another:

[Image: nginx-cloudflare-diagram]

This helps you scale your servers optimally and minimize idle capacity.
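In the proxy configuration, that split might look like the following sketch (the upstream names and addresses here are illustrative, not from the setup above):

```nginx
# Route image requests to one cluster...
location /images/ {
    proxy_pass http://IMAGE-SERVERS;
}

# ...and video requests to another.
location /videos/ {
    proxy_pass http://VIDEO-SERVERS;
}

upstream IMAGE-SERVERS {
    server 192.34.56.40:80;
    server 192.34.56.41:80;
}

upstream VIDEO-SERVERS {
    server 192.34.56.50:80;
}
```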

For our example, suppose we use Nginx on 192.34.56.28.  The DNS record would look like this:

domain.com.            300     IN      A       192.34.56.28

Keeping the TTL low, say 300 seconds (5 minutes), lets you scale your infrastructure horizontally fairly quickly, but those public IPs are best reserved for front-facing Nginx proxies.  These proxies, in turn, can have as many upstream webservers as you’d like handling the actual traffic.  This shields the webservers from direct exposure to DDoS attacks, and also lets you optimize delivery by routing different content to different destinations.

Here is a snippet of Nginx configuration for the main entry point that load-balances across multiple IPs:

location / {
    proxy_pass         http://LOAD-BALANCED-IPS;
    proxy_redirect     off;
    proxy_set_header   Host             $host;
    proxy_set_header   X-Real-IP        $remote_addr;
    proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
}

upstream LOAD-BALANCED-IPS {
    #LBENTRYPOINT
    server 192.34.56.29:80 max_fails=1 fail_timeout=1;
    server 192.34.56.30:80 max_fails=1 fail_timeout=1;
}

This forwards all requests for domain.com arriving at the Nginx proxy (192.34.56.28) to 192.34.56.29 and 192.34.56.30, evenly distributing requests between the two upstream servers.  The best part about this setup is that if an upstream server is down, Nginx will not send visitors to it, so they will not see an error page (a dead upstream would otherwise produce a 502 Bad Gateway).  Nginx periodically retries the server to see if it is alive, and once that upstream is back online, traffic to it resumes.
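The upstream block also supports a few parameters that become handy once you have more than two backends (a sketch; the weights and the extra IP are illustrative):

```nginx
upstream LOAD-BALANCED-IPS {
    #LBENTRYPOINT
    server 192.34.56.29:80 weight=2 max_fails=1 fail_timeout=1;  # receives roughly 2x the traffic
    server 192.34.56.30:80 max_fails=1 fail_timeout=1;
    server 192.34.56.31:80 backup;   # used only when the others are unavailable
}
```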

Placing a tag like “#LBENTRYPOINT” lets you write a script that inserts or deletes a line based on the IP address of your webserver.  Command-line tools like sed can accomplish this on Linux.

Once our SSH key is installed on the Nginx proxy, we can make a script that inserts a new upstream server with the IP 192.34.56.31 into the configuration on our Nginx proxy (192.34.56.28):

sed -i '/#LBENTRYPOINT/a\server 192.34.56.31:80 max_fails=1 fail_timeout=1;' /etc/nginx/nginx.conf && service nginx reload

This assumes your configuration file is /etc/nginx/nginx.conf; on Nginx compiled from source it could instead live under /usr/local/nginx or /usr/share/nginx.  Make sure to tailor the path to your own system.
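The add and remove commands can be wrapped in a small helper (a sketch only; the config path, the script name, and the reload command are assumptions to adjust for your distribution):

```shell
#!/bin/sh
# upstream-ctl: add or remove an upstream 'server' line in the Nginx config.
# Sketch: NGINX_CONF and the reload command are assumptions; adjust them.
NGINX_CONF="${NGINX_CONF:-/etc/nginx/nginx.conf}"

upstream_ctl() {
  action="$1"; ip="$2"
  case "$action" in
    add)
      # Insert the new server line right after the #LBENTRYPOINT tag.
      sed -i "/#LBENTRYPOINT/a\\server ${ip}:80 max_fails=1 fail_timeout=1;" "$NGINX_CONF"
      ;;
    remove)
      # Escape the dots so the IP matches literally, not as a regex wildcard.
      escaped=$(printf '%s' "$ip" | sed 's/\./\\./g')
      sed -i "/^server ${escaped}:80/d" "$NGINX_CONF"
      ;;
    *)
      echo "usage: upstream_ctl add|remove <ip>" >&2
      return 1
      ;;
  esac
  nginx -t && service nginx reload
}
```

Running `upstream_ctl add 192.34.56.31` then reproduces the sed one-liner above, and `upstream_ctl remove 192.34.56.31` takes the server back out of rotation.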

After the line is inserted, it would also be prudent to check for and remove any duplicate entries: the load balancer distributes traffic evenly among all ‘server’ entries in the list, so a duplicated entry would receive a double share of requests.
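One quick way to deduplicate while preserving line order is a classic awk one-liner (a sketch; it touches the whole file rather than just the upstream block, so review the result before moving it into place, and the path is an assumption):

```shell
# Print each line only the first time it is seen, preserving order.
# CONF is an assumption; point it at your own nginx.conf.
CONF="${CONF:-/etc/nginx/nginx.conf}"
awk '!seen[$0]++' "$CONF" > "${CONF}.dedup" 2>/dev/null || true
# Review ${CONF}.dedup, then move it into place and reload:
# mv "${CONF}.dedup" "$CONF" && service nginx reload
```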

The following command would remove this upstream server (192.34.56.31) from Nginx:

sed -i '/192\.34\.56\.31/d' /etc/nginx/nginx.conf && service nginx reload


With these simple tools you can now automate the process of cloning a VM and placing it into the proxy server’s upstream rotation, scaling out the capacity behind a single proxy entry point.

To add additional proxy servers and scale horizontally, we need a DNS manager with an API.  Cloudflare offers just such a solution: click Account and copy your API key.

Cloudflare allows you to modify DNS records with three API commands: rec_new, rec_edit, and rec_delete.  Their documentation covers each in greater detail.

For a quick example, we will create a new subdomain for our images using Cloudflare’s API.  We’ll call this subdomain images.domain.com and give it a 300 second TTL (5 minutes):

[root@web ~]# curl "https://www.cloudflare.com/api_json.html?a=rec_new&tkn=62a946da58115cc89cff61f84b4a6c8f401b3&email=root@domain.com&z=domain.com&type=A&name=images&ttl=300&content=192.34.56.28"

{"request":{"act":"rec_new","a":"rec_new","tkn":"62a946da58115cc89cff61f84b4a6c8f401b3",
"email":"root@domain.com","z":"domain.com","type":"A","name":"images","ttl":"300",
"content":"192.34.56.28"},"response":{"rec":{"obj":{"rec_id":"32696770","rec_tag":
"b469d45498fc38d7792a46bafcff0136","zone_name":"domain.com","name":"images.domain.com",
"display_name":"images","type":"A","prio":null,"content":"192.34.56.28",
"display_content":"192.34.56.28","ttl":"300","ttl_ceil":86400,"ssl_id":null,
"ssl_status":null,"ssl_expires_on":null,"auto_ttl":0,"service_mode":"1",
"props":{"proxiable":1,"cloud_on":1,"cf_open":0,"ssl":0,"expired_ssl":0,
"expiring_ssl":0,"pending_ssl":0,"vanity_lock":0}}}},"result":"success","msg":null}

Adding more proxies is as simple as running the same command with the IP address of the new proxy.  This gives you round-robin DNS load balancing.  You can also control how requests are handled by using the service_mode parameter: setting service_mode to 1 routes requests through Cloudflare’s ‘orange’ cloud of CDN proxies, while setting it to 0 points the A record directly at the IP address you specified.
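Editing the existing record follows the same pattern with rec_edit.  A sketch (the token and email are placeholders; rec_edit needs the rec_id returned by rec_new, and the exact parameter set here is an assumption based on that legacy v1 API, so check Cloudflare’s documentation before relying on it):

```shell
# Sketch: repoint the images A record at a new proxy via rec_edit.
TKN="YOUR_API_KEY"        # placeholder
EMAIL="root@domain.com"   # placeholder
REC_ID="32696770"         # rec_id from the rec_new response above
NEW_IP="192.34.56.31"

URL="https://www.cloudflare.com/api_json.html?a=rec_edit&tkn=${TKN}&email=${EMAIL}&z=domain.com&id=${REC_ID}&type=A&name=images&ttl=300&service_mode=1&content=${NEW_IP}"
# Run the request once the placeholders are filled in:
# curl "$URL"
echo "$URL"
```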

[Image: a-record-proxy]

Now we can modify this record and add new Nginx proxies to scale it horizontally.  The entire process can be automated using Bourne shell, PHP, Python, Ruby, and so on.