
Cloudflare Real IP

How is your website routed when behind Cloudflare?

When your website traffic is routed through Cloudflare, it acts as a reverse proxy. This allows Cloudflare to speed up page load times by routing packets more efficiently and caching static resources (images, JavaScript, CSS, etc.). As a result, when responding to requests and logging them, your origin server sees a Cloudflare IP instead of the user’s real IP address.

Example: say you have a WordPress website running behind NGINX and you are facing an issue with spam. You would want to see the IP addresses of the users who are spamming your website. Normally, without Cloudflare, this is straightforward: you just look in the NGINX access log file and get the client IP addresses. But when the website is behind Cloudflare, you’ll see Cloudflare’s IPs instead of the users’ real IPs.

The following diagram illustrates the different ways that IP addresses are handled with and without Cloudflare.

How to find the real IP address behind Cloudflare?

Solution: There is an easy fix for this. You just need to tell your web server (NGINX in this case) that whenever a connection comes from a Cloudflare IP, it should report the real user’s IP instead. For this we will use the ngx_http_realip_module module.

Where can I find Cloudflare IP ranges?

Cloudflare publishes their IP ranges at https://www.cloudflare.com/en-gb/ips. They update these IPs from time to time, so keeping the NGINX directives current becomes a repetitive task. That is why we have made this little script to always show the latest header rules based on the current Cloudflare IP address ranges.

Cloudflare Real IP header (Updated Daily)

You can just copy and paste the code from the next block into your NGINX server block, and you will then start seeing the real IP addresses of users on your website.

Note: You may have to change your application code to look for client IP addresses in the CF-Connecting-IP header.

# Add following to get user's real IPs info from Cloudflare
# (last updated 24 Sep 2022)
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 103.22.200.0/22;
set_real_ip_from 103.31.4.0/22;
set_real_ip_from 141.101.64.0/18;
set_real_ip_from 108.162.192.0/18;
set_real_ip_from 190.93.240.0/20;
set_real_ip_from 188.114.96.0/20;
set_real_ip_from 197.234.240.0/22;
set_real_ip_from 198.41.128.0/17;
set_real_ip_from 162.158.0.0/15;
set_real_ip_from 104.16.0.0/13;
set_real_ip_from 104.24.0.0/14;
set_real_ip_from 172.64.0.0/13;
set_real_ip_from 131.0.72.0/22;
set_real_ip_from 2400:cb00::/32;
set_real_ip_from 2606:4700::/32;
set_real_ip_from 2803:f800::/32;
set_real_ip_from 2405:b500::/32;
set_real_ip_from 2405:8100::/32;
set_real_ip_from 2a06:98c0::/29;
set_real_ip_from 2c0f:f248::/32;
real_ip_header CF-Connecting-IP;
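To confirm that the directives above are working, you can log the restored address next to the raw header. This is just a sketch: cf_combined is a hypothetical log format name, and $http_cf_connecting_ip is simply how NGINX exposes the raw CF-Connecting-IP request header.

```nginx
# Goes in the http context (sketch; cf_combined is a made-up name).
# $remote_addr shows the (possibly rewritten) client address,
# $http_cf_connecting_ip shows the raw CF-Connecting-IP header.
log_format cf_combined '$remote_addr (CF: $http_cf_connecting_ip) - '
                       '"$request" $status $body_bytes_sent';

access_log /var/log/nginx/access.log cf_combined;
```

Once the realip module is doing its job, the two values should match for requests that arrive via Cloudflare.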

Bonus Setup: A bash script to automatically update nginx configs with updated IPs

Here is a nifty little resource that lets you keep your NGINX config file up to date through a bash script. It does essentially the same thing as above, but through a cron job. Check it out.
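A minimal sketch of such a script might look like the following. It assumes Cloudflare’s published list endpoints (https://www.cloudflare.com/ips-v4 and https://www.cloudflare.com/ips-v6); the function itself just turns a list of CIDR ranges into set_real_ip_from directives, so the fetch step is left as a comment to run from your cron job.

```shell
#!/usr/bin/env sh
# Sketch: regenerate NGINX real-IP rules from a list of CIDR ranges.
# In a cron job you would fetch the live lists first, e.g.:
#   curl -s https://www.cloudflare.com/ips-v4 https://www.cloudflare.com/ips-v6 \
#     | cidrs_to_realip > /etc/nginx/conf.d/cloudflare-realip.conf   # example path

# Turn one CIDR per line (stdin) into set_real_ip_from directives (stdout).
cidrs_to_realip() {
    while IFS= read -r cidr; do
        # Skip blank lines.
        [ -n "$cidr" ] && printf 'set_real_ip_from %s;\n' "$cidr"
    done
    printf 'real_ip_header CF-Connecting-IP;\n'
}

# Example usage with two sample ranges:
printf '173.245.48.0/20\n2400:cb00::/32\n' | cidrs_to_realip
```

After regenerating the file, the cron job would typically run `nginx -t` and reload NGINX only if the test passes.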


Access to Fetch has been blocked by CORS policy

Access to fetch at ‘https://example.com’ from origin ‘https://example2.com’ has been blocked by CORS policy: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. If an opaque response serves your needs, set the request’s mode to ‘no-cors’ to fetch the resource with CORS disabled.

Does this error look familiar? It usually happens when a client browser tries to access a resource from another domain via a client-side script (typically JavaScript). Cross-origin resource sharing (CORS) is a mechanism that allows restricted resources on a web page to be requested from a domain outside the one from which the first resource was served.

Solution via NGINX webserver: Add a header directive that tells the browser that this origin is whitelisted to access the resource

If you are accessing the resource from a domain that you own, then you are in luck. You can tell your webserver to allow your other domain to have access. Add the following code in your NGINX conf file. This can be placed within the server block.

add_header 'Access-Control-Allow-Origin' "${scheme}://definedictionarymeaning.com" always;

Explanation: if the resource is hosted on craftypixels.com, we are telling its web server to allow requests coming from pages served by definedictionarymeaning.com.

How to add multiple origin headers?

Adding multiple origins is simple. The following code needs to be placed in the location block inside the server block. Replace site1.com and the others with the sites you want to whitelist.

if ($http_origin ~* "^https?://(site1\.com|site2\.com|site3\.com)$") {
    add_header Access-Control-Allow-Origin "$http_origin";
}
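Browsers also send a preflight OPTIONS request before non-simple requests (custom headers, methods other than GET/POST). A fuller sketch that answers the preflight directly in NGINX might look like the following — the /api/ path, the origin list, and the allowed methods/headers are placeholders to adapt to your site:

```nginx
location /api/ {
    # Default to empty so the header is simply omitted for unknown origins.
    set $cors_origin "";
    if ($http_origin ~* "^https?://(site1\.com|site2\.com)$") {
        set $cors_origin $http_origin;
    }

    add_header Access-Control-Allow-Origin $cors_origin always;

    # Answer the preflight without forwarding it to the backend.
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin $cors_origin always;
        add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
        add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;
        add_header Access-Control-Max-Age 86400 always;
        return 204;
    }
}
```

Many setups move the origin check into a map block in the http context instead, which avoids NGINX’s well-known pitfalls with if.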

Why does CORS exist?

CORS exists to protect the internet from malicious scripts and intruders. For many years, a script from one site could not access the content of another site. That simple yet powerful rule was a foundation of internet security: an evil script from badsite.com could not access a user’s mailbox at gmail.com. JavaScript also didn’t have any special methods for making network requests at the time; it was a toy language used to decorate web pages.

This article tries to address the following errors:

  1. cors policy no access control allow origin
  2. cross origin request blocked
  3. blocked by cors policy
  4. access blocked by cors policy
  5. cors header access control allow origin missing
  6. no access control allow origin header is present

Failed to load resource the server responded with a status of 400

SocketIO Failed to load resource the server responded with a status of 400

2018/08/19 08:37:09 Failed to load resource the server responded with a status of 400 (socketio)
2018/08/19 08:37:09 Failed to load resource the server responded with a status of 400 (socketio)
2018/08/19 08:37:09 Failed to load resource the server responded with a status of 400 (socketio)
2018/08/19 08:37:09 Failed to load resource the server responded with a status of 400 (socketio)
400 Bad Request Error Socketio (browser console)

When did it start happening for me?

This started happening to me when I had NodeBB running on a server behind NGINX as a reverse proxy, in multiple processes. Funnily enough, it worked fine with the conventional setup where NodeBB ran as a single process on a single port. Ultimately, the following NGINX config file helped me resolve the issue.

upstream io_nodes {
    ip_hash;
    server 127.0.0.1:4567;
    server 127.0.0.1:4569;
}


server {
    listen                  80;
    server_name             craftypixels.com www.craftypixels.com;
    return                  301 $scheme://craftypixels.com$request_uri;
}
server {

    listen 443 ssl;

    ssl on;
    ssl_certificate         filepath;
    ssl_certificate_key     filepath;

    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";

    server_name craftypixels.com www.craftypixels.com;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_redirect off;

    # Socket.io Support
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    gzip            on;
    gzip_min_length 1000;
    gzip_proxied    off;
    gzip_types      text/plain application/xml text/javascript application/javascript application/x-javascript text/css application/json;


    location @nodebb {
        proxy_pass http://io_nodes;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

    location ~ ^/assets/(.*) {
        root /var/www/nodebb/;
        try_files /build/public/$1 /public/$1 @nodebb;
    }

    location /plugins/ {
        root /var/www/nodebb/build/public/;
        try_files $uri @nodebb;
    }

    location / {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://io_nodes;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

}
Nginx vhost file for nodebb running on multiple ports

Background of 400 Bad Request Error

The 400 Bad Request Error is an HTTP response status code indicating that the server was unable to process the request sent by the client due to invalid syntax. As with the dozens of potential HTTP response codes, receiving a 400 Bad Request Error while accessing your own application can be both frustrating and challenging to fix. Such HTTP response codes represent the complex relationship between the client, a web application, a web server, and often multiple third-party web services, so determining the cause of a particular status code can be difficult, even within a controlled development environment.


Scaling with Nginx (Error: worker_connections are not enough)

Worker_connections are not enough

2018/08/19 08:37:09 [alert] 14517#14517: 3000 worker_connections are not enough
2018/08/19 08:37:09 [alert] 14517#14517: 3000 worker_connections are not enough
2018/08/19 08:37:09 [alert] 14517#14517: 3000 worker_connections are not enough
2018/08/19 08:37:09 [alert] 14518#14518: 3000 worker_connections are not enough
2018/08/19 08:37:09 [alert] 14518#14518: 3000 worker_connections are not enough
2018/08/19 08:37:09 [alert] 14517#14517: 3000 worker_connections are not enough
2018/08/19 08:37:09 [alert] 14517#14517: 3000 worker_connections are not enough
2018/08/19 08:37:09 [alert] 14517#14517: 3000 worker_connections are not enough
worker_connections are not enough

This usually happens when NGINX is opening more simultaneous connections than the configuration allows. The fix is quite simple: just increase the number of connections allowed per worker. Keep in mind that the right value depends on your hardware and the number of users you are serving. In our case we used to do fine with a value below 1000, but with the recent increase in traffic we had to bump this number up.
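As a rough capacity check, the total number of simultaneous connections NGINX can hold is about worker_processes × worker_connections, and when proxying, each client typically consumes two connections (one to the client, one to the upstream). A small sketch of the arithmetic, with example numbers:

```shell
# Rough capacity estimate for NGINX used as a reverse proxy (example values).
worker_processes=4       # e.g. one per CPU core
worker_connections=3000  # the value from nginx.conf

total_connections=$((worker_processes * worker_connections))
# As a reverse proxy, each client uses 2 connections (client + upstream).
max_clients=$((total_connections / 2))

echo "total connections: $total_connections"
echo "approx. max clients: $max_clients"
```

If the log shows “3000 worker_connections are not enough”, your concurrent load has exceeded this budget for at least one worker.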

/etc/nginx/nginx.conf
Nginx Config file needs to be edited
events {
        #Large number based on your situation
        worker_connections 20000; 
}
Increase worker connection
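Note that each connection also consumes a file descriptor, so the operating system limit (ulimit -n) has to keep up. A common companion setting is worker_rlimit_nofile; the values below are examples, not recommendations:

```nginx
# /etc/nginx/nginx.conf (sketch; example values)
worker_processes auto;

# Raise the per-worker file descriptor limit so the workers
# can actually sustain worker_connections open sockets.
worker_rlimit_nofile 40000;

events {
    worker_connections 20000;
}
```

Without raising the file descriptor limit, a large worker_connections value can simply shift the failure to “too many open files” errors.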

Other variations of the “worker_connections are not enough” error

2018/08/19 08:46:32 [alert] 14517#14517: *1128344 3000 worker_connections are not enough while connecting to upstream, client: 2607:fb90:8a5:19fe:ddc1:7265:5d72:646e, server: craftypixels.com, request: "GET /socket.io/?EIO=3&transport=websocket HTTP/1.1", upstream: "/socket.io/?EIO=3&transport=websocket", host: ""
2018/08/19 08:46:32 [alert] 14518#14518: *1128352 3000 worker_connections are not enough while connecting to upstream, client: 70.54.115.201, server: craftypixels.com, request: "GET /socket.io/?EIO=3&transport=websocket HTTP/1.1", upstream: "/socket.io/?EIO=3&transport=websocket", host: "craftypixels.com"
2018/08/19 08:46:32 [alert] 14517#14517: *1128355 3000 worker_connections are not enough while connecting to upstream, client: 24.162.99.225, server: craftypixels.com, request: "GET /socket.io/?EIO=3&transport=websocket HTTP/1.1", upstream: "/socket.io/?EIO=3&transport=websocket", host: "craftypixels.com"
2018/08/19 08:46:32 [alert] 14517#14517: *1128368 3000 worker_connections are not enough while connecting to upstream, client: 81.157.70.235, server: craftypixels.com, request: "GET /socket.io/?EIO=3&transport=websocket HTTP/1.1", upstream: "/socket.io/?EIO=3&transport=websocket", host: "craftypixels.com"

This article tries to address the following errors:

  1. worker_connections are not enough while connecting to upstream
  2. 1024 worker_connections are not enough while connecting to upstream
  3. nginx worker_connections are not enough
  4. worker_connections are not enough nginx
  5. 1024 worker_connections are not enough
  6. 768 worker_connections are not enough
  7. 512 worker_connections are not enough
  8. worker_connections are not enough

How to Enable HTTPS on Website for Free

The HTTPS protocol lets users interact with a website in a secure and protected way. What it basically means is that information is sent to the website’s server in encrypted form, and is also received on the user’s computer or mobile device in encrypted form. You should always protect all of your websites with HTTPS, even if they don’t handle sensitive communications. Security-wise it is critical because it prevents intruders from exploiting unprotected communication between your website and its users, and it helps protect users from being identified and having their behavior tracked. Aside from providing security and data integrity for both your websites and your users’ personal information, HTTPS is a requirement for many new browser features.

HTTPS is an extension of HTTP that allows content to be exchanged on the Internet with encryption.

Easiest & Free Option

There are various ways to secure your website and enable HTTPS protocol. Some are technically difficult and others are expensive. I was just setting up an example hobby website and needed it served via HTTPS. I wanted a simple and free way to do it. Here is how I did it.

My Setup

  • Website already propagating through Cloudflare account
  • Access to Shell terminal
  • Nginx with admin access

Steps

1. Head over to the “Crypto” tab and select SSL from the drop-down.

I recommend the Full (strict) option because it makes sure everything is delivered through HTTPS instead of mixed protocols.

2. Create origin Certificates through Cloudflare


Make sure you have an entry in the hostnames list that covers both the main domain and its subdomains (e.g. a wildcard entry).

3. Save Keys

This will generate two keys. One is your private key and the other is your public certificate. Go ahead and save both of these on your server; you are going to use them in your vhost settings.

4. Now open your vhost configuration file for the domain and add two server blocks. One server block will handle port 80 (HTTP) and redirect all traffic to port 443 (HTTPS); the other will serve the site over HTTPS.

server {
    listen                  80;
    server_name             craftypixels.com;
    return                  301 https://$server_name$request_uri;
}


server {
    listen 443 ssl;
    server_name craftypixels.com;

    root /var/www/craftypixels.com/client;
    index index.php index.html;
    charset UTF-8;

    ssl on;
    ssl_certificate         /var/www/cert/craftypixels.pem;
    ssl_certificate_key     /var/www/cert/craftypixels-private.key;

}

5. Some people skip step 4 and just enable redirection in the Cloudflare interface instead. I personally had some issues with this, especially since the website had multiple servers running on different ports that needed to communicate with each other. But if your website is simple, you’ll be fine with that approach.

Dive Deeper

This article is just a journal entry for myself and a quick and easy way to set up the HTTPS protocol. However, if you have trouble following along, there are more detailed guides that also cover the prerequisite steps.

I also want to clarify that this approach is good for small hobby websites which don’t really deal with a lot of user data. But if you are running a website that stores and transmits critical information, I recommend buying a proper certificate rather than relying on a shared free certificate like the one from Cloudflare.


How to install Nginx on Ubuntu

NGINX is a very fast open source web server. It’s faster and more resource-friendly than Apache in most cases and can be used as a web server or a reverse proxy.

Nginx has gained a lot of popularity these days and is responsible for hosting some of the largest and highest-traffic sites on the internet.

It’s quicker because it doesn’t need to spawn a new process or thread for each request the way Apache does, which also gives it a low memory footprint. In its quest for speed it doesn’t support .htaccess files; the rewrite rules you would put in .htaccess files for Apache go into the virtual host configuration files instead. There are other differences as well, such as the overall architecture.

Installation steps

sudo apt-get update
sudo apt-get install nginx
sudo ufw allow 'Nginx HTTP'
systemctl status nginx
Basic Installation commands on ubuntu (Allowing HTTP traffic)

If you don’t see any errors after each of these commands, this means Nginx is installed and running. Head over to your server IP to see the default loading page.

/etc/nginx/sites-enabled/default
Path where default vhost file is located

 

sudo service nginx start
Command to run Nginx as a service

Cheatsheet

# Apache userdir simulation.
location ~ ^/~([^/]+)(/.*)?$ {
    alias /home/$1/public_html$2;
    autoindex on;
    ssi on;
}
Simulating Apache’s UserDir, with nginx
server {
    listen 80;

    server_name example.com;

    location / {
        proxy_pass http://localhost:4567;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

}
Reverse proxy Snippet
location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    keepalive_timeout 0;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME /var/www/html/$fastcgi_script_name;
    fastcgi_index index.php;
}
FastCGI for PHP, Python or Ruby.

Running NodeJS and PHP together

We can run both apps by configuring the virtual host files. You can find the settings I used for Apache and Nginx below. Make sure you have both of your apps ready by going over the prerequisites. In my case the WordPress blog runs on PHP while the rest of the site runs on NodeJS.

Apache

Prerequisites

  • Apache is installed

  • Proxy module for Apache is installed

  • PHP is installed

  • NodeJS app is running by itself on port 3000

<VirtualHost *:80>
  ServerName craftypixels.com
  ProxyPreserveHost on
  ProxyPass /blog !
  ProxyPass / http://localhost:3000/
  ProxyPassReverse / http://localhost:3000/
  Alias /blog /var/www/craftypixels/blog

  DocumentRoot /var/www/craftypixels/blog/
  Options -Indexes
</VirtualHost>

Nginx

Prerequisites

  • Nginx  1.11.* is installed

  • PHP is installed and the FPM socket file is located at /run/php/php7.0-fpm.sock

  • PHP app is placed at /var/www/craftypixels/blog/

  • NodeJS app is running by itself on port 3000

server {
    listen 80;
    server_name craftypixels.com;
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location @blog {
        rewrite ^/blog(.*) /blog/index.php?q=$1;
    }

    location /blog {
        index index.php;
        try_files $uri $uri/ @blog;
        alias /var/www/craftypixels/blog;

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        }
    }
}

What are Webservers?

A web server is a computer that runs websites. It’s a computer program that distributes web pages as they are requested by a browser. The basic objective of the web server is to store, process and deliver web pages to the users. This intercommunication is done using Hypertext Transfer Protocol (HTTP) and Hypertext Transfer Protocol Secure (HTTPS). Here are some examples of some leading Web servers.

  1. Apache Server
  2. Microsoft Internet Information Services IIS
  3. Lighttpd
  4. Nginx Web Server
  5. Sun Java System Web Server
  6. Tomcat Server

What is Apache or Apache2?

Apache is the most widely used web server software. Developed and maintained by Apache Software Foundation, Apache is an open source software available for free. It runs on 67% of all web servers in the world. It is fast, reliable, and secure. It can be highly customized to meet the needs of many different environments by using extensions and modules. Most WordPress hosting providers use Apache as their web server software.

What is Nginx?

NGINX is open source software for web serving, reverse proxying, caching, load balancing, media streaming, and more. In addition to its HTTP server capabilities, NGINX can also function as a proxy server for email (IMAP, POP3, and SMTP) and a reverse proxy and load balancer for HTTP, TCP, and UDP servers.