A proxy server is an intermediary between a client (user) and a destination server. It can be used for security, performance enhancement, and anonymity. There are two main types of proxy servers:
- Forward Proxy (Client-Side Proxy)
- Reverse Proxy (Server-Side Proxy)
Forward Proxy
- A forward proxy sits between the client and the internet.
- When a client makes a request, the forward proxy forwards the request to the target server on behalf of the client.
- The server sends the response back to the proxy, which then delivers it to the client.
- Hides user IP addresses from websites.
- Users can bypass geo-restrictions (e.g., access blocked content).
- Used by VPN services to provide anonymous browsing.
- Organizations use forward proxies to block access to restricted sites.
- Used in schools, offices, and government networks for censorship.
- Forward proxies cache frequently requested content to reduce server load.
- Example: A school proxy caches YouTube videos so students don’t repeatedly download them.
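To make the forward-proxy flow concrete, here is a minimal command-line example of routing a request through a forward proxy; the proxy address proxy.example.com:3128 is a hypothetical placeholder, not a real service:

# Route the request through the forward proxy instead of connecting directly
curl -x http://proxy.example.com:3128 http://example.com/

# Many command-line tools also honor the http_proxy environment variable
export http_proxy=http://proxy.example.com:3128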
Reverse Proxy
- A reverse proxy sits in front of backend servers and receives requests from clients.
- Instead of clients connecting to origin servers, they connect to the reverse proxy, which forwards requests.
- Distributes traffic across multiple backend servers.
- Improves performance and prevents server overload.
- Caches static content (images, CSS, JavaScript) to reduce backend load.
- Used in CDNs (Content Delivery Networks) to serve content efficiently.
- Hides backend server IPs, preventing direct attacks.
- Blocks malicious traffic before it reaches the origin.
- Offloads SSL/TLS encryption and decryption from backend servers.
- Used in HTTPS-based services for performance (a TLS termination sketch follows the Nginx example below).
Setting Up Nginx as a Reverse Proxy
Install Nginx:
sudo apt install nginx -y
Configure as a Reverse Proxy:
sudo nano /etc/nginx/sites-available/reverse_proxy
Add the following:
server {
    listen 80;
    server_name mysite.com;

    location / {
        # Replace backend_server_ip with the address of your origin server
        proxy_pass http://backend_server_ip;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Enable the Configuration and Restart Nginx:
sudo ln -s /etc/nginx/sites-available/reverse_proxy /etc/nginx/sites-enabled/
sudo systemctl restart nginx
Now, requests to mysite.com are forwarded to the backend.
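Building on the SSL/TLS offloading point above, here is a hedged sketch of TLS termination at the reverse proxy: clients connect over HTTPS, while the backend still receives plain HTTP. The certificate paths and backend_server_ip are placeholders.

server {
    listen 443 ssl;
    server_name mysite.com;

    # Placeholder certificate and key paths
    ssl_certificate     /etc/ssl/certs/mysite.crt;
    ssl_certificate_key /etc/ssl/private/mysite.key;

    location / {
        # TLS ends at the proxy; the backend is reached over plain HTTP
        proxy_pass http://backend_server_ip;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}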
A load balancer distributes incoming traffic across multiple servers to:
- Ensure high availability (prevent a single server from being overloaded).
- Improve performance (use multiple servers efficiently).
- Provide fault tolerance (if one server fails, others take over).
Types of Load Balancers:
- Software-based: Nginx, HAProxy
- Hardware-based: F5 Networks, Citrix ADC
- Cloud-based: AWS ELB, GCP Load Balancing
Round Robin
How It Works:
- Distributes requests evenly across all servers in a circular manner.
Example:
Request 1 → Server A
Request 2 → Server B
Request 3 → Server C
Request 4 → Server A
Pros: Simple and effective for equally powerful servers.
Cons: Does not consider server load or performance.
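For reference, round robin is what Nginx's upstream module uses by default when no balancing directive is given; the server names below are placeholders (the full setup appears at the end of this section):

upstream backend {
    # No directive needed: Nginx rotates through the servers in order
    server server1.com;
    server server2.com;
    server server3.com;
}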
Weighted Round Robin
How It Works:
- Assigns weights to servers based on capacity.
- More powerful servers get more requests.
Example:
Server A (weight = 3)
Server B (weight = 1)
Server C (weight = 1)
Requests: A → A → A → B → C → A → A → A → B → C → …
Pros: Accounts for server capacity differences.
Cons: Still does not account for real-time load.
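In Nginx this maps to the weight parameter on each upstream server. A minimal sketch matching the example above, with placeholder server names:

upstream backend {
    server server1.com weight=3;   # receives 3 of every 5 requests
    server server2.com weight=1;
    server server3.com weight=1;
}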
Least Connections
How It Works:
- Sends new requests to the server with the fewest active connections.
Example:
Server A (5 active connections)
Server B (2 active connections)
Request goes to Server B.
Pros: Ideal for handling varying workloads.
Cons: Requires tracking real-time server loads.
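Nginx implements this with the least_conn directive in the upstream block; a minimal sketch with placeholder server names:

upstream backend {
    least_conn;            # pick the server with the fewest active connections
    server server1.com;
    server server2.com;
}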
IP Hash
How It Works:
- Assigns a client to a server based on its IP address.
- Ensures that repeat requests from the same client go to the same server.
Example:
Client 192.168.1.10 → Server A
Client 192.168.1.11 → Server B
Pros: Useful for session persistence (e.g., authentication).
Cons: If a server fails, those users lose their session.
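Nginx implements this with the ip_hash directive, which keeps a given client on the same server; a minimal sketch with placeholder server names:

upstream backend {
    ip_hash;               # hash the client IP so repeat requests hit the same server
    server server1.com;
    server server2.com;
}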
AWS Elastic Load Balancing (ELB):
- Application Load Balancer (ALB): Best for HTTP/HTTPS traffic.
- Network Load Balancer (NLB): Best for low-latency TCP/UDP traffic.
- Classic Load Balancer (CLB): Legacy option.
GCP Load Balancing:
- Supports global load balancing (single IP worldwide).
- Autoscaling to handle traffic spikes.
Setting Up Nginx as a Load Balancer
Edit nginx.conf and add the following:
# Place these blocks inside the http { } context of nginx.conf
upstream backend {
    server server1.com;
    server server2.com;
    server server3.com;
}

server {
    listen 80;

    location / {
        # Requests are spread across the servers in the backend group
        proxy_pass http://backend;
    }
}
Restart Nginx:
sudo systemctl restart nginx