How to Use HAProxy for Load Balancing


Introduction

High-traffic web servers benefit from implementing load balancers. A load balancer helps relay traffic across multiple web servers, ensuring high availability and maintaining web server performance during traffic spikes.

HAProxy is a popular, reliable, and cost-efficient solution for load balancing. The software is known for being robust and dependable. Many popular websites, such as GitHub, Reddit, Slack, and Twitter, use HAProxy for load-balancing needs.

This tutorial explains how to set up and use HAProxy for load balancing.


Prerequisites

  • A system with Linux OS.
  • Access to the sudo command.
  • Python 3 installed.

What is HAProxy?

HAProxy (High Availability Proxy) is an efficient web load balancer and reverse proxy server written in C. This open-source software is available for most Linux distributions through popular package managers.

The tool offers many advanced features, including a complete set of load balancing capabilities.

As a load balancer, HAProxy works in two modes:

  • A TCP connection load balancer, where balancing decisions occur based on the complete connection.
  • An HTTP request balancer, where balancing decisions occur per request.

The sections below demonstrate how to create an HTTP load balancer.

Setting up HAProxy for Load Balancing

Install HAProxy on your system before setting up the load balancer. HAProxy is available in the yum and APT package manager repositories.

To install HAProxy, follow the directions for your OS below:

  • For Ubuntu and Debian-based systems using the APT package manager, do the following:

1. Update the package list:

sudo apt update

2. Install HAProxy with the following command:

sudo apt install haproxy

Press y and Enter to continue when prompted and wait for the installation to complete.

  • For CentOS and RHEL-based systems using the yum package manager, do the following:

1. Update the yum repository list:

sudo yum update

2. Install HAProxy with the following command:

sudo yum install haproxy

Wait for the installation to complete before starting the setup.

Setting Initial Configuration

HAProxy provides a sample configuration file located in /etc/haproxy/haproxy.cfg. The file contains a standard setup without any load balancing options.

Use a text editor to view the configuration file and inspect the contents:

sudo nano /etc/haproxy/haproxy.cfg

The file has two main sections:

  • The global section. Contains configuration for HAProxy, such as SSL locations, logging information, and the user and group that execute HAProxy functions. A configuration file typically contains a single global section, and its default values rarely need to be changed.
  • The defaults section. Sets the default values for all nodes defined below it. Multiple defaults sections are possible, and they override previous default values.

Additional sections for load balancing include:

  • The frontend section. Contains information about the IP addresses and ports clients use to connect. 
  • The backend section. Defines server pools that fulfill requests sent through the frontend.
  • The listen section. Combines the functions of the frontend and backend. Use listen for smaller setups or when routing to a specific server group.

A typical load balancer configuration file looks like the following:

global
    # process settings
defaults
    # default values for sections below
frontend
    # address and port the clients connect to
backend
    # servers for fulfilling client requests
listen
    # complete proxy definition

Below is a detailed explanation of the sections and an example setup for a load balancing server with a custom configuration file. Clear all the contents from the default file and follow the example below.

Setting Defaults

The defaults section contains information shared across nodes defined below this section. Use defaults to define the operational mode and timeouts. For example:

defaults
    mode http
    timeout client 10s
    timeout connect 5s
    timeout server 10s
    timeout http-request 10s

The code consists of:

  • The mode directive. Defines the operating mode for the load balancer, set to either http or tcp. The mode tells HAProxy how to handle incoming requests.
  • The timeout settings. Provide safety measures against common connection and data transfer problems. Increase or decrease the times according to your use case.
    • timeout client is the time HAProxy waits for the client to send data.
    • timeout connect is the time needed to establish a connection with the backend.
    • timeout server is the wait time for the server to send data.
    • timeout http-request is the wait time for the client to send a complete HTTP request.
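The failure mode these timeouts guard against can be sketched with a short Python script (an illustration, not HAProxy itself): a dummy server accepts a TCP connection but never sends data, so the client's read times out, much as timeout server would cut off an unresponsive backend.

```python
import socket

# A dummy "server" that accepts connections but never sends data.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen()

# The "client" gives up after 0.5 seconds, mimicking timeout server
# (shortened here so the demo finishes quickly).
cli = socket.create_connection(srv.getsockname(), timeout=0.5)
timed_out = False
try:
    cli.recv(1024)  # blocks: the server never sends anything
except socket.timeout:
    timed_out = True

print("timed out:", timed_out)  # timed out: True
cli.close()
srv.close()
```

Without a timeout, the client would hang indefinitely; the same logic applies to HAProxy's own connections.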

Copy and paste the defaults code block into the /etc/haproxy/haproxy.cfg file and continue to the next section. To verify the file's syntax at any point, run sudo haproxy -c -f /etc/haproxy/haproxy.cfg.

Setting Frontend

The frontend section exposes a website or application to the internet. The node accepts incoming connection requests and forwards them to a pool of servers in the backend.

Append the frontend section (the last two lines below) to the /etc/haproxy/haproxy.cfg file:

defaults
    mode http
    timeout client 10s
    timeout connect 5s
    timeout server 10s
    timeout http-request 10s

frontend my_frontend
    bind 127.0.0.1:80

The new lines consist of the following information:

  • frontend defines the section start and sets a descriptive name (my_frontend).
  • bind binds a listener to the localhost 127.0.0.1 address on port 80, which is the address where the load balancer receives requests.
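What bind does can be sketched in Python: HAProxy opens a listening TCP socket on the given address and port. The snippet below binds to port 0 so the OS assigns a free port, since listening on port 80 normally requires root privileges.

```python
import socket

# Open a listening socket, as HAProxy does for `bind 127.0.0.1:80`.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))  # port 0: OS assigns a free port (80 needs root)
s.listen()

host, port = s.getsockname()
print(f"listening on {host}:{port}")

s.close()
```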

Save the file and restart the HAProxy service. Run:

sudo systemctl restart haproxy

The connection listens for requests on 127.0.0.1:80. To test, send a request using the curl command:

curl 127.0.0.1:80

The response returns a 503 Service Unavailable error, meaning no server can fulfill the request. The response makes sense because the backend servers do not exist yet. The following step sets up the backend node.

Setting Backend

The backend is a pool of servers for fulfilling and resolving client requests. The section defines how the load balancer distributes the workload across multiple servers.

Append the backend information to the /etc/haproxy/haproxy.cfg file:

defaults
    mode http
    timeout client 10s
    timeout connect 5s
    timeout server 10s
    timeout http-request 10s

frontend my_frontend
    bind 127.0.0.1:80
    default_backend my_backend

backend my_backend
    balance leastconn
    server server1 127.0.0.1:8001
    server server2 127.0.0.1:8002

Each line has the following information:

  • default_backend in the frontend section tells the frontend which backend pool receives the forwarded requests.
  • backend contains a descriptive name (my_backend) for the server pool, which we use to connect with the frontend.
  • balance is the load balancing algorithm. If omitted, the algorithm defaults to round-robin.
  • server defines a new server on each line with a unique name, IP address, and port.
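The two algorithms mentioned above can be sketched in Python (a simplified model, not HAProxy's implementation): round-robin cycles through the pool in order, while leastconn picks the server with the fewest active connections.

```python
from itertools import cycle

servers = ["server1", "server2"]

# Round-robin: hand out servers in rotating order.
rr = cycle(servers)
picks = [next(rr) for _ in range(4)]
print(picks)  # ['server1', 'server2', 'server1', 'server2']

# leastconn: pick the server with the fewest active connections.
active_connections = {"server1": 3, "server2": 1}
choice = min(active_connections, key=active_connections.get)
print(choice)  # server2
```

leastconn suits workloads with long-lived or uneven requests, where a simple rotation would overload a busy server.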

To test, do the following:

1. Save the file and restart the HAProxy service:

sudo systemctl restart haproxy

2. Create web servers on the backend ports using Python's built-in HTTP server. Run the commands in two different terminal tabs:

python3 -m http.server 8001 --bind 127.0.0.1
python3 -m http.server 8002 --bind 127.0.0.1

3. In a third terminal window, send a request to confirm the connection works:

curl 127.0.0.1

The server processes the request from the client and sends a response back. The output displays the contents of the directory where the server is running.

Check the terminal window of the running server to see the request.


The output shows the GET request with a 200 (OK) response.
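The manual test above can also be scripted. The sketch below starts one Python web server in a background thread (on an OS-assigned port, to avoid clashing with 8001) and sends it a request, checking for the 200 response.

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Start a backend like `python3 -m http.server`, but on an OS-assigned port.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Send a request, as curl does, and record the status code.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as response:
    status = response.status

print(status)  # 200
server.shutdown()
```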

Setting Rules

Additional rules help configure the load balancer to handle cases with exceptions. For example, if there are multiple backends to which we direct client requests, the rules help define when to use which backend.

An example setup looks like the following:

defaults
    mode http
    timeout client 10s
    timeout connect 5s
    timeout server 10s
    timeout http-request 10s

frontend my_frontend
    bind 127.0.0.1:81,127.0.0.1:82,127.0.0.1:83
    use_backend first if { dst_port 81 }
    use_backend second if { dst_port 82 }
    default_backend third

backend first
    server server1 127.0.0.1:8001

backend second
    server server2 127.0.0.1:8002

backend third
    server server3 127.0.0.1:8003

The code does the following:

  • Binds the address to three ports (81, 82, and 83).
  • Sets a rule to use the first backend if the destination port is 81.
  • Adds another rule to use the second backend if the destination port is 82.
  • Defines a default backend (third) for all other cases.

Use multiple backends and rules to forward traffic to different websites or apps.
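The routing decision made by the use_backend rules can be modeled in Python (an illustration of the logic, not HAProxy code):

```python
def pick_backend(dst_port):
    """Mirror the use_backend rules: 81 -> first, 82 -> second, else third."""
    if dst_port == 81:
        return "first"
    if dst_port == 82:
        return "second"
    return "third"  # default_backend handles all other cases

print(pick_backend(81))   # first
print(pick_backend(82))   # second
print(pick_backend(443))  # third
```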

Monitoring

Use the global and listen sections to monitor the health of all the nodes via a web application. A typical setup looks like the following:

global
    stats socket /run/haproxy/admin.sock mode 660 level admin

defaults
    mode http
    timeout client 10s
    timeout connect 5s
    timeout server 10s
    timeout http-request 10s

frontend my_frontend
    bind 127.0.0.1:80
    default_backend my_backend

backend my_backend
    balance leastconn
    server server1 127.0.0.1:8001
    server server2 127.0.0.1:8002

listen stats
    bind :8000
    stats enable
    stats uri /monitoring
    stats auth username:password

New additions to the file include:

  • The global section enables the stats socket Runtime API. Connecting to the socket allows dynamic server monitoring through a built-in web application.
  • The listen section serves the monitoring page on port 8000 at the URI /monitoring and requires credentials to access the page.
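Besides the web page, the Runtime API socket from the global section answers a show stat command with CSV output. The sketch below parses output of that shape with Python's csv module; the two sample rows are illustrative, not real HAProxy output, and real output contains many more columns.

```python
import csv
import io

# Illustrative sample in the shape of `show stat` CSV output:
# pxname = proxy name, svname = server name, scur = current sessions.
sample = """# pxname,svname,status,scur
my_backend,server1,UP,2
my_backend,server2,UP,5
"""

# Strip the leading "# " from the header line, then parse as CSV.
reader = csv.reader(io.StringIO(sample.lstrip("# ")))
header = next(reader)
rows = [dict(zip(header, row)) for row in reader]

for stats in rows:
    print(stats["svname"], stats["status"], stats["scur"])
```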

To access the monitoring page:

1. Save the configuration file and restart HAProxy:

sudo systemctl restart haproxy

2. Open a web browser and enter 127.0.0.1:8000/monitoring as a web page address.

3. The page brings up a login window. Enter the credentials defined in the stats auth line of the listen section.


4. The monitoring page displays, showing various statistics for individual nodes.


The statistics display detailed information for the frontend and backend sections, while the final table shows the general statistics for both.

Conclusion

After reading this guide, you know how to set up a basic load balancer using HAProxy. The guide showed you how to configure the load balancer, as well as how to monitor all the nodes.

