HAProxy
This content bundle automates the deployment and configuration of HAProxy load balancers within Digital Rebar Provision. It supports both automatic backend discovery and manual override configurations.
How It Works

The haproxy-configure task uses the DRP API (via drppy_client) to:

- Discover backend machines matching `haproxy/filters`
- Build frontend port maps and backend server lists from discovered params
- Set the computed maps as `haproxy/frontend/map` and `haproxy/backend/map`
- Render `haproxy.cfg` from a Go template using those maps

Override parameters (`haproxy/frontend/map-override`, `haproxy/backend/map-override`) bypass discovery and inject values directly.
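The map-building step can be sketched in Python. This is illustrative only: the machine names, addresses, and services below are hypothetical stand-ins for the data the real task pulls via drppy_client:

```python
# Illustrative sketch of how haproxy-configure derives its maps.
# The real task discovers this data via drppy_client; the machines,
# addresses, and services below are hypothetical stand-ins.
discovered = {
    "web01": {"address": "192.168.1.10", "services": {"http": 80}},
    "web02": {"address": "192.168.1.11", "services": {"http": 80}},
}

default_opts = "maxconn 32"  # default from haproxy/backend/services/config

frontend_map = {}  # service name -> frontend listen port
backend_map = {}   # service name -> list of "server ..." lines

for name, info in sorted(discovered.items()):
    for svc, port in info["services"].items():
        # First machine to declare a service sets the frontend port
        frontend_map.setdefault(svc, port)
        backend_map.setdefault(svc, []).append(
            f"server {name} {info['address']}:{port} {default_opts}"
        )

print(frontend_map)            # {'http': 80}
print(backend_map["http"][0])  # server web01 192.168.1.10:80 maxconn 32
```

The computed maps correspond to `haproxy/frontend/map` and `haproxy/backend/map`, which the Go template then renders into `haproxy.cfg`.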
Requirements

- Digital Rebar Provision v4.14.0 or later
- DRP Community Content Bundle (v4.14.0+)
- Python 3 and `drppy_client` on target nodes (installed automatically by the task)
- A supported operating system on target nodes
- One or more backend machines running a service to load balance (e.g., Apache on port 80)
- Network access to package repositories (for haproxy and pip packages)
Self-Signed Certificates

If your DRP endpoint uses a self-signed certificate, set `rs-drppy-client-verify-ssl` to `false` on the global profile or on the haproxy machines so the configure script can reach the DRP API.
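For example, with drpcli (shown against the global profile; scope to individual machines if you prefer):

```shell
# Disable SSL verification for the configure script's DRP API calls.
drpcli profiles set global param rs-drppy-client-verify-ssl to 'false'
```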
Parameters

| Parameter | Description |
|---|---|
| `haproxy/role` | Machine role: `load-balancer` or `backend` |
| `haproxy/filters` | List of DRP filter strings for backend discovery |
| `haproxy/backend/services` | Map of service name to port on backend machines |
| `haproxy/backend/services/config` | Per-service backend server options (default: `maxconn 32`) |
| `haproxy/frontend/map` | Computed frontend port map (set by configure task) |
| `haproxy/backend/map` | Computed backend server map (set by configure task) |
| `haproxy/frontend/map-override` | Manual frontend port map (bypasses discovery) |
| `haproxy/backend/map-override` | Manual backend server map (bypasses discovery) |
| `haproxy/config-template` | Go template name for `haproxy.cfg` |
| `haproxy/frontend/config` | Per-service frontend config stanzas |
| `haproxy/backend/config` | Per-service backend config stanzas |
| `rs-debug-py-log-level` | Python log level for the configure script (INFO, DEBUG, etc.) |
| `rs-drppy-client-verify-ssl` | SSL certificate verification for DRP API calls (default: `true`). Set to `false` for self-signed certs. |
Configuration Customization

The bundle supports full customization of the HAProxy configuration:

- `haproxy/config-template`: Override the `haproxy.cfg` Go template name. Provide your own `.cfg.tmpl` template for complete control over the generated configuration file.
- `haproxy/frontend/config`: Per-service frontend settings (mode, options). Applied as stanzas within the frontend block of `haproxy.cfg`.
- `haproxy/backend/config`: Per-service backend settings (mode, balance algorithm). Applied as stanzas within the backend block of `haproxy.cfg`.
- `haproxy/backend/services/config`: Per-service, per-machine backend options appended to each `server` line (default: `maxconn 32`).
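As a sketch, per-service stanzas for an `http` service might look like this (values illustrative):

```yaml
haproxy/frontend/config:
  http:
    mode: http
haproxy/backend/config:
  http:
    mode: http
    balance: roundrobin
```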
Deployment Methods
HAProxy can be configured in two ways: Auto Discovery finds backends
dynamically via DRP filters, while Manual Override lets you specify
the exact backend list. Both methods use the same haproxy-configure task,
and differ only in which parameters you set.
Auto Discovery
The load balancer queries the DRP API for machines matching the filter
expressions in haproxy/filters. Each matched machine's
haproxy/backend/services parameter is read to build the frontend port map
and backend server list automatically.
On backend machines, set:
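For example (values illustrative; `http: 80` matches the Apache demo later in this guide):

```yaml
haproxy/role: backend
haproxy/backend/services:
  http: 80
```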
On the load balancer, set:
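For example (the filter shown matches machines tagged with the backend role):

```yaml
haproxy/role: load-balancer
haproxy/filters:
  - Params.haproxy/role=Eq(backend)
```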
The filter syntax follows DRP filter rules. Multiple filter strings can be listed, and their results are concatenated. Filters can match on any machine field: profiles, params, names, pools, etc.
Use auto discovery when backends change over time. The load balancer picks them up automatically on each configure run.
Manual Override
Set haproxy/frontend/map-override and haproxy/backend/map-override to
bypass discovery entirely. The configure task uses these values directly
instead of querying the DRP API for backends.
```yaml
haproxy/filters: []
haproxy/frontend/map-override:
  http: 80
  https: [443, 8443]
haproxy/backend/map-override:
  http:
    - server web01 192.168.1.10:80 maxconn 32
    - server web02 192.168.1.11:80 maxconn 32
  https:
    - server web01 192.168.1.10:443 maxconn 32
    - server web02 192.168.1.11:443 maxconn 32
```
Port values accept either a single integer (80) or an array ([443, 8443]).
Use manual override when you know exactly which backends to use, or when backends are not managed by DRP.
Quick Start Guide
This walkthrough uses 4 machines to demonstrate both deployment methods.
You will need the apache content bundle installed for the backend service.
Machines:

- 2 backend web servers (we'll use Apache)
- 1 load balancer for manual override (Method A)
- 1 load balancer for auto discovery (Method B)
Step 1: Prepare the Backend Machines
Install Apache on two machines to serve as backends.
```shell
# For each backend machine:
drpcli machines addprofile <BACKEND_UUID> apache-web-server
drpcli machines set <BACKEND_UUID> param haproxy/role to '"backend"'
drpcli machines set <BACKEND_UUID> param haproxy/backend/services to '{"http": 80}'
drpcli machines workflow <BACKEND_UUID> universal-runbook
drpcli machines update <BACKEND_UUID> '{"Runnable": true}'
```
Wait for both machines to complete. Verify Apache is running:
```shell
curl <BACKEND_1_IP>  # Should return an HTML page
curl <BACKEND_2_IP>  # Should return an HTML page
```
Step 2A: Manual Override Load Balancer
This method gives you explicit control over the backend list. No discovery is involved. You tell HAProxy exactly which servers to use.
```shell
# Apply the haproxy profile
drpcli machines addprofile <LB_UUID> universal-application-haproxy-server

# Set empty filters (disables auto-discovery)
drpcli machines set <LB_UUID> param haproxy/filters to '[]'

# Define the frontend and backend maps manually
drpcli machines set <LB_UUID> param haproxy/frontend/map-override to '{"http": 80}'
drpcli machines set <LB_UUID> param haproxy/backend/map-override to '{
  "http": [
    "server web01 <BACKEND_1_IP>:80 maxconn 32",
    "server web02 <BACKEND_2_IP>:80 maxconn 32"
  ]
}'

# Run the workflow
drpcli machines workflow <LB_UUID> universal-runbook
drpcli machines update <LB_UUID> '{"Runnable": true}'
```
Step 2B: Auto Discovery Load Balancer
This method uses DRP filters to automatically find backend machines. HAProxy
discovers backends, reads their haproxy/backend/services params, and builds
the configuration dynamically.
```shell
# Apply the haproxy profile
drpcli machines addprofile <LB_UUID> universal-application-haproxy-server

# Set filters to discover backends by role
drpcli machines set <LB_UUID> param haproxy/filters to '["Params.haproxy/role=Eq(backend)"]'

# Run the workflow
drpcli machines workflow <LB_UUID> universal-runbook
drpcli machines update <LB_UUID> '{"Runnable": true}'
```
Step 3: Verify
Once the workflows complete, test load balancing:
```shell
curl <LB_IP>  # Returns page from backend 1
curl <LB_IP>  # Returns page from backend 2 (round-robin)
```
You can inspect the generated configuration:
```shell
drpcli machines get <LB_UUID> param haproxy/backend/map
drpcli machines get <LB_UUID> param haproxy/frontend/map
```
Beyond the Demo: Cluster Orchestration
The steps above are for learning and demonstration. In production, you would
use DRP's cluster orchestration to provision the entire stack, both load
balancers and backends, in a single operation. The webserver-cluster example content
bundle demonstrates this pattern.
A cluster profile defines all machine roles, pipelines, and parameters in one place. DRP handles provisioning, ordering, and configuration automatically:
```yaml
Params:
  universal/application: my-ha-cluster
  universal/workflow-chain-index-override: cluster
  on-complete-work-order-mode: true
  cluster/wait-for-members: false
  cluster/machine-types:
    - load-balancer
    - web-backend
  cluster/machines:
    load-balancer:
      pipeline: universal-application-haproxy-server
      names: ['my-ha-proxy.local.domain']
      Params:
        haproxy/role: load-balancer
        haproxy/filters:
          - Profiles=Eq(universal-application-apache-web-server) Params.haproxy/role=Eq(backend) Profiles=Eq({{ .Machine.Name }})
        haproxy/frontend/config:
          http:
            mode: http
        haproxy/backend/config:
          http:
            mode: http
            balance: roundrobin
    web-backend:
      pipeline: universal-application-apache-web-server
      names: ['web01.local.domain', 'web02.local.domain']
      Params:
        haproxy/role: backend
        haproxy/backend/services:
          http: 80
```
Talk to Us!
The RackN Team uses Slack to communicate with our Digital Rebar Provision community. If you haven't already signed up for our Community Slack, you can do so at:
- https://rackn.com/support/for-community/