Routing & Load Balancing Overview | Traefik Docs (2024)

What's Happening to the Requests?

Let's zoom in on Traefik's architecture and walk through the components that create the routes.

First, when you start Traefik, you define entrypoints (in their most basic forms, they are port numbers). Then, connected to these entrypoints, routers analyze the incoming requests to see if they match a set of rules. If they do, the router might transform the request using pieces of middleware before forwarding it to your services.


Clear Responsibilities

  • Providers discover the services that live on your infrastructure (their IP, health, ...)
  • Entrypoints listen for incoming traffic (ports, ...)
  • Routers analyze the requests (host, path, headers, SSL, ...)
  • Services forward the request to your services (load balancing, ...)
  • Middlewares may update the request or make decisions based on the request (authentication, rate limiting, headers, ...)

Example with a File Provider

Below is an example of a full configuration file for the file provider that forwards http://example.com/whoami/ requests to a service reachable on http://private/whoami-service/. In the process, Traefik will make sure that the user is authenticated (using the BasicAuth middleware).

Static configuration:

```yaml
entryPoints:
  web:
    # Listen on port 8081 for incoming requests
    address: :8081

providers:
  # Enable the file provider to define routers / middlewares / services in file
  file:
    directory: /path/to/dynamic/conf
```

```toml
[entryPoints]
  [entryPoints.web]
    # Listen on port 8081 for incoming requests
    address = ":8081"

[providers]
  # Enable the file provider to define routers / middlewares / services in file
  [providers.file]
    directory = "/path/to/dynamic/conf"
```

```sh
# Listen on port 8081 for incoming requests
--entryPoints.web.address=:8081
# Enable the file provider to define routers / middlewares / services in file
--providers.file.directory=/path/to/dynamic/conf
```

Dynamic configuration:

```yaml
# http routing section
http:
  routers:
    # Define a connection between requests and services
    to-whoami:
      rule: "Host(`example.com`) && PathPrefix(`/whoami/`)"
      # If the rule matches, applies the middleware
      middlewares:
        - test-user
      # If the rule matches, forward to the whoami service (declared below)
      service: whoami

  middlewares:
    # Define an authentication mechanism
    test-user:
      basicAuth:
        users:
          - test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/

  services:
    # Define how to reach an existing service on our infrastructure
    whoami:
      loadBalancer:
        servers:
          - url: http://private/whoami-service
```

```toml
# http routing section
[http]
  [http.routers]
    # Define a connection between requests and services
    [http.routers.to-whoami]
      rule = "Host(`example.com`) && PathPrefix(`/whoami/`)"
      # If the rule matches, applies the middleware
      middlewares = ["test-user"]
      # If the rule matches, forward to the whoami service (declared below)
      service = "whoami"

  [http.middlewares]
    # Define an authentication mechanism
    [http.middlewares.test-user.basicAuth]
      users = ["test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/"]

  [http.services]
    # Define how to reach an existing service on our infrastructure
    [http.services.whoami.loadBalancer]
      [[http.services.whoami.loadBalancer.servers]]
        url = "http://private/whoami-service"
```

In this example, we use the file provider. Even though it is one of the least magical ways of configuring Traefik, it explicitly describes every available notion.

HTTP / TCP

In this example, we've defined routing rules for HTTP requests only. Traefik also supports TCP requests. To add TCP routers and TCP services, declare them in a TCP section, as in the following example.

Adding a TCP route for TLS requests on whoami-tcp.example.com

Static Configuration

```yaml
entryPoints:
  web:
    # Listen on port 8081 for incoming requests
    address: :8081

providers:
  # Enable the file provider to define routers / middlewares / services in file
  file:
    directory: /path/to/dynamic/conf
```

```toml
[entryPoints]
  [entryPoints.web]
    # Listen on port 8081 for incoming requests
    address = ":8081"

[providers]
  # Enable the file provider to define routers / middlewares / services in file
  [providers.file]
    directory = "/path/to/dynamic/conf"
```

```sh
# Listen on port 8081 for incoming requests
--entryPoints.web.address=:8081
# Enable the file provider to define routers / middlewares / services in file
--providers.file.directory=/path/to/dynamic/conf
```

Dynamic Configuration

```yaml
# http routing section
http:
  routers:
    # Define a connection between requests and services
    to-whoami:
      rule: "Host(`example.com`) && PathPrefix(`/whoami/`)"
      # If the rule matches, applies the middleware
      middlewares:
        - test-user
      # If the rule matches, forward to the whoami service (declared below)
      service: whoami

  middlewares:
    # Define an authentication mechanism
    test-user:
      basicAuth:
        users:
          - test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/

  services:
    # Define how to reach an existing service on our infrastructure
    whoami:
      loadBalancer:
        servers:
          - url: http://private/whoami-service

tcp:
  routers:
    to-whoami-tcp:
      service: whoami-tcp
      rule: "HostSNI(`whoami-tcp.example.com`)"
      tls: {}

  services:
    whoami-tcp:
      loadBalancer:
        servers:
          - address: xx.xx.xx.xx:xx
```

```toml
# http routing section
[http]
  [http.routers]
    # Define a connection between requests and services
    [http.routers.to-whoami]
      rule = "Host(`example.com`) && PathPrefix(`/whoami/`)"
      # If the rule matches, applies the middleware
      middlewares = ["test-user"]
      # If the rule matches, forward to the whoami service (declared below)
      service = "whoami"

  [http.middlewares]
    # Define an authentication mechanism
    [http.middlewares.test-user.basicAuth]
      users = ["test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/"]

  [http.services]
    # Define how to reach an existing service on our infrastructure
    [http.services.whoami.loadBalancer]
      [[http.services.whoami.loadBalancer.servers]]
        url = "http://private/whoami-service"

[tcp]
  [tcp.routers]
    [tcp.routers.to-whoami-tcp]
      rule = "HostSNI(`whoami-tcp.example.com`)"
      service = "whoami-tcp"
      [tcp.routers.to-whoami-tcp.tls]

  [tcp.services]
    [tcp.services.whoami-tcp.loadBalancer]
      [[tcp.services.whoami-tcp.loadBalancer.servers]]
        address = "xx.xx.xx.xx:xx"
```

Transport configuration

Most of what happens to the connection between the clients and Traefik, and then between Traefik and the backend servers, is configured through the entrypoints and the routers.

In addition, a few parameters are dedicated to configuring globally what happens with the connections between Traefik and the backends. This is done through the serversTransport and tcpServersTransport sections of the configuration, which feature the following options:
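For orientation, both sections sit at the root of the static configuration and can be set side by side; a minimal sketch (the values here are illustrative, not defaults):

```yaml
## Static configuration (illustrative values)
serversTransport:
  maxIdleConnsPerHost: 7
  forwardingTimeouts:
    dialTimeout: 30s

tcpServersTransport:
  dialTimeout: 30s
  tls: {}
```

Each option is documented individually below.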

HTTP Servers Transports

insecureSkipVerify

Optional, Default=false

insecureSkipVerify disables SSL certificate verification.

```yaml
## Static configuration
serversTransport:
  insecureSkipVerify: true
```

```toml
## Static configuration
[serversTransport]
  insecureSkipVerify = true
```

```sh
## Static configuration
--serversTransport.insecureSkipVerify=true
```

rootCAs

Optional

rootCAs is the list of certificates (as file paths, or data bytes) that will be set as Root Certificate Authorities when using a self-signed TLS certificate.

```yaml
## Static configuration
serversTransport:
  rootCAs:
    - foo.crt
    - bar.crt
```

```toml
## Static configuration
[serversTransport]
  rootCAs = ["foo.crt", "bar.crt"]
```

```sh
## Static configuration
--serversTransport.rootCAs=foo.crt,bar.crt
```

maxIdleConnsPerHost

Optional, Default=2

If non-zero, maxIdleConnsPerHost controls the maximum number of idle (keep-alive) connections to keep per host.

```yaml
## Static configuration
serversTransport:
  maxIdleConnsPerHost: 7
```

```toml
## Static configuration
[serversTransport]
  maxIdleConnsPerHost = 7
```

```sh
## Static configuration
--serversTransport.maxIdleConnsPerHost=7
```

spiffe

Please note that SPIFFE must be enabled in the static configuration before using it to secure the connection between Traefik and the backends.
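Enabling SPIFFE is done by pointing Traefik at a SPIFFE Workload API endpoint in the static configuration; a sketch (the socket path below is an illustrative value, not a default):

```yaml
## Static configuration
spiffe:
  # Address of the SPIFFE Workload API (illustrative path; adjust to your agent's socket)
  workloadAPIAddr: unix:///run/spire/sockets/agent.sock
```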

spiffe.ids

Optional

ids defines the allowed SPIFFE IDs. This takes precedence over the SPIFFE TrustDomain.

```yaml
## Static configuration
serversTransport:
  spiffe:
    ids:
      - spiffe://trust-domain/id1
      - spiffe://trust-domain/id2
```

```toml
## Static configuration
[serversTransport.spiffe]
  ids = ["spiffe://trust-domain/id1", "spiffe://trust-domain/id2"]
```

```sh
## Static configuration
--serversTransport.spiffe.ids=spiffe://trust-domain/id1,spiffe://trust-domain/id2
```

spiffe.trustDomain

Optional

trustDomain defines the allowed SPIFFE trust domain.

```yaml
## Static configuration
serversTransport:
  spiffe:
    trustDomain: spiffe://trust-domain
```

```toml
## Static configuration
[serversTransport.spiffe]
  trustDomain = "spiffe://trust-domain"
```

```sh
## Static configuration
--serversTransport.spiffe.trustDomain=spiffe://trust-domain
```

forwardingTimeouts

forwardingTimeouts covers the timeouts that apply when forwarding requests to the backend servers.

forwardingTimeouts.dialTimeout

Optional, Default=30s

dialTimeout is the maximum duration allowed for a connection to a backend server to be established. Zero means no timeout.

```yaml
## Static configuration
serversTransport:
  forwardingTimeouts:
    dialTimeout: 1s
```

```toml
## Static configuration
[serversTransport.forwardingTimeouts]
  dialTimeout = "1s"
```

```sh
## Static configuration
--serversTransport.forwardingTimeouts.dialTimeout=1s
```

forwardingTimeouts.responseHeaderTimeout

Optional, Default=0s

responseHeaderTimeout, if non-zero, specifies the amount of time to wait for a server's response headers after fully writing the request (including its body, if any). This time does not include the time to read the response body. Zero means no timeout.

```yaml
## Static configuration
serversTransport:
  forwardingTimeouts:
    responseHeaderTimeout: 1s
```

```toml
## Static configuration
[serversTransport.forwardingTimeouts]
  responseHeaderTimeout = "1s"
```

```sh
## Static configuration
--serversTransport.forwardingTimeouts.responseHeaderTimeout=1s
```

forwardingTimeouts.idleConnTimeout

Optional, Default=90s

idleConnTimeout is the maximum amount of time an idle (keep-alive) connection will remain idle before closing itself. Zero means no limit.

```yaml
## Static configuration
serversTransport:
  forwardingTimeouts:
    idleConnTimeout: 1s
```

```toml
## Static configuration
[serversTransport.forwardingTimeouts]
  idleConnTimeout = "1s"
```

```sh
## Static configuration
--serversTransport.forwardingTimeouts.idleConnTimeout=1s
```

TCP Servers Transports

dialTimeout

Optional, Default="30s"

dialTimeout is the maximum duration allowed for a connection to a backend server to be established. Zero means no timeout.

```yaml
## Static configuration
tcpServersTransport:
  dialTimeout: 30s
```

```toml
## Static configuration
[tcpServersTransport]
  dialTimeout = "30s"
```

```sh
## Static configuration
--tcpServersTransport.dialTimeout=30s
```

dialKeepAlive

Optional, Default="15s"

dialKeepAlive defines the interval between keep-alive probes sent on an active network connection. If zero, keep-alive probes are sent with a default value (currently 15 seconds), if supported by the protocol and operating system. Network protocols or operating systems that do not support keep-alives ignore this field. If negative, keep-alive probes are disabled.

```yaml
## Static configuration
tcpServersTransport:
  dialKeepAlive: 30s
```

```toml
## Static configuration
[tcpServersTransport]
  dialKeepAlive = "30s"
```

```sh
## Static configuration
--tcpServersTransport.dialKeepAlive=30s
```

tls

tls defines the TLS configuration to connect with TCP backends.

Optional

An empty tls section enables TLS.

```yaml
## Static configuration
tcpServersTransport:
  tls: {}
```

```toml
## Static configuration
[tcpServersTransport.tls]
```

```sh
## Static configuration
--tcpServersTransport.tls=true
```

tls.insecureSkipVerify

Optional

insecureSkipVerify disables verification of the server's certificate chain and host name.

```yaml
## Static configuration
tcpServersTransport:
  tls:
    insecureSkipVerify: true
```

```toml
## Static configuration
[tcpServersTransport.tls]
  insecureSkipVerify = true
```

```sh
## Static configuration
--tcpServersTransport.tls.insecureSkipVerify=true
```

tls.rootCAs

Optional

rootCAs defines the set of Root Certificate Authorities (as file paths, or data bytes) to use when verifying self-signed TLS server certificates.

```yaml
## Static configuration
tcpServersTransport:
  tls:
    rootCAs:
      - foo.crt
      - bar.crt
```

```toml
## Static configuration
[tcpServersTransport.tls]
  rootCAs = ["foo.crt", "bar.crt"]
```

```sh
## Static configuration
--tcpServersTransport.tls.rootCAs=foo.crt,bar.crt
```

spiffe

Please note that SPIFFE must be enabled in the static configuration before using it to secure the connection between Traefik and the backends.

spiffe.ids

Optional

ids defines the allowed SPIFFE IDs. This takes precedence over the SPIFFE TrustDomain.

```yaml
## Static configuration
tcpServersTransport:
  spiffe:
    ids:
      - spiffe://trust-domain/id1
      - spiffe://trust-domain/id2
```

```toml
## Static configuration
[tcpServersTransport.spiffe]
  ids = ["spiffe://trust-domain/id1", "spiffe://trust-domain/id2"]
```

```sh
## Static configuration
--tcpServersTransport.spiffe.ids=spiffe://trust-domain/id1,spiffe://trust-domain/id2
```

spiffe.trustDomain

Optional

trustDomain defines the allowed SPIFFE trust domain.

```yaml
## Static configuration
tcpServersTransport:
  spiffe:
    trustDomain: spiffe://trust-domain
```

```toml
## Static configuration
[tcpServersTransport.spiffe]
  trustDomain = "spiffe://trust-domain"
```

```sh
## Static configuration
--tcpServersTransport.spiffe.trustDomain=spiffe://trust-domain
```


