The label traefik.backend.loadbalancer.swarm defaults to false with --docker.swarmMode, but I wanted to know whether there are any advantages / disadvantages to turning it on.
From what I've gathered so far, traefik.backend.loadbalancer.swarm will cause Traefik to call the backend service (by its virtual IP) via the swarm mesh routing network, resulting in a single backend record in the dashboard.
Advantage
Not sure; both use round robin by default.
Disadvantage
You lose the following features (see the label sketch after this list):
weighted round robin
sticky sessions, since the swarm load balancer doesn't support them natively right now
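A minimal compose-file sketch of where these labels live in swarm mode, assuming Traefik 1.x label names and a made-up service, host rule, and port: with swarm=true Traefik sends traffic to the service VIP and the mesh does the balancing, while the method and stickiness labels only matter when Traefik balances the individual task IPs itself.

version: "3"
services:
  whoami:
    image: containous/whoami
    deploy:
      replicas: 3
      labels:
        - "traefik.port=80"
        - "traefik.frontend.rule=Host:whoami.example.com"
        # true: one backend record (the swarm VIP); false: one record per task
        - "traefik.backend.loadbalancer.swarm=true"
        # only honoured when swarm=false and Traefik balances the task IPs itself
        - "traefik.backend.loadbalancer.method=wrr"
        - "traefik.backend.loadbalancer.stickiness=true"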
Related
I'm looking into altering the architecture of a hosting service intended to scale arbitrarily.
On a given machine, the service works roughly as follows:
Start a container running a Redis Cluster client that joins a global cluster.
Start containers for each of the "Models" to be hosted.
Use the upstream Redis cluster for managing global model state. Handle namespacing via the keys themselves (sketched below).
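For concreteness, the key-level namespacing looks roughly like this (the model name, key, and cluster endpoint are placeholders):

# -c makes redis-cli follow cluster redirects
redis-cli -c -h <cluster_endpoint> -p 6379 SET model:<model_name>:state '{"step": 42}'
redis-cli -c -h <cluster_endpoint> -p 6379 GET model:<model_name>:state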
I'm wondering if it might be possible to change to something like this:
For each Model, start a container running the Model and a Redis cluster client.
Reverse proxy the Redis service using something like Nginx to be available on a certain path, e.g., <host_ip>:6397/redis-<model_name>. (Note: I can't just proxy from different ports, because in theory this is supposed to be able to scale past 65,535 models running globally.)
Join the Redis cluster by using said path.
Internalizing the Redis service to the container is an appealing idea to me because it is closer to what the hosting service is supposed to achieve. We do want to share compute; we don't want to share a KV store.
Anyway, I haven't seen anything that suggests this is possible, so sticking with the upstream cluster may be my only option. But in case anyone knows otherwise, I wanted to check.
I'm running HAProxy as a TCP load balancer in front of an on-prem Kubernetes cluster. I have set up a small app on each cluster node which returns HTTP 200 when the node is considered healthy. One of the health checks it performs is to query the Kube API and verify the node's status according to K8s itself. Now, if for some reason the Kube API goes down, all nodes will be considered unhealthy at the same time, even though the applications running on the workers are still available.
I'd like to set up HAProxy in such a way that whenever all worker nodes are down according to the health check, HAProxy just assumes they are all alive. If indeed all nodes are down, whether or not traffic is forwarded doesn't matter. If the reason they're all down is that some shared component doesn't respond, just blindly sending traffic will at least keep the service going.
I've searched the HAProxy reference for an option which does this, but I can't seem to find one. I think I should be able to get this behaviour by registering each worker node twice, once regularly and once with the backup option specified. Adding allbackups to the backend would then mean that if all worker nodes are down, all worker nodes are used as backups. That would look like this:
backend workers
    mode tcp
    option httpchk HEAD /
    option allbackups
    server worker-001-1 <address-1> check port 32000
    server worker-001-2 <address-2> check port 32000
    server worker-001-1-backup <address-1> backup
    server worker-001-2-backup <address-2> backup
While this solution seems to work, it feels very hacky. Is there a cleaner way to do this? Is there an option I missed in the reference?
Thanks!
I found a more suitable solution in this answer: https://serverfault.com/a/923624/255500
It boils down to using backend switching rules and creating two backends for each group of clusters:
frontend ingress
    bind *:80 name http
    bind *:443 name https
    bind *:30000-32767 name nodeports
    mode tcp
    default_backend workers
    # switch to the unchecked backend only when every checked worker is down
    use_backend workers_backup if { nbsrv(workers) eq 0 }

backend workers
    mode tcp
    option httpchk HEAD /
    server worker-001-1 <address-1> check port 32000
    server worker-001-2 <address-2> check port 32000

backend workers_backup
    mode tcp
    server worker-001-1 <address-1> no-check
    server worker-001-2 <address-2> no-check
Once backend workers has zero servers up, backend workers_backup will be used. It's still registering each node twice, but I think this is the better solution.
Is it possible that you're trying to solve the wrong problem? If the nodes report as unhealthy when the Kube API is unavailable, shouldn't you focus on making the Kube API highly available?
This article describes a way to create a highly available control plane: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
I have an application with the following architecture:
users ----> HAProxy load balancer (TCP) ----> Application server 1
                                        ----> Application server 2
                                        ----> Application server 3
I am able to scale the application servers out and in. But due to the high number of TCP connections (around 10,000), the RAM on my HAProxy load balancer node fills up, so I cannot accept any more TCP connections after that. So I need to add one more node for HAProxy itself. My question is: how do I scale out the HAProxy node itself, or is there another solution for this?
I am deploying the application on AWS. Your solution is highly appreciated.
Thanks,
Mayank
Use AWS Route53 and create CNAMEs that point to your haproxy instances.
For example:
Create a CNAME haproxy.example.com pointing to haproxy instance1.
Create a second CNAME haproxy.example.com pointing to haproxy instance2.
Both of your CNAMEs should use some sort of routing policy. The simplest is probably round robin (in Route 53 terms, a weighted policy with equal weights), which simply rotates over your list of CNAMEs. When you look up haproxy.example.com you will get the addresses of both instances, and the order of IPs returned will change with each request. This will distribute load evenly between your two instances.
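A rough sketch of what that looks like as a Route 53 change batch (the hosted zone ID, record names, and TTL are placeholders), applied with aws route53 change-resource-record-sets --hosted-zone-id <zone-id> --change-batch file://haproxy-records.json; equal weights split lookups roughly 50/50:

{
  "Comment": "Weighted CNAMEs for the two HAProxy instances (placeholder names)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "haproxy.example.com",
        "Type": "CNAME",
        "SetIdentifier": "haproxy-1",
        "Weight": 50,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "haproxy-instance1.example.com" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "haproxy.example.com",
        "Type": "CNAME",
        "SetIdentifier": "haproxy-2",
        "Weight": 50,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "haproxy-instance2.example.com" }]
      }
    }
  ]
}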
There are many more options depending on your needs: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
Overall this is pretty easy to configure.
A few additional notes: you can also set up health checks and route traffic to the remaining healthy instance(s) if needed. If you're running at capacity with two instances, you might want to add a few more to be able to cope with an instance failing.
As per the (verbose) title: are there any advantages to using Keepalived & HAProxy together as an HA webserver load balancer vs a pure Keepalived solution?
Keepalived works at layer 4, so it has no layer 7 knowledge at all. By using HAProxy and Keepalived together you get the benefit of the options HAProxy provides at layer 7, such as stickiness, sampling and converting information, ACLs and conditions, content switching, stick-tables, formatted strings, HTTP rewriting and redirection, server protection, etc.
If you only need a load balancer without any traffic manipulation or high-level (layer 7) decisions, you can use Keepalived alone, and it will be faster because it works at layer 4.
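In that pure-Keepalived case the balancing itself is done by the kernel's IPVS layer, configured through a virtual_server block. A rough sketch with placeholder addresses:

virtual_server 192.0.2.10 80 {        # the virtual IP clients connect to
    delay_loop 6
    lb_algo rr                        # plain layer-4 round robin
    lb_kind NAT
    protocol TCP

    real_server 10.0.0.11 80 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 10.0.0.12 80 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
}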
Administrators can use both Keepalived and HAProxy together for a more robust and scalable high availability environment. Using the speed and scalability of HAProxy to perform load balancing for HTTP and other TCP-based services in conjunction with Keepalived failover services, administrators can increase availability by distributing load across real servers as well as ensuring continuity in the event of router unavailability by performing failover to backup routers.
keepalived and haproxy
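A minimal sketch of the Keepalived side of that pairing, assuming an interface name, VIP, and check command for your environment: the two HAProxy nodes share a virtual IP via VRRP, and the VIP moves to the peer if the master or its HAProxy process dies.

# keepalived.conf on the MASTER HAProxy node (use state BACKUP and a lower priority on the peer)
vrrp_script chk_haproxy {
    script "pidof haproxy"     # succeeds only while HAProxy is running
    interval 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0             # assumed NIC name
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24          # the shared VIP clients connect to
    }
    track_script {
        chk_haproxy
    }
}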
Can someone explain to me how high availability ("HA") works for a web application ... because I assume HA means that there exists no single point of failure.
However, even if a load balancer is used, isn't that itself a single point of failure?
I have found this article on the subject:
http://www.tenereillo.com/GSLBPageOfShame.htm
Basically, if you do not require long-lasting sticky sessions, you can configure your DNS servers to return multiple A records (IP addresses) for your website.
Web browsers are smart enough to try all the addresses until they find one that works.
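For example, a zone-file sketch with placeholder addresses; both records are returned on every lookup, and clients fall back to the other address if the first one does not respond:

; two A records for the same name
www.example.com.   300  IN  A  203.0.113.10
www.example.com.   300  IN  A  203.0.113.20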
In simple words, high availability can be defined as running a system 24/7 without downtime even if there are hardware and software failures; in other words, a fault-tolerant application. This helps ensure uninterrupted use of the application for its intended users.
Read more on High Availability Deployment Architecture
It works the following way: you set up two HAProxy servers with heartbeat, so when one fails (stops responding to queries), it is removed from the cluster.
Requests from HAProxy can be forwarded to the web servers in round-robin fashion, and if one web server fails, the HAProxy servers do not try to contact it until it is alive again.
The web servers store all dynamic information in a database, which is replicated across two MySQL instances.
As you can see, HAProxy and clustered MySQL (or simply MySQL replication), as well as IP clustering, are the key here.
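A minimal HAProxy sketch of that web tier (server names, addresses, and the health-check path are placeholders):

frontend www
    bind *:80
    mode http
    default_backend web_servers

backend web_servers
    mode http
    balance roundrobin
    option httpchk GET /health
    server web1 10.0.0.11:80 check    # skipped automatically while its health check fails
    server web2 10.0.0.12:80 check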
Sure it is, when operated alone. The usual highly available setup includes 2 or more load balancers running as a cluster in either an active/active or active/passive configuration. To further increase availability you can have 2 different Internet Service Providers (or geo-distributed datacenters), each running a pair of clustered load balancers. Then you configure a DNS A record resolving to 2 distinct public IP addresses, which guarantees round-robin processing and splits DNS requests evenly (CloudFlare is very fast and reliable at this). There is also the possibility of returning the IP address of the datacenter closest to the originating geo location by using something like PowerDNS dnsdist.
This is what big players do to make their services highly available.
Please read https://docs.oracle.com/cd/E23824_01/html/821-1453/gkkky.html for more clarity. Actually, both load balancers use the same VIP (Virtual IP Address: https://techterms.com/definition/vip).
HA architecture is an entire field, and multiple books have been written on it, so it is hard to answer in a short paragraph.
To sum up the ideal situation: you would be using multiple servers, connected to a layer of multiple load balancers. The nodes and LBs would be located in a few different data centers and connected to different network backbones. Ideally the data centers would be located all over the world.
In short, every component has redundancy, including the load balancers.
For a starting point, see Wikipedia for High Availability Cluster