OpenStack L3 load balancer

Is it possible to implement an L3 load balancer in OpenStack?
I want to load-balance incoming traffic to a virtual IP across multiple VMs based on source IP. Do any of the OpenStack Neutron plugins have this feature?
If not, is there any other Linux-based approach that I could use to implement this feature?
HAProxy and OpenStack LBaaS are not suitable for me, as they are L4 load balancers and handle only TCP and UDP traffic.

Yes, it is possible to do L3 load balancing in OpenStack. OpenStack has a project for this called Octavia (Load Balancer as a Service).
Install OpenStack Octavia using the instructions in:
https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html
https://github.com/openstack/octavia
Add the following in /etc/neutron/neutron_lbaas.conf:
service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
Add the following in /etc/neutron/neutron.conf:
[octavia]
base_url=http://<IP address of OpenStack controller node>:9876
Add the following in /etc/octavia/octavia.conf:
[neutron]
service_name = <name of the neutron service in the keystone catalog>
endpoint = <custom neutron endpoint if override is necessary>
Sample configuration file for Octavia is at https://github.com/openstack/octavia/blob/master/etc/octavia.conf.
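Once Octavia is up, source-IP-based balancing is selected per pool. A minimal sketch with the openstack loadbalancer CLI (the names, subnet, and member address are hypothetical); the SOURCE_IP algorithm hashes the client's source IP so a given client consistently lands on the same member:
openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
openstack loadbalancer listener create --name listener1 --protocol TCP --protocol-port 80 lb1
# SOURCE_IP pins each client IP to one member
openstack loadbalancer pool create --name pool1 --lb-algorithm SOURCE_IP --listener listener1 --protocol TCP
openstack loadbalancer member create --subnet-id private-subnet --address 10.0.0.10 --protocol-port 80 pool1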

Related

Portainer - OAuth conf. with multiple cluster nodes

I've installed 3 nodes with Docker Swarm and Portainer:
node1.int.org
node2.int.org
node3.int.org
Portainer uses Google OAuth credentials to authenticate users.
The problem is that in the Redirect URL I can specify only one node (in the image below, node1.int.org). If node1.int.org dies and I use node2.int.org or node3.int.org to log in, the redirect doesn't work!
What is the best practice to solve this problem?
Thank you
Create DNS round-robin (DNSRR) records:
swarm.int.org A IP1
swarm.int.org A IP2
*.swarm.int.org CNAME swarm.int.org
and then use "swarm.int.org" in place of "node1.int.org" when addressing swarm-hosted services.
Bonus Point 1
Use Traefik to handle ssl offloading, so "https://swarm.int.org" can be used to connect to Portainer on the swarm.
Bonus Point 2
Use keepalived or similar to allocate a pool of VIPs and map the DNSRR entries to those. This means that even if nodes go down, the IPs, and thus the DNS entries, keep routing to healthy nodes.
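For the keepalived part, a minimal keepalived.conf sketch (the interface name and VIP are placeholders; the other nodes run a mirrored instance with state BACKUP and a lower priority so the VIP fails over automatically):
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the other swarm nodes
    interface eth0          # adjust to the host's NIC
    virtual_router_id 51
    priority 100            # use a lower value on BACKUP nodes
    advert_int 1
    virtual_ipaddress {
        203.0.113.10        # one of the VIPs behind the DNSRR A records
    }
}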

How to make my Google Cloud Load Balancer work?

I followed the document for creating content-based load balancing: https://cloud.google.com/load-balancing/docs/https/content-based-example
I want to reach the external address over HTTPS, and I want the load balancer to connect to the VMs over plain HTTP.
Both VMs work as expected and return the proper answer when reached by IP address. The LB's settings seem fine: both health checks are passing and the Google SSL certificate is ACTIVE.
However, when I try to reach the Load Balancer's IP address or domain, I get a 502.
The LB IP is 35.244.161.226 (wciel.pl).
The Load Balancer's logs show statusDetails: "failed_to_connect_to_backend"
I attached screenshots of my Google Cloud Console.
Please advise.
me#machine:$ gcloud beta compute ssl-certificates list
NAME TYPE CREATION_TIMESTAMP EXPIRE_TIME MANAGED_STATUS
wciel-pl-certificate2 MANAGED 2019-08-11T03:20:15.971-07:00 2019-11-09T01:27:44.000-08:00 ACTIVE
www.wciel.pl: ACTIVE
I think there is a mismatch in the backend service configuration. From the details of web-map-backend-service, it seems your service is listening on port 80, yet you have configured the backend service with port 443.
If you don't require secure communication between the LB and the VMs, I would recommend the following:
Change the backend protocol from HTTPS to HTTP
Edit the backend port number from 443 to 80
Save and update the configuration.
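The same change can be made from the CLI. A sketch with gcloud (the instance group name and zone are placeholders; web-map-backend-service is the backend service from the question):
# map the named port "http" to 80 on the instance group
gcloud compute instance-groups set-named-ports INSTANCE_GROUP --named-ports=http:80 --zone=ZONE
# switch the backend service to plain HTTP on that named port
gcloud compute backend-services update web-map-backend-service --protocol=HTTP --port-name=http --global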

How to expose minikube service URLs to an outside system

I have an Apache Camel application deployed on Kubernetes. My application is exposed in the Kubernetes cluster and is accessible at http://192.168.99.100:31750, so how do I make it accessible from outside?
I suggest you do two things:
Run an NGINX Ingress Controller in your minikube and expose it with a NodePort service, meaning it will be available on a high port, much like your service right now.
Run HAProxy on the host that runs minikube to forward ports 80/443 to the high ports on minikube (i.e. 80->32080, 443->32443); a sketch follows below.
That way you can expose your Ingress controller on the standard ports and have your services exposed on them with regular Kubernetes Ingress definitions.
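A minimal haproxy.cfg sketch for the forwarding part (192.168.99.100 is the minikube IP from the question; 32080/32443 are the assumed ingress NodePorts):
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend minikube_http

backend minikube_http
    server minikube 192.168.99.100:32080

frontend https_in
    bind *:443
    default_backend minikube_https

backend minikube_https
    # TLS passes through and is terminated by the ingress controller
    server minikube 192.168.99.100:32443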

Cannot link an HTTP Load Balancer to a backend (502 Bad Gateway)

On the backend I have a Kubernetes node running a Service of type NodePort on port 32656. If I create a firewall rule for <node_ip>:32656 to allow traffic, I can open the backend in the browser at this address: http://<node_ip>:32656.
What I'm trying to achieve now is to create an HTTP Load Balancer and link it to the above backend. I use the following script to create the required infrastructure:
#!/bin/bash
GROUP_NAME="gke-service-cluster-61155cae-group"
HEALTH_CHECK_NAME="test-health-check"
BACKEND_SERVICE_NAME="test-backend-service"
URL_MAP_NAME="test-url-map"
TARGET_PROXY_NAME="test-target-proxy"
GLOBAL_FORWARDING_RULE_NAME="test-global-rule"
NODE_PORT="32656"
PORT_NAME="http"
# instance group named ports
gcloud compute instance-groups set-named-ports "$GROUP_NAME" --named-ports "$PORT_NAME:$NODE_PORT"
# health check
gcloud compute http-health-checks create --format none "$HEALTH_CHECK_NAME" --check-interval "5m" --healthy-threshold "1" --timeout "5m" --unhealthy-threshold "10"
# backend service
gcloud compute backend-services create "$BACKEND_SERVICE_NAME" --http-health-check "$HEALTH_CHECK_NAME" --port-name "$PORT_NAME" --timeout "30"
gcloud compute backend-services add-backend "$BACKEND_SERVICE_NAME" --instance-group "$GROUP_NAME" --balancing-mode "UTILIZATION" --capacity-scaler "1" --max-utilization "1"
# URL map
gcloud compute url-maps create "$URL_MAP_NAME" --default-service "$BACKEND_SERVICE_NAME"
# target proxy
gcloud compute target-http-proxies create "$TARGET_PROXY_NAME" --url-map "$URL_MAP_NAME"
# global forwarding rule
gcloud compute forwarding-rules create "$GLOBAL_FORWARDING_RULE_NAME" --global --ip-protocol "TCP" --ports "80" --target-http-proxy "$TARGET_PROXY_NAME"
But I get the following response from the Load Balancer when accessing it through the public IP from the frontend configuration:
Error: Server Error
The server encountered a temporary error and could not complete your
request. Please try again in 30 seconds.
The health check is left with the default values (/ and 80), and the backend service responds quickly with a status 200.
I have also created a firewall rule that accepts any source and all TCP ports, with no target specified (i.e. all targets).
Considering that I get the same result (Server Error) regardless of the port I choose in the instance group, the problem should be somewhere in the configuration of the HTTP Load Balancer (something with the health checks, maybe?).
What am I missing to complete the link between the frontend and the backend?
I assume you actually have instances in the instance group, and that the firewall rule is not restricted to a specific source range. Can you check your logs for a Google health check? (The User-Agent will contain "google".)
What version of Kubernetes are you running? FYI, there's a resource in 1.2 that hooks this up for you automatically: http://kubernetes.io/docs/user-guide/ingress/; just make sure you follow these: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md.
More specifically: in 1.2 you need to create a firewall rule, a Service of type NodePort (both of which you already seem to have), and a health check on that Service at "/" (which you don't have; this requirement is alleviated in 1.3, but 1.3 is not out yet). A sketch of such an Ingress follows below.
Also note that you can't put the same instance into two load-balanced instance groups, so to use the Ingress mentioned above you will have to clean up your existing load balancer (or at least remove the instances from the instance group and free up enough quota so the Ingress controller can do its thing).
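For reference, a minimal Ingress sketch for that 1.2-era setup, applied with kubectl (the Ingress name and Service name are hypothetical; the Service must be the NodePort Service in front of your pods):
kubectl create -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: my-nodeport-service
    servicePort: 80
EOF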
A few of the things already mentioned can be wrong:
firewall rules need to be open to all hosts, or they need to have the same network tag as the machines in the instance group (see the sketch after this list)
by default, the node should return 200 at /; configuring readiness and liveness probes to change this did not work for me
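A sketch of a firewall rule that admits Google's health checkers on the NodePort from the question (130.211.0.0/22 and 35.191.0.0/16 are Google's documented health-check source ranges; the target tag is a placeholder):
gcloud compute firewall-rules create allow-lb-health-checks \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --allow=tcp:32656 \
    --target-tags=YOUR_NODE_TAG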
It seems you are doing by hand things that are all automated, so I can really recommend:
https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
This shows the steps that do the firewall and port forwarding for you, which may also show you what you are missing.
I noticed myself, when using an app on 8080 exposed on 80 (like one of the deployments in the example), that the load balancer stayed unhealthy until I had / returning 200 (and /healthz, which I also added). So basically that container now exposes a webserver on port 8080 returning those responses, and the other config wires it up to port 80.
When it comes to firewall rules, make sure they are open to all machines or make the network tag match, or they won't work.
The 502 error usually comes from the load balancer, which will not pass your request on if the health check does not pass.
Could you make your Service of type LoadBalancer (http://kubernetes.io/docs/user-guide/services/#type-loadbalancer), which would set this all up automatically? This assumes you have the cloud-provider flag set for Google Cloud.
After you deploy, describe the Service by name and it should give you the endpoint that was assigned.
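A sketch of that shortcut with kubectl (the Deployment name my-app and container port 8080 are hypothetical):
# expose the Deployment through a cloud load balancer
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080
# the assigned endpoint appears under "LoadBalancer Ingress"
kubectl describe service my-app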

Clustering doesn't work with mod_cluster on JBoss AS7 - Stateful Application

I'm going to explain my situation.
Background:
I'm running three virtual machines with Debian Jessie on OpenNebula, one as master and the other two as slaves. On them I've installed JBoss AS 7.1 and mod_cluster 1.2.
Goal:
Run a stateful app, so that when I shut down the master server the cluster allows me to keep using the app, with the session shared and the variable values maintained.
I followed this guide with the given web application.
Errors:
I can't access the app directly at http://master/cluster-demo/ as in the guide above; I have to specify the port (8330 for server-three).
When I shut down server-three, the slaves notice that the server is down, but the session is not shared and the application is no longer accessible. This is the output on a slave when I shut down server-three on the master.
Configuration Files
I attach my configuration files:
/opt/jboss/domain/configuration/domain.xml
/opt/jboss/httpd/httpd/conf/httpd.conf
/opt/jboss/domain/configuration/host.xml in the master
/opt/jboss/domain/configuration/host.xml in the slaves
Answer
mod_cluster does not have anything in common with the messaging (JMS, HornetQ) subsystems. The mod_cluster setting also does not have anything in common with the clustering subsystem, i.e. Infinispan and its workhorse, JGroups.
What the AS7 mod_cluster subsystem does is listen for UDP multicast advertisement messages emitted by the Apache HTTP Server mod_cluster modules. When it receives such a message, it registers itself with your Apache HTTP Server load balancer. From that moment on, your registered AS7 "worker" node keeps sending specialized HTTP messages (over TCP), informing the Apache HTTP Server about:
its name (jvmRoute or generated)
its current load
its deployments, i.e. application contexts
aliases etc.
When there are no worker nodes registered with your Apache HTTP Server balancer, there are no contexts, hence there is nowhere to forward your requests to.
According to the configuration you posted, you rely on UDP multicast messages being sent to/received from 224.0.1.105:23364.
OpenNebula, firewall, and UDP multicast
It is possible that OpenNebula doesn't allow UDP multicast between hosts, or that your iptables rules are blocking it. Try this:
use curl on your worker host to access the balancer host -- specifically the VirtualHost where you have the EnableMCPMReceive directive defined (see the sketch after this list)
if it doesn't work, you must fix iptables, SELinux, httpd's allow/deny settings, and such
if it works, it's a good sign that the worker can talk to the balancer
go to your AS7 XML, the modcluster subsystem, and add an attribute to the config: <mod-cluster-config advertise-socket="modcluster" proxy-list="your-httpd-address:port"> -- the address you've just tried with curl
now it should work even without UDP multicast
if you would like to debug your UDP multicast settings in OpenNebula, give it a shot with Advertize.java
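A sketch of that curl check from the worker host (your-httpd-address:port is the placeholder for your EnableMCPMReceive VirtualHost, as in the config above):
# expect an HTTP response from httpd rather than a connection refused/timeout
curl -v http://your-httpd-address:port/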
1.2.0 is too old, do not use vulnerable code
Please do not use mod_cluster 1.2.0 with your Apache HTTP Server. That version is completely obsolete and contains serious bugs, including a code-injection CVE and a severe performance issue. Download mod_cluster 1.3.1.Final for httpd 2.4.x, or build your own from the sources if you need httpd 2.2.x support. If you happen to need any help with that, ask.