I'm trying to load balance a cluster that is exposing port 7654. I've followed the instructions here. When following it exactly (creating the nginx cluster), it works fine, but when I try to apply it to my containers I can't get it to pass the health check. If I use kubectl to expose 7654 with LoadBalancer instead of NodePort, I'm able to connect, so it seems that the container is working fine. Does anyone have any advice for creating a load balancer?
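For reference, this is roughly how I am exposing the port (the deployment name is a placeholder for my actual deployment):

# Exposing the container port directly with a LoadBalancer service works:
kubectl expose deployment my-app --port=7654 --target-port=7654 --type=LoadBalancer

# Exposing it as a NodePort behind the load balancer is what fails the health check:
kubectl expose deployment my-app --port=7654 --target-port=7654 --type=NodePort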
According to https://cloud.google.com/compute/docs/load-balancing/health-checks#overview a successful health check "must return a valid HTTP response with code 200 and close the connection normally within the timeoutSec period". It's possible that your empty response wasn't closing the HTTP connection and adding HTML content caused your backend to close the connection.
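A minimal sketch of what such a handler could look like, assuming your container is (or can wrap) a small Python/Flask service and that the health check probes / on the exposed port:

from flask import Flask

app = Flask(__name__)

# Health-check endpoint: return a complete HTTP 200 response with a small
# body so the load balancer's check succeeds and the connection closes cleanly.
@app.route("/")
def health():
    return "OK", 200

if __name__ == "__main__":
    # Listen on the port the service exposes (7654 in the question).
    app.run(host="0.0.0.0", port=7654)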
I am trying to reach the list of user devices with a GET here:
https://graph.microsoft.com/beta/me/devices
I am using the Graph Explorer at https://developer.microsoft.com/en-us/graph/graph-explorer
I get a 500 error response.
In addition to the Graph Explorer, I also tried making the HTTP request manually using a token for a demo tenant, for a user that has at least 1 registered device. Same result.
Any ideas what could be wrong here?
In case you have network virtual appliances (NVAs), such as a firewall, inspecting the network traffic, ensure that the required ports are allowed.
Also, in case the traffic is force-tunneled, ensure that the UDR routes for the control plane IP addresses and the other required endpoints are added, and that the virtual network gateway route propagation settings are properly set up. Otherwise, you might run into asymmetric routing issues.
By following many docs/tutorials I implemented SSL with Kestrel and the reverse proxy in my SF cluster.
I made it work, but the access point URL is as follows: https://mycluster.westeurope.cloudapp.azure.com:19081
Before I implemented HTTPS, I had a CNAME mycustomdomain.com redirecting to mycluster.westeurope.cloudapp.azure.com, which was working fine.
So now, I would like to know if there is a way to call http://mycustomdomain.com
and reach the actual URI. Is there a way with what I already have in place, through probes/LB rules for example? Or do I have to implement an Application Gateway, use API Management, or something else?
Edit : LBRules+Probes
AppPortProbe : 44338 (backend ssl port in the SF)
FabricGatewayProbe : 19000
FabricHttpGatewayProbe : 19080
SFReverseProxyProbe : 19081
[Rule : Probe]
[AppPortLBRule (TCP/80 to TCP/19081) : 19081]
[LBHttpRule (TCP/19080) : 19080]
[LBRule (TCP/19000) : 19000]
[LBSFReverseProxyRule (TCP/19081 to TCP/44338) : 44338]
Your question is quite broad and could lead to many different answers, but I will try to cover a few options:
In your scenario, to access the same URL you should use https://mycustomdomain.com:19081 instead.
The problem here is that, when you set up the cluster, the certificate used by the cluster is valid only for the domain 'mycluster.westeurope.cloudapp.azure.com'. Your custom domain is not covered by the certificate used by SF, so any request to it will fail certificate validation.
You can skip certificate validation errors in your browser and continue, and in your applications you could do the same, but that is not a friendly way of doing it.
To be able to use the domain without any conflicts, you have to register your own certificate, created for the domain you own.
Because you are using the reverse proxy, you also have to define the certificate in the cluster configuration; look for reverseProxyCertificate in this link.
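As a rough sketch of where that goes (property names as in the ARM template for a Service Fabric cluster resource; the thumbprint is a placeholder for your own certificate):

"reverseProxyCertificate": {
    "thumbprint": "<thumbprint-of-the-certificate-for-your-custom-domain>",
    "x509StoreName": "My"
}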
Something similar happens when deploying applications using SSL, but in your application you can define the certificate you want to use on startup, whereas for the cluster you have to define it in the cluster configuration.
You can find more information here:
Manage service fabric cluster security certificates
If the problem is the port, you have two options:
Create a 'port forwarding' rule in the load balancer to forward any request on port 80 to port 19081. You can find here how to do that with PowerShell (a sketch is also shown after these options).
Update the cluster/service configuration to listen on port 80 instead of 19081. Check it here.
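For option 1, a minimal sketch using the AzureRM PowerShell module (the resource group, load balancer, rule and probe names are placeholders for whatever your cluster actually uses):

# Load the cluster's load balancer (names below are placeholders).
$lb = Get-AzureRmLoadBalancer -ResourceGroupName "myResourceGroup" -Name "LB-mycluster"

# Reuse the existing reverse proxy probe and add a rule that forwards
# port 80 on the front end to 19081 on the back end.
$probe = Get-AzureRmLoadBalancerProbeConfig -LoadBalancer $lb -Name "SFReverseProxyProbe"

$lb | Add-AzureRmLoadBalancerRuleConfig -Name "AppHttpRule" `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe $probe -Protocol Tcp -FrontendPort 80 -BackendPort 19081

# Persist the change back to Azure.
$lb | Set-AzureRmLoadBalancer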
Go to GoDaddy or whichever domain provider you use and add a forwarding/CNAME record pointing to your Azure domain: https://mycluster.westeurope.cloudapp.azure.com
I'm new to haproxy and load balancing. I want to see what happens when a backend host is turned off while the proxy is running.
The problem is, if I turn off one of the backends and refresh the browser, the page immediately exposes a 503 error to the user. After the next page load, it no longer gets the error since presumably that backend has been removed from the pool.
As a test I have set up two backend Flask apps and configured HAProxy to balance them like so:
backend app
mode http
balance roundrobin
server app1 127.0.0.1:5001 check
server app2 127.0.0.1:5002 check
My understanding according to this:
https://www.haproxy.com/doc/aloha/7.0/haproxy/healthchecks.html#check-parameters
is that every 2 seconds the backend hosts are pinged to see if they are up, and they are removed from the pool if they are down. The 5xx error happens in the window between the time I kill the backend and the next health check.
I would think there is a way to get around this 5xx error by having HAProxy perform a little logic such that if a request forwarded from the frontend fails, it removes that failed backend from the pool, switches to another one, and retries the request. This way the user would never see the failure.
Is there a way to do this, or should I try something else so that my user does not get an error?
By default HAProxy will retry 3 times (retries) with 1s intervals against the same backend server. In order to allow it to pick another server, you should set option redispatch (see the config sketch below).
Also consider the following (carefully, as it can be harmful):
decrease fall (default is 3),
decrease error-limit (default is 10) and set on-error to mark-down or sudden-death,
tune health-check intervals with inter/fastinter/downinter.
Note: HAProxy retries only on connection errors (e.g. ECONNREFUSED, as in your case); it will not resend/resubmit the request/data.
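A sketch of how those knobs could look in the backend from the question (values are illustrative, not recommendations; tune them to your own failure tolerance):

backend app
    mode http
    balance roundrobin
    # allow a failed connection attempt to be redispatched to another server
    option redispatch
    retries 3
    # inter: normal check interval; fastinter: while a state change is being
    # confirmed; downinter: while a server is down; fall 2 marks a server
    # down after 2 failed checks
    default-server inter 2s fastinter 500ms downinter 5s fall 2 rise 2 error-limit 10 on-error mark-down
    server app1 127.0.0.1:5001 check
    server app2 127.0.0.1:5002 check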
Config
We have a Play 2.1.0 with AngularJS setup running in production mode.
We have a reverse proxy load balancer set up with Apache 2.2, something like what is mentioned here:
http://www.playframework.com/documentation/2.1.0/HTTPServer
This whole app runs in an iframe, navigated to from a JBoss application.
Problem
Most of the time it works, but sometimes, when the connection is left idle for 2-3 hours (untouched, with no one hitting the reverse proxy URL to load JBoss/Play), we get the 502 proxy error in the iframe content after a few minutes' wait.
Play receives the request, but somehow decides not to respond at all. This occurs only for the first request or two after the wakeup. Then, when we refresh the page, Play receives the request and responds to it properly.
Tried
We took a tcpdump on the Play port and saw all the requests being received, but no response sent from Play in the failed scenario, whereas the same request was answered by Play on subsequent attempts.
X-Forwarded-For: , X-Forwarded-Host: , X-Forwarded-Server: .., Connection: Keep-Alive - all these headers are present in the tcpdump of the request whose response was lost.
We tried KeepAlive with timeouts in the proxy server, without much help. Why doesn't Play respond to the initial connections after the idle state? Is there any configuration we can set to keep it alive?
Workaround
Polling the Play server URL every half an hour from the same server makes this issue no longer reproducible.
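Concretely, the workaround is just a cron entry on that server along these lines (the URL is a placeholder for our actual Play endpoint):

# keep the Play app warm by hitting it every 30 minutes
*/30 * * * * curl --silent --output /dev/null http://localhost:9000/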
Still, any help/suggestions to fix this issue would be really appreciated.
I tried to solve this problem myself. Approaches like the answers mentioned here and here did not change anything.
I then decided to go back to nginx, which I had been using with Play applications before. The setup can be found here. Since then the problem is gone.
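For reference, a minimal sketch of the kind of nginx proxy block I ended up with (host name, port and timeouts are placeholders for my actual setup):

upstream play_app {
    # the Play application listening on its default port
    server 127.0.0.1:9000;
}

server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://play_app;
        # use HTTP/1.1 to the backend and clear the Connection header
        # so stale keep-alive connections are not reused
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 90s;
    }
}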
I have configured a cluster with two instances on GlassFish 3.1.1, with iPlanet Web Server as a load balancer (on the same machine). For the test application provided with GlassFish everything works OK (and this application has session replication enabled).
But when I try to get my own application working, the following situation takes place: it responds when I send requests to the ports of the particular instances (that is, 28080 and 28081), but when I try to send a request through the load balancer (port 81) I get a 404 error. My application does not have session replication enabled yet, but it can at least accept a connection and create a separate session on each instance. I would like to get a similar effect through the load balancer.
So I would like to determine:
Is session replication strictly required for the load balancer to work correctly?
Does anyone know any other reasons for this error?
Message from iPlanet log:
[23/Aug/2012:05:44:16] failure ( 4120) myHost: for host 127.0.0.1 trying to GET /myApp/login.jsp, service-j2ee reports: PWC6117: File "c:/webserver7/https-myHost/docs/myApp/login.jsp" not found
Additional conclusions:
(81 is the http-listener port on iPlanet.)
When I send GET http://localhost:81/testApp, the load balancer passes it to GlassFish and returns the correct site. But when I try the same with my own application, GET http://localhost:81/myApp, iPlanet looks for the site in its own resources (the docs directory, as in the log above).
fragment of myHost-obj.conf:
<Object name="default">
AuthTrans fn="match-browser" browser="*MSIE*" ssl-unclean-shutdown="true"
NameTrans fn="name-trans-passthrough" name="lbplugin" config-file="C:/WebServer7/https-myHost/config/loadbalancer.xml"
NameTrans fn="assign-name" name="perf" from="/.perf"
NameTrans fn="ntrans-j2ee" name="j2ee"
NameTrans fn="pfx2dir" from="/mc-icons" dir="C:/WebServer7/lib/icons" name="es-internal"
PathCheck fn="uri-clean"
PathCheck fn="check-acl" acl="default"
PathCheck fn="find-pathinfo"
PathCheck fn="find-index-j2ee"
PathCheck fn="find-index" index-names="index.html,home.html,index.jsp"
ObjectType fn="type-j2ee"
ObjectType fn="type-by-extension"
ObjectType fn="force-type" type="text/plain"
Service method="(GET|HEAD)" type="magnus-internal/directory" fn="index-common"
Service method="(GET|HEAD|POST)" type="*~magnus-internal/*" fn="send-file"
Service method="TRACE" fn="service-trace"
Error fn="error-j2ee"
AddLog fn="flex-log"
</Object>
First, if you are running the Load Balancer plugin, then you may have a support contract (a GlassFish license is required before you put the plugin into production). If so, calling support is a good option.
To answer your first question, session replication is not required for the Load Balancer to work.
As a shameless plug, I have a 5-part YouTube series on setting this up. You can skip the videos on downloading and installing and go straight to setup/configuration/testing. Based on what you describe, I suspect the issue isn't the plugin itself, but the loadbalancer.xml configuration. Look at loadbalancer.xml and see if myApp is configured; a sketch of the relevant fragment is below.
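As a rough illustration of what to check (element and attribute names follow the load balancer plugin's loadbalancer.xml format, but the hosts, ports and timeouts here are placeholders): the application's context root has to be listed as a web-module for the cluster, otherwise the web server falls back to serving /myApp from its own docs directory.

<loadbalancer>
  <cluster name="cluster1" policy="round-robin">
    <instance name="instance1" enabled="true" disable-timeout-in-minutes="60"
              listeners="http://localhost:28080"/>
    <instance name="instance2" enabled="true" disable-timeout-in-minutes="60"
              listeners="http://localhost:28081"/>
    <!-- myApp must be registered here -->
    <web-module context-root="myApp" enabled="true"
                disable-timeout-in-minutes="30" error-url=""/>
    <health-checker url="/" interval-in-seconds="10" timeout-in-seconds="30"/>
  </cluster>
</loadbalancer>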
Hope this helps.