How to control Jelastic Traffic Distributor via API

Traffic Distributor (https://docs.jelastic.com/traffic-distributor) is a great feature which adds load balancing to your app and enables blue-green deployment.
However, there seems to be no API to control the Traffic Distributor, so it's impossible to automate the rollout of new releases.
Is there a way to do this?

It is possible to create and control the Traffic Distributor via the API.
Let us explain the flow...
First, you should log in to the platform and obtain your session.
This can be done with the following API request:
https://app.{platform_domain}/1.0/users/authentication/rest/signin?login={your_email}&password={your_password}
If you are using Jelastic platform v5.1+, you should perform this request as a POST.
For example, you can do this using curl:
curl 'https://app.{platform_domain}/1.0/users/authentication/rest/signin' -d "login={your_email}&password={your_password}"
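In a script, you can capture the session token for the follow-up requests. A minimal sketch, assuming the response is JSON with a session field and that jq is installed (the shell variables are placeholders):
# Log in and extract the session token from the JSON response
SESSION=$(curl -s "https://app.${PLATFORM_DOMAIN}/1.0/users/authentication/rest/signin" \
  -d "login=${EMAIL}&password=${PASSWORD}" | jq -r '.session')
echo "Session: ${SESSION}"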
Next, you can create the Traffic Distributor using this request (note that the settings JSON must be URL-encoded when passed in the query string; see the curl sketch after the parameter list):
https://appstore.{platform_domain}/InstallApp?envName=[env_name]&session=[your_session]&jps=traffic-distributor&displayName=[disp_env_name]&settings={"extip":true,"balancerCount":1,"routingMethod":"round-robin","range":50,"backend1":"{environment_1}","backend2":"{environment_2}"}
, where
[env_name] - the name of the environment.
[disp_env_name] - the visible name of the environment in the Dashboard.
[your_session] - your session, which can be taken from the response of the previous request.
The necessary settings of the Traffic Distributor are specified inside the JSON:
extip - enables the external IP for the Traffic Distributor (highly recommended!).
balancerCount - the number of balancer nodes inside the Traffic Distributor (default: 1).
routingMethod - defines the traffic routing method.
Possible values are: round-robin, sticky-sessions or failover.
range - defines the percentage of traffic that will be routed to the first environment.
For example:
0 - all requests will be routed to {environment_2},
100 - all requests will be routed to {environment_1},
50 - requests will be balanced equally between the environments.
{environment_1} - the URL of the first environment, e.g. env-XXXXXXX.{platform_domain}
{environment_2} - the URL of the second environment, e.g. env-XXXXXXX.{platform_domain}
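Putting it together, here is a hedged curl sketch of the InstallApp call; the -G flag turns the --data-urlencode parameters into a URL-encoded query string. The env-1111111/env-2222222 hostnames and shell variables are placeholders:
curl -s "https://appstore.${PLATFORM_DOMAIN}/InstallApp" -G \
  --data-urlencode "envName=${ENV_NAME}" \
  --data-urlencode "session=${SESSION}" \
  --data-urlencode "jps=traffic-distributor" \
  --data-urlencode "displayName=${DISP_ENV_NAME}" \
  --data-urlencode 'settings={"extip":true,"balancerCount":1,"routingMethod":"round-robin","range":50,"backend1":"env-1111111.example.com","backend2":"env-2222222.example.com"}'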
After executing this request, the Traffic Distributor will appear in the Jelastic dashboard.
Next, execute the following API request and take the "uniqueName" value from the response (inside the addons section):
https://app.{platform_domain}/1.0/environment/control/rest/getenvinfo?envname=[env_name]&session=[your_session]
, where [env_name] - the name of the created environment with the Traffic Distributor addon.
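To script this step, the addon's uniqueName can be pulled out of the response. A sketch assuming jq is installed; the recursive search is used because the exact JSON path to the addon entry may vary between platform versions:
# Fetch the environment info and grab the first uniqueName found anywhere in it
APP_UNIQUE_NAME=$(curl -s "https://app.${PLATFORM_DOMAIN}/1.0/environment/control/rest/getenvinfo?envname=${ENV_NAME}&session=${SESSION}" \
  | jq -r '[.. | .uniqueName? // empty] | first')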
Now you can manage the settings of the created Traffic Distributor via the following API request:
https://appstore.{platform_domain}/ExecuteAppAction?session=[your_session]&appUniqueName=[app_unique_name]&action=configure&params={"extip":1,"balancerCount":1,"routingMethod":"sticky-sessions","range":50,"backend1":"{environment_1}","backend2":"{environment_2}"}
, where
[app_unique_name] - the value "uniqueName" from the response of the previous request.
The settings inside the JSON are exactly the same as for the InstallApp API request.
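This is the call that makes automated blue-green rollouts possible: re-running the configure action with a growing range gradually shifts traffic to the new environment. A sketch under the same assumptions as above (placeholder shell variables, illustrative step sizes):
# Shift traffic to the new (first) environment in three steps
for RANGE in 10 50 100; do
  curl -s "https://appstore.${PLATFORM_DOMAIN}/ExecuteAppAction" -G \
    --data-urlencode "session=${SESSION}" \
    --data-urlencode "appUniqueName=${APP_UNIQUE_NAME}" \
    --data-urlencode "action=configure" \
    --data-urlencode "params={\"extip\":1,\"balancerCount\":1,\"routingMethod\":\"round-robin\",\"range\":${RANGE},\"backend1\":\"${NEW_ENV}\",\"backend2\":\"${OLD_ENV}\"}"
  sleep 60  # pause to watch error rates before shifting more traffic
done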

Related

My FusionPBX server doesn't accept incoming calls

I have a freshly installed FusionPBX server in the cloud. I created a few extensions and registered my gateway, which in this case is Flowroute. I created the outbound routes and everything looked OK except for incoming calls, which are not working.
In the access control I added all the IP addresses that Flowroute lists on their website. I made sure to add :5080, but it still doesn't work.
I made sure Flowroute is sending to :5080, and I added all the IP addresses to the ACL list. If I use "sngrep" it doesn't even show any incoming calls, and when I check in Flowroute it says "Unavailable - No trunk or registration 604".

HAProxy passive health checking

I'm new to HAProxy and load balancing. I want to see what happens when a backend host is turned off while the proxy is running.
The problem is, if I turn off one of the backends and refresh the browser, the page immediately exposes a 503 error to the user. On the next page load the error is gone, since presumably that backend has been removed from the pool.
As a test I have set up two backend Flask apps and configured HAProxy to balance them like so:
backend app
mode http
balance roundrobin
server app1 127.0.0.1:5001 check
server app2 127.0.0.1:5002 check
My understanding, according to this:
https://www.haproxy.com/doc/aloha/7.0/haproxy/healthchecks.html#check-parameters
is that every 2 seconds the backend hosts are pinged to see if they are up, and they are removed from the pool if they are down. The 5xx error happens in the window between the time I kill the backend and the next health check.
I would think there is a way to get around this 5xx error by having HAProxy perform a little logic: if a request to a backend fails, remove that failed backend from the pool, switch to another one, and retry the request. This way the user would never see the failure.
Is there a way to do this, or should I try something else so that my users don't see an error?
By default, HAProxy will retry 3 times (retries) at 1-second intervals against the same backend. To allow the retry to go to another backend, you should set option redispatch, as shown in the sketch after these notes.
Also consider the following (carefully, as it can be harmful):
decrease fall (default is 3),
decrease error-limit (default is 10) and set on-error to mark-down or sudden-death,
tune the health check intervals with inter/fastinter/downinter.
Note: HAProxy retries only on connection errors (e.g. ECONNREFUSED, as in your case); it will not resend/resubmit the request data.
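Here is a hedged sketch of your backend with those knobs applied; the values are illustrative starting points rather than recommendations, so tune them for your environment:
backend app
mode http
balance roundrobin
retries 3
option redispatch
default-server inter 2s fastinter 500ms downinter 5s fall 2 error-limit 5 on-error mark-down
server app1 127.0.0.1:5001 check
server app2 127.0.0.1:5002 check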

HTTPS Load Balancing Google Container Cluster

I'm trying to load balance a cluster that is exposing port 7654. I've followed the instructions here. When following it exactly (creating the nginx cluster), it works fine, but when I try to apply it to my containers I can't get it to pass the health check. If I use kubectl to expose 7654 with LoadBalancer instead of NodePort, I'm able to connect, so it seems that the container is working fine. Does anyone have any advice for creating a load balancer?
According to https://cloud.google.com/compute/docs/load-balancing/health-checks#overview, a successful health check "must return a valid HTTP response with code 200 and close the connection normally within the timeoutSec period". It's possible that your empty response wasn't closing the HTTP connection, and adding HTML content caused your backend to close the connection.
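You can check this by hand before wiring up the load balancer. A sketch, assuming you can reach a node directly on the service's NodePort (the shell variables are placeholders):
# Expect an HTTP 200 with a body and a normal connection close, within the
# health check's timeoutSec; anything else will fail the GCE health check
curl -v --max-time 5 "http://${NODE_IP}:${NODE_PORT}/"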

Difference in response time between HTTP and HTTPS

I tested my web site with 100 users over both HTTP and HTTPS. The response time obtained with HTTPS is much higher: nearly four times greater than with HTTP. Can anyone explain why the response time is higher with HTTPS compared to HTTP? Or do I need to change an SSL property in JMeter's system.properties? Thanks in advance!
The SSL handshake involves 4 extra messages to establish a connection, so the first request should be something like 4x longer than over plain HTTP. See the SSL handshake diagram for more info.
However, if you see a 4x performance degradation for all requests, that doesn't sound right.
The following JMeter properties control SSL behaviour:
https.sessioncontext.shared - controls whether SSL session contexts are created per thread (if set to false) or shared across threads (if set to true)
https.use.cached.ssl.context - controls whether the cached SSL context is reused between iterations
These properties live in the jmeter.properties file under the /bin folder of your JMeter installation. It's also possible to override them using the -J command-line argument as follows:
jmeter -Jhttps.sessioncontext.shared=true -Jhttps.use.cached.ssl.context=true
See Apache JMeter Properties Customization Guide for more details.
If the above settings don't help, you'll need to review your test plan and perhaps profile the application to see where the extra time is being spent.

Why would an SSL/Basic Authentication WCF service start throwing a 404?

I have a WCF service that has been working flawlessly for 3 months. It is consumed by local clients on the same server that hosts the WCF service, as well as by local network clients. It uses SSL and basic authentication for security.
A few nights ago, the local client (local network clients not affected) started receiving 404 errors whenever it tried to use the service. I am able to open a browser on the server hosting the WCF service, view the WSDL, and even call the "put" command and get the expected "method not allowed". I have confirmed that no software or hardware changes have been made to the hosting server. I have confirmed that the SSL key is valid. I have confirmed that the permissions for the Application Pool are sufficient. I have confirmed that no firewall is running. The only odd thing is the IIS log showing that the first POST does not contain the basic authentication user. However, the next line in the log does, and shows a 200 response. I am not entirely sure whether that log pattern is abnormal. See below. I was hoping somebody could give me another place to research to find the problem. Please let me know.
2010-08-28 10:30:03 192.168.100.100 POST /protected/Service_Name_Here.svc/put - 443 - 192.168.100.100 - 401 2 5 2
2010-08-28 10:30:03 192.168.100.100 POST /protected/Service_Name_Here.svc/put - 443 User_Name_Here 192.168.100.100 - 200 0 0 5
EDIT: The local client that is throwing the error is transferring large files to the WCF service. The local network clients are transferring small files and are not throwing the error. I found this link that suggests the default transferMode="Buffered" will throw a 404 for files above 20 MB. The fix for this person was to change to transferMode="Streamed". However, the "Streamed" setting only allows 1 parameter to be passed to the WCF service. I have multiple parameters, so I need to find a fix for "buffered" mode.
Sounds like that's the correct fix. The caveat is that streamed mode requires custom message contracts; you can't use the "RPC" style that WCF pushes as the default for operations. If you need to provide more than one parameter in a streamed mode transfer, simply add them to your custom message contract.
Here's a nice discussion on the subject from Microsoft.
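For illustration, a minimal sketch of such a message contract (the type and member names are hypothetical): the extra parameters travel as message headers, while the single body member carries the stream, which is what streamed transfer mode requires.
using System.IO;
using System.ServiceModel;

[MessageContract]
public class FileUploadMessage
{
    // Extra "parameters" ride along as headers in streamed mode
    [MessageHeader(MustUnderstand = true)]
    public string FileName;

    [MessageHeader(MustUnderstand = true)]
    public string TargetFolder;

    // Streamed transfer allows exactly one body member, and it must be a Stream
    [MessageBodyMember(Order = 1)]
    public Stream FileData;
}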
If you have problems with message size, be aware that there are 3 levels at which the accepted request size is configured for IIS-hosted services:
WCF - default max message size is 64 KB (maxReceivedMessageSize)
ASP.NET runtime hosting WCF - default max request size is 4 MB (maxRequestLength)
IIS 7 with request filtering installed - default max request size is about 28 MB (maxAllowedContentLength)
If WCF rejects your message, you will probably get a meaningful error, but for ASP.NET and IIS you will get exactly an HTTP 404.
Streaming will not help you unless you change your operations.
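For reference, here is a hedged web.config sketch that raises all three limits to roughly 100 MB. The attribute names are the real ones from the list above, but the binding name and values are illustrative, and the snippet omits the SSL/basic-authentication security settings your service already has:
<configuration>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <!-- WCF level: maxReceivedMessageSize is in bytes (default 65536) -->
        <binding name="LargeTransferBinding" maxReceivedMessageSize="104857600" />
      </basicHttpBinding>
    </bindings>
  </system.serviceModel>
  <system.web>
    <!-- ASP.NET level: maxRequestLength is in kilobytes (default 4096) -->
    <httpRuntime maxRequestLength="102400" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- IIS 7 request filtering level: maxAllowedContentLength is in bytes -->
        <requestLimits maxAllowedContentLength="104857600" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>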