Play 2.1.0 + Apache 2.2 Reverse proxy => 502 proxy error when idle - apache

Config
We have a Play 2.1.0 application with an AngularJS front end running in production mode.
We have a reverse proxy / load balancer set up with Apache 2.2, roughly as described here:
http://www.playframework.com/documentation/2.1.0/HTTPServer
The whole app runs in an iframe that is navigated to from a JBoss application.
Problem
Most of the time it works, but when the connection is left idle for 2-3 hours (untouched, with nobody hitting the reverse proxy URL to load the JBoss/Play pages), we get a 502 Proxy Error in the iframe content after a wait of a few minutes.
Play receives the request but somehow does not respond at all. This happens only for the first request or two after the wakeup; when we refresh the page, Play receives the request and responds properly.
Tried
We took a tcpdump on the Play port: in the failing scenario all the requests are received, but no response is sent by Play, whereas the same request is answered by Play on subsequent attempts.
X-Forwarded-For:, X-Forwarded-Host:, X-Forwarded-Server: .. and Connection: Keep-Alive - all these headers appear in the tcpdump of the request whose response was lost.
We tried KeepAlive with various timeouts in the proxy server, without much help. Why does Play not respond to the first connections after the idle period, and is there any configuration we can set to keep it alive?
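One direction sometimes tried for 502-after-idle symptoms behind Apache 2.2's mod_proxy is to stop Apache from reusing pooled backend connections that may have gone stale during the idle period. This is only a sketch; the backend address and the parameter values are assumptions, not taken from the question:

# Illustrative mod_proxy settings for Apache 2.2; host/port and values are assumptions.
ProxyPreserveHost On
# retry=0: do not sideline the worker after an error; ttl=120: drop idle pooled
# connections after 120 seconds; disablereuse=On: open a fresh backend connection per request.
ProxyPass / http://127.0.0.1:9000/ retry=0 ttl=120 disablereuse=On
ProxyPassReverse / http://127.0.0.1:9000/
# Optional mod_proxy_http environment variable: never reuse a pooled backend
# connection for the first request on a new client connection.
SetEnv proxy-initial-not-pooled 1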
Workaround
Polling the Play server URL every half hour from the same server makes the issue non-reproducible.
Any help or suggestions to fix this issue would still be really appreciated.
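For reference, the polling workaround can be as small as a cron entry on the proxy host; the URL below is hypothetical:

# Hypothetical crontab entry: hit the Play app every 30 minutes to keep the proxy/backend path warm.
*/30 * * * * curl -s -o /dev/null http://localhost:9000/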

I tried to solve this problem myself. Approaches like the answers mentioned here and here did not change anything.
I then decided to go back to nginx, which I had been using with Play applications before. The setup can be found here. Since then the problem has been gone.
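For context, a minimal nginx reverse-proxy block for a Play application usually looks something like the sketch below; the upstream address and server name are assumptions:

# Minimal illustrative nginx reverse proxy for a Play app assumed to listen on 127.0.0.1:9000.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass       http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}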

Related

Kotlin HTTPS request timeout using Ktor HttpClient - suggestions

I have an app that is used in many places across my country, and it communicates with a Flask server (deployed on Apache).
Reception is not very good in some areas of my country. When my users fetch or send data from the server, it sometimes loads for too long and then fails or gets stuck. I think something is lost in the communication between the app and the server. It also happens from 6 AM onwards, when I guess there is more activity/load on the server.
What is the best configuration to apply in this case for the HTTP request?
This is what I use:
requestTimeoutMillis = 15000
connectTimeoutMillis = 10000
socketTimeoutMillis = 10000
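For reference, here is a minimal sketch of where those values sit in a Ktor client, plus a retry plugin that can paper over transient failures. The Ktor 2.x package names, the CIO engine, and the retry settings are assumptions, not from the question:

// Minimal Ktor 2.x client sketch; engine choice and retry values are illustrative.
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.plugins.*

val client = HttpClient(CIO) {
    install(HttpTimeout) {
        requestTimeoutMillis = 15_000  // whole call, as in the question
        connectTimeoutMillis = 10_000  // TCP connect
        socketTimeoutMillis = 10_000   // maximum gap between packets
    }
    install(HttpRequestRetry) {
        retryOnServerErrors(maxRetries = 3)  // retry 5xx responses
        exponentialDelay()                   // back off between attempts
    }
}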
Furthermore, does anybody have a suggestion for an approach that can "hide" this behaviour from the user? They tell me that if it gets stuck, they go browse some website like Facebook, come back to the app, press the send button again, and then it works.
Thanks.

What can I do to fix a 504 gateway timeout error?

I have been using jQuery to try to pull data from an API, but I am getting a 504 error. This happens even when I use Postman to test the endpoint. Can anyone suggest what I need to do to get around this?
I recently experienced this issue on one of my apps that was making an ambitious call to its Firebase database - it was grabbing a very large record, which took over 60 seconds (the default timeout) to retrieve.
For those experiencing this error who have access to their app or site's hosting environment and are proxying through NGINX, the issue can be fixed by extending the timeout for API requests.
In /etc/nginx/sites-available/default or /etc/nginx/nginx.conf, add the following directives (inside the http, server, or location block):
proxy_connect_timeout 240;
proxy_send_timeout 240;
proxy_read_timeout 240;
send_timeout 240;
Run sudo nginx -t to check the syntax, and then sudo service nginx restart.
This effectively quadruples the time before NGINX times out your API requests (the default being 60 seconds, the new timeout being 240 seconds).
Hope this helps!
There is nothing you can do.
You are sending a request to a server. This particular request fails because the server in turn sends a request to an upstream proxy and gets a timeout error; your server reports this back to you as status 504.
The only way to fix it is to fix the proxy (make it respond in a timely manner) or to change the server so it does not rely on that proxy. Both are outside your control.
You cannot prevent such errors. What you can do is decide what the user experience should be when such a problem happens, and implement it. By the way, if you get 504 errors, you should also expect plain timeout errors. Say you make a request to your server with a 60-second timeout, and your server makes a request to the proxy with a 60-second timeout. Because both timeouts are the same, sometimes your server will receive the proxy timeout first and send it to you (status 504), and sometimes your own request to the server will time out just before that, and you get a timeout error instead.
One way to fix this is to change the proxy settings to "No proxy" in the browser.
This works in Firefox; I don't know about other browsers.
Follow these steps:
1. Go to Preferences (top-right corner, three-line menu, then search in the dropdown).
2. Search for "proxy" (Ctrl+F).
3. Open the proxy settings.
4. Select "No proxy" among the choices below "Configure Proxy Access to the Internet".
5. Refresh the web page.

HAProxy passive health checking

I'm new to HAProxy and load balancing. I want to see what happens when a backend host is turned off while the proxy is running.
The problem is that if I turn off one of the backends and refresh the browser, the page immediately exposes a 503 error to the user. On the next page load the error is gone, presumably because that backend has been removed from the pool.
As a test I have set up two backend Flask apps and configured HAProxy to balance them like so:
backend app
mode http
balance roundrobin
server app1 127.0.0.1:5001 check
server app2 127.0.0.1:5002 check
My understanding according to this:
https://www.haproxy.com/doc/aloha/7.0/haproxy/healthchecks.html#check-parameters
is that the backend hosts are pinged every 2 seconds to see whether they are up, and removed from the pool if they are down. The 5xx error happens in the window between the moment I kill the backend and the next check.
I would think there is a way to get around this 5xx error by having HAProxy apply a little logic: if a request from the frontend fails, remove the failed backend from the pool, switch to another one, and retry the request, so that the user never sees the failure.
Is there a way to do this, or should I try something else so that my user does not get an error?
By default HAProxy retries 3 times (retries) at 1-second intervals against the same backend server. To allow it to pick another server instead, you should set option redispatch.
Also consider (carefully, as these can be harmful):
decrease fall (default is 3),
decrease error-limit (default is 10) and set on-error to mark-down or sudden-death,
tune health-check intervals with inter/fastinter/downinter.
Note: HAProxy retries only on connection errors (e.g. ECONNREFUSED, as in your case); it will not resend or resubmit the request data.
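Putting those options together, a backend along these lines is one way to sketch it; the timing values and limits are illustrative, not recommendations:

backend app
mode http
balance roundrobin
retries 3
# After the retries are exhausted, allow dispatching to a different server.
option redispatch
server app1 127.0.0.1:5001 check inter 2s fastinter 500ms downinter 5s fall 2 error-limit 5 on-error mark-down
server app2 127.0.0.1:5002 check inter 2s fastinter 500ms downinter 5s fall 2 error-limit 5 on-error mark-down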

Apache upload failed when file size is over 100k

Below is some information about my problem.
Our Apache 2.2 runs on Windows Server 2008.
Basically, the problem is that users fail to upload files bigger than 100 KB to our server.
The error in Apache log is: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. : Error reading request entity data, referer: ......
A few times (not always) I could upload larger files (100 KB-800 KB; a 20 MB file always failed) in Chrome. In Firefox 4 uploading a file over 100 KB always fails, and IE8 behaves similarly to Firefox 4.
It seems Apache fails to read the request from the client, so I reset TimeOut in the Apache settings to the default value (300), which did not help at all.
I do not have the LimitRequestBody directive set and I am not using PHP. Has anyone seen a similar error before? I am not sure what to try next; any advice would be appreciated!
Edit:
I just tried uploading files via Remote Desktop on the server itself and it worked fine. My first thought was the firewall, but that is off all the time; an HTTP proxy is in use, though.
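For reference, the two directives mentioned above live in httpd.conf; the lines below only illustrate their default values (they are not a known fix for this problem):

# Illustrative httpd.conf lines showing the defaults of the directives discussed above.
# TimeOut: seconds Apache waits on certain network I/O, including reading the request body.
TimeOut 300
# LimitRequestBody: 0 means no limit on the request body size (the default).
LimitRequestBody 0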

Long delays and messed up AJAX responses when running Pylons app via Apache

I have a Pylons app that I'm trying to set up using Apache and FCGI. The Pylons INI file has this in it:
[server:main]
use = egg:Flup#fcgi_thread
host = 0.0.0.0
port = 40100
This used to work on an old CentOS server with Pylons 0.9.7, but now I'm trying to set it up on a new one running Ubuntu 10.04 and Pylons 1.0. I can connect to the app and load the main page, but it's very slow. The page then makes AJAX requests, and the HTTP responses to those are all messed up: sometimes I get half of the response text (e.g. half a GUID that the server sent), other times there are HTTP headers and binary junk in the body of the response. Each response is also delayed by about 15 seconds. The app works fine on the same server when using Paster directly.
I've never seen anything like this before. Any idea what's going on?
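For context, the Apache side of a Flup FCGI setup like this is commonly wired up with mod_fastcgi's external-server directive, roughly as below; the file paths and the choice of mod_fastcgi are assumptions, since the question does not show the Apache configuration:

# Hypothetical mod_fastcgi wiring pointing Apache at the Flup FCGI server above.
# The .fcgi path is a virtual handle; it does not need to exist on disk.
FastCgiExternalServer /var/www/pylonsapp.fcgi -host 127.0.0.1:40100
Alias / /var/www/pylonsapp.fcgi/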
In case anyone else runs into this, turning off the gzip module in Apache fixed the problem. I still don't know why it happened.
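For anyone looking for the concrete step: on Ubuntu the "gzip module" is most likely mod_deflate, and disabling it looks roughly like this (which module was actually disabled is an assumption):

# Assumes the compression was done by mod_deflate, Apache's usual gzip module on Ubuntu.
sudo a2dismod deflate
sudo service apache2 restart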