I am using Superset as my data visualization tool, but I am getting a 504 Gateway Timeout when I try to run a long-running query.
My original query took 40 seconds in the BigQuery console, but through Superset I get the 504 error after 50 seconds.
I have set SUPERSET_WEBSERVER_TIMEOUT = 300 in superset_config.py and also run superset runserver -t 300.
From the Superset documentation[1]:
"If you are seeing timeouts (504 Gateway Time-out) when loading dashboard or explore slice, you are probably behind gateway or proxy server (such as Nginx). If it did not receive a timely response from Superset server (which is processing long queries), these web servers will send 504 status code to clients directly."
Adjusting the timeout on Superset won't help you in this case, since Superset has no control over the early response from your intermediate proxy. See if you can bypass the proxy, or raise the proxy's timeout instead.
[1] https://superset.incubator.apache.org/faq.html#why-are-my-queries-timing-out
I had the same problem; here is what to do.
Add the following inside the http {} block of /etc/nginx/nginx.conf:
uwsgi_read_timeout 600s;
proxy_connect_timeout 600s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
fastcgi_send_timeout 600s;
fastcgi_read_timeout 600s;
If you are using gunicorn, be sure to launch your app with a larger timeout as well (mine was 60 seconds, so I raised it to 600 seconds; see the sketch below).
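For reference, the relevant flag is gunicorn's --timeout, the number of seconds a worker may stay silent before it is killed and restarted. A minimal sketch, with the bind address and WSGI entry point as placeholders:
# myapp:app is a placeholder for your actual WSGI entry point
gunicorn --workers 4 --timeout 600 --bind 0.0.0.0:8088 myapp:app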
I'm having an issue with Safari not displaying a video in an HTML page I'm returning, whereas it works in other browsers. There are multiple parts to the system, but the concept is that an NGINX proxy sits in front of an Express web server, which can further proxy users to a different location for their resources using http-proxy.
I have looked at many different discussions about this, but none of them has resolved my issue.
I tried curling all the different parts of the system to make sure they all support byte-range requests and partial responses, and everything seems fine to me:
Curling the NGINX:
curl -k --range 0-1000 https://nginx-url.com/videos/elements-header.mp4 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1001  100  1001    0     0   1082      0 --:--:-- --:--:-- --:--:--  1080
Curling the express (to bypass nginx):
curl -k --range 0-1000 http://express-url.com/videos/elements-header.mp4 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1001  100  1001    0     0   2166      0 --:--:-- --:--:-- --:--:--  2161
Curling the final resource:
curl -k --range 0-1000 https://final-resource.com/videos/elements-header.mp4 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1001  100  1001    0     0   1041      0 --:--:-- --:--:-- --:--:--  1040
When I use curl's verbose output, all of them return a 206 status code as well, so it looks to me as if Safari's requirements are satisfied.
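For completeness, the response headers themselves can also be inspected; a quick sketch using the same placeholder URL as above:
curl -k -s -D - -o /dev/null --range 0-1000 https://nginx-url.com/videos/elements-header.mp4
# a range-capable server should answer with something like:
#   HTTP/1.1 206 Partial Content
#   Content-Range: bytes 0-1000/<total size>
#   Accept-Ranges: bytes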
Also here's my nginx configuration:
location / {
    server_tokens off;
    proxy_pass http://localhost:8085;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    if ( $request_filename ~ "^.*/(.+\.(mp4))$" ){
        set $fname $1;
        add_header Content-Disposition 'inline; filename="$fname"';
    }

    more_clear_headers 'access-control-allow-origin';
    more_clear_headers 'x-powered-by';
    more_clear_headers 'x-trace';
}
and there isn't any additional http or server configuration that might interfere with any of this. The server is listening on 443 using SSL with HTTP/2. I also tried adding add_header Accept-Ranges bytes; and proxy_force_ranges on;, but that didn't help either; a sketch of that variant is below.
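A minimal sketch of how those two directives can sit in the existing location block (same illustrative upstream as above):
location / {
    proxy_pass http://localhost:8085;
    # advertise byte-range support to clients
    add_header Accept-Ranges bytes;
    # serve byte ranges even if the upstream omits Accept-Ranges
    proxy_force_ranges on;
}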
Does anyone have any idea what I'm doing wrong? Attached are the requests and responses I'm getting in Safari: the initial byte-range request, which checks whether my servers are compatible, and the follow-up that's supposed to fetch the data, which fails for some reason. Any help is appreciated.
[Screenshot: initial byte-range request]
[Screenshot: follow-up request that fails]
My Rails app, which has been working great for years, suddenly started returning Cloudflare 520 errors. Specifically, backend calls to api.exampleapp.com return the 520, whereas hits to the frontend www.exampleapp.com subdomain work just fine.
The hard part is that nothing has changed in my configuration or code at all. Cloudflare believes this is happening because the Heroku server is returning an empty response.
> GET / HTTP/1.1
> Host: api.exampleapp.com
> Accept: */*
> Accept-Encoding: deflate, gzip
>
{ [5 bytes data]
* TLSv1.2 (IN), TLS alert, close notify (256):
{ [2 bytes data]
* Empty reply from server
* Connection #0 to host ORIGIN_IP left intact
curl: (52) Empty reply from server
error: exit status 52
On the Heroku end, my logs don't even register the request when I hit any of these URLs. I also double-checked my SSL setup (an Origin Certificate created at Cloudflare and installed on Heroku), just in case, and it seems to be correct and is not expired.
The app has been down for a couple of days now, users are complaining, and there has been no response from either customer care team despite my being a paid customer. My DevOps knowledge is fairly limited.
Welcome to the club: https://community.cloudflare.com/t/sometimes-a-cf-520-error/288733
It seems to be a Cloudflare issue introduced in late July that affects hundreds of sites running very different configurations. It's been almost a month since the issue was first reported; Cloudflare has "fixed" it twice, but it's still there. Very frustrating.
Set your web server's log level to info and check whether your application is exceeding some HTTP/2 directive limit while processing the connection.
If that is the case, try increasing the directive's size:
#nginx
server {
    ...
    http2_max_field_size 64k;
    http2_max_header_size 64k;
}
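To surface those messages in the first place, nginx's log level can be raised to info (log path is illustrative):
#nginx
server {
    ...
    error_log /var/log/nginx/error.log info;
}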
I created a Python API on OpenShift Online with the Python image. If you request all the data, it takes more than 30 seconds to respond, and the server gives a 504 Gateway Timeout HTTP response. How do you configure how long a response may take? I created an annotation on the route, which seems to set the proxy timeout:
haproxy.router.openshift.io/timeout: 600s
The problem remains, but I now have logging, and it looks like the message comes from mod_wsgi.
I want to try altering the configuration of the httpd (mod_wsgi-express) process from request-timeout 60 to request-timeout 600. Where do you configure this? I am using the base image https://github.com/sclorg/s2i-python-container/tree/master/2.7
Logging:
Timeout when reading response headers from daemon process 'localhost:8080':/tmp/mod_wsgi-localhost:8080:1000430000/htdocs
Does someone know how to fix this error on OpenShift Online?
In addition to altering the HAProxy timeout on my app's route:
haproxy.router.openshift.io/timeout: 600s
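If you manage the route from the CLI, the same annotation can be applied with oc (the route name is a placeholder):
oc annotate route my-api haproxy.router.openshift.io/timeout=600s --overwrite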
I altered the request-timeout and socket-timeout in the app.sh of my Python application, so the mod_wsgi-express server is configured with a higher timeout:
ARGS="$ARGS --request-timeout 600"
ARGS="$ARGS --socket-timeout 600"
My application now waits 10 minutes before cancelling a request.
I am trying to run background processes on an Elastic Beanstalk single-worker instance within a Docker container, and I have not been able to execute a request/job for longer than 60 seconds without getting a 504 timeout.
Looking at the log files provided by AWS, the issue begins with the following error:
[error] 2567#0: *37 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 127.0.0.1, server: , request: "POST /queue/work HTTP/1.1", upstream: "http://172.17.0.3:80/queue/", host: "localhost"
Does anyone know if it is possible to increase the limit from 60 seconds to a longer period? I would like to generate some reports that will take 3 to 4 minutes to process.
I have increased the NGINX timeout settings within .ebextensions/nginx-timeout.config without any results.
files:
  "/etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      proxy_connect_timeout 600;
      proxy_send_timeout 600;
      proxy_read_timeout 600;
      send_timeout 600;
commands:
  "00nginx-create-proxy-timeout":
    command: "if [[ ! -h /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ]] ; then ln -s /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy-timeout.conf /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy-timeout.conf ; fi"
I have also increased the PHP max_execution_time within a custom php.ini:
max_execution_time = 600
Any help will be greatly appreciated.
Maybe check the load balancer's idle timeout setting; the default is 60 seconds: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html?icmpid=docs_elb_console
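For a classic load balancer, that setting can also be raised from the CLI; a sketch with the load balancer name as a placeholder:
# raise the idle timeout from the default 60s to 600s
aws elb modify-load-balancer-attributes \
    --load-balancer-name my-load-balancer \
    --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":600}}"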
Maybe someone knows what to do. I'm trying to upload files greater than 3 GB. There are no problems if I upload files up to 2 GB with the following configs:
Nginx:
client_max_body_size 5g;
client_body_in_file_only clean;
client_body_buffer_size 256K;
proxy_read_timeout 1200;
keepalive_timeout 30;
uwsgi_read_timeout 30m;
UWSGI options:
harakiri 60
harakiri 1800
socket-timeout 1800
chunked-input-timeout 1800
http-timeout 1800
When I upload a big (almost 4 GB) file, it uploads about 2-2.2 GB and stops with this error:
[uwsgi-body-read] Timeout reading 4096 bytes. Content-Length: 3763798089 consumed: 2147479552 left: 1616318537
(Note that the consumed count, 2147479552 bytes, is just under 2 GiB, which matches the point where the uploads stall.)
Which params should I use?
What ended up solving my issue was setting:
uwsgi.ini
http-timeout = 1200
socket-timeout = 1200
nginx_site.conf
proxy_read_timeout 1200;
proxy_send_timeout 1200;
client_header_timeout 1200;
client_body_timeout 1200;
uwsgi_read_timeout 20m;
After stumbling upon a similar issue with large files (>1 GB), I collected further info from a GitHub issue, a Stack Overflow thread, and several more sources. What ended up happening was that Python/uWSGI took too long to process the large file, and nginx stopped listening to uWSGI, leading to a 504 error. So increasing the timeout for HTTP and socket communication ended up resolving it.
I have similar problems with nginx and uWSGI, with the same limit at about 2-2.2 GB file size. nginx properly accepts the POST request, but when it forwards the request to uWSGI, uWSGI just stops processing the upload after about 18 seconds (zero CPU; lsof shows that the file in the uWSGI temp dir no longer grows). Increasing any timeout values does not help.
What solved the issue for me was disabling request buffering in nginx (proxy_request_buffering off;) and setting up post-buffering in uWSGI with a 2 MB buffer size (a sketch of the nginx side follows the settings):
post-buffering = 2097152
post-buffering-bufsize = 2097152
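For reference, a minimal sketch of where the nginx side of that change lives (upstream address is illustrative):
location / {
    # stream the request body to the upstream as it arrives,
    # instead of spooling the whole upload to disk first
    proxy_request_buffering off;
    proxy_pass http://127.0.0.1:8000;
}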