Is there any way to increase the Cloudflare proxy request timeout limit (524)? [duplicate] - ssl

Is it possible to increase CloudFlare's time-out? If yes, how?
My code takes a while to execute and I wasn't planning on Ajaxifying it in the coming days.

No, CloudFlare only offers that kind of customisation on Enterprise plans.
CloudFlare will time out if it fails to establish an HTTP handshake after 15 seconds.
CloudFlare will also wait 100 seconds for an HTTP response from your server before showing you a 524 timeout error.
Beyond that, there can also be timeouts on your origin web server.
It sounds like you need Inter-Process Communication. HTTP should not be used as a mechanism for performing blocking tasks without sending responses; this kind of activity should instead be handed off to a non-HTTP service on the server. By using RabbitMQ (or any other message queue) you can then pass messages from the HTTP-facing part of your application over to the processing service on your web server.
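A minimal sketch of that hand-off, assuming the pika client and a RabbitMQ broker on localhost (the queue name and payload are illustrative, not from the original answer):

import json
import pika

# Connect to a RabbitMQ broker running on the web server itself.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="long_tasks", durable=True)  # survive broker restarts

# Publish the job description; a separate worker consumes it and does the slow work.
channel.basic_publish(
    exchange="",
    routing_key="long_tasks",
    body=json.dumps({"task": "generate_report", "user_id": 42}),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()

The HTTP handler only enqueues the job and returns immediately, so nothing ever sits inside Cloudflare's 100-second window.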

I was in communication with Cloudflare about the same issue, and also with RabbitMQ's technical support.
RabbitMQ suggested using Web STOMP, which relies on WebSockets. However, Cloudflare suggested:
Websockets would create a persistent connection through Cloudflare and
there's no timeout as such, but the best way of resolving this would
be just to process the request in the background and respond asynchronously, serving a 'Loading...' page or similar rather than having the user wait for 100 seconds. That would also give a better user experience.
UPDATE:
For completeness, I will also record here that
I also asked CloudFlare about running the report via a subdomain and "grey-clouding" it and they replied as follows:
I would suggest verifying why it takes more than 100 seconds to generate the
reports. Disabling Cloudflare on the sub-domain allows attackers to
learn your origin IP, and attackers will then attack it directly,
bypassing Cloudflare.
FURTHER UPDATE
I finally solved this problem by running the report in a background thread and using AJAX to "poll" whether the report had been created. See Bypassing CloudFlare's time-out of 100 seconds.
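A hypothetical Flask sketch of that thread-plus-polling approach (the routes, the in-memory job store and run_long_report are placeholders, not the code from the linked answer):

import threading
import uuid
from flask import Flask, jsonify

app = Flask(__name__)
jobs = {}  # job_id -> {"done": bool, "result": ...}; use a real store in production

def run_long_report():
    # stand-in for the real, slow report generation
    import time
    time.sleep(120)
    return "report contents"

def build_report(job_id):
    jobs[job_id] = {"done": True, "result": run_long_report()}

@app.route("/report/start", methods=["POST"])
def start_report():
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"done": False, "result": None}
    threading.Thread(target=build_report, args=(job_id,), daemon=True).start()
    return jsonify({"job_id": job_id})  # returns well within Cloudflare's 100 seconds

@app.route("/report/status/<job_id>")
def report_status(job_id):
    return jsonify(jobs.get(job_id, {"done": False, "result": None}))

The browser calls /report/start once, then polls /report/status/<job_id> every few seconds until "done" is true.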

Cloudflare doesn't trigger 504 errors on timeout
504 is a timeout triggered by your server - nothing to do with Cloudflare.
524 is a timeout triggered by Cloudflare.
See: https://support.cloudflare.com/hc/en-us/articles/115003011431-Troubleshooting-Cloudflare-5XX-errors#502504error
524 error? There is a workaround:
As #mjsa mentioned, Cloudflare only offers timeout settings to Enterprise clients, which is not an option for most people.
However, you can disable Cloudflare proxying for that specific (sub)domain by turning the orange cloud into grey:
(Screenshots in the original answer show the DNS record before and after: the cloud icon toggled from orange, "Proxied", to grey, "DNS only".)
Note: this disables extra functionality for that specific (sub)domain, including IP masking and SSL certificates.
As Cloudflare state in their documentation:
If you regularly run HTTP requests that take over 100 seconds to
complete (for example large data exports), consider moving those
long-running processes to a subdomain that is not proxied by
Cloudflare. That subdomain would have the orange cloud icon toggled to
grey in the Cloudflare DNS Settings. Note that you cannot use a Page
Rule to circumvent Error 524.

I know this cannot be treated as a proper solution, but there are two ways of avoiding the problem.
1) Since this timeout is usually caused by something that takes a long time to generate, that kind of work can be run from crontab, or, if you have SSH access, by invoking PHP directly on the command line. In that case the connection is not served through Cloudflare, so it can run for as long as your configuration allows. Look up how to run scripts from the command line, or how to schedule them in crontab, using /usr/bin/php /direct/path/to/file.php (see the example entry after this list).
2) You can create a subdomain that is not added to Cloudflare, move your script there, and run it directly through its URL, an Ajax call, or whatever else fits.
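For illustration, a crontab entry that runs the report nightly at 02:00 could look like this (the schedule and the log path are just examples; the PHP path is the one mentioned above):

0 2 * * * /usr/bin/php /direct/path/to/file.php >> /var/log/report.log 2>&1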
There is a good answer on Cloudflare community forums about this:
If you need to have scripts that run for longer than around 100 seconds without returning any data to the browser, you can’t run these through Cloudflare. There are a couple of options: Run the scripts via a grey-clouded subdomain or change the script so that it kicks off a long-running background process and quickly returns a status which the browser can poll until the background process has completed, at which point the full response can be returned. This is the way most people do this type of action as keeping HTTP connections open for a long time is unreliable and can be very taxing also.
This topic on Stack Overflow is high in SERPs, so I decided to write down this answer for those who will find it useful.

https://support.cloudflare.com/hc/en-us/articles/115003011431-Troubleshooting-Cloudflare-5XX-errors#502504error
Cloudflare 524 error results from a web page taking more than 100 seconds to completely respond.
This can be overridden to (up to) 600 seconds ... if you change to an "Enterprise" Cloudflare account. The cost of Enterprise is roughly $40k per year (annual contract required).

If you are fetching your results with curl, you can use the --resolve option to connect directly to your origin IP instead of going through the Cloudflare proxy:
For example:
curl --max-time 120 -s -k --resolve lifeboat.com:443:127.0.0.1 -L https://lifeboat.com/blog/feed

The simplest way to do this is to increase your proxy waiting timeout.
If you are using Nginx, for instance, you can simply add this line in /etc/nginx/sites-available/your_domain:
location / {
...
proxy_read_timeout 600s; # raises the timeout to 10 minutes; adjust as your needs require
...
}
If the issue persists, make sure you use Let's Encrypt to secure your server alongside Nginx, and then disable the orange cloud on that specific subdomain in Cloudflare.
Here are some resources you can check to help with that:
installing-nginx-on-ubuntu-server
secure-nginx-with-let's-encrypt
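As a rough illustration of the Let's Encrypt step (assuming Certbot with its nginx plugin is installed, and your_domain stands in for the real name):

sudo certbot --nginx -d your_domain -d www.your_domain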

Related

HTTPS connection stops working after a few minutes

I have the following setup:
Service Fabric cluster running 5 machines, with several services running in Docker containers
A public IP which has port 443 open, forwarding to the service running Traefik
Traefik terminates the SSL, and proxies the request over to the service being requested over HTTP
Here's the behavior I get:
The first request to https:// is very, very slow. Chrome will usually load it eventually, after a few timeouts or "no content" errors. Invoke-WebRequest in PowerShell usually just times out with a "The underlying connection was closed" message.
However, once it loads, I can refresh things or run the command again and it responds very, very quickly. It'll work as long as there's regular traffic to the URL.
If I leave it for a bit (not sure of the exact time, but definitely a few minutes) it dies and goes back to the beginning.
My Question:
What would cause SSL handshakes to just break or take forever? What component in this stack is to blame? Is something in Service Fabric timing out? Is it a Traefik thing? I could switch over to Nginx if it's more stable. We use these same certs on IIS, and we don't have this problem.
I could use something like New Relic to constantly send a ping every minute to keep things alive, but I'd rather figure out why the connection is dying after a few minutes.
What are the best ways to go about debugging this? I don't see anything in the Traefik log files (In DEBUG mode), in fact when it doesn't connect, there's no record of the request in the access logs at all. Any tools that could help debug this? Thanks!
Is the Traefik service healthy on all 5 nodes? Can you inspect the logs of all 5 instances? If not, this might cause the Azure Load Balancer to balance across nodes where Traefik is not listening, which would cause intermittent and slow responses. Once a healthy Traefik responds, you'll get a sticky session cookie which will then make subsequent responses faster. You can enable Application Insights monitoring for the Traefik logs to save you crawling across all the machines: https://github.com/jjcollinge/traefik-on-service-fabric#debugging. I'd also recommend testing this without SSL to ensure Traefik can route correctly over HTTP first, and then add HTTPS. That way you'll know whether it's something to do with the SSL configuration (i.e. certificates mounted correctly, the Traefik toml config, trusted certificates, etc.).
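One quick way to see where the first slow request spends its time (plain curl timing output, nothing specific to Traefik or Service Fabric; the URL is a placeholder):

curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n' https://your-cluster-endpoint/

If time_appconnect dominates on the first request but not on the fast follow-ups, the stall is in the TLS handshake rather than in the backend service.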

infinite timeout for reverse proxy in Apache

I am running Tornado behind Apache and have created a proxy server:
ProxyRequests On
ProxyPass /chat/ http://localhost:8888/chat/
This configuration works great: it passes all my requests to Tornado and returns the responses back to the client.
Now, I am using Tornado for long polling. For requests that finish in a short interval of time, say less than one minute, this reverse proxy works fine. But for certain long-polling requests it gives a 502 proxy error. The reason is that Apache holds a long-polling request for just one minute (by default); it then closes the request and the proxy error is returned.
Now, I modified the directive to
ProxyRequests On
ProxyPass /chat/ http://localhost:8888/chat/ timeout=12000
i.e. I changed the default timeout to 12000 seconds.
This is currently working fine for me, but it is not the best solution to the issue. Ideally a long-polling request could exceed any timeout specified. So my questions are:
How do I make the timeout infinite, i.e. so the request is never closed by Apache?
Also, please comment on whether the performance of Tornado is degraded by going through Apache as a proxy server.
I experienced a similar issue with Nginx and solved it the same way as you did. But I changed the timeout to 1 day as it was sufficiently large in my case.
I think you cannot do away with this. The rationale is that Apache (or any proxy server, for that matter) has to maintain its performance, which it clearly can't do if it has to hold stale or inactive connections. You'd rather let your proxy server handle more active connections than inactive ones.
Therefore, there is no way to turn off the ProxyTimeout in Apache, or even in Nginx (configured using proxy_read_timeout). So if your proxied server is not sending any response within the timeout, then either your application server is taking too long to respond, or there is something wrong with your application server, or the client is not requesting any response. In the first case, you can make safe estimates to set an appropriate timeout. In the second case, you need to fix your application server. And in the third case, you must gracefully handle the situation on the client and reconnect if required.
Coming to your second question, there shouldn't be any difference other than the latency involved between your Apache and your Tornado server. You can very well expose your Tornado server directly to the world but that will come with a few challenges:
1. More ops work - you have to make sure the Tornado process is always up and running (for example with a process supervisor; see the sketch below).
2. Proxying and load balancing become more difficult.
3. Worse security, as YOU have written that code instead of thousands of expert contributors, so you should not even think of running this server as root. You can still do that reasonably safely behind Apache or Nginx.
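A rough sketch of that first point, assuming systemd is used as the supervisor (service name, paths and user are hypothetical):

[Unit]
Description=Tornado chat service
After=network.target

[Service]
ExecStart=/usr/bin/python /srv/chat/chat_server.py
Restart=always
User=www-data

[Install]
WantedBy=multi-user.target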
Of course the above problems are solvable, but why solve an already solved problem. :)

All jmeter requests going to only one server with haproxy

I'm using JMeter to load test my web application. I have two web servers and we are using HAProxy for load balancing. All my tests run fine and are configured correctly. I have three JMeter remote clients so I can run my tests distributed. The problem I'm facing is that ALL my JMeter requests are being processed by only one of the web servers. For some reason it's not balancing, and I'm getting many timeouts and huge response times. I've looked around a lot for a way to get these requests balanced, but I've had no luck so far. Does anyone know what could be the cause of this behavior? Please let me know if you need to know anything about my environment first and I will provide the answers.
Check your HAProxy configuration:
What is its load-balancing policy? If it's not round-robin, is it based on source IP or some other information that might be common to your 3 remote machines? (See the sample backend below.)
Are you sure load balancing is working at all? Try testing with a browser first; if you can, add some information identifying the web server to the response to help debug.
Check your test plan:
Are you sure you don't have a hardcoded session ID somewhere in your requests?
How many threads did you configure?
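For reference, a backend that genuinely round-robins (server names and addresses are placeholders, not the asker's configuration) looks roughly like:

backend web_servers
    balance roundrobin
    server web1 10.0.0.1:80 check
    server web2 10.0.0.2:80 check

If the backend instead uses balance source (or a stick table keyed on the client IP), all traffic from the same handful of JMeter machines can end up on one server.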
In your JMeter script, the HTTP Request "Use KeepAlive" option is checked by default.
Keep-Alive is a header that maintains a persistent connection between
client and server, preventing a connection from breaking
intermittently. Also known as HTTP keep-alive, it can be defined as a
method to allow the same TCP connection for HTTP communication instead
of opening a new connection for each new request.
This may cause all requests to go to the same server. Just uncheck the option and save, stop your script and re-run.

Can I use Apache mod_proxy as a connection pool, under the Prefork MPM?

Summary/Question:
I have Apache running with Prefork MPM, running php. I'm trying to use Apache mod_proxy to create a reverse proxy that I can re-route my requests through, so that I can use Apache to do connection pooling. Example impl:
in httpd.conf:
SSLProxyEngine On
ProxyPass /test_proxy/ https://destination.server.com/ min=1 keepalive=On ttl=120
but when I run my test, which is the following command in a loop:
curl -G 'http://localhost:80/test_proxy/testpage'
it doesn't seem to re-use the connections.
After some further reading, it sounds like I'm not getting connection pool functionality because I'm using the Prefork MPM rather than the Worker MPM. So each time I make a request to the proxy, it spins up a new process with its own connection pool (of size one), instead of using the single worker that maintains its own pool. Is that interpretation right?
Background info:
There's an external server that I make requests to, over https, for every page hit on a site that I run.
Negotiating the SSL handshake is getting costly, because I use php and it doesn't seem to support connection pooling - if I get 300 page requests to my site, they have to do 300 SSL handshakes to the external server, because the connections get closed after each script finishes running.
So I'm attempting to use a reverse proxy under Apache to function as a connection pool, to persist the connections across php processes so I don't have to do the SSL handshake as often.
Sources that gave me this idea:
http://httpd.apache.org/docs/current/mod/mod_proxy.html
http://geeksnotes.livejournal.com/21264.html
First of all, your test method cannot demonstrate connection pooling, since for every call a curl client is born and then dies. Like dead people don't talk a lot, a dead process cannot keep a connection alive.
You have clients that bother your proxy server.
Client ====== (A) =====> ProxyServer
Let's call this connection A. Your proxy server does nothing; it is just a show-off. The handsome and hardworking server is so humble that he hides behind it.
Client ====== (A) =====> ProxyServer ====== (B) =====> WebServer
Here, if I am not wrong, the secured connection is A, not B, right?
Repeating my first point: in your test, you are creating a separate client for each request. Every client needs a separate connection. A connection is something that happens between at least two parties. One side leaves and the connection is lost.
Okay, let's forget curl now and look together at what we really want to do.
We want to have SSL on A and we want A side of traffic to be as fast as possible. For this aim, we have already separated side B so it will not make A even slower, right?
Connection pooling? There is no such thing as connection pooling at A. Every client comes and goes, making a lot of noise. The only thing that can help you reduce this noise is "Keep-Alive", which means keeping the connection from a client alive for some short period of time, so this very same client can ask for the other files that this request will require. When we are done, we are done.
For connections on B, connections will be pooled; but this will not gain you any performance, since in a one-server setup you did not have this part of the noise production in the first place.
How do we help this system run faster?
If these two servers are on the same machine, we should get rid of the show-off server and continue with our hardworking webserver. It adds a lot of unnecessary work to the system.
If these are separate machines, then you are being nice to the web server by taking at least the encryption (SSL) load off this poor guy. However, you can be even nicer.
If you want to continue on Apache, switch to mpm_worker from mpm_prefork. In case of 300+ concurrent requests, this will work much better. I really have no idea about the capacity of your hardware; but if handling 300 requests is difficult, I believe this little change will help your system a lot.
If you want to have an even more lightweight system, consider nginx as an alternative to Apache. It is very easy to set up to work with PHP and it will have better performance.
Other than the front-end side of things, also consider checking your database server. Connection pooling will make a real difference here. Make sure your PHP installation is configured to reuse connections to the database.
In addition, if you are hosting static files on the same system, then move them out either on another web server or do even better by moving static files to a cloud system with CDN like AWS's S3+CloudFront or Rackspace's CloudFiles. Even without CloudFront, S3 will make you happy. Rackspace's solution comes with Akamai!
Taking out static files will make your web server go "oh, what happened, what is this silence? ohhh, heaven!", since you mentioned this is a website, and web pages have many static files for each dynamically generated HTML page most of the time.
I hope you can save the poor guy from the killer work.
Prefork can still pool 1 connection per backend server per process.
Prefork doesn't necessarily create a new process for each frontend request, the server processes are "pooled" themselves and the behavior depends on e.g. MinSpareServers/MaxSpareServers and friends.
To maximise how often a prefork process will have a backend connection for you, avoid very high or low MaxSpareServers or very high MinSpareServers, as these will result in "fresh" processes accepting new connections.
You can log %P in your LogFormat directive to help get an idea of how often processes are being reused.
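For example (the format name is arbitrary), appending the child process ID to the access log looks like:

LogFormat "%h %l %u %t \"%r\" %>s %b pid:%P" proxy_pool
CustomLog logs/access_log proxy_pool

If the same PID keeps appearing for your test requests, that child process is being reused and can keep its backend connection alive.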
The problem in my case was that connection pooling between the reverse proxy and the backend server was not taking place, because the backend Apache server was closing the SSL connection at the end of each HTTPS request.
The backend Apache server was doing this because of the following directive being present in httpd.conf:
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
This directive does not make sense when the backend server is reached via a reverse proxy, and it can be removed from the backend server config.

disable request buffering in nginx

It seems that nginx buffers requests before passing them to the upstream server. While that is OK for most cases, for me it is very bad :)
My case is like this:
I have nginx as a frontend server to proxy 3 different servers:
apache with a typical PHP app
shaveet (an open source comet server) built by me with Python and gevent
a file upload server, built again with gevent, that proxies the uploads to Rackspace Cloud Files while accepting the upload from the client.
#3 is the problem. Right now nginx buffers the whole request and then sends it to the file upload server, which in turn sends it to Cloud Files, instead of forwarding each chunk as it arrives (which would make the upload faster, as I can push 6-7 MB/s to Cloud Files).
The reason I use nginx is to serve 3 different domains from one IP; if I can't do that, I will have to move the file upload server to another machine.
As soon as this feature [1] is implemented, nginx will be able to act as a reverse proxy without buffering uploads (big client requests).
It should land in 1.7 which is the current mainline.
[1] http://trac.nginx.org/nginx/ticket/251
Update
This feature has been available since nginx 1.7.11 via the directive
proxy_request_buffering on | off;
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering
According to Gunicorn, they suggest you use nginx to actually buffer clients and prevent slowloris attacks, so this buffering is likely a good thing. However, I do see an option further down on the link I provided that talks about removing the proxy buffer; it's not clear whether this is within nginx or not, but it looks as though it is. Of course this is under the assumption you have Gunicorn running, which you do not. Perhaps it's still useful to you.
EDIT: I did some research and that buffer disable in nginx is for outbound, long-polling data. Nginx states on their wiki site that inbound requests have to be buffered before being sent upstream.
"Note that when using the HTTP Proxy Module (or even when using FastCGI), the entire client request will be buffered in nginx before being passed on to the backend proxied servers. As a result, upload progress meters will not function correctly if they work by measuring the data received by the backend servers."
Now available in nginx since version 1.7.11.
See documentation
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering
To disable buffering of the upload, specify:
proxy_request_buffering off;
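Putting it together, a minimal location block for the upload path might look like this (the upstream name and path are placeholders):

location /upload/ {
    proxy_pass http://upload_backend;
    proxy_http_version 1.1;
    proxy_request_buffering off;
}

proxy_http_version 1.1 is worth setting alongside it, since request bodies sent with chunked transfer encoding can only be streamed upstream when HTTP/1.1 is enabled for proxying.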
I'd look into haproxy to fulfill this need.