When using my Ember application with the Ember server ("ember serve"), any request that takes longer than one minute fails with a 502 error. However, when I build my app and serve it from my local Apache server, requests don't fail for several minutes.
How do I increase the request timeout in Ember server? I don't see any options anywhere to set this value, nor can I find any sort of documentation online.
The following may help.
As described in the thread below, you can add a timeout (in milliseconds) to the hash returned by ajaxOptions in the adapter.
https://discuss.emberjs.com/t/how-to-set-ajax-timeout-for-ember-rsvp-or-store-findrecord/8386/4
They also mention a limitation: if Ember stops relying on jQuery, this will no longer be an option.
I fixed this myself. It wasn't a problem with the Ember server after all. The problem was that my Ember proxy was pointing to an Apache server, which in turn proxied the API server. The Apache ProxyPass setting was missing the timeout argument, so it was defaulting to 60 seconds. I needed to add a timeout to my ProxyPass setting in my httpd.conf file, like so:
ProxyPass /api https://localhost:8081/api timeout=180
Related
I'm using memcached and Apache with the following default configuration
CacheEnable socache /
CacheSocache memcache:IP:PORT
MemcacheConnTTL 30
What will the behavior be when the 30 seconds expire and a request for the same URL comes in? Is there a way to configure the cache key? I.e., what information makes a request unique?
What if the server can't get an answer? (like timeout to fetch the newly updated object) Can it be configured to serve the old object?
Thanks
What will the behavior be when 30 seconds expire and a request for the same URL comes in
Apache would simply create a new connection to memcached. It doesn't mean anything happens to the data stored in memcached:
https://httpd.apache.org/docs/2.4/mod/mod_socache_memcache.html#memcacheconnttl
Set the time to keep idle connections with the memcache server(s)
alive (threaded platforms only).
If you need to control for how long an object will be stored in a cache, check out CacheDefaultExpire
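Putting the two together, a minimal configuration might look like this (directive names are from the Apache docs; the one-hour value is purely illustrative):

```apache
# Cache responses in memcached via mod_cache_socache
CacheEnable socache /
CacheSocache memcache:IP:PORT

# Keep idle connections to memcached open for 30 seconds
MemcacheConnTTL 30

# Default lifetime for cached objects whose response carries no explicit
# expiry information (in seconds; 3600 = 1 hour, chosen as an example)
CacheDefaultExpire 3600
```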
Is there a way to configure the cache key
An url is used to build the key, but you can partially configure which parts of the url are used, check out
CacheIgnoreQueryString, CacheIgnoreURLSessionIdentifiers
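For example (the session identifier names are hypothetical and depend on your application):

```apache
# Treat /page?foo=1 and /page?foo=2 as the same cache entry
CacheIgnoreQueryString On

# Strip these session identifiers from the URL before building the cache key
CacheIgnoreURLSessionIdentifiers jsessionid sid
```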
I.e. what are the info which make a request unique
https://httpd.apache.org/docs/2.4/mod/mod_cache.html#cacheenable
The CacheEnable directive instructs mod_cache to cache urls at or
below url-string
Note that not all requests can be cached; there are many rules governing this.
What if the server can't get an answer? Can it be configured to serve the old object
You need CacheStaleOnError
https://httpd.apache.org/docs/2.4/mod/mod_cache.html#cachestaleonerror
When the CacheStaleOnError directive is switched on, and when stale
data is available in the cache, the cache will respond to 5xx
responses from the backend by returning the stale data instead of the
5xx response
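In configuration terms, that is a single directive (shown here in a server-wide context; it can also go in a directory or virtual host scope):

```apache
# Serve stale cached content instead of passing backend 5xx errors through
CacheStaleOnError on
```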
We have set up Apache and Tomcat with a third-party data storage server. We support an API call through which one can upload a file; Tomcat in turn stores that file on the third-party data storage server.
The timeout for that API request is currently 10 hours, while the Apache request timeout is 2 minutes. However, Tomcat sometimes takes more than 2 minutes to upload the file, and in the meantime Apache sends a 500 Internal Server Error instead of a 408 Request Timeout. Tomcat still uploads the file successfully, but the client that made the API call receives a 500 error, concludes the file was not uploaded, and tries again, creating a duplicate entry.
We are using Apache's AJP proxy (mod_proxy_ajp). Please help me resolve this issue. Thanks in advance.
I know that error. As of Apache 2.4.40, Apache does not send a 500 in this case; it sends either a gateway read timeout or a bad gateway error. A 408 is sent by Apache only if YOUR client fails to deliver the request in time. You have to know how long the longest request will take on Tomcat and set ProxyTimeout accordingly. I did, and it solved my issue.
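A sketch of what that might look like for an AJP setup (the paths, port, and the one-hour value are assumptions; adjust them to your longest expected upload):

```apache
# Hypothetical virtual-host fragment for slow uploads through mod_proxy_ajp
<VirtualHost *:80>
    ProxyPass        /upload ajp://localhost:8009/upload
    ProxyPassReverse /upload ajp://localhost:8009/upload

    # Wait up to 1 hour for Tomcat to respond before giving up
    ProxyTimeout 3600
</VirtualHost>
```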
I am trying to grab json data from monit and display it on a status page for management to see the current status of a handful of processes. This info would be displayed in Confluence running on the same machine but since Confluence (apache) and monit are running on different ports it is considered to be cross domain.
I know I can write a server-side process to serve this data, but that seems like overkill and would actually take longer than it took to set monit up in the first place :)
The simplest solution is to configure monit's headers (Access-Control-Allow-Origin) to allow the other server. Does anyone know how to do this? I suspect there is a way since M/Monit would run into the same issue. I have tried some blind attempts on the "httpd... allow" lines but it complains about the syntax with x.x.x.x:port or using keyword "port" in that location.
ok... going to answer my own question (sort of).
First, I think I may have asked the question wrong. I don't deal with a lot of cross domain issues. Sorry about that.
But here is what I did to get to the monit info from the other servers. It's pretty simple using proxies in Apache on the main server:
ProxyPass /monit http://localhost:2812
ProxyPassReverse /monit http://mainserver/monit
ProxyPass /monit2 http://server2:2812
ProxyPassReverse /monit2 http://mainserver/monit2
I did this for each of the servers and tested that I can get to either the monit web interface or to the _status?format=json sub pages. I can now call them using ajax on our main web page.
This also has the benefit that I can lock down the monit access control to just the main server but have the info show on a more visible page. :)
I don't think you need a proxy just to display monit's API or HTTP info. It depends on how your network and DNS are configured. If you'd like to use only localhost, then a proxy might be necessary. But monit does have a facility to allow access from other hosts' IPs, using allow directives in its own config (monitrc) file.
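For instance, a sketch of the relevant monitrc section (the address, IP, and credentials are placeholders, not your actual values):

```
# In monitrc: expose monit's built-in HTTP server beyond localhost
set httpd port 2812
    use address 0.0.0.0        # listen on all interfaces, not just localhost
    allow 192.168.1.10         # e.g. the main (Confluence) server's IP
    allow admin:monit          # optional basic-auth user:password
```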
I can't find in the docs if it's possible to keep apache from timing out on a proxy request. I'm trying to setup a socket server and am looking for this as an option.
Did you try the ProxyTimeout directive, set to a relatively large number of seconds, as specified in http://httpd.apache.org/docs/2.2/mod/mod_proxy.html#proxytimeout?
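A minimal sketch, assuming the socket server sits behind a proxied path (the path, port, and ten-minute value are illustrative):

```apache
# Proxy the socket server and allow long-lived requests
ProxyPass /socket http://localhost:9000/

# Wait up to 10 minutes before Apache gives up on the backend
ProxyTimeout 600
```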
I have the following system configured:
Tomcat -> Apache
Now, I have some URLs on which I have Max-Age, LastModified and Etags set.
My expectation is that when Client1 makes a call to the server, the page gets served from Tomcat but cached by Apache's mod_cache module, so that when the next client makes a call, the page is served from Apache without hitting the Tomcat server, provided the page is still fresh. If the page isn't fresh, Apache should make a conditional GET to revalidate the content it has.
Can someone tell me if there is a fundamental mistake in this thinking? It doesn't work that way in practice: when Client2 makes a call, it goes straight to the Tomcat server (not even a conditional GET).
Is my thinking incorrect or my Apache configuration incorrect?! Thanks
The "What can be cached" section of the docs has a good summary of factors - such as response codes, GET request, presence of Authorization header and so on - which permit caching.
Also, set Apache's LogLevel to debug in httpd.conf, and you will get a clear view of whether or not each request was cached. Check the error logs.
You should be able to trace through what is happening based on these two.
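On Apache 2.4 you can keep general logging quiet and make only the cache module verbose, which is easier to read than a global debug level (the `warn` base level is just a suggestion):

```apache
# Per-module log level: general logging at warn, mod_cache at debug
LogLevel warn cache:debug
```

The cache decisions ("cache miss", "not cached", "cache hit", and the reason a response was declared uncacheable) then show up in the error log for each request.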