After having my VPS upgraded to CentOS 5.5, I began to experience frozen/disconnected shell sessions if I had neglected them for a certain amount of time. Very annoying. The solution I found was to edit /etc/ssh/sshd_config and set ClientAliveInterval to the desired number of seconds. My understanding is that this essentially substitutes for activity from the client user (me) and so keeps the session from disconnecting.
Having initiated a shell session after making this minor change, I seem to be able to maintain a neglected session. However, just because a thing seems to be working doesn't mean the best, or even correct, approach was necessarily taken.
Is there a better / different way to prevent a shell session from freezing?
The ClientAliveInterval value can increase the sshd timeout; you can also try the following commands:
echo "TMOUT=300 >> /etc/bashrc
echo "readonly TMOUT" >> /etc/bashrc
echo "export TMOUT" >> /etc/bashrc
No guarantees, but this is what I recently started using on the server. 10 seconds seems like a short time, but I don't trust my phone to keep the connection alive. I suppose you could increase the number of seconds until the problem starts again, then dial it back.
ClientAliveInterval 10
Sets a timeout interval in seconds after which if no data has been received from the client, sshd(8) will send a message through the encrypted channel to request a response from the client. The default is 0, indicating that these messages will not be sent to the client.
ClientAliveCountMax 200
If it fails, keep trying for about 30 minutes; in other words, keep trying 200 times, every 10 seconds. My logic could be flawed though, depending on what happens after 10 seconds. Assuming the client is quiet (like maybe I'm reading), does the "alive" message reset the max count if successful? Is inactivity considered failure? Or is failure a "no acknowledgement" of the alive message? Until I know the answer, I figure it's safe to repeat 200 times.
Similar question here, and some decent recommendations...
https://unix.stackexchange.com/questions/400427/when-to-use-clientaliveinterval-versus-serveraliveinterval
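That question covers ServerAliveInterval, the client-side counterpart, where the ssh client sends the keepalives instead of the server. A minimal ~/.ssh/config sketch (the values are only an example, not a recommendation):

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3

With these values the client pings the server every 60 seconds and gives up after 3 unanswered probes, i.e. roughly 3 minutes on a dead connection.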
Related
Hello. A Windows Phone application needs to connect to a server and get messages from it. This is done using WCF and long polling on the server; 3 minutes is the timeout defined on the server. The call from Windows Phone is done using HttpWebRequest.
The problem is that Windows Phone devices have a timeout of 60 seconds for GET requests (the emulator has a different value, greater than 3 minutes).
Currently I can't decrease the server timeout, and making a new GET request after the 60 seconds doesn't fetch any more messages.
Does anyone have an idea?
Thanks
I don't think leaving a connection open is a good idea on mobile devices. I'm assuming that's what you're doing. In my app, I would just poll whenever needed by creating a new HttpWebRequest. But it made sense to do this in my app, because I would be updating train arrival status every 40 seconds.
If you're trying to pull data on a given schedule, put a timer in and just call the webserver every 3 minutes or whatever the requirement is (a rough sketch of this follows below).
If you want to be able to check things (when the app is closed) or if there's rarely fresh data on the server, then you'd need to implement a Push mechanism.
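For the scheduled-polling option, here is a rough sketch of the pattern. On Windows Phone itself you would use DispatcherTimer and HttpWebRequest; this sketch is in Java purely to illustrate the timer-plus-request loop, and the endpoint URL is made up:

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MessagePoller {
    private static final String ENDPOINT = "http://example.com/messages"; // hypothetical URL

    public static void main(String[] args) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        // Poll on a schedule instead of holding one long-lived connection open.
        timer.scheduleAtFixedRate(MessagePoller::pollOnce, 0, 40, TimeUnit.SECONDS);
    }

    private static void pollOnce() {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
            conn.setConnectTimeout(10_000); // fail fast on a dead network
            conn.setReadTimeout(50_000);    // stay under the platform's 60-second cap
            try (InputStream in = conn.getInputStream()) {
                String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                System.out.println("got: " + body);
            } finally {
                conn.disconnect();
            }
        } catch (IOException e) {
            System.err.println("poll failed: " + e.getMessage());
        }
    }
}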
Update: Here's a good article on dealing with the timeout issue - http://blog.xyzzer.me/2011/03/10/real-time-client-server-communication-on-windows-phone-with-long-polling/
Update 2: What if you arranged it so that you have cascading connections? What I mean is: since you can't go beyond 60 seconds per connection, you can write a class that houses two connections, and once one of them is about to time out, say several seconds before, you start opening the other connection. You can pick the timing so that there's at most 5 seconds of overlap between them. This way you could have your always-open connection.
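A rough sketch of that cascading idea, again in Java rather than Windows Phone C#, just to make the timing concrete (the endpoint and the 55-second start point are assumptions):

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CascadingLongPoll {
    private static final String ENDPOINT = "http://example.com/poll"; // hypothetical URL
    private final ScheduledExecutorService clock = Executors.newSingleThreadScheduledExecutor();
    private final ExecutorService workers = Executors.newFixedThreadPool(2);

    // Every 55 seconds, hand a fresh long poll to the worker pool. Since each
    // poll lives at most 60 seconds, consecutive polls overlap by about 5
    // seconds, so the server always has a waiting connection.
    public void start() {
        clock.scheduleAtFixedRate(() -> workers.execute(this::longPoll),
                                  0, 55, TimeUnit.SECONDS);
    }

    private void longPoll() {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
            conn.setReadTimeout(60_000); // the platform kills the request around here anyway
            try (InputStream in = conn.getInputStream()) {
                System.out.println("message: " + new String(in.readAllBytes()));
            } finally {
                conn.disconnect();
            }
        } catch (IOException e) {
            // a read timeout here simply means no message arrived during this poll
        }
    }
}

Note the scheduler only dispatches; the polls themselves run on a two-thread pool, because a ScheduledExecutorService will not run the same periodic task concurrently with itself.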
Also see what these guys have done with the GChat app; they have their source code available at this link. This may provide a more proper design.
I created a COM object that makes a query to a website. It works perfectly, but when I use this COM object with many threads (50, for example), I get many timeout errors, even after I changed the HttpWebRequest timeout to 45 seconds.
How is that possible?
Is there some limitation in this method? How can I solve this problem?
Thanks!
Since you didn't share your code, I can't tell exactly what happened, but these are my assumptions:
Firstly, you maxed out the default limit on the number of connections per application to a web host. By default, that number is 2. You can increase it by looking at this document.
Secondly, the connections were not terminated properly after you transmitted the data. You can verify those HTTP connections by typing netstat -n if you're on Windows; look for connections that share the same destination IP. If this is the case, then you need to properly close the stream returned by HttpWebResponse.GetResponseStream(), which will terminate the HTTP connection quickly.
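For the first point, a sketch of the fix: in .NET the per-host limit can be raised in code via System.Net.ServicePointManager.DefaultConnectionLimit, or in the application's configuration file. A minimal app.config fragment (50 is chosen here only to match the thread count in the question):

<configuration>
  <system.net>
    <connectionManagement>
      <!-- raise the default of 2 connections per host -->
      <add address="*" maxconnection="50" />
    </connectionManagement>
  </system.net>
</configuration>

With 50 threads sharing the default of 2 connections, 48 threads sit waiting for a free connection, which is exactly how a 45-second timeout gets exceeded.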
I have a request that takes more than 30 seconds and it breaks.
What is the solution for this? I am not sure whether adding more dynos will help.
Thanks
You should probably see the Heroku Dev Center article regarding this, as the information there will be more helpful; here's a small summary:
To answer the timeout question:
Cedar supports long-polling and streaming responses. Your app has an initial 30 second window to respond with a single byte back to the client. After each byte sent (either received from the client or sent by your application) you reset a rolling 55 second window. If no data is sent during the 55 second window your connection will be terminated.
(That is, if you had Cedar instead of Aspen or Bamboo, you could send a byte every thirty seconds or so just to trick the system. It might work.)
To answer your dynos question:
Additional concurrency is of no help whatsoever if you are encountering request timeouts. You can crank your dynos to the maximum and you'll still get a request timeout, since it is a single request that is failing to serve in the correct amount of time. Extra dynos increase your concurrency, not the speed of your requests.
(That is, don't bother adding more dynos.)
On request timeouts:
Check your code for infinite loops. If you're genuinely doing something big:
If so, you should move this heavy lifting into a background job which can run asynchronously from your web request. See Queueing for details.
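As an illustration of that hand-off (a real Heroku app would typically push the job to a worker dyno through a queueing library; this minimal Java sketch with made-up names just shows the shape: accept the request, enqueue the work, return an ID immediately, and let the client poll for status):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class JobQueue {
    private final ExecutorService workers = Executors.newFixedThreadPool(4);
    private final Map<String, String> status = new ConcurrentHashMap<>();

    // Called from the web request: enqueue and return immediately,
    // well inside the 30-second window.
    public String submit(Runnable heavyWork) {
        String id = UUID.randomUUID().toString();
        status.put(id, "queued");
        workers.execute(() -> {
            status.put(id, "running");
            heavyWork.run();
            status.put(id, "done");
        });
        return id; // the client polls check(id) with a later, cheap request
    }

    public String check(String id) {
        return status.getOrDefault(id, "unknown");
    }
}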
http://www.php.net/manual/en/session.configuration.php#ini.session.cookie-lifetime
says that a session.cookie_lifetime of 0 "goes until the browser is closed". Is that the absolute maximum length that the session can have (always wiped when the browser is closed), or would setting a session.cookie_lifetime of, say, 23243245234 yield a result that would probably last beyond whenever the browser is closed?
More to the point, what php.ini settings would I need to set to make sessions last somewhere along the lines of two days? And is there a security reason to recommend a certain (I would expect lower) time limit, and if so, what would the recommended period be?
Intended behavior
Edit: Here is what I want to achieve; perhaps I'll be able to understand the behavior through settings suggestions rather than specific php.ini values:
I want the session to last as long as possible, up to (approximately) two days.
If the session can last beyond browser close, I would like it to do so (up to approximately two days).
What would I set for php.ini settings (and yes, I have direct edit access to the php.ini) to achieve that?
There are two parameters you need to worry about regarding sessions. The first is the TTL for the cookie, the other is how old a session data file can become before it gets garbage collected.
session.cookie_lifetime determines, in seconds, how long the cookie sent to the browser will last. It defaults to 0, which means until the browser closes. For two days it'd need to be 172800 seconds.
session.gc_maxlifetime determines, also in seconds, how long before session data marked on the server will be regarded as garbage and can be deleted.
Setting these two ini directives should give you sessions that survive for two days, except for one more thing of which you need to be aware.
Some operating systems do automated garbage collection on their default temporary directories. If PHP is configured to store session data there, and the GC period for the temp directory is short, you may find yourself losing your session before the value in session.gc_maxlifetime is reached. To avoid this, make sure PHP stores session data in a location other than /tmp or whatever your host operating system's temporary directory is.
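Putting the two directives together, a minimal php.ini sketch for the two-day goal (the save_path is only an example of a directory outside the system temp dir; point it wherever suits your host):

session.cookie_lifetime = 172800
session.gc_maxlifetime = 172800
session.save_path = "/var/lib/php/sessions"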
It means that the session is lost when the browser is closed.
That is, the cookie containing the session ID is deleted as the browser closes.
session.cookie_lifetime specifies the lifetime of the cookie in seconds which is sent to the browser
The recommended period depends basically on what your session needs to hold. Say you want to keep your users logged in to the website ("remember me"); then you should go for the largest period. Just as an example.
If you want the session alive for approximately two days, you just count
60 [seconds] * 60 [minutes] * 48 [hours] = 172800 seconds
First off, I do not recommend that anyone play around with session life unless they are aware of the consequences.
In regards to your question, there are actually two systems in place to manage a session.
Firstly, if you are using PHP's default system, you are employing a file-based session store, whereby a file is created on your server which holds the actual data of the client's session; this file is normally named the same as the session ID. The user then has a cookie sent to their browser which holds the session ID client-side.
The setting you are referring to ONLY defines the life of the cookie in the client's browser, not the life of the session.
A setting of 0: causes the cookie to last until the browser is closed.
A setting higher than 0: causes the cookie to last that many seconds and to be terminated only after that time. The browser can be opened and closed as many times as the user wants and the cookie will remain until it expires.
I believe the setting counts seconds from when the cookie is created rather than being an actual expiry timestamp, but I could be wrong.
You can change this setting in your php.ini, or you can use a combination of session_get_cookie_params and session_set_cookie_params.
Clarification
Ignoring the server side: the client holds a cookie which contains the session ID and allows them to access their session. If the cookie is lost, the client no longer has the ability to access the session, and the session is in essence lost.
A value of 0 will cause the client's browser to keep the cookie until the browser is closed. If the user were to keep their browser open for a week, the cookie would be kept for a week.
A value greater than 0 will cause the client's browser to keep the cookie for that number of seconds. E.g. if the value was set to 172800 seconds (2 days), the cookie would be held by the browser for 2 days. If the browser is closed during these 2 days the cookie is not destroyed; it is only lost after the 2 days elapse.
Why use 0
Using 0 is more secure because when a user has finished using your website on a public system and closes the browser, the cookie is lost and the session can no longer be accessed, preventing another user from opening the browser and continuing the session. It is not reliable to presume that a user will end the session manually (e.g. log out), as many don't.
Wait, there is some confusion here...
The session is not lost when the browser is closed; it is the cookie which is lost on browser close.
A session is altogether different from a cookie. The session stays on the server and can be destroyed explicitly, whereas the cookie resides at the client end and can be destroyed manually, after a particular time interval, or on browser close.
Short (but slightly inaccurate) solution
Set session.gc_maxlifetime to 172800 and session.cookie_lifetime to 0.
Extended and more accurate solution
In general, session.gc_maxlifetime is the configuration that controls the session's lifetime. So setting that directive to 172800 will make the session expire after 172800 seconds (theoretically). But as the calculation of the age of a session is slightly odd, you might want to implement a more accurate expiration scheme. See my answer to How do I expire a PHP session after 30 minutes? for more information.
And with the session ID's cookie lifetime set to 0, the cookie will be valid until the browser is closed.
We have memory-intensive processing for certain functionality, and we would like to limit the number of parallel requests to this processing. We were able to configure this by using Work Managers in WebLogic and putting a limit on the number of threads for that servlet.
For example, if we set the maximum thread limit to 3 and there are 10 parallel requests, 7 requests wait in the queue. There could be situations where the requests waiting in the queue take up to 30-40 minutes to be processed. In simple testing we got a "page cannot be displayed" timeout after 15 minutes, and only received the message after 1 hour.
Does anyone know if there is a setting in WebLogic to increase/decrease the timeout and avoid "page cannot be displayed"?
I'd appreciate any thoughts on this.
Does anyone know if there is a setting in WebLogic to increase/decrease the timeout and avoid "page cannot be displayed"?
There might be something, but I actually didn't check, as it would be bad advice anyway. By looking for this, you are trying to solve the wrong problem. A browser is just not made for a long-running process like the one you are describing (>30 min), even if you don't mind the user waiting (not to mention that the user could refresh the page and queue more and more jobs).
So, the right answer here is, in my opinion: use asynchronous processing; this is the perfect use case. When the user clicks the button, send a JMS message to a queue (or create a Quartz job) and send the user a page with a request ID telling them to come back later. When the processing is done, update the status somewhere and make the status/result available to the user. Really, the user experience will be better this way, and you'll face fewer problems than with a browser.
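A minimal sketch of that hand-off using plain JMS (the JNDI names are assumptions; use whatever your WebLogic domain actually defines):

import java.util.UUID;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class JobSubmitter {
    // Enqueue the heavy job and return a request ID the user can poll with.
    public String submit(String payload) throws Exception {
        InitialContext ctx = new InitialContext();
        // These JNDI names are examples, not WebLogic defaults.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/HeavyJobQueue");

        String requestId = UUID.randomUUID().toString();
        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage msg = session.createTextMessage(payload);
            msg.setStringProperty("requestId", requestId);
            producer.send(msg);
        } finally {
            conn.close();
        }
        return requestId; // show this on the "come back later" page
    }
}

A message-driven bean (or any JMS consumer) then picks the message up, does the 30-40 minute work, and stores the result keyed by requestId where the status page can read it.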
1) Use some other tool (not a browser), like wget, where you can control the timeout parameter (--timeout).
2) Why use HTTP at all? Use message-driven beans, send a JMS message to them, and don't worry about timeouts.
Perhaps Quartz can do what you need? Start a job and check in on it as you need to.