On redis_version:4.0.2
The documentation of redis-cli INFO says it is a flag indicating whether active defragmentation is active. However, even after turning it off with CONFIG SET activedefrag no, it still shows a value of 38.
The actual meaning of the flag is the percentage of CPU taken by active defrag's last cycle.
Active defragmentation is disabled by default, so that value should stay 0 unless it was enabled at some point. It looks like, after enabling and then disabling it, the last value remains without ever being reset.
That last point, about not being reset, is an issue with Redis - its resolution is included in pull request #6559.
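A quick way to observe this from the command line, assuming redis-cli can reach the affected server (the exact value reported will of course depend on your instance):

```shell
# Disable active defragmentation (a no-op if it was never enabled)
redis-cli CONFIG SET activedefrag no

# active_defrag_running is reported in the Memory section of INFO;
# on affected 4.0.x builds it may keep showing the last cycle's CPU
# percentage even after activedefrag has been turned off
redis-cli INFO memory | grep active_defrag_running
```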
I'm using AppFabric caching in a WCF service hosted in WAS.
I must be doing something wrong, because sometimes GetObjectsInRegion() returns an empty list while objects are indeed present in the region.
Unfortunately, I'm not able to identify the context in which the problem is reproducible.
It seems, though, that if the web service is restarted, existing regions appear empty to the service.
I'm sure that this is not tied to a timeout problem.
I'll update the question if there is any progress on my side.
Any help appreciated.
This one was a bug on my side.
I was not explicitly setting an expiration timeout in some circumstances. The cache cluster was configured with default expiration settings, where the TTL is 10 minutes, so objects were automatically removed from the cache.
The takeaway is: always set an expiration timeout when putting objects in the cache.
In my app, I need to share a setting between different devices running the app. I want the first device that installs the app to set the master value of the setting; all other devices should then get that setting and not overwrite it.
How do I make sure I first check if iCloud has a value before setting the value? So I don't overwrite an existing one.
Should I wait for the NSUbiquitousKeyValueStoreInitialSyncChange event to be sent, after which I can check for an existing value and otherwise set it for the first time? If so, can I rely on receiving the NSUbiquitousKeyValueStoreInitialSyncChange event? If not, this approach might never set the iCloud value at all.
If I try to set a value before NSUbiquitousKeyValueStoreInitialSyncChange is triggered for the first time, will it be discarded and then the NSUbiquitousKeyValueStoreInitialSyncChange will be triggered with the existing data in the store?
I've heard that NSUbiquitousKeyValueStoreInitialSyncChange is not triggered if there are no values in the store when it syncs for the first time. Is that true?
I have read the Apple documentation about this and seen answers here on Stack Overflow, but I don't understand how to do exactly this.
How can I make sure I don't overwrite an existing value the first time I launch/install the app?
There is no way to know for certain that you have synchronized with the remote store at least once, and you should not count on it (imagine there is no iCloud account set up, no connectivity, or the iCloud servers are down: you don't want your user to wait while you confirm you are in sync with the cloud, as that can take forever or never happen).
What you should do:
when you start, check the store to see if there is a value.
If there is no value, just push your own value.
If the initial sync with the server did not happen yet and there is, in fact, a value in the cloud, this will be considered a conflict by the NSUbiquitousKeyValueStore. In this precise case (initial sync), the automatic policy is to revert your local value and prefer the one in the cloud instead. So your application will be notified by the NSUbiquitousKeyValueStoreDidChangeExternallyNotification of this revert with the NSUbiquitousKeyValueStoreInitialSyncChange reason.
If there was in fact no value in the cloud, your local value will be pushed and everyone will be happy.
I want to use RabbitMQ to broadcast the state of an object continuously to any consumers which may be listening. I want to set it up so that when a consumer subscribes, it will pick up the last available state...
Is this possible?
Use a custom last value cache exchange:
e.g.
https://github.com/squaremo/rabbitmq-lvc-plugin
Last value caching exchange:
This is a pretty simple implementation of a last value cache using RabbitMQ's pluggable exchange types feature.
The last value cache is intended to solve problems like the following: say I am using messaging to send notifications of some changing values to clients; now, when a new client connects, it won't know the value until it changes.
The last value exchange acts like a direct exchange (binding keys are compared for equality with routing keys); but, it also keeps track of the last value that was published with each routing key, and when a queue is bound, it automatically enqueues the last value for the binding key.
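As a rough sketch of how this might look in practice: the plugin name and the x-lvc exchange type below are assumptions based on the linked repository, and the exact steps may differ by plugin version and RabbitMQ release, so check the repository's README before relying on them:

```shell
# Enable the plugin after copying it into RabbitMQ's plugins directory
# (plugin name assumed from the repository; verify against its README)
rabbitmq-plugins enable rabbitmq_lvc

# Declare a last-value-cache exchange via the management plugin's
# rabbitmqadmin tool; "x-lvc" is the exchange type the plugin registers
rabbitmqadmin declare exchange name=object-state type=x-lvc
```

Queues bound to that exchange after a publish should then immediately receive the last value published for their binding key.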
It is possible with the Recent History Custom Exchange. It says that it will put the last 20 messages in the queue, so if that is configurable you may be able to change it to the last 1 message and you are done.
If that doesn't work, i.e. the number is fixed at 20, then you may have to process the first 19 messages off the queue and take the status from the 20th. This is a bit of an annoying workaround, but since you know the parameter is always 20 it should be fine.
Finally, if this doesn't suit you, perhaps you can set your consumer to wait until the first status is received, presuming that the status is broadcast reasonably frequently. Once the first status is received, start the rest of the application. I am assuming here that you need the status before doing something else.
http://www.php.net/manual/en/session.configuration.php#ini.session.cookie-lifetime
says that a session.cookie_lifetime of 0 "goes until the browser is closed". Is that the absolute maximum length the session can have (always wiped when the browser is closed), or would setting a session.cookie_lifetime of, say, 23243245234 yield a result that would probably last beyond whenever the browser is closed?
More to the point: what php.ini settings would I need to make sessions last somewhere along the lines of two days? Is there a security reason to recommend a certain (I would expect lower) time limit, and if so, what would the recommended period be?
Intended behavior
Edit: Here is what I want to achieve, perhaps I'll be able to understand the behavior by getting some settings suggestions as opposed to the specific values of the php.ini settings:
I want the session to last as long as possible, up to (approximately) two days.
If the session can last beyond browser close, I would like it to do so (up to approximately two days).
What would I set for php.ini settings (and yes, I have direct edit access to the php.ini) to achieve that?
There are two parameters you need to worry about regarding sessions. The first is the TTL for the cookie, the other is how old a session data file can become before it gets garbage collected.
session.cookie_lifetime determines, in seconds, how long the cookie sent to the browser will last. It defaults to 0, which means until the browser closes. For two days it'd need to be 172800 seconds.
session.gc_maxlifetime determines, also in seconds, how long session data stored on the server can go untouched before it is regarded as garbage and can be deleted.
Setting these two ini directives should give you sessions that survive for two days, except for one more thing of which you need to be aware.
Some operating systems do automated garbage collection on their default temporary directories. If PHP is configured to store session data there and the GC period for the temp directory is short, you may find yourself losing your session before the value in session.gc_maxlifetime is reached. To avoid this, make sure PHP stores session data in a location other than /tmp or whatever the temporary directory of your host operating system is.
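Putting the above together, a minimal php.ini fragment for two-day sessions might look like this (the save_path shown is just an example location; pick any directory outside the system temp dir that is writable by PHP and not world-readable):

```ini
session.cookie_lifetime = 172800   ; cookie survives browser restarts for 2 days
session.gc_maxlifetime  = 172800   ; server-side session data kept for 2 days
session.save_path       = "/var/lib/php/sessions"  ; outside the OS temp dir
```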
It means that the session is lost when the browser is closed.
That is, the cookie containing the session ID is deleted when the browser closes.
session.cookie_lifetime specifies the lifetime of the cookie in seconds which is sent to the browser
The recommended period depends basically on what your session needs to hold. Say you want to keep your user logged in to the website ("remember me"): you should go for the largest period, just as an example.
If you want the session alive for approximately two days, you just count
60 [seconds] * 60 [minutes] * 48 [hours] = 172800 seconds
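The same figure can be checked from a shell:

```shell
echo $((60 * 60 * 48))   # 172800
```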
First off I do not recommend to anyone that they play around with session life unless they are aware of the consequences.
In regard to your question, there are actually two systems in place to manage a session.
Firstly, if you are using PHP's default system, you are employing a file-based session system whereby a file holding the actual data of the client's session is created on your server, normally named after the session ID. The user then has a cookie sent to their browser which holds the session ID client side.
The setting you are referring to ONLY defines the life of the cookie in the client's browser, not the life of the session.
A setting of 0: causes the cookie to last until the browser is closed.
A setting higher than 0: causes the cookie to last that many seconds, and it is only terminated after that time. The browser can be opened and closed as many times as the user wants, and the cookie will remain until it expires.
I believe, though I could be wrong, that the setting is the number of seconds from when the cookie is created rather than an absolute timestamp.
You can change this setting in your php.ini or you can use a combination of session_get_cookie_params and session_set_cookie_params
Clarification
Ignoring the server side: the client holds a cookie containing the session ID which allows them to access their session. If the cookie is lost, the client no longer has the ability to access the session, which is in essence lost.
A value of 0 will cause the client's browser to keep the cookie until the browser is closed. If the user were to keep their browser open for a week, the cookie would be kept for a week.
A value greater than 0 will cause the client's browser to keep the cookie for that number of seconds. E.g. if the value was set to 172800 seconds (2 days), the cookie would be held by the browser for 2 days. If the browser is closed during those 2 days the cookie is not destroyed; it is only lost after 2 days.
Why use 0
Using 0 is more secure because, when a user has finished using your website on a public system and closes the browser, the cookie is lost and the session can no longer be accessed, preventing another user from opening the browser and continuing the session. It is not reliable to presume that a user will end the session manually (e.g. log out), as many don't.
Wait, there is some confusion here...
The "session" is not lost when the browser is closed; it is the "cookie" which is lost on browser close.
A "session" is altogether different from a "cookie". The session stays on the server and can be destroyed explicitly, whereas cookies reside at the client end and can be destroyed manually, after a particular time interval, or on browser close.
Short (but slightly inaccurate) solution
Set session.gc_maxlifetime to 172800 and session.cookie_lifetime to 0.
Extended and more accurate solution
In general, session.gc_maxlifetime is the configuration that controls the session’s lifetime. So setting that directive to 172800 will make the session expire after 172800 seconds (theoretically). But as the calculation of a session’s age is slightly odd, you might want to implement a more accurate expiration scheme. See my answer to How do I expire a PHP session after 30 minutes? for more information.
And by setting the session ID’s cookie lifetime to 0, the cookie will be valid until the browser is closed.
After having my VPS upgraded to CentOS 5.5, I began to experience frozen/disconnected shell sessions if I had neglected them for a certain amount of time. Very annoying. The solution I found was to edit /etc/ssh/sshd_config and set ClientAliveInterval to the desired number of seconds. My understanding is that this essentially substitutes for activity from the client user (me) and so keeps the session from disconnecting.
Having initiated a shell session after making this minor change, I seem to be able to maintain a neglected session. However, just because a thing seems to be working doesn't mean the best, or even correct, approach was necessarily taken.
Is there a better / different way to prevent a shell session from freezing?
The ClientAliveInterval value can increase the sshd timeout; you can try the following commands as well:
echo "TMOUT=300" >> /etc/bashrc
echo "readonly TMOUT" >> /etc/bashrc
echo "export TMOUT" >> /etc/bashrc
No guarantees, but this is what I recently started using on the server. 10 seconds seems like a short time, but I don't trust my phone to keep the connection alive. I suppose you could increase the number of seconds till the problem starts again, then dial it back.
ClientAliveInterval 10
Sets a timeout interval in seconds after which if no data has been received from the client, sshd(8) will send a message through the encrypted channel to request a response from the client. The default is 0, indicating that these messages will not be sent to the client.
ClientAliveCountMax 200
If it fails, keep trying for about 30 minutes; in other words, keep trying 200 times, every 10 seconds. My logic could be flawed, though, depending on what happens after 10 seconds. Assuming the client is quiet (like maybe I'm reading), does the "alive" message reset the max count if successful? Is inactivity considered failure, or is failure a lack of acknowledgement of the alive message? Until I know the answer, I figure it's safe to repeat 200 times.
Similar question here, and some decent recommendations...
https://unix.stackexchange.com/questions/400427/when-to-use-clientaliveinterval-versus-serveraliveinterval
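For reference, the server-side settings discussed above live in /etc/ssh/sshd_config, and a similar effect can be achieved purely client-side (as the linked question explains) without touching the server. Both fragments below use illustrative values, not recommendations:

```
# /etc/ssh/sshd_config (server side) -- restart sshd after editing
ClientAliveInterval 10
ClientAliveCountMax 200

# ~/.ssh/config (client-side alternative): the client sends its own
# keepalive probes, so no server configuration is needed
Host *
    ServerAliveInterval 60
```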