Since my Ubuntu server was clearing sessions too early, I decided to use another folder to store the sessions. So I use something like the following:
session_save_path(SESSION_PATH);
ini_set('session.gc_probability', 1);
session_start();
I include this in every PHP file that needs a session_start() [I hope this is the right implementation].
My logout.php file does seem to clear the stored sessions in this custom directory. However my question is what if the user doesn't logout and just closes the browser. Do these session files from the custom folder get deleted over time?
Yes, they will be cleaned up by the PHP engine.
Garbage collection may occur during session start (depending on session.gc_probability and session.gc_divisor).
Ref: PHP Documents
On the other hand, it may be enough to set the session.gc_maxlifetime option.
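Putting the settings above together, here is a hedged sketch. SESSION_PATH is an assumed constant name (as in the question); it just has to point at a directory the PHP process can write to, and the lifetime/divisor values are illustrative:

```php
<?php
// SESSION_PATH is an assumed constant: any directory the PHP process
// can write to. Created here so the sketch runs anywhere.
define('SESSION_PATH', sys_get_temp_dir() . '/my-app-sessions');
if (!is_dir(SESSION_PATH)) {
    mkdir(SESSION_PATH, 0700, true);
}

session_save_path(SESSION_PATH);

// Session files idle for more than 30 minutes become eligible for deletion...
ini_set('session.gc_maxlifetime', '1800');
// ...and garbage collection runs on roughly 1 in 100 session starts.
ini_set('session.gc_probability', '1');
ini_set('session.gc_divisor', '100');

session_start();
```

Note that garbage collection only runs when some request actually starts a session, so files linger a little past gc_maxlifetime; that is normal.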
I am trying to log specific requests by users to determine if their Lucee request has completed, is still running, etc. The purpose of this is to fire off automated processes on demand and assure end users that the process has already started, so they do not fire off a second one. I have found HTTP_X_REQUEST_ID in other searches, but when dumping the CGI variables, it is not listed. I have set CGI variables to Writable rather than Read Only, but it is still not showing up. Is it something I must add in IIS, or a setting in Lucee Admin that I am overlooking? Is there a different way to go about this rather than HTTP_X_REQUEST_ID? Any help is appreciated.
Have you considered using <cfflush>? When the Lucee request starts, you can send partial information to the client informing them that the process has started on the server.
I remember playing around with some settings and I believe it changed the location of dump.rdb. Now, dump.rdb auto-magically appears at the root of my projects.
Where does it belong, and how would I return it there? Also, how does this location change when in a production environment?
Where does it belong?
Wherever you want.
The default directory is ./, meaning the directory the Redis server was started from.
Edit:
* I am modifying your second question (asked in comment) a little bit.
Is it possible to change to location of dump.rdb? How?
Yes, it is possible. There are two ways I can think of.
1.
Modify the Redis configuration file (e.g. redis.conf) and restart the Redis server. This way, every restart after this one will use the new directory. But Redis will not reload any previous data at the first restart (because there is nothing in the new directory to reload from).
To reload previous data, the previous dump.rdb would have to be moved to the new location manually before restarting the server.
2.
Set the new directory with the CONFIG SET command, e.g.
CONFIG SET dir path/to/new/directory
* Note that the path has to be a directory.
That's it! But this way is not permanent, because a server restart will go back to the old directory.
To make the new directory permanent, execute CONFIG REWRITE to rewrite the configuration file. Remember, the Redis server has to have write permission to that file.
dir path/to/directory has to be set in the redis config file.
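For reference, the relevant redis.conf lines look like this (the path is illustrative; it must be an existing directory that the Redis server user can write to):

```
# Directory where the RDB snapshot (and AOF files) are written.
dir /var/lib/redis

# Filename of the snapshot inside that directory.
dbfilename dump.rdb
```

With both set, the snapshot ends up at /var/lib/redis/dump.rdb instead of the server's startup directory.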
(LAMP server configuration)
As a workaround for another problem, I need PHP to be able to access local files, but prevent these files from being served over http by Apache.
Normally, I would just use .htaccess to accomplish this; however, due to institutional restrictions, I cannot. I also can't touch php.ini, although I can use ini_set() within PHP.
As a creative solution, I thought that if PHP executes as its own Linux user (not as apache), I could use normal chowns and chmods to accomplish this.
Again, the goal is simply to have a directory of files that Apache will not serve, but PHP can access.
I'm open to any suggestions.
Put the files outside of your web accessible root (DocumentRoot), but keep them accessible via PHP.
Suggestion:
/sites
/sites/my.site.com
/sites/my.site.com/data // <-- data goes here
/sites/my.site.com/web // <-- web root is here
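With that layout, a script under web/ can still read the data directory through the filesystem, while Apache never serves it because it sits outside the DocumentRoot. A hedged sketch (the sandbox setup at the top exists only so the snippet runs anywhere; the file name report.csv is made up):

```php
<?php
// Build the suggested layout in a temp sandbox so the sketch is runnable.
$root = sys_get_temp_dir() . '/site-demo';
@mkdir($root . '/data', 0700, true);
@mkdir($root . '/web', 0700, true);
file_put_contents($root . '/data/report.csv', "id,name\n1,alice\n");

// In a real script living in web/, dirname(__DIR__) . '/data' points at
// the sibling data directory: readable by PHP, never served by Apache.
$dataDir  = $root . '/data';
$contents = file_get_contents($dataDir . '/report.csv');
```

The only requirement is that the PHP (or Apache) user has filesystem read permission on the data directory.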
Here's a thought: set the permissions on the files to be inaccessible even to the owner; then, when PHP needs them, chmod() them, read them, then chmod() them back to inaccessible.
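A minimal sketch of that toggle, with an illustrative file created so the snippet is self-contained:

```php
<?php
// Illustrative file; the guard resets permissions from any earlier run.
$file = sys_get_temp_dir() . '/private-demo.txt';
if (file_exists($file)) {
    chmod($file, 0600);
}
file_put_contents($file, 'secret');
chmod($file, 0000);                // inaccessible, even to the owner

// When PHP needs the file: open it up, read it, lock it down again.
chmod($file, 0600);
$data = file_get_contents($file);
chmod($file, 0000);
```

Be aware this is racy: two concurrent requests can interleave the chmod() calls, and anything running as the same user could read the file during the open window. Keeping the files outside the DocumentRoot, as suggested above, is generally the safer option.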
I'm having some trouble getting a Joomla 1.5 installation to work on my computer.
When I login, it seems to work as JApplication::login() returns true. Also when I debug and var_dump the response, I can see the user.
However, when I reload the page, I'm still not logged in so it seems Joomla didn't save the current session.
I've looked in the framework and can't find how sessions are saved. Also the log is empty.
Does anybody know what could be the problem?
Do you have the same problem in the administrator and in the frontend? If you do have access to your administrator, it might be a frontend problem. You can check the file configuration.php in your Joomla root and look for this line:
var $session_handler = 'database';
If it's empty, it means you had some problem saving your configuration. The usual values for session_handler are 'database' or 'file', and there may be others depending on your setup.
If your session_handler is ok, then check this value too:
var $lifetime = '15';
It's the session lifetime (in minutes), if I'm not mistaken.
If you're still having problems, you should check in a separate script (outside Joomla) whether you can use sessions at all, just to make sure there are no problems with your local setup.
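For that separate check, something as small as this is enough (the file name is just a suggestion) — load it twice in the browser, and the counter should go up if sessions work on the server:

```php
<?php
// session_test.php -- standalone session sanity check, independent of Joomla.
session_start();
if (!isset($_SESSION['hits'])) {
    $_SESSION['hits'] = 0;
}
$_SESSION['hits']++;
echo 'Page loads in this session: ' . $_SESSION['hits'];
```

If the counter resets on every load, the problem is in your PHP session setup, not in Joomla.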
I hope it helped!
I have read about a technique that involves writing a rendered dynamic page to disk and, via mod_rewrite, serving that file when it exists. I was thinking about cleaning out the cached version every X minutes using a cron job.
I was wondering if this was a viable option or if there were better alternatives that I am not aware of.
(Note that I'm on a shared machine and mod_cache is not an option.)
You could use your cron job to run the scripts and redirect the output to a file.
If you had a php file index.php, all you would have to do is run
php index.php > (location of static file)
You just have to make sure that your script runs the same on the command line as it does when served by Apache.
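As a concrete sketch, a crontab entry along those lines, regenerating the static copy every 10 minutes (every path here is an assumption about your setup):

```
*/10 * * * * /usr/bin/php /home/user/site/index.php > /home/user/site/cache/index.html
```

Writing to a temporary file first and then mv-ing it into place is a worthwhile refinement: it avoids serving a half-written page if a request arrives mid-write.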
I would use a cache at the application level, because the application knows best when the cached version is out of date, and it is more flexible and powerful when it comes to cache invalidation.
Does the page need to be regenerated every so often simply because time has passed? Or should the static version be refreshed only after an update to the page?
If the latter, you could write a script that makes a copy of the just-edited page and saves it under its static filename. That should lighten the write load, since in that scenario you wouldn't need a fresh static copy unless a change was made that needs to be shown.
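The application-level idea can be sketched as a small TTL-based helper (all names here are illustrative, not from the original posts):

```php
<?php
// Serve the cached copy while it is younger than $ttl seconds;
// otherwise rebuild the page via $render and store the result.
function cached_render($cacheFile, $ttl, $render)
{
    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return file_get_contents($cacheFile);
    }
    $html = $render();
    file_put_contents($cacheFile, $html, LOCK_EX);
    return $html;
}

// Usage sketch: a 10-minute cache for a hypothetical expensive page.
$page = cached_render(sys_get_temp_dir() . '/page-cache.html', 600, function () {
    return '<h1>Expensive page</h1>';
});
```

For the regenerate-on-edit variant, you would skip the TTL check entirely and instead call the render-and-save step from whatever code handles page updates.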