Where does dump.rdb belong? - redis

I remember playing around with some settings and I believe it changed the location of dump.rdb. Now, dump.rdb auto-magically appears at the root of my projects.
Where does it belong, and how would I return it there? Also, how does this location change when in a production environment?

Where does it belong?
Wherever you want.
The default directory is ./, meaning the directory the Redis server was started from.
Edit:
* I am modifying your second question (asked in comment) a little bit.
Is it possible to change to location of dump.rdb? How?
Yes, it is possible. There are two possible ways I can think of.
1.
Modify the Redis configuration file (e.g. redis.conf) and restart the Redis server. This way, every restart after this one will use the new directory. But Redis will not reload any previous data at the first restart (because there will be nothing there to reload from).
To reload previous data, the previous dump.rdb has to be moved to the new location manually before restarting the server.
2.
Set the new directory with the CONFIG SET command, e.g.
CONFIG SET dir path/to/new/directory
* Note that the path has to be a directory.
That's it! But this way is not permanent, because a server restart will use the old directory.
To make the new directory permanent, execute CONFIG REWRITE to rewrite the configuration file. Remember, the Redis server has to have write permission to that file.
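For example, a quick sketch of the second approach from redis-cli (the path /data/redis is just a placeholder; CONFIG REWRITE only works if the server was started with a configuration file):
127.0.0.1:6379> CONFIG SET dir /data/redis
OK
127.0.0.1:6379> CONFIG REWRITE
OK
127.0.0.1:6379> CONFIG GET dir
1) "dir"
2) "/data/redis"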

dir path/to/directory has to be set in the redis config file.
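For instance, a minimal fragment of redis.conf might look like this (the path is only an example):
# directory where the RDB snapshot is written
dir /var/lib/redis
# file name of the snapshot (this is the default)
dbfilename dump.rdb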

Related

How to restore Redis db?

I am following the documentation about how to restore Redis and I am at a complete loss at this point.
The document says
127.0.0.1:6379> SAVE
OK
This command will create the dump.rdb file in your redis directory.
Which it does: it creates the exact same file for me in /usr/lib/redis, which is alright I guess.
To restore Redis data, just move the Redis backup file (dump.rdb) into your Redis directory and start the server. To get your Redis directory, the CONFIG command can be used. The CONFIG GET command is used to read the configuration parameters of a running Redis server.
127.0.0.1:6379> CONFIG get dir
1) "dir"
2) "/var/lib/redis/6379"
Here is where it makes no sense to me. The .rdb file for me is already saved in /var/lib/redis/ and I have no subfolder under that. I don't understand what "dir" is doing there and how I can restore my database.
Please enlighten me. I don't seem to be able to save it or I cannot find it perhaps.
Okay, so basically: the rdb file saved in /var/lib/redis/ is written every time the server stops, and it can be moved to another folder and used as a restore point that Redis loads every time the service starts.
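A minimal restore sketch, assuming the directory reported by CONFIG GET dir is /var/lib/redis/6379 and Redis runs under systemd (adjust the paths and the service name to your setup; note that if AOF is enabled, Redis loads the AOF file instead of dump.rdb):
# stop the server so it does not overwrite the file on shutdown
sudo systemctl stop redis
# drop the backup into the data directory and fix ownership
sudo cp /path/to/backup/dump.rdb /var/lib/redis/6379/dump.rdb
sudo chown redis:redis /var/lib/redis/6379/dump.rdb
# on start, Redis loads dump.rdb from its dir
sudo systemctl start redis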

Is there a way of preserving the original Redis configuration file when CONFIG REWRITE occurs?

Redis automatically changes its configuration file under certain conditions, when CONFIG REWRITE is executed. I sometimes want to preserve the original version of the file.
Is there a way to do this?

Change redis db location

To change the location of the Redis db on Ubuntu 14, is it enough to just copy the db to another path and create a symlink, or is another approach needed?
dir /var/lib/redis
You can do it by sending Redis CONFIG SET dir /new/path, and making the same change in the configuration file or issuing CONFIG REWRITE. The next dump file, e.g. created with BGSAVE, will use the new path.
Your solution is valid if you can afford downtime on your system during this change, in order to maintain data consistency.
Another solution is to set up a second Redis instance on a different port on the same machine that replicates from the first instance, and point your application at the second instance. After a while you can delete the first instance.
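A rough sketch of that second approach (the port 6380 and the data path are placeholders; on Redis versions before 5.0 the command is SLAVEOF instead of REPLICAOF):
# start a second instance writing to the new location
redis-server --port 6380 --dir /new/path/to/data
# make it a replica of the original instance and wait for the sync to finish
redis-cli -p 6380 REPLICAOF 127.0.0.1 6379
# point the application at port 6380, then promote the replica
redis-cli -p 6380 REPLICAOF NO ONE
# finally, shut down the old instance
redis-cli -p 6379 SHUTDOWN NOSAVE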

Is there a way to check if a directory exists in Apache configuration files?

Is there a way to include configuration settings in Apache based on if a directory exists? Basically I have a portable hard drive that I transport between work and home that has some stuff I'm developing on it. I only want the Apache config to load a particular virtual host if the folder exists.
Since Apache 2.4.34 you can now use <IfFile>...</IfFile> which will check to see if a file exists. There's more details on the <IfFile> page.
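For example (a sketch with made-up paths; note that IfFile tests for a file, so point it at a file that only exists when the drive is plugged in, such as the vhost config itself):
<IfFile "/Volumes/Portable/dev/vhost.conf">
    Include "/Volumes/Portable/dev/vhost.conf"
</IfFile>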
No, there seems to be no direct way to do this.
The only thing that might be a solution is the IfDefine directive. You can set defines using the -D parameter when the server is started.
The parameter-name argument is a define as given on the httpd command line via -Dparameter, at the time the server was started.
You might be able to check for the existence of a directory in a batch or bash file, and set the -D parameter accordingly.
Whether that is an option, will depend on how your server is started from the portable hard drive.
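A possible sketch of that approach, with made-up names and paths:
# wrapper script that starts Apache with the define only when the drive is present
if [ -d /Volumes/Portable/dev ]; then
    httpd -D PORTABLE_DRIVE -k start
else
    httpd -k start
fi

# and in the Apache configuration
<IfDefine PORTABLE_DRIVE>
    Include /Volumes/Portable/dev/vhost.conf
</IfDefine>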
I've come up with a solution that seems to work for Linux and OS X, and it hinges on "mountpoints". It might be possible to emulate it within Windows, as well, but you would probably have to get creative with FUSE and/or Cygwin.
If you create an empty folder in your home directory, such as "/Users/username/ExtraVhosts", you can add an apache directive to "Include /Users/username/ExtraVhosts/*".
Then, when you insert your thumb drive, you can mount it somewhere and then use mountpoint "binding" to cross-link the ExtraVhosts folder to a folder on the mobile device.
An OS X example:
I have a thumb drive called 'Cherrybomb'
When I insert it, it always gets mounted to /Volumes/Cherrybomb
I can then use bindfs (sudo port install bindfs) to mount a subfolder of it, like so:
sudo bindfs /Volumes/Cherrybomb/Projects/vhosts /Users/username/ExtraVhosts
Then I can restart apache to read in the updated configuration:
sudo /opt/local/apache2/bin/apachectl restart
At that point, it's just a matter of adding entries in /etc/hosts for server aliases to get picked up.
The linux equivalent would be using the "--bind" parameter of the mount command.
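For example (paths are assumptions):
sudo mount --bind /media/Cherrybomb/Projects/vhosts /home/username/ExtraVhosts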
One caveat: This makes it difficult to quickly unmount the USB drive, since it is always marked as "in use" by apache. Here's a removal procedure:
Close all open files and terminal sessions that are using the drive (the present-working-directory in terminal can cause unmount issues)
Stop apache: sudo /opt/local/apache2/bin/apachectl stop
umount /Users/username/ExtraVhosts
Then you can either unmount it graphically or manually (umount /Volumes/Cherrybomb).
If your work and home machines mount the drive to different locations, you could have multiple vhosts folders - home_vhost, work_vhost, etc - and use that in the binding step.
I hope this helps someone out :)
If you point Apache at the mountpoint only, there shouldn't be an issue. Just don't point Directory directives at directories within the drive.
E.g., if you mount /dev/somedisk /mnt/somevhost, the /mnt/somevhost directory will be there whether or not you have the drive mounted, and Apache will start. Apache doesn't care if the directory is empty, so a <Directory "/mnt/somevhost"> section won't cause the server to fail to start if the drive isn't mounted.
Work with UNIX not against it :-p This solution should be sufficient for development.
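A minimal sketch of that idea (the ServerName is a placeholder, and the mountpoint is the /mnt/somevhost example from above):
<VirtualHost *:80>
    ServerName dev.example.test
    DocumentRoot "/mnt/somevhost"
    <Directory "/mnt/somevhost">
        Require all granted
    </Directory>
</VirtualHost>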

Set apache documentRoot to symlink (for easy deployment)

We are looking for a way to point our Apache DocumentRoot to a symlink.
E.g. DocumentRoot /var/www/html/finalbuild
finalbuild should point to a folder somewhere like /home/user/build3
When we move a new build to /home/user/build4, we want to use a shell script that changes the symbolic link "finalbuild" to this new directory /home/user/build4 and does an Apache graceful restart, to have a new web application version up and running with little risk.
What's the best way to create this symlink and to change this link afterwards using the shell script?
We're using Capistrano to employ a similar setup. However, we've run into a few problems:
After switching to the setup, things appeared to be going fine, but then we started noticing that after running cap deploy, even though the symlink had been changed to point toward the head revision, the browser would still show the old pages, even after multiple refreshes and appending different GET parameters.
At first, we thought it was browser caching, so for development we disabled browser caching via HTTP headers, but this didn't change anything. I then checked to make sure we weren't doing full-page caching server-side, and we weren't. But I then noticed that if I deleted a file in the revision the symlink used to point to, we would get a 404, so Apache was serving up new pages, but it was still following the "old symlink" and serving the pages up from the wrong directory.
This is on shared hosting, so I wasn't able to restart Apache. So I tried deleting the symlink and creating a new one each time. This seemed to work sometimes, but not reliably. It worked probably 25~50% of the time.
Eventually, I found that if I:
removed the existing symlink (deleting it or renaming it);
made a page request, causing Apache to attempt to resolve the symlink but find it missing (resulting in a 404)
then created a new symlink to the new directory
it would cause the docroot to be updated properly most of the time. However, even this isn't perfect, and about 2-5% of the time, when the deploy script ran wget to fetch a page right after renaming the old symlink, it would return the old page rather than a 404.
It seems like Apache is either caching the filesystem, or perhaps the mv command only changed the filesystem in memory while Apache was reading from the filesystem on disk (doesn't really make any sense). In either case, I've taken up someone's recommendation to run sync after the symlink changes, which should get the filesystem on disk in sync with memory, and perhaps the slight delay will also help the wget to return a 404.
I've used symlinks as the apache DocumentRoot in production with no graceful restart necessary. In general, the idea should work. A 403 error probably indicates a permissions error unrelated to the symlink changing. An added wrinkle that you would want to add is making the symlink switch atomic so the symlink always exists. That is to say, at no time is the symlink nonexistent, even for a moment.
The solution to this problem is to effect the change by creating a new symlink and then renaming it over the old symlink. On Unix-like systems, renaming is an atomic operation, and thus the symlink “change” will be atomic too. By hand, the process looks like this:
$ ln -s new current_tmp && mv -Tf current_tmp current
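Putting that together, a deployment script along the lines the question asks about might look like this sketch (paths and the restart command are assumptions to adapt):
#!/bin/sh
# DocumentRoot is /var/www/html/finalbuild, which is a symlink
NEW_BUILD=/home/user/build4

# create the new link under a temporary name, then atomically rename it over
# the old one so the symlink never stops existing (mv -T is GNU coreutils)
ln -s "$NEW_BUILD" /var/www/html/finalbuild_tmp
mv -Tf /var/www/html/finalbuild_tmp /var/www/html/finalbuild

# a graceful restart picks up the change without dropping active connections
sudo apachectl graceful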