Changing the location of the PostgreSQL configuration file

I am installing PostgreSQL on my Debian server using apt-get. The postgresql.conf is located here:
/etc/postgresql/8.4/main/postgresql.conf
Is there a way to change where PostgreSQL looks for this config file without having to build PostgreSQL from source?

You can specify the location of the .conf file when you start PostgreSQL.
From the manual:
If you wish, you can specify the configuration file names and locations individually using the parameters config_file, hba_file and/or ident_file. config_file can only be specified on the postgres command line
Where config_file refers to the location of the postgresql.conf file.
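For example, a minimal sketch (the data directory and config path here are placeholders for your own layout):
postgres -D /var/lib/postgresql/8.4/main -c config_file=/etc/postgresql-custom/postgresql.conf
You can pass hba_file and ident_file the same way if you keep those elsewhere.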

Have a look at the do_ctl_all() function in /usr/share/postgresql-common/init.d-functions and see how it tries to locate the postgresql instance to start at boot:
for c in /etc/postgresql/"$2"/*; do
    [ -e "$c/postgresql.conf" ] || continue
    name=$(basename "$c")
    # evaluate start.conf
    if [ -e "$c/start.conf" ]; then
    ....
This code shows that even though the path to postgresql.conf is not hardcoded (the version number and cluster name are variables), the way it is built by concatenating those parts is hardcoded.
You may still symlink postgresql.conf manually to somewhere else, though I'm not sure how an automatic upgrade of the package would cope with that.
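If you go the symlink route, a minimal sketch (the target path is purely illustrative):
sudo mv /etc/postgresql/8.4/main/postgresql.conf /srv/pg-config/postgresql.conf
sudo ln -s /srv/pg-config/postgresql.conf /etc/postgresql/8.4/main/postgresql.conf
A package upgrade may replace the symlink with a regular file, so check it again afterwards.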

Related

Issue with Postgres on Windows 10

I have installed PostgreSQL 13 on Windows 10.
When I tried to run this command in Git Bash:
$ which Postgres
it returned: which: no Postgres in (/c/Users*ahmedeid/......
Could you help me solve the issue?
It seems that your PostgreSQL installation's bin directory is not on the PATH, so you cannot find the server executable.
You'll have to modify the PATH environment variable appropriately.
Another possibility is that your bash was already running when you installed PostgreSQL, so its PATH setting is out of date. Try closing bash and starting a new one; it may pick up the correct setting.
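If you need to add it yourself, a minimal sketch for Git Bash, assuming the default installation directory (adjust the path to your installation):
# Add the PostgreSQL bin directory to PATH for the current session
export PATH="$PATH:/c/Program Files/PostgreSQL/13/bin"
# Make it permanent for future sessions
echo 'export PATH="$PATH:/c/Program Files/PostgreSQL/13/bin"' >> ~/.bashrc
Afterwards, which psql should resolve to the installed binaries.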

Save database on external hard drive

I am creating some databases using PostgreSQL, but I want to store them on an external hard drive because my computer is short on storage space.
How can I do this?
You can store the database on another disk by specifying it as the data_directory setting. You need to specify this at startup and it will apply to all databases.
You can put it in postgresql.conf:
data_directory = '/volume/path/'
Or, specify it on the command line when you start PostgreSQL:
postgres -c data_directory='/volume/path/'
Reference: 18.2. File Locations
STEP 1: If PostgreSQL is running, stop it:
sudo systemctl stop postgresql
STEP 2: Get the path to access your hard drive.
(on Linux) Find and mount your hard drive:
# Retrieve your device's name with:
sudo fdisk -l
# Then mount your device
sudo mount /dev/DEVICE_NAME YOUR_HD_DIR_PATH
STEP 3: Copy the existing database directory to the new location (in your hard drive) with rsync.
sudo rsync -av /var/lib/postgresql YOUR_HD_DIR_PATH
Then rename the previous Postgres main directory with a .bak extension to prevent conflicts:
sudo mv /var/lib/postgresql/11/main /var/lib/postgresql/11/main.bak
Note: my Postgres version was 11; replace 11 in the paths with your version.
STEP 4: Edit the Postgres configuration file:
sudo nano /etc/postgresql/11/main/postgresql.conf
Change the data_directory line to:
data_directory = 'YOUR_HD_DIR_PATH/postgresql/11/main'
STEP 5: Restart Postgres and check that everything is working:
sudo systemctl start postgresql
pg_lsclusters
The output should show the status as 'online':
Ver Cluster Port Status Owner Data directory Log file
11 main 5432 online postgres YOUR_HD_DIR_PATH/postgresql/11/main /var/log/postgresql/postgresql-11-main.log
Finally, you can access your PostgreSQL instance with:
sudo -u postgres psql
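To confirm the server is really using the new location, you can query it directly (SHOW is a standard PostgreSQL command; the expected value is whatever path you configured above):
sudo -u postgres psql -c "SHOW data_directory;"
It should print YOUR_HD_DIR_PATH/postgresql/11/main.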
You can try following the walkthrough here. It worked well for me and is similar to @Antiez's answer.
Currently I am trying to do the same, and the only conflict I have at the moment seems to be an issue with PostgreSQL's incremental backup and point-in-time recovery processes. I think it has something to do with folder permissions: if I try uploading a ~30MB CSV to the Postgres DB, it crashes and the server will not start again because files cannot be written to the pg_wal directory. The only file in that directory is 000000010000000000000001, and it never moves on to 000000010000000000000002 etc. while writing to a new table.
My Stack Overflow post looking for a solution to this issue can be found here.
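If the root cause is indeed ownership or permissions on the copied data directory (an assumption, not something confirmed above), the usual fix is to give it back to the postgres user and restrict access:
# Assumed fix: PostgreSQL requires its data directory to be owned by postgres and not accessible to other users
sudo chown -R postgres:postgres YOUR_HD_DIR_PATH/postgresql
sudo chmod 700 YOUR_HD_DIR_PATH/postgresql/11/main
sudo systemctl restart postgresql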

Bacula/Bareos disaster recover from scratch using bextract

On Bacula/Bareos, the documentation stresses that the Catalog bootstrap file must be saved somewhere safe. I know the Catalog consists of a MySQL DB dump plus, optionally, the included Bacula/Bareos config files, but how exactly does anyone recover from scratch in case the whole backup infrastructure is gone?
Is it just a matter of installing the Bacula/Bareos software, importing the MySQL dump and the config, and then firing up the Director?
A bit of an old question, but I'll provide some feedback.
If you've done a mysqldump of the database (or a pg_dump, depending on the backend), you essentially have the catalog in its full state. I believe you can simply restore this database to a new server and restore the old config files (these are not stored in the dump but rather in /etc/bareos). Also, make sure that the same user/password is used for the database user as specified in the bareos-dir.conf file, or else you will not be able to connect to the database. Depending on how your storage devices are set up, you may need to adjust the bareos-sd.conf file.
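As a rough sketch of that recovery, assuming the dump was called bareos_catalog.sql and the old /etc/bareos was archived as etc-bareos-backup.tar.gz (both names are illustrative):
# Recreate the catalog database from the dump on the new server
mysql -u bareos -p bareos < bareos_catalog.sql
# Put the old configuration back in place, then start the daemons
sudo tar -xzf etc-bareos-backup.tar.gz -C /
sudo systemctl start bareos-dir bareos-sd bareos-fd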
To answer the other question of the OP: you can use a volume without a catalog. It's a bit cumbersome, but is possible with the following:
http://www.bacula.org/5.0.x-manuals/en/utility/utility/Volume_Utility_Tools.html
For example:
List jobs on a volume: bls -j -V Full_1-1886 FileStorage1
List files on a volume: bls -V Full_1-1886 FileStorage1
Once you have found the file or directory (note: wildcard characters are supported), you can extract it:
bextract -i restoreFiles -V Full_2-1277 FileStorage2 /tmp/
Where:
restoreFiles is a newline-separated file that lists the files/directories to restore
/tmp/ is the destination of the restore
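For example, a minimal restoreFiles list might look like this (the paths are purely illustrative):
# Write the list of paths to restore, one per line, then run bextract with it
cat > restoreFiles <<'EOF'
/etc/hosts
/home/user/documents/
EOF
bextract -i restoreFiles -V Full_2-1277 FileStorage2 /tmp/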

RavenDB external config is being ignored

RavenDB's server (builds 2330 and 2380) seems to ignore the --config parameter:
Raven.Server.exe --config=another.config
The feature has been suggested, confirmed, and implemented. Are there any constraints on the location of the configuration file?
In particular, I cannot seem to even change the port number unless I overwrite the existing configuration file Raven.Server.exe.config, rather than specifying a new configuration file using the command-line option.
There appears to be some strangeness in the method we used to get the config.
This command line argument won't work with 2.0 builds. This will be fixed in 2.5
In the meantime, you can set the values explicitly using:
Raven.Server.exe --set=Raven/Port==9999
Note that the first equals sign appears once, while the second appears twice.

Multiple jobs (-j3)

I am trying to run a GNU make file with multiple jobs.
When I try executing 'make.exe -r -j3', I receive the following two errors:
make.exe: Do not specify -j or --jobs if sh.exe is not available.
make.exe: Resetting make for single job mode.
Do I have to add '$(SH) -c' somewhere in the makefile? If so, where?
The error message suggests that make cannot find sh.exe. The file names indicate you are probably on Cygwin. I would investigate setting the PATH to include the location of sh.exe, or defining the value of SHELL to the name (or even the full path) of your shell.
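For instance, a minimal sketch from a Windows command prompt, assuming a default Cygwin install under C:\cygwin (the path is an assumption; adjust it to your setup):
rem Put sh.exe on the PATH for this session, then run make in parallel
set PATH=C:\cygwin\bin;%PATH%
make.exe -r -j3
Alternatively, point make at the shell directly by defining SHELL on the command line:
make.exe -r -j3 SHELL=C:/cygwin/bin/sh.exe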
Are you running this on Windows (more specifically, in the Windows shell)? If so, you might want to read this:
http://www.gnu.org/software/make/manual/make.html#Parallel
more specifically:
On MS-DOS, the '-j' option has no effect, since that system doesn't support multi-processing.
Once again, assuming you're running on Windows, you should get MinGW or Cygwin.