Barman PostgreSQL incoming WALs directory

I have a problem with the incoming WALs directory in Barman, a backup tool for PostgreSQL databases.
On my database server I have the following in postgresql.conf:
wal_level = 'archive'
archive_mode = on
archive_command = 'rsync -a %p barman@mybarmanserverip:INCOMING_WALS_DIRECTORY/%f'
On my Barman server, running "barman show-server myservername" tells me that my incoming_wals_directory is
/var/lib/barman/myservername/incoming
The command barman check myservername returns "OK" on all points, but when I try to take a backup with barman backup myservername, the first 3 points pass and then the step "Asking PostgreSQL server to finalize the backup" never ends.
Where is my mistake?

I had this issue and it turned out to be a problem with rsync.
To check whether that's the case for you, try to rsync a random file:
rsync -zvh random_file user@remote_host:/tmp/test
If the output is something like:
protocol version mismatch -- is your shell clean?
then there are two possible reasons:
the rsync versions are not the same on the two servers
some text is printed when you ssh to the remote server, and rsync does not like it
To fix the first issue, here is what I did:
be sure that rsync --version is the same on both machines:
on your local env run rsync --version
from your local machine (to the remote) run ssh login@remote_host "rsync --version"
(Install the correct version if they don't match.)
To fix the second issue, you must add something to your .bashrc file that prevents text output after the ssh connection on non-interactive sessions (e.g. "Last login: Thu Sep..." makes rsync fail).
I put that at the top of my .bashrc file :
case $- in
*i*) ;;
*) return;;
esac
Then rsync works fine, and the initial barman backup command finishes successfully.
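A quick way to confirm the remote shell is clean (a sketch, assuming the same barman@mybarmanserverip target as in the question, run as the same user that rsync runs as): a non-interactive command should produce zero bytes of output, otherwise the extra text corrupts the rsync protocol stream.
ssh barman@mybarmanserverip /bin/true > ssh_output.txt
wc -c ssh_output.txt   # should report 0 bytes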

Replace INCOMING_WALS_DIRECTORY with your incoming folder path, which you can find using the command barman show-server main:
archive_command = 'rsync -a %p barman@mybarmanserverip:/var/lib/barman/main/incoming/%f'
Make sure you replace the INCOMING_WALS_DIRECTORY placeholder with the value returned by the barman show-server main command above.
Also make sure that the postgres user can ssh to the Barman server correctly.
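As a quick sanity check (a sketch reusing the hostname and incoming path from this answer; substitute your own values), confirm from the database server that the postgres user can reach the Barman server non-interactively and that the incoming directory exists:
sudo -u postgres ssh barman@mybarmanserverip "ls -ld /var/lib/barman/main/incoming"
If this prompts for a password or errors out, fix the postgres user's SSH key setup before retrying the backup.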

How to copy file from server to local using ssh and sudo su?

Somewhat related to: Copying files from server to local computer using SSH
When debugging on DEV server I can see logs with
# Bash for Windows
ssh username@ip
# On server as username
sudo su
# On server as su
cat path/to/log.file
The problem is that while every line of the file is indeed printed out, the CLI seems to have a height limit, and I can only see the last "so many" lines after the printing is done.
If there is a better solution, please bring it forward; otherwise, how do I copy the "log.file" to my computer?
Note: I don't have a password for my username, because the user is created with echo "$USER ALL=(ALL:ALL) NOPASSWD: ALL" | tee /etc/sudoers.d/$USER.
After sudo su copy the file to the /tmp folder on the server with
cp path/to/log.file /tmp/log.file
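One caveat: the copy in /tmp is created by root and may inherit the log's restrictive permissions, in which case the scp below fails with "Permission denied". A hedged extra step while still in the sudo su session:
chmod a+r /tmp/log.file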
After that the standard command should work
scp username@ip:/tmp/log.file log.file
log.file is now in the current directory (echo $PWD).

Apache Airflow command not found with SSHOperator

I am trying to use the SSHOperator to SSH into a remote machine and run an external application through the command line. I have set up the SSH connection via the admin page.
This section of code is used to define the commands and the SSH connection to the external machine.
sshHook = SSHHook(ssh_conn_id='remote_comp')
command_1 = """
cd /files/232-065/Rans
bash run.sh
"""
Where 'run.sh' runs the shell script:
#!/bin/sh
starccm+ -batch run_export.java Rans_Model.sim
Which simply runs the commercial software starccm+ with some options I have specified.
This section defines the task:
inlet_profile = SSHOperator(
    task_id='inlet_profile',
    ssh_hook=sshHook,
    command=command_1
)
I have confirmed the SSH connection works by giving a simple 'ls' command and checking the output.
The error that I get is:
bash run.sh, error: run.sh: line 2: starccm+: command not found
The command in 'run.sh' works when I am logged into the machine (it does not require a GUI). This makes me think that there is a problem with the SSH session and it is not the same as the one that Apache Airflow logs into, but I am not sure how to solve this problem.
Does anyone have any experience with this?
There is no issue with the SSH connection (at least judging from the error message). The issue is with the starccm+ installation path.
Please check the installation path of starccm+.
Check if the installation path is part of the $PATH environment variable:
$ echo $PATH
If not, then install it in a standard location like /bin or /usr/bin (provided they are included in $PATH), or export the installation directory into the PATH variable like this:
$ export PATH=$PATH:/<absolute_path>
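Because the SSHOperator runs a non-interactive, non-login shell, a PATH set up in an interactive profile may not be picked up; one option is to export it inside run.sh itself. A sketch (the /opt/starccm/bin directory below is a placeholder; use whatever which starccm+ reports in an interactive session on the remote machine):
#!/bin/sh
# placeholder install location -- replace with the real starccm+ directory
export PATH="$PATH:/opt/starccm/bin"
starccm+ -batch run_export.java Rans_Model.sim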
It is not ideal, but if you struggle with setting the PATH variable you can run starccm+ by giving the full path, like:
/directory/where/star/is/installed/starccm+ -batch run_export.java Rans_Model.sim

rsync to remote location exits with code 12

I am trying to rsync a local folder to a remote location. This is a command that I ran successfully a week ago, but now if I run:
rsync -vrtzu \
    --chown=user:webadm \
    --delete \
    --exclude-from=.rsyncignore \
    FOLDER/ \
    USER@REMOTE:/DESTINATION
Then I get the following error message:
zsh:1: no matches found: --usermap=*:USER
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(235) [sender=3.1.3]
make: *** [makefile:39: push] Error 12
The command is run from a makefile, hence the last line.
I am using a regular WSL2 Ubuntu shell, not zsh.
I am able to ssh into the remote location with USER@REMOTE.
I have also checked that both locations have rsync installed (same version).
Finally, there is plenty of disk space available on the remote location.
Any pointers? What should I be checking to improve my diagnostic?
Thanks in advance!
This can happen when the remote shell messes with the command. I'm not sure exactly why or what it does, but it modifies the escaping so that the argument becomes invalid.
In your case the remote shell chokes on --usermap=*:USER at login (the zsh "no matches found" line).
The solution is to change the remote (zsh) shell to bash using the chsh command.
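For example (a sketch, assuming bash is installed at /bin/bash on the remote host), run this on the remote machine and reconnect so that new SSH sessions pick up the new shell:
chsh -s /bin/bash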
I'm pretty sure this is an rsync bug:
zsh:1: no matches found: --usermap=*:USER
It only happens when the remote machine's default shell is zsh.
It was fixed somewhere between rsync 3.2.3 (where it's broken) and 3.2.5 (where the bug is gone).
You can verify this by passing -vv to rsync. One of the first output lines then shows the command invocation rsync performs on the remote server via SSH.
On a broken version, it prints e.g.:
... ssh ... rsync --server -vvnlogDtpRe.LsfxCIvu "--usermap=*:user" "--groupmap=*:webadm"
On a fixed version, it prints e.g.:
... ssh ... rsync --server -vvnlogDtpRe.LsfxCIvu "--usermap=\*:user" "--groupmap=\*:webadm"
As you can see, they inserted a \ so that the string is no longer expanded by zsh.
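If upgrading rsync on both ends is not practical, one possible workaround (untested here, but based on rsync's documented -s/--protect-args option, which sends filenames and most options to the remote rsync without letting the remote shell interpret them) is:
rsync -s -vrtzu \
    --chown=user:webadm \
    --delete \
    --exclude-from=.rsyncignore \
    FOLDER/ \
    USER@REMOTE:/DESTINATION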

Can't rsync into subfolder, or even ssh at this point

I need to rsync log files like this:
rsync --progress -rvze ssh name@host:/path/to/folder/*.log
When I run this command though, I get an error:
rsync: getcwd(): No such file or directory (2)
No such file or directory? That's odd. So I try to ssh directly:
ssh name@host
It prompts me to enter my name, I do, then I type
cd /path/to/folder
which works fine (log files are present).
I double-checked my ssh keys and everything seems to be in order there, but for some reason I can't ssh into a subfolder on this host, so there's no way I can get rsync working correctly.
EDIT:
Running the identical rsync command on my Mac works fine. Running it on my Ubuntu EC2 instance still fails.
Are you sure there are any log files at all? If not, this command will fail with the 'No such file or directory' error.
Rather use:
rsync --progress --include='*.log' -rvze ssh name@host:/path/to/folder/ local_folder
The 'direct' ssh syntax you used in your second test is not supported:
ssh name@host:/path/to/folder/
because ssh will use host:/path/to/folder/ as the hostname.
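A side note on the --include='*.log' suggestion: with -r and no exclude rule, rsync still copies every file, because an include only matters when something else is excluded. The usual filter idiom for "only .log files, recursively" looks roughly like this (same placeholder host and paths as above):
rsync --progress -rvz -e ssh \
    --include='*/' --include='*.log' --exclude='*' \
    name@host:/path/to/folder/ local_folder/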

Enabling SSH compression in Sourcetree Windows for a mercurial repository

I am on Windows 7 - Sourcetree 1.4.1.0 - Embedded Mercurial 2.6.1
The target is a private Mercurial repository hosted on Bitbucket.
How do I enable SSH compression so that my transactions are faster?
A quick Google search yielded this document:
Edit the Mercurial global configuration file (~/.hgrc). Add the following line to the [ui] section:
ssh = ssh -C
When you are done the file should look similar to the following:
[ui]
# Name data to appear in commits
username = Mary Anthony <manthony@atlassian.com>
ssh = ssh -C
On Windows, the Mercurial settings file is located here:
C:\Users\{username}\AppData\Local\Atlassian\SourceTree\hg_local\Mercurial.ini
The contents of the file are actually not to be changed, as its header explains:
; System-wide Mercurial config file.
;
; !!! Do Not Edit This File !!!
;
; This file will be replaced by the installer on every upgrade.
; Editing this file can cause strange side effects on Vista.
;
; http://bitbucket.org/tortoisehg/stable/issue/135
;
; To change settings you see in this file, override (or enable) them in
; your user Mercurial.ini file, where USERNAME is your Windows user name:
;
; XP or older - C:\Documents and Settings\USERNAME\Mercurial.ini
; Vista or later - C:\Users\USERNAME\Mercurial.ini
I don't have a Mac, so I can't test this, but this Atlassian answer states that the location of this file for Mac is:
/Applications/SourceTree.app/Contents/Resources/mercurial_local/hg_local/
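Putting these pieces together, the override itself is just the [ui] entry from the first answer placed in the per-user file named above (e.g. C:\Users\USERNAME\Mercurial.ini on Vista or later), assuming an ssh client is on your PATH; if Sourcetree is actually using TortoisePlink, see the next answer, which passes -C to plink instead:
[ui]
ssh = ssh -C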
In my case, I'm using TortoiseHg, but the concept should be the same.
Here is my original c:\somerepo\.hg\hgrc file:
[paths]
default = ssh://hg@bitbucket.org/someuser/somerepo
So what's happening with ssh? Let's debug a pull, hg pull --debug, on the command line. I noticed it runs C:\Program Files\TortoiseHg\lib\TortoisePlink.exe instead of ssh to make the call:
PS C:\somerepo> hg pull --debug
pulling from ssh://hg@bitbucket.org/someuser/somerepo
running "C:\Program Files\TortoiseHg\lib\TortoisePlink.exe" -ssh -2 hg@bitbucket.org "hg -R someuser/somerepo serve --stdio"
sending hello command
sending between command
abort: no suitable response from remote hg!
So let's just reuse the call, add compression (yay!), non-interactive (batch) and our key:
[paths]
default = ssh://hg@bitbucket.org/someuser/somerepo
[ui]
ssh = "C:\Program Files\TortoiseHg\lib\TortoisePlink.exe" -ssh -2 -C -batch -i "c:\keys\somekey.ppk"