rsync succeeds or fails depending on the destination directory - ssh

I am conducting both experiments below with an empty /home/pantelis folder (which is the destination directory).
This command succeeds:
rsync -zalP --progress --exclude=.git --exclude=.vscode /Users/pantelis/Workspace/my-work/terragrunt/modules/ my-server:/home/pantelis/my-work/
i.e. on my-server, the my-work directory is created and contains the contents of /Users/pantelis/Workspace/my-work/terragrunt/modules/
On the remote machine, I now delete /home/pantelis/my-work so /home/pantelis is once again empty.
I then run the rsync command as follows, which now fails:
▶ rsync -zalP --progress --exclude=.git --exclude=.vscode /Users/pantelis/Workspace/my-work/terragrunt/modules/ my-server:/home/pantelis/my-work/terragrunt/modules/
building file list ...
1114 files to consider
rsync: connection unexpectedly closed (8 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at /System/Volumes/Data/SWE/macOS/BuildRoots/d7e177bcf5/Library/Caches/com.apple.xbs/Sources/rsync/rsync-55/rsync/io.c(453) [sender=2.6.9]
I am trying this because I want the remote file structure to match the local one.
Why is it failing in the second attempt?
Is it because (for some inherent reason) rsync cannot create any directory other than the leaf (my-work)? If so, I have tried the --relative option as suggested here, but with no success whatsoever.

Add this to your command to create the missing directory hierarchy on my-server:
--rsync-path="mkdir -p /home/pantelis/my-work/terragrunt/modules/ && rsync"

Related

SSH failed: No such file or directory (2)

I try to copy files with rsync, and I have checked multiple times that the address is correct, but I still get the feedback: no such directory. The code is:
rsync -azvrP -e ssh master_123@165.x.x.115:/applications/123/public_html/wp-content/uploads/2023/ /applications/321/public_html/wp-content/uploads/2023/
What could be the error?

rsync to remote location exits with code 12

I am trying to rsync a local folder to a remote location. This is a command that I ran successfully a week ago, but now when I run:
rsync -vrtzu \
    --chown=user:webadm \
    --delete \
    --exclude-from=.rsyncignore \
    FOLDER/ \
    USER@REMOTE:/DESTINATION
Then I get the following error message:
zsh:1: no matches found: --usermap=*:USER
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(235) [sender=3.1.3]
make: *** [makefile:39: push] Error 12
The command is run from a makefile, hence the last line.
I am using a regular WSL2 Ubuntu shell, not zsh.
I am able to ssh into the remote location with USER@REMOTE.
I have also checked that both locations have rsync installed (same version).
Finally, there is plenty of disk space available on the remote location.
Any pointers? What should I be checking to diagnose this further?
Thanks in advance!
This can happen when the remote login shell mangles the command. I'm not sure exactly why, but it changes the escaping so that an argument becomes invalid.
In your case the remote zsh rejects --usermap=*:USER at login (the unquoted * is treated as a glob that matches nothing).
The solution is to change the remote (zsh) login shell to bash using the chsh command, as sketched below.
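A minimal sketch, assuming bash is installed at /bin/bash on the remote machine:

# run on the remote machine: make bash the login shell for the current user
chsh -s /bin/bash
# log out and back in, then retry the rsync command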
I'm pretty sure this is an rsync bug:
zsh:1: no matches found: --usermap=*:USER
It only happens when the remote machine's default shell is zsh.
It was fixed somewhere between rsync 3.2.3 (where it's broken) and 3.2.5 (where the bug is gone).
You can verify this by passing -vv to rsync: one of the first output lines shows which command invocation rsync performs on the remote server via SSH.
On a broken version, it prints e.g.:
... ssh ... rsync --server -vvnlogDtpRe.LsfxCIvu "--usermap=*:user" "--groupmap=*:webadm"
On a fixed version, it prints e.g.:
... ssh ... rsync --server -vvnlogDtpRe.LsfxCIvu "--usermap=\*:user" "--groupmap=\*:webadm"
As you can see, a \ was inserted so that the * is no longer glob-expanded by zsh.
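A hedged sketch of how to check which versions are in play on both ends before deciding whether to upgrade (USER@REMOTE stands in for the real login, as in the question):

# local rsync version
rsync --version | head -n 1
# remote rsync version (the one that runs the --server invocation above)
ssh USER@REMOTE rsync --version | head -n 1
# dry run with -vv to inspect the remote invocation and its quoting
rsync -vv -n FOLDER/ USER@REMOTE:/DESTINATION 2>&1 | head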

Scp copies data to server from GitLab and exits with code 1: No such file or directory

I have my script on GitLab, and I would like to copy my scripts to my server in a deploy pipeline. I created the env variables USER_IP and USER_PASS for the sshpass command. This is the output of my pipeline:
$ sshpass -e scp -o stricthostkeychecking=no -r project root@${USER_IP}:/opt
Warning: Permanently added '1.2.3.4' (ECDSA) to the list of known hosts.
root@: No such file or directory
ERROR: Job failed: exit code 1
The problem is that the pipeline failed, but the data has been copied to the server. So why did the job fail with exit code 1 and the message root@: No such file or directory?
Is there a better way to deploy data to a server from GitLab?
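A hedged diagnostic sketch rather than a confirmed answer: a stray root@: token often means the ${USER_IP} expansion was empty or was split into extra words (e.g. a trailing space or newline in the variable), so printing the expansion and quoting the target is a quick first check:

# show exactly what the job expands; brackets make stray whitespace visible
echo "target: [root@${USER_IP}]"
# quote the target so the shell cannot split it into separate arguments
sshpass -e scp -o StrictHostKeyChecking=no -r project "root@${USER_IP}:/opt"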

Rsync doesn't run from cron, but works manually

I have a simple script for backing up files from my server. It does the following:
Connects to the server with SSH
Creates a MySQL dump file
Tars some folders
Exits
Starts rsnapshot to download the folder where the tar.gz and sql files are located
SSHes back to the server just to clean up files
Exits
On the top of my crontab I've given the following
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
SHELL=/bin/bash
However, the script sometimes starts, sometimes not. Also, rsnapshot says the following for a few of my servers when running from cron:
/usr/bin/rsnapshot -c /backup/configs/myserver.com.conf daily: ERROR: /usr/bin/rsync returned 255 while processing user#myserver.com:/home/user/serverdump/
Do you have any ideas about both issues?
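A hedged diagnostic sketch rather than a confirmed answer: rsync returning 255 usually means the underlying ssh connection itself failed, and under cron the usual suspects are a missing SSH agent or an unreadable key. Capturing the job's output and pointing rsnapshot at an explicit identity file narrows it down (the schedule, log path, and key path below are assumptions):

# crontab: capture everything the script prints so failures become visible
30 2 * * * /backup/scripts/backup.sh >> /var/log/backup-cron.log 2>&1

# rsnapshot conf (fields are tab-separated): use an explicit key, since cron has no agent
ssh_args	-i /root/.ssh/backup_key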

ansible - unarchive - input file not found

I'm getting this error while Ansible (1.9.2) is trying to unpack the file.
19:06:38 TASK: [jmeter | unpack jmeter] ************************************************
19:06:38 fatal: [jmeter01.veryfast.server.jenkins] => input file not found at /tmp/apache-jmeter-2.13.tgz or /tmp/apache-jmeter-2.13.tgz
19:06:38
19:06:38 FATAL: all hosts have already failed -- aborting
19:06:38
I checked on the target server: the /tmp/apache-jmeter-2.13.tgz file exists and has valid permissions (for testing I even gave it 777, though that's not required, but I still got the above error message).
I also checked the md5sum of this file (compared it with what's on the Apache JMeter site) -- it matches!
# md5sum apache-jmeter-2.13.tgz|grep 53dc44a6379b7b4a57976936f3a65e03
53dc44a6379b7b4a57976936f3a65e03 apache-jmeter-2.13.tgz
When I use tar -xvzf on this file, tar is able to show/extract the contents of the .tgz file.
What could I be missing? At this point, I'm wondering whether the unarchive module in Ansible has some bug.
My last resort (if I can't get unarchive in Ansible to work) would be to use command: "tar -xzvf /tmp/....." but I'd prefer not to do that.
The default behavior of unarchive is to find the file on your local system, copy it to the remote host, and unpack it there. I suspect that if you're getting a file-not-found error, you need to specify copy=no in your task.
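A minimal sketch of what that task might look like in Ansible 1.9-era syntax (the dest path is an assumption; copy=no tells unarchive that the archive is already on the target host):

# unpack an archive that already exists on the remote machine
- name: unpack jmeter
  unarchive: src=/tmp/apache-jmeter-2.13.tgz dest=/opt copy=no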