rsync --exclude-from=FILE does not work with a relative path: how does the --exclude-from option resolve relative paths?

I am trying to build a list of files to be excluded.
The absolute path works fine!
But when I try to use a relative path, I get the following error:
rsync: failed to open exclude file exclude-list: No such file or directory (2)
rsync error: error in file IO (code 11) at exclude.c(1178) [client=3.1.2]
exclude-list is the name of the exclude file; it is at the root of the source directory.
My syntax is:
rsync -av --delete --exclude-from='exclude-list' /source /destination
I would appreciate any help.

Solved, thanks to a great explanation from Gordon Davisson: a relative --exclude-from path is resolved against the directory you run rsync from, not against the source directory.
I created a new directory:
mkdir mydir
moved the exclude file into that directory:
mv exclude-list mydir/
cd'd into that directory:
cd mydir
and ran the rsync command from that directory.
IT WORKS!
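An alternative (a minimal sketch, assuming the exclude file stays at the root of /source as in the question): point --exclude-from at the file with an absolute path, so the current working directory no longer matters:
rsync -av --delete --exclude-from=/source/exclude-list /source /destination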

Related

execlp says "No such file or directory"

I am trying to run ps -A from code, but calling execlp("/bin/ps", "ps", "-A", NULL); outputs:
/bin/ps: No such file or directory
But I can see ps in that directory, so I have no idea what is wrong.
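No answer is quoted here, but a hedged diagnostic sketch: when exec reports "No such file or directory" for a binary that clearly exists, the missing file is often not the binary itself but something it depends on, such as its dynamic loader (a common symptom when running a binary built for a different architecture). Two quick shell checks:
file /bin/ps   # shows the binary's format and target architecture
ldd /bin/ps    # shows whether its interpreter and shared libraries resolve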

ansible - unarchive - input file not found

I'm getting this error while Ansible (1.9.2) is trying to unpack the file.
19:06:38 TASK: [jmeter | unpack jmeter] ************************************************
19:06:38 fatal: [jmeter01.veryfast.server.jenkins] => input file not found at /tmp/apache-jmeter-2.13.tgz or /tmp/apache-jmeter-2.13.tgz
19:06:38
19:06:38 FATAL: all hosts have already failed -- aborting
19:06:38
I checked on the target server: /tmp/apache-jmeter-2.13.tgz exists and has valid permissions (for testing I even set 777, though that's not required, but I still got the above error message).
I also checked the md5sum of this file (compared it with the one on the Apache JMeter site) and it matches:
# md5sum apache-jmeter-2.13.tgz|grep 53dc44a6379b7b4a57976936f3a65e03
53dc44a6379b7b4a57976936f3a65e03 apache-jmeter-2.13.tgz
When I use tar -xvzf on this file, tar is able to list/extract its contents.
What could I be missing? At this point I'm starting to wonder whether the unarchive module in Ansible has a bug.
My last resort (if I can't get unarchive to work) would be to use command: "tar -xzvf /tmp/.....", but I'd rather not do that.
The default behavior of unarchive is to find the file on your local system, copy it to the remote host, and unpack it there. If you're getting a file-not-found error, I suspect you need to specify copy=no in your task.
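A minimal sketch of the corrected task (the dest path /opt is an assumption, not from the question; copy=no tells the module the archive is already on the remote host):
- name: unpack jmeter
  unarchive: src=/tmp/apache-jmeter-2.13.tgz dest=/opt copy=no
In later Ansible versions (2.x) the equivalent option is remote_src: yes.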

scp not working, says "Is a directory"

I am trying to copy a file into a certain folder on a remote server.
It's an ADrive backup plan, but it comes with scp. I can copy the file if I don't specify a directory. Even if I give a directory that doesn't exist, it says it's a directory:
root@host1 [/usr/src]# scp ftpdelete.sh user@host@scp.adrive.com:/mysql-only/
scp: /mysql-only/: Is a directory
Amazingly enough, in my case it was that the directory didn't exist!! :|
Is the error message a bug?... or is it me? I'm tempted to say the latter.
scp doesn't automatically create a new directory when you copy a single file (it creates directories only when you do a recursive copy). The error message is wrong; it should be "No such file or directory" or similar.
This is a known problem and there is an upstream bugzilla report about it [1].
[1] https://bugzilla.mindrot.org/show_bug.cgi?id=1768
You are copying the sh file into a directory on the server that is expected to exist but in fact does not (so the remote machine thinks you want the file itself to become a directory). Most probably the directory you gave is wrong.
-r  Recursively copy entire directories. Note that scp follows symbolic links encountered in the tree traversal.
But -r doesn't create the target directory for you; you can do the below instead:
ssh remote mkdir /directory
root@host1 [/usr/src]# scp -r ftpdelete.sh user@host@scp.adrive.com:/complete_path/mysql-only/
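A sketch combining the two steps, if the server allows remote commands (host and path are the question's placeholders; mkdir -p also creates any missing parent directories):
ssh user@host@scp.adrive.com 'mkdir -p /complete_path/mysql-only' && scp ftpdelete.sh user@host@scp.adrive.com:/complete_path/mysql-only/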
or
rsync can create the destination directory if it does not exist; its basic command syntax is similar to scp:
$ rsync -r -e ssh ftpdelete.sh me@my-system:/complete_path/mysql-only/
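One caveat worth adding (not from the original answer): plain rsync only creates the final path component. Newer rsync releases (3.2.3 and later) can create the whole missing chain with --mkpath:
$ rsync -r --mkpath -e ssh ftpdelete.sh me@my-system:/complete_path/mysql-only/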
I saw a similar error when I tried to scp to a path relative to the home directory. The error was fixed by removing the unnecessary leading / from the path:
# scp ftpdelete.sh user@host@scp.adrive.com:mysql-only/
rather than
# scp ftpdelete.sh user@host@scp.adrive.com:/mysql-only/
scp -r source_location user@servername:/target_location
Don't put "/" after the directory's name.
Try downloading the file directly to your local root directory and then copying it from there:
root@host1 [/usr/src]# scp user@host:/root/Desktop/file.txt /root/home/

Copying directories recursively in tcl

When I tried copying a directory recursively into another directory, I got an error message. If foo is my source and bar is my target directory, the error is:
"can't overwrite file "bar/foo" with directory "foo""
and my tcl commands are:
file mkdir "bar"
file copy -force ./foo bar
Where am I going wrong?
You can use exec (note that copying a directory with cp needs -r):
exec cp -rf ./foo bar
It always works for me.
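A pure-Tcl alternative (a sketch based on Tcl's documented behavior): file copy refuses to overwrite an existing plain file with a directory even when -force is given, which is exactly what the error says about bar/foo. Deleting the stale file first lets the copy proceed:
file mkdir bar
file delete -force bar/foo   ;# remove the plain file that blocks the copy
file copy -force ./foo bar   ;# now the directory foo can be copied into bar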

PuTTY/SSH: zip all the files within a folder, then download it

So I cd into my folder and list it:
ls
cgi-bin wp-comments-post.php wp-mail.php
googlec3erferfer228fc075b.html wp-commentsrss2.php wp-pass.php
index.php wp-config-sample.php wp-rdf.php
license.txt wp-config.php wp-register.php
php.ini wp-content wp-rss.php
readme.html wp-cron.php wp-rss2.php
wp-activate.php wp-feed.php wp-settings.php
wp-admin wp-includes wp-signup.php
wp-app.php wp-links-opml.php wp-trackback.php
wp-atom.php wp-load.php xmlrpc.php
wp-blog-header.php wp-login.php
(uiserver):u45567318:~/wsb454434801 >
What I want to do is zip all the files within this folder and then download the archive to my computer. I am really new to SSH, and this is a client's website, but I really want to start using the command line for speed. I have been looking at this reference http://ss64.com/bash/ to find the right commands, but would really like some help from somebody, please.
Thanks
cd path/to/folder
zip -r foldername.zip foldername
(-r recurses, so any subdirectories are included automatically.)
Please try this; it should solve your problem.
If you are inside the directory itself, then:
zip -r zipfilename.zip *
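One caveat (an addition, not part of the original answer): * does not match hidden dot-files. To include those as well, zip the current directory itself, writing the archive to the parent so it doesn't try to include itself:
zip -r ../zipfilename.zip .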
Go to the folder's parent directory using cd, then:
zip -r foldername.zip foldername
Ex: zip -r test-bkupname.zip test
Here test is the folder name.
tar zcvf ../my_directory.tar.gz .
will create the my_directory.tar.gz file in the parent directory.
scp ../my_directory.tar.gz username@your-ip:/path/to/place/file
will transfer the file to your computer.
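Note on direction (an addition to the answer above): that scp runs on the server and pushes to your machine, which only works if your computer itself accepts SSH connections. More commonly you would run scp on your own computer and pull the file down (hostname and path are placeholders):
scp username@server:path/to/my_directory.tar.gz .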
Looks like this is the webroot directory.
Why not zip the directory above it (httpdocs / html / whatever), move the archive into the website space, and download it from there?
i.e. go into the directory above the web root. For example, if your web root is /var/www/html/, go into /var/www/ and run the following commands:
zip -r allwebfiles.zip html
mv allwebfiles.zip html/allwebfiles.zip
Then in your web browser go to http://mydomain.com/allwebfiles.zip and just download that file.
When extracting, you'd just need to either extract into /var/www/ OR extract into the web root and move all the files up one level.
To download multiple files at once, use the following command in an sftp session (mget is an sftp command, not a plain ssh command):
mget ./*
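A minimal sketch of the full session (the username and hostname are taken from the prompt in the question and may need adjusting):
sftp u45567318@uiserver
sftp> cd wsb454434801
sftp> mget *
Note that mget fetches plain files and does not recurse into directories such as wp-content, which is why zipping or tarring the folder first, as the answers above suggest, is usually easier.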