rsync: create non-existent remote folder when using modules (:: syntax)

A similar question has been asked before but this one is different.
A similar question has been asked before but this one is different.
I am rsyncing a single file to a remote server, and the destination directory does not exist. I would like the destination directory to be created if it does not exist. I am using the :: syntax, which uses modules, and I could not find a similar case in the forums.
Here is the syntax; remote_dir2 does not exist and I want it to be created.
rsync -avz --password-file=<file> <source-file> remote-user@remote-server::remote_dir1/remote_dir2
Note: there is a module named remote-user in /etc/rsyncd.conf on the remote server, and the connection and everything else work, except that the source file ends up in remote_dir1 under the name remote_dir2.
Is there any solution different from the ones mentioned below?
I do not want to open an SSH session to the remote server just to run 'mkdir'.
I do not want to use -R, --relative, because the directory structure names in the source and destination are very different.
I also know about the trick mentioned here, but it does not work when you specify a module. There is no error or anything in the logs; apparently it just gets ignored.
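One workaround that stays within the module protocol is to pre-create the directory by syncing an empty local tree, since rsync creates the directories it copies. A sketch reusing the placeholders from the question:

# pre-create remote_dir2 inside the module by copying an empty directory
mkdir -p /tmp/empty/remote_dir2
rsync -a --password-file=<file> /tmp/empty/ remote-user@remote-server::remote_dir1/
# now the file copy has an existing destination directory (note the trailing slash)
rsync -avz --password-file=<file> <source-file> remote-user@remote-server::remote_dir1/remote_dir2/

If both ends can be upgraded, rsync 3.2.3 and newer also offer a --mkpath option that creates missing destination path components in a single pass.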

Related

scp fails with "protocol error: filename does not match request"

I have a script that uses SCP to pull a file from a remote Linux host on AWS. After running the same code nightly for about 6 months without issue, it started failing today with "protocol error: filename does not match request". I reproduced the issue with some simpler filenames below:
$ scp -i $IDENT $HOST_AND_DIR/"foobar" .
# the file is copied successfully
$ scp -i $IDENT $HOST_AND_DIR/"'foobar'" .
protocol error: filename does not match request
# used to work, I swear...
$ scp -i $IDENT $HOST_AND_DIR/"'foobarbaz'" .
scp: /home/user_redacted/foobarbaz: No such file or directory
# less surprising...
The reason for my single quotes was that I was originally grabbing a file with spaces in its name. To deal with the spaces, I had used $HOST_AND_DIR/"'foo bar'" for many months, but starting today it would only accept $HOST_AND_DIR/"foo\ bar". So my issue is fixed, but I'm still curious about what's going on.
I Googled the error message, but I don't see any real mentions of it, which surprises me.
Both hosts involved report OpenSSL 1.0.2g in the output of ssh -v localhost, and bash --version says GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu).
Any ideas?
I ended up having a look through the source code and found the commit where this error is thrown:
GitHub Commit
check in scp client that filenames sent during remote->local directory copies satisfy the wildcard specified by the user.
This checking provides some protection against a malicious server
sending unexpected filenames, but it comes at a risk of rejecting
wanted files due to differences between client and server wildcard
expansion rules.
For this reason, this also adds a new -T flag to disable the check.
They added a new -T flag that disables the new check, so the old behaviour remains available. However, it is worth finding out why the filenames we're using are being flagged in the first place.
In my case, I had [] characters in the filename that needed to be escaped using one of the options listed here. For example:
scp USERNAME@IP_ADDR:"/tmp/foo\[bar\].txt" /tmp
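If you trust the remote server, the -T flag mentioned in the commit can also be used to skip the client-side check entirely; for example, with the variables from the original question:

scp -T -i $IDENT $HOST_AND_DIR/"'foo bar'" .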

How to delete remote file using Kettle Pentaho

I have a directory on a remote Linux machine where files are archived and kept for a certain period of time. I want to delete a file from the remote (Linux) machine using a Kettle transformation, based on some condition.
If the file does not exist, the job should not throw any error; if the file does exist at the remote location, the job should delete it, or raise an error if it cannot for some other reason, e.g., a permission issue.
Here, the file name is retrieved as a variable from previous steps of the transformation, and the directory path of the archived files is fixed.
How can I achieve this in Pentaho Kettle transformation?
Make use of "Run SSH commands" utility to pass commands to your remote server.
Assuming you run rm -f /path/file, it won't error on a non-existent file.
You can capture the output and perform error handling as well (use a Filter Rows step to trigger the appropriate course of action); see the sketch below.
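For instance, a script along these lines could go into the "Run SSH commands" step; ${FILENAME} stands in for the variable produced by the earlier transformation steps, and the archive path is a placeholder:

# delete the archived file only if it exists; report the outcome on stdout
FILE="/path/to/archive/${FILENAME}"
if [ ! -e "$FILE" ]; then
  echo "SKIP: $FILE does not exist"        # absent file is not an error
elif rm -f -- "$FILE"; then
  echo "DELETED: $FILE"
else
  echo "ERROR: could not delete $FILE" >&2 # e.g. a permission problem
  exit 1
fi

The SKIP/DELETED/ERROR lines on stdout can then be matched by the Filter Rows step.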
Alternatively, you can mount the remote directory on the machine where Kettle runs and delete the file as a regular local file.
Deleting over SSH is, I think, non-trivial: it takes a lot of experimenting to identify and distinguish the error types, e.g., an SSH connection error versus a failure to delete the file.

Does scp allow inline file renaming in destination?

For instance, I have tried this (note that the source is remote):
scp root@$node:/sourcepath/sourcefile.log /destinationpath/destinationfile.log
The other option is to rename the file afterwards, but it would be more convenient to do it on the fly while the data is downloaded via scp, hence my question. Thanks.
Maybe without scp:
ssh yourserver "cat >tmpfile && mv tmpfile datafile" <datafile
This command copies the local file "datafile" to the remote server under the name "tmpfile".
Only after a successful copy is the temporary file "tmpfile" renamed to the proper name "datafile" on the remote host.
If the copy fails, only a temporary file is left on the remote host.
Thus, you are protected from ending up with an incomplete "datafile".
Sorry for my English.
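The same idea also covers the download direction from the question: write to a temporary name locally and rename only once the transfer has completed. A sketch reusing the paths shown above:

ssh root@$node "cat /sourcepath/sourcefile.log" > /destinationpath/destinationfile.log.tmp &&
  mv /destinationpath/destinationfile.log.tmp /destinationpath/destinationfile.log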

PostgreSQL - inconsistent COPY permissions errors

I am using the EnterpriseDB pgAdmin III (v. 1.12.1) on a Windows 7, 32-bit machine to work with PostgreSQL databases on a remote Linux server. I am logged in as the user postgres, which allows me to access the $PGDATA directory (in this instance, it is found here: /var/lib/pgsql/data/)
If I log into the server via a terminal, run psql, and use the \copy command to import data from csv files into newly created tables, I have no problems.
If I'm in pgAdmin, however, I use the COPY command to import data from csv files into newly created tables.
COPY table_name FROM '/var/lib/pgsql/data/file.csv'
WITH DELIMITER AS ',' CSV HEADER
Sometimes this works fine, other times I get a permissions error:
ERROR: could not open file "/var/lib/pgsql/data/file.csv" for reading: Permission denied
SQL state: 42501
It is the inconsistency of the error that confuses me. When the error arises, I change the file permissions to anywhere from 644 to 777, with no effect. I have also tried moving the file to other folders, e.g., /var/tmp/, also with no effect.
Any ideas?
The problem is the access permissions through the directories leading to the file. The postgres user does not have access to your home folder, for example. The answer is to use a folder all users have access to, like /tmp, or to create one with the correct permissions so that any user can access/read/write there, a sort of shared users folder.
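One quick way to test whether the server user can actually reach the file, assuming you have sudo rights on the server:

sudo -u postgres head -n 1 /var/lib/pgsql/data/file.csv

If this fails with a permission error even though the file mode looks correct, a directory on the path is missing the execute (traverse) bit for the postgres user.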
I think your postgres user still doesn't have access to your file.
Did you try the following commands?
chown postgres /var/lib/pgsql/data/file.csv
chmod u+r /var/lib/pgsql/data/file.csv
Try:
\copy table_name FROM '/var/lib/pgsql/data/file.csv' WITH DELIMITER AS ',' CSV HEADER
Note the backslash before copy: \copy is run by the psql client with your own user's permissions, whereas plain COPY is executed by the server process (postmaster), which reads the file with the server's permissions. Anyway, this will probably do the trick for you.
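\copy also works non-interactively from the shell, which avoids the server-side permission question entirely; a sketch with a placeholder database name and client-side path:

psql -d your_db -c "\copy table_name FROM '/home/you/file.csv' WITH DELIMITER AS ',' CSV HEADER"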

Use tnsnames.ora in Oracle SQL Developer

I am evaluating Oracle SQL Developer.
My tnsnames.ora is populated, and a tnsping to a connection defined in tnsnames.ora works fine. Still, SQL Developer does not display any connections.
Oracle SQL Developer Soars mentions that if
you have Oracle client software and a tnsnames.ora file already installed on your machine, Oracle SQL Developer will automatically populate the Connections navigator from the net service names defined in tnsnames.ora.
I also tried to set my TNS_ADMIN environment variable, but after restarting SQL Developer there are still no connections displayed.
Any ideas?
Anyone successfully working with SQL Developer and tnsnames.ora?
In SQL Developer, open Tools --> Preferences.
In the Preferences dialog, expand Database --> select Advanced --> under "Tnsnames Directory", browse to the directory where tnsnames.ora is present.
Then click OK.
tnsnames.ora is typically found at Drive:\oracle\product\10x.x.x\client_x\NETWORK\ADMIN
Now you can connect via the TNS connection type.
This excellent answer to a similar question (which I could not find before, unfortunately) helped me solve the problem.
Copying the content from the referenced answer:
SQL Developer will look in the following locations, in this order, for a tnsnames.ora file:
$HOME/.tnsnames.ora
$TNS_ADMIN/tnsnames.ora
TNS_ADMIN lookup key in the registry
/etc/tnsnames.ora (non-Windows)
$ORACLE_HOME/network/admin/tnsnames.ora
LocalMachine\SOFTWARE\ORACLE\ORACLE_HOME_KEY
LocalMachine\SOFTWARE\ORACLE\ORACLE_HOME
If your tnsnames.ora file is not getting recognized, use the following procedure:
Define an environment variable called TNS_ADMIN pointing to the folder that contains your tnsnames.ora file.
In Windows, this is done by navigating to Control Panel > System > Advanced system settings > Environment Variables...
In Linux, define the TNS_ADMIN variable in the .profile file in your home directory (see the sketch after these steps).
Confirm the OS recognizes this environment variable:
From the Windows command line: echo %TNS_ADMIN%
From Linux: echo $TNS_ADMIN
Restart SQL Developer
Now in SQL Developer, right click on Connections and select New Connection.... Select TNS as the connection type in the drop-down box. Your entries from tnsnames.ora should now be displayed here.
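For reference, a minimal setup might look like the following; the directory, host, and service names are placeholders. First, the environment variable, e.g. in ~/.profile on Linux:

export TNS_ADMIN=/opt/oracle/network/admin

And a minimal net service name entry inside $TNS_ADMIN/tnsnames.ora:

MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
  )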
I had the same problem: tnsnames.ora worked fine for all other tools, but SQL Developer would not use it. I tried every suggestion I could find on the web, including the solutions at the link provided here.
Nothing worked.
It turned out that backup copies of tnsnames.ora, such as tnsnames.ora.bk2, tnsnames09042811AM4501.bak, and tnsnames.ora.bk, had accumulated in the same directory. These files were not readable by the average user.
I suspect SQL Developer pattern-matches on the file name, tried to read one of these backup copies, couldn't, and so just failed silently and showed nothing in the drop-down list.
The solution is to make all the files readable, or to delete or move the backup copies out of the Admin directory.
This helped me:
Set the Tnsnames Directory under Tools -> Preferences -> Database -> Advanced -> Tnsnames Directory.
https://forums.oracle.com/forums/thread.jspa?messageID=10020012&#10020012
On newer versions of macOS, one also has to set java.library.path. The easiest/safest way to do that [1] is to create a ~/.sqldeveloper/<version>/sqldeveloper.conf file and populate it like so:
AddVMOption -Djava.library.path=<instant client directory>
[1] https://community.oracle.com/message/14132189#14132189