rsync weird behaviour - ssh

I have to sync some folders from one Linux server to another.
We have created the RSA key and authentication works fine.
When we launch an rsync command, some of the files produce errors like:
rsync: readlink "/var/www/sestantemultimedia.it/xxecommerce/pub/.htaccess" failed: Permission denied (13)
Now, the directory /var/ (as well as the other subdirectories) has permissions set to 755.
The files in the final directories have permissions set to 644.
So, in theory, the permissions are set correctly and I should be able to read from the other server and copy my files.
What am I missing?

OK, I just figured it out.
Someone (we work together as a team) had changed the "final folder" so that the "apache" user's group could no longer execute (and therefore traverse) the folder itself.
That way, although "others" could still execute, the effective permissions we received (we were in the "apache" group) weren't enough to traverse the folder and retrieve the file.
We changed that and now it works properly!
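A quick way to catch this class of problem is to print the mode and owner of every directory on the path and look for a missing execute (traverse) bit. A minimal sketch, assuming GNU coreutils stat; the paths here are examples, not the ones from the question:

```shell
# Walk every component of a path and print its mode and owner,
# so a directory missing the execute (traverse) bit stands out.
mkdir -p /tmp/traverse-demo/sub
chmod 0750 /tmp/traverse-demo   # group may traverse, others may not

dir=/tmp/traverse-demo/sub
while [ "$dir" != "/" ]; do
    stat -c '%a %U:%G %n' "$dir"
    dir=$(dirname "$dir")
done
```

Every directory on the way down to the file needs the x bit for the user (or group) rsync connects as; a single 750 owned by the wrong group anywhere on the path is enough to produce the "Permission denied (13)" above.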

Related

How to determine if you can write to a directory with SFTP because of your group?

Quoting How to determine if you can write to a file with SFTP because of your group?,
You could do mode & 00002 to see if a [directory] is writable by the public
and you could get a directory listing and see if the owner of .
matches the user that you logged in with (although stat doesn't
usually return the longname for SFTPv3 servers, which is what you'd
need to get the username from that) but what about group permissions?
In the answer to that post it was suggested that a better way to test the writability of a file with SFTP was to actually open that file for writing, e.g. something analogous to fopen('filename.ext', 'w');.
My question is: what's the best way to determine the writability of a directory with SFTP? You can't open a directory for writing like you can a file. My best guess: just attempt to upload a temporary file into the directory in question?
Like maybe use SSH_FXF_CREAT and SSH_FXF_EXCL? Although the possibility that the file might already exist complicates things. I guess a directory listing could be obtained and then one could attempt to upload a filename that doesn't exist, but, the fact that this would require read permissions notwithstanding, it also wouldn't work well if the directory were very large.
Any ideas?
What about a login script that runs a touch in that directory as that user?
If sshd is used, you could use pam_script (a PAM module that executes a script) simply to run touch testing_file_creation in your directory. You would need to add it to your PAM sshd configuration.
If the directory is not writable, you will get a message during login:
touch: testing_file_creation: Permission denied. Of course, you could do fancier things with that, but this is the very basic idea.
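The touch-based probe can also be done by hand, and the same create-then-delete idea works over SFTP by uploading a uniquely named file with SSH_FXF_CREAT|SSH_FXF_EXCL. A minimal local sketch; the directory and probe name are examples:

```shell
# Probe a directory for writability by creating and then removing
# a uniquely named file. Actually attempting the write is the only
# reliable test: it accounts for group membership, ACLs and
# read-only mounts alike, with no permission-bit arithmetic.
dir=/tmp/writetest-demo
mkdir -p "$dir"
probe="$dir/.write-probe.$$"
if touch "$probe" 2>/dev/null; then
    echo "writable"
    rm -f "$probe"
else
    echo "not writable"
fi
```

Using the PID ($$) in the name keeps collisions with existing files unlikely, which sidesteps the "file might already exist" concern without needing a directory listing first.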

rsync copying from a server with different permissions and owner

I'm trying to copy an entire folder and its contents from a remote server to a new one. The remote server has no panel installed and the new one has cPanel.
When I run the following command it looks like it copies everything, but it has only copied the folder itself, and it's empty. The copied folder has permissions 700 as on the remote, but on the new server the folders' permissions are 755; of course the user and group are different.
I'm running this instruction with different options:
rsync -rlDv --no-perms --no-owner --no-group user@000.000.000.000:/home/user/public_html/folder /home/user/public_html/images/
I've also tried -av instead of -rlDv, with and without --no-perms --no-owner --no-group.
Nothing works.
Any ideas? Thank you.
Well, I've finally fixed the problem.
The files and folders were in fact being uploaded, but I was looking at them as the cPanel user, who couldn't see them because the owner and group were different. I switched to root in the console, changed the owner and permissions to match the web user, and everything started working fine.

PhantomJS sets permissions of files to 644, how to change them without going anti-pattern?

I have implemented a RESTful API where I receive a website URL and return the path to its screenshots.
I am using PhantomJS v2.0.0 (development) to take the screenshots. I am building a console command that is executed through a secured, system()-like function. I use su to log in as the user 'phantomjs', which is restricted to a set of specified commands, and then call my PhantomJS script to render webpages as JPEGs. The output files are owned by 'phantomjs' and have permissions set to 644. We have added the user 'phantomjs' to the group www-data.
We put lots of webshots in one directory and separate them alphabetically into sub-directories. So far so good. Our workflow requires that we occasionally delete webshots.
So I implemented a RESTful delete API that clients can use to delete their webshots. The real problem is that the files are owned by 'phantomjs' with permissions set to 644, so we can't delete them as www-data. Since 'phantomjs' is in the same group as www-data, the permissions need to be 664 for us to unlink() them.
We wouldn't like to allow the user 'phantomjs' to use chmod or rm, because that would go against our design goal of restricting this user as much as possible, since it runs shell commands.
We don't like the idea of a cron job that changes file permissions: the server is quite busy, and this solution is undesirable and should be our last resort.
We tried using ACLs to give every file in the webshots/ dir default permissions of 664, but this didn't work as expected with our NFS. Upgrading NFS is not an option for me right now.
We tried umask 002, but this only affected files created through the shell, so it couldn't help us either.
I've noticed that if I save an empty file through Sublime Text, the file permissions are 664.
If I save a screenshot with PhantomJS, the file permissions are 644.
My questions are:
-What governs default file permissions: the running process itself, or some preference of the user account the application runs as?
-Can you give me any more ideas on how to delete webshots securely, or how to make files owned by 'phantomjs' get permissions 664, without giving that user too many 'dangerous' privileges?
Any help would be highly appreciated.
Thank you.
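On the first question: a file's mode at creation time is the mode the creating process passes to open(2), masked by that process's umask. It is governed by the process and the umask it inherited, not by any per-user preference. A quick demonstration:

```shell
# New files get (requested mode) & ~umask. touch requests 0666,
# so with umask 002 the result is 664; with umask 022 it is 644.
umask 002
rm -f /tmp/umask-demo
touch /tmp/umask-demo
stat -c '%a' /tmp/umask-demo    # prints 664
```

Since the umask is inherited across fork/exec, the thing to check is what umask the PhantomJS process actually ends up with: if su is invoked as a login shell, the 'phantomjs' user's shell startup files may reset the umask to 022, which would explain why setting umask 002 elsewhere had no effect.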

'Invalid argument' error when using 'chown apache' on web server folder

On a mac in terminal when executing:
chown apache uploads/
I get the error:
chown: apache: Invalid argument
The folder is on a shared web server. I need to change the owner of the folder because otherwise my PHP script for creating simple text files returns a permission denied error. Please don't suggest chmodding the folder to 777 (which does work), since almost everyone advises against it.
Is it possible that the server doesn't run scripts as the user 'apache'? How can I find this out?
"Invalid argument" makes me think this directory is on an HFS+ volume with owners disabled; you won't be able to change it in that case. You may be able to switch owners on, although it's possible that requires reformatting.
(The advice to check /etc/passwd is wrong, or at least inaccurate, on OS X; you need dscl . list /Users.)
There are two things you might want to check:
1) Is there a user called apache? Maybe it's httpd. You can search /etc/passwd. (Or wherever your platform stores user names; you didn't mention your operating system.)
2) What user do scripts run as? You can check this by running a test script. For example:
#!/bin/bash
echo Content-Type: text/plain
echo
id -a
If you save this as test.cgi and put it in a CGI directory, you should be able to run it and get it to tell you what user it's running as.
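For the first check, a portable way to see whether an account with one of the common web-server names exists is id, which consults whatever user database the platform uses (including Directory Services on OS X, so it works where grepping /etc/passwd does not). The candidate names below are just common examples:

```shell
# id exits 0 if the account exists, wherever the platform stores
# its user database (/etc/passwd, dscl/Directory Services, LDAP...).
for name in apache httpd www-data _www; do
    if id "$name" >/dev/null 2>&1; then
        echo "$name: exists"
    else
        echo "$name: not found"
    fi
done
```

On OS X the Apache user is typically _www, which is another reason `chown apache` can fail there.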

Can't read or write to directory with CFFILE despite 777 permissions (ColdFusion)

This is installed on a Unix system I don't have direct access to, but can get insight into by sitting with a network team.
The problem is this: I have 3 folders I need read and write access to. The trouble is, I only have access to 1 of them, and only for reading. This is via ColdFusion; I can get into them fine as the user they are assigned to (which the CF server runs as, the "www" user).
I CAN read and write to the temporary file directory, the place files are stored before they are moved to the destination directory (SERVER-INF/ etc etc etc), but that's not helpful. I have tried having the network people set the permissions on the other folders to match, but with no results. The current settings of the folder I can access are rwxrws--- and the other folders are rwxrwxr-x, so I should have more permissions (the "s" is not a mistake in the first folder).
We tried setting the other folders to 777 and did not even get read access. Does the server need to be restarted on a Unix box after setting new permissions for ColdFusion to pick them up? I'm out of ideas right now; I'll take any new suggestions.
TL;DR
All using ColdFusion
temp directory - can read and write to
folder 1 - can read from (including subdirectories)
folder 2 - cannot read or write to (permission denied)
folder 3 - cannot read or write to (permission denied)
Goal: Get upload functionality working.
Edit: the server is using Apache.
Just a random guess... have you checked that the paths you are trying to access are fully correct? They should be absolute for file operations, and the www user must have x (execute) permission on all directories along the path in order to enter them.
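That execute-bit check on every path component can be done in one command with namei from util-linux (assuming a Linux host; the path below is just an example):

```shell
# namei -l resolves a path one component at a time and prints each
# component's mode and owner, so a directory that denies the x bit
# to the "www" user is immediately visible in the output.
namei -l /tmp
```

Run it on each of the three target folders' full paths while sitting with the network team, and compare the group/other x bits line by line against the one folder that works.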
The problem ended up being that a restart was required after setting the new folder permissions. We didn't think this would be necessary on a Unix box, but ColdFusion apparently disagreed. This worked.