Change default file permissions on debian [closed] - permissions

I've set up a Debian cloud server. I installed Apache, PHP and then vsftpd. I created users, set permissions, etc.
When I upload a file, its default permissions are 600 and I can't view it unless I manually change them to 774 or 775.
So, I'd like to change the default permissions of all files that I upload to /var/www/ to 754.
I know that chmod -R 754 /var/www sets all files within that directory to 754, but it doesn't change the default permissions of new files that are uploaded afterwards.
My user is 'joe' for demo purposes since I'm learning, so I even tried chown -R joe /var/www, but that didn't change the default permissions either.
How do I change the default permissions from 600 to 774? In which file should I write this, and what should I write?

You should use umask. More info here: http://www.cyberciti.biz/tips/understanding-linux-unix-umask-value-usage.html

You must change the umask of the user(s) writing to the directory. And by the way, do NOT set execute permissions where they are not needed.
A umask is a mask of the permission bits that should be removed from newly created files. By default, files would be created with 666 and directories with 777. With a umask of 002, which seems to be what you want, these become 664 and 775.
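A quick way to see this in any shell:
umask 002            # clear only the "other" write bit
touch newfile        # files: 666 & ~002 = 664 (rw-rw-r--)
mkdir newdir         # directories: 777 & ~002 = 775 (rwxrwxr-x)
ls -ld newfile newdir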
Now, how to set the umask depends on the program that actually writes the file, and on whether this setting is available in its configuration file.
Another, lesser-known way is to set POSIX ACLs on the upload directory: for this, you can use setfacl with the -d option on /var/www (provided both your OS and filesystem support ACLs).
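A minimal sketch, assuming /var/www is the upload directory and you want group read/execute and world read inherited by new entries:
# set a default ACL so newly created files and directories under /var/www inherit these bits
setfacl -R -d -m u::rwx,g::rx,o::r /var/www
# verify the default ACL was applied
getfacl /var/www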

One of your comments suggests you are uploading the files through proftpd. If this is the case, then your question is really specific to that piece of software. The answer is not to go modifying /etc/profile, as that would change the default umask for all users of Bourne-like shells (e.g. Bash). Furthermore, a user must actually log into the shell for /etc/profile to be read, and on a properly configured system the user your daemon runs as does not actually log in. Check http://www.proftpd.org/docs/howto/Umask.html for information specific to proftpd and umasks.
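For reference, setting the upload umask in the FTP daemon's own configuration usually looks like one of the following (directive names are from the proftpd and vsftpd documentation; paths and values may differ on your system, and the daemon needs a restart afterwards):
# proftpd: /etc/proftpd/proftpd.conf
Umask 022
# vsftpd (which the question mentions installing): /etc/vsftpd.conf
local_umask=022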

Related

Unknown permission issues preventing WSL2 from accessing random Windows files/directories [closed]

I'm having permissions issues when accessing seemingly random directories/files on the Windows filesystem with WSL2/Ubuntu. Some directories are not accessible, and I get a 'permission denied' error when I try to access them or any of the files in them. However, I have no issues accessing them from Windows itself through Explorer or a non-admin PowerShell or command prompt.
From the WSL side I am the owner of the files and directories and have the correct permissions, but I still cannot access them. I can, however, access these directories/files if I switch to root. I shouldn't have to, though, since the permissions on this directory are the same as the ones on other directories.
drwxr-xr-x me me
I've tried looking at the directory properties from the Windows side and making them more permissive ("Full control" for each group in the Properties > Security menu), with no success. I am the only user of this computer, and the only groups that exist are...
Authenticated Users
SYSTEM
Administrators (${my-machine-name}\Administrators)
Users (${my-machine-name}\Users)
I can provide more info if needed.
Make sure that not only the directory that contains the files has rx for your WSL user, but also every directory above it. (Sorry, I would have commented, but I don't have enough rep yet.)
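One quick way to check every component of the path at once (the path below is just a placeholder):
namei -l /mnt/c/Users/me/projects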
Try creating a /etc/wsl.conf with the following:
[automount]
options="metadata,uid=1000,gid=1000,umask=022"
After creating the file:
Exit your WSL session
wsl --terminate <distro> or wsl --shutdown
Then restart and test the file/directory permissions again.
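A couple of quick checks after the restart (the path is a placeholder):
# confirm the drvfs mount picked up the new options
mount | grep -i metadata
# directories created from Windows should now show rwxr-xr-x
ls -ld /mnt/c/Users/me/projects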
The uid and gid probably already default to those values, since you mention that the files and directories on the NTFS drive show as owned by your user, so they can probably be left out.
The metadata option is important, as it allows WSL to map Linux permissions onto files and directories created in WSL on those NTFS drives. But again, this isn't really your problem here either.
The umask is hopefully the long-term answer to your problem, as it will map WSL/Linux rwxr-xr-x onto directories created in Windows, and rw-r--r-- onto files.

scp not allowing file transfer except to home directory [closed]

I need to automate a file transfer using scp, so I have created a new SSH key and sent the public key to the remote server that I'll be sending files to (into ~/.ssh).
The problem is that it won't allow me to scp the file anywhere except the home directory. If I transfer it to the home directory, it works fine, but not anywhere else.
Is there something that needs to be done here? Thanks!
If you can scp the file to your home directory, then your key is working, so that is unlikely to be the issue.
The kinds of problems you might have would be:
You don't have permission to write to the destination directory
$ scp test.txt myserver:/root
scp /root/test.txt: Permission denied
In this case you need to get permission to write to the directory, or choose a different destination that you do have access to.
The destination directory doesn't exist
$ scp test.txt myserver:foo/bar/
scp foo/bar: No such file or directory
In this case, check that you're uploading to the correct path.
A destination like myserver:foo/bar/ (note: no / after the :) means a path relative to your home directory. So, it might be /home/seumasmac/foo/bar/ in this case.
A destination like myserver:/var/www/ (note: there is a / after the :) is an absolute path. It means the directory /var/www/ on the server.
The error that you get when you try to upload should tell you which of the above is the problem in this case.
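If the directory simply doesn't exist yet, one option (assuming you have write access to its parent) is to create it over ssh first and then copy:
# create the remote directory, then upload into it
ssh myserver 'mkdir -p foo/bar'
scp test.txt myserver:foo/bar/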

SSH Viewing/editing files across multiple accounts [closed]

Apologies if this has been asked already, but I tried a quick search and couldn't find my problem.
Basically I am trying to SCP a file onto my friend's server from my computer for him to read and modify himself. He has given me my own login and sufficient rights, etc., but he is unable to see what I've uploaded to the server, nor can I see what he has added.
I am currently using:
scp hello.txt username@domain.com:/home/username/
which uploads correctly and I can see it.
Could someone please help me out and explain why he is unable to view what I've uploaded, and vice versa?
How can we set it up so we can see each others files and modify them (some sort of public folder?)?
The problem is most likely the access rights on the directory/file. A non-root user usually cannot see the contents of another user's home directory. If you upload a file to your home directory, your friend consequently cannot see the uploaded file, and vice versa.
The solution is simple: you need a directory on which both of you have the appropriate permissions, as you already assumed. Try this:
# on the server
mkdir /var/your_share/
chmod o+rwx /var/your_share/
# on your host
scp hello.txt username@domain.com:/var/your_share/
# on the server
ls -l /var/your_share/hello.txt
The ls -l displays the permissions of the uploaded file.
-rw-r--r-- 1 username username 10 Oct 13 15:49 hello.txt
If it says something like this, your friend will not have permissions to change the file but only to read it. Use the following command to grant him write permissions for that file:
# on the server
chmod o+w /var/your_share/hello.txt
ls -l /var/your_share/hello.txt
The output should then be something like:
-rw-r--rw- 1 username username 10 Oct 13 15:49 hello.txt
Note: The permissions granted by these commands apply not only to your friend's account but to all accounts on the server. That means everybody can read and write the file. If you want to change that, you have to set up a group and grant the rights only to that group.
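A sketch of that group-based setup (the group name share and the second username are placeholders):
# on the server, as root
groupadd share
usermod -aG share username
usermod -aG share friend_username
chgrp share /var/your_share/
# setgid bit: new files created in the directory inherit the share group
chmod 2775 /var/your_share/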

automatic ssh session when moving to mapped sshfs [closed]

If you have a network drive mapped via sshfs, is there a way to automatically log on via ssh whenever changing to that directory?
$USER:$LOCALHOST:~: sshfs $USER@$REMOTEHOST: /Volumes/dev0
$USER:$LOCALHOST:~: cd /Volumes/dev0
$USER:$REMOTEHOST:~
Thomas Jansson provides a guide on integrating sshfs with autofs. I'll summarize his guide here, so this answer will still be worth something if his site ever goes offline:
Add this entry to /etc/auto.master:
/mnt/sshfs /etc/auto.sshfs uid=1000,gid=1000,--timeout=30,--ghost
Make sure the uid and gid match your user ID and group ID in /etc/passwd, or whatever you use to provide system accounts.
Now add lines into /etc/auto.sshfs, one per desired filesystem, in the following form:
bar -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\#tjansson@bar.com\:
Be sure to change tjansson@bar.com to whatever user account and hostname you're going to be using. Change the leading bar to whatever you'd like the directory to be named. When you cd /mnt/sshfs/bar, autofs will automatically mount the FUSE filesystem for you. Of course, using SSH keys and ssh-agent(1) will make this far more pleasant.
Update
... create a directory that literally logs you into the other machine.
Hey, that's a pretty clever idea. You could write a shell function that checks the directory name you're about to cd into and starts a new ssh session for you. Or maybe you can (ab)use the PROMPT_COMMAND variable to ssh to the host if the directory name matches. Be warned that either approach will slow down your normal cd or every prompt display.
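A minimal sketch of the cd-wrapper idea, assuming the sshfs mount lives at /Volumes/dev0 and $REMOTEHOST is set as in the question:
# wrap cd so that entering the mounted directory also opens an ssh session
cd() {
    builtin cd "$@" || return
    case "$PWD" in
        /Volumes/dev0*) ssh "$USER@$REMOTEHOST" ;;
    esac
}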
Another approach that I've used and enjoyed is a small little helper script, ~/bin/ssh-to:
#!/bin/bash
# invoked via a symlink named after the host, so $0 is the hostname
hostname=$(basename "$0")
# pass any remaining arguments through as the remote command
ssh "$hostname" "$@"
Symlink new names to this shell script: ln -s ssh-to sarnold.org and then you can run a command or log in on a remote site without typing the ssh all the time:
sarnold.org python foo.py
It'll log you in to whatever machine you've used for the name of the symbolic link and run whatever command you give it.

apache and sftp permissions for wordpress automatic update in ubuntu [closed]

It's my first time trying to set up Wordpress, or any website, on cloud hosting. I am on an Ubuntu server, and Wordpress is located in the /var/www/mydomain/public folder.
What I want to achieve is this: both Wordpress (PHP) and SFTP users can access and modify the same files, and Wordpress should be able to do its automatic updates for plugins, etc.
This is what I have done so far:
I have chmodded this folder to 775 to allow group read/write permissions.
I have added apache user (www-data) and SFTP user (suser) both to group wp.
I have made wp the group owner of all files inside the Wordpress folder. (The commands for these steps are sketched below.)
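Those steps correspond roughly to commands like these (the group name wp and the path are taken from the question; your usernames and paths may differ):
sudo usermod -aG wp www-data
sudo usermod -aG wp suser
sudo chgrp -R wp /var/www/mydomain/public
sudo chmod -R 775 /var/www/mydomain/public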
What works:
I can edit theme and plugin files with Wordpress's built-in theme/plugin editor.
What does not work:
Wordpress update still asks for the FTP details to carry out the update
When I create a new file with the SFTP user, its permissions are 644, but they should be 775
What I've tried
I have tried all the steps here (answer by caf): A general linux file permissions question: Apache and WordPress
I have tried this: http://jeff.robbins.ws/articles/setting-the-umask-for-sftp-transactions
I have also tried adding umask 002 to my SFTP startup login files, but I do not know where they are located.
As far as I understand, the problem lies somewhere in the permissions/umask area. I know very little about Linux, so this may be a stupid question with a simple solution, but I have no idea how to fix it.
UPDATE: I did not know that I would have to restart the SSH server. I did it with /etc/init.d/ssh restart, and after that, files created over SFTP have permissions 664 (as they are supposed to).
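For reference, the usual place for that umask (per the article linked above; the exact sftp-server path varies by distribution and OpenSSH version, so treat this as a sketch) is the sftp Subsystem line in /etc/ssh/sshd_config:
# pass a umask to the SFTP server so uploads come out group-writable
Subsystem sftp /usr/lib/openssh/sftp-server -u 0002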
Also, it seems that Apache has to be restarted as well, with this command: /etc/init.d/apache2 restart
However, Wordpress still won't do automatic update (asks for FTP credentials)
If you're able to install the SSH2 PHP module, Wordpress will then give you the option to upgrade over SFTP.
In Ubuntu:
sudo apt-get install libssh2-php
In CentOS (EPEL required):
sudo yum install php-pecl-ssh2
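To confirm the extension is actually loaded afterwards (a quick check; restart Apache first so mod_php picks it up):
sudo /etc/init.d/apache2 restart
php -m | grep -i ssh2    # should print ssh2 if the extension is active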
I was trying to do the same thing with Wordpress updates, until I realized that Wordpress only supports FTP, and SFTP, confusingly, is not a variant of FTP. From Wikipedia:
FTPS should not be confused with the SSH File Transfer Protocol (SFTP), an incompatible secure file transfer subsystem for the Secure Shell (SSH) protocol. It is also different from Secure FTP, the practice of tunneling FTP through an SSH connection.
I'm still trying to figure out if there's a secure way to do Wordpress updates automatically; I don't know yet whether or not FTPS is truly secure.