When creating per-user php5-fpm pools on an Apache mod_fastcgi setup, which of the following is the most secure and efficient way of granting webserver permissions to the PHP pool?
Option 1:
Set the group to www-data:
listen.owner = username
listen.group = www-data
listen.mode = 0660
user = username
group = www-data
While this works, files created by PHP will be owned by username:www-data, while files uploaded via SCP will be owned by username:username.
Option 2:
Add www-data to the supplementary group username:
listen.owner = username
listen.group = username
listen.mode = 0660
user = username
group = username
-
usermod -aG username www-data
Which of these options are secure? You may also share a better method.
I checked the following guides:
http://www.howtoforge.com/php-fpm-nginx-security-in-shared-hosting-environments-debian-ubuntu
http://www.binarytides.com/php-fpm-separate-user-uid-linux/
But they were all written before bug #67060 was discovered and fixed.
I am using the following setup on my LEMP stack (Nginx + PHP-FPM). It should also be applicable to Apache.
PHP-FPM runs several pools as nobody:user1, nobody:user2 ...
Nginx runs as nginx:nginx
The nginx user is a member of each of the user1, user2, ... groups:
# usermod -a -G user5 nginx
File permissions:
root:root drwx--x--x /home
user1:user1 drwx--x--- /home/user1 (1)
user1:user1 rwxr-x--- /home/user1/site.com/config.php (2)
user1:user1 drwxrwx--- /home/user1/site.com/uploads (3)
nobody:user1 rw-rw---- /home/user1/site.com/uploads/avatar.gif (4)
(1) The user's home directory has no x permission for others, so a php-fpm pool running as nobody:user2 will not have access to /home/user1 and vice versa.
(2) The PHP script doesn't have w for group, so it cannot create files in htdocs.
(3) On the uploads directory we manually enable write access for group user1, so the PHP script can put files there. Don't forget to disable the PHP handler for uploads; in nginx this is done with
server {
    ....
    location ^~ /uploads/ { }
}
but for Apache you should check how to do the equivalent.
(4) Uploaded files should also have w for group if we want user1 to be able to edit them later via FTP or SSH (logging in as user1:user1). The PHP code is also editable via FTP since user1 is its owner.
Nginx will have read access to all users' files and write access to all users' uploads, since the nginx user is a member of each of the user1, user2, ... groups.
Don't forget to add nginx to the groups of any users you create later. You can also modify the useradd script to do this automatically.
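For reference, a per-user pool definition matching this setup might look like the sketch below; the pool name, socket path and file locations are placeholders, not values taken from the setup above:
; /etc/php5/fpm/pool.d/user1.conf (sketch)
[user1]
; worker processes run as nobody:user1, matching the file permissions above
user = nobody
group = user1
; the socket is owned by nginx, which is the only process that connects to it
listen = /var/run/php5-fpm-user1.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660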
Related
I have samba services (v 4.3.9) set up on a development web server. For simplicity's sake, I have a folder /samba/billfolder that requires access from the web service AND my samba users. When a samba user creates a file or folder, the permissions are not set properly and the web service cannot access the file/folder. I need a folder in which I can create folders and files that give full permissions to both groups.
My smb.conf looks like this:
[global]
workgroup = MYDOMAIN.LOCAL
server string = Samba Server %v
netbios name = TestServer
security = user
#============================ Share Definitions ==============================
[Billing]
path=/samba/billfolder
valid users = @alfdevelopers, @www-data
guest ok = no
writeable = yes
browsable = yes
create mask = 0664
force directory mode = 2775
Can someone please offer some suggestions on how to properly do this. The client machine is LinuxMint and the fstab entry looks like this:
//192.168.1.200/Billing /mnt/Billing cifs user=myuser,password=mypassword,rw,iocharset=utf8 0 0
Thanks for any help that can be provided.
Your samba configuration seems correct. File and group permissions from the Linux filesystem are checked and applied after the initial samba configuration stage.
Create a common group containing the users from both @alfdevelopers and www-data, say for example developerswebservices. You can easily add the necessary users to the group via the /etc/group file:
developerswebservices:x:xx:user1,user2,www-data
Put the users from both groups in it as shown above. Then go to the working directory of the share, i.e. Billing.
Force newly created files inside to belong to the group of the parent, i.e. Billing:
$sudo chgrp -R developerswebservices Billing
$sudo chmod -R g+s Billing
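Note that chmod -R g+s also sets the setgid bit on regular files; if you only want it on directories (the usual intent), a find-based variant such as this should do it:
$sudo find Billing -type d -exec chmod g+s {} +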
You can change the permission create mask to the necessary mask bits using the umask command:
test@linuxserver:/$ umask 0007
This will give newly created files permissions of rw-rw---- for the user test.
umask allows you to set the default permission bits for file/directory creation for a user.
To customize the umask for all users specify the umask bits in the /etc/login.defs file.
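For example, the relevant line in /etc/login.defs would be the following (the value is only a suggestion, and on some systems pam_umask must be enabled for it to apply everywhere):
UMASK 007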
Hope this helps! Thanks.
I'm distributing load between two web servers, which means all of the Apache settings and vhosts are pretty much identical, and I wanted to make sure they stay that way by using lsyncd (or, if there's another solution that helps with the problem I'm having, let me know).
So obviously Apache runs as the apache user, and we can't enable root SSH logins, so I created an lsync user that can SSH between the two servers using RSA keys.
And now I'm running into some permissions errors, which is kind of what I expected to happen. What I'm trying now is adding the lsync user to the apache group, and the apache user to the lsync group... and that seems to work OK, as long as the files are chmodded to 7 (rwx) for both the user and the group...
I thought about setting a cron job to chown apache.apache every so often, and maybe even chmod +rwx for the group and user, but I'm sure that would cause some other issues.
I thought about having lsync run as the apache user, but it looks like the apache home directory needs to actually be owned by root:root... so that would cause issues with the apache user trying to ssh in and read from the .ssh directory.
I couldn't find much about this when I looked on Google... Most people just used the root user for lsync, which is out of the question.
So if anyone has a fix, that would be great! thanks
P.S. I know that I can allow the lsync user to execute specific commands via sudo, if I properly configure the sudoers configuration... is there a way to have it sudo chown apache.apache /var/www && sudo chmod -R u+rwx /var/www or something?
rsync has an option for forcing the permissions of the files it creates on the destination: --chmod=<blah>. lsyncd does not have direct support for this, but can pass-through rsync flags.
Try adding this to your lsyncd configuration:
_extra = {"--chmod=Dug+rwx,Fug+rw"}
That should ensure that directories, D, have read/write/execute permissions for owner and group, and files, F, have read/write permissions for owner and group. Any other permissions should be set as they are on the source server.
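For context, a minimal lsyncd configuration using that option might look like the sketch below; the source path and target host are placeholders, and whether you use default.rsync or default.rsyncssh depends on your setup:
-- /etc/lsyncd/lsyncd.conf.lua (sketch)
sync {
    default.rsync,
    source = "/var/www",
    target = "lsync@web2:/var/www",
    rsync  = {
        archive = true,
        _extra  = { "--chmod=Dug+rwx,Fug+rw" },
    },
}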
If you need the files to be owned by the apache user then you could set up a chown cron job, as you suggest, but you might find that a constantly running script that reads the output from inotifywait (from inotify-tools) will be more responsive (and mostly idle).
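As a rough sketch of such a watcher (it assumes inotify-tools is installed, and that /var/www and apache:apache are the path and ownership you actually want):
#!/bin/sh
# Hand newly created or moved-in files to apache:apache as they appear.
inotifywait -m -r -e create -e moved_to --format '%w%f' /var/www |
while read -r path; do
    chown apache:apache "$path"
done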
You might consider having the apache user run an rsync daemon. It's little used since tunnelling rsync through ssh is more convenient and more secure, but it might help you side-step this problem.
You need to set up a configuration file, and then simply launch it with rsync --daemon using whatever init system your distro has.
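A minimal configuration might look like the following; the module name, path and uid/gid are placeholders for your own values:
# /etc/rsyncd.conf (sketch), started with: rsync --daemon
uid = apache
gid = apache
use chroot = yes
# listen only on localhost if you intend to tunnel the traffic over ssh
address = 127.0.0.1

[www]
    path = /var/www
    read only = false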
You can then configure your lsyncd with target = "rsync://server/path".
If the connection between the servers is local and the network is trusted then you're done, otherwise you should configure the rsync daemon to listen only on 127.0.0.1, and then use an ssh -L port mapping to route the traffic through an encrypted tunnel (the owner of the tunnel is not important).
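As a sketch of that last arrangement (hostnames and the local port are placeholders): keep a tunnel open with
ssh -N -L 8730:127.0.0.1:873 lsync@web2
and point lsyncd at the local end of it, e.g. target = "rsync://127.0.0.1:8730/www".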
Setup:
Websites are setup as users in /home/
Website users are restricted to their home directories, as /home/websiteuser/ is owned by root
Website users are part of the websites group
www-data is part of the websites group
Virtual host points to: /home/websiteuser/html/
/home/websiteuser/html/ is set recursively to 755
Files inside /html/ are owned by websiteuser:websites
Website user is used to access website via sftp
Everything works great, except Apache requires us to recursively chmod 777 the /home/websiteuser/html/files/ directory or images won't display and the CMS can't write its MySQL backups.
The website user owns the files so the SFTP access works, but do I have to make www-data own the files, or is there a way where SFTP login works and Apache can still have access as well?
We've seen many questions around this but don't understand the answers, sorry. Any help would be much appreciated.
Cheers
We've solved this by making a "websites" group and adding the apache user (www-data) to this group, like this (this must be done as root: switch to root with $ su root, or use sudo in front of the commands, e.g. $ sudo useradd username):
Add a new group - this will be the name of the group used for all websites:
$ addgroup websites
List groups to check it was created:
$ getent group websites
Add the apache user to the websites group so apache has access to run the websites (use -a so its existing groups are kept):
$ usermod -aG websites www-data
Check www-data is part of the websites group:
$ grep '^websites:' /etc/group
Add a new website user (this will be the user used to run the website)
$ useradd username
Give the user a password
$ passwd username
Follow the prompts to add a password
Add the website user to the websites group:
$ usermod -aG websites username
Create a new directory for the user to serve websites from:
$ mkdir /home/username
The owner of the website directory must be root or sftp will fail
Make root the owner and group of website user’s home directory:
$ chown root:websites /home/username
Give website user limited access to their home directory:
$ chmod 750 /home/username
Move into the website user’s directory:
$ cd /home/username
Make a web root directory (this is the public directory where the website's files will live):
$ mkdir html
Give website user owner:group on web root:
$ chown username:websites html
Change permissions on the html directory:
$ chmod 750 html
Copy all the website's files into the html directory
Recursively set ownership on all files within the web-root
$ chown -R username:websites html
Recursively set permissions on everything within the web root (owner and group have read, write, execute permissions):
$ chmod -R 770 html
Set the regular files within the web root back to 644:
$ find html ! -type d -exec chmod 644 {} +
If you're having issues, make sure the directory permissions are set like this (the top-level website directory /home/username/ must be owned by root or SFTP access won't work):
/home/username | drwxr-x--- | root:websites
/home/username/html | drwxr-x--- | username:websites
/home/username/html/directories/ | drwxrwx--- | username:websites
/home/username/html/files.html | -rw-r--r-- | username:websites
We're designers so this is the way we worked it out, if anyone can see improvements, feel free to edit!
I have created a simple PHP page in the /var/www/tuto directory, but when I try to open it (its URL is http://localhost/tuto/index.php) I get this message:
Forbidden
You don't have permission to access /tuto/index.php on this server.
Apache/2.2.22 (Ubuntu) Server at localhost Port 80
The tuto directory has aimad as its owner and group, and its permissions are drwx------.
The drwx------ means only you have read/write/execute permission on the directory.
d means the node is a directory
r(4) means read permission
w(2) means write permission
x(1) means execute permission
The order for permissions is user, group, world.
To fix that you'll need to correct the permissions so Apache can read from it. This is done with this command:
chmod -R 755 /var/www/tuto
user: 7 = r + w + x
group: 5 = r + x
world: 5 = r + x
It will set the correct permission for the directory and everything inside.
An even better approach would be to change the directory's group to www-data, which Apache uses on Ubuntu, and then set the permissions to allow the group:
chown -R aimad:www-data /var/www/tuto
chmod -R 750 /var/www/tuto
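If the Forbidden error persists after this, it is often a directory higher up the path that is missing the execute bit for Apache; assuming util-linux's namei is available, you can check every component of the path at once with:
namei -l /var/www/tuto/index.php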
To get a better understanding of how permissions work look at the Wikipedia page.
http://en.wikipedia.org/wiki/Filesystem_permissions
I was given some login information for an EC2 machine, basically an ec2-X-X-X.compute-X.amazonaws.com plus a username and password.
How do I access the machine? I tried sshing:
ssh username@ec2-X-X-X.compute-X.amazonaws.com
but I get a Permission denied, please try again. when I enter the password. Is sshing the right way to access the EC2 machine? (Google hits I found suggested that you could ssh into the machine, but they also used keypairs.) Or is it more likely that the problem is that I was given invalid login credentials?
If you are new to AWS and need to access a brand new EC2 instance via ssh, keep in mind that you also need to allow incoming traffic on port 22.
Assuming that the EC2 instance was created accepting all the default wizard suggestions, access to the machine will be guarded by the default security group, which basically prohibits all inbound traffic. Thus:
Go to the AWS console
Choose Security Groups on the left navigation pane
Choose default from the main pane (it may be the only item in the list)
In the bottom pane, choose Inbound, then Create a new rule: SSH
Click Add rule and then Apply Rule Changes
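If you prefer the command line, roughly the same rule can be added with the AWS CLI (assuming it is installed and configured; the group name and CIDR are placeholders you should adjust):
aws ec2 authorize-security-group-ingress --group-name default --protocol tcp --port 22 --cidr 0.0.0.0/0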
Next, assuming that you are in possession of the private key, do the following:
$ chmod 600 path/to/mykey.pem
$ ssh -i path/to/mykey.pem root@ec2-X-X-X.compute-X.amazonaws.com
My EC2 instance was created from a Ubuntu 32-bit 12.04 image, whose configuration does not allow ssh access to root, and asks you to log in as ubuntu instead:
$ ssh -i path/to/mykey.pem ubuntu@ec2-X-X-X.compute-X.amazonaws.com
Cheers,
Giuseppe
Our Amazon AMI says to "Please login as the ec2-user user rather than root user.", so it looks like each image may have a different login user, e.g.
ssh -i ~/.ssh/mykey.pem ec2-user@ec2-NN-NNN-NN-NN.us-foo-N.compute.amazonaws.com
In short, try root and it will tell you what user you should log in as.
[Edit] I'm supposing that you don't have AWS management console credentials for the account, but if you do, then you can navigate to the EC2->Instances panel of AWS Management Console, right click on the machine name and select "Connect..." A list of the available options for logging in will be displayed. You will (or should) need a key to access an instance via ssh. You should have been given this or else it may need to be generated.
If it's a Windows instance, you may need to use Remote Desktop Connection to connect using the IP or host name, and then you'll also need a Windows account login and password.
The process of connecting to an AWS EC2 Linux instance via SSH is covered step-by-step (including the points mentioned below) in this video.
To correct this particular issue with SSH-ing to your EC2 instance:
The ssh command you ran is not in the correct format. It should be:
ssh -i /path/my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com
Note, you need access to the private key (.pem) file to use in the command above. AWS prompts you to download this file when you first launch your instance. You will need to run the following command to ensure that only your own user has read access to it:
chmod 400 /path/to/yourKeyFile.pem
Depending on your Linux distribution, the user you need to specify when you run ssh may be one of the following:
For Amazon Linux, the user name is ec2-user.
For RHEL, the user name is ec2-user or root.
For Ubuntu, the user name is ubuntu or root.
For CentOS, the user name is centos.
For Fedora, the user name is ec2-user.
For SUSE, the user name is ec2-user or root.
Otherwise, if ec2-user and root don't work, check with your AMI provider.
You need to enable an inbound SSH firewall. This can be done under the Security Groups section of AWS. Full details for this piece can be found here.
For this you need to have a private key, e.g. keyname.pem.
Open the terminal using Ctrl+Alt+T.
Change the file permissions to 400 or 600 using chmod 400 keyname.pem or chmod 600 keyname.pem.
Open port 22 in the security group.
Run the command in the terminal: ssh -i keyname.pem username@ec2-X-X-X.compute-X.amazonaws.com
Indeed, EC2 (Amazon Elastic Compute Cloud) does not allow password authentication to its instances (Linux machines) by default.
The only allowed authentication method is an SSH key that is created when you create the instance. During creation they allow you to download the SSH key just once, so if you lose it, you have to regenerate it.
This SSH key is only for the primary user - usually named
"ec2-user" (Amazon Linux, Red Hat Linux, SUSE Linux)
"root" (Red Hat Linux, SUSE Linux)
"ubuntu" (Ubuntu Linux distribution)
"fedora" (Fedora Linux distribution)
or similar (depending on distribution)
See connection instructions: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html
If you want to add a new user the recommended way is to generate and add a new SSH key for the new user, but not specify a password (which would be useless anyway since password authentication is not enabled by default).
Managing additional users: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/managing-users.html
After all, if you want to enable password authentication, which lowers security and is not recommended, but which you might still need for your own specific reasons, then just edit
/etc/ssh/sshd_config
For example:
sudo vim /etc/ssh/sshd_config
find the line that says:
PasswordAuthentication no
and change it to
PasswordAuthentication yes
Then restart the SSH service so the change takes effect (the service is named ssh or sshd depending on the distribution; a full reboot also works):
sudo service ssh restart
After restarting, you are free to create additional users with password authentication.
sudo useradd newuser
sudo passwd newuser
Add the new user to the sudoers list (the group is called sudo on Ubuntu; on some distributions it is wheel):
sudo usermod -a -G sudo newuser
Make sure the user's home folder exists and is owned by the user:
sudo mkdir /home/newuser
sudo chown newuser:newuser /home/newuser
Now you are ready to try and log in as newuser via ssh.
Authentication with ssh keys will continue to work in parallel with password authentication.
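For example (the hostname is a placeholder), the new account should now accept its password at the prompt:
ssh newuser@ec2-X-X-X.compute-X.amazonaws.com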