Using lsync to sync Apache webroot files - running into permission issues

I'm distributing load between two web servers, which means all of the Apache settings and vhosts are pretty much identical, and I want to make sure they stay that way by using lsync (or, if there's another solution that helps with the problem I'm having, let me know).
So obviously Apache runs as the apache user, and we can't enable root SSH logins, so I created an lsync user that can SSH between the two servers using RSA keys.
And now I'm running into some permission errors, which is kind of what I expected to happen. What I'm trying now: I added the lsync user to the apache group, and the apache user to the lsync group... and that seems to work OK, as long as the files are chmodded to 7 (rwx) for both the owner and the group...
I thought about setting up a cron job to chown apache:apache every so often, and maybe even chmod +rwx for the group and user, but I'm sure that would cause some other issues.
I thought about having lsync run as the apache user, but it looks like the apache home directory needs to actually be owned by root:root... so that would cause issues with the apache user trying to SSH in and read from the .ssh directory.
I couldn't find much about this on Google... most people just use the root user for lsync, which is out of the question.
So if anyone has a fix, that would be great! Thanks.
P.S. I know that I can allow the lsync user to execute specific commands via sudo if I configure sudoers properly... is there a way to have it run sudo chown apache:apache /var/www && sudo chmod -R u+rwx /var/www or something?
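For example, would something like this in a sudoers drop-in do it? (Usernames and paths are just mine, and the escaping is my best guess from the sudoers man page.)
# /etc/sudoers.d/lsync -- hypothetical sketch; edit with visudo -f
# ':' in command arguments has to be backslash-escaped in sudoers syntax.
lsync ALL=(root) NOPASSWD: /bin/chown -R apache\:apache /var/www, /bin/chmod -R u+rwx /var/www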

rsync has an option for forcing the permissions of the files it creates on the destination: --chmod=<blah>. lsyncd does not support this directly, but it can pass extra flags through to rsync.
Try adding this to your lsyncd configuration:
_extra = {"--chmod=Dug+rwx,Fug+rw"}
That should ensure that directories (D) have read/write/execute permissions for owner and group, and files (F) have read/write permissions for owner and group. Any other permission bits are set as they are on the source server.
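For context, here is roughly where that line lives in a full lsyncd config (a sketch only; hostnames and paths are examples, assuming lsyncd 2.x with the default.rsync layer):
sync {
    default.rsync,
    source = "/var/www",
    target = "lsync@web2:/var/www",
    rsync = {
        archive = true,
        _extra = { "--chmod=Dug+rwx,Fug+rw" }
    }
}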
If you need the files to be owned by the apache user then you could set up a chown cron job, as you suggest, but you might find that a constantly running script that reads the output of inotifywait will be more responsive (and mostly idle).
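A minimal sketch of such a script, using inotifywait from the inotify-tools package (paths and owner are examples; it would need to run as root, or via a sudo rule like the one above):
#!/bin/bash
# Watch /var/www recursively and fix ownership of anything new as it appears.
# moved_to catches rsync's write-to-temp-file-then-rename behaviour.
inotifywait -m -r -e create -e moved_to --format '%w%f' /var/www |
while read -r path; do
    chown apache:apache "$path"
done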

You might consider having the apache user run an rsync daemon. It's little used these days, since tunnelling rsync through SSH is more convenient and more secure, but it might help you side-step this problem.
You need to set up a configuration file and then simply launch the daemon with rsync --daemon, using whatever init system your distro has.
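A minimal sketch of such a config for a daemon run by the apache user (module name, port, and paths are examples; a daemon started by a non-root user needs an unprivileged port and can't chroot):
# rsyncd.conf
pid file = /tmp/rsyncd.pid
port = 8873
use chroot = false
[www]
path = /var/www
read only = false
Start it with rsync --daemon --config=/path/to/rsyncd.conf.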
You can then configure your lsyncd with target = "rsync://server/path".
If the connection between the servers is local and the network is trusted, then you're done. Otherwise, you should configure the rsync daemon to listen only on 127.0.0.1, and then use an ssh -L port forward to route the traffic through an encrypted tunnel (the owner of the tunnel is not important).
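For example (port numbers and hostnames are illustrative, matching the daemon sketch above):
# Forward local port 8873 to the daemon on web2, which listens only on
# 127.0.0.1:8873 there; -N skips running a remote command, -f backgrounds.
ssh -N -f -L 8873:127.0.0.1:8873 lsync@web2
lsyncd would then point at the local end of the tunnel with target = "rsync://127.0.0.1:8873/www".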

Related

rsync daemon and permissions

Problem
I am confused about the rsync daemon and permissions. Unfortunately I cannot figure out why I get
rsync: opendir "/." (in share) failed: Permission denied (13) and
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1650) [generator=3.1.2].
Obviously, searching the web and looking into the man pages of rsync / rsyncd.conf did not help me solve this issue.
Setup
Here is my /etc/rsyncd.conf owned by root with 644 permissions:
log file = /var/log/rsyncd.log
[share]
comment = data
path = /path/to/data
uid = root
gid = root
read only = false
auth users = syncuser
secrets file = /etc/rsyncd.secrets
Note, /path/to/data is owned by root with 755 permissions (making a random user or syncuser the owner doesn't work either).
Besides, /etc/rsyncd.secrets has 600 permissions and is owned by root (I also tried 400, same issue):
syncuser:passwd
To start the service (on CentOS 7):
sudo systemctl start rsyncd
A first test as a random user on the host machine running the daemon (running with sudo or as syncuser makes no difference):
rsync user@host::
returns share data, suggesting that the configuration is fine?!
However
rsync user@host::share
leads to the errors mentioned above.
What I've tried
Playing with chmod (777) and chown (root:root, random user:user, syncuser:syncuser) on /path/to/data didn't change anything. I also varied uid and gid to nobody, but without success.
Running the above rsync command on an actual client, from which I ultimately want to copy data to the host, of course also fails.
So what am I missing here? Any hints are of course highly appreciated.
Could you check SELinux?
If it is enforcing, the directory /path/to/data needs to be labeled correctly.
For example,
chcon -R -t public_content_t /path/to/data
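Note that chcon changes don't survive a filesystem relabel. A more persistent sketch (and, since your module has read only = false, you may need the writable type plus the rsync write boolean; the path is yours, the type and boolean names are standard SELinux ones):
getenforce                      # confirm SELinux is actually Enforcing
semanage fcontext -a -t public_content_rw_t "/path/to/data(/.*)?"
restorecon -R -v /path/to/data
setsebool -P rsync_anon_write on   # only needed if the daemon must write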

Can't log in as root user in native templates of Jelastic environments

When I create a new environment with certain nodes (e.g. Nginx), I can't access the node as the root user.
I can log in with a regular user, but not with root.
Using username "251X-XXX".
Authenticating with public key "rsa-key-XXXXXXXX"
Last login: Thu Sep 28 09:11:56 2017
nginx@node251X-delete ~ $ sudo date
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for nginx:
Sorry, try again.
Brief:
I didn't receive a root password by email (I'm the owner of this environment).
I can't change this node to a Docker image.
There's no Reset Password option on the dashboard.
sudo doesn't work.
It also happens with other non-Docker nodes (Tomcat, MySQL, ...).
Is there any alternative or configuration that would let me access this node as root?
Thanks
Jelastic doesn't provide root access to separate containers. At the same time, while accessing containers via SSH, a user receives all required permissions and can additionally manage the main services with sudo commands of the following kind (among others):
sudo /etc/init.d/jetty start
sudo /etc/init.d/mysql stop
sudo /etc/init.d/tomcat restart
sudo /etc/init.d/memcached status
sudo /etc/init.d/mongod reload
sudo /etc/init.d/nginx upgrade
sudo /etc/init.d/httpd help
For example, you can restart nginx with the following command:
sudo /etc/init.d/nginx restart
No password will be requested.
Note: if you deploy any application, change configurations, or add any extra functionality via SSH to your Jelastic environment, this will not be displayed in the Jelastic dashboard.
Using our documentation you’ll find out how to:
use SFTP and FISH protocols
manage containers via SSH with Capistrano
The root user is only provided for self-managed nodes (custom Docker / Elastic VPS).
You can execute specific whitelisted commands with sudo (e.g. sudo service nginx restart). Beyond that, you shouldn't need root access.
If you feel otherwise, contact your hosting provider to discuss your needs, and they can find a solution for you.

Store a private key outside of ~/.ssh

I have to deal with a rather annoying situation. I must transfer a file via shell script using scp from one server to another. The problem is that I do not have root access on either of them. I'm not allowed to install any packages like sshpass, ssh2, expect, etc. I don't even have write permission in the home directory of the user I have to use on the second server.
Since I can't use sshpass etc. to let my script enter the login credentials, I thought about using an SSH keypair for auth. Actually, that was my first thought, but since the user on the second server doesn't have write permission in its home directory, only in a subdirectory of it, ssh-keygen fails because it can't put the keys in ~/.ssh.
Both are Debian servers btw.
Is there any way to generate an SSH keypair and use it outside of ~/.ssh?
Any help is greatly appreciated.
On the client side, yes. However, on the server side, unless configured differently, sshd will expect your credentials in that directory.
If you can scp from the server where you can't access .ssh to the one where you can, you can use the -i option to specify the keyfile location.
Do you have an alternative transport mechanism? Can you put the file in your public_html and wget it on the other side?
You can have the keypairs anywhere. What is key is that the permissions are set correctly on the keypair: the ownership needs to be set to the user (chown user:user keyfile) and the permissions must be 400 (chmod 400 keyfile).
Once you have your key moved and the permissions set, all that's left is to tell scp which key to use. You can do this with the -i flag.
e.g.: scp -i keyfile /source/file user@host:/target/location/ (note that the -i option must come before the file arguments).
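Putting it together, a sketch of generating and using a key entirely outside ~/.ssh (paths are examples; the public key still has to end up in the remote user's authorized_keys, as the edit below explains):
ssh-keygen -t ed25519 -f /path/i/can/write/mykey -N ''
chmod 400 /path/i/can/write/mykey
scp -i /path/i/can/write/mykey /source/file user@host:/target/location/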
Edit:
As Amadan alluded to in his answer, this assumes the server you're connecting to already has the key as an authorized key for the user. If not, it would require an /etc/ssh/sshd_config change that only someone with the right access can make. It might be worth trying a cat /etc/ssh/sshd_config on the server if your user has access to it at all right now. If you have read access, you'll be able to discern the expected authorized_keys location from the AuthorizedKeysFile directive. It's possible the server admin has already customized the expected key location to something you have write access to.
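For instance (assuming your user can read the file):
# Look for a customized key location in the server's sshd config:
grep -i authorizedkeysfile /etc/ssh/sshd_config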

Apache PHP and a CIFS write access

I've been working on an internal site that has Apache / PHP running.
I have a CIFS mount in a root-level directory, /filesys/Images/, that points to a file server.
My apache runs under the 'apache' user account.
The CIFS share is mounted with the user and group of the apache user (and mode 777).
When I write a PHP script to read or write from this CIFS mount and run it on the command line (both as a normal user and as the apache user), everything is fine.
As soon as I try to call the script through Apache, things fail: no read or write permissions.
My error log will show (for mkdir) 'file exists', although it does not.
My PHPInfo verifies that safe mode is not on.
Any ideas?
My problem had to do with SELinux and getting that configured properly:
semanage boolean -m --on httpd_use_cifs
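Equivalently, a sketch with the more common getsebool/setsebool pair (-P makes the change persist across reboots):
getsebool httpd_use_cifs        # check the current value
setsebool -P httpd_use_cifs on  # enable it persistently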

Showing Apache (PHP) errors to the web

I'm looking for a way to display errors that apache generates under
/var/log/apache/error_log
for my colleagues who are working on the same project. The coding language is PHP.
First I tried to do a simple PHP readfile script, but since the file is only visible to the root user, I was unsuccessful. I do not want to use things like cPanel or Kloxo. Can anyone help?
To answer this properly we'd need to know a lot more about your setup.
/var/log/apache/error_log
Kind of implies that this is some variant of Unix, but which one?
First I tried to do a simple php readfile script
Does that mean you're trying to make the file available via HTTP or do the relevant users have shell accounts?
If it's just the permissions problem you need to solve, and it's one of the mainstream Linux variants, then it's probably set up to use logrotate. The config file for logrotate will be in /etc/logrotate.d/httpd or /etc/logrotate.d/apache, so go read the man page for logrotate, then change the file to something like.....
/var/log/httpd/*log {
    missingok
    notifempty
    sharedscripts
    create 660 apache webdev
    postrotate
        /sbin/service httpd reload > /dev/null 2>/dev/null || true
    endscript
}
Here the 'create 660 apache webdev' line makes each newly created log file owned by the apache uid and webdev gid, with permissions -rw-rw----. (Note that create is a logrotate directive, so it belongs outside the postrotate...endscript block, which holds shell commands.)
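You can sanity-check the edited rule without touching any logs by using logrotate's debug mode:
logrotate -d /etc/logrotate.d/httpd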
That will then take effect for all future log rollovers. You still need to set the permissions on the directory, e.g.
chmod a+rx /var/log/apache
chmod a+rx /var/log
will give everyone read access to the directories
..and change permissions on the current files...
chgrp webdev /var/log/apache/*.log
chmod g+r /var/log/apache/*.log
You have to change error_log's file permissions in order to be able to read it.