The scenario is:
AWS EC2 instances have two users, root and ubuntu. Using root is not
advisable; AWS recommends ubuntu as the default
user, and this user has full sudo permissions.
Ansible's controlling machine runs on an EC2 instance. The Ansible playbook bootstraps another EC2 instance and installs certain software on it.
A Node.js web app triggers these Ansible scripts as the root user.
The setup works well when all the Ansible and Node.js files are kept in the same folder, but when they are organised into different folders, Ansible gives an SSH error.
Error:
When the files are organised into separate folders, the Node.js app still triggers the Ansible scripts and the new EC2 instance is bootstrapped, but once the SSH port is ready, the playbook cannot install the required software because it fails with an SSH "permission denied" error.
The Node.js code that triggers the Ansible scripts is executed as:
child.exec("ansible-playbook ../playbook.yml");
The only change in the code after organising into folders is the addition of the "../" path.
Debugging:
As mentioned above, there are two users on the EC2 instance. While bootstrapping, the ec2_key module stores root's SSH key on the newly bootstrapped instance, but while installing the software on that instance, ubuntu's SSH key is used to gain access. This key mismatch produces the SSH "permission denied" error, and it only occurs after organising the Ansible and Node.js files into separate folders. If all the Ansible and Node.js files are put in the same folder, no error is raised. FYI: all files are stored under the ubuntu user.
Just puzzled about this!
As suggested by @udondan, adding cwd to the exec function's options works.
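For reference, a minimal sketch of that fix, assuming child is Node's child_process module and that the Ansible files live in a folder of their own (the path below is illustrative):

const child = require("child_process");

// Run the playbook with the working directory set to the Ansible folder, so
// relative paths (playbook, inventory, keys) resolve exactly as they did
// before the files were split into separate folders.
child.exec(
  "ansible-playbook playbook.yml",
  { cwd: "/home/ubuntu/ansible" },
  (err, stdout, stderr) => {
    if (err) {
      console.error(stderr);
      return;
    }
    console.log(stdout);
  }
);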
On an Ubuntu 18.04 LTS system (my private home server, only accessible from my home LAN), I have created an Apache/WSGI/Flask web application that mounts encrypted drives (via cryptsetup luksOpen and then mount) on web request. The password for cryptsetup is sent via HTTP POST, and a Python script (which calls itself via sudo and then calls cryptsetup and mount via subprocess.run) is entered in /etc/sudoers.d... so that it needs no password. Everything runs fine when I call sudo -u www-data ./starthepythonscript_that_mounts_stuff_via_sudo.py without a password, and afterwards I can see the mounted drives and cd into them.
However, the behaviour is different when the script is called by the same user "www-data", but from within the Apache WSGI service on Ubuntu. In this case, mounting seems to succeed, but the mounts are not visible on the system (neither when listing the contents of the corresponding mount point, nor when running mount, e.g. as root, on the machine that runs Apache). They simply don't show up. Is there some kind of sandbox mechanism for the Apache service on Ubuntu?
My goal is to mount the drives via the Apache/WSGI/Flask/sudo script, but in a "normal" way, such that users on the same machine can see the mounts via mount or cd into them.
Any hint is appreciated!
Never mind; I have found the answer on unix.stackexchange.com. The Apache service is started with PrivateTmp=true, which puts it in a private mount namespace. One option is to remove that flag or set it to false; another is to keep the setting and (from the root namespace) use nsenter -a -t APACHEPID bash to start a shell that sees the mounts.
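For completeness, a sketch of both options on a stock Ubuntu/Apache install (the unit name apache2.service is an assumption; adjust it to whatever unit actually runs your WSGI app):

# Option 1: turn off the private mount namespace for Apache
sudo systemctl edit apache2.service      # opens a drop-in override file
#   [Service]
#   PrivateTmp=false
sudo systemctl restart apache2.service

# Option 2: keep PrivateTmp=true and enter Apache's namespace to see its mounts
sudo nsenter -a -t "$(pidof -s apache2)" bash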
I've launched Wordpress on Google Compute Engine (via their automated launcher process). It installs quickly and easily, and by visiting the external IP displayed in my Compute Engine VM Instances dashboard, I am able to access the fresh installation of Wordpress.
However, when I scp an existing Wordpress installation, oldWPsite, into /var/www/ and then replace my html directory
mv html htmlFRESH
mv oldWPsite html
my site returns a 'failed to open' error. The directory permissions and user:group ownership are identical.
Moreover, when I return the directories to their original configuration
mv html oldWPsite
mv htmlFRESH html
Still, the error persists.
I am familiar with other hosting paradigms where I can easily switch between the publicly served files by simply modifying directory names. Is there something unique about Google Compute Engine? What is the best way to import existing sites, files, etc into the Google Cloud environment?
Replicate
Install Wordpress via Google Launcher on a micro-VM.
Visit public IP of the VM instance.
SCP a fresh installation of Wordpress to /var/www.
Replace the Google installed html directory with the newly created and copied Wordpress directory using mv commands.
Visit public IP of the VM instance.
===
Referenced Questions:
after replacing /var/www/html directory, apache does not work anymore
permission for var/www/html directory - a2enmod command unrecognized on new G-compute VM
The imported .htaccess file had an HTTPS redirect, which caused the failure since HTTPS is not set up in a fresh launch of Wordpress through GCE. Compounding the issue, the browser cache held on to that redirect when the previous site was moved back to its initial configuration.
Per usual, the solution involved the investigation of user errors.
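For context, the offending rule typically looks something like this (illustrative, not the exact contents of the imported file); note that the 301 is precisely the kind of redirect browsers cache aggressively, which explains why the error persisted after moving the directories back:

# typical .htaccess HTTPS redirect that breaks a plain-HTTP GCE launch
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]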
Traefik v1.3.1
Docker CE for Windows: 17.06.0-ce-win18 (12627)
I have the /acme folder routed to a host volume which contains the file acme.json. With the Traefik 1.3.1 update, I noticed that Traefik gets stuck in an infinite loop complaining that "permissions 755 for /etc/traefik/acme/acme.json are too open, please use 600". The only solution I've found is to remove acme.json and let Traefik re-negotiate the certs. Unfortunately, every time I need to restart the container, I have to remove acme.json again or I'm stuck with the same issue.
My guess is that the issue lies with the Windows volume mapped to Docker but I was wondering what the recommended workaround would even be for this?
Can I change permissions on shared volumes for container-specific deployment requirements?
No, at this point, Docker for Windows does not enable you to control (chmod) the Unix-style permissions on shared volumes for deployed containers, but rather sets permissions to a default value of 0755 (read, write, execute permissions for user, read and execute for group) which is not configurable.
Traefik is not compatible with regular Windows due to the POSIX permissions check. It may work in the Windows Subsystem for Linux since that has a Unix-style permission system.
I stumbled across this issue when trying to get Traefik running on Docker for Windows, and ended up getting it working by adding a few lines to a Dockerfile to create acme.json and set its permissions. I then built the image, and despite it throwing the "Docker image from Windows against a non-Windows Docker host" security warning, when I checked the permissions on the acme.json file, it worked!
I set up a repo and have it auto-building to Docker Hub here for further testing:
https://hub.docker.com/r/guerillamos/traefik/
https://github.com/guerillamos/traefikwin/blob/master/Dockerfile
Once I had that built, I switched the image out in my docker-compose file, and my DNS challenge to Cloudflare worked like a charm according to the logs.
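For reference, the core of that approach is only a couple of Dockerfile lines; a sketch of the idea (the base image tag and acme path are assumptions taken from the setup above; see the linked repo for the actual file):

# Build acme.json into the image's Linux filesystem so it is not subject to
# the fixed 0755 permissions of a Windows-mounted volume.
FROM traefik:1.3.1
RUN mkdir -p /etc/traefik/acme \
 && touch /etc/traefik/acme/acme.json \
 && chmod 600 /etc/traefik/acme/acme.json

Note that with this approach the certificates live inside the container and are re-negotiated if the container is recreated; mounting a named Docker volume (rather than a Windows folder) at /etc/traefik/acme is another way to keep Unix permissions intact.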
I hope this helps someone!
I've just installed Concrete 5 CMS by following the instructions on the website.
The folders application/files/, application/config/, packages/ and
updates/ will need to be writable by the web server process. This can
mean that the folders will need to be "world writable", depending on
your hosting environment. If your server supports running as
suexec/phpsuexec, the files should be owned by your user account, and
set as 755 on all of them. That means that your web server process can
do anything it likes to them, but nothing else can (although everyone
can view them, which is expected.) If this isn't possible, another
good option is to set the apache user (either "apache" or "nobody") as
having full rights to these files. If neither is possible, chmod 777
to files/ and all items within (e.g. chmod -R 777 files/*).
The packages folder has permissions 777 and the /root/tmp folder has permissions 755.
I've uploaded a new theme to /packages over FTP. When I try to install the new theme I see the following error:
An unexpected error occurred. fopen(/root/tmp/1419851019.zip) [function.fopen]: failed to open stream:
Permission denied
I have FTP access to the server and access to CPanel. How do I get this working without granting too many permissions which pose a security risk?
My install has the folders application/files, application/config, packages, and updates all set to 755 and it's working just fine.
You get that error because the system is trying to write to /root/tmp, which is apparently configured as the temp folder in the environment that handles your PHP request.
Try adding the folder application/files/tmp to your file system (within your concrete5 installation), and then make sure that the user running PHP in your environment can write to that folder. As explained in concrete5's own documentation (that you linked originally), which user this is depends on your server.
Usually in shared hosting environments it's the same account you use to log in through SSH or FTP. In that case, 755 permissions should be enough, as long as your own user owns the tmp folder you just created.
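A quick sketch of those steps over SSH, run from the root of the concrete5 installation (the apache user in the comment is only an example; which user runs PHP depends on your host):

mkdir -p application/files/tmp
chmod 755 application/files/tmp
# if PHP runs as a different user (e.g. apache or nobody), hand it the folder instead:
# chown apache: application/files/tmp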
I've been referring to http://toroid.org/ams/git-website-howto as a starting point for a production web server running on a managed VPS. The VPS runs cPanel and WHM, and it will eventually host multiple client websites, each with its own cPanel account (thus, each with its own Linux user and home directory from which sites are served). Each client's site is a separate Git repository.
Currently, I'm pushing each repository via SSH to a bare repo in the client's home folder, e.g. /home/username/git/repository.git/. As per the tutorial above, each repo has been configured to checkout to another directory via a post-receive hook. In this case, each repo checks out to its own /home/username/public_html (the default DocumentRoot for new cPanel accounts), where the files are then served by Apache. While this works, it requires me to set up (in my local development environment) my remotes like this:
url = ssh://username@example.com/home/username/git/repository.git/
It also requires me to enter the user's password every time I push to it, which is less than ideal.
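For reference, the post-receive hook mentioned above is essentially the one from the toroid howto; a sketch with illustrative paths:

#!/bin/sh
# /home/username/git/repository.git/hooks/post-receive
# Check the pushed content out into the account's DocumentRoot so Apache serves it.
GIT_WORK_TREE=/home/username/public_html git checkout -f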
In an attempt to centralize all of my repositories in one folder, I also tried pushing to /root/git/repository.git as root and then checking out to the appropriate home directory from there. However, this causes all of the checked-out files to be owned by root, which prevents Apache from serving the site, with errors like
[error] [client xx.xx.xx.xx] SoftException in Application.cpp:357: UID of script "/home/username/public_html/index.php" is smaller than min_uid
(which is a file ownership/permissions issue, as far as I can tell)
I can solve that problem with chown and chgrp commands in each repo's post-receive hook--however, that also raises the "not quite right" flag in my head. I've also considered gitosis (to centralize all my repos in /home/git/), but I assume that I'd run into the same file ownership problem, since the checked-out files would then be owned by the git user.
Am I just approaching this entire thing the wrong way? I feel like I'm completely missing a third, more elegant solution to the overall problem. Or should I just stick to one of the methods I described above?
It also requires me to enter the user's password every time I push to it, which is less than ideal
It shouldn't be necessary if you publish your public SSH key to the destination account's ".ssh/authorized_keys" file.
See also locking down ssh authorized keys for instance.
But see also the official reference, the Pro Git book's "Setting Up the Server" section.
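A minimal sketch of publishing the key from the development machine (the username and host are placeholders matching the remote URL above):

# generate a key pair if you don't already have one
ssh-keygen -t ed25519
# append the public key to /home/username/.ssh/authorized_keys on the server
ssh-copy-id username@example.com
# subsequent pushes to ssh://username@example.com/... no longer prompt for a password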