Drives mounted via Apache/WSGI sudo command not visible on host

On an Ubuntu 18.04 LTS system (my private home server, only accessible from my home LAN), I have created an Apache/WSGI/Flask web application that mounts encrypted drives (via cryptsetup luksOpen and then mount) on web request. The password for cryptsetup is sent via HTTP POST, and a Python script (which re-invokes itself via sudo and then calls cryptsetup and mount via subprocess.run) is entered in /etc/sudoers.d... without the need for a password, so everything runs fine with sudo -u www-data ./starthepythonscript_that_mounts_stuff_via_sudo.py and no password prompt. I can see the mounted drives and cd into them after calling the script this way.
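For reference, the sudoers entry for such a setup is a one-liner along these lines (the file name and script path here are placeholders, not the real ones):

# contents of /etc/sudoers.d/mount-script (hypothetical file name)
www-data ALL=(root) NOPASSWD: /usr/local/bin/mount_script.py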
However, the behavior is different when the script is called by the same user "www-data" but from the Apache WSGI service. In that case the mounting seems to succeed, but the mounts are not visible on the system: neither when listing the contents of the corresponding mount point, nor in the output of mount (even as root) on the machine that runs Apache. They just don't show up. Is there some kind of sandbox mechanism for the Apache service on Ubuntu?
My goal is to mount the drives via the Apache/WSGI/Flask/sudo script, but in a "normal" way, such that users on the same machine can see them via mount or cd into them.
Any hint is appreciated!

Never mind - I have found the answer on unix.stackexchange.com: the Apache service is started with PrivateTmp=true, which puts it in a private mount namespace. One option is to remove that flag or set it to false; another is to keep the setting and (from the root namespace) use nsenter -a -t APACHEPID bash to start a shell that sees the mounts.
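A minimal sketch of both options (the unit name and commands assume the stock Ubuntu apache2 package):

# Option 1: turn off PrivateTmp via a systemd drop-in
sudo systemctl edit apache2      # opens an override file in an editor
# add these two lines, save, and exit:
#   [Service]
#   PrivateTmp=false
sudo systemctl restart apache2

# Option 2: keep PrivateTmp and enter Apache's mount namespace instead
APACHEPID=$(systemctl show -p MainPID --value apache2)
sudo nsenter -a -t "$APACHEPID" bash    # this shell now sees the mounts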

Related

Running docker commands with a user without root privileges (possibly with the www-data user of Apache)

I am developing a simple Flask application (configured with an Apache webserver) which provides a web interface for Docker management. My Apache server runs as the 'www-data' user and uses the same user for all of its API operations.
But I get a 'Permission denied' error for the following,
docker images
docker run, etc.
as it doesn't allow the 'www-data' user to run the above commands.
Can you please suggest how to use the 'www-data' user for Docker operations?
I don't want to add the 'www-data' user to the sudoers list.
Would adding the user to the docker group alone be a proper solution?
Or please suggest a best-practice solution for this.
Thanks
GuruPrasad
It would be easier, clearer, and no less dangerous to tell Apache to run your process as root.
Remember that, if you can run any Docker command at all, you can trivially get unrestricted root-level access to anything on the system. For example, if your tool decides it really does want www-data to be in the host's sudoers list, it can run:
docker run --rm -v /:/host busybox \
  sh -c 'echo "www-data ALL=(ALL) NOPASSWD: ALL" >> /host/etc/sudoers'
Depending on what your management tool does, it is potentially offering the same unprotected root-level access to the host to anyone who can reach the web page. Even if it isn't, you need to be extremely careful with how you invoke Docker (another SO answer I was looking at had the potential to root the system if a user could create a directory with an arbitrary name and run the script from there, for instance).
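For completeness, the group change the question asks about is a single command, but per the above it is security-equivalent to granting passwordless root:

sudo usermod -aG docker www-data    # www-data can now control the Docker daemon
# group membership is only picked up by new sessions, so restart the
# service that runs as www-data (e.g. sudo systemctl restart apache2)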

Apache2 Not Responding: Bitnami Magento Install (Legacy)

For reasons too insane to even go into, I am attempting to install the Bitnami Magento 1.9.2.4 image on a fresh Amazon AWS/Lightsail Ubuntu 16.04 instance (2 GB, to avoid complaints and be sure I don't run into anything unnecessary).
I think this is really more of an Apache question. After I finish the install (successfully), I can't get the server to respond via the instance IP address at the default port (8080).
Regarding the old Bitnami image: you can still get (or wget) that Magento 1.9.2.4 installer; it's over here:
wget "https://downloads.bitnami.com/files/stacks/magento/1.9.2.4-3/bitnami-magento-1.9.2.4-3-linux-x64-installer.run"
So, for the sake of anyone trying to work through the whole process: once you pull the above down to your instance, you need to chmod the file to 755. This assumes you are in the directory with your download:
chmod 755 bitnami-magento-1.9.2.4-3-linux-x64-installer.run
Then run it using its full path, like:
/home/ubuntu/bitnami-magento-1.9.2.4-3-linux-x64-installer.run
So the install is going to ask a bunch of questions; for anyone keeping track, my answers were all yes (i.e. yes to Git, phpMyAdmin, Beetailer... whatever that is).
Then I created an admin user / password etc.
As far as ports go, I didn't have anything running on 8080, so the install defaulted to port 8080, with HTTPS on 8443 and MySQL on 3306 (more on ports in a minute).
I think the Host/Domain setting is one of the keys to this problem. When I couldn't get the server to respond, I just recreated the instance and tried a different domain during the install process. I tried: the internal AWS IP, the external actual IP, and 127.0.0.1.
Here's what the Magento 1.9 Domain prompt looks like: (screenshot omitted)
So basically that sort of brings us up to date.
Once I finished the install, like a normal human used to using Bitnami as a cloud image, I assumed the server would respond at the default path at the IP address it was running on, i.e.:
BASEIPADDRESS:8080/magento
Not the case. When I hit that, the server does NOT respond - hence the question. In addition to the above, I have also tried BASEIPADDRESS alone and BASEIPADDRESS:8080.
Results of checking the open ports
So since the server is not responding I figured I would check the ports.
First I checked using netstat:
netstat -lntu
I got back a listing of listening sockets (screenshot omitted). Then I realized that netstat is now deprecated... so I went with:
ss -lntu
I got back a similar listing (screenshot omitted; the formatting wouldn't work as text).
To me it looks like 8080 (default) is open in both of those results. So why isn't the server responding at the default location?
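For anyone repeating this check without the screenshots, the same information can be had as follows (a sketch using the commands above):

sudo ss -lntp | grep ':8080'                        # is anything listening on 8080, and which process?
curl -sI http://127.0.0.1:8080/magento | head -n1   # does Apache answer locally?

If the port is listening and the local curl gets a response but the browser still cannot connect, the block is in front of the instance (e.g. the Lightsail firewall), which is where this question eventually led.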
Bitnami Status = OK
Checking the status with:
/home/ubuntu/magento-1.9.2.4-3/ctlscript.sh status
Everything looks good:
apache already running
mysql already running
Memcached not running
Since it said Memcached was not running, I started memcached to see if that was the issue; it was not.
I can access the instance via SSH, and yes, I am sure the IP is right.
I also posted this to the Bitnami community but haven't heard anything over there. I will cross-populate as I get ideas.
It looks to me like you configured Magento using the private IP address, so you would not be able to access it from your browser. A way to check is to execute the following command on the machine itself:
curl -L 127.0.0.1:8080/magento
If that provides output, then the IP is misconfigured, and you would need to reinstall using the proper IP.
So this ended up being PRIMARILY due to not running the Bitnami stack installer as root / sudo:
sudo /home/ubuntu/bitnami-magento-1.9.2.4-3-linux-x64-installer.run
Why Install with Sudo on AWS/Lightsail?
So the reason you need to install with sudo has to do with the fact that, when run as a normal user (i.e. not root), the installer cannot bind the privileged port 80 and therefore defaults to port 8080, which is NOT open on AWS by default. To complicate matters further, you may not be able to get things running properly even if you manually swap to port 80 AFTER running the installer.
To avoid the scenario where port 80 requires root access to use, I just re-created my instance and ran the installer as root with the above command.
Host Setting
During install I selected the public IP for the "Host" prompt and everything worked as I thought it might (straight out of the box).
Thanks to Javier Salermon, who put me on the right track, and the devs at Bitnami for clueing me in to the fact that 8080 is not open by default.

SSH and FTP showing different files

I am using a host to try to deploy my Django site, but I am confused by SSH vs. FTP.
Background info:
I got the IP address, name, and password for the VPS from my host.
I logged in using the same information via PuTTY and via WinSCP.
Both show me as having accessed root@[VPS IP Address].
Running ls in PuTTY shows nothing (no files or folders), so I created a file hello.txt.
WinSCP shows a lot of folders at the root, unlike PuTTY. I then searched all the folders for the hello.txt that I created, and it's nowhere to be found.
Why would accessing the same VPS via two different methods show completely different things?
If you are indeed sure that you are logged into the same host with the same user account, you should check that you are in the same folder.
Using ssh, you can issue the command pwd (print working directory) to view the directory you are currently in.
To change to another directory using the shell, use the cd command, for example:
cd .. # This moves up to the parent directory
cd /var/www/html
The WinSCP user interface should also show you which directory you are currently in.
Navigating to another directory using WinSCP should be fairly straightforward.
There's no reason to think these methods will put you in the same directory location at all.
When you SSH in using PuTTY, you will almost certainly be put in your home directory, and that will be where your hello.txt was created.
But the FTP service has presumably been configured to put you in the common area where your service's files are located, which is not under your home directory. Where it is will be specific to the configuration of that machine.
Using SSH you will probably be able to use cd to change directory to the FTP location, if you can find out what it is; the reverse is not true, however, and you almost certainly won't be able to navigate to your home directory via FTP.
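If you want to locate the FTP root from the shell, one approach, assuming the server runs vsftpd (an assumption - it could just as well be ProFTPD or SFTP-only), is:

grep -iE 'local_root|anon_root' /etc/vsftpd.conf   # prints the configured FTP root, if one is set
# alternatively, note the remote path WinSCP shows in its address bar
# and cd to it in the PuTTY session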
(Note, this is not a question about Django, and should probably have been asked on ServerFault.)

SSH error when using Ansible with AWS EC2

The scenario is:
AWS EC2 instances have two users, namely root and ubuntu. Using root is not advisable; AWS recommends ubuntu as the default user, and this user has full sudo permissions.
Ansible's controlling machine runs on an EC2 instance. The Ansible playbook bootstraps another EC2 instance and installs certain software on it.
A Node.js web app triggers these Ansible scripts as the root user.
The setup works well when all the files for Ansible and Node.js are kept in the same folder, but organising them into different folders gives an Ansible SSH error.
Error: (screenshot omitted; SSH reports "Permission denied")
So, when they are organised in separate folders, the Node.js app triggers the Ansible scripts and the new EC2 instance is bootstrapped, but once the SSH port is ready, the required software cannot be installed because SSH gives a permission-denied error.
The Node.js code that triggers the Ansible scripts is executed as:
child.exec("ansible-playbook ../playbook.yml");
The only change in the code after organising into folders is the addition of the "../" path.
Debugging:
As mentioned, there are two users on the EC2 instance. The ec2_key module, while bootstrapping, stores root's SSH key on the newly bootstrapped instance, but while installing the software on that instance, ubuntu's SSH key is used for access. This conflict between the keys gives the SSH permission-denied error, and it particularly occurs after organising the Ansible and Node.js files into separate folders; if all the Ansible and Node.js files are put in the same folder, no error is raised. FYI: all the files are stored under the ubuntu user.
Just puzzled about this!
As suggested by @udondan, passing the cwd option to the exec function (pointing it at the directory containing the playbook) fixes the problem.
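Conceptually, the fix is equivalent to launching the playbook from its own directory - a sketch, with the real code passing { cwd: ... } to exec instead. Ansible looks for ansible.cfg in the current working directory, so any SSH key configured there is only picked up when the process runs from the right place:

# instead of running from the Node.js app's folder with a ../ path:
cd /path/to/ansible && ansible-playbook playbook.yml
# ./ansible.cfg (and any relative private_key_file it sets) now resolves correctly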

Remotely create a vhost on a docker container running rabbitmq

I have a Vagrantfile that does two important things: first it pulls and runs dockerfile/rabbitmq, then it builds from a custom Dockerfile and runs an application which assumes a vhost, let's say "/foo", exists on the rabbitmq server.
The problem is the vhost is not there.
The rabbitmq container is running successfully, and the app is linked to it using --link when the built image is run. Using the environment variables Docker sets, I can reach the server. But somewhere in the middle of these operations I need to create the vhost, as my connection is refused - I assume because "/foo" is not there.
How can I get the vhost onto the rabbit server?
Thanks
Note: using the web admin is not an option; this has to be done programmatically.
You can put default_vhost in /etc/rabbitmq/rabbitmq.config: http://www.rabbitmq.com/configure.html
It will then be created on the first run. (Stop the server and delete the mnesia directory if it has already been started.)
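A minimal sketch of that approach, writing the classic Erlang-term config ("/foo" being the vhost from the question; note the <<...>> binary syntax and the trailing dot, both required by this format):

cat > /etc/rabbitmq/rabbitmq.config <<'EOF'
[
  {rabbit, [
    {default_vhost, <<"/foo">>}
  ]}
].
EOF

In the Docker setup from the question, this file would be baked into the image (or mounted in) before the broker's first start.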
There are a few ways to get the desired configuration:
Export/import the whole configuration with rabbitmqadmin, the Management Plugin CLI tool,
or
use the HTTP API from the management plugin,
or
use the rabbitmqctl CLI tool to manage access control (see the sketch below).
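For the rabbitmqctl route, creating the vhost and granting a user full access to it looks like this (the user name is a placeholder):

rabbitmqctl add_vhost /foo
rabbitmqctl set_permissions -p /foo myappuser ".*" ".*" ".*"

Since the broker runs in a container here, you would execute these inside it, e.g. with docker exec on the rabbitmq container.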
BTW, according to the docs here: https://www.rabbitmq.com/vhosts.html
you can do this via curl by using:
curl -u username:password -X PUT http://rabbitmq.local:15672/api/vhosts/vh1
So it probably doesn't matter whether you are doing this remotely or not.