Unable to change SELinux security context for the VirtualBox shared folder - apache

I'm facing the following situation. For web development purposes, I've set up a CentOS 7 guest VM with VirtualBox. I've installed a LAMP stack and configured Apache (vhost, added the apache user to the vboxsf group, added the firewall rule) to access the VirtualBox shared folder.
Configuration of the CentOS 7 guest VM:
Virtual machine hostname: dickwan.dev
Shared Folders:
Name | Read-only | Auto-mount
------------------------------------
dickwan | no | yes
------------------------------------
Networking: NAT (with port forwarding rules)
Port Forwarding Rules:
Name | Protocol | Host IP | Host Port | Guest IP | Guest Port
--------------------------------------------------------------------------------------
HTTP | TCP | . . . | 8080 | . . . | 80
--------------------------------------------------------------------------------------
MariaDB | TCP | . . . | 9306 | . . . | 3306
--------------------------------------------------------------------------------------
SSH | TCP | . . . | 2222 | . . . | 22
Now, on my host machine, when I open a browser and navigate to (let us say):
http://dickwan.dev:8080/server-status
I get the message:
Forbidden
You don't have permission to access /server-status on this server.
I've tracked down the problem to an SELinux security context type issue.
When SELinux is disabled, everything works just fine (well... fine, more or less).
But it feels like bad practice to simply shut down the security feature. I've tried to change the context of the shared folder, but I was not able to complete the operation.
Is there a way to access the shared folder through Apache without deactivating SELinux?

Since the security context of VBox shared folders cannot be changed, you can instead modify the SELinux security policy to allow Apache to work with that context. It is similar to opening a port in your firewall to expose an application.
First, make sure your apache user is part of the group which owns the shared folder. If it is not, you can add it with a command like the following (the user/group names may differ on your system):
usermod -aG vboxsf apache
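Before writing any policy, you can confirm the group membership and see the context VirtualBox assigned to the share. A minimal check; the mount path is an assumption, since VirtualBox auto-mounts shares under /media/sf_<name>:
id apache
ls -dZ /media/sf_dickwan    # shows the SELinux context of the auto-mounted share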
Then, you can use audit2allow to generate a new security policy to work around your issues. Here is a good tutorial.
If you are lazy and only want to allow Apache read access to your VBox shared folders, you can probably adapt the following my_httpd_t.te policy file and use the included commands to apply it on your system.
module my_httpd_t 1.0;

require {
    type httpd_t;
    type vmblock_t;
    class dir read;
    class file { read getattr open };
}

#============= httpd_t ==============
allow httpd_t vmblock_t:dir read;
allow httpd_t vmblock_t:file { getattr open read };

# Generated by audit2allow
# To apply this policy:
## checkmodule -M -m -o my_httpd_t.mod my_httpd_t.te
## semodule_package -o my_httpd_t.pp -m my_httpd_t.mod
## semodule -i my_httpd_t.pp
## systemctl restart httpd
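Once the module is loaded, a quick way to confirm it took effect is to list it and re-request the page (the curl probe is illustrative):
semodule -l | grep my_httpd_t
curl -I http://localhost/server-status    # should no longer return 403 Forbidden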

I had a similar problem (except with Fedora 20 as both host and guest OS). What I did:
sudo mount -t vboxsf shared_folder /media/shared_folder
sudo ln -s /media/shared_folder/ /var/www/
sudo chcon -R --reference=/var/www /var/www/shared_folder
And this works for me :)
Earlier, I had tried to set the security context on the shared folder auto-mounted by VirtualBox, but without success, so I mount it manually.
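If you want the manual mount to survive reboots, one option is to repeat the same steps at boot, for example from /etc/rc.local. A sketch only, assuming rc.local is enabled on your system; the share name and paths mirror the commands above:
mount -t vboxsf shared_folder /media/shared_folder
ln -sfn /media/shared_folder /var/www/shared_folder
chcon -R --reference=/var/www /var/www/shared_folder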

Related

Transfer a file from guest to host in Vagrant

Does anyone know how to transfer a file from my vagrant VM to my host machine?
Here's what I'm trying, but it's not working for me. No errors, but the file is not appearing:
scp -i /Users/myuser/path/mypath/.vagrant/machines/proj/virtualbox/private_key -P 2222 vagrant@127.0.0.1:/home/vagrant/database.sql
I have also tried this:
scp -P 2222 vagrant@127.0.0.1:/home/vagrant/database.sql .
And I get the error 'scp: .: not a regular file type'
Why are you trying to transfer from 127.0.0.1? That is always the IP of the machine you are currently on, so in this case the VM itself. So you are trying to do a local copy, essentially the same as cp /home/vagrant/database.sql .
You need to use an actual IP address of the host system for this to work, not 127.0.0.1 or any other 127.x.x.x loopback address.
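A minimal sketch of what the answer suggests, run from inside the VM; the host's LAN IP, username, and destination path are assumptions, and the host must be running an SSH server:
scp /home/vagrant/database.sql hostuser@192.168.0.10:/Users/myuser/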

Websocket + SSL on AWS Application Load Balancer

I have a Django application deployed on Elastic Beanstalk.
I recently migrated the load balancer from Classic to Application in order to support WebSocket (the layer is formed by Django Channels (~=1.1.8, channels-api==0.4.0), AWS ElastiCache Redis, and Daphne (~=1.4)).
HTTP, HTTPS and the plain WebSocket protocol are working fine.
But I can't find a way to deploy WebSocket over SSL (wss://).
It's killing me, and it is blocking, as an HTTPS connection in the browser will cut off insecure ws:// peer requests.
Here is my ALB Configuration
Does anyone have a solution?
After 2 more days investigating, I finally cracked this config!
Here is the answer:
The right, and minimal, AWS ALB config:
Indeed, we need to:
Decode SSL at the ALB (this is not end-to-end encryption).
Forward all traffic to Daphne.
The reason why I did not go for the widespread "/ws/*" routing to Daphne is that it did give me a successful handshake, but afterwards nothing, nada: websocket messages could not be pushed back to the subscriber. The reason, I believe, is that the push back from Daphne does not respect the custom base trailing URL you customize in your conf, though I cannot be sure of this interpretation. What I am sure of, however, is that if I don't forward all traffic to Daphne, it doesn't work after the handshake.
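For reference, roughly the same listener expressed with the AWS CLI; this is only a sketch, every ARN below is a placeholder for your own resources, and the target group is assumed to point at Daphne's port:
aws elbv2 create-listener \
    --load-balancer-arn <alb-arn> \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=<acm-certificate-arn> \
    --default-actions Type=forward,TargetGroupArn=<daphne-target-group-arn>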
The minimum deployment conf.
There is no need for a complete .ebextensions proxy override in the deployment:
.ebextensions/05_channels.config
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/start_supervisor.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo virtualenv -p /usr/bin/python2.7 /tmp/senv
      source /tmp/senv/bin/activate && source /opt/python/current/env
      sudo python --version > /tmp/version_check.txt
      sudo pip install supervisor
      sudo /usr/local/bin/supervisord -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf reread
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf update
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf restart all
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf status
start_daphne.sh (note that I'm choosing port 8001, matching my ALB conf):
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 8001 fxf.asgi:channel_layer
start_worker.sh
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
python /opt/python/current/app/fxf/manage.py runworker
supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/fxf/custom_eb_deployment/start_daphne.sh --log-file /tmp/start_daphne.log
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log
stderr_logfile=/tmp/daphne.err.log
[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/fxf/custom_eb_deployment/start_worker.sh --log-file /tmp/start_worker.log
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=2
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
stderr_logfile=/tmp/workers.err.log
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
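A couple of sanity checks after deployment, using the same paths as the config above:
sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf status
tail -n 50 /tmp/daphne.err.log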
If some of you are still struggling with this conf, I might post a tutorial on Medium or something.
Don't hesitate to push me for it in the answers ;)
I have also been struggling a lot with SSL, EBS and Channels 1.x, with exactly the same scenario you described, but I finally managed to deploy my app. SSL was always the problem: Django was ignoring the routes in my routing.py file for all SSL requests, while everything had been working just fine before that.
I decided to send all websocket requests to a single root path on the server, say /ws/*. Then I added a specific rule to the load balancer, which receives all these requests on port 443 and redirects them to port 5000 (which the Daphne worker is listening to) as HTTP (not HTTPS!). This works under the assumption that the VPC behind the load balancer is secure enough. Beware that this configuration could involve security issues for other projects.
Now my load balancer configuration looks like this
...as an HTTPS connection in the browser will cut off insecure ws:// peer requests.
One more thing: you should start websocket connections through HTTPS with wss://. You could write something like this in your .js file:
var wsScheme = window.location.protocol.includes('https') ? 'wss' : 'ws';
var wsPath = wsScheme + '://' + window.location.host + '/your/ws/path';
var ws = new ReconnectingWebSocket(wsPath);
Good luck!
You should use wss:// instead of ws://, and change the proxy settings. I just edited my wsgi.conf:
<VirtualHost *:80>
  WSGIPassAuthorization On
  WSGIScriptAlias / /opt/python/current/app/config/wsgi.py

  RewriteEngine On
  RewriteCond %{HTTP:X-Forwarded-Proto} =http
  RewriteRule .* https://%{HTTP:Host}%{REQUEST_URI} [L,R=permanent]

  LoadModule proxy_wstunnel_module /usr/lib/apache2/modules/mod_proxy_wstunnel.so
  ProxyPreserveHost On
  ProxyRequests Off
  ProxyPass "/ws/chat" "ws://**your site**/ws/chat" Keepalive=On
  ProxyPassReverse "/ws/chat" "ws://**your site**/ws/chat" Keepalive=On

  <Directory /opt/python/current/app/>
    Require all granted
  </Directory>
</VirtualHost>
Then it will give you a 200 status on connect. "/ws/chat" should be replaced with your websocket URL.
Before you create this file, you should check that your Daphne server is running.
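A quick way to check that Daphne is up and listening (port 8001, matching the daemon.config below):
ss -tlnp | grep 8001
curl -i http://127.0.0.1:8001/    # Daphne should answer the plain HTTP probe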
The problems I went through were djangoenv and the worker in daemon.config.
First, djangoenv should be on one line, meaning no line breaks.
Second, if you use Django Channels v2, it doesn't need a worker, so erase it.
This is my daemon.config (I use port 8001):
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_daemon.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Get django environment variables
      djangoenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      djangoenv=${djangoenv%?}
      # Create daemon configuration script
      daemonconf="[program:daphne]
      ; Set full path to channels program if using virtualenv
      command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 8001 config.asgi:application
      directory=/opt/python/current/app
      user=ec2-user
      numprocs=1
      stdout_logfile=/var/log/stdout_daphne.log
      stderr_logfile=/var/log/stderr_daphne.log
      autostart=true
      autorestart=true
      startsecs=10
      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600
      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true
      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998
      environment=$djangoenv"
      # Create the supervisord conf script
      echo "$daemonconf" | sudo tee /opt/python/etc/daemon.conf
      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
          echo "[include]" | sudo tee -a /opt/python/etc/supervisord.conf
          echo "files: daemon.conf" | sudo tee -a /opt/python/etc/supervisord.conf
      fi
      # Reread the supervisord config
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
      # Update supervisord in cache without restarting all services
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update
      # Start/Restart processes through supervisord
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart daphne
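After deployment, the same supervisorctl entry point can confirm the daemon is actually running:
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf status daphne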
And double-check your security group rules between the ALB and EC2. Good luck!

How to use production webserver configuration locally

I'm using Nginx virtual hosts to serve a domain and I want to test my configuration locally before deploying.
The only way I've found to do that is to run nginx on local port 80 and temporarily add the following line to my /etc/hosts file:
127.0.0.1 example.com
which causes example.com to resolve to my local nginx instance.
Is there a better way to deal with this?
Local Host
When I just need to quickly check a server running on my local host, the following shell script has proven convenient:
spoof() {
    hosts_file='/etc/hosts'
    temp=$(mktemp)
    cp "$hosts_file" "$temp"
    # Restore the original hosts file on exit or interrupt
    trap 'sudo sh -c "mv \"$temp\" \"$hosts_file\""; trap "" EXIT; return 0' 0 1 2 3 15
    # Build a sed insert command that adds the spoofed entries at line 6 of /etc/hosts
    hosts_lines="6i# SPOOFS:\n"
    for i in "$@"; do
        hosts_lines+="127.0.0.1\t$i\n"
    done
    sudo sh -c "sed -i \"$hosts_lines\" \"$hosts_file\""
    echo "Press CTRL-C to exit..."
    sleep infinity
}
It takes any number of domains, spoofs them, and replaces the original /etc/hosts upon exit. Example usage:
$ spoof example.com example.net example.org
Vagrant
For long-term use, I use Vagrant along with the vagrant-hostsupdater plugin.
After installing, simply adding config.hostsupdater.aliases = ['dev.example.com'] to any Vagrantfile allows access to "example.com" on the VM via "dev.example.com" on the host.
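To confirm the plugin did its job, you can check the host's /etc/hosts after bringing the VM up; the alias is the one from the example above:
vagrant up
grep dev.example.com /etc/hosts    # the plugin should have added an entry pointing at the VM's IP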

Count connections using netstat on cPanel user through ssh

I want to know how many connections are active to my site, which is on a shared hosting account.
The hosting provider is using cPanel and I can access it through ssh.
The problem is if I run the command:
netstat -tuna | wc -l
It returns a wild 2555 connection count, but when I go to Google Analytics and open the real-time section, there are only 15-20 active users.
My question is: are those 2555 connections to my site, or to the server as a whole, regardless of the user I run the command as? (I don't have root access.)
Your netstat command is showing all connections on your server, not only Apache connections. If you want to check only Apache connections, you will have to use the following command:
netstat -anp | grep 80 | wc -l
But with the above command you still get the total number of Apache connections on the machine. Since your site is hosted on a shared server, you cannot single out your site's connections this way.
To check your site's connections, you would have to assign a dedicated IP to your site and use that IP in the above command:
netstat -anp | grep 80 | grep 1.1.1.1 | wc -l
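Note that grep 80 matches "80" anywhere in the line (PIDs, other ports, parts of addresses), so the count is approximate. A tighter sketch that counts only established connections whose local port is 80:
netstat -ant | awk '$4 ~ /:80$/ && $6 == "ESTABLISHED"' | wc -l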
Thanks

SSH overseas with Raspberry pi

I currently have my Raspberry Pi set up with network connectivity, and I can connect to it via a local IP address like this:
192.168.0.x
Is there any way I can use my public IP to SSH into my Raspberry Pi?
I think dynamic DNS is usually the way to go. I use FreeDNS and I think it's pretty good. Instructions for setup, by dentaku65:
First of all, register your account on FreeDNS. FreeDNS offers a bunch of domain names; to my taste, the best ones (or the easiest to remember) are:
mooo.com
ignorelist.com
Assume that you register: <your_host>.ignorelist.com
Install inadyn:
sudo apt-get install inadyn curl
Open the url: http://freedns.afraid.org/dynamic/
Login with your account
Select the link Direct URL beside .ignorelist.com
Copy everything to the right of the ? in the address bar (an alphanumeric string)
Create configuration file of inadyn:
sudo gedit /etc/inadyn.conf
And save this content:
--username <your_username>
--password <your_password>
--update_period 60000
--forced_update_period 320000
--alias <your_host>.ignorelist.com,alphanumeric string
--background
--dyndns_system default#freedns.afraid.org
--syslog
Add inadyn to crontab:
export EDITOR=gedit && sudo crontab -e
Edit the file to add the following line:
@reboot /usr/sbin/inadyn
Reboot your PC
Wait 3 minutes
Check if inadyn is running:
ps -A | grep inadyn
Check inadyn behaviour:
more /var/log/messages |grep INADYN
Check if your host is up:
ping <your_host>.ignorelist.com
There are two possible solutions to this problem.
If your ISP provides a public IP, you can use a dynamic DNS service from No-IP, DynDNS, or any other equivalent provider, and forward port 22 to the RPi's IP using your router menu.
If your ISP doesn't provide a public IP and you are behind NAT, you can make use of the reverse remote SSH method mentioned in this link. To access via this method, you need a server in between that has a public IP. http://www.tunnelsup.com/raspberry-pi-phoning-home-using-a-reverse-remote-ssh-tunnel
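The core of that method looks roughly like the sketch below, run on the Pi; the relay hostname and usernames are assumptions, and the relay must have a public IP with sshd running:
ssh -N -R 2222:localhost:22 user@relay.example.com
Then, on the relay (or hopping through it), reach the Pi:
ssh -p 2222 pi@localhost    # run this on relay.example.com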
Hope it helps.
You may also need to enable port forwarding on your router.