WebSocket + SSL on AWS Application Load Balancer

I have a Django application deployed on Elastic Beanstalk.
I recently migrated the load balancer from Classic to Application in order to support WebSocket (layer formed by: Django-channels (~=1.1.8, channels-api==0.4.0), Redis ElastiCache AWS, and Daphne (~=1.4)).
HTTP, HTTPS and the WebSocket protocol are working fine.
But I can't find a way to deploy WebSocket over SSL (wss://).
This is a blocker, because an HTTPS connection in the browser will cut off insecure ws:// requests.
Here is my ALB configuration.
Does anyone have a solution?

After two more days of investigating, I finally cracked this config!
Here is the answer:
The correct, and minimal, AWS ALB config:
Indeed, we need to:
decode SSL (this is not end-to-end encryption);
forward all traffic to Daphne.
The reason I did not go for the widely shared "/ws/*"-routed-to-Daphne configuration is that it did give me a successful handshake, but after that, nothing, nada: no WebSocket messages could be pushed back to the subscriber. The reason, I believe, is that the push back from Daphne does not respect the custom base path you set in your conf; I cannot be sure of this interpretation, though. What I am sure of is that if I don't forward all traffic to Daphne, nothing works after the handshake.
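For reference, a minimal sketch of that setup with the AWS CLI (the ARNs, VPC ID and names are placeholders, not from the original post; port 8001 matches the Daphne conf below):

# Target group for Daphne: plain HTTP behind the ALB, since SSL terminates at the ALB
aws elbv2 create-target-group \
    --name daphne-tg --protocol HTTP --port 8001 \
    --vpc-id vpc-XXXXXXXX

# HTTPS listener that decodes SSL and forwards ALL traffic to that target group
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/app/my-alb/XXXX \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=arn:aws:acm:REGION:ACCOUNT:certificate/XXXX \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/daphne-tg/XXXX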
The minimal deployment conf
There is NO NEED for a complete .ebextensions proxy override in the deployment:
.ebextensions/05_channels.config
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/start_supervisor.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo virtualenv -p /usr/bin/python2.7 /tmp/senv
      source /tmp/senv/bin/activate && source /opt/python/current/env
      sudo python --version > /tmp/version_check.txt
      sudo pip install supervisor
      sudo /usr/local/bin/supervisord -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf reread
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf update
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf restart all
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf status
start_daphne.sh (note that I'm choosing port 8001, in line with my ALB conf):
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 8001 fxf.asgi:channel_layer
start_worker.sh
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
python /opt/python/current/app/fxf/manage.py runworker
supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/fxf/custom_eb_deployment/start_daphne.sh --log-file /tmp/start_daphne.log
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log
stderr_logfile=/tmp/daphne.err.log
[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/fxf/custom_eb_deployment/start_worker.sh --log-file /tmp/start_worker.log
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=2
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
stderr_logfile=/tmp/workers.err.log
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
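Once deployed, a quick sanity check of the processes and logs (using the paths from the conf above):

sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf status
tail -f /tmp/daphne.out.log /tmp/workers.out.log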
If some of you are still struggling with this conf, I might post a tutorial on Medium or something.
Don't hesitate to push me for it in the comments ;)

I also struggled a lot with SSL, Elastic Beanstalk and Channels 1.x, in exactly the scenario you describe, but I finally managed to deploy my app. SSL was always the problem: Django was ignoring the routes in my routing.py file for all SSL requests, while everything had been working just fine before that.
I decided to send all the WebSocket requests to a single root path on the server, say /ws/*. Then I added a specific rule to the load balancer, which receives all these requests on port 443 and forwards them to port 5000 (which the Daphne worker listens on) as HTTP (not HTTPS!). This works under the assumption that the VPC behind the load balancer is secure enough. Beware that this configuration could involve security issues for other projects.
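That listener rule can be sketched with the AWS CLI like this (the ARNs and priority are placeholders; the target group is the one listening on port 5000):

aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:listener/app/my-alb/XXXX/XXXX \
    --priority 10 \
    --conditions Field=path-pattern,Values='/ws/*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/daphne-5000/XXXX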
Now my load balancer configuration looks like this
Regarding "...HTTPS connection from the browser will cut insecure ws:// requests":
One more thing: you should open WebSocket connections from HTTPS pages with wss://. You could write something like this in your .js file.
// Use wss:// when the page itself was served over HTTPS
var wsScheme = window.location.protocol.includes('https') ? 'wss' : 'ws';
var wsPath = wsScheme + '://' + window.location.host + '/your/ws/path';
// ReconnectingWebSocket is a third-party drop-in replacement for WebSocket
var ws = new ReconnectingWebSocket(wsPath);
Good luck!

You should use wss:// instead of ws://, and change the proxy settings. I just added this to my wsgi.conf:
<VirtualHost *:80>
  WSGIPassAuthorization On
  WSGIScriptAlias / /opt/python/current/app/config/wsgi.py
  RewriteEngine On
  RewriteCond %{HTTP:X-Forwarded-Proto} =http
  RewriteRule .* https://%{HTTP:Host}%{REQUEST_URI} [L,R=permanent]
  LoadModule proxy_wstunnel_module /usr/lib/apache2/modules/mod_proxy_wstunnel.so
  ProxyPreserveHost On
  ProxyRequests Off
  ProxyPass "/ws/chat" "ws://**your site**/ws/chat" Keepalive=On
  ProxyPassReverse "/ws/chat" "ws://**your site**/ws/chat" Keepalive=On
  <Directory /opt/python/current/app/>
    Require all granted
  </Directory>
</VirtualHost>
Then it will give you a 200 status on connect. "/ws/chat" should be replaced by your WebSocket URL.
Before you make this file, you should check that your Daphne server is running; one way to do that is sketched below.
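A quick handshake check with curl, using the path and port from this answer (the Sec-WebSocket-Key is just an arbitrary valid base64 value):

# A successful WebSocket handshake answers with HTTP 101 Switching Protocols
curl -i -N http://127.0.0.1:8001/ws/chat \
    -H "Connection: Upgrade" -H "Upgrade: websocket" \
    -H "Sec-WebSocket-Version: 13" \
    -H "Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw=="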
The problems I ran into were with djangoenv and the worker in daemon.config.
First, djangoenv should be on one line, meaning no line breaks.
Second, if you use Django Channels v2, it doesn't need a worker, so erase it.
This is my daemon.config (I use port 8001):
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_daemon.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Get django environment variables
      djangoenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      djangoenv=${djangoenv%?}
      # Create daemon configuration script
      daemonconf="[program:daphne]
      ; Set full path to channels program if using virtualenv
      command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 8001 config.asgi:application
      directory=/opt/python/current/app
      user=ec2-user
      numprocs=1
      stdout_logfile=/var/log/stdout_daphne.log
      stderr_logfile=/var/log/stderr_daphne.log
      autostart=true
      autorestart=true
      startsecs=10
      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600
      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true
      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998
      environment=$djangoenv"
      # Create the supervisord conf script
      echo "$daemonconf" | sudo tee /opt/python/etc/daemon.conf
      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
        echo "[include]" | sudo tee -a /opt/python/etc/supervisord.conf
        echo "files: daemon.conf" | sudo tee -a /opt/python/etc/supervisord.conf
      fi
      # Reread the supervisord config
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
      # Update supervisord in cache without restarting all services
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update
      # Start/Restart processes through supervisord
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart daphne
And double-check your security group rules between the ALB and the EC2 instances. Good luck!
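For example, allowing the ALB's security group to reach the instances on the Daphne port could look like this (the group IDs are placeholders):

aws ec2 authorize-security-group-ingress \
    --group-id sg-INSTANCES \
    --protocol tcp --port 8001 \
    --source-group sg-ALB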

Related

How to run Mercure in production with Apache

I have a Symfony project on an Apache server that uses Mercure, and I am trying to set up the Mercure hub in production.
To run the Mercure hub in production, I extracted the archive mercure_0.6.2_Linux_x86_64.tar.gz (https://github.com/dunglas/mercure/releases) into a subfolder mercure at the root of my project.
Then I ran the command:
JWT_KEY='myJWTKey' ACME_HOSTS='example.com' ./mercure
with my information.
But the hub doesn't start, failing with this error:
FATA[0000] listen tcp :443: bind: permission denied
I saw a similar question (How to run Mercure in production),
but the proposed answer uses ADDR to change the port, while according to the documentation, "Let's Encrypt only supports the default port: to use Let's Encrypt, do not set this variable."
How do I run Mercure in production?
Here are the steps I took to resolve my problem:
I run Mercure with this command:
JWT_KEY='aVerySecretKey' ADDR='myhub.com:3000' CORS_ALLOWED_ORIGINS='https://mywebsite.com' DEBUG=1 ALLOW_ANONYMOUS=1 ./mercure
So Mercure runs here: http://myhub.com:3000.
I use Apache as a proxy with these parameters:
ProxyPass / http://myhub.com:3000/
ProxyPassReverse / https://myhub.com/
So now I can access the hub over HTTPS at https://myhub.com/hub from my domain https://mywebsite.com.
Thanks to dunglas, the author of Mercure.
I don't know if this is helpful, but after a lot of struggle I got Mercure working on a live server like this. (I'm using port 9090 throughout.) In Apache domain conf:
ProxyPass /hub/ http://localhost:9090/
ProxyPassReverse /hub/ http://localhost:9090/
In Javascript:
new URL('https://www.example.com/hub/.well-known/mercure');
In Symfony:
MERCURE_PUBLISH_URL=https://www.example.com/hub/.well-known/mercure
Being careful not to confuse MERCURE_JWT_TOKEN with MERCURE_JWT_SECRET.
From root, running Mercure server like this for testing:
docker run -e JWT_KEY='!ChangeMe!' -e DEMO=1 -e ALLOW_ANONYMOUS=1 -e CORS_ALLOWED_ORIGINS='*' -e PUBLISH_ALLOWED_ORIGINS='*' -p 9090:80 dunglas/mercure
So now everything is working, without https / 443 problems.
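To verify the hub through the proxy, a subscription can be sketched with curl (the URL is the one from this answer; the topic value is a placeholder):

# -N disables buffering so Server-Sent Events are printed as they arrive
curl -N 'https://www.example.com/hub/.well-known/mercure?topic=https://www.example.com/books/1'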

Enable https in cartodb installation

I am using this image https://github.com/sverhoeven/docker-cartodb to start my own container, and then created a reverse proxy by nginx to enable ssl.
The ssl website pulls up but the problem is carto does not detect the change in protocol, and i am getting mixed content warning due to api requests by cbd.js on HTTP protocol.
for example: http://carto.gq/user/dev/api/v1/viz/?tag_name=&q=&page=1&type=&exclude_shared=false&per_page=20&locked=&tags=&shared=no&only_liked=false&order=updated_at&types=table&exclude_raster=true.
What file I should to change the protocol of api calls, here is my installation https://carto.gq .
Enable SSL
Enable Nginx on the server and redirect all traffic to SSL (Ubuntu):
Install Nginx
sudo apt-get update
sudo apt-get install nginx
Install certbot-nginx
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-nginx
Now follow the instructions on screen as they appear
Edit Nginx File
sudo nano /etc/nginx/sites-available/default
Now find the existing server_name line and replace the underscore (_) with your domain name:
server_name example.com www.example.com;
Now replace the block below:
location / {
…………………………
}
With the block given below (also provided as nginx-change.txt in the docs folder):
client_max_body_size 150M;
location / {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    #try_files $uri $uri/ =404;
    proxy_set_header Host $host;
    proxy_pass http://127.0.0.1:8088;
}
Then, verify the syntax and reload Nginx to load the new configuration.
sudo nginx -t
sudo systemctl reload nginx
Skip the firewall for now; we will add it later.
Obtaining an SSL Certificate
sudo certbot --nginx -d carto.ml -d www.carto.ml
Follow the on-screen instructions until this completes.
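Certbot normally sets up automatic renewal; you can confirm renewal will work with a dry run (a standard certbot command):

sudo certbot renew --dry-run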
Enable SSL in the Carto container, once it is running on internal port 8088:
Start container
docker run -d -p 127.0.0.1:8088:80 -h carto.ml sandeepgadhwal/cartodb
Get Container ID
docker ps -a
example: 99718a6233b9
Connect to container bash
docker exec -it {containerid} /bin/bash
example: docker exec -it ac2ce1a67df3 /bin/bash
Install missing components
apt-get install nano
Edit base_url in the Rails file:
nano cartodb/app/models/user/user_decorator.rb
You'll need to edit line 100:
replace base_url: public_url, with base_url: public_url.sub('http','https')
This will update the user_data.base_url global, which cdb.js uses to build
many of the API calls.
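The same edit as a shell one-liner, in case you want to script it (a sketch; this assumes line 100 still matches the step above, so verify first):

sed -i "s/base_url: public_url,/base_url: public_url.sub('http','https'),/" cartodb/app/models/user/user_decorator.rb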
Edit the API protocols and ports in app_config.yml; you can find the same file in the docker-cartodb folder.
nano /cartodb/config/app_config.yml
You'll need to edit lines 35, 37, 40, 42, 46, 49, 51, 54, 258, 259, 261, 262:
replace the port with 443 (or your server's HTTPS port), and replace http with https.
This will update the api_urls for the SQL and Maps APIs.
Get out of the editor and the shell:
press Ctrl+X to exit nano,
confirm the write to the file by pressing Y and then Enter,
then, to leave the container shell without stopping the container,
press Ctrl+P followed by Ctrl+Q.
Restart Container
docker restart {containerid}
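Finally, a quick check that the site now answers over HTTPS (using the domain from the certbot step above):

curl -sI https://carto.ml | head -n 1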

Serving several containers, each containing a server, from the same host

I have a Debian server with apache2 on it, which I can access by IP address.
What I want is to be able to access the containers on it (each containing an apache2 server) from the outside via a URL like "myIpAddress/container1". Currently I can access those containers only from the Debian server itself.
I thought about using a reverse proxy, but I cannot make it work.
Thank you for your help! :-)
Map the docker container's port to a host port and access the docker container from <host-ip>:port.
docker run -p host-port:container-port image
For example, running the command below makes the container available at 127.0.0.1 (port 80 on the host mapped to port 5000 in the container):
docker run -p 80:5000 training/webapp
Update:
Setting up reverse proxy using NGINX
This example uses a plain NGINX container as site A and plain Apache server as site B.
Run the reverse proxy.
docker run -d \
--name nginx-proxy \
-p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Start the container for site A, specifying the domain name in the VIRTUAL_HOST variable.
docker run -d --name site-a -e VIRTUAL_HOST=a.example.com nginx
Check out your website at http://a.example.com.
With site A still running, start the container for site B.
docker run -d --name site-b -e VIRTUAL_HOST=b.example.com httpd
Check out site B at http://b.example.com.
Note: Make sure you have set up DNS to forward the subdomains to the host running nginx-proxy. If you're using AWS, the easiest way is to use Route53.
For testing locally, map sub-domains to resolve to localhost by adding entries in /etc/hosts file.
127.0.0.1 a.example.com
127.0.0.1 b.example.com
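You can also exercise the routing without touching DNS or /etc/hosts by setting the Host header directly (standard curl usage):

curl -H 'Host: a.example.com' http://127.0.0.1/
curl -H 'Host: b.example.com' http://127.0.0.1/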
References
jwilder NGINX Proxy on GitHub
NGINX reverse proxy using Docker

How to clear the Apache cache in XAMPP?

How can I clear the Apache cache in XAMPP?
I tried the htcacheclean -r command, but it always produces an error.
As far as I know, Apache doesn't cache files/scripts, but a system administrator said this: "Apache is caching the site, so clear the Apache(!) cache."
Take a look at this:
Use mod_cache at http://httpd.apache.org/docs/2.0/mod/mod_cache.html
CacheDisable /local_files
Description: Disable caching of specified URLs
Syntax: CacheDisable url-string
Context: server config, virtual host
Try this if the others are not working:
htcacheclean -p C:\xampp\htdocs\yourproject -rv -L 1000M
This way you state the -p path explicitly instead of expecting XAMPP to find it.
-r = clean thoroughly. This assumes that the Apache web server is not running; this option is mutually exclusive with -d and implies -t.
-v = be verbose and print statistics. This option is mutually exclusive with -d.
-L 1000M = specify LIMIT as the total disk cache inode limit (in megabytes).
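Since -r assumes the server is stopped, the full sequence on a Linux XAMPP install might look like this (the /opt/lampp paths are an assumption; on Windows, stop Apache from the XAMPP control panel instead):

# stop Apache, clean the cache thoroughly, start Apache again
sudo /opt/lampp/lampp stopapache
htcacheclean -p /opt/lampp/htdocs/yourproject -rv -L 1000M
sudo /opt/lampp/lampp startapache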

How to use production webserver configuration locally

I'm using Nginx virtual hosts to serve a domain and I want to test my configuration locally before deploying.
The only way I've found to do that is to run nginx on local port 80 and temporarily add the following line to my /etc/hosts file:
127.0.0.1 example.com
which causes example.com to resolve to my local nginx instance.
Is there a better way to deal with this?
Local Host
When I just need to quickly check a server running on my local host, the following shell script has proven convenient:
spoof() {
  hosts_file='/etc/hosts'
  temp=$(mktemp)
  cp "$hosts_file" "$temp"
  # Restore the original hosts file on exit or interrupt (SIGKILL cannot be trapped)
  trap 'sudo sh -c "mv \"$temp\" \"$hosts_file\""; trap "" EXIT; return 0' 0 1 2 3 15
  hosts_lines="6i# SPOOFS:\n"
  # "$@" holds the domains passed as arguments
  for i in "$@"; do
    hosts_lines+="127.0.0.1\t$i\n"
  done
  sudo sh -c "sed -i \"$hosts_lines\" \"$hosts_file\""
  echo "Press CTRL-C to exit..."
  sleep infinity
}
It takes any number of domains, spoofs them, and restores the original /etc/hosts upon exit. Example usage:
$ spoof example.com example.net example.org
Vagrant
For long-term use, I use Vagrant along with the vagrant-hostsupdater plugin.
After installing, simply adding config.hostsupdater.aliases = ['dev.example.com'] to any Vagrantfile allows access to "example.com" on the VM via "dev.example.com" on the host.