I used this Snort rule to block a website, but it is not blocking it. I already set the mode to inline, but it is still not working. Can anyone help me with this? It would be really helpful. BTW, I installed Snort on Ubuntu in VirtualBox. Although the screenshot below shows that the packet was dropped, I can actually still browse the website. Thanks.
Here is the rule: drop http $HOME_NET any -> 34.102.136.180 $HTTP_PORTS (msg:"Dropping packets"; flow:to_server,established; http_uri; metadata: service http; priority:1; sid:10000001; rev:1;)
Command I used: sudo snort -Q -c /usr/local/etc/snort/snort.lua -R /usr/local/etc/rules/local.rules -i enp0s3 -A alert_fast -s 65535 -k none
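For reference, here is a simpler, protocol-agnostic variant of the rule I could also test with (this is my own guess at a test: the site is likely served over HTTPS on port 443, so the http service / $HTTP_PORTS restriction may be why nothing matches):
drop ip $HOME_NET any -> 34.102.136.180 any (msg:"Drop all traffic to blocked IP"; sid:10000002; rev:1;)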
I have a script that sends POST requests to an Apache load balancer to change the status_D parameter of a specified worker. This is supposed to enable or disable the worker (0 = enable, 1 = disable).
This used to work, but not anymore. The script is in Perl, but I tried sending the same request using curl with the same result: the status does not change.
If I open the load balancer web page in a browser and change it from there, it works.
I even captured the browser's POST request parameters from the Apache log and copied and pasted them into the curl command, but it still did not work, which makes me think the parameters are fine, but perhaps something has changed recently in Apache or proxy_balancer_module? The Apache version is 2.4.52.0.1.
In newer versions you need to include a Referer header in the HTTP request.
curl -s -o /dev/null -XPOST "http://${server}:${port}/${manager}?" \
-H "Referer: http://${server}:${port}/${manager}?b=${balancer}&w=${worker}&nonce=${nonce}" -d b="${balancer}" \
-d w="${worker}" -d nonce="${nonce}" -d w_status_D=1
I have a Django application deployed on Elastic Beanstalk.
I recently migrated the load balancer from Classic to Application in order to support WebSockets (layer formed by: django-channels (~=1.1.8), channels-api==0.4.0, AWS ElastiCache Redis, and Daphne (~=1.4)).
HTTP, HTTPS and the WebSocket protocol are working fine.
But I can't find a way to deploy WebSockets over secure SSL.
It's killing me, and it is a blocker, since an HTTPS connection from the browser will block non-secure ws:// requests.
Here is my ALB configuration:
Does anyone have a solution?
After 2 more days of investigating, I finally cracked this config!
Here is the answer.
The right, and minimum, AWS ALB config:
Indeed, we need to:
Decode SSL (this is not end-to-end encryption).
Forward all traffic to Daphne.
The reason I did not go for the widely shared configuration of routing only "/ws/*" to Daphne is that it did give me a successful handshake, but after that nothing: websocket messages could not be pushed back to the subscriber. I believe the reason is that the push back from Daphne does not respect the custom base URL you set in your conf, although I cannot be sure of this interpretation. What I am sure of, however, is that if I don't forward all traffic to Daphne, it doesn't work after the handshake.
The minimum deployment conf
There is no need for a complete .ebextensions proxy override in the deployment:
.ebextensions/05_channels.config
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/start_supervisor.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo virtualenv -p /usr/bin/python2.7 /tmp/senv
      source /tmp/senv/bin/activate && source /opt/python/current/env
      sudo python --version > /tmp/version_check.txt
      sudo pip install supervisor
      sudo /usr/local/bin/supervisord -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf reread
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf update
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf restart all
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf status
start_daphne.sh (note that I'm choosing port 8001, according to my ALB conf)
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 8001 fxf.asgi:channel_layer
start_worker.sh
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
python /opt/python/current/app/fxf/manage.py runworker
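A quick sanity check once both scripts are running (my own suggestion, not part of the original setup, assuming the 8001 port above and that a worker is up, since with Channels 1.x Daphne needs a worker to answer):
curl -i http://127.0.0.1:8001/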
supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/fxf/custom_eb_deployment/start_daphne.sh --log-file /tmp/start_daphne.log
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log
stderr_logfile=/tmp/daphne.err.log
[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/fxf/custom_eb_deployment/start_worker.sh --log-file /tmp/start_worker.log
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=2
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
stderr_logfile=/tmp/workers.err.log
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
If some of you are still struggling with this conf, I might post a tutorial on Medium or somewhere.
Don't hesitate to push me for it in the answers ;)
I also struggled a lot with SSL, EBS and Channels 1.x, with exactly the same scenario you described, but I finally managed to deploy my app. SSL was always the problem, as Django was ignoring the routes in my routing.py file for all SSL requests, and everything had been working just fine before that.
I decided to send all the websocket requests to a single root path on the server, say /ws/*. Then I added a specific rule to the load balancer, which receives all these requests on port 443 and redirects them to port 5000 (which the Daphne worker is listening on) as an HTTP request (not HTTPS!). This is under the assumption that the VPC behind the load balancer is secure enough. Beware that this configuration could involve security issues for other projects.
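For reference, a hedged sketch of how such a rule might be created with the AWS CLI (all names, ARNs and the priority below are placeholders; configuring the same thing in the console works just as well):
aws elbv2 create-target-group --name daphne-ws --protocol HTTP --port 5000 --vpc-id vpc-xxxxxxxx
aws elbv2 create-rule --listener-arn <https-listener-arn> --priority 10 \
  --conditions Field=path-pattern,Values='/ws/*' \
  --actions Type=forward,TargetGroupArn=<daphne-ws-target-group-arn>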
Now my load balancer configuration looks like this
...an HTTPS connection from the browser will block non-secure ws:// requests.
One more thing: over HTTPS you should open websocket connections with wss://. You could write something like this in your .js file:
var wsScheme = window.location.protocol.includes('https') ? 'wss' : 'ws';
var wsPath = wsScheme + '://' + window.location.host + '/your/ws/path';
var ws = new ReconnectingWebSocket(wsPath);
Good luck!
You should use wss:// instead of ws://, and change the proxy settings. I just added this to my wsgi.conf:
LoadModule proxy_wstunnel_module /usr/lib/apache2/modules/mod_proxy_wstunnel.so

<VirtualHost *:80>
    WSGIPassAuthorization On
    WSGIScriptAlias / /opt/python/current/app/config/wsgi.py

    RewriteEngine On
    RewriteCond %{HTTP:X-Forwarded-Proto} =http
    RewriteRule .* https://%{HTTP:Host}%{REQUEST_URI} [L,R=permanent]

    ProxyPreserveHost On
    ProxyRequests Off
    ProxyPass "/ws/chat" "ws://**your site**/ws/chat" Keepalive=On
    ProxyPassReverse "/ws/chat" "ws://**your site**/ws/chat" Keepalive=On

    <Directory /opt/python/current/app/>
        Require all granted
    </Directory>
</VirtualHost>
Then it will give you a 200 status to connect. "/ws/chat/" should be replaced by your websocket URL.
Before you create this file, you should check that your Daphne server is running.
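For example, assuming Daphne is managed by supervisord as in the daemon.config below (paths taken from that script; adjust to your setup), a quick check could be:
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf status daphne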
The problems I went through were with djangoenv and the worker in daemon.config.
First, djangoenv should be on one line, i.e. no line breaks.
Second, if you use Django Channels v2, it doesn't need a worker, so remove it.
This is my daemon.config (I use port 8001):
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_daemon.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      djangoenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      djangoenv=${djangoenv%?}

      # Create daemon configuration script
      daemonconf="[program:daphne]
      ; Set full path to channels program if using virtualenv
      command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 8001 config.asgi:application
      directory=/opt/python/current/app
      user=ec2-user
      numprocs=1
      stdout_logfile=/var/log/stdout_daphne.log
      stderr_logfile=/var/log/stderr_daphne.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$djangoenv"

      # Create the supervisord conf script
      echo "$daemonconf" | sudo tee /opt/python/etc/daemon.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
        echo "[include]" | sudo tee -a /opt/python/etc/supervisord.conf
        echo "files: daemon.conf" | sudo tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart processes through supervisord
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart daphne
And double-check your security group rules from the ALB to EC2. Good luck!
How can I clear the Apache cache in XAMPP?
I tried the 'htcacheclean -r' command, but it always generates an error.
As far as I know, Apache can't cache files/scripts, but a system administrator said this: 'Apache is caching the site, so clear the Apache(!) cache.'
Take a look at this:
Use mod_cache at http://httpd.apache.org/docs/2.0/mod/mod_cache.html
CacheDisable /local_files
Description: Disable caching of specified URLs
Syntax: CacheDisable url-string
Context: server config, virtual host
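As a hedged sketch (the module paths and the <IfModule> guard are my assumptions for a typical XAMPP/Apache 2.4 setup, not taken from the linked docs), disabling caching for a path in httpd.conf could look like:
LoadModule cache_module modules/mod_cache.so
LoadModule cache_disk_module modules/mod_cache_disk.so
<IfModule mod_cache.c>
    # Do not cache anything under /local_files
    CacheDisable /local_files
</IfModule>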
Try this if the others are not working:
htcacheclean -p C:\xampp\htdocs\yourproject -rv -L 1000M
This way you specify the -p path explicitly rather than expecting XAMPP to find it.
The -r = Clean thoroughly. This assumes that the Apache web server is not running. This option is mutually exclusive with the -d option and implies -t.
The -v = Be verbose and print statistics. This option is mutually exclusive with the -d option.
The -L 1000M = Specify LIMIT as the total disk cache inode limit (in megabytes).
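For ongoing cleanup while Apache is running, htcacheclean can also run in daemon mode; a hedged sketch with placeholder values (the interval in minutes, path and size limit are my own choices, the path should point at the configured CacheRoot, and -d cannot be combined with -r or -v):
htcacheclean -d 30 -p C:\xampp\cacheroot -l 512M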
I am trying to use Apache to access an XML from a Tomcat URL like so:
http://localhost:8081/solr-example/select/?q=blah&version=2.2&start=0&rows=10&indent=on
However, I am getting a permission denied error. I have tried chown, chmod and chcon on both the Tomcat and Solr directories and it still gives me the error.
I am on CentOS/Linux. Any help with this is much appreciated.
Cheers :)
Ke
Possible solutions:
Check if the XML is under the WEB-INF directory.
Change the owner of the document to 'apache'.
PS: If you could post some of the log information and the detailed error (denied by what? the server, the OS, is it a 403 Forbidden, etc.), it would help.
This is due to SELinux being in enforcing mode.
By default, only port 80 is allowed to serve HTTP. You can add non-standard ports using the command:
semanage port -a -t http_port_t -p tcp 8081
I had the same issue with SOLR, which I solved using the above command.
It is explained here:
http://digitalpbk.blogspot.com/2011/10/solve-failed-to-query-solr-using-errno.html
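To double-check that the port was added (a hedged follow-up of mine, not from the linked post), you can list the ports currently labeled http_port_t:
sudo semanage port -l | grep http_port_t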