Varnish configured, socket(): Address family not supported by protocol - apache

I am working on CentOS 6.8 with Apache 2.4, and I had no trouble installing Varnish and configuring everything correctly for my Magento 2 site.
I have installed an SSL certificate, so I added -p feature=+esi_ignore_https to my sysconfig file.
Everything looks good at first:
> service varnish restart
> Stopping Varnish Cache: [ OK ]
> Starting Varnish Cache: [ OK ]
Then when I start the Varnish CLI with
> varnishd -d -f /etc/varnish/default.vcl
>
> Type 'help' for command list.
> Type 'quit' to close CLI session.
> Type 'start' to launch worker process.
>
> **socket(): Address family not supported by protocol**
and when I enter start I get an endless loop of this:
Child cleanup complete
socket(): Address family not supported by protocol
child (30530) Started
Child (30530) said Child starts
Child (30530) died signal=6
Child (30530) Panic message:
Assert error in vca_acct(), cache/cache_acceptor.c line 386:
Condition((listen(ls->sock, cache_param->listen_depth)) == 0) not true.
errno = 98 (Address already in use)
thread = (cache-acceptor)
version = varnish-4.0.4 revision 386f712
ident = Linux,2.6.32-042stab113.11,x86_64,-smalloc,-smalloc,-hcritbit,epoll
Backtrace:
0x432425: varnishd() [0x432425]
0x40d71d: varnishd() [0x40d71d]
0x7f105722aaa1: /lib64/libpthread.so.0(+0x7aa1) [0x7f105722aaa1]
0x7f1056f77bcd: /lib64/libc.so.6(clone+0x6d) [0x7f1056f77bcd]
Also, when I try to open varnishlog:
Can't open VSM file (Abandoned VSM file (Varnish not running?)
/var/lib/varnish/patriciasouths.com/_.vsm )

service varnish start
Starting Varnish Cache: Error: Cannot open socket: :6081: Address family not supported by protocol
So this is really dumb... the default install is this:
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
-f ${VARNISH_VCL_CONF} \
-T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
-p thread_pool_min=${VARNISH_MIN_THREADS} \
-p thread_pool_max=${VARNISH_MAX_THREADS} \
-S ${VARNISH_SECRET_FILE} \
-s ${VARNISH_STORAGE}"
Here we have a variable, ${VARNISH_LISTEN_ADDRESS}, being used... but it's NOT DEFINED PREVIOUSLY!
# Telnet admin interface listen address and port
VARNISH_ADMIN_LISTEN_PORT=6082
DOH! So add it.
# Telnet admin interface listen address and port
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
Starting Varnish Cache: [ OK ]
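For reference, here is a minimal sketch of the listen-address block in /etc/sysconfig/varnish; the addresses are example values, not the only valid ones. An empty VARNISH_LISTEN_ADDRESS makes -a expand to just ":6081", which can fail on hosts without working IPv6 (the "Address family not supported by protocol" error above).

# Client traffic listen address and port (example values; adjust to your setup)
VARNISH_LISTEN_ADDRESS=127.0.0.1
VARNISH_LISTEN_PORT=6081
# Telnet admin interface listen address and port
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082

Run service varnish restart after editing the file.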

Related

Configuring Container Registry in gitlab over http

I'm trying to configure the Container Registry in GitLab installed on my Ubuntu machine.
I have Docker configured over HTTP and it works; I have added it as an insecure registry.
GitLab is installed on the host http://5.121.32.5
external_url 'http://5.121.32.5'
In the gitlab.rb file, I have enabled the following settings:
registry_external_url 'http://5.121.32.5'
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "5.121.32.5"
gitlab_rails['registry_port'] = "5005"
gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
To make the daemon listen on the port, I created a file:
sudo mkdir -p /etc/systemd/system/docker.service.d/
Here are its contents
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
But when this code runs from the gitlab-ci.yaml file
docker push ${MY_REGISTRY_PROJECT}:latest
then I get an error:
Error response from daemon: Get "https://5.121.32.5:5005/v2/": dial tcp 5.121.32.5:5005: connect: connection refused
What is the problem? What did I miss?
And why is https specified here if I have http configured?
When you use docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}, the docker command defaults to HTTPS, causing the problem.
You need to tell your GitLab Runner to use an insecure registry:
On the server on which the GitLab Runner is running, add the following option to your docker launch arguments (for me I added it to the DOCKER_OPTS in /etc/default/docker and restarted the docker engine): --insecure-registry 172.30.100.15:5050, replacing the IP with your own insecure registry.
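As a rough sketch, assuming a Debian/Ubuntu-style install where the engine reads /etc/default/docker, and reusing the registry address from the question:

# /etc/default/docker on the runner host (assumed path; adjust for your distro/init system)
DOCKER_OPTS="--insecure-registry 5.121.32.5:5005"

# restart the engine so the runner's docker commands pick up the flag
sudo service docker restart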
Source
Also, you may want to read more about it in this interesting discussion

Websocket + SSL on AWS Application Load Balancer

I have a Django application deployed on Elastic Beanstalk.
I recently migrated the load balancer from Classic -> Application in order to support WebSockets (the stack is formed by django-channels (~=1.1.8, channels-api==0.4.0), AWS ElastiCache Redis, and Daphne (~=1.4)).
HTTP, HTTPS and the WebSocket protocol are working fine.
But I can't find a way to deploy WebSockets over SSL.
It's killing me, and it is a blocker, as an HTTPS connection from the browser will cut off non-secure ws:// requests.
Here is my ALB Configuration
Does anyone have a solution?
After 2 more days of investigating, I finally cracked this config!
Here is the answer:
The right, and minimum, AWS ALB config:
Indeed, we need to:
* Decode SSL (this is not end-to-end encryption)
* Forward all traffic to Daphne
The reason I did not go for the widespread "/ws/*"-routing-to-Daphne configuration is that it did give me a successful handshake, but afterward nothing, nada: the websocket could not push anything back to the subscriber. The reason, I believe, is that the push back from Daphne does not respect the custom base URL you set in your conf, though I cannot be sure of this interpretation. What I am sure of, however, is that if I don't forward all traffic to Daphne, it doesn't work after the handshake.
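For what it's worth, a hypothetical AWS CLI sketch of that listener (all ARNs are placeholders, and the target group is assumed to point at the instances over HTTP on the Daphne port used below):

# HTTPS :443 listener that terminates SSL and forwards ALL traffic to the Daphne target group
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<daphne-target-group-arn>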
The minimum deployment conf
There is NO NEED for a complete .ebextensions proxy override in the deployment:
.ebextensions/05_channels.config
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/start_supervisor.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo virtualenv -p /usr/bin/python2.7 /tmp/senv
      source /tmp/senv/bin/activate && source /opt/python/current/env
      sudo python --version > /tmp/version_check.txt
      sudo pip install supervisor
      sudo /usr/local/bin/supervisord -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf reread
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf update
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf restart all
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf status
start_daphne.sh (note that I'm choosing port 8001, according to my ALB conf)
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 8001 fxf.asgi:channel_layer
start_worker.sh
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
python /opt/python/current/app/fxf/manage.py runworker
supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/fxf/custom_eb_deployment/start_daphne.sh --log-file /tmp/start_daphne.log
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log
stderr_logfile=/tmp/daphne.err.log
[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/fxf/custom_eb_deployment/start_worker.sh --log-file /tmp/start_worker.log
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=2
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
stderr_logfile=/tmp/workers.err.log
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
If some of you are still struggling with this conf, I might post a tutorial on Medium or something.
Don't hesitate to push me for it in the answers ;)
I have also been struggling a lot with SSL, Elastic Beanstalk and Channels 1.x, with exactly the same scenario you described, but I finally managed to deploy my app. SSL was always the problem, as Django was ignoring the routes in my routing.py file for all SSL requests, while everything had been working just fine before that.
I decided to send all the websocket requests to a single root path on the server, say /ws/*, and then added a specific rule to the load balancer that receives all these requests on port 443 and forwards them to port 5000 (which the Daphne worker is listening on) as HTTP (not HTTPS!). This is under the assumption that the VPC behind the load balancer is secure enough. Beware that this configuration could involve security issues for other projects.
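A hedged AWS CLI sketch of that rule (listener and target group ARNs are placeholders; the target group is assumed to use protocol HTTP on port 5000):

# Forward /ws/* requests arriving on the HTTPS :443 listener to the Daphne target group
aws elbv2 create-rule \
  --listener-arn <https-443-listener-arn> \
  --priority 10 \
  --conditions Field=path-pattern,Values='/ws/*' \
  --actions Type=forward,TargetGroupArn=<daphne-target-group-arn>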
Now my load balancer configuration looks like this
> ...as an HTTPS connection from the browser will cut off non-secure ws:// requests.
One more thing: you should start websocket connections over HTTPS with wss://. You could write something like this in your .js file:
var wsScheme = window.location.protocol.includes('https') ? 'wss' : 'ws';
var wsPath = wsScheme + '://' + window.location.host + '/your/ws/path';
var ws = new ReconnectingWebSocket(wsPath);
Good luck!
You should use wss:// instead of ws://, and change the proxy settings. I just added this to my wsgi.conf:
<VirtualHost *:80>
WSGIPassAuthorization On
WSGIScriptAlias / /opt/python/current/app/config/wsgi.py
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} =http
RewriteRule .* https://%{HTTP:Host}%{REQUEST_URI} [L,R=permanent]
LoadModule proxy_wstunnel_module /usr/lib/apache2/modules/mod_proxy_wstunnel.so
ProxyPreserveHost On
ProxyRequests Off
ProxyPass "/ws/chat" "ws://**your site**/ws/chat" Keepalive=On
ProxyPassReverse "/ws/chat" "ws://**your site**/ws/chat" Keepalive=On
<Directory /opt/python/current/app/>
Require all granted
</Directory>
</VirtualHost>
Then it will give you a 200 status on connect. "/ws/chat" should be replaced by your websocket URL.
Before you make this file, you should check that your Daphne server is running.
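For example, assuming Daphne runs under supervisord on port 8001 as in the config below, a quick sanity check could be:

# is the daphne program RUNNING under supervisord? (paths taken from the script below)
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf status daphne
# is anything actually listening on the Daphne port?
sudo netstat -tlnp | grep 8001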
The problems I ran into were with djangoenv and the worker in daemon.config.
First, djangoenv should be on one line, meaning no line breaks.
Second, if you use Django Channels v2, it doesn't need a worker, so erase it.
This is my daemon.config (I use port 8001):
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_daemon.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Get django environment variables
      djangoenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      djangoenv=${djangoenv%?}
      # Create daemon configuration script
      daemonconf="[program:daphne]
      ; Set full path to channels program if using virtualenv
      command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 8001 config.asgi:application
      directory=/opt/python/current/app
      user=ec2-user
      numprocs=1
      stdout_logfile=/var/log/stdout_daphne.log
      stderr_logfile=/var/log/stderr_daphne.log
      autostart=true
      autorestart=true
      startsecs=10
      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600
      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true
      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998
      environment=$djangoenv"
      # Create the supervisord conf script
      echo "$daemonconf" | sudo tee /opt/python/etc/daemon.conf
      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
        echo "[include]" | sudo tee -a /opt/python/etc/supervisord.conf
        echo "files: daemon.conf" | sudo tee -a /opt/python/etc/supervisord.conf
      fi
      # Reread the supervisord config
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
      # Update supervisord in cache without restarting all services
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update
      # Start/Restart processes through supervisord
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart daphne
And double-check your security group from the ALB to EC2. Good luck!

docker: Says connection refused when attempting to connect to a published port

I'm a newbie at Docker. I'm creating a Hello, World example. All I'm trying to do is bring up Apache in a container and then view the default website from the host machine.
Dockerfile
FROM centos:latest
RUN yum install epel-release -y
RUN yum install wget -y
RUN yum install httpd -y
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
And then I build it:
> docker build .
And then I tag it:
docker tag 17283f566320 my:apache
And then I run it:
> docker run -p 80:9191 my:apache
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
It then runs....
In another terminal window, I attempt to issue the curl command to view the default web site.
> curl -XGET http://0.0.0.0:9191
curl: (7) Failed to connect to 0.0.0.0 port 9191: Connection refused
> curl -XGET http://localhost:9191
curl: (7) Failed to connect to localhost port 9191: Connection refused
> curl -XGET http://127.0.0.1:9191
curl: (7) Failed to connect to 127.0.0.1 port 9191: Connection refused
Just to make sure that I got the port correct, I run this:
> docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5aed4063b1f6 my:apachep "/usr/sbin/httpd -D F" 43 seconds ago Up 42 seconds 80/tcp, 0.0.0.0:80->9191/tcp angry_hodgkin
Thanks to all. My ports were reversed:
> docker run -p 9191:80 my:apache
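With the host port first and the container port second, a quick check from the host might look like this (port 80 on the container side comes from EXPOSE 80 in the Dockerfile above; -d just runs the container detached):

# -p <host_port>:<container_port>
docker run -d -p 9191:80 my:apache
curl -XGET http://localhost:9191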
Even though you created the containers on your local machine, they are actually running on a different machine (a virtual machine).
First, check what the IP of your Docker machine (the virtual machine) is:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100
Then run the curl command to view the default web site served by the Apache web server inside the container:
curl http://192.168.99.100:9191
If you are running Docker natively on an Ubuntu machine, you should be able to access your container via localhost.
If you are using Mac or Windows, your Docker container does not run on localhost but on its own IP. You can get the container IP with docker inspect <container id> | grep IPAddress, or if you are using docker-machine, with docker-machine ip <docker_machine_name>.
Related info:
http://networkstatic.net/10-examples-of-how-to-get-docker-container-ip-address/
https://docs.docker.com/machine/reference/ip/
How to get a Docker container's IP address from the host?
So your curl call should be something like this: curl <container_ip>:<container_exposed_port>
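For instance, a hypothetical one-liner to pull the container IP with docker inspect and curl the exposed port:

# grab the container's bridge IP and hit Apache on its exposed port 80
CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>)
curl http://"$CONTAINER_IP":80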
Also, you can tag your image in the build command with the -t parameter like this:
docker build -t my:image .
Another tip: you can optimize your Dockerfile by combining the yum install commands like this:
RUN yum install -y \
epel-release \
wget \
httpd
http://blog.tutum.co/2014/10/22/how-to-optimize-your-dockerfile/

How to use production webserver configuration locally

I'm using Nginx virtual hosts to serve a domain and I want to test my configuration locally before deploying.
The only way I've found to do that is to run nginx on local port 80 and temporarily add the following line to my /etc/hosts file:
127.0.0.1 example.com
which causes example.com to resolve to my local nginx instance.
Is there a better way to deal with this?
Local Host
When I just need to quickly check a server running on my local host, the following shell script has proven convenient:
spoof() {
    hosts_file='/etc/hosts'
    temp=$(mktemp)
    cp "$hosts_file" "$temp"
    # Restore the original hosts file when the function exits or is interrupted
    trap 'sudo sh -c "mv \"$temp\" \"$hosts_file\""; trap "" EXIT; return 0' 0 1 2 3 9 15
    # Build a sed script that inserts a 127.0.0.1 entry for each domain at line 6
    hosts_lines="6i# SPOOFS:\n"
    for i in "$@"; do
        hosts_lines+="127.0.0.1\t$i\n"
    done
    sudo sh -c "sed -i \"$hosts_lines\" \"$hosts_file\""
    echo "Press CTRL-C to exit..."
    sleep infinity
}
It takes any number of domains, spoofs them, and replaces the original /etc/hosts upon exit. Example usage:
$ spoof example.com example.net example.org
Vagrant
For long-term use, I use Vagrant along with the vagrant-hostsupdater plugin.
After installing, simply adding config.hostsupdater.aliases = ['dev.example.com'] to any Vagrantfile allows access to "example.com" on the VM via "dev.example.com" on the host.
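A rough sketch of the workflow, assuming Vagrant itself is already installed (dev.example.com is just an example alias):

# install the hosts-updater plugin once
vagrant plugin install vagrant-hostsupdater
# after adding config.hostsupdater.aliases = ['dev.example.com'] to the Vagrantfile,
# bring the VM up; the plugin adds/removes the /etc/hosts entries on the host for you
vagrant up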

KVM/Bridge: No Route To Host

I've set up a VM on Fedora 17 with KVM and have configured a bridged network for it. Both the host and the VM use manual IP configuration, with the host's IP as 192.168.0.2 and the VM's as 192.168.0.10.
From the VM I can connect to the host without any problems, but from the host I can't SSH to the VM, even though I can still ping the VM from the host. Trying to ssh just gives me "no route to host".
Oh, and I have iptables disabled, so I don't think the firewall is the problem.
Also ensure that the kernel is configured for IP forwarding:
$ sudo sysctl -a | grep net.ipv4.ip_forward
net.ipv4.ip_forward = 1
It should have a value of 1, not 0. If needed, enable with these commands:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf
There are two ways:
* Use an SSH tunnel to create a channel to the host from the guest. From the guest, run the following command:
ssh -L 2000:localhost_ip:2000 username@hostip
See the ssh man page for details.
* The harder-to-set-up but proper way is to configure the network correctly when running the guest; follow
http://www.cse.iitd.ernet.in/~prathmesh/random.html#Connecting_qemu_guest_to_real_network