I'm using Nginx virtual hosts to serve a domain and I want to test my configuration locally before deploying.
The only way I've found to do that is to run nginx on local port 80 and temporarily add the following line to my /etc/hosts file:
127.0.0.1 example.com
which causes example.com to resolve to my local nginx instance.
Is there a better way to deal with this?
Local Host
When I just need to quickly check a server running on my local host, the following shell script has proven convenient:
spoof() {
    hosts_file='/etc/hosts'
    temp=$(mktemp)
    cp "$hosts_file" "$temp"
    # Restore the original hosts file on exit or interrupt
    trap 'sudo sh -c "mv \"$temp\" \"$hosts_file\""; trap "" EXIT; return 0' 0 1 2 3 9 15
    # Build a sed insert command that adds one 127.0.0.1 entry per domain at line 6
    hosts_lines="6i# SPOOFS:\n"
    for i in "$@"; do
        hosts_lines+="127.0.0.1\t$i\n"
    done
    sudo sh -c "sed -i \"$hosts_lines\" \"$hosts_file\""
    echo "Press CTRL-C to exit..."
    sleep infinity
}
It takes any number of domains, spoofs them, and replaces the original /etc/hosts upon exit. Example usage:
$ spoof example.com example.net example.org
Vagrant
For long-term use, I use Vagrant along with the vagrant-hostsupdater plugin.
After installing, simply adding config.hostsupdater.aliases = ['dev.example.com'] to any Vagrantfile allows access to "example.com" on the VM via "dev.example.com" on the host.
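For completeness, getting the plugin in place only takes the standard Vagrant commands (the alias itself lives in the Vagrantfile as described above):

# Install the hosts-updater plugin once per machine
vagrant plugin install vagrant-hostsupdater
# Reload so the alias is written into /etc/hosts
vagrant reload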
Related
I have a Django application deployed on Elastic Beanstalk.
I recently migrated the load balancer from Classic to Application in order to support WebSockets (the layer is formed by django-channels (~=1.1.8, channels-api==0.4.0), AWS ElastiCache Redis, and Daphne (~=1.4)).
HTTP, HTTPS and the WebSocket protocol are working fine.
But I can't find a way to deploy WebSockets over SSL (wss://).
It's killing me, and it is a blocker, since an HTTPS connection from the browser will reject non-secure ws:// requests.
Here is my ALB Configuration
Does anyone have a solution?
After two more days of investigating, I finally cracked this config!
Here is the answer:
The correct, and minimum, AWS ALB config:
Indeed, we need to:
Terminate SSL on the ALB (so this is not end-to-end encryption).
Forward all traffic to Daphne.
The reason why I did not go for the widely recommended config of routing only "/ws/*" to Daphne is that it did give me a successful handshake, but afterwards nothing, nada: no WebSocket messages could be pushed back to the subscriber. The reason, I believe, is that the push back from Daphne does not respect the custom base path you set in your config, though I cannot be sure of this interpretation. What I am sure of, however, is that if I don't forward all traffic to Daphne, it doesn't work after the handshake.
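For reference, those two steps can be sketched with the AWS CLI as well; every name, ARN and ID below is a placeholder, not taken from my environment, so treat this as an illustration rather than the exact config:

# Target group pointing at the instances' Daphne port (8001)
aws elbv2 create-target-group \
    --name daphne-tg --protocol HTTP --port 8001 \
    --vpc-id vpc-0123456789abcdef0

# HTTPS listener that terminates SSL on the ALB and forwards everything to that group
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/... \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=arn:aws:acm:...:certificate/... \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/daphne-tg/...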
The minimum deployment config
There is no need for a complete .ebextensions proxy override in the deployment:
.ebextensions/05_channels.config
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/start_supervisor.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo virtualenv -p /usr/bin/python2.7 /tmp/senv
      source /tmp/senv/bin/activate && source /opt/python/current/env
      sudo python --version > /tmp/version_check.txt
      sudo pip install supervisor
      sudo /usr/local/bin/supervisord -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf reread
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf update
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf restart all
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf status
start_daphne.sh (note that I'm choosing port 8001, in line with my ALB config)
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 8001 fxf.asgi:channel_layer
start_worker.sh
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
python /opt/python/current/app/fxf/manage.py runworker
supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/fxf/custom_eb_deployment/start_daphne.sh --log-file /tmp/start_daphne.log
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log
stderr_logfile=/tmp/daphne.err.log
[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/fxf/custom_eb_deployment/start_worker.sh --log-file /tmp/start_worker.log
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=2
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
stderr_logfile=/tmp/workers.err.log
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
If some of you are still struggling with this config, I might post a tutorial on Medium or something.
Don't hesitate to ask for it in the comments ;)
I have also been struggling a lot with SSL, EBS and Channels 1.x, in exactly the same scenario you described, but I finally managed to deploy my app. SSL was always the problem: Django was ignoring the routes in my routing.py file for all SSL requests, while everything had been working just fine before that.
I decided to send all the WebSocket requests to a single root path on the server, say /ws/*. Then I added a specific rule to the load balancer that receives all these requests on port 443 and redirects them to port 5000 (which the Daphne worker is listening on) as HTTP (not HTTPS!). This works under the assumption that the VPC behind the load balancer is secure enough. Beware that this configuration could involve security issues for other projects.
Now my load balancer configuration looks like this
...since an HTTPS connection from the browser will reject non-secure ws:// requests.
One more thing: you should open WebSocket connections over wss:// when the page is served over HTTPS. You could write something like this in your .js file:
var wsScheme = window.location.protocol.includes('https') ? 'wss' : 'ws';
var wsPath = wsScheme + '://' + window.location.host + '/your/ws/path';
var ws = new ReconnectingWebSocket(wsPath);
Good luck!
You should use wss:// instead of ws://.
You should also change the proxy settings. I just edited my wsgi.conf:
<VirtualHost *:80>
  WSGIPassAuthorization On
  WSGIScriptAlias / /opt/python/current/app/config/wsgi.py

  RewriteEngine On
  RewriteCond %{HTTP:X-Forwarded-Proto} =http
  RewriteRule .* https://%{HTTP:Host}%{REQUEST_URI} [L,R=permanent]

  LoadModule proxy_wstunnel_module /usr/lib/apache2/modules/mod_proxy_wstunnel.so
  ProxyPreserveHost On
  ProxyRequests Off
  ProxyPass "/ws/chat" "ws://**your site**/ws/chat" Keepalive=On
  ProxyPassReverse "/ws/chat" "ws://**your site**/ws/chat" Keepalive=On

  <Directory /opt/python/current/app/>
    Require all granted
  </Directory>
</VirtualHost>
Then it will give you a 200 status on connect. "/ws/chat/" should be replaced with your WebSocket URL.
Before you create this file, you should check that your Daphne server is running.
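One quick way to sanity-check that, assuming Daphne listens on 8001 and /ws/chat is your path as in the config above, is a manual handshake with curl (the key below is just the RFC 6455 sample value):

curl -i --max-time 5 http://127.0.0.1:8001/ws/chat \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ=="

If the route is wired up, Daphne should answer with HTTP/1.1 101 Switching Protocols before curl times out.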
The problems I ran into were with djangoenv and the worker in daemon.config.
First, djangoenv should be on one line, i.e. no line breaks.
Second, if you use Django Channels v2, it doesn't need a worker, so remove it.
This is my daemon.config (I use port 8001):
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_daemon.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Get django environment variables
      djangoenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      djangoenv=${djangoenv%?}
      # Create daemon configuration script
      daemonconf="[program:daphne]
      ; Set full path to channels program if using virtualenv
      command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 8001 config.asgi:application
      directory=/opt/python/current/app
      user=ec2-user
      numprocs=1
      stdout_logfile=/var/log/stdout_daphne.log
      stderr_logfile=/var/log/stderr_daphne.log
      autostart=true
      autorestart=true
      startsecs=10
      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600
      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true
      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998
      environment=$djangoenv"
      # Create the supervisord conf script
      echo "$daemonconf" | sudo tee /opt/python/etc/daemon.conf
      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
        echo "[include]" | sudo tee -a /opt/python/etc/supervisord.conf
        echo "files: daemon.conf" | sudo tee -a /opt/python/etc/supervisord.conf
      fi
      # Reread the supervisord config
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
      # Update supervisord in cache without restarting all services
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update
      # Start/Restart processes through supervisord
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart daphne
And double-check the security group rules from the ALB to EC2. Good luck!
I have a Debian server with apache2 on it, which I can access by IP address.
What I want is to be able to access the containers on it (each of which runs an apache2 server) from the outside via a URL like "myIpAddress/container1". Currently I can only access those containers from the Debian server itself.
I thought about using a reverse proxy, but I cannot make it work.
Thank you for your help! :-)
Map the docker container's port to a host port and access the docker container from <host-ip>:port.
docker run -p host-port:container-port image
For example, running a container with the command below will make it available at 127.0.0.1 on port 80:
docker run -p 80:5000 training/webapp
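To verify the mapping from the host itself, a plain request to the published port should return the demo app's response (this assumes nothing else is already bound to port 80):

# The app inside the container listens on 5000; the host sees it on 80
curl http://127.0.0.1:80/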
Update:
Setting up reverse proxy using NGINX
This example uses a plain NGINX container as site A and plain Apache server as site B.
Run the reverse proxy.
docker run -d \
--name nginx-proxy \
-p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Start the container for site A, specifying the domain name in the VIRTUAL_HOST variable.
docker run -d --name site-a -e VIRTUAL_HOST=a.example.com nginx
Check out your website at http://a.example.com.
With site A still running, start the container for site B.
docker run -d --name site-b -e VIRTUAL_HOST=b.example.com httpd
Check out site B at http://b.example.com.
Note: Make sure you have set up DNS to forward the subdomains to the host running nginx-proxy. If you're using AWS, the easiest way is to use Route53.
For testing locally, map sub-domains to resolve to localhost by adding entries in /etc/hosts file.
127.0.0.1 a.example.com
127.0.0.1 b.example.com
References
jwilder NGINX Proxy GitHub
NGINX reverse proxy using Docker
I'm currently running a bunch of:
sudo ssh -L PORT:IP:PORT root@IP
where IP is the target of a secured machine, and PORT represents the ports I'm forwarding.
This is because I use a lot of applications which I cannot access without this forwarding. After performing this, I can access them through localhost:PORT.
The main problem occurred now that I actually have 4 of these ports that I have to forward.
My solution is to open 4 shells and constantly search my history backwards to find exactly which ports need to be forwarded, and then run one of these commands in each shell (having to fill in passwords each time).
If only I could do something like:
sudo ssh -L PORT1+PORT2+PORT3:IP:PORT1+PORT2+PORT3 root@IP
then that would already really help.
Is there a way to make it easier to do this?
The -L option can be specified multiple times within the same command, each time with different ports, i.e. ssh -L localPort0:ip:remotePort0 -L localPort1:ip:remotePort1 ...
Exactly as NaN answered: you specify multiple -L arguments. I do this all the time. Here is an example of multi-port forwarding:
ssh remote-host -L 8822:REMOTE_IP_1:22 -L 9922:REMOTE_IP_2:22
Note: this is the same as -L localhost:8822:REMOTE_IP_1:22 if you don't specify localhost.
With this, you can now (from another terminal) do:
ssh localhost -p 8822
to connect to REMOTE_IP_1 on port 22
and similarly
ssh localhost -p 9922
to connect to REMOTE_IP_2 on port 22
Of course, there is nothing stopping you from wrapping this in a script or automating it if you have many different hosts/ports to forward, including to certain specific ones.
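For illustration only, such a wrapper might look like this (the host name and port specs are placeholders you would substitute):

#!/usr/bin/env bash
# forward.sh - open several -L forwards to one host in a single ssh session
# Usage: ./forward.sh remote-host 8822:10.0.0.1:22 9922:10.0.0.2:22
host="$1"; shift
args=()
for spec in "$@"; do
    args+=(-L "$spec")   # each spec is localPort:targetIP:targetPort
done
exec ssh -N "${args[@]}" "$host"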
People who are forwarding multiple ports through the same host can set up something like this in their ~/.ssh/config:
Host all-port-forwards
Hostname 10.122.0.3
User username
LocalForward PORT_1 IP:PORT_1
LocalForward PORT_2 IP:PORT_2
LocalForward PORT_3 IP:PORT_3
LocalForward PORT_4 IP:PORT_4
and it becomes a simple ssh all-port-forwards away.
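If you only want the tunnels without an interactive shell, the same Host entry can also be started in the background with standard ssh flags:

# -N: no remote command, -f: go to background after authentication
ssh -f -N all-port-forwards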
You can use the following bash function (just add it to your ~/.bashrc):
function pfwd {
  # Forward each given port to the same port on the remote host ($1)
  for i in "${@:2}"
  do
    echo "Forwarding port $i"
    ssh -N -L "$i:localhost:$i" "$1" &
  done
}
Usage example:
pfwd hostname {6000..6009}
jbchichoko and yuval have given viable solutions. But jbchichoko's answer isn't as flexible as a function, and the tunnels opened by yuval's answer cannot be shut down with Ctrl+C because they run in the background. My solution below fixes both flaws:
Define a function in ~/.bashrc or ~/.zshrc:
# fsshmap multiple ports
function fsshmap() {
echo -n "-L 1$1:127.0.0.1:$1 " > $HOME/sh/sshports.txt
for ((i=($1+1);i<$2;i++))
do
echo -n "-L 1$i:127.0.0.1:$i " >> $HOME/sh/sshports.txt
done
line=$(head -n 1 $HOME/sh/sshports.txt)
cline="ssh "$3" "$line
echo $cline
eval $cline
}
An example of running the function:
fsshmap 6000 6010 hostname
Result of this example:
You can access 127.0.0.1:16000~16009 the same as hostname:6000~6009
In my company, both my team members and I need access to 3 ports of an otherwise unreachable "target" server, so I created a permanent tunnel (that is, a tunnel that can run in the background indefinitely; see params -f and -N) from a reachable server to the target one. On the command line of the reachable server I executed:
ssh root@reachableIP -f -N -L *:8822:targetIP:22 -L *:9006:targetIP:9006 -L *:9100:targetIP:9100
I used the root user, but your own user will work. You will have to enter the password of the chosen user (even if you are already connected to the reachable server with that user).
Now port 8822 of the reachable machine corresponds to port 22 of the target one (for ssh/PuTTY/WinSCP), and ports 9006 and 9100 on the reachable machine correspond to the same ports on the target one (they host two web services in my case).
Another one-liner that I use, which works on Debian:
ssh user@192.168.1.10 $(for j in $(seq 20000 1 20100 ) ; do echo " -L$j:127.0.0.1:$j " ; done | tr -d "\n")
One of the benefits of logging into a server with port forwarding is facilitating the use of Jupyter Notebook. This link provides an excellent description of how to do it. Here I would like to summarize and expand on it for reference.
Situation 1. Login from a local machine named Host-A (e.g. your own laptop) to a remote work machine named Host-B.
ssh user@Host-B -L port_A:localhost:port_B
jupyter notebook --NotebookApp.token='' --no-browser --port=port_B
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-B but see it in Host-A.
Situation 2. Logging in from a local machine named Host-A (e.g. your own laptop) to a remote login machine named Host-B, and from there logging in to the remote work machine named Host-C. This is usually the case for most analytical servers within universities and can be achieved by using two ssh -L commands connected with -t.
ssh -L port_A:localhost:port_B user@Host-B -t ssh -L port_B:localhost:port_C user@Host-C
jupyter notebook --NotebookApp.token='' --no-browser --port=port_C
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-C but see it in Host-A.
Situation 3. Logging in from a local machine named Host-A (e.g. your own laptop) to a remote login machine named Host-B, from there to the remote work machine named Host-C, and finally to the remote work machine Host-D. This is not usually the case but might happen sometimes. It's an extension of Situation 2, and the same logic can be applied to more machines.
ssh -L port_A:localhost:port_B user@Host-B -t ssh -L port_B:localhost:port_C user@Host-C -t ssh -L port_C:localhost:port_D user@Host-D
jupyter notebook --NotebookApp.token='' --no-browser --port=port_D
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-D but see it in Host-A.
Note that port_A, port_B, port_C and port_D can be arbitrary port numbers, excluding the common port numbers listed here. In Situation 1, port_A and port_B can be the same to simplify the procedure.
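As a concrete instance of Situation 2, with arbitrarily chosen ports (8888, 8889, 8890) and generic host names:

# On Host-A: chain the two forwards through the login node
ssh -L 8888:localhost:8889 user@Host-B -t ssh -L 8889:localhost:8890 user@Host-C

# On Host-C: start the notebook on the last port of the chain
jupyter notebook --NotebookApp.token='' --no-browser --port=8890

# Then browse to http://localhost:8888/ on Host-A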
Here is a solution inspired by the one from Yuval Atzmon.
It has a few benefits over the initial solution:
First, it creates a single background process instead of one per port.
It generates an alias that allows you to kill your tunnels.
It binds only to 127.0.0.1, which is a little more secure.
You may use it as:
tnl your.remote.com 1234
tnl your.remote.com {1234,1235}
tnl your.remote.com {1234..1236}
And finally kill them all with tnlkill.
function tnl {
TUNNEL="ssh -N "
echo Port forwarding for ports:
for i in "${@:2}"
do
echo " - $i"
TUNNEL="$TUNNEL -L 127.0.0.1:$i:localhost:$i"
done
TUNNEL="$TUNNEL $1"
$TUNNEL &
PID=$!
alias tnlkill="kill $PID && unalias tnlkill"
}
An alternative approach is to tell ssh to work as a SOCKS proxy using the -D flag.
That way you would be able to connect to any remote network address/port accessible through the ssh server, as long as the client applications are able to go through a SOCKS proxy (or work with something like socksify).
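For example (the port and host names here are placeholders; curl's SOCKS options are standard):

# Open a SOCKS5 proxy on local port 1080; -f -N keeps it in the background
ssh -D 1080 -f -N user@jump.example.com

# Point a SOCKS-aware client at it; --socks5-hostname also resolves DNS through the tunnel
curl --socks5-hostname 127.0.0.1:1080 http://internal-service:8080/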
If you want a simple solution that runs in the background and is easy to kill, use a control socket:
# start
$ ssh -f -N -M -S $SOCKET -L localhost:9200:localhost:9200 $HOST
# stop
$ ssh -S $SOCKET -O exit $HOST
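While it is running, the same control socket can also be queried to confirm the master connection is still alive:

# check
$ ssh -S $SOCKET -O check $HOST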
I've developed loco to help with SSH forwarding. It can be used to share ports 5000 and 7000 on the remote host locally at the same ports:
pip install loco
loco listen SSHINFO -r 5000 -r 7000
First, it can be done using parallel execution with xargs -P 0.
Create a file with the port bindings (e.g. port-forward):
localhost:8080:localhost:8080
localhost:9090:localhost:8080
Then run:
xargs -P 0 -I xxx ssh -vNTCL xxx <REMOTE> < port-forward
Or you can do it as a one-liner:
echo localhost:{8080,9090} | tr ' ' '\n' | sed 's/.*/&:&/' | xargs -P 0 -I xxx ssh -vNTCL xxx <REMOTE>
Pros: each ssh port-forward is independent == no single point of failure.
Cons: each ssh port-forward is forked separately, which is somewhat inefficient.
Second, it can be done using the curly-brace expansion feature in Bash:
echo "ssh -vNTC $(echo localhost:{10,20,30,40,50} | perl -lpe 's/[^ ]+/-L $&:$&/g') <REMOTE>"
# output
ssh -vNTC -L localhost:10:localhost:10 -L localhost:20:localhost:20 -L localhost:30:localhost:30 -L localhost:40:localhost:40 -L localhost:50:localhost:50 <REMOTE>
A real example:
echo "-vNTC $(echo localhost:{8080,9090} | perl -lpe 's/[^ ]+/-L $&:$&/g') gitlab" | xargs ssh
This forwards ports 8080 and 9090 to the gitlab server.
Pros: one single fork == efficient.
Cons: closing this one process (ssh) closes all the forwards == single point of failure.
You can use this zsh function (it probably works with bash, too). Put it in ~/.zshrc:
ashL () {
  local a=() i
  # Build one -L option per port given after the host argument
  for i in "${@[2,-1]}"
  do
    a+=(-L "${i}:localhost:${i}")
  done
  autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -NT "$1" "${a[@]}"
}
Examples:
ashL db@114.39.161.24 6480 7690 7477
ashL db@114.39.161.24 {6000..6050} # Forwards the whole range. This is simply shell syntax sugar.
I'm trying to set up a simple dev environment with Vagrant. The base box (that I created) has CentOS 6.5 64bit with Apache and MySQL.
The issue is, the httpd service doesn't start on boot after I reload the VM (vagrant reload or vagrant halt then up).
The problem only occurs when I run a provision script that alters the DocumentRoot and only after the first time I halt the machine.
More info:
httpd is on chkconfig on levels 2, 3, 4 and 5
There are no errors written to the error_log (on /etc/httpd/logs).
If I ssh into the machine and start the service manually, it starts with no problem.
I had the same issue with other CentOS boxes (like the chef/centos-6.5 available on vagrantcloud.com), that's why I created one myself.
Other services, like mysql, start fine, so it's a problem specific to apache.
Summing up:
httpd always starts on first boot, even with the provision script (e.g. after vagrant destroy)
httpd always starts when I don't run a provision script (but I need it to set the DocumentRoot)
httpd doesn't start after the first halt, with a provision script that changes the DocumentRoot (not sure if that's the cause).
This is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "centos64_lamp"
config.vm.box_url = "<url>/centos64_lamp.box"
config.vm.hostname = "machine.dev"
config.vm.network "forwarded_port", guest: 80, host: 8080
config.vm.synced_folder ".", "/vagrant", owner: "root", group: "root"
config.vm.provision :shell, :path => "vagrant_files/bootstrap.sh"
end
I tried to create the vagrant folder with owner/group root and apache. Same problem with both (as with owner vagrant).
These are the provision scripts (bootstrap.sh) that I tried. The only thing that I want them to do is to change the DocumentRoot to the vagrant folder. Neither worked.
Try 1
#!/usr/bin/env bash
sudo rm -rf /var/www/html
sudo ln -fs /vagrant/app/webroot /var/www/html
Try 2
#!/usr/bin/env bash
sudo cp /vagrant/vagrant_files/httpd.conf /etc/httpd/conf
sudo service httpd restart
The httpd.conf on the second try is equal to the default one, except for the DocumentRoot path. This second alternative allows me to do vagrant up --provision to force the restart of the service, but that should be an unnecessary step.
What else can I try to solve this? Thank you.
Apparently the problem was due to the vagrant folder not being mounted yet when Apache tries to start, although I still don't understand why no error is thrown.
I solved it by creating an Upstart script (in the folder /etc/init) that starts the service after Vagrant mounts its folder (it emits an event called vagrant-mounted).
This is the script I used (with the filename httpd.conf but I don't think that's necessary).
# start apache on vagrant mounted
start on vagrant-mounted
exec sudo service httpd start
Upstart can do much more but this solves it.
First of all, check whether httpd is supposed to be started for the relevant runlevels (at least 2-5), which you did, by running:
chkconfig | grep httpd
In that case, the issue may be that your DocumentRoot or its symlink points to the Vagrant synced folder, which is not yet available when the service is being started.
A workaround is to add a service httpd start command at the end of your shell provisioning script, e.g.:
service httpd status || service httpd start
in order to fix it.
For a more bullet-proof workaround, add it to a trap handler (in a Bash script), e.g.:
trap onerror 1 2 3 15 ERR
#--- onerror()
onerror() {
service httpd status || service httpd start
}
This may not be enough, so to make it start in the halt & up case, you need to run your shell provisioner with run: "always" in your Vagrantfile, for example:
config.vm.provision :shell, run: "always", :inline => "service httpd status || service httpd start"
or provide a script, e.g.:
config.vm.provision :shell, run: "always", path: "scripts/check_vm_services.sh"
Then the script may look like:
#!/usr/bin/env bash
# Script to re-check VM state.
# Run each time when vagrant command is invoked.
# Check if httpd service is running.
echo Checking services...
service httpd status || service httpd start
Alternatively, check Launching services after Vagrant mount, which uses the vagrant-mounted upstart event that Vagrant emits each time it mounts a synced folder: you modify the upstart configuration of services that depend on the synced folder so that they are checked and restarted after the vagrant-mounted event is emitted.
I confirm that the above solution absolutely works.
I added a file named vagrant-mounted.conf within /etc/init, containing:
start on vagrant-mounted
exec sudo sh /etc/startup.sh
I had already added the shell script /etc/startup.sh as a means of manually starting up httpd, mysqld and sendmail, but it required logging in via vagrant ssh after vagrant up to run it... now it's automatic. Great!
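The script itself isn't shown above, but a startup script of that kind could be as simple as the following sketch (the service names are taken from the description; adjust them to your box):

#!/bin/sh
# /etc/startup.sh - start services that depend on the synced folder
service httpd start
service mysqld start
service sendmail start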
My nginx was not starting up on Vagrant reload or Vagrant up, so this is my solution:
sudo tee /etc/init/vagrant-mounted.conf << EOL
# restart services on vagrant mounted
start on vagrant-mounted
script
  service php5-fpm restart
  service mysql restart
  service memcached restart
  service nginx restart
end script
EOL
I've set up a VM on Fedora 17 with KVM and configured a bridged network for it. Both the host and the VM use manual IP configuration, with the host's IP as 192.168.0.2 and the VM's as 192.168.0.10.
From the VM I can connect to the host without any problems, but from the host I can't SSH to the VM, even though I can still ping the VM from the host. Trying to ssh just gives me "no route to host".
Oh, and I have iptables disabled, so I don't think the firewall is the problem.
Also ensure that the kernel is configured for IP forwarding:
$ sudo sysctl -a | grep net.ipv4.ip_forward
net.ipv4.ip_forward = 1
It should have a value of 1, not 0. If needed, enable with these commands:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf
There are two ways:
* Use a proxy tunnel to create a channel to the host from the guest.
From the guest, run the following command:
ssh -L 2000:localhost_ip:2000 username@hostip
Explore the ssh man page for the details.
* Harder to set up, but the proper approach is to configure the guest's networking correctly when running it:
Follow
http://www.cse.iitd.ernet.in/~prathmesh/random.html#Connecting_qemu_guest_to_real_network