I am trying to connect to a remote DB over SSH programmatically from a web application.
I have been trying the ssh2 module and I can connect to the remote server, but what I would like to do is connect directly to the tables in the DB.
I also tried tunnel-ssh in the bootstrap.js file so I can tunnel to the DB, but what I believe is happening is that the DB connection starts before the tunnel is set up, and therefore I am getting a Connection refused.
Is it possible to achieve this tunneling using some kind of configuration in the connections.js file of Sails.js?
Any other suggestions?
Thanks
In the end, what I did was use docker-compose with two containers: 1) the web app and 2) an SSH client. My docker-compose file looks something like this:
services:
  db:
    image: agonza1/sshclient
    volumes:
      - .:/ssh
    command: "ssh -Ng -L 5432:localhost:5432 -i sshkey.pem -o StrictHostKeyChecking=no root@site.com -p 1234"
  web:
    build: .
    env_file:
      - .env
    ports:
      - "443:443"
    links:
      - db
    depends_on:
      - db
where sshclient is just something like this: https://hub.docker.com/r/kroniak/ssh-client/~/dockerfile/
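With this in place the web app can treat the db service as if it were the database itself: the tunnel container listens on 5432 (the -g flag makes the forwarded port reachable from other containers, not only from inside the ssh container) and forwards it to the remote machine. A minimal sketch of the relevant part of the web service, assuming the Sails connection settings are read from environment variables (the variable names are just an example, use whatever your connections.js expects):

web:
  build: .
  env_file:
    - .env
  environment:
    # example only: point the Sails connection at the ssh tunnel container
    DB_HOST: db     # hostname of the tunnel service defined above
    DB_PORT: 5432   # local end of the ssh -L forward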
Related
The situation is: I want to exec docker run & push in the Drone runner, and the Docker registry and the runner are on the same server. So I want to pass the host IP as a variable into the Drone pipeline container, so that I can push Docker images without a remote registry server. But it seems that only Drone's allowed environment variables can be used in '${}'. I tried exporting EXTERNALIP on the host machine and reading ${EXTERNALIP}, but got nothing.
So, is there some way I can get the external IP to communicate with localhost, or another way to achieve this?
You should be able to push to localhost if it's on the same host. That said, I was not able to do this using the packages plugin, but I was able to replicate it using Docker directly:
steps:
  - name: docker-${DRONE_EVENT}
    image: docker:19.03
    when:
      event: [ push, pull_request ]
      status: [ success ]
    environment:
      DOCKER_PASSWORD:
        from_secret: docker_password
    commands:
      - echo $DOCKER_PASSWORD | docker login --username user_name --password-stdin localhost
      - docker build -t localhost/demo-web:latest .
      - if [ "${DRONE_EVENT}" == "push" ]; then docker push localhost/demo-web:latest; fi;
    volumes:
      - name: docker-socket
        path: /var/run/docker.sock

volumes:
  - name: docker-socket
    host:
      path: /var/run/docker.sock
A couple of caveats: obviously you will need to have trusted access in the repo configuration, or --trusted if using a local drone exec. Enjoy!
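For what it's worth, the reason this works is that the docker CLI in that step talks to the host's Docker daemon through the mounted /var/run/docker.sock, so localhost in the login/build/push commands is resolved on the host itself, which is why the registry on the same server is reachable without needing any external IP.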
The configuration below in a docker-compose file, with 5 replicas, will create five containers with the same VNC port and different internal IPs or hostnames. If we do the same thing on an EC2 machine, how do we access those VNC desktops via the public IP?
chrome_node:
  image: selenium/node-chrome-debug:3.141.59-gold
  depends_on:
    - hub
  environment:
    - HUB_PORT_4444_TCP_ADDR=hub
    - HUB_PORT_4444_TCP_PORT=4444
  networks:
    - test
  entrypoint: bash -c 'SE_OPTS="-host $$HOSTNAME -port 5557" /opt/bin/entry_point.sh'
  ports:
    - "5557:5900"
  deploy:
    replicas: 5
Adding the same entry multiple times in the docker-compose file with different IPs would do the trick, but I am looking for an alternative solution.
Change the ports section to:
ports:
  - "5900-5999:5900"
I have these two containers, say backend (CentOS) and mongo. What I would like is that from within the backend container I can connect to the mongo database as if it were running locally: $> mongo localhost:27017
Anyway, as far as I understand all this, you can map the port localhost:27017 to mongo:27017 like this:
$backend> ssh -L 27017:mongo:27017 root@mongo
However, if I do this I have to provide the root password, and after that it logs me into the mongo container and no port forwarding happens.
Background: I want to do this because I'm running a Java program which connects to a Mongo database on localhost and I cannot change that.
I found the correct SSH port forwarding command:
$> ssh root@mongo -L 27017:localhost:27017 -Nf
Normally the idea with this command is that you map a non-public port, through a public server, to your own server/computer.
* `root@mongo` - the public server
* `-L <port on your server>:<third server address>:<port>` - the forwarding rule
* `-Nf` - do not run a remote command (`-N`) and go to the background (`-f`)
Because the public server and the third server are the same computer/container, you have to use localhost :)
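With the tunnel running in the background, mongo localhost:27017 from inside the backend container (and therefore the unchanged Java program) should connect as if the database were local.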
Suppose, for example, that I want to make an SSH host in Docker. I understand that I can EXPOSE 22 inside the Dockerfile. I also understand that I can use -p 22222:22 so I can SSH into that Docker container from another physical machine on my LAN on port 22222, as ssh my_username@docker_host_ip -p 22222. But suppose that I'm so lazy that I can't be bothered to docker run the container with the option -p 22222:22 every time. Is there a way that the option -p 22222:22 can be automated in a config file somewhere? In the Dockerfile, maybe?
You can use docker-compose.
You can define the listening port in the docker-compose.yml file as below:
version: '2'
services:
  web:
    image: ubuntu
  ssh_service:
    build: .
    command: ssh ....
    volumes:
      - .:/code
    ports:
      - "22222:22"
    depends_on:
      - web
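With this file in place, docker-compose up starts the container with the 22222:22 mapping every time, which is the same as passing -p 22222:22 to docker run by hand, so you can still connect from another machine with ssh my_username@docker_host_ip -p 22222.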
Hi, I have a requirement to connect three Docker containers so that they can work together. I call these three containers:
container 1 - pga (Apache web server at port 80)
container 2 - server (Apache Airavata server at port 8930)
container 3 - rabbit (RabbitMQ at port 5672)
I have started RabbitMQ (container 3) as:
docker run -i -d --name rabbit -p 15672:15672 -t rabbitmq:3-management
I have started server (container 2) as:
docker run -i -d --name server --link rabbit:rabbit --expose 8930 -t airavata_server /bin/bash
Now from inside server (container 2) I can access rabbit (container 3) at port 5672. When I try
nc -zv container_3_port 5672 it says the connection is successful.
Up to this point I am happy with the Docker connection through links.
Now I have created another container, pga (container 1), as:
docker run -i -d --name pga --link server:server -p 8080:80 -t psaha4/airavata_pga /bin/bash
Now from inside the new pga container, when I try to access the service of server (container 2), it says connection refused.
I have verified that from inside the server container the service is running on port 8930 and that it was exposed while creating the container, but it still refuses connections from the other containers to which it is linked.
I could not find a similar situation described anywhere, and I am also clueless about how to debug this. Please help me find a way.
The output of the command docker exec server lsof -i :8930 is:
exec: "lsof": executable file not found in $PATH
Cannot run exec command fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2 in container 995b86032b0421c5199eb635bd65669b1aa93f96b60da4a49328050f7048197a: [8] System error: exec: "lsof": executable file not found in $PATH
Error starting exec command in container fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2: Cannot run exec command fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2 in container 995b86032b0421c5199eb635bd65669b1aa93f96b60da4a49328050f7048197a: [8] System error: exec: "lsof": executable file not found in $PATH
NOTE: I intend to expand on this, but my kid's just been sick. I will address the debugging issue from the question when I get a chance.
You may find it easier to use docker-compose for this as it lets you run them all with one command and keep the configuration under source control. An example configuration file (from my website) looks like this:
database:
  build: database
  env_file:
    - database/.env

api:
  build: api
  command: /opt/server/dist/build/ILikeWhenItWorks/ILikeWhenItWorks
  env_file:
    - api/.env
  links:
    - database
  tty: false
  volumes:
    - /etc/ssl/certs/:/etc/ssl/certs/
    - api:/opt/server/

webserver:
  build: webserver
  ports:
    - "80:80"
    - "443:443"
  links:
    - api
  volumes_from:
    - api
I find these files very readable and comprehensible; they essentially say exactly what they're doing. You can see how this relates to the surrounding directory structure in my source code.
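Applied to the three containers in the question, a rough, untested sketch (image names, ports and links taken from the docker run commands above; adjust to taste) might look something like this:

rabbit:
  image: rabbitmq:3-management
  ports:
    - "15672:15672"     # management UI; 5672 stays internal, reached via the link

server:
  image: airavata_server
  command: /bin/bash
  stdin_open: true      # mirrors docker run -i
  tty: true             # mirrors docker run -t
  expose:
    - "8930"
  links:
    - rabbit

pga:
  image: psaha4/airavata_pga
  command: /bin/bash
  stdin_open: true
  tty: true
  ports:
    - "8080:80"
  links:
    - server

A single docker-compose up then starts all three with the links in place.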