I have a TLS-secured Docker daemon running. I use TLS for remote access to the daemon and access Docker locally without any TLS. Normally...
Recently I updated Docker, and apparently I cannot connect to the local socket anymore. I suppose Docker now uses TLS for both remote and local connections.
Is there a way to disable TLS for the local Docker socket?
Output of ps auxw | grep dockerd:
/usr/bin/dockerd -H 0.0.0.0:2376 --tlsverify --tlscacert /home/dockermanager/.docker/ca.pem --tlscert /home/dockermanager/.docker/server-cert.pem --tlskey /home/dockermanager/.docker/server-key.pem
I was able to fix this myself.
I needed to migrate to these two systemd files provided by Docker:
https://github.com/moby/moby/tree/master/contrib/init/systemd
One service file is for the Docker daemon, and there is a separate unit for the Docker socket. The socket is a required dependency of docker.service and will be loaded, restarted and stopped accordingly.
Then I needed to add the daemon parameter -H unix:// so that the daemon listens on the Docker socket.
Afterwards everything worked as before, and local docker.socket communication does not need TLS verification at all.
Start command now:
/usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:2376 --tlsverify --tlscacert /home/dockeruser/.docker/ca.pem --tlscert /home/dockeruser/.docker/server-cert.pem --tlskey /home/dockeruser/.docker/server-key.pem
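For reference, a minimal sketch of how the same start command can be set through a systemd drop-in instead of editing the packaged unit file (the drop-in path is an assumption on my part; the flags are the ones from the start command above):
# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:2376 --tlsverify --tlscacert /home/dockeruser/.docker/ca.pem --tlscert /home/dockeruser/.docker/server-cert.pem --tlskey /home/dockeruser/.docker/server-key.pem
Reload and restart afterwards:
sudo systemctl daemon-reload
sudo systemctl restart docker.socket docker.service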
Related
I'm trying to configure the Container Registry in GitLab installed on my Ubuntu machine.
I have Docker configured over HTTP and it works; I added it as an insecure registry.
Gitlab is installed on the host http://5.121.32.5
external_url 'http://5.121.32.5'
In the gitlab.rb file, I have enabled the following settings:
registry_external_url 'http://5.121.32.5'
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "5.121.32.5"
gitlab_rails['registry_port'] = "5005"
gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
To make Docker listen on the port, I created a systemd drop-in:
sudo mkdir -p /etc/systemd/system/docker.service.d/
Here are the contents of the drop-in file:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
But when this command runs from the .gitlab-ci.yml file
docker push ${MY_REGISTRY_PROJECT}:latest
I get this error:
Error response from daemon: Get "https://5.121.32.5:5005/v2/": dial tcp 5.121.32.5:5005: connect: connection refused
What is the problem? What did I miss?
And why is https specified here if I have http configured?
When you use docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}, the docker command defaults to HTTPS, which causes the problem.
You need to tell Docker on the GitLab Runner host to use the insecure registry:
On the server on which the GitLab Runner is running, add the following option to your docker launch arguments (for me I added it to the DOCKER_OPTS in /etc/default/docker and restarted the docker engine): --insecure-registry 172.30.100.15:5050, replacing the IP with your own insecure registry.
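As a sketch, using the registry address from the question (on systemd-based installs the usual place is /etc/docker/daemon.json, which is an assumption about your setup; /etc/default/docker is what worked for me as described above):
# /etc/default/docker
DOCKER_OPTS="--insecure-registry 5.121.32.5:5005"
# or, equivalently, in /etc/docker/daemon.json
{
  "insecure-registries": ["5.121.32.5:5005"]
}
Then restart the Docker engine, e.g. sudo systemctl restart docker.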
Source
Also, you may want to read more about it in this interesting discussion
We have an application which uses SSH to copy an artifact from one node to another. While creating the Docker image (CentOS 8 based), I installed the OpenSSH server and client. When I run the image with the docker command and exec into it, I can run the ssh command successfully, and I also see port 22 open and listening ($ lsof -i -P -n | grep LISTEN).
But if I start a Pod/container using the same image in the Kubernetes cluster, port 22 is not open and listening inside the container. If I try to start sshd from inside the container, it gives me the error below:
Redirecting to /bin/systemctl start sshd.service Failed to get D-Bus connection: Operation not permitted.
Is there any way to start the K8s container with SSH enabled?
There are three things to consider:
Like David said in his comment:
I'd redesign your system to use a communication system that's easier to set up, like with HTTP calls between pods.
If you put a service in front of your deployment, it is not going to relay any SSH connections. So you have to point to the pods directly, which might be pretty inconvenient.
In case you have missed that: you need to declare port 22 in your deployment template.
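A rough sketch of what that could look like in the deployment template (the image name is hypothetical, and running sshd in the foreground instead of via systemctl is my assumption for avoiding the D-Bus error from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ssh-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ssh-demo
  template:
    metadata:
      labels:
        app: ssh-demo
    spec:
      containers:
      - name: ssh-demo
        image: my-centos-ssh-image          # hypothetical image name
        command: ["/usr/sbin/sshd", "-D"]   # run sshd in the foreground; systemctl needs systemd/D-Bus, which a plain container does not have
        ports:
        - containerPort: 22                 # declare port 22 as mentioned above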
Please let me know if that helped.
I am using an ElastiCache single-node-shard Redis 4.0 (or later) cluster.
I enabled in-transit encryption and set a Redis AUTH token.
I created a bastion host with stunnel using this link:
https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-connect-redis-node/
I am able to connect to the ElastiCache Redis node in the following way:
redis-cli -h hostname -p 6379 -a mypassword
and I can telnet to it as well.
BUT
when I run PING (expected response "PONG") in redis-cli after connecting, it gives
"Error: Connection reset by peer"
I checked the security groups on both sides.
Any ideas?
The bastion host is an Ubuntu 16.04 machine.
As I mentioned in the question, I was running the command like this:
redis-cli -h hostname -p 6379 -a mypassword
The correct way to connect to an ElastiCache cluster through stunnel is to use "localhost" as the host address, like this:
redis-cli -h localhost -p 6379 -a mypassword
Here is the explanation for using the localhost address:
When you create a tunnel between your bastion server and the ElastiCache host through stunnel, the program starts a service that listens on a local TCP port (6379), encapsulates the communication using the SSL protocol, and transfers the data between the local server and the remote host.
You need to start stunnel, check that the service is listening on the localhost address (127.0.0.1), and connect using "localhost" as the destination address:
Start stunnel. (Make sure you have installed stunnel using this link https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-connect-redis-node/; a minimal sketch of the config file is shown after these steps.)
$ sudo stunnel /etc/stunnel/redis-cli.conf
Use the netstat command to confirm that the tunnels have started:
$ netstat -tulnp | grep -i stunnel
You can now use the redis-cli to connect to the encrypted Redis node using the local endpoint of the tunnel:
$ redis-cli -h localhost -p 6379 -a MySecretPassword
localhost:6379> set foo "bar"
OK
localhost:6379> get foo
"bar"
Most probably the ElastiCache Redis instance is using encryption in transit and encryption at rest, and by design the plain redis-cli is not compatible with that encryption.
You need to set up stunnel to connect to the Redis cluster:
https://datanextsolutions.com/blog/how-to-fix-redis-cli-error-connection-reset-by-peer/
"Error: Connection reset by peer" indicates that Redis is killing your connection without sending any response.
One possible cause is that you are trying to connect to the Redis node without using SSL, as your connection will get rejected by the Redis server without a response [1]. Make sure you are connecting through the correct port of your tunnel proxy. If you are connecting directly from the bastion host, you should be using localhost.
Another possibility is that you have configured your stunnel without a version of SSL that is supported by Redis. You should double-check that the config file is exactly the same as the one provided in the support doc.
If that doesn't solve your problem, you can try to build the CLI included in the AWS open-source contribution [2]. You'll need to check out the repository, follow the instructions in the README, and then run make BUILD_SSL=yes to build redis-cli.
[1] https://github.com/madolson/redis/blob/unstable/src/ssl.c#L464
[2] https://github.com/madolson/redis/blob/unstable/SSL_README.md
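A rough sketch of those build steps, assuming the repository and flag from [2] (check the README there for the authoritative instructions):
git clone https://github.com/madolson/redis
cd redis
make BUILD_SSL=yes
# the SSL-enabled client ends up in src/redis-cli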
This might just be my rookie knowledge of Docker,
but I can't get the networking to work.
I'm trying to run a Mule-server via the pr3d4t0r/mule repository.
I can run it and hot-swap applications, but I can't reach it.
I can run a local server without Docker, and it works flawlessly.
But not when I try it with Docker.
When I try to do a simple curl command I get "curl: (56) Recv failure: Connection reset by peer"
curl http://localhost:8090/Sven
I have tried exposing the ports via -P and separately via -p 8090:8090 but no luck.
When the Docker container is running it does claim the ports (I tried running the container and the normal server at the same time, and the normal one said the ports were already in use).
When I try another Image like jboss/wildfly and I use -p 8080:8080 there's no problem, it works perfectly.
The application in the Mule server logs and responds with a simple "hello World". The output says that the application is deployed, but there are no messages or logging when I try to reach it.
Any suggestions?
In my case it was actually the app that was configured incorrectly. It had localhost as the host; it should have been 0.0.0.0. Without this it was listening only on localhost, i.e. inside the Docker container, and was not reachable from outside of it.
You should not need to use --net=host.
So check whether there is such a configuration setting in your app.
In application.properties you need to set the IP to 0.0.0.0, not 127.0.0.1.
The error
"curl: (56) Recv failure: Connection reset by peer"
means that no process in the Docker container is listening on the port. The -p option binds a port on the host system to a port in the container:
-p <port on the host OS to bind>:<port in the container>
So check your image; maybe the app in the container uses a different port and you need
-p 8080:8090
If you have server.address=localhost in your application.properties, comment it out or remove it.
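A minimal sketch of the relevant application.properties lines, assuming a Spring Boot style app and the port from the question:
# application.properties
# bind to all interfaces so the app is reachable from outside the container
server.address=0.0.0.0
server.port=8090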
I want to run rabbitmq-server in one docker container and connect to it from another container using celery (http://celeryproject.org/)
I have rabbitmq running using the below command...
sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
and I am running celery via
sudo docker run -i -t markellul/celery /bin/bash
When I try to follow the very basic tutorial to validate the connection at http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
I am getting a connection refused error:
consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111]
Connection refused.
When I install rabbitmq on the same container as celery it works fine.
What do I need to do to have the containers interact with each other?
[edit 2016]
Direct links are deprecated now. The new way to link containers is docker network connect. It works quite similarly to virtual networks and has a wider feature set than the old way of linking.
First you create your named containers:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run --name celery -it markellul/celery /bin/bash
Then you create a network (last parameter is your network name):
docker network create -d bridge --subnet 172.25.0.0/16 mynetwork
Connect the containers to your newly created network:
docker network connect mynetwork rabbitmq
docker network connect mynetwork celery
Now, both containers are in the same network and can communicate with each other.
A very detailed user guide can be found at Work with networks: Connect containers.
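For example, from inside the celery container the broker should now be reachable by its container name (guest/guest is the RabbitMQ default account, which I am assuming here):
# inside the celery container: name resolution works on user-defined networks
ping -c1 rabbitmq
# so celery can be pointed at a broker URL like:
#   amqp://guest:guest@rabbitmq:5672//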
[old answer]
There is a new feature in Docker 0.6.5 called linking, which is meant to help the communication between docker containers.
First, create your rabbitmq container as usual. Note that I also used the new "name" feature, which makes life a little bit easier:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
You can use the link parameter to map a container (we use the name here, the id would be ok too):
docker run --link rabbitmq:amq -i -t markellul/celery /bin/bash
Now you have access to the IP and port of the rabbitmq container, because Docker automatically added some environment variables:
$AMQ_PORT_5672_TCP_ADDR
$AMQ_PORT_5672_TCP_PORT
In addition Docker adds a host entry for the source container to the /etc/hosts file. In this example amq will be a defined host in the container.
From Docker documentation:
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
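A sketch of how those values can be used from inside the celery container (the variable names are the ones listed above; the amqp URL with the default guest account is my assumption):
# inside the linked celery container
echo $AMQ_PORT_5672_TCP_ADDR $AMQ_PORT_5672_TCP_PORT
# or, preferring the /etc/hosts entry as recommended in the quote above, use a broker URL like:
#   amqp://guest:guest@amq:5672//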
Just get your container IP and connect to it from another container:
CONTAINER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CONTAINER_ID)
echo $CONTAINER_IP
When you specify -p 5672, what Docker does is open up a new port on the host, such as 49xxx, and forward it to port 5672 of the container.
You should be able to see which host port is forwarded to the container by running:
sudo docker ps -a
From there, you can connect directly to the host IP address like so:
amqp://guest@HOST_IP:49xxx
You can't use localhost, because each container is basically its own localhost.
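As an example, a quick way to look up the mapped port for a single container (assuming the container was named rabbitmq as in the other answers; the 49xxx value is whatever Docker actually assigned):
# show the host port Docker mapped to the container's 5672
sudo docker port rabbitmq 5672
# e.g. 0.0.0.0:49154 -> use that port in the amqp URL:
#   amqp://guest@HOST_IP:49154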
Create Image:
docker build -t "imagename1" .
docker build -t "imagename2" .
Run Docker image:
docker run -it -p 8000:8000 --name=imagename1 imagename1
docker run -it -p 8080:8080 --name=imagename2 imagename2
Create Network:
docker network create -d bridge "networkname"
Connect the containers created by the run commands above (named after their images) to the network:
docker network connect "networkname" "imagename1"
docker network connect "networkname" "imagename2"
We can add any number of containers to the network, and inspect it to verify which containers are attached:
docker network inspect "networkname"
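As a quick check, one container should now be able to reach the other by name (curl being available in the image and port 8080 being the service port are assumptions based on the run commands above):
# from inside imagename1, the second container is reachable by its name
docker exec -it imagename1 curl http://imagename2:8080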
I think you can't connect to another container directly by design - that would be the responsibility of the host. An example of sharing data between containers using volumes is given here: http://docs.docker.io/en/latest/examples/couchdb_data_volumes/, but I don't think that is what you're looking for.
I recently found out about https://github.com/toscanini/maestro - that might suit your needs. Let us know if it does :), I haven't tried it myself yet.
Edit: Note that you can read here that native "Container wiring and service discovery" is on the roadmap. I guess 0.7 or 0.8 at the latest.
You can get the docker instance IP with...
CID=$(sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server); sudo docker inspect $CID | grep IPAddress
But that's not very useful.
You can use pipework to create a private network between docker containers.
This is currently on the 0.8 roadmap:
https://github.com/dotcloud/docker/issues/1143