From inside a docker container, I'm running
# openssl s_client -connect rubygems.org:443 -state -nbio 2>&1 | grep "^SSL"
SSL_connect:before/connect initialization
SSL_connect:SSLv2/v3 write client hello A
SSL_connect:error in SSLv2/v3 read server hello A
That's all I get.
I can't connect to any HTTPS site from within the Docker container. The container is running on an OpenStack VM; the VM itself can connect via HTTPS.
Any advice?
UPDATE
root@ce239554761d:/# curl -vv https://google.com
* Rebuilt URL to: https://google.com/
* Hostname was NOT found in DNS cache
* Trying 216.58.217.46...
* Connected to google.com (216.58.217.46) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
and then it hangs.
Also, I'm getting intermittent successes now.
Sanity Checks:
Changing the Docker IPs doesn't fix the problem.
The Docker containers work on my local machine.
The Docker containers work on other clouds.
Docker 1.10.0 doesn't work in the VMs.
Docker 1.9.1 works in the VMs.
I was given a solution by the Docker community.
The OpenStack network uses lower MTU values, and since 1.10 Docker does not infer MTU settings from the host's network card.
To run the Docker daemon with custom MTU settings, you can follow this blog post, which says:
$ cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
Edit a line in the new file to look like this:
ExecStart=/usr/bin/docker daemon -H fd:// --mtu=1454
Or (as suggested below by Dionysius), create and edit the file
/etc/systemd/system/docker.service.d/fixmtu.conf as follows:
[Service]
# Reset ExecStart & update mtu (see original command in /lib/systemd/system/docker.service)
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --mtu=1454
An MTU of 1454 is the value that seems to be common with OpenStack. You can look it up on your host using ifconfig.
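For example, to check the MTU on the host (assuming the interface is eth0):
$ ifconfig eth0 | grep -i mtu     # classic tool, prints e.g. "... MTU:1454 ..."
$ ip link show eth0 | grep mtu    # modern equivalent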
Finally restart Docker:
$ sudo systemctl daemon-reload
$ sudo service docker restart
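To verify that containers actually pick up the new MTU, a quick check (assuming the busybox image is available):
$ docker run --rm busybox ip link show eth0
# the eth0 line should now report "mtu 1454"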
Related
I am using an ElastiCache single-node shard running Redis 4.0 or later.
I enabled in-transit encryption and set a Redis AUTH token.
I created a bastion host with stunnel using this link:
https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-connect-redis-node/
I am able to connect to the ElastiCache Redis node in the following way:
redis-cli -h hostname -p 6379 -a mypassword
and I can telnet to it as well.
BUT
when I run ping (expected response "PONG") in redis-cli after connecting, it gives:
"Error: Connection reset by peer"
I checked the security groups on both sides.
Any idea?
The bastion host is an Ubuntu 16.04 machine.
As I mentioned in the question, I was running the command like this:
redis-cli -h hostname -p 6379 -a mypassword
The correct way to connect to an ElastiCache cluster through stunnel is to use "localhost" as the host address, like this:
redis-cli -h localhost -p 6379 -a mypassword
Here is the explanation for using the localhost address:
When you create a tunnel between your bastion server and the ElastiCache host through stunnel, the program starts a service that listens on a local TCP port (6379), encapsulates the communication using the SSL protocol, and transfers the data between the local server and the remote host.
You need to start stunnel, check that the service is listening on the localhost address (127.0.0.1), and connect using "localhost" as the destination address:
Start stunnel. (Make sure you have installed stunnel using this link https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-connect-redis-node/)
$ sudo stunnel /etc/stunnel/redis-cli.conf
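For reference, /etc/stunnel/redis-cli.conf looks roughly like this (a sketch based on the AWS doc above; the connect endpoint is a placeholder for your cluster's primary endpoint):
fips = no
pid = /var/run/stunnel.pid
[redis-cli]
client = yes
accept = 127.0.0.1:6379
connect = <your-elasticache-endpoint>:6379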
Use the netstat command to confirm that the tunnels have started:
$ netstat -tulnp | grep -i stunnel
You can now use the redis-cli to connect to the encrypted Redis node using the local endpoint of the tunnel:
$ redis-cli -h localhost -p 6379 -a MySecretPassword
localhost:6379> set foo "bar"
OK
localhost:6379> get foo
"bar"
Most probably the ElastiCache Redis instance is using encryption in-transit and encryption at-rest, and by design the Redis CLI is not compatible with that encryption.
You need to set up stunnel to connect to the Redis cluster:
https://datanextsolutions.com/blog/how-to-fix-redis-cli-error-connection-reset-by-peer/
"Error: Connection reset by peer" indicates that Redis is killing your connection without sending any response.
One possible cause is that you are trying to connect to the Redis node without using SSL; in that case your connection gets rejected by the Redis server without a response [1]. Make sure you are connecting through the correct port of your tunnel proxy. If you are connecting directly from the bastion host, you should be using localhost.
Another option is that you have incorrectly configured your stunnel so that it does not include a version of SSL supported by Redis. You should double-check that the config file is exactly the same as the one provided in the support doc.
If that doesn't solve your problem, you can try building the CLI included in AWS's open-source contribution [2]. You'll need to check out the repository, follow the instructions in the README, and then run make BUILD_SSL=yes to build redis-cli.
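Roughly, the build looks like this (a sketch; see the README in [2] for the authoritative steps and the flags to pass to the resulting binary):
$ git clone https://github.com/madolson/redis.git
$ cd redis
$ make BUILD_SSL=yes          # builds src/redis-cli with SSL support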
[1] https://github.com/madolson/redis/blob/unstable/src/ssl.c#L464
[2] https://github.com/madolson/redis/blob/unstable/SSL_README.md
I have a TLS-secured Docker daemon running. I use TLS for remote access to the Docker daemon and access Docker locally without any TLS. Normally...
Recently, I updated Docker. Apparently I cannot connect to the local socket anymore. I suppose Docker now uses TLS for both remote and local connections.
Is there a way to disable TLS for the local Docker socket?
Output of ps auxw | grep dockerd:
/usr/bin/dockerd -H 0.0.0.0:2376 --tlsverify --tlscacert /home/dockermanager/.docker/ca.pem --tlscert /home/dockermanager/.docker/server-cert.pem --tlskey /home/dockermanager/.docker/server-key.pem
I was able to fix this myself.
I needed to migrate to these two systemd files provided by Docker:
https://github.com/moby/moby/tree/master/contrib/init/systemd
One service file is for the Docker daemon, and there is a separate one for the Docker socket. The Docker socket is a required dependency of docker.service and will be loaded, restarted, and stopped accordingly.
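For reference, the docker.socket unit from that repo looks approximately like this (check the repo for the current version):
[Unit]
Description=Docker Socket for the API
PartOf=docker.service

[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target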
Then I needed to add the Docker daemon parameter -H unix:// in order to make the daemon listen on the Docker socket.
Afterwards everything worked as always, and I assume local docker.socket communication does not need TLS verification at all.
Start command now:
/usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:2376 --tlsverify --tlscacert /home/dockeruser/.docker/ca.pem --tlscert /home/dockeruser/.docker/server-cert.pem --tlskey /home/dockeruser/.docker/server-key.pem
I have installed Kubernetes using contrib/ansible scripts.
When I run cluster-info:
[osboxes@kube-master-def ~]$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubedash is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubedash
Grafana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
The cluster is exposed on localhost on an insecure port, and on secure port 443 via SSL:
kube 18103 1 0 12:20 ? 00:02:57 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=https://10.57.50.161:443 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig --service-account-private-key-file=/etc/kubernetes/certs/server.key --root-ca-file=/etc/kubernetes/certs/ca.crt
kube 18217 1 0 12:20 ? 00:00:15 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=https://10.57.50.161:443 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
root 27094 1 0 12:21 ? 00:00:00 /bin/bash /usr/libexec/kubernetes/kube-addons.sh
kube 27300 1 1 12:21 ? 00:05:36 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://10.57.50.161:2379 --insecure-bind-address=127.0.0.1 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --tls-cert-file=/etc/kubernetes/certs/server.crt --tls-private-key-file=/etc/kubernetes/certs/server.key --client-ca-file=/etc/kubernetes/certs/ca.crt --token-auth-file=/etc/kubernetes/tokens/known_tokens.csv --service-account-key-file=/etc/kubernetes/certs/server.crt
I have copied the certificates from the kube-master machine to my local machine and installed the CA root certificate. The Chrome/Safari browsers accept the CA root certificate.
When I try to access https://10.57.50.161/ui
I get 'Unauthorized'.
How can I access the Kubernetes UI?
You can use kubectl proxy.
Depending on whether you are using a config file, run from the command line:
kubectl proxy
or
kubectl --kubeconfig=kubeconfig proxy
You should get a response similar to:
Starting to serve on 127.0.0.1:8001
Now open your browser and navigate to
http://127.0.0.1:8001/ui/ (deprecated, see kubernetes/dashboard)
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
You need to make sure the ports match up.
This works for me if you need to access the proxy from the network:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Quick-n-dirty (and unsecure) way to access the Dashboard:
$ kubectl edit svc/kubernetes-dashboard --namespace=kube-system
This will load the Dashboard config (yaml) into an editor where you can edit it.
Change line type: ClusterIP to type: NodePort.
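The relevant fragment of the service spec then looks like this (a sketch, other fields omitted; the port numbers match the example output below):
spec:
  ports:
  - port: 80
    nodePort: 31567   # allocated by Kubernetes if you leave it out
  type: NodePort      # changed from ClusterIP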
Get the tcp port:
$ kubectl get svc kubernetes-dashboard -o json --namespace=kube-system
The line with the tcp port will look like:
"nodePort": 31567
In newer releases of kubernetes you can get the nodeport from get svc:
# This is kubernetes 1.7:
donn@host37:~$ sudo kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard 10.3.0.234 <nodes> 80:31567/TCP 2h
Do kubectl describe nodes to get a node IP address.
Browse to:
http://NODE_IP:31567
Good for testing. Not good for production due to lack of security.
Looking at your apiserver configuration, you will need to either present a bearer token (valid tokens will be listed in /etc/kubernetes/tokens/known_tokens.csv) or client certificate (signed by the CA cert in /etc/kubernetes/certs/ca.crt) to prove to the apiserver that you should be allowed to access the cluster.
https://github.com/kubernetes/kubernetes/issues/7307#issuecomment-96130676 describes how I was able to configure client certificates for a GKE cluster on my Mac.
To pass bearer tokens, you need to pass an HTTP header Authorization with a value Bearer ${KUBE_BEARER_TOKEN}. You can see an example of how this is done with curl here; in a browser, you will need to install an add-on/plugin to pass custom headers.
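A sketch of such a curl call against the apiserver from the question (--cacert points at the CA file from the apiserver flags above; the token must be one listed in known_tokens.csv):
$ curl --cacert /etc/kubernetes/certs/ca.crt \
    -H "Authorization: Bearer ${KUBE_BEARER_TOKEN}" \
    https://10.57.50.161/ui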
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy &
Run the following command on your local laptop (or wherever you want to access the GUI):
ssh -L 8877:127.0.0.1:8001 -N -f -l root master_IPADDRESS
Get the secret token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
Open the dashboard
http://localhost:8877/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Perform role-binding if you get any errors.
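For example, a permissive, test-only binding that grants the dashboard's service account cluster-admin (assuming the service account created by the dashboard manifest above):
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard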
You can use kubectl proxy --address=clusterIP --port 8001 --accept-hosts '.*'
The API server is already accessible on port 6443 on the node, but it does not authorize access to https://<node>:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I've generated client certificates signed by the Kubernetes CA cert, converted them to PKCS#12, and imported them into my browser... when I try to access this URL, it says the user is not authorized to access the URI...
I'm a newbie at docker. I'm creating a Hello, World example. All I'm trying to do is bring up Apache in a docker and then view the default website from the host machine.
Dockerfile
FROM centos:latest
RUN yum install epel-release -y
RUN yum install wget -y
RUN yum install httpd -y
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
And then I build it:
> docker build .
And then I tag it:
docker tag 17283f566320 my:apache
And then I run it:
> docker run -p 80:9191 my:apache
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
It then runs....
In another terminal window, I attempt to issue the curl command to view the default web site.
> curl -XGET http://0.0.0.0:9191
curl: (7) Failed to connect to 0.0.0.0 port 9191: Connection refused
> curl -XGET http://localhost:9191
curl: (7) Failed to connect to localhost port 9191: Connection refused
> curl -XGET http://127.0.0.1:9191
curl: (7) Failed to connect to 127.0.0.1 port 9191: Connection refused
Just to make sure that I got the port correct, I run this:
> docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5aed4063b1f6 my:apache "/usr/sbin/httpd -D F" 43 seconds ago Up 42 seconds 80/tcp, 0.0.0.0:80->9191/tcp angry_hodgkin
Thanks to all. My ports were reversed:
> docker run -p 9191:80 my:apache
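With -p the host port comes first and the container port second (-p HOST:CONTAINER), so this maps host port 9191 to the container's port 80, and the earlier check now works:
> curl -XGET http://localhost:9191
# should now return the Apache default page instead of "Connection refused"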
Even though you created the containers on your local machine, they are actually running on a different machine (a virtual machine).
First, check the IP of your Docker machine (the virtual machine):
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100
Then run a curl command to view the default web site served by your Apache web server inside the container:
curl http://192.168.99.100:9191
If you are running Docker natively on an Ubuntu machine, you should be able to access your container via localhost.
If you are using Mac or Windows, your Docker container does not run on localhost but on its own IP. You can get your container's IP with docker inspect <container id> | grep IPAddress, or if you are using docker-machine, with docker-machine ip <docker_machine_name>.
Related info:
http://networkstatic.net/10-examples-of-how-to-get-docker-container-ip-address/
https://docs.docker.com/machine/reference/ip/
How to get a Docker container's IP address from the host?
So your curl call should be something like this: curl <container_ip>:<container_exposed_port>
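For example, as a one-liner (the container name angry_hodgkin and port 80 come from the docker ps output and Dockerfile above; this assumes the default bridge network):
> curl http://$(docker inspect -f '{{.NetworkSettings.IPAddress}}' angry_hodgkin):80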
Also, you can tag your image in the build command with the -t parameter, like this:
docker build -t my:image .
Another tip: you can optimize your Dockerfile by combining the yum install commands, like this:
RUN yum install -y \
epel-release \
wget \
httpd
http://blog.tutum.co/2014/10/22/how-to-optimize-your-dockerfile/
I'm setting up a domain registry as described here:
https://docs.docker.com/registry/deploying/
I generated a certificate for docker.mydomain.com and started the docker using their command on my server:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
I've started the docker and pointed to certificates I obtained using letsencrypt (https://letsencrypt.org/).
Now, when I browse to https://docker.mydomain.com:5000/v2/ I get a page with just '{}' and a green lock (successful secure page request).
But when I try to do a docker login docker.mydomain.com:5000 from a different server, I see an error in the registry's Docker logs:
TLS handshake error from xxx.xxx.xxx.xxx:51773: remote error: bad certificate
I've tried some different variations in setting up the certificates and have gotten errors like:
remote error: unknown certificate authority
and
tls: first record does not look like a TLS handshake
What am I missing?
Docker seems not to support SNI: https://github.com/docker/docker/issues/9969
Update: Docker should now support SNI.
This means that when connecting to your server during the TLS handshake, the Docker client does not specify the domain name, so your server presents its default certificate.
The solution could be to change the default certificate of your server to one that is valid for the Docker domain.
To check whether your (sub-)domain works with clients that are not SNI-aware, you can use ssllabs.com/ssltest: if you DON'T see the message "This site works only in browsers with SNI support.", then it will work.
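You can also compare which certificate your server presents with and without SNI using openssl s_client (hostname taken from the question):
$ openssl s_client -connect docker.mydomain.com:5000 -servername docker.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject   # with SNI
$ openssl s_client -connect docker.mydomain.com:5000 </dev/null 2>/dev/null | openssl x509 -noout -subject                                   # without SNI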