I have a Kafka cluster on Docker using Confluent images, and I am using docker-compose to build the containers.
When I try to run the container it starts, but it can't communicate with any broker because the SSL handshake fails. I don't know if I'm missing some configuration:
[kafka-admin-client-thread | adminclient-1] ERROR org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -3 (/XXX:19092) failed authentication due to: SSL handshake failed
My Kafka brokers are configured as follows:
kafka1:
  image: confluentinc/cp-kafka:5.2.2
  container_name: kafka1
  ports:
    - "19092:19092"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: XXX:12181,XXX:12181,XXX:12181
    KAFKA_ADVERTISED_LISTENERS: SSL://XXXX:19092
    KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker1.keystore.jks
    KAFKA_SSL_KEYSTORE_CREDENTIALS: broker1_keystore_creds
    KAFKA_SSL_KEY_CREDENTIALS: broker1_sslkey_creds
    KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker1.truststore.jks
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker1_truststore_creds
    KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    KAFKA_SSL_CLIENT_AUTH: required
    KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL
    KAFKA_SECURITY_PROTOCOL: SSL
  volumes:
    - ./../../secrets:/etc/kafka/secrets
I am trying to bring up a Confluent REST Proxy in another container with the following configuration:
kafka-rest-proxy:
  image: confluentinc/cp-kafka-rest:5.2.2
  hostname: kafka-rest-proxy
  ports:
    - "18082:18082"
  environment:
    KAFKA_REST_LISTENERS: "http://0.0.0.0:18082"
    KAFKA_REST_ZOOKEEPER_CONNECT: XXX:12181,XXX:12181,XXX:12181
    KAFKA_REST_HOST_NAME: kafka-rest-proxy
    KAFKA_REST_BOOTSTRAP_SERVERS: SSL://XXX:19092,SSL://XXX:19092,SSL://XXX:19092
    KAFKA_REST_CLIENT_SECURITY_PROTOCOL: SSL
    KAFKA_REST_CLIENT_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.broker1.keystore.jks
    KAFKA_REST_CLIENT_SSL_KEYSTORE_PASSWORD: XXX
    KAFKA_REST_CLIENT_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.broker1.truststore.jks
    KAFKA_REST_CLIENT_SSL_TRUSTSTORE_PASSWORD: XXX
    KAFKA_REST_CLIENT_SSL_KEY_PASSWORD: XXX
    KAFKA_REST_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.producer.keystore.jks
    KAFKA_REST_SSL_KEYSTORE_PASSWORD: XXX
    KAFKA_REST_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.producer.truststore.jks
    KAFKA_REST_SSL_TRUSTSTORE_PASSWORD: XXX
  volumes:
    - ./../../secrets:/etc/kafka/secrets
I configured the SSL connection only with the truststore (I removed the keystore config completely) and I used the OPTS environment variable:
docker run -d \
--name krp \
-p 8082:8082 \
...
-v /home/ubuntu/kafka-keys:/kafka-keys \
-e KAFKA_REST_CLIENT_OPTS="-Dssl.keystore.location=/kafka-keys/kafka.client.keystore.jks -Dssl.keystore.password=changeit -Dssl.truststore.location=/kafka-keys/kafka.client.truststore.jks" \
confluentinc/cp-kafka-rest:5.3.1
And the connection worked.
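For reference, a docker-compose equivalent of that docker run command might look roughly like this (a sketch reusing the same paths and the changeit password shown above, not a tested configuration):
kafka-rest-proxy:
  image: confluentinc/cp-kafka-rest:5.3.1
  ports:
    - "8082:8082"
  volumes:
    - /home/ubuntu/kafka-keys:/kafka-keys
  environment:
    # Same JVM options as in the docker run example above.
    KAFKA_REST_CLIENT_OPTS: >-
      -Dssl.keystore.location=/kafka-keys/kafka.client.keystore.jks
      -Dssl.keystore.password=changeit
      -Dssl.truststore.location=/kafka-keys/kafka.client.truststore.jks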
In my case (Kubernetes with Helm) I had to change
"listeners":"http://0.0.0.0:8082" to "listeners":"https://0.0.0.0:8082"
I see the same mistake in your configuration:
KAFKA_REST_LISTENERS: "http://0.0.0.0:18082"
After that you will see at the end of the startup logs that it tries to load the keystore file.
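As a rough sketch of that change applied to your compose file (assuming you keep the KAFKA_REST_SSL_* keystore/truststore variables you already have, which the HTTPS listener would then use):
    KAFKA_REST_LISTENERS: "https://0.0.0.0:18082"
    KAFKA_REST_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.producer.keystore.jks
    KAFKA_REST_SSL_KEYSTORE_PASSWORD: XXX
    KAFKA_REST_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.producer.truststore.jks
    KAFKA_REST_SSL_TRUSTSTORE_PASSWORD: XXX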
We are implementing a Hyperledger Fabric solution. To do so, we set up a Fabric CA using the minimal configuration (we are still trying to figure out how things work) in a dedicated Docker container.
As we need to log our users in with an email/password pair, we set up an LDAP component. We chose OpenLDAP, using the osixia/openldap image, in a different container.
We set the parameters in fabric-ca-server-config.yaml to connect Fabric CA to the LDAP server. At the start of both containers, the logs seem fine:
Successfully initialized LDAP client
When we carry on with the Fabric CA tutorial, we fail at this command:
fabric-ca-client enroll -u http://cn=admin,dc=example:admin@localhost:7054
The result is :
[INFO] 127.0.0.1:46244 POST /enroll 401 23 "Failed to get user: Failed to connect to LDAP server over TCP at localhost:389: LDAP Result Code 200 "": dial tcp 127.0.0.1:389: connect: connection refused"
The LDAP server is set up and functioning correctly when queried from the CLI and via phpLDAPadmin, an LDAP browser, using the same credentials.
This is part of the fabric-ca-server-config.yaml:
ldap:
  enabled: true
  url: ldap://cn=admin,dc=example:admin@localhost:389/dc=example
  userfilter: (uid=%s)
  tls:
    enabled: false
    certfiles:
    client:
      certfile: noclientcert
      keyfile:
  attribute:
    names: ['uid','member']
    converters:
      - name: hf.Revoker
        value: attr("uid") =~ "revoker*"
    maps:
      groups:
        - name: example
          value: peer
Could anyone help?
Thanks for reading.
I see two issues here:
The first is more related to Docker than to fabric-ca. You have to set network_mode to host to remove the network isolation between the container and the Docker host. Then your Docker container will see the OpenLDAP server located on the Docker host.
Please look at this sample docker-compose.yaml file:
version: '2'
services:
  fabric-ca-server:
    image: hyperledger/fabric-ca:1.1.0
    container_name: fabric-ca-server
    ports:
      - "7054:7054"
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
    volumes:
      - ./fabric-ca-server:/etc/hyperledger/fabric-ca-server
    command: sh -c 'fabric-ca-server start'
    network_mode: host
You can find more about Docker networking here: https://docs.docker.com/network/
Once the network issue is resolved, you also have to modify userfilter to match the admin prefix, so it should look like this: userfilter: (cn=%s). If userfilter is not fixed, you will get a message that admin cannot be found in LDAP.
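With both fixes applied, the relevant part of fabric-ca-server-config.yaml might look roughly like this (a sketch; the DN and base depend on your directory, and with network_mode: host the container reaches OpenLDAP on localhost):
ldap:
  enabled: true
  # With network_mode: host, localhost inside the container is the Docker host.
  url: ldap://cn=admin,dc=example:admin@localhost:389/dc=example
  userfilter: (cn=%s)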
I am not using a local LDAP server; instead I am using the online one for a quick test:
http://www.forumsys.com/tutorials/integration-how-to/ldap/online-ldap-test-server/
However, I am still getting the error as well.
My fabric-ca-server-config.yaml is:
ldap:
  enabled: true
  url: ldap://cn=read-only-admin,dc=example,dc=com:password@ldap.forumsys.com:389/dc=example,dc=com
  tls:
    certfiles:
    client:
      certfile:
      keyfile:
  # Attribute related configuration for mapping from LDAP entries to Fabric CA attributes
  attribute:
    names: ['uid','member']
    converters:
      - name: hf.Revoker
        value: attr("uid") =~ "revoker*"
    maps:
      groups:
        - name:
          value:
And I run it with:
fabric-ca-server start -c fabric-ca-server-config.yaml
I saw this in the logs:
Successfully initialized LDAP client
Here is the screenshot for phpLDAPAdmin:
I am using the same script for testing:
$ fabric-ca-client enroll -u http://cn=read-only-admin,dc=example,dc=com:password@localhost:7054
$ fabric-ca-client enroll -u http://uid=tesla,dc=example,dc=com:password@localhost:7054
But it's still not working; I get something like:
POST /enroll 401 23 "Failed to get user: User 'uid=tesla,dc=example,dc=com' does not exist in LDAP directory"
I'm trying to run Traefik with SSL, using a self-signed certificate.
This is my docker-compose.yml file:
traefik:
  image: traefik
  restart: unless-stopped
  command: -c /dev/null --web --docker --logLevel=INFO --defaultEntryPoints='https' --entryPoints="Name:https Address::443 TLS:/certs/cert.pem,/certs/key.pem" --entryPoints="Name:http Address::80 Redirect.EntryPoint:https"
  ports:
    - '80:80'
    - '443:443'
    - '8080:8080'
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ./certs:/certs/
When running docker-compose up, I'm getting this error in the log:
level=error msg="Error creating TLS config: bad TLS Certificate KeyFile format, expected a path"
After that:
level=fatal msg="Error preparing server: bad TLS Certificate KeyFile format, expected a path
And then:
traefik exited with code 1
I'm running Docker Version 17.06.0 on a Mac
Any clue as to what the issue could be here?
I have two microservices (Spring Boot apps) running in different Docker containers and configured with a Zuul API gateway. Routing to the other container is not working. Container 1 is running on port 8030 and container 2 is running on port 8030.
Below is the Zuul configuration in application.yml:
server:
  port: 8030
# TODO: figure out why I need this here and in bootstrap.yml
spring:
  application:
    name: zuul server
endpoints:
  restart:
    enabled: true
  shutdown:
    enabled: true
  health:
    sensitive: false
zuul:
  routes:
    zuultest:
      url: http://localhost:8080
      stripPrefix: false
ribbon:
  eureka:
    enabled: false
When accessing it through localhost:8030/zuultest/test, I get this exception:
2016-09-19 09:10:14.597 INFO 1 --- [nio-8030-exec-3] hello.SimpleFilter : GET request to http://localhost:8030/zuultest/test
2016-09-19 09:10:14.600 WARN 1 --- [nio-8030-exec-3] o.s.c.n.z.filters.post.SendErrorFilter : Error during filtering
Can anyone tell me why I am getting this?
You can use the links option in docker-compose.yml to link the two containers:
demo1:
  image: <demo1 image name>
  links:
    - demo2
demo2:
  image: <demo2 image name>
Then in the zuul:routes:url configuration you can use the container name, demo2, instead of its IP, as sketched below.
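A minimal sketch of that route, assuming demo2 listens on port 8080 inside its container (the port is an assumption, not taken from the original answer):
zuul:
  routes:
    zuultest:
      # Use the linked container name instead of localhost or an IP.
      url: http://demo2:8080
      stripPrefix: false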
How did you start the two containers? Both cannot use the same port if you expose them to the Docker host.
docker run --name serviceA --net=host -p 8030:8030 ...
docker run --name serviceB --net=host -p 8030:8031 ...
Without this, if you call localhost:8030, you are calling the host (not the container), and you will not get a response.
You need to map the ports to the host when you start the containers, using different host ports, and then call localhost with the right exposed port.
I have installed Kubernetes using contrib/ansible scripts.
When I run cluster-info:
[osboxes#kube-master-def ~]$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubedash is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubedash
Grafana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
The cluster is exposed on localhost with an insecure port, and exposed on secure port 443 via SSL:
kube 18103 1 0 12:20 ? 00:02:57 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=https://10.57.50.161:443 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig --service-account-private-key-file=/etc/kubernetes/certs/server.key --root-ca-file=/etc/kubernetes/certs/ca.crt
kube 18217 1 0 12:20 ? 00:00:15 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=https://10.57.50.161:443 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
root 27094 1 0 12:21 ? 00:00:00 /bin/bash /usr/libexec/kubernetes/kube-addons.sh
kube 27300 1 1 12:21 ? 00:05:36 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://10.57.50.161:2379 --insecure-bind-address=127.0.0.1 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --tls-cert-file=/etc/kubernetes/certs/server.crt --tls-private-key-file=/etc/kubernetes/certs/server.key --client-ca-file=/etc/kubernetes/certs/ca.crt --token-auth-file=/etc/kubernetes/tokens/known_tokens.csv --service-account-key-file=/etc/kubernetes/certs/server.crt
I have copied the certificates from kube-master machine to my local machine, I have installed the ca root certificate. The chrome/safari browsers are accepting the ca root certificate.
When I try to access https://10.57.50.161/ui
I get 'Unauthorized'.
How can I access the kubernetes ui?
You can use kubectl proxy.
Depending on whether you are using a config file, run from the command line:
kubectl proxy
or
kubectl --kubeconfig=kubeconfig proxy
You should get a response similar to this:
Starting to serve on 127.0.0.1:8001
Now open your browser and navigate to
http://127.0.0.1:8001/ui/ (deprecated, see kubernetes/dashboard)
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
You need to make sure the ports match up.
This works for me, and it lets you access the proxy from the network:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Quick-n-dirty (and unsecure) way to access the Dashboard:
$ kubectl edit svc/kubernetes-dashboard --namespace=kube-system
This will load the Dashboard config (yaml) into an editor where you can edit it.
Change line type: ClusterIP to type: NodePort.
Get the tcp port:
$ kubectl get svc kubernetes-dashboard -o json --namespace=kube-system
The line with the tcp port will look like:
"nodePort": 31567
In newer releases of kubernetes you can get the nodeport from get svc:
# This is kubernetes 1.7:
donn@host37:~$ sudo kubectl get svc --namespace=kube-system
NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   10.3.0.234   <nodes>       80:31567/TCP   2h
Do kubectl describe nodes to get a node IP address.
Browse to:
http://NODE_IP:31567
Good for testing. Not good for production due to lack of security.
Looking at your apiserver configuration, you will need to either present a bearer token (valid tokens will be listed in /etc/kubernetes/tokens/known_tokens.csv) or client certificate (signed by the CA cert in /etc/kubernetes/certs/ca.crt) to prove to the apiserver that you should be allowed to access the cluster.
https://github.com/kubernetes/kubernetes/issues/7307#issuecomment-96130676 describes how I was able to configure client certificates for a GKE cluster on my Mac.
To pass bearer tokens, you need to pass an HTTP header Authorization with a value Bearer ${KUBE_BEARER_TOKEN}. You can see an example of how this is done with curl here; in a browser, you will need to install an add-on/plugin to pass custom headers.
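As a rough illustration (the server address is the one from the question, the CA path comes from the apiserver flags above, and the token value is a placeholder, not a real credential):
# Hypothetical example: pass a bearer token in the Authorization header.
curl --cacert /etc/kubernetes/certs/ca.crt \
  -H "Authorization: Bearer ${KUBE_BEARER_TOKEN}" \
  https://10.57.50.161/api/v1/namespaces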
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy &
Run the following command on your local laptop (or wherever you want to access the GUI):
ssh -L 8877:127.0.0.1:8001 -N -f -l root master_IPADDRESS
Get the secret key
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
Open the dashboard
http://localhost:8877/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Perform role-binding if you get any errors.
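For example, a permissive binding like the following is often enough for a test cluster (a sketch only; the binding name is arbitrary, and granting cluster-admin to the kube-system default service account is insecure, so do not use this in production):
# Hypothetical, test-only example: grant cluster-admin to the kube-system default service account.
kubectl create clusterrolebinding dashboard-admin-test \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default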
You can use kubectl proxy --address=clusterIP --port 8001 --accept-hosts '.*'
The API server is already accessible on port 6443 on the node, but it does not authorize access to https://:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I've generated client certificates signed by the Kubernetes CA cert, converted them to PKCS12, and imported them into my browser... when I try to access this URL, it says that the user is not authorized to access the URI...
Hi, I have a requirement to connect three Docker containers so that they can work together. I call these three containers:
container 1 - pga (apache webserver at port 80)
container 2 - server (apache airavata server at port 8930)
container 3 - rabbit (RabbitMQ at port 5672)
I have started RabbitMQ (container 3) as:
docker run -i -d --name rabbit -p 15672:15672 -t rabbitmq:3-management
I have started server (container 2) as
docker run -i -d --name server --link rabbit:rabbit --expose 8930 -t airavata_server /bin/bash
Now from inside server (container 2) I can access rabbit (container 3) at port 5672. When I try
nc -zv container_3_port 5672 it says the connection is successful.
Up to this point I am happy with the Docker connection through the link.
Now I have created another container, pga (container 1), as:
docker run -i -d --name pga --link server:server -p 8080:80 -t psaha4/airavata_pga /bin/bash
Now, from inside the new pga container, when I try to access the service of server (container 2), I get connection refused.
I have verified that from inside the server container the service is running on port 8930, and that the port was exposed while creating the container, but it still refuses connections from the other containers it is linked to.
I could not find a similar situation described anywhere, and I am also clueless about how to debug this. Please help me find a way.
The output of the command docker exec server lsof -i :8930 is:
exec: "lsof": executable file not found in $PATH
Cannot run exec command fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2 in container 995b86032b0421c5199eb635bd65669b1aa93f96b60da4a49328050f7048197a: [8] System error: exec: "lsof": executable file not found in $PATH
Error starting exec command in container fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2: Cannot run exec command fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2 in container 995b86032b0421c5199eb635bd65669b1aa93f96b60da4a49328050f7048197a: [8] System error: exec: "lsof": executable file not found in $PATH
NOTE: I intend to expand on this, but my kid's just been sick. I will address the debugging issue from the question when I get a chance.
You may find it easier to use docker-compose for this as it lets you run them all with one command and keep the configuration under source control. An example configuration file (from my website) looks like this:
database:
  build: database
  env_file:
    - database/.env
api:
  build: api
  command: /opt/server/dist/build/ILikeWhenItWorks/ILikeWhenItWorks
  env_file:
    - api/.env
  links:
    - database
  tty: false
  volumes:
    - /etc/ssl/certs/:/etc/ssl/certs/
    - api:/opt/server/
webserver:
  build: webserver
  ports:
    - "80:80"
    - "443:443"
  links:
    - api
  volumes_from:
    - api
I find these files very readable and comprehensible, they essentially say exactly what they're doing. You can see how it relates to the surrounding directory structure in my source code.
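As a rough sketch of how the three containers from the question might be expressed in a compose file (image names, links, and ports are taken from the question's docker run commands; this is untested):
rabbit:
  image: rabbitmq:3-management
  ports:
    - "15672:15672"
server:
  image: airavata_server
  links:
    - rabbit
  expose:
    - "8930"
pga:
  image: psaha4/airavata_pga
  links:
    - server
  ports:
    - "8080:80"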