Routing to different container in docker using zuul not working - api

I have 2 microservices (Spring Boot apps) running in different docker containers, configured behind a Zuul API gateway. Routing to the other container is not working. Container 1 is running on port 8030 and container 2 is also running on port 8030.
Below is the zuul configuration in application.yml -
server:
  port: 8030

# TODO: figure out why I need this here and in bootstrap.yml
spring:
  application:
    name: zuul server

endpoints:
  restart:
    enabled: true
  shutdown:
    enabled: true
  health:
    sensitive: false

zuul:
  routes:
    zuultest:
      url: http://localhost:8080
      stripPrefix: false

ribbon:
  eureka:
    enabled: false
When I access localhost:8030/zuultest/test, I am getting the following exception:
2016-09-19 09:10:14.597 INFO 1 --- [nio-8030-exec-3] hello.SimpleFilter : GET request to http://localhost:8030/zuultest/test
2016-09-19 09:10:14.600 WARN 1 --- [nio-8030-exec-3] o.s.c.n.z.filters.post.SendErrorFilter : Error during filtering
Can I know why I am getting this?

You can use the links option in docker-compose.yml to link the two containers:
demo1:
  image: <demo1 image name>
  links:
    - demo2
demo2:
  image: <demo2 image name>
Then, in the zuul:routes:url configuration, you can use the container name, demo2, instead of its IP.
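For context, a minimal sketch of how the two pieces could fit together, assuming the second service listens on port 8080 inside its container (the ports and image names here are illustrative, not taken from the question):
# docker-compose.yml
zuul:
  image: <zuul image name>
  ports:
    - "8030:8030"
  links:
    - demo2
demo2:
  image: <demo2 image name>

# application.yml of the zuul server: route by the linked container name
zuul:
  routes:
    zuultest:
      url: http://demo2:8080
      stripPrefix: false
With the link in place, demo2 resolves to the other container's IP from inside the zuul container, so the routed service does not need a host port mapping at all.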

How did you start the 2 containers? Both cannot be exposed on the same port of the docker host.
docker run --name serviceA -p 8030:8030 ...
docker run --name serviceB -p 8031:8030 ...
Without this, when you call localhost:8030 you are calling the host (not the container), and you will not get a response.
You need to map each container's port to a different host port when you start them, and then call localhost with the right exposed port.

Related

Not able to access the RabbitMQ cluster which is set up using the RabbitMQ cluster operator

I have an AWS instance where I have minikube installed. I have also added the RabbitMQ cluster operator to it. After that I started a RabbitMQ cluster with 3 nodes. I can see the 3 pods, and their logs show no errors. The service for RabbitMQ is started as a LoadBalancer. When I list the URLs for the service, I get the RabbitMQ, RabbitMQ management UI and Prometheus ports. No external IP is generated for the service, so I used the patch command to assign one.
My issue is that the RabbitMQ cluster is running fine with no errors, but I am not able to access it using the public IP of the AWS instance, so other services cannot send messages to it.
Here are all the files --
clientq.yml file
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: clientq
spec:
  replicas: 3
  image: rabbitmq:3.9-management
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1
      memory: 2Gi
  rabbitmq:
    additionalConfig: |
      log.console.level = info
      channel_max = 1700
      default_user = guest
      default_pass = guest
      default_user_tags.administrator = true
  service:
    type: LoadBalancer
all setup --
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/clientq-server-0 1/1 Running 0 11m
pod/clientq-server-1 1/1 Running 0 11m
pod/clientq-server-2 1/1 Running 0 11m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
service/clientq LoadBalancer 10.108.225.186 12.27.54.12 5672:31063/TCP,15672:31340/TCP,15692:30972/TCP
service/clientq-nodes ClusterIP None <none> 4369/TCP,25672/TCP
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP
NAME READY AGE
statefulset.apps/clientq-server 3/3 11m
NAME ALLREPLICASREADY RECONCILESUCCESS AGE
rabbitmqcluster.rabbitmq.com/clientq True True 11m
Here 12.27.54.12 is the public IP of the instance, which I patched in using:
kubectl patch svc clientq -n default -p '{"spec": {"type": "LoadBalancer", "externalIPs":["12.27.54.12"]}}'
The URLs for the service are:
minikube service clientq --url
http://192.168.49.2:31063
http://192.168.49.2:31340
http://192.168.49.2:30972
I am able to curl these from the instance itself, but I am not able to access them from the public IP of the instance. Did I miss something, or is there a way to expose these ports? Please let me know.
I have enabled all ports for inbound and outbound traffic.
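One thing worth noting (my assumption, not stated in the question): the minikube node IP 192.168.49.2 only exists inside the AWS instance, so the NodePort URLs above are never reachable via the public IP unless something on the instance forwards traffic into minikube. A minimal sketch of such a forward, run on the instance itself:
# forward the AMQP and management-UI ports from all interfaces of the instance into the clientq service
kubectl port-forward --address 0.0.0.0 svc/clientq 5672:5672 15672:15672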

Apache web server custom index kubernetes

I have deployed an Apache web server on a kubernetes cluster using the standard httpd image from Docker Hub. I want to make changes to the index file so that it prints the container id instead of the default index page. How can I achieve this?
Answering the question:
How can I have an Apache container in Kubernetes that will output the id of the container in index.html or another .html file.
One of the ways this could be handled is with a lifecycle hook (specifically postStart):
PostStart
This hook is executed immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.
-- Kubernetes.io: Docs: Concepts: Containers: Container lifecycle hooks: Container hooks
As for an example of how such a setup could be implemented:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: httpd # <-- APACHE IMAGE
          # LIFECYCLE DEFINITION START
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "echo $HOSTNAME > htdocs/hostname.html"]
          # LIFECYCLE DEFINITION END
          ports:
            - containerPort: 80
Taking a specific look at:
command: ["/bin/sh", "-c", "echo $HOSTNAME > htdocs/hostname.html"]
This part writes the hostname of the container to hostname.html.
To check that each Pod has the hostname.html, you can create a Service and run either:
$ kubectl port-forward svc/apache 8080:80 -> curl localhost:8080/hostname.html
$ kubectl run -it --rm nginx --image=nginx -- /bin/bash -> curl apache/hostname.html
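For completeness, a minimal sketch of the Service those two commands assume (this manifest is not part of the original answer; the name apache and port 80 simply mirror the Deployment above):
apiVersion: v1
kind: Service
metadata:
  name: apache
spec:
  selector:
    app: apache   # matches the Deployment's pod labels
  ports:
    - port: 80
      targetPort: 80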
Additional resources:
Kubernetes.io: Docs: Tasks: Configure pod container: Attach handler lifecycle event: Define postStart and preStop handlers

How can I pass a external environment variable to drone docker runner?

The scenario is: I want to run docker build & push inside the Drone docker runner, and the docker registry and the docker runner are on the same server. So I want to pass the host IP as a variable into the Drone pipeline container, so I can push docker images without a remote registry server. But it seems that only Drone's own environment variables can be used in '${}'. I tried exporting EXTERNALIP on the host machine and reading ${EXTERNALIP}, but got nothing.
So is there some way I can get the external IP for communicating with localhost, or another way to achieve this?
You should be able to push to localhost if it's on the same host. That said, I was not able to do this using the packages plugin, but was able to replicate it using docker directly:
steps:
  - name: docker-${DRONE_EVENT}
    image: docker:19.03
    when:
      event: [ push, pull_request ]
      status: [ success ]
    environment:
      DOCKER_PASSWORD:
        from_secret: docker_password
    commands:
      - echo $DOCKER_PASSWORD | docker login --username user_name --password-stdin localhost
      - docker build -t localhost/demo-web:latest .
      - if [ "${DRONE_EVENT}" == "push" ]; then docker push localhost/demo-web:latest; fi;
    volumes:
      - name: docker-socket
        path: /var/run/docker.sock

volumes:
  - name: docker-socket
    host:
      path: /var/run/docker.sock
A couple of caveats: obviously you will need trusted access enabled in the repo configuration, or --trusted if using local exec. Enjoy!
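If you still need the real host IP inside a step, one possible approach (an assumption based on the docker runner's configuration options, not something verified in this answer) is to inject it globally when you start the runner, e.g. via DRONE_RUNNER_ENVIRON:
# docker-compose service for the drone docker runner (hostnames and secrets are illustrative)
drone-runner:
  image: drone/drone-runner-docker:1
  environment:
    DRONE_RPC_HOST: drone.example.com          # hypothetical
    DRONE_RPC_SECRET: shared-secret            # hypothetical
    DRONE_RUNNER_ENVIRON: EXTERNALIP:1.2.3.4   # injected into every pipeline step
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
Inside a step the value should then be readable as the environment variable $EXTERNALIP (shell expansion), rather than through Drone's ${} substitution.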

Confluent REST proxy API SSL handshake fails

I have a kafka cluster on docker using confluent images. I am using docker-compose to build the containers.
When I try to run the container, it starts but can't communicate with any broker due to an SSL handshake failure. I don't know if I am missing some configuration:
[kafka-admin-client-thread | adminclient-1] ERROR org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -3 (/XXX:19092) failed authentication due to: SSL handshake failed
My Kafka brokers are configured as follows:
kafka1:
  image: confluentinc/cp-kafka:5.2.2
  container_name: kafka1
  ports:
    - "19092:19092"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: XXX:12181,XXX:12181,XXX:12181
    KAFKA_ADVERTISED_LISTENERS: SSL://XXXX:19092
    KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker1.keystore.jks
    KAFKA_SSL_KEYSTORE_CREDENTIALS: broker1_keystore_creds
    KAFKA_SSL_KEY_CREDENTIALS: broker1_sslkey_creds
    KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker1.truststore.jks
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker1_truststore_creds
    KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    KAFKA_SSL_CLIENT_AUTH: required
    KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL
    KAFKA_SECURITY_PROTOCOL: SSL
  volumes:
    - ./../../secrets:/etc/kafka/secrets
I am trying to bring up a Confluent REST Proxy API in another container using the following configuration:
kafka-rest-proxy:
  image: confluentinc/cp-kafka-rest:5.2.2
  hostname: kafka-rest-proxy
  ports:
    - "18082:18082"
  environment:
    KAFKA_REST_LISTENERS: "http://0.0.0.0:18082"
    KAFKA_REST_ZOOKEEPER_CONNECT: XXX:12181,XXX:12181,XXX:12181
    KAFKA_REST_HOST_NAME: kafka-rest-proxy
    KAFKA_REST_BOOTSTRAP_SERVERS: SSL://XXX:19092,SSL://XXX:19092,SSL://XXX:19092
    KAFKA_REST_CLIENT_SECURITY_PROTOCOL: SSL
    KAFKA_REST_CLIENT_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.broker1.keystore.jks
    KAFKA_REST_CLIENT_SSL_KEYSTORE_PASSWORD: XXX
    KAFKA_REST_CLIENT_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.broker1.truststore.jks
    KAFKA_REST_CLIENT_SSL_TRUSTSTORE_PASSWORD: XXX
    KAFKA_REST_CLIENT_SSL_KEY_PASSWORD: XXX
    KAFKA_REST_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.producer.keystore.jks
    KAFKA_REST_SSL_KEYSTORE_PASSWORD: XXX
    KAFKA_REST_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.producer.truststore.jks
    KAFKA_REST_SSL_TRUSTSTORE_PASSWORD: XXX
  volumes:
    - ./../../secrets:/etc/kafka/secrets
I configured the SSL connection only with the truststore (I removed the keystore config completely) and I used the OPTS environment variable:
docker run -d \
--name krp \
-p 8082:8082 \
...
-v /home/ubuntu/kafka-keys:/kafka-keys \
-e KAFKA_REST_CLIENT_OPTS="-Dssl.keystore.location=/kafka-keys/kafka.client.keystore.jks -Dssl.keystore.password=changeit -Dssl.truststore.location=/kafka-keys/kafka.client.truststore.jks" \
confluentinc/cp-kafka-rest:5.3.1
And the connection worked.
In my case (Kubernetes with Helm) I had to change
"listeners":"http://0.0.0.0:8082" to "listeners":"https://0.0.0.0:8082"
I see the same mistake in your configuration:
KAFKA_REST_LISTENERS: "http://0.0.0.0:18082"
After that you will see at the end of the startup logs that it tries to load the keystore file.
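Applied to the compose file from the question, that would mean roughly the following change (a sketch, assuming the existing KAFKA_REST_SSL_* keystore and truststore settings stay as they are):
environment:
  KAFKA_REST_LISTENERS: "https://0.0.0.0:18082"   # https so the REST endpoint itself serves SSL
  KAFKA_REST_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.producer.keystore.jks
  KAFKA_REST_SSL_KEYSTORE_PASSWORD: XXX
  KAFKA_REST_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.producer.truststore.jks
  KAFKA_REST_SSL_TRUSTSTORE_PASSWORD: XXX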

Kubernetes Redis Cluster issue

I'm trying to create a Redis cluster using Kubernetes on CentOS. I have my Kubernetes master running on one host and Kubernetes slaves on 2 different hosts.
etcdctl get /kube-centos/network/config
{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
Here is my replication controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
        - name: master
          image: redis
          command:
            - "redis-server"
          args:
            - "/redis-master/redis.conf"
          ports:
            - containerPort: 6379
          volumeMounts:
            - mountPath: /redis-master
              name: config
            - mountPath: /redis-master-data
              name: data
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: redis-config
            items:
              - key: redis-config
                path: redis.conf
kubectl create -f rc.yaml
NAME READY STATUS RESTARTS AGE IP NODE
redis-master-149tt 1/1 Running 0 8s 172.30.96.4 centos-minion-1
redis-master-14j0k 1/1 Running 0 8s 172.30.79.3 centos-minion-2
redis-master-3wgdt 1/1 Running 0 8s 172.30.96.3 centos-minion-1
redis-master-84jtv 1/1 Running 0 8s 172.30.96.2 centos-minion-1
redis-master-fw3rs 1/1 Running 0 8s 172.30.79.4 centos-minion-2
redis-master-llg9n 1/1 Running 0 8s 172.30.79.2 centos-minion-2
Redis-config file used
appendonly yes
cluster-enabled yes
cluster-config-file /redis-master/nodes.conf
cluster-node-timeout 5000
dir /redis-master
port 6379
I used the following command to create the kubernetes service.
kubectl expose rc redis-master --name=redis-service --port=6379 --target-port=6379 --type=NodePort
Name: redis-service
Namespace: default
Labels: app=redis
role=master
tier=backend
Selector: app=redis,role=master,tier=backend
Type: NodePort
IP: 10.254.229.114
Port: <unset> 6379/TCP
NodePort: <unset> 30894/TCP
Endpoints: 172.30.79.2:6379,172.30.79.3:6379,172.30.79.4:6379 + 3 more...
Session Affinity: None
No events.
Now I have all the pods and the service up and running. I'm using a redis-trib pod to create the redis cluster.
kubectl exec -it redis-trib bash
./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379
The Redis cluster was created as expected, with the message below.
[OK] All 16384 slots covered.
Now I should be able to access my redis cluster on the kubernetes node IP (192.168.240.116) and NodePort (30894) from any host within my network. Everything works as expected when I execute the command below from one of the kubernetes nodes.
redis-cli -p 30894 -h 192.168.240.116 -c
192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
OK
172.30.79.4:6379>
When I run the same command from a different (non-kubernetes) node within the same network, I get a connection timed out error.
redis-cli -c -p 30894 -h 192.168.240.116
192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
Could not connect to Redis at 172.30.79.4:6379: Connection timed out
Is it not possible to access the redis cluster from outside the kubernetes cluster network when it is exposed using the NodePort service type?
Also, I cannot use the LoadBalancer service type as I'm not hosting this on a cloud provider.
I have been stuck on this issue for quite a while. Can someone suggest what approach I should use to access my redis cluster from outside my network?
Thanks
Running ./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379 doesn't make sense with this setup.
Port 6379 is only accessible through the service you brought up, never directly as you are trying to do. That's why you run into issues with this setup.
What you can do is expose each Pod with its own Service and add one additional cluster Service to load-balance external requests, as shown in the example repository from Kelsey Hightower and in the sketch below. This way the Pods can communicate through the internally exposed ports, and (external) clients can use the load-balanced cluster port. The implication is that each Pod then requires its own ReplicaSet (or Deployment). There's a long talk available on YouTube from Kelsey explaining the setup: YouTube / Slideshare.
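A minimal sketch of that per-Pod exposure idea (the names, labels and ports here are illustrative and assume the Pods have stable per-Pod labels, e.g. from a StatefulSet or from one Deployment per node; none of this is taken from the question):
# one Service per Redis node so each node is individually addressable
apiVersion: v1
kind: Service
metadata:
  name: redis-node-0
spec:
  selector:
    statefulset.kubernetes.io/pod-name: redis-node-0   # assumes a StatefulSet
  ports:
    - port: 6379
      targetPort: 6379
---
# one additional NodePort Service in front of all nodes for external clients
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-access
spec:
  type: NodePort
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
      nodePort: 30894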
An alternative would be to use a single redis master as shown in other examples.