How can I create a router and load-balanced service added to Traefik via consulCatalog?

I have Nextcloud running on bare metal on 2 nodes:
node1: 192.168.1.10
node2: 192.168.1.11
In Consul I have defined the nextcloud service as follows on both nodes:
{
  "service": {
    "name": "nextcloud",
    "tags": ["nextcloud", "traefik"],
    "port": 80,
    "check": {
      "tcp": "localhost:80",
      "args": ["ping", "-c1", "127.0.0.1"],
      "interval": "10s",
      "status": "passing",
      "success_before_passing": 3,
      "failures_before_critical": 3
    }
  }
}
This shows up in Consul fine.
Static config (traefik.yaml):
global:
  # Send anonymous usage data
  sendAnonymousUsage: true
api:
  dashboard: true
  debug: true
log:
  level: DEBUG
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
serversTransport:
  insecureSkipVerify: true
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    directory: "/config/"
    watch: true
  consulCatalog:
    defaultRule: "Host(`{{ .Name }}.sub.mydomain.com`)"
    endpoint:
      address: http://127.0.0.1:8500
certificatesResolvers:
  linode:
    acme:
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      email: myemail@domain.com
      storage: acme.json
      dnsChallenge:
        provider: linode
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"
And then the dynamic /config/config.yaml:
http:
  routers:
    nextcloud@consulCatalog:
      entryPoints:
        - "https"
      rule: "Host(`home.sub.mydomain.com`) && Path(`/nextcloud`)"
      tls:
        certResolver: linode
      service: nextcloud
  services:
    nextcloud:
      loadBalancer:
        servers:
          - url: http://192.168.1.10
          - url: http://192.168.1.11
        passHostHeader: true
But this shows up as a file provider router with TLS, in addition to the existing consulCatalog one, and it is not mapped to an IP or domain.
The actual consulCatalog provider shows up, but with no TLS.
I am wondering why my dynamic configuration under http did not update nextcloud@consulcatalog and set the https entrypoint.
Any help will be greatly appreciated; I am struggling very hard to get this to work.
I have tried following the Traefik docs, but they are very confusing, especially the consulCatalog part.

Your configuration is showing up as being defined via the file provider because you are defining it in the file at /config/config.yaml.
In order to have Traefik retrieve this configuration dynamically from Consul, you should not define it in the file provider; instead, configure tags on the Consul service registration that instruct Traefik how to route traffic to your service.
For example:
{
  "service": {
    "name": "nextcloud",
    "tags": [
      "nextcloud",
      "traefik.enable=true",
      "traefik.http.routers.nextcloud.entrypoints=https",
      "traefik.http.routers.nextcloud.rule=(Host(`home.sub.mydomain.com`) && Path(`/nextcloud`))",
      "traefik.http.routers.nextcloud.tls.certresolver=linode",
      "traefik.http.services.nextcloud.loadbalancer.passhostheader=true"
    ],
    "port": 80,
    "check": {
      "tcp": "localhost:80",
      "args": [
        "ping",
        "-c1",
        "127.0.0.1"
      ],
      "interval": "10s",
      "status": "passing",
      "success_before_passing": 3,
      "failures_before_critical": 3
    }
  }
}
More info can be found in the Routing Configuration docs for Traefik's Consul catalog provider.
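If you keep the service definition in a JSON file, the updated tags can be applied with the Consul CLI; a minimal sketch, assuming the definition above is saved as nextcloud.json (the file name is an assumption) next to a local agent:

# re-register the service with the routing tags
consul services register nextcloud.json

# or, if the definition lives in the agent's config directory, reload the agent
consul reload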

Related

Traefik listens on port 80 and forwards the request to the minio console (5000), 404

I deployed minio and its console in K8s, exposing ports 9000 and 5000 via a ClusterIP service.
Traefik listens on ports 80 and 5000 and forwards requests to minio.service (ClusterIP).
Requesting the console through port 5000 works fine.
Requesting the console on port 80 shows the console page, but its API requests return 404 in the browser.
apiVersion: v1
kind: Service
metadata:
  namespace: {{ .Release.Namespace }}
  name: minio-headless
  labels:
    app: minio-headless
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: server
      port: 9000
      targetPort: 9000
    - name: console
      port: 5000
      targetPort: 5000
  selector:
    app: minio
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingress-route-minio
  namespace: {{ .Release.Namespace }}
spec:
  entryPoints:
    - minio
    - web
  routes:
    - kind: Rule
      match: Host(`minio-console.{{ .Release.Namespace }}.k8s.zszc`)
      priority: 10
      services:
        - kind: Service
          name: minio-headless
          namespace: {{ .Release.Namespace }}
          port: 5000
          responseForwarding:
            flushInterval: 1ms
          scheme: http
          strategy: RoundRobin
          weight: 10
Traefik access log:
{
  "ClientAddr": "192.168.4.250:55485",
  "ClientHost": "192.168.4.250",
  "ClientPort": "55485",
  "ClientUsername": "-",
  "DownstreamContentSize": 19,
  "DownstreamStatus": 404,
  "Duration": 688075,
  "OriginContentSize": 19,
  "OriginDuration": 169976,
  "OriginStatus": 404,
  "Overhead": 518099,
  "RequestAddr": "minio-console.etb-0-0-1.k8s.zszc",
  "RequestContentSize": 0,
  "RequestCount": 1018,
  "RequestHost": "minio-console.etb-0-0-1.k8s.zszc",
  "RequestMethod": "GET",
  "RequestPath": "/api/v1/login",
  "RequestPort": "-",
  "RequestProtocol": "HTTP/1.1",
  "RequestScheme": "http",
  "RetryAttempts": 0,
  "RouterName": "traefik-traefik-dashboard-6e26dcbaf28841493448@kubernetescrd",
  "StartLocal": "2023-01-27T13:20:06.337540015Z",
  "StartUTC": "2023-01-27T13:20:06.337540015Z",
  "entryPointName": "web",
  "level": "info",
  "msg": "",
  "time": "2023-01-27T13:20:06Z"
}
It looks to me like the request for /api is conflicting with rules for the Traefik dashboard. If you look at the access log in your question, we see:
"RouterName": "traefik-traefik-dashboard-6e26dcbaf28841493448@kubernetescrd",
If you have installed Traefik from the Helm chart, it installs an IngressRoute with the following rules:
- kind: Rule
  match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
  services:
    - kind: TraefikService
      name: api@internal
In theory those are bound only to the traefik entrypoint, but it looks like you may have customized your entrypoint configuration.
Take a look at the IngressRoute resource for your Traefik dashboard and ensure that it's not sharing an entrypoint with minio.
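If the dashboard IngressRoute is the culprit, one option is to pin it to the internal traefik entrypoint so it no longer matches /api on web. A sketch, reusing the rule from the Helm chart above (the resource name is an assumption; adjust to your install):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
spec:
  entryPoints:
    - traefik   # keep the dashboard off the shared "web" entrypoint
  routes:
    - kind: Rule
      match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
      services:
        - kind: TraefikService
          name: api@internal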

Service discovery with Eureka is not working in a Docker container

When I run my API gateway in a Docker container, it is not able to find my services registered in Eureka.
API Gateway
-- ocelot.json
{
  "ReRoutes": [
    {
      "DownstreamPathTemplate": "/api/values",
      "DownstreamScheme": "http",
      "UseServiceDiscovery": true,
      "ServiceName": "sampleservice",
      "UpstreamPathTemplate": "/sample-api/{catchAll}"
    }
  ],
  "GlobalConfiguration": {
    "UseServiceDiscovery": true,
    "ServiceDiscoveryProvider": {
      "Type": "Eureka",
      "Host": "myeurekaserver",
      "Port": "8761"
    }
  }
}
-- appsettings.json for API Gateway
{
  "eureka": {
    "client": {
      "shouldRegisterWithEureka": false,
      "serviceUrl": "http://myeurekaserver:8761/eureka/",
      "ValidateCertificates": false
    },
    "instance": {
      "appName": "gateway",
      "hostName": "myeurekaserver",
      "port": "7000"
    }
  }
}
Service Configuration -- appsettings.json
{
  "eureka": {
    "client": {
      "shouldRegisterWithEureka": true,
      "serviceUrl": "http://myeurekaserver:8761/eureka/",
      "ValidateCertificates": false
    },
    "instance": {
      "appName": "sampleservice",
      "hostName": "myeurekaserver",
      "port": "7001"
    }
  }
}
docker-compose.yml
version: '3.4'
services:
  sampleapi:
    image: ${DOCKER_REGISTRY-}sampleapi
    ports:
      - "7001:80"
    networks:
      - ecnetwork
    build:
      context: .
      dockerfile: SampleAPI/Dockerfile
  gateway:
    image: ${DOCKER_REGISTRY-}gateway
    ports:
      - "7000:80"
    networks:
      - ecnetwork
    build:
      context: .
      dockerfile: Gateway/Dockerfile
  myeurekaserver:
    image: ${DOCKER_REGISTRY-}myeurekaserver
    ports:
      - "8761:8761"
    networks:
      - ecnetwork
    build:
      context: .
      dockerfile: MyEurekaServer/Dockerfile
networks:
  ecnetwork:
    external: true
When I run docker-compose up and check http://localhost:8761/, I find my services have been registered in the Eureka server, but when I open http://localhost:7000/sample-api/order it returns:
localhost is currently unable to handle this request. HTTP ERROR 500
When I check the console window, the API gateway is able to discover the services; here is the log.
gateway_1 | dbug: Steeltoe.Discovery.Eureka.DiscoveryClient[0]
gateway_1 | FetchRegistryDelta returned: OK
gateway_1 | dbug: Steeltoe.Discovery.Eureka.DiscoveryClient[0]
gateway_1 | FetchRegistry succeeded
It's an application error; check your API gateway app.
500 Internal Server Error is a generic error message, given when an unexpected condition was encountered and no more specific message is suitable.
Try to debug your application without Docker.
Check in Docker which port the service is registered on: 7000 or 80?
Then see whether that port is accessible locally via telnet.
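As a sketch of that hint: inside the compose network each app actually listens on container port 80 (the host mappings are 7000:80 and 7001:80), and each instance should advertise its own compose service name rather than the Eureka server's host. Assuming that diagnosis, the sample service's appsettings.json would become something like:

{
  "eureka": {
    "client": {
      "shouldRegisterWithEureka": true,
      "serviceUrl": "http://myeurekaserver:8761/eureka/",
      "ValidateCertificates": false
    },
    "instance": {
      "appName": "sampleservice",
      "hostName": "sampleapi",
      "port": "80"
    }
  }
}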

Cannot use Spring Cloud Config and Istio 1.1.1 together - cannot recover from HTTP 404 error when getting remote config

I'm trying to mix Spring Cloud Config with Istio 1.1.1. When my app container (with the Istio Envoy sidecar auto-injected) starts, the Spring Cloud Config client tries to get its config (applicationContext.yaml) from the remote config server (started in advance, in good health), but it fails with an HTTP 404 error. Even though I configured the client to retry, it keeps retrying and always gets HTTP 404 (I have confirmed from another container that the config server URL is correct), and it never recovers. This happens sometimes. I know the Envoy sidecar and my app run in the same Kubernetes pod, and the app may start before Envoy, in which case there can be network errors; but as soon as Envoy is up, everything should be OK. I really don't understand why my app cannot recover automatically. Here are my diagnostic steps:
1. Add a retry mechanism in my app (retry libs included in the POM, plus the yaml below) - retry works, but each retry fails with HTTP 404:
spring:
  cloud:
    config:
      fail-fast: true
      retry:
        initial-interval: 10000
        max-attempts: 100
2. Add a 'sleep xx' before my Java app starts in the app's k8s deployment file - less chance of the HTTP 404 error, but the problem is not eliminated (a more targeted variant is sketched after the access logs below):
command: ["/bin/sh","-c","sleep 20; java -jar -Xms512m -Xmx1024m app.jar"]
3. Get the Istio Envoy access logs and compare the victim app's with a good app's - the good log has values for the upstream_cluster and upstream_host keys, while in the bad log those fields are empty.
The good access log:
{
  "response_code": "200",
  "user_agent": "Java/1.8.0_121",
  "response_flags": "-",
  "start_time": "2019-06-25T01:17:29.661Z",
  "method": "2019-06-25T01:17:29.661Z",
  "request_id": "d3d27512-161b-4303-bb48-05a6e19e05b7",
  "upstream_host": "172.20.3.104:9083",
  "x_forwarded_for": "-",
  "requested_server_name": "-",
  "bytes_received": "0",
  "istio_policy_status": "-",
  "bytes_sent": "1144",
  "upstream_cluster": "outbound|9083||fota-spring-config.ns-fota.svc.cluster.local",
  "downstream_remote_address": "172.20.2.115:45816",
  "path": "/fota-spring-config/fota-task/dev/master",
  "authority": "fota-spring-config.ns-fota.svc.cluster.local:9083",
  "protocol": "HTTP/1.1",
  "upstream_service_time": "289",
  "upstream_local_address": "-",
  "duration": "290",
  "downstream_local_address": "172.21.1.152:9083"
}
The bad access log:
{
  "upstream_cluster": "-",
  "downstream_remote_address": "172.20.2.118:41980",
  "path": "/fota-spring-config/fota-dmserver/dev/master",
  "authority": "fota-spring-config.ns-fota.svc.cluster.local:9083",
  "protocol": "HTTP/1.1",
  "upstream_service_time": "-",
  "upstream_local_address": "-",
  "duration": "0",
  "downstream_local_address": "172.21.1.152:9083",
  "response_code": "404",
  "user_agent": "Java/1.8.0_121",
  "response_flags": "NR",
  "start_time": "2019-06-25T01:21:24.197Z",
  "method": "2019-06-25T01:21:24.197Z",
  "request_id": "346716e4-1def-465f-b370-cb1e71e30d25",
  "upstream_host": "-",
  "x_forwarded_for": "-",
  "requested_server_name": "-",
  "bytes_received": "0",
  "istio_policy_status": "-",
  "bytes_sent": "0"
}
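On the sleep workaround from step 2: instead of a fixed delay, the container command can poll the sidecar until it reports ready. A sketch, assuming the pilot-agent health endpoint on port 15020 (the Istio 1.1 default status port; verify for your mesh, and note curl must exist in the image):

command: ["/bin/sh","-c","until curl -fsS http://localhost:15020/healthz/ready; do sleep 1; done; java -jar -Xms512m -Xmx1024m app.jar"]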
The K8s deployment file is attached:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fota-car
spec:
  template:
    metadata:
      labels:
        app: fota-car
        version: v1
    spec:
      serviceAccountName: fota-serviceaccount
      imagePullSecrets:
        - name: uaes-docker2
      containers:
        - name: fota-car
          image: 192.168.119.22:18080/uaes-fota/fota-car:dev-release-1.0.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8085
          env:
            - name: SPRING_DATASOURCE_URL
              value: jdbc:mysql://mysql-ali-dev.ns-fota-ext-svc/fota-car?useUnicode=true&characterEncoding=utf-8&useSSL=false
            - name: SPRING_DATASOURCE_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mysql-ali-dev-secret
                  key: username
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-ali-dev-secret
                  key: password
          command: ["/bin/sh","-c","java -jar -Xms512m -Xmx1024m app.jar"]
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 18085
            initialDelaySeconds: 60
            timeoutSeconds: 1
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: fota-car
  name: fota-car
spec:
  ports:
    - name: http
      port: 8085
  selector:
    app: fota-car

Kafka Connect failing to read from Kafka topics over SSL

Running Kafka Connect in our Docker Swarm, with the following compose file:
cp-kafka-connect-node:
  image: confluentinc/cp-kafka-connect:5.1.0
  ports:
    - 28085:28085
  secrets:
    - kafka.truststore.jks
    - source: kafka-connect-aws-credentials
      target: /root/.aws/credentials
  environment:
    CONNECT_BOOTSTRAP_SERVERS: kafka01:9093,kafka02:9093,kafka03:9093
    CONNECT_LOG4J_ROOT_LEVEL: TRACE
    CONNECT_REST_PORT: 28085
    CONNECT_GROUP_ID: cp-kafka-connect
    CONNECT_CONFIG_STORAGE_TOPIC: dev_cp-kafka-connect-config
    CONNECT_OFFSET_STORAGE_TOPIC: dev_cp-kafka-connect-offsets
    CONNECT_STATUS_STORAGE_TOPIC: dev_cp-kafka-connect-status
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3
    CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: 'false'
    CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: 'false'
    CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_REST_ADVERTISED_HOST_NAME: localhost
    CONNECT_PLUGIN_PATH: /usr/share/java/
    CONNECT_SECURITY_PROTOCOL: SSL
    CONNECT_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
    CONNECT_SSL_TRUSTSTORE_PASSWORD: ********
    KAFKA_HEAP_OPTS: '-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2'
  deploy:
    replicas: 1
    resources:
      limits:
        cpus: '0.50'
        memory: 4gb
    restart_policy:
      condition: on-failure
      delay: 10s
      max_attempts: 3
      window: 2000s
secrets:
  kafka.truststore.jks:
    external: true
  kafka-connect-aws-credentials:
    external: true
The Kafka Connect node starts up successfully, and I am able to set up tasks and view their status.
I created a connector called kafka-sink with the following config:
"config": {
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "s3.region": "eu-central-1",
  "flush.size": "1",
  "schema.compatibility": "NONE",
  "tasks.max": "1",
  "topics": "input-topic-name",
  "s3.part.size": "5242880",
  "timezone": "UTC",
  "directory.delim": "/",
  "locale": "UK",
  "s3.compression.type": "gzip",
  "format.class": "io.confluent.connect.s3.format.bytearray.ByteArrayFormat",
  "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
  "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
  "name": "kafka-sink",
  "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
  "storage.class": "io.confluent.connect.s3.storage.S3Storage",
  "s3.bucket.name": "my-s3-bucket",
  "rotate.schedule.interval.ms": "60000"
}
This task now says that it is running.
When I did not include the SSL config, specifically:
CONNECT_BOOTSTRAP_SERVERS: kafka01:9093,kafka02:9093,kafka03:9093
CONNECT_SECURITY_PROTOCOL: SSL
CONNECT_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
CONNECT_SSL_TRUSTSTORE_PASSWORD: ********
and instead pointed to a bootstrap server that was exposed with no security:
CONNECT_BOOTSTRAP_SERVERS: insecurekafka:9092
It worked fine, and read from the appropriate input topic, and output to the S3 bucket with default partitioning...
However, when I run it using the SSL config against my secure kafka topic, it logs no errors, throws no exceptions, but does nothing at all despite data continuously being pushed to the input topic...
Am I doing something wrong?
This is my first time using Kafka Connect; normally I connect to Kafka from Spring Boot apps, where you just specify the truststore location and password in the config.
Am I missing some configuration in either my compose file or my task config?
I think you need to add SSL config for both the consumer and the producer; see Kafka Connect Encrypt with SSL.
Something like this:
security.protocol=SSL
ssl.truststore.location=~/kafka.truststore.jks
ssl.truststore.password=<password>
ssl.keystore.location=~/kafka.client.keystore.jks
ssl.keystore.password=<password>
ssl.key.password=<password>
producer.security.protocol=SSL
producer.ssl.truststore.location=~/kafka.truststore.jks
producer.ssl.truststore.password=<password>
producer.ssl.keystore.location=~/kafka.client.keystore.jks
producer.ssl.keystore.password=<password>
producer.ssl.key.password=<password>
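With the confluentinc/cp-kafka-connect image, worker properties are injected as environment variables (prefixed with CONNECT_, dots becoming underscores), so those overrides would land in the compose environment roughly as below; this is a sketch that reuses the truststore secret already mounted, and a keystore is only needed if the brokers require client authentication:

CONNECT_CONSUMER_SECURITY_PROTOCOL: SSL
CONNECT_CONSUMER_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
CONNECT_CONSUMER_SSL_TRUSTSTORE_PASSWORD: ********
CONNECT_PRODUCER_SECURITY_PROTOCOL: SSL
CONNECT_PRODUCER_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
CONNECT_PRODUCER_SSL_TRUSTSTORE_PASSWORD: ********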

How to parameterize ports in OpenShift JSON Project Template

I'm trying to create a custom project template in OpenShift Origin. The Service configuration, specifically, looks like this:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "${NAME}",
    "annotations": {
      "description": "Exposes and load balances the node.js application pods"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "web",
        "port": "${APPLICATION_PORT}",
        "targetPort": "${APPLICATION_PORT}",
        "protocol": "TCP"
      }
    ],
    "selector": {
      "name": "${NAME}"
    }
  }
},
where APPLICATION_PORT is supplied as a user parameter:
"parameters": [
  {
    "name": "APPLICATION_PORT",
    "displayName": "Application Port",
    "description": "The exposed port that will route to the node.js application",
    "value": "8000"
  },
When I try to use this template to create a project, I get the following error:
spec.ports[0].targetPort: Invalid value: "8000": must be an IANA_SVC_NAME (at most 15 characters, matching regex [a-z0-9]([a-z0-9-]*[a-z0-9])*...
I get a similar error in my DeploymentConfig as well, for the http ports in the liveness and readiness probes:
"readinessProbe": {
  "timeoutSeconds": 3,
  "initialDelaySeconds": 3,
  "httpGet": {
    "path": "/Info",
    "port": "${APPLICATION_ADMIN_PORT}"
  }
},
"livenessProbe": {
  "timeoutSeconds": 3,
  "initialDelaySeconds": 30,
  "httpGet": {
    "path": "/Info",
    "port": "${APPLICATION_ADMIN_PORT}"
  }
},
where APPLICATION_ADMIN_PORT, again, is user-supplied.
Error:
spec.template.spec.containers[0].livenessProbe.httpGet.port: Invalid value: "8001": must be an IANA_SVC_NAME...
spec.template.spec.containers[0].readinessProbe.httpGet.port: Invalid value: "8001": must be an IANA_SVC_NAME...
I've been following https://blog.openshift.com/part-2-creating-a-template-a-technical-walkthrough/ to understand templates, and it, unfortunately, does not have any examples of ports being parameterized anywhere.
It almost seems as if strings are not allowed as the values of these ports. Is that the case? What's the right way to parameterize these values? Should I switch to YAML?
Versions:
OpenShift Master: v1.1.6-3-g9c5694f
Kubernetes Master: v1.2.0-36-g4a3f9c5
Edit 1: I tried the same configuration in YAML format, and got the same error. So, JSON vs YAML is not the issue.
Unfortunately it is not currently possible to parameterize non-string field values: https://docs.openshift.org/latest/dev_guide/templates.html#writing-parameters
" Parameters can be referenced by placing values in the form "${PARAMETER_NAME}" in place of any string field in the template."
Templates are in the process of being upstreamed to Kubernetes and this limitation is being addressed there:
https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/templates.md
The proposal is being implemented in PRs 25622 and 25293 in the kubernetes repo.
edit:
Templates now support non-string parameters as documented here: https://docs.openshift.org/latest/dev_guide/templates.html#writing-parameters
I don't know if this option was available in 2016 when this post was added but now you can use ${{PARAMETER_NAME}} to parameterize non-string field values.
spec:
  externalTrafficPolicy: Cluster
  ports:
    - name: ${NAME}-port
      port: ${{PORT_PARAMETER}}
      protocol: TCP
      targetPort: ${{PORT_PARAMETER}}
  sessionAffinity: None
This may be a bad practice, but I'm using sed to substitute int parameters:
cat template.yaml | sed -e 's/PORT/8080/g' > proxy-template-subst.yaml
Template:
apiVersion: template.openshift.io/v1
kind: Template
objects:
  - apiVersion: v1
    kind: Service
    metadata:
      name: ${NAME}
      namespace: ${NAMESPACE}
    spec:
      externalTrafficPolicy: Cluster
      ports:
        - name: ${NAME}-port
          port: PORT
          protocol: TCP
          targetPort: PORT
      sessionAffinity: None
      type: NodePort
    status:
      loadBalancer: {}
parameters:
  - description: Desired service name
    name: NAME
    required: true
    value: need_real_value_here
  - description: IP address
    name: IP
    required: true
    value: need_real_value_here
  - description: namespace where to deploy
    name: NAMESPACE
    required: true
    value: need_real_value_here
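For completeness, the ${{PORT_PARAMETER}} form shown earlier avoids the sed step entirely; the template can then be instantiated directly with oc, supplying parameters on the command line (parameter names assumed from the examples above):

oc process -f template.yaml \
  -p NAME=myservice \
  -p NAMESPACE=myproject \
  -p PORT_PARAMETER=8080 \
  | oc apply -f -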