Why is dockerized config client unable to connect to dockerized config server - ssl

I'm out of ideas on this and would appreciate any suggestions. I have a handful of dockerized Spring Boot microservices, one of which is a config server. Here are the characteristics:
Spring Boot version 2.3.0.RELEASE
Standard Spring Cloud Config server with basic auth turned on.
3 Spring Boot microservices that are also config clients of the config server (a minimal client-side config sketch follows this list).
-- I use a simple Dockerfile model for the microservices and the Spring Boot Maven plugin with its default Docker layers capabilities.
SSL is enabled for all services, including the config server.
-- For dev and testing, I use a self-signed cert.
All microservices use a JKS to sign JWTs.
The Java Docker image is openjdk8 alpine.
Docker Compose is used to orchestrate container launch and settings.
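For context, the client side of this setup comes down to a few Spring Cloud Config Client properties. Here is a minimal bootstrap.yml sketch for shopping-svc (the URI matches the one used below; the credential values are placeholders and the fail-fast flag is an assumption based on the error further down):
spring:
  application:
    name: shopping-svc
  cloud:
    config:
      uri: https://app-dev.localhost.com:8443/config-server
      username: config-user       # basic auth credentials (placeholder values)
      password: config-password
      fail-fast: true             # consistent with "the fail fast property is set" in the error below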
The config server container runs perfectly fine. I can even query config from a browser at the HTTPS URL: https://app-dev.localhost.com:8443/config-server/shopping-svc/dev.
The Problem
I cannot manage to successfully start the 'shopping-svc' container. It fails with this error:
2023-01-25T23:44:12.375221300Z
2023-01-25 23:44:12.575 INFO 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : https://app-dev.localhost.com:8443/config-server
2023-01-25 23:44:12.829 INFO 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Connect Timeout Exception on Url - https://app-dev.localhost.com:8443/config-server. Will be trying the next url if available
2023-01-25 23:44:12.836 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application run failed
2023-01-25T23:44:12.837487100Z
java.lang.IllegalStateException: Could not locate PropertySource and the fail fast property is set, failing
at org.springframework.cloud.config.client.ConfigServicePropertySourceLocator.locate(ConfigServicePropertySourceLocator.java:155) ~[spring-cloud-config-client-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.cloud.bootstrap.config.PropertySourceLocator.locateCollection(PropertySourceLocator.java:52) ~[spring-cloud-context-2.2.9.RELEASE.jar:2.2.9.RELEASE]
at org.springframework.cloud.config.client.ConfigServicePropertySourceLocator.locateCollection(ConfigServicePropertySourceLocator.java:170) ~[spring-cloud-config-client-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.cloud.bootstrap.config.PropertySourceBootstrapConfiguration.initialize(PropertySourceBootstrapConfiguration.java:98) ~[spring-cloud-context-2.2.9.RELEASE.jar:2.2.9.RELEASE]
at org.springframework.boot.SpringApplication.applyInitializers(SpringApplication.java:626) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:370) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:314) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1237) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at com.shopping.app.ShoppingApplication.main(ShoppingApplication.java:35) [classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_212]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_212]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_212]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_212]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) [application/:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:109) [application/:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) [application/:na]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) [application/:na]
Caused by: org.springframework.web.client.ResourceAccessException: I/O error on GET request for "https://app-dev.localhost.com:8443/config-server/shopping-app/dev": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:748) ~[spring-web-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:674) ~[spring-web-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:583) ~[spring-web-5.2.6.RELEASE.jar:5.2.6.RELEASE]
Investigations
At first I thought port 8443 might somehow be blocked by my OS firewall, but that's not it: opening the port makes no difference.
Then I thought it might be a cert issue, so I tried supplying the cert differently via the JAVA_TOOL_OPTIONS argument with the trust store overrides populated: -Djavax.net.ssl.trustStore=/path/to/cert, etc. No dice.
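For illustration, a sketch of how that JAVA_TOOL_OPTIONS override could be wired into the compose service (the truststore path and password are placeholders, and this assumes the self-signed cert has been imported into that truststore):
  shopping-svc:
    environment:
      # placeholder truststore containing the self-signed cert
      JAVA_TOOL_OPTIONS: >-
        -Djavax.net.ssl.trustStore=/etc/certs/truststore.jks
        -Djavax.net.ssl.trustStorePassword=changeit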
I read several posts and articles suggesting that services inside Docker containers should refer to each other by service name. This confuses me a bit, since my certs are generated against a hostname, but I tried swapping the config server URL in the shopping-app YML to something like https://config-server:8443/config-server/ (and the same without HTTPS) to see if at least a successful connection would be made.
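One way to reconcile the two, sketched here as an assumption rather than a confirmed fix, is to keep the cert's hostname but let Docker DNS resolve it to the config server container via a network alias:
  config-server:
    networks:
      backend:
        aliases:
          - app-dev.localhost.com   # other containers on 'backend' resolve this name to config-server
Clients on the backend network could then keep calling https://app-dev.localhost.com:8443/config-server without the cert's hostname mismatching.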
The last thing I tried was changing the compose network driver to 'host' instead of 'bridge' so the containers would use the host machine's network config. The rationale was that, at least then, it's obvious everything is on the same network.
I'm not sure what or where to look anymore.
References
=====
Docker compose file:
version: "3"
networks:
default:
driver: bridge
frontend:
driver: bridge
backend:
driver: bridge
services:
config-server:
image: config-server
env_file: .env
hostname: app-dev.localhost.com # Not sure this is necessary I add this because the self signed cert was generated with this domain name
volumes: #I'm developping on windows, hence the backslash "\"
- shoppingapp:/var/opt
- shoppingapp\certs\server.jks:/etc/certs/server.jks
- shoppingapp\certs\ssl/app-dev.localhost.com.p12:/etc/certs/ssl/app-dev.localhost.com.p12
ports:
- "8443:8443"
networks:
- backend
shopping-svc:
image: shopping-svc
env_file: .env
hostname: app-dev.localhost.com # Not sure this is necessary I add this because the self signed cert was generated with this domain name
volumes:
- shoppingapp:/var/opt
- shoppingapp\certs\server.jks:/etc/certs/server.jks
- shoppingapp\certs\ssl\app-dev.localhost.com.p12:/etc/certs/ssl/app-dev.localhost.com.p12
ports:
- "8444:8444"
depends_on:
config-server:
condition: service_started
networks:
- backend
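Note that depends_on with condition: service_started only waits for the config-server container to start, not for the Spring application inside it to be ready. A hedged variant using a healthcheck (the health URL, the use of curl, and the timings are assumptions; curl must be present in the image):
  config-server:
    healthcheck:
      test: ["CMD", "curl", "-kf", "https://localhost:8443/config-server/actuator/health"]
      interval: 10s
      timeout: 5s
      retries: 10
  shopping-svc:
    depends_on:
      config-server:
        condition: service_healthy   # wait until the healthcheck passes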

Related

Elastic Search Docker search service error socket hang up

I have my docker-compose.override.yml set up below in Visual Studio 2022
elasticsearch:
  container_name: elasticsearch
  environment:
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    - discovery.type=single-node
  restart: always
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    elastic:
kibana:
  container_name: kibana
  restart: always
  environment:
    - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  depends_on:
    - elasticsearch
  ports:
    - "5601:5601"
  networks:
    elastic:
networks:
  elastic:
    driver: bridge
Every time I try to configure kibana, after supplying a token, I get the following error from the kibana container w/out kibana getting fully configured.
2023-02-06 21:40:38 [2023-02-07T04:40:38.210+00:00][WARN ][plugins.alerting] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
2023-02-06 21:40:38 [2023-02-07T04:40:38.366+00:00][INFO ][plugins.ruleRegistry] Installing common resources shared between all indices
2023-02-06 21:40:38 [2023-02-07T04:40:38.508+00:00][INFO ][plugins.cloudSecurityPosture] Registered task successfully [Task: cloud_security_posture-stats_task]
2023-02-06 21:40:39 [2023-02-07T04:40:39.605+00:00][INFO ][plugins.screenshotting.config] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
2023-02-06 21:40:39 [2023-02-07T04:40:39.741+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 192.168.128.3:60856, Remote: 192.168.128.2:9200
2023-02-06 21:40:40 [2023-02-07T04:40:40.773+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell
2023-02-06 21:40:42 [2023-02-07T04:40:42.194+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 192.168.128.3:44378, Remote: 192.168.128.2:9200
I am a bit at a loss as to why I can't get it configured. Any help appreciated.

How can I set kafka ACLs for brokers?

I have some problems configuring Kafka ACLs for the brokers.
I'm using the Bitnami docker-compose-cluster.yml for my project and I want to set up authentication for each broker.
I created a kafka_jaas.conf file with this content:
kafkabroker {
  security.protocol=SASL_PLAINTEXT
  sasl.mechanism=PLAIN
  sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required
  username="alice"
  password="******";
};
and added these lines to docker compose for each broker:
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT,SASL_PLAINTEXT:PLAINTEXT
KAFKA_CFG_LISTENERS=CLIENT://:29092,EXTERNAL://0.0.0.0:9092,SASL_PLAINTEXT://broker1:9095
KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://broker1:29092,EXTERNAL://******:9092,SASL_PLAINTEXT://localhost:9095
security.inter.broker.protocol=SASL_PLAINTEXT
and this line to server.properties:
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
after starting the docker compose, I receive this error for each broker:
[2022-12-06 06:47:19,679] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
broker1 | org.apache.kafka.common.KafkaException: Exception while loading Zookeeper JAAS login context [java.security.auth.login.config=/opt/bitnami/kafka/config/kafka_jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client]
broker1 | at org.apache.kafka.common.security.JaasUtils.isZkSaslEnabled(JaasUtils.java:67)
broker1 | at kafka.server.KafkaServer$.zkClientConfigFromKafkaConfig(KafkaServer.scala:79)
broker1 | at kafka.server.KafkaServer.<init>(KafkaServer.scala:149)
broker1 | at kafka.Kafka$.buildServer(Kafka.scala:73)
broker1 | at kafka.Kafka$.main(Kafka.scala:87)
broker1 | at kafka.Kafka.main(Kafka.scala)
broker1 | Caused by: java.lang.SecurityException: java.io.IOException: Configuration Error:
broker1 | Line 2: expected [controlFlag]
Update question:
this is docker compose content:
version: "2"
services:
zookeeper:
image: dockerhub.******/bitnami/zookeeper:3.8
hostname: zookeeper,SASL_PLAINTEXT://localhost:9091
container_name: zookeeper
ports:
- '2181:2181'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
- KAFKA_OPTS="-Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -Djava.security.auth.login.config=/opt/bitnami/kafka/config/zookeeper-server.jaas"
volumes:
- ./config:/opt/bitnami/kafka/config
kafka-0:
image: dockerhub.******/bitnami/kafka:3.2
hostname: broker1
container_name: broker1
ports:
- '9092:9092'
volumes:
- ./config/broker1:/bitnami
- ./config/broker1/kafka/config/server.properties:/bitnami/kafka/config/server.properties
environment:
- KAFKA_BROKER_ID=1
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
- KAFKA_CFG_LISTENERS=CLIENT://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://broker1:29092,EXTERNAL://*****:9092
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_INTER_BROKER_LISTENER_NAME=SASL_PLAINTEXT
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_CFG_LISTENER_NAME_EXTERNAL_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_CFG_LISTENER_NAME_EXTERNAL_PLAIN_SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required user_admin='admin-secret' user_producer='producer-secret' user_consumer='consu>
- KAFKA_CFG_LISTENER_NAME_CLIENT_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_CFG_LISTENER_NAME_CLIENT_PLAIN_SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required user_broker='broker-secret' username='broker' password='*****';"
- KAFKA_CFG_SUPER_USERS="User:broker;User:admin"
- KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND="false"
- KAFKA_CFG_ZOOKEEPER_SET_ACL="true"
- KAFKA_CFG_OPTS="-Djava.security.auth.login.config=/opt/bitnami/kafka/config/kafka-server.jaas"
depends_on:
- zookeeper
Your JAAS file is improperly formatted. The first two options are Kafka broker properties, not JAAS entries.
This is the minimum information you need, but you can add additional users here, too.
Refer to https://docs.confluent.io/platform/current/kafka/authentication_sasl/index.html
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="alice"
  password="******";
};
Regarding your container, you cannot edit the server properties directly; that's what the environment variables already do. Use KAFKA_CFG_AUTHORIZER_CLASS_NAME. Similarly, the inter-broker protocol needs to be uppercased and prefixed appropriately for the Bitnami container to pick it up.
Also, look at the README of the Bitnami Kafka container; it already has ways to configure authentication and user accounts via extra environment variables.
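A sketch of those two settings in Bitnami's environment-variable form (the KAFKA_CFG_ prefix maps a variable to the matching server property by lowercasing it and turning underscores into dots; verify the exact names against the Bitnami README):
      - KAFKA_CFG_AUTHORIZER_CLASS_NAME=kafka.security.authorizer.AclAuthorizer
      - KAFKA_CFG_SECURITY_INTER_BROKER_PROTOCOL=SASL_PLAINTEXT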

Mercure keeps binding to port 80

I'm using the Mercure hub 0.13. Everything works fine on my development machine, but on my test server the hub keeps trying to bind to port 80, resulting in an error, as nginx is already running on port 80.
run: loading initial config: loading new config: http app module: start: tcp: listening on :80: listen tcp :80: bind: address already in use
I'm starting the hub with the following command:
MERCURE_PUBLISHER_JWT_KEY=$(cat publisher.key.pub) \
MERCURE_PUBLISHER_JWT_ALG=RS256 \
MERCURE_SUBSCRIBER_JWT_KEY=$(cat publisher.key.pub) \
MERCURE_SUBSCRIBER_JWT_ALG=RS256 \
./mercure run -config Caddyfile.dev
Caddyfile.dev is as follows:
# Learn how to configure the Mercure.rocks Hub on https://mercure.rocks/docs/hub/config
{
    {$GLOBAL_OPTIONS}
}

{$SERVER_NAME:localhost:3000}

log

route {
    redir / /.well-known/mercure/ui/
    encode zstd gzip

    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt://mercure.db}
        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
        # Permissive configuration for the development environment
        cors_origins *
        publish_origins *
        demo
        anonymous
        subscriptions
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }

    respond /healthz 200
    respond "Not Found" 404
}
When I provide the SERVER_NAME as an environment variable without a domain, SERVER_NAME=:3000, the hub actually starts on port 3000, but it runs in HTTP mode, which only allows anonymous subscriptions and is not what I need.
Server:
Operating System: CentOS Stream 8
Kernel: Linux 4.18.0-383.el8.x86_64
Architecture: x86-64
Full output when trying to start the Mercure hub:
2022/05/10 04:50:29.605 INFO using provided configuration {"config_file": "Caddyfile.dev", "config_adapter": ""}
2022/05/10 04:50:29.606 WARN input is not formatted with 'caddy fmt' {"adapter": "caddyfile", "file": "Caddyfile.dev", "line": 3}
2022/05/10 04:50:29.609 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2022/05/10 04:50:29.610 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2022/05/10 04:50:29.610 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc0003d6150"}
2022/05/10 04:50:29.627 INFO tls cleaning storage unit {"description": "FileStorage:/root/.local/share/caddy"}
2022/05/10 04:50:29.628 INFO tls finished cleaning storage units
2022/05/10 04:50:29.642 INFO pki.ca.local root certificate is already trusted by system {"path": "storage:pki/authorities/local/root.crt"}
2022/05/10 04:50:29.643 INFO tls.cache.maintenance stopped background certificate maintenance {"cache": "0xc0003d6150"}
run: loading initial config: loading new config: http app module: start: tcp: listening on :80: listen tcp :80: bind: address already in use
I'm a bit late, but I hope this will help someone.
As mentioned here, you can specify the http_port manually in your Caddy configuration file.
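A minimal sketch of what that could look like in the global options block of Caddyfile.dev (the specific port numbers are illustrative):
{
    {$GLOBAL_OPTIONS}
    http_port 3000
    https_port 3001
}
With http_port set, the automatic HTTP->HTTPS redirect listener seen in the log above no longer tries to bind to :80, which is what collides with nginx.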

GCS Connector fails when enabling SASL, SSL, or SASL_SSL

I have been able to successfully connect the GCS connector without SASL or SSL enabled. When I enable SASL and SSL, connect-standalone does not seem to be able to communicate with the brokers.
The problem appears to be with the gcs-sink-license-manager. This is what I have found from the logs, but they aren't super helpful for actually figuring out what the issue is...
LOGS
[2018-12-19 16:29:05,645] INFO [AdminClient clientId=gcs-sink-license-manager] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager:238)
org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.
[2018-12-19 16:29:05,647] ERROR WorkerConnector{id=gcs-sink} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector:119)
org.apache.kafka.connect.errors.ConnectException: Timed out while checking for or creating topic(s) '_confluent-command'. This could indicate a connectivity issue, unavailable topic partitions, or if this is your first use of the topic it may have taken too long to create.
at org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:251)
at io.confluent.license.LicenseStore$1.run(LicenseStore.java:159)
at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:126)
at io.confluent.license.LicenseStore.start(LicenseStore.java:187)
at io.confluent.license.LicenseManager.<init>(LicenseManager.java:42)
at io.confluent.connect.gcs.GcsSinkConnector.checkLicense(GcsSinkConnector.java:80)
at io.confluent.connect.gcs.GcsSinkConnector.start(GcsSinkConnector.java:67)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)
at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:195)
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:241)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.startConnector(StandaloneHerder.java:297)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:206)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[2018-12-19 16:29:05,649] INFO Finished creating connector gcs-sink (org.apache.kafka.connect.runtime.Worker:257)
[2018-12-19 16:29:05,650] INFO Skipping reconfiguration of connector gcs-sink since it is not running (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:329)
[2018-12-19 16:29:05,652] INFO Created connector gcs-sink (org.apache.kafka.connect.cli.ConnectStandalone:104)
Connector Properties
connector.class="io.confluent.connect.gcs.GcsSinkConnector"
storage.class="io.confluent.connect.gcs.storage.GcsStorage"
bootstrap.servers=kafka1:19092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/usr/share/java,/usr/share/confluent-hub-components
gcs.sasl.properties
#Connector
format.class=io.confluent.connect.gcs.format.json.JsonFormat
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
flush.size=3
# confluent.license=
#GCS
name=gcs-sink
connector.class=io.confluent.connect.gcs.GcsSinkConnector
gcs.bucket.name=kafka-bucket-4c
gcs.part.size=5242880
gcs.credentials.path=/usr/share/assets/gcs-key.json
confluent.topic.bootstrap.servers=kafka1:19092
topics=sandbox
confluent.topic.replication.factor=1
#Storage
storage.class=io.confluent.connect.gcs.storage.GcsStorage
client.id=gcs-standalone-sink
# Sink authentication settings
consumer.log4j.root.loglevel=DEBUG
consumer.bootstrap.servers=kafka1:19092
consumer.sasl.mechanism=PLAIN
consumer.security.protocol=SASL_PLAINTEXT
consumer.ssl.endpoint.identification.algorithm=
Dockerfile
FROM confluentinc/cp-kafka-connect
ADD assets /usr/share/assets
# ENV CONNECT_OPTS "-Djava.security.auth.login.config=/usr/share/assets/kafka_admin_account.conf -Djavax.net.ssl.trustStore=/usr/share/assets/secrets/kafka.client.truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
ENV KAFKA_OPTS "-Djava.security.auth.login.config=/usr/share/assets/secrets/kafka_admin_account.conf -Djavax.net.debug=all"
ENV CONNECT_OPTS "-Djava.security.auth.login.config=/usr/share/assets/secrets/kafka_admin_account.conf -Djavax.net.debug=all"
COPY assets/secrets/cacerts /usr/lib/jvm/zulu-8-amd64/jre/lib/security/cacerts
CMD ["/bin/bash", "-c", "connect-standalone ${CONNECT_PROPS} ${GCS_PROPS}"]
docker-compose file
kafka1:
  image: company-kafka-secure
  # build: ./
  depends_on:
    - zookeeper
  ports:
    - 19091:19091
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://kafka1:19092,EXT://localhost:19091
    KAFKA_LISTENERS: SASL_PLAINTEXT://:19092,EXT://:19091
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SASL_PLAINTEXT:SASL_PLAINTEXT,EXT:SASL_PLAINTEXT
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
    KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
    KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
    KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS: 6000
    ZOOKEEPER_SASL_ENABLED: "false"
    KAFKA_AUTHORIZER_CLASS_NAME: com.us.digital.kafka.security.authorization.KafkaAuthorizer
    CONFLUENT_METRICS_ENABLE: "false"
  volumes:
    - ./secrets:/etc/kafka/secrets
  networks:
    - message_hub
kafka_gcs_connect:
  build: ./kafka-connect
  ports:
    - 28082:28082
  depends_on:
    - kafka1
    - kafka3
    - kafka2
    - zookeeper
  environment:
    CONNECT_PROPS: /usr/share/assets/connect-standalone.sasl.properties
    CONNECT_REST_PORT: 28082
    GCS_PROPS: /usr/share/assets/gcs.sasl.properties
  networks:
    - message_hub
CONNECT_BOOTSTRAP_SERVERS=kafka1:19092,kafka2:29092,kafka3:39092
CONNECT_CONFLUENT_TOPIC_BOOTSTRAP_SERVERS=kafka1:19092,kafka2:29092,kafka3:39092
CONNECT_CONFLUENT_LICENSE=
CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE=false
CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE=false
CONNECT_CONFIG_STORAGE_TOPIC=connect-config
CONNECT_OFFSET_STORAGE_TOPIC=connect-offsets
CONNECT_STATUS_STORAGE_TOPIC=connect-status
CONNECT_REPLICATION_FACTOR=1
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=1
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=1
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=1
CONNECT_SECURITY_PROTOCOL=SASL_PLAINTEXT
CONNECT_SASL_MECHANISM=PLAIN
CONNECT_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=
CONNECT_CONSUMER_BOOTSTRAP_SERVERS=kafka1:19092,kafka2:29092,kafka3:39092
CONNECT_CONSUMER_SECURITY_PROTOCOL=SASL_PLAINTEXT
CONNECT_CONSUMER_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=
CONNECT_CONSUMER_SASL_MECHANISM=PLAIN
CONNECT_GROUP_ID=gcs-kafka-connector
CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_REST_PORT=28082
CONNECT_PLUGIN_PATH=/usr/share/java,/usr/share/confluent-hub-components
KAFKA_OPTS=-Djava.security.auth.login.config=/usr/share/assets/kafka_admin_account.conf
These are all of the properties I found I needed to get SASL working with the GCS connector.

WildFly Swarm apps using an external ActiveMQ broker

I'm having a very hard time getting two WildFly Swarm apps (based on the 2017.9.5 version) to communicate with each other over a standalone ActiveMQ 5.14.3 broker. Everything is done using YAML config, as I can't have a main method in my case.
After reading hundreds of outdated examples and inaccurate pages of documentation, I settled on the following settings for both the producer and consumer apps:
swarm:
  messaging-activemq:
    servers:
      default:
        jms-topics:
          domain-events: {}
  messaging:
    remote:
      name: remote-mq
      host: localhost
      port: 61616
      jndi-name: java:/jms/remote-mq
      remote: true
Now it seems that at least part of the settings is correct, as the apps start, except for the following warning:
2017-09-16 14:20:04,385 WARN [org.jboss.activemq.artemis.wildfly.integration.recovery] (MSC service thread 1-2) AMQ122018: Could not start recovery discovery on XARecoveryConfig [transportConfiguration=[TransportConfiguration(name=, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&localAddress=::&host=localhost], discoveryConfiguration=null, username=null, password=****, JNDI_NAME=java:/jms/remote-mq], we will retry every recovery scan until the server is available
Also, when the producer tries to send messages, it just times out and I get the following exception (just the last part):
Caused by: javax.jms.JMSException: Failed to create session factory
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:727)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createXAConnection(ActiveMQConnectionFactory.java:304)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createXAConnection(ActiveMQConnectionFactory.java:300)
at org.apache.activemq.artemis.ra.ActiveMQRAManagedConnection.setup(ActiveMQRAManagedConnection.java:785)
... 127 more
Caused by: ActiveMQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=AMQ119013: Timed out waiting to receive cluster topology. Group:null]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:797)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:724)
... 130 more
I suspect the problem is that ActiveMQ has security turned on, but I found no place in the Swarm config to supply a username and password.
The ActiveMQ instance is running via Docker with the following compose file:
version: '2'
services:
  activemq:
    image: webcenter/activemq
    environment:
      - ACTIVEMQ_NAME=amqp-srv1
      - ACTIVEMQ_REMOVE_DEFAULT_ACCOUNT=true
      - ACTIVEMQ_ADMIN_LOGIN=admin
      - ACTIVEMQ_ADMIN_PASSWORD=your_password
      - ACTIVEMQ_WRITE_LOGIN=producer_login
      - ACTIVEMQ_WRITE_PASSWORD=producer_password
      - ACTIVEMQ_READ_LOGIN=consumer_login
      - ACTIVEMQ_READ_PASSWORD=consumer_password
      - ACTIVEMQ_JMX_LOGIN=jmx_login
      - ACTIVEMQ_JMX_PASSWORD=jmx_password
      - ACTIVEMQ_MIN_MEMORY=1024
      - ACTIVEMQ_MAX_MEMORY=4096
      - ACTIVEMQ_ENABLED_SCHEDULER=true
    ports:
      - "1883:1883"
      - "5672:5672"
      - "8161:8161"
      - "61616:61616"
      - "61613:61613"
      - "61614:61614"
Any idea what's going wrong?
I had a hard time getting this working too. The following YAML solved my problem:
swarm:
  network:
    socket-binding-groups:
      standard-sockets:
        outbound-socket-bindings:
          myapp-socket-binding:
            remote-host: localhost
            remote-port: 61616
  messaging-activemq:
    servers:
      default:
        remote-connectors:
          myapp-connector:
            socket-binding: myapp-socket-binding
        pooled-connection-factories:
          myAppRemote:
            user: username
            password: password
            connectors:
              - myapp-connector
            entries:
              - 'java:/jms/remote-mq'