Adding a custom authentication jar to Kafka

Currently, I am using the PlainLoginModule to authenticate users. However, I have now created a jar with the code listed here and want to use it instead of PlainLoginModule: https://cwiki.apache.org/confluence/display/KAFKA/KIP-86%3A+Configurable+SASL+callback+handlers#KIP-86:ConfigurableSASLcallbackhandlers-sample_plainSampleCallbackHandlerforSASL/PLAIN.
I have placed the jar file into the ~/libs folder, added
listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.synopsys.demo.DemoApplication
to my server.properties, and set my kafka_server_jaas.conf to:
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};
And when I start up my server, I get the error:
Part 1:
14:36:41.924 [main] DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Successfully authenticated with swe-analyticsdb-prod2/10.15.164.233
14:36:41.924 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Successfully authenticated with swe-analyticsdb-prod2/10.15.164.233
14:36:41.924 [Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Controller 1 connected to swe-analyticsdb-prod2:9093 (id: 1 rack: null) for sending state change requests
14:36:41.925 [data-plane-kafka-network-thread-1-ListenerName(SASL_SSL)-SASL_SSL-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer brokerId=1] Connection with swe-analyticsdb-prod2.internal.synopsys.com/10.15.164.233 disconnected
java.io.EOFException: null
    at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:573)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:94)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:830)
    at kafka.network.Processor.run(SocketServer.scala:730)
    at java.lang.Thread.run(Thread.java:748)
14:36:41.925 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name connections-closed:
14:36:41.925 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name connections-created:
14:36:41.925 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name successful-authentication:
14:36:41.925 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name successful-reauthentication:
14:36:41.925 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name successful-authentication-no-reauth:
14:36:41.926 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name failed-authentication:
14:36:41.926 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name failed-reauthentication:
14:36:41.926 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name reauthentication-latency:
14:36:41.926 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-sent-received:
14:36:41.927 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-sent:
14:36:41.927 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-received:
14:36:41.927 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name select-time:
14:36:41.927 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name io-time:
14:36:41.928 [main] WARN kafka.utils.CoreUtils$ - org.apache.kafka.common.requests.ControlledShutdownRequest$Builder.<init>(IJS)V
java.lang.NoSuchMethodError: org.apache.kafka.common.requests.ControlledShutdownRequest$Builder.<init>(IJS)V
    at kafka.server.KafkaServer.doControlledShutdown$1(KafkaServer.scala:520)
    at kafka.server.KafkaServer.controlledShutdown(KafkaServer.scala:563)
    at kafka.server.KafkaServer.$anonfun$shutdown$2(KafkaServer.scala:585)
    at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:86)
    at kafka.server.KafkaServer.shutdown(KafkaServer.scala:585)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:342)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at kafka.Kafka$.main(Kafka.scala:75)
    at kafka.Kafka.main(Kafka.scala)
14:36:41.929 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutting down
14:36:41.929 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Stopped
14:36:41.929 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutdown completed
14:36:41.930 [main] INFO kafka.network.SocketServer - [SocketServer brokerId=1] Stopping socket server request processors
14:36:41.931 [data-plane-kafka-socket-acceptor-ListenerName(SASL_SSL)-SASL_SSL-9093] DEBUG kafka.network.Acceptor - Closing server socket and selector.
14:36:41.933 [data-plane-kafka-network-thread-1-ListenerName(SASL_SSL)-SASL_SSL-0] DEBUG kafka.network.Processor - Closing selector - processor 0
14:36:41.934 [data-plane-kafka-network-thread-1-ListenerName(SASL_SSL)-SASL_SSL-0] DEBUG kafka.network.Processor - Closing selector connection 10.15.164.233:9093-10.15.164.233:44774-0
14:36:41.935 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Connection with swe-analyticsdb-prod2/10.15.164.233 disconnected
Part 2:
07:22:21.223 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler.
07:22:21.223 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutting down
07:22:21.254 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Initiating connection to node swe-analyticsdb-prod2:9093 (id: 1 rack: null) using address swe-analyticsdb-prod2/10.15.164.233
07:22:21.254 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - Set SASL client state to SEND_APIVERSIONS_REQUEST
07:22:21.254 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - Creating SaslClient: client=null;service=kafka;serviceHostname=swe-analyticsdb-prod2;mechs=[PLAIN]
07:22:21.255 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Connection with swe-analyticsdb-prod2/10.15.164.233 disconnected
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.kafka.common.network.SslTransportLayer.finishConnect(SslTransportLayer.java:119)
    at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:216)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:531)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539)
    at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:74)
    at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:282)
    at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:236)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
07:22:21.255 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Node 1 disconnected.
07:22:21.255 [Controller-1-to-broker-1-send-thread] WARN org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Connection to node 1 (swe-analyticsdb-prod2/10.15.164.233:9093) could not be established. Broker may not be available.
07:22:21.256 [Controller-1-to-broker-1-send-thread] WARN kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Controller 1's connection to broker swe-analyticsdb-prod2:9093 (id: 1 rack: null) was unsuccessful
java.io.IOException: Connection to swe-analyticsdb-prod2:9093 (id: 1 rack: null) failed.
    at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
    at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:282)
    at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:236)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
07:22:21.356 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Initiating connection to node swe-analyticsdb-prod2:9093 (id: 1 rack: null) using address swe-analyticsdb-prod2/10.15.164.233
07:22:21.356 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - Set SASL client state to SEND_APIVERSIONS_REQUEST
07:22:21.356 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - Creating SaslClient: client=null;service=kafka;serviceHostname=swe-analyticsdb-prod2;mechs=[PLAIN]
07:22:21.357 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Connection with swe-analyticsdb-prod2/10.15.164.233 disconnected
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.kafka.common.network.SslTransportLayer.finishConnect(SslTransportLayer.java:119)
    at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:216)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:531)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539)
    at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:74)
    at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:282)
    at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:236)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
07:22:21.357 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Node 1 disconnected.
07:22:21.357 [Controller-1-to-broker-1-send-thread] WARN org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Connection to node 1 (swe-analyticsdb-prod2/10.15.164.233:9093) could not be established. Broker may not be available.
UPDATE
I notice this behavior occurring even if I don't use the jar/class I made, but just leave it inside the "../libs" directory. The error above always occurs, whether I use built-in or custom AuthenticateCallbackHandler classes.
Am I missing a step or steps? I know I have to add the jar to Kafka so it can recognize and use it, but I don't see any tutorials/documentation that explain how to use a custom callback handler with PLAIN. Does anyone know how to do this?
I am using Kafka 2.2.
My custom class code:
import org.apache.kafka.common.errors.AuthenticationException;
import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
import org.apache.kafka.common.security.plain.PlainAuthenticateCallback;
import kafka.common.KafkaException;
import javax.naming.AuthenticationNotSupportedException;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.AppConfigurationEntry;
import java.io.IOException;
import java.util.Hashtable;
import java.util.List;
import java.util.Map;

public class CustomCallback implements AuthenticateCallbackHandler {

    @Override
    public void configure(Map<String, ?> configs, String mechanism, List<AppConfigurationEntry> jaasConfigEntries) {
    }

    @Override
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        String username = null;
        for (Callback callback : callbacks) {
            if (callback instanceof NameCallback)
                username = ((NameCallback) callback).getDefaultName();
            else if (callback instanceof PlainAuthenticateCallback) {
                PlainAuthenticateCallback plainCallback = (PlainAuthenticateCallback) callback;
                boolean authenticated = authenticate(username, plainCallback.password());
                plainCallback.authenticated(authenticated);
            } else
                throw new UnsupportedCallbackException(callback);
        }
    }

    protected boolean authenticate(String username, char[] password) throws IOException {
        if (username == null)
            return false;
        else {
            // Return true if password matches expected password
            Hashtable<String, String> environment = new Hashtable<String, String>();
            System.out.println("Custom class is being called");
            environment.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            environment.put(Context.PROVIDER_URL, "ldap://adldap.internal.synopsys.com:389");
            environment.put(Context.SECURITY_AUTHENTICATION, "simple");
            environment.put(Context.SECURITY_PRINCIPAL, "CN=" + username + ",CN=Users,DC=internal,DC=synopsys,DC=com");
            environment.put(Context.SECURITY_CREDENTIALS, new String(password));
            try {
                DirContext context = new InitialDirContext(environment);
                context.getEnvironment();
                context.close();
                return true;
            } catch (AuthenticationNotSupportedException exception) {
                System.out.println("The authentication is not supported by the server");
                return false;
            } catch (AuthenticationException exception) {
                System.out.println("Incorrect password or username");
                return false;
            } catch (NamingException exception) {
                System.out.println("Error when trying to create the context");
                return false;
            }
        }
    }

    @Override
    public void close() throws KafkaException {
    }

    public static void main(String[] args) throws IOException {
        char[] pass = new char[]{'P', '0', 'm', 'e', 'l', '0', '2', '0', '1', '9', '!'};
        CustomCallback test = new CustomCallback();
        System.out.println(test.authenticate("<username>", pass));
        System.out.println(test.getClass().getName());
        //SpringApplication.run(DemoApplication.class, args);
    }
}
server.properties contents:
advertised.listeners=SASL_SSL://<machine name>:9093
ssl.endpoint.identification.algorithm=HTTPS
ssl.client.auth=required
ssl.truststore.location=/remote/sde108/kafka/kafka/SSL2/client/server.truststore.jks
ssl.truststore.password=password
ssl.keystore.location=/remote/sde108/kafka/kafka/SSL2/client/server.keystore.jks
ssl.keystore.password=password
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
zookeeper.set.acl=false
listeners=SASL_SSL://<machine name>:9093
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
offsets.retention.minutes=1
#listener.name.sasl_sasl.plain.sasl.server.callback.handler.class=<package name>.CustomCallbackApplication
pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>synopsys</groupId>
    <artifactId>synopsys</artifactId>
    <version>1.0-SNAPSHOT</version>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>6</source>
                    <target>6</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.12</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.25</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.25</version>
        </dependency>
    </dependencies>
</project>
The META-INF was made using Project Structure and Build Artifacts.

I think I replied to your email two days ago. I customized the SASL/PLAIN authentication mechanism by storing the username/password in MySQL instead of a file. I also found KIP-86 very confusing because it provides different ways to do the same thing and does not explain the differences between them.
This is what I did and what works:
The interface I implemented is AuthenticateCallbackHandler.
The generated jar should not be placed under ~/libs. There is a libs subdirectory under the directory where you installed Kafka; put it there.
I did not modify kafka_server_jaas.conf.
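For reference, a minimal sketch of that wiring (the path and the class name com.example.CustomCallback are illustrative, not the asker's actual values): copy the jar into the Kafka installation's libs directory, then point the listener-scoped property in server.properties at the handler class itself:
cp custom-callback.jar /opt/kafka/libs/
listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.CustomCallback
Note that the configured class must be the AuthenticateCallbackHandler implementation, not a Spring application class.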

Related

Error occurs when calling CuratorCache.start: Unable to read additional data from server sessionid

As the title says, something goes wrong when calling CuratorCache.start. The problem occurs in my project, so I created a small test project to reproduce it.
Env
JDK 17 (or JDK 11)
spring-cloud-starter-zookeeper-all 3.1.0 (with curator-recipes 5.1.0) or curator-recipes 5.2.0
ZooKeeper 3.6.x or 3.7.0
macOS Monterey or CentOS 7.4
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.example</groupId>
    <artifactId>curator-test</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-zookeeper-all</artifactId>
            <version>3.1.0</version>
            <exclusions>
                <exclusion>
                    <artifactId>curator-recipes</artifactId>
                    <groupId>org.apache.curator</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.curator</groupId>
            <artifactId>curator-recipes</artifactId>
            <version>5.2.0</version>
        </dependency>
    </dependencies>
</project>
Preparing data
Add some string data to the ZooKeeper path /test/1.
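For example, with the ZooKeeper CLI (connection string and value are illustrative):
zkCli.sh -server localhost:2181
create /test ""
create /test/1 "some-value"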
Test code
var curator = CuratorFrameworkFactory.builder()
        .connectString("localhost:2181")
        .retryPolicy(new ExponentialBackoffRetry(1000, 3))
        .build();
curator.start();

var bytes = curator.getData().forPath("/test/1");
System.out.println("value for path /test/1 is " + new String(bytes));

var curatorCache = CuratorCache.builder(curator, "/test").build();
curatorCache.listenable().addListener(
        CuratorCacheListener.builder()
                .forCreates(node -> System.out.println(String.format("Node created: [%s]", node)))
                .forChanges((oldNode, node) -> System.out.println(String.format("Node changed. Old: [%s] New: [%s]", oldNode, node)))
                .forDeletes(oldNode -> System.out.println(String.format("Node deleted. Old value: [%s]", oldNode)))
                .forInitialized(() -> System.out.println("Cache initialized"))
                .build()
);
curatorCache.start();
The test code mostly comes from the official CuratorCache example. As the test code shows, I can read data from the ZooKeeper path, but when I call CuratorCache.start this exception is thrown:
2022-01-23 16:07:53.578 INFO 55099 --- [ main] org.apache.zookeeper.ClientCnxnSocket : jute.maxbuffer value is 1048575 Bytes
2022-01-23 16:07:53.579 INFO 55099 --- [ main] org.apache.zookeeper.ClientCnxn : zookeeper.request.timeout value is 0. feature enabled=false
2022-01-23 16:07:53.580 INFO 55099 --- [ main] o.a.c.f.imps.CuratorFrameworkImpl : Default schema
2022-01-23 16:07:53.586 INFO 55099 --- [16.153.68:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server 172.16.153.68/172.16.153.68:2181.
2022-01-23 16:07:53.586 INFO 55099 --- [16.153.68:2181)] org.apache.zookeeper.ClientCnxn : SASL config status: Will not attempt to authenticate using SASL (unknown error)
2022-01-23 16:07:53.588 INFO 55099 --- [16.153.68:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established, initiating session, client: /192.168.195.34:49599, server: 172.16.153.68/172.16.153.68:2181
2022-01-23 16:07:53.639 INFO 55099 --- [16.153.68:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server 172.16.153.68/172.16.153.68:2181, session id = 0x1007127ce8c071d, negotiated timeout = 40000
2022-01-23 16:07:53.640 INFO 55099 --- [ain-EventThread] o.a.c.f.state.ConnectionStateManager : State change: CONNECTED
2022-01-23 16:07:53.644 INFO 55099 --- [ain-EventThread] o.a.c.framework.imps.EnsembleTracker : New config event received: {}
2022-01-23 16:07:53.644 INFO 55099 --- [ain-EventThread] o.a.c.framework.imps.EnsembleTracker : New config event received: {}
2022-01-23 16:07:53.829 WARN 55099 --- [ main] iguration$LoadBalancerCaffeineWarnLogger : Spring Cloud LoadBalancer is currently working with the default cache. You can switch to using Caffeine cache, by adding it and org.springframework.cache.caffeine.CaffeineCacheManager to the classpath.
2022-01-23 16:07:53.909 INFO 55099 --- [ main] org.test.Application : Started Application in 1.966 seconds (JVM running for 3.09)
2022-01-23 16:07:53.917 INFO 55099 --- [tor-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl : backgroundOperationsLoop exiting
2022-01-23 16:07:53.929 WARN 55099 --- [16.153.68:2181)] org.apache.zookeeper.ClientCnxn : An exception was thrown while closing send thread for session 0x1007127ce8c071d.
org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable to read additional data from server sessionid 0x1007127ce8c071d, likely server has closed socket
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) ~[zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) ~[zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290) ~[zookeeper-3.6.3.jar:3.6.3]
2022-01-23 16:07:54.036 INFO 55099 --- [ain-EventThread] org.apache.zookeeper.ClientCnxn : EventThread shut down for session: 0x1007127ce8c071d
2022-01-23 16:07:54.036 INFO 55099 --- [ionShutdownHook] org.apache.zookeeper.ZooKeeper : Session: 0x1007127ce8c071d closed
Does anybody have an idea? Thanks for your comments.

How do I configure HikariCP correctly to work with connections that have to stay active?

I am using Spring Boot 2.4.0 with Spring Boot Data JPA to connect to PostgreSQL and perform typical read and write operations with JPA-based repositories. Since the database is also used by other services, I use the LISTEN/NOTIFY functionality (https://www.postgresql.org/docs/9.1/sql-listen.html) to be notified about changes from PostgreSQL. For this I use the driver com.impossibl.postgres.jdbc.PGDriver instead of the default driver, and the following code to make Spring listen for changes to the database:
@Service
class PostgresChangeListener(
    val dataSource: HikariDataSource,
    @Qualifier("dbToPGReceiverQueue") val postgresQueue: RBlockingQueue<String>
) {
    init {
        listenToNotifyMessage()
    }

    final fun listenToNotifyMessage() {
        val notificationListener = object : PGNotificationListener {
            override fun notification(processId: Int, channelName: String, payload: String) {
                log.info("Received change from PostgresQL: $processId, $channelName, $payload")
                postgresQueue.add(payload)
            }

            override fun closed() {
                log.debug("Connection to Postgres lost! Try to reconnect...")
                listenToNotifyMessage()
            }
        }
        try {
            val connection = DataSourceUtils.getConnection(dataSource).unwrap(PGConnection::class.java)
            connection.addNotificationListener(notificationListener)
            connection.createStatement().use { statement -> statement.execute("LISTEN change_notifier;") }
        } catch (e: SQLException) {
            throw RuntimeException(e)
        }
    }
}
This is the Kotlin implementation of the listener described here: https://impossibl.github.io/pgjdbc-ng/docs/current/user-guide/#extensions-notifications
The listener works; however, after one or more days I get the following error:
2021-03-03 06:33:00.185 WARN 1 --- [nio-8080-exec-8] o.s.b.a.jdbc.DataSourceHealthIndicator : DataSource health check failed
org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30001ms.
...
To find the problem, i enabled logging from Hikari as recommended on https://github.com/brettwooldridge/HikariCP/issues/1111#issuecomment-569552070. Here is the output of an excerpt of the logs:
2021-03-02 21:31:59.055 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
...
2021-03-02 21:31:59.055 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
2021-03-02 22:00:53.139 DEBUG 1 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Closing connection com.impossibl.postgres.jdbc.PGDirectConnection#201ab69f: (connection has passed maxLifetime)
2021-03-02 22:00:53.162 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.impossibl.postgres.jdbc.PGDirectConnection#f2ffd1ea
2021-03-02 22:00:54.709 DEBUG 1 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Closing connection com.impossibl.postgres.jdbc.PGDirectConnection#3bb847ef: (connection has passed maxLifetime)
2021-03-02 22:00:54.730 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.impossibl.postgres.jdbc.PGDirectConnection#fd5932d7
2021-03-02 22:00:59.110 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
2021-03-02 22:00:59.111 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Fill pool skipped, pool is at sufficient level.
2021-03-02 22:01:04.782 DEBUG 1 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Closing connection com.impossibl.postgres.jdbc.PGDirectConnection#1d081266: (connection has passed maxLifetime)
2021-03-02 22:01:04.803 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.impossibl.postgres.jdbc.PGDirectConnection#e0b396bc
2021-03-02 22:01:09.295 DEBUG 1 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Closing connection com.impossibl.postgres.jdbc.PGDirectConnection#a2b0bd29: (connection has passed maxLifetime)
2021-03-02 22:01:09.313 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.impossibl.postgres.jdbc.PGDirectConnection#ca9c8226
2021-03-02 22:01:10.075 DEBUG 1 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Closing connection com.impossibl.postgres.jdbc.PGDirectConnection#ec8746aa: (connection has passed maxLifetime)
2021-03-02 22:01:10.093 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.impossibl.postgres.jdbc.PGDirectConnection#aff2bfd8
2021-03-02 22:01:12.820 DEBUG 1 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Closing connection com.impossibl.postgres.jdbc.PGDirectConnection#a7e0fc39: (connection has passed maxLifetime)
2021-03-02 22:01:12.840 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.impossibl.postgres.jdbc.PGDirectConnection#d637554
2021-03-02 22:01:15.099 DEBUG 1 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Closing connection com.impossibl.postgres.jdbc.PGDirectConnection#dadcba66: (connection has passed maxLifetime)
2021-03-02 22:01:15.119 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.impossibl.postgres.jdbc.PGDirectConnection#e29805ef
2021-03-02 22:01:21.558 DEBUG 1 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Closing connection com.impossibl.postgres.jdbc.PGDirectConnection#762f0753: (connection has passed maxLifetime)
2021-03-02 22:01:21.576 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.impossibl.postgres.jdbc.PGDirectConnection#d5b8d008
2021-03-02 22:01:23.351 DEBUG 1 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Closing connection com.impossibl.postgres.jdbc.PGDirectConnection#5e4721b0: (connection has passed maxLifetime)
2021-03-02 22:01:23.370 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.impossibl.postgres.jdbc.PGDirectConnection#a8606b56
2021-03-02 22:01:29.111 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
2021-03-02 22:01:29.111 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Fill pool skipped, pool is at sufficient level.
2021-03-02 22:01:59.112 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=1, idle=9, waiting=0)
...
To me the log looks correct, but after a while the active connections increase more and more...
...
2021-03-03 06:31:29.664 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=9, idle=1, waiting=0)
2021-03-03 06:31:48.687 DEBUG 1 --- [nnection closer] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Closing connection com.impossibl.postgres.jdbc.PGDirectConnection#4fa5ec41: (connection is dead)
2021-03-03 06:31:48.707 DEBUG 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.impossibl.postgres.jdbc.PGDirectConnection#693052fe
2021-03-03 06:31:48.709 DEBUG 1 --- [nnection closer] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Fill pool skipped, pool is at sufficient level.
2021-03-03 06:31:59.665 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=10, idle=0, waiting=1)
2021-03-03 06:31:59.665 DEBUG 1 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Fill pool skipped, pool is at sufficient level.
2021-03-03 06:32:20.199 DEBUG 1 --- [io-8080-exec-10] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Timeout failure stats (total=10, active=10, idle=0, waiting=2)
2021-03-03 06:32:20.208 WARN 1 --- [io-8080-exec-10] o.s.b.a.jdbc.DataSourceHealthIndicator : DataSource health check failed
org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
...
... until it ends in the error message described above.
How do I need to configure Hikari correctly, or change my code, to avoid the errors described? I hope you can help.
Not the same issue, but I had a similar problem. When my database was restarted, Hikari couldn't close the active listener connection, and the whole notification mechanism stopped working.
I found a possible solution for this. The reason Hikari can't close the connection when it's dead is that you are unwrapping the PGConnection from the proxied connection here:
DataSourceUtils.getConnection(dataSource).unwrap(PGConnection::class.java)
After this you attach a notificationListener to the PGConnection, so it remains alive.
First things first: to avoid leaking connections from the Hikari pool, you should separate the two connections, and after initializing the listener you should close the Hikari connection.
private lateinit var hikariConnection: Connection
...
hikariConnection = DataSourceUtils.getConnection(dataSource)
val pgConnection: PGConnection = hikariConnection.unwrap(PGConnection::class.java)
// ... init the listener ...
hikariConnection.close()
And in PGNotificationListener.closed() you have to reinitialize the listener by getting a new connection from the datasource. But beware: getting a new connection while the Hikari pool is refilling (because the database outage lasted only a few seconds) can block; the two can deadlock each other. We solved it by getting the new connection on a dedicated new thread.
override fun closed() {
    // Sketch: get a new PGConnection and start listening again, on a dedicated
    // thread so it cannot block against HikariCP refilling its pool.
    Thread { listenToNotifyMessage() }.start()
}
Sorry if this doesn't exactly answer your question, but it may help someone.

Application runtime exceptions are not being sent to errorChannel, or ServiceActivator not able to listen on errorChannel

After listening on a Kafka topic using @StreamListener, upon a RuntimeException neither the global errorChannel nor the topic-specific error channel (topic.group.errors) receives any error message, and the @ServiceActivator receives nothing.
POM dependencies (Greenwich.RELEASE):
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-schema</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>
application.properties
spring.cloud.stream.bindings.input.destination=input
spring.cloud.stream.bindings.input.group=myGroup
spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true
spring.cloud.stream.kafka.streams.bindings.input.consumer.enableDlq=true
spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqName=input_deadletter
spring.cloud.stream.kafka.streams.bindings.input.consumer.autoCommitOnError=true
spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde=io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde=io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
spring.cloud.stream.bindings.output.destination=output
spring.cloud.stream.bindings.output.content-Type=application/*+avro
spring.cloud.stream.bindings.output.producer.useNativeEncoding=true
spring.cloud.stream.bindings.output.producer.errorChannelEnabled=true
spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
spring.cloud.stream.schemaRegistryClient.endpoint.schema.avro.schema-locations=classpath:avro/*.avsc
spring.cloud.stream.kafka.streams.binder.brokers=localhost
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000
spring.cloud.stream.kafka.streams.binder.configuration.schema.registry.url=http://localhost:8082
spring.cloud.stream.kafka.streams.binder.application-id=myGroup
spring.cloud.stream.kafka.streams.binder.serdeError=sendtodlq
I can see in the logs that the service activator is registered and subscribed to the error channels, but all the streams are stopped and go into shutdown mode once the runtime exception occurs.
Registering beans for JMX exposure on startup
org.springframework.integration.monitor.IntegrationMBeanExporter - Registering MessageChannel input.myGroup.errors
org.springframework.integration.monitor.IntegrationMBeanExporter - Located managed bean 'org.springframework.integration:type=MessageChannel,name="input-myGroup.errors"': registering with JMX server as MBean [org.springframework.integration:type=MessageChannel,name="input.myGroup.errors"] org.springframework.integration.monitor.IntegrationMBeanExporter - Registering MessageChannel errorChannel
org.springframework.integration.monitor.IntegrationMBeanExporter - Located managed bean 'org.springframework.integration:type=MessageChannel,name=errorChannel': registering with JMX server as MBean [org.springframework.integration:type=MessageChannel,name=errorChannel]
org.springframework.integration.monitor.IntegrationMBeanExporter - Registering MessageChannel nullChannel
org.springframework.integration.monitor.IntegrationMBeanExporter - Located managed bean 'org.springframework.integration:type=MessageChannel,name=nullChannel': registering with JMX server as MBean [org.springframework.integration:type=MessageChannel,name=nullChannel]
org.springframework.integration.monitor.IntegrationMBeanExporter - Registering MessageHandler errorLogger
org.springframework.integration.monitor.IntegrationMBeanExporter - Located managed bean 'org.springframework.integration:type=MessageHandler,name=errorLogger,bean=internal': registering with JMX server as MBean [org.springframework.integration:type=MessageHandler,name=errorLogger,bean=internal]
org.springframework.integration.monitor.IntegrationMBeanExporter - Registering MessageHandler myTopicListener.error.serviceActivator
org.springframework.integration.monitor.IntegrationMBeanExporter - Located managed bean 'org.springframework.integration:type=MessageHandler,name=myTopicListener.error.serviceActivator,bean=endpoint': registering with JMX server as MBean [org.springframework.integration:type=MessageHandler,name=myTopicListener.error.serviceActivator,bean=endpoint]
org.springframework.integration.monitor.IntegrationMBeanExporter - Registering MessageHandler myTopicListener.errorGlobal.serviceActivator
org.springframework.integration.monitor.IntegrationMBeanExporter - Located managed bean 'org.springframework.integration:type=MessageHandler,name=myTopicListener.errorGlobal.serviceActivator,bean=endpoint': registering with JMX server as MBean [org.springframework.integration:type=MessageHandler,name=myTopicListener.errorGlobal.serviceActivator,bean=endpoint]
org.springframework.kafka.annotation.KafkaListenerAnnotationBeanPostProcessor - No #KafkaListener annotations found on bean type: class org.springf
@SendTo(MyStreams.OUTPUT)
public KStream<Key, MyEntity> process(KStream<Key, Envelope> myStreamObject) {
    return myStreamObject.mapValues(this::transform);
}

@ServiceActivator(inputChannel = "input.myGroup.errors") // channel name 'input.myGroup.errors'
public void error(Message<?> message) {
    System.out.println("Handling ERROR: " + message);
}

@ServiceActivator(inputChannel = "errorChannel")
public void errorGlobal(Message<?> message) {
    System.out.println("Handling ERROR: GLOBAL " + message);
}
The Kafka Streams binder is not based on MessageChannels, so there is no Message<?> to send to the error channel.
The standard Kafka binder is a MessageChannelBinder and supports the error channel.
With Kafka Streams you have to implement your own error handling.
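For illustration, a minimal sketch of such do-it-yourself handling inside the topology (reusing the process() method above; deadLetter() is a hypothetical helper, not a binder API), so one bad record does not shut the stream down:
@SendTo(MyStreams.OUTPUT)
public KStream<Key, MyEntity> process(KStream<Key, Envelope> myStreamObject) {
    return myStreamObject
            .mapValues(value -> {
                try {
                    return transform(value);
                } catch (RuntimeException e) {
                    deadLetter(value, e); // hypothetical: log or publish to a DLQ topic yourself
                    return null;          // flag the record as failed
                }
            })
            .filter((key, value) -> value != null); // drop failed records, keep the stream running
}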

RESTEASY003200: Could not find message body reader for type: class org.glassfish.jersey.media.multipart.FormDataMultiPart

I want to upload an image and send some parameters to my Java REST service. I have added jersey-media-multipart to my pom.xml and set the necessary configuration in my ApplicationConfig class. I am using WildFly 11 as the application server, but I continuously get this exception.
11:29:36,199 ERROR [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-35) RESTEASY002010: Failed to execute: javax.ws.rs.NotSupportedException: RESTEASY003200: Could not find message body reader for type: class org.glassfish.jersey.media.multipart.FormDataMultiPart of content type: multipart/form-data;boundary=--------------------------291101341234694996301314
at org.jboss.resteasy.core.interception.ServerReaderInterceptorContext.throwReaderNotFound(ServerReaderInterceptorContext.java:53)
at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.getReader(AbstractReaderInterceptorContext.java:80)
at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:53)
at org.jboss.resteasy.security.doseta.DigitalVerificationInterceptor.aroundReadFrom(DigitalVerificationInterceptor.java:36)
at org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:59)
at org.jboss.resteasy.core.MessageBodyParameterInjector.inject(MessageBodyParameterInjector.java:151)
at org.jboss.resteasy.core.MethodInjectorImpl.injectArguments(MethodInjectorImpl.java:92)
at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:115)
at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295)
at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249)
at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:236)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:406)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:213)
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:228)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
at com.ocpsoft.pretty.PrettyFilter.doFilter(PrettyFilter.java:145)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at com.mepsan.outra.global.LoginFilter.doFilter(LoginFilter.java:41)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at org.wildfly.extension.undertow.deployment.GlobalRequestControllerHandler.handleRequest(GlobalRequestControllerHandler.java:68)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292)
at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81)
at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138)
at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135)
at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:326)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:812)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
ApplicationConfig.java
@javax.ws.rs.ApplicationPath("resources")
public class ApplicationConfig extends Application {
    .....
    @Override
    public Map<String, Object> getProperties() {
        Map<String, Object> props = new HashMap<>();
        props.put("jersey.config.server.provider.classnames",
                "org.glassfish.jersey.media.multipart.MultiPartFeature");
        return props;
    }
}
MobileSource.java
@Path("mobile")
public class MobileSource {

    @POST
    @Path("profile/upload")
    @Consumes(MediaType.MULTIPART_FORM_DATA)
    public Response execute(FormDataMultiPart multi) {
    }
}
pom.xml
<dependency>
    <groupId>org.glassfish.jersey.media</groupId>
    <artifactId>jersey-media-multipart</artifactId>
    <version>2.19</version>
</dependency>
Another way:
@POST
@Path("profile/upload")
@Consumes(MediaType.MULTIPART_FORM_DATA)
public Response execute(@FormDataParam("image") InputStream imgstream,
        @FormDataParam("data") String s) {
    //multi.getF
    try {
        System.out.println("S " + s);
        byte[] imgdata = inputStreamToByte(imgstream);
        //System.out.println("DATA STR " + data);
        //System.out.println("THE PARAM " + mstr);
        System.out.println("THE PARAM S " + imgdata.length);
    } catch (IOException ex) {
        Logger.getLogger(MobileSource.class.getName()).log(Level.SEVERE, null, ex);
    }
    return Response.ok().build();
}
When I use this way to handle the request, I just see this obscure result in the log.
13:07:09,911 INFO [stdout] (default task-41) S ----------------------------336065642279870055586849
13:07:09,911 INFO [stdout] (default task-41) Content-Disposition: form-data; name="data"
13:07:09,911 INFO [stdout] (default task-41)
13:07:09,911 INFO [stdout] (default task-41) test
13:07:09,911 INFO [stdout] (default task-41) ----------------------------336065642279870055586849
13:07:09,911 INFO [stdout] (default task-41) Content-Disposition: form-data; name="image"; filename="eagle.jpg"
13:07:09,911 INFO [stdout] (default task-41) Content-Type: image/jpeg
13:07:09,911 INFO [stdout] (default task-41)
13:07:09,911 INFO [stdout] (default task-41) ????.....
(garbled binary output continues...)
13:07:16,804 INFO [stdout] (default task-41) ----------------------------336065642279870055586849--
13:07:16,804 INFO [stdout] (default task-41)
13:07:16,804 INFO [stdout] (default task-41) THE PARAM S 0
FINISH
For people who face the same issue: this took me two days. We need to use JBoss RESTEasy's multipart support, because our application server is JBoss (I learned this recently; by the way, I had used lots of third-party Apache libraries without using Apache Tomcat :p). Anyway, we add the multipart provider dependency to our pom.xml:
<dependency>
    <groupId>org.jboss.resteasy</groupId>
    <artifactId>resteasy-multipart-provider</artifactId>
    <version>3.1.4.Final</version>
    <scope>provided</scope>
</dependency>
We do nothing with the ApplicationConfig class. And here is the REST part:
@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Path("upload")
public Response up1(MultipartFormDataInput input) throws IOException {
    List<InputPart> jsonpart = input.getFormDataMap().get("jsondata");
    List<InputPart> imgpart = input.getFormDataMap().get("image");
    InputStream is2 = input.getFormDataPart("image", InputStream.class, null);
    String jsonStr = input.getFormDataPart("jsondata", String.class, null);
    return Response.ok().build(); // added so the snippet compiles
}
That's all.

SimpleMessageListenerContainer Error Handling

I'm using a SimpleMessageListenerContainer as a basis for remoting over AMQP. Everything goes smoothly provided that the RabbitMQ broker can be reached at process startup. However, if for any reason it can't be reached (network down, permissions problem, etc.), the container just keeps retrying to connect forever. How can I set up retry behaviour in this case (for example, try at most 5 times with an exponential backoff and then abort, killing the process)? I've had a look at this, but it doesn't seem to work for me on container startup. Can anyone please shed some light?
At the very least, I'd like to be able to catch the exception and provide a log message, instead of printing the exception itself as is the default behaviour.
How can I set up a retry behaviour in this case
There is no sophisticated connection retry, just a simple recoveryInterval. The assumption is that the broker unavailability is temporary. Fatal errors (such as bad credentials) stop the container.
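For example (setter as exposed on Spring AMQP's listener container):
container.setRecoveryInterval(10000); // wait 10 seconds between connection recovery attempts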
You could use some external process to try connectionFactory.createConnection() and stop() the SimpleMessageListenerContainer when you deem it's time to give up.
You could also subclass CachingConnectionFactory, override createBareConnection to catch the exception and increment the recoveryInterval, then call stop() when you want.
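A rough sketch of that subclassing idea (assuming a Spring AMQP version in which createBareConnection() is overridable; the threshold of 5 and what you do when it is reached are illustrative):
import java.util.concurrent.atomic.AtomicInteger;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.Connection;

public class GiveUpConnectionFactory extends CachingConnectionFactory {

    private final AtomicInteger failures = new AtomicInteger();

    @Override
    protected Connection createBareConnection() {
        try {
            Connection connection = super.createBareConnection();
            failures.set(0); // connected again, reset the counter
            return connection;
        } catch (RuntimeException e) {
            if (failures.incrementAndGet() >= 5) {
                // time to give up: stop the listener container, log, or exit the process
            }
            throw e;
        }
    }
}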
EDIT
Since 1.5, you can now configure a backOff. Here's an example using Spring Boot...
@SpringBootApplication
public class RabbitBackOffApplication {

    public static void main(String[] args) {
        SpringApplication.run(RabbitBackOffApplication.class, args);
    }

    @Bean(name = "rabbitListenerContainerFactory")
    public SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory(
            SimpleRabbitListenerContainerFactoryConfigurer configurer,
            ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        configurer.configure(factory, connectionFactory);
        BackOff recoveryBackOff = new FixedBackOff(5000, 3);
        factory.setRecoveryBackOff(recoveryBackOff);
        return factory;
    }

    @RabbitListener(queues = "foo")
    public void listen(String in) {
    }
}
and
2018-04-16 12:08:35.730 INFO 84850 --- [ main] com.example.RabbitBackOffApplication : Started RabbitBackOffApplication in 0.844 seconds (JVM running for 1.297)
2018-04-16 12:08:40.788 WARN 84850 --- [cTaskExecutor-1] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused (Connection refused)
2018-04-16 12:08:40.788 INFO 84850 --- [cTaskExecutor-1] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer#57abad67: tags=[{}], channel=null, acknowledgeMode=AUTO local queue size=0
2018-04-16 12:08:40.789 INFO 84850 --- [cTaskExecutor-2] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:1234]
2018-04-16 12:08:45.851 WARN 84850 --- [cTaskExecutor-2] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused (Connection refused)
2018-04-16 12:08:45.852 INFO 84850 --- [cTaskExecutor-2] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer#3479ea: tags=[{}], channel=null, acknowledgeMode=AUTO local queue size=0
2018-04-16 12:08:45.852 INFO 84850 --- [cTaskExecutor-3] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:1234]
2018-04-16 12:08:50.935 WARN 84850 --- [cTaskExecutor-3] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused (Connection refused)
2018-04-16 12:08:50.935 INFO 84850 --- [cTaskExecutor-3] o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer#2be60f67: tags=[{}], channel=null, acknowledgeMode=AUTO local queue size=0
2018-04-16 12:08:50.936 INFO 84850 --- [cTaskExecutor-4] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:1234]
2018-04-16 12:08:50.938 WARN 84850 --- [cTaskExecutor-4] o.s.a.r.l.SimpleMessageListenerContainer : stopping container - restart recovery attempts exhausted