I am getting this exception when using Jedis with spring-data-redis in a multi-threaded environment:
org.springframework.data.redis.RedisSystemException: Unknown redis exception; nested exception is java.lang.ClassCastException: [B cannot be cast to java.lang.Long
at org.springframework.data.redis.FallbackExceptionTranslationStrategy.getFallback(FallbackExceptionTranslationStrategy.java:48)
at org.springframework.data.redis.FallbackExceptionTranslationStrategy.translate(FallbackExceptionTranslationStrategy.java:38)
at org.springframework.data.redis.connection.jedis.JedisConnection.convertJedisAccessException(JedisConnection.java:241)
at org.springframework.data.redis.connection.jedis.JedisConnection.rPush(JedisConnection.java:1705)
at org.springframework.data.redis.core.DefaultListOperations$14.doInRedis(DefaultListOperations.java:187)
at org.springframework.data.redis.core.DefaultListOperations$14.doInRedis(DefaultListOperations.java:184)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:207)
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:169)
at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:91)
at org.springframework.data.redis.core.DefaultListOperations.rightPush(DefaultListOperations.java:184)
at XXXXXXXXXXXXXXX
Caused by: java.lang.ClassCastException: [B cannot be cast to java.lang.Long
at redis.clients.jedis.Connection.getIntegerReply(Connection.java:265)
at redis.clients.jedis.BinaryJedis.rpush(BinaryJedis.java:1053)
at org.springframework.data.redis.connection.jedis.JedisConnection.rPush(JedisConnection.java:1703)
... 19 common frames omitted
jedis version: 2.9.0
spring-data-redis version: 1.8.12.RELEASE
redis server version: 3.0.6
My Client Java Code:
// Init JedisConnectionFactory
JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
jedisPoolConfig.setMaxTotal(maxActive);
jedisPoolConfig.setMaxIdle(maxIdle);
jedisPoolConfig.setMaxWaitMillis(maxWait);
jedisPoolConfig.setTestOnBorrow(true);
JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory();
jedisConnectionFactory.setPoolConfig(jedisPoolConfig);
jedisConnectionFactory.setHostName(host);
jedisConnectionFactory.setPort(port);
jedisConnectionFactory.setTimeout(timeout);
jedisConnectionFactory.setPassword(password);
jedisConnectionFactory.afterPropertiesSet();
// Create RedisTemplate
redisTemplate = new RedisTemplate<String, Object>();
redisTemplate.setConnectionFactory(jedisConnectionFactory);
redisTemplate.setEnableTransactionSupport(true);
StringRedisSerializer serializer = new StringRedisSerializer();
redisTemplate.setKeySerializer(serializer);
redisTemplate.setValueSerializer(serializer);
redisTemplate.setHashKeySerializer(serializer);
redisTemplate.setHashValueSerializer(serializer);
redisTemplate.afterPropertiesSet();
Finally, after reading the spring-data-redis source code, I solved my problem by removing this line:
redisTemplate.setEnableTransactionSupport(true);
You should share the pool and get a separate Jedis from it in every thread. See more on GitHub.
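For illustration, here is a minimal sketch of that pattern with plain Jedis (no Spring), assuming a local Redis on localhost:6379; the key name and thread count are arbitrary:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class SharedPoolExample {
    // One pool for the whole application; every task borrows its own Jedis.
    private static final JedisPool POOL =
            new JedisPool(new JedisPoolConfig(), "localhost", 6379);

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            final int n = i;
            executor.submit(() -> {
                // try-with-resources returns the connection to the pool when done
                try (Jedis jedis = POOL.getResource()) {
                    jedis.rpush("mylist", "value-" + n);
                }
            });
        }
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
        POOL.destroy();
    }
}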
I would really appreciate it if someone could help me out.
I have an Ignite server written in Java and a client written in C#. The client can connect to the server and read the server's cache correctly.
However, once the server is restarted, the client receives the EVT_CLIENT_NODE_RECONNECTED event from the server, but the cache can no longer be used.
Server code:
CacheConfiguration cacheConfiguration = new CacheConfiguration();
cacheConfiguration.setName("Sample");
cacheConfiguration.setCacheMode(CacheMode.REPLICATED);
cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheConfiguration.setRebalanceMode(CacheRebalanceMode.ASYNC);
cacheConfiguration.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
cacheConfiguration.setBackups(0);
cacheConfiguration.setCopyOnRead(true);
cacheConfiguration.setStoreKeepBinary(false);
cacheConfiguration.setReadThrough(false);
cacheConfiguration.setWriteThrough(true);
cacheConfiguration.setWriteBehindEnabled(true);
cacheConfiguration.setWriteBehindFlushFrequency(2000);
cacheConfiguration.setWriteBehindFlushThreadCount(2);
DriverManagerDataSource theDataSource = new DriverManagerDataSource();
theDataSource.setDriverClassName("org.postgresql.Driver");
theDataSource.setUrl("jdbc:postgresql://192.168.224.128:5432/sample");
theDataSource.setUsername("postgres");
theDataSource.setPassword("password");
CacheJdbcPojoStoreFactory jdbcPojoStoreFactory = new CacheJdbcPojoStoreFactory<Long, SampleModel>()
.setParallelLoadCacheMinimumThreshold(0)
.setMaximumPoolSize(1)
.setDataSource(theDataSource);
cacheConfiguration.setCacheStoreFactory(jdbcPojoStoreFactory);
Collection<JdbcType> jdbcTypes = new ArrayList<JdbcType>();
JdbcType jdbcType = new JdbcType();
jdbcType.setCacheName("Sample");
jdbcType.setDatabaseSchema("public");
jdbcType.setKeyType("java.lang.Long");
Collection<JdbcTypeField> keys = new ArrayList<JdbcTypeField>();
keys.add(new JdbcTypeField(Types.BIGINT, "id", long.class, "id"));
jdbcType.setKeyFields(keys.toArray(new JdbcTypeField[keys.size()]));
Collection<JdbcTypeField> vals = new ArrayList<JdbcTypeField>();
jdbcType.setDatabaseTable("sample");
jdbcType.setValueType("com.nmf.SampleModel");
vals.add(new JdbcTypeField(Types.BIGINT, "id", long.class, "id"));
vals.add(new JdbcTypeField(Types.VARCHAR, "name", String.class, "name"));
jdbcType.setValueFields(vals.toArray(new JdbcTypeField[vals.size()]));
jdbcTypes.add(jdbcType);
((CacheJdbcPojoStoreFactory)cacheConfiguration.getCacheStoreFactory()).setTypes(jdbcTypes.toArray(new JdbcType[jdbcTypes.size()]));
IgniteConfiguration icfg = new IgniteConfiguration();
icfg.setCacheConfiguration(cacheConfiguration);
Ignite ignite = Ignition.start(icfg);
SampleModel:
public class SampleModel implements Serializable {
private long id;
private String Name;
public long getId() {
return id;
}
public void setId(long id) {
this.id = id;
}
public String getName() {
return Name;
}
public void setName(String name) {
Name = name;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof SampleModel)) return false;
SampleModel that = (SampleModel) o;
return id == that.id;
}
@Override
public int hashCode() {
return (int) (id ^ (id >>> 32));
}
}
Client Code:
ExecutorService executor = Executors.newSingleThreadExecutor(r -> new Thread(r, "worker"));
CacheConfiguration cacheConfiguration = new CacheConfiguration();
cacheConfiguration.setName("Sample");
cacheConfiguration.setCacheMode(CacheMode.REPLICATED);
cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheConfiguration.setRebalanceMode(CacheRebalanceMode.ASYNC);
cacheConfiguration.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
cacheConfiguration.setBackups(0);
cacheConfiguration.setCopyOnRead(true);
cacheConfiguration.setStoreKeepBinary(false);
IgniteConfiguration icfg = new IgniteConfiguration();
icfg.setCacheConfiguration(cacheConfiguration);
icfg.setClientMode(true);
final Ignite ignite = Ignition.start(icfg);
ignite.events().localListen(new IgnitePredicate<Event>() {
public boolean apply(Event event) {
if (event.type() == EVT_CLIENT_NODE_RECONNECTED) {
System.out.println("Reconnected");
executor.submit(()-> {
IgniteCache<Long, SampleModel> cache = ignite.getOrCreateCache("Sample");
System.out.println("Got the cache");
SampleModel model = cache.get(1L);
System.out.println(model.getName());
});
}
return true;
}
}, EVT_CLIENT_NODE_RECONNECTED);
IgniteCache<Long, SampleModel> cache = ignite.getOrCreateCache("Sample");
SampleModel model = cache.get(1L);
System.out.println(model.getName());
Error log on Client:
SEVERE: Failed to reinitialize local partitions (preloading will be stopped): GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], nodeId=dea5f59b, evt=DISCOVERY_CUSTOM_EVT]
class org.apache.ignite.IgniteCheckedException: Failed to start component: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8726)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1486)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1931)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1833)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:379)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:688)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:529)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1806)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.start(CacheAbstractJdbcStore.java:298)
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8722)
... 9 more
July 25, 2017 12:58:38 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to wait for completion of partition map exchange (preloading will not start): GridDhtPartitionsExchangeFuture [dummy=false, forcePreload=false, reassign=false, discoEvt=DiscoveryCustomEvent [customMsg=null, affTopVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], super=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=dea5f59b-bdda-47a1-b31d-1ecb08fc746f, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.224.1, 192.168.6.15, 192.168.80.1, 2001:0:9d38:90d7:c83:fac:98d7:5fc1], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, Ares-W11/169.254.194.93:0, /2001:0:9d38:90d7:c83:fac:98d7:5fc1:0, /192.168.6.15:0, windows10.microdone.cn/192.168.224.1:0, /192.168.80.1:0], discPort=0, order=2, intOrder=0, lastExchangeTime=1500958697559, loc=true, ver=2.0.0#20170430-sha1:d4eef3c6, isClient=true], topVer=2, nodeId8=dea5f59b, msg=null, type=DISCOVERY_CUSTOM_EVT, tstamp=1500958718133]], crd=TcpDiscoveryNode [id=247d2926-010d-429b-aef2-97a18fbb3b5d, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.224.1, 192.168.6.15, 192.168.80.1, 2001:0:9d38:90d7:c83:fac:98d7:5fc1], sockAddrs=[/192.168.6.15:47500, /2001:0:9d38:90d7:c83:fac:98d7:5fc1:47500, windows10.microdone.cn/192.168.224.1:47500, /192.168.80.1:47500, Ares-W11/169.254.194.93:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1500958718083, loc=false, ver=2.0.0#20170430-sha1:d4eef3c6, isClient=false], exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=1], nodeId=dea5f59b, evt=DISCOVERY_CUSTOM_EVT], added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=false, hash=842035444], init=false, lastVer=null, partReleaseFut=null, affChangeMsg=null, skipPreload=true, clientOnlyExchange=false, initTs=1500958718133, centralizedAff=false, changeGlobalStateE=null, exchangeOnChangeGlobalState=false, forcedRebFut=null, evtLatch=0, remaining=[247d2926-010d-429b-aef2-97a18fbb3b5d], srvNodes=[TcpDiscoveryNode [id=247d2926-010d-429b-aef2-97a18fbb3b5d, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.224.1, 192.168.6.15, 192.168.80.1, 2001:0:9d38:90d7:c83:fac:98d7:5fc1], sockAddrs=[/192.168.6.15:47500, /2001:0:9d38:90d7:c83:fac:98d7:5fc1:47500, windows10.microdone.cn/192.168.224.1:47500, /192.168.80.1:47500, Ares-W11/169.254.194.93:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1500958718083, loc=false, ver=2.0.0#20170430-sha1:d4eef3c6, isClient=false]], super=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=class o.a.i.IgniteCheckedException: Failed to start component: class o.a.i.IgniteException: Failed to initialize cache store (data source is not provided)., hash=1281081640]]
class org.apache.ignite.IgniteCheckedException: Failed to start component: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8726)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1486)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1931)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1833)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:379)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:688)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:529)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1806)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: Failed to initialize cache store (data source is not provided).
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.start(CacheAbstractJdbcStore.java:298)
at org.apache.ignite.internal.util.IgniteUtils.startLifecycleAware(IgniteUtils.java:8722)
... 9 more
Only server nodes store caches (except for LOCAL caches), so when you restarted the server node, this cache was stopped. The problem here is that the client node reconnected to the cluster but did not join it as a new node. That's why the cache was not created again.
I think this is wrong behavior and the cache should be recreated when the client reconnects.
I've created an issue for that.
As a workaround, you can use the Ignite.GetOrCreateCache("Sample") method instead of Ignite.GetCache("Sample").
Are you still having issues where Ignite.GetOrCreateCache("Sample") hangs? Make sure you aren't making that call from a thread in the System Pool. I was listening for the EVT_CLIENT_NODE_RECONNECTED event and calling Ignite.GetOrCreateCache("Sample") when I ran into a similar issue. For more information, see the answer to this question: Closures stuck in 2.0 when try to add an element into the queue
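To make that concrete, here is a condensed sketch of the workaround in Java, reusing the question's SampleModel class and "Sample" cache name as assumptions: the reconnect event is only used as a trigger, and the actual getOrCreateCache call is handed off to an application-owned executor so it never runs on an Ignite system thread.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class ReconnectHandler {
    // Application-owned executor so cache calls never block Ignite's system pool.
    private static final ExecutorService RECONNECT_EXECUTOR =
            Executors.newSingleThreadExecutor(r -> new Thread(r, "reconnect-worker"));

    public static void register(Ignite ignite) {
        ignite.events().localListen((IgnitePredicate<Event>) event -> {
            // Do not touch caches here: this callback runs on an Ignite system thread.
            RECONNECT_EXECUTOR.submit(() -> {
                IgniteCache<Long, SampleModel> cache = ignite.getOrCreateCache("Sample");
                System.out.println("Cache re-acquired after reconnect: " + cache.getName());
            });
            return true; // keep the listener registered
        }, EventType.EVT_CLIENT_NODE_RECONNECTED);
    }
}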
I am trying to connect to RabbitMQ in my Camel route using the camel-amqp component (version 2.17).
I have configured it as below:
@Bean
CachingConnectionFactory jmsCachingConnectionFactory(){
JmsConnectionFactory pool = new JmsConnectionFactory();
pool.setRemoteURI("amqp://127.0.0.1:5672");
pool.setUsername("guest");
pool.setPassword("guest");
CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory();
cachingConnectionFactory.setTargetConnectionFactory(pool);
return cachingConnectionFactory;
}
@Bean
JmsConfiguration jmsConfig(){
JmsConfiguration configuration = new JmsConfiguration();
configuration.setConnectionFactory(jmsCachingConnectionFactory());
// configuration.setCacheLevelName("CACHE_CONSUMER");
return configuration;
}
@Bean
AMQPComponent amqp(){
AMQPComponent component = new AMQPComponent();
component.setConfiguration(jmsConfig());
return component;
}
The error I am getting is:
javax.jms.JMSException: An existing connection was forcibly closed by the remote host
at org.apache.qpid.jms.exceptions.JmsExceptionSupport.create(JmsExceptionSupport.java:66) ~[qpid-jms-client-0.8.0.jar:0.8.0]
In my RabbitMQ log I can see the message below, which I am not able to understand:
** Reason for termination ==
** {function_clause,
[{rabbit_amqp1_0_link_util,'-outcomes/1-lc$^0/1-0-',
[{list,
[{symbol,<<"amqp:accepted:list">>},
{symbol,<<"amqp:rejected:list">>},
{symbol,<<"amqp:released:list">>},
{symbol,<<"amqp:modified:list">>}]}],
[{file,"src/rabbit_amqp1_0_link_util.erl"},{line,49}]},
{rabbit_amqp1_0_link_util,outcomes,1,
[{file,"src/rabbit_amqp1_0_link_util.erl"},{line,49}]},
{rabbit_amqp1_0_outgoing_link,attach,3,
[{file,"src/rabbit_amqp1_0_outgoing_link.erl"},{line,41}]},
{rabbit_amqp1_0_session_process,with_disposable_channel,2,
[{file,"src/rabbit_amqp1_0_session_process.erl"},{line,377}]},
{rabbit_amqp1_0_session_process,handle_control,2,
[{file,"src/rabbit_amqp1_0_session_process.erl"},{line,197}]},
{rabbit_amqp1_0_session_process,handle_cast,2,
[{file,"src/rabbit_amqp1_0_session_process.erl"},{line,134}]},
{gen_server2,handle_msg,2,[{file,"src/gen_server2.erl"},{line,1049}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]}
=ERROR REPORT==== 8-Jul-2016::17:09:27 ===
closing AMQP connection <0.29082.0> (127.0.0.1:55479 -> 127.0.0.1:5672):
{handshake_error,running,<0.29104.0>,
{{symbol,<<"amqp:internal-error">>},
"Session error: ~p~n~p~n",
[function_clause,
[{rabbit_amqp1_0_link_util,'-outcomes/1-lc$^0/1-0-',
[{list,
[{symbol,<<"amqp:accepted:list">>},
{symbol,<<"amqp:rejected:list">>},
{symbol,<<"amqp:released:list">>},
{symbol,<<"amqp:modified:list">>}]}],
[{file,"src/rabbit_amqp1_0_link_util.erl"},{line,49}]},
{rabbit_amqp1_0_link_util,outcomes,1,
[{file,"src/rabbit_amqp1_0_link_util.erl"},{line,49}]},
{rabbit_amqp1_0_outgoing_link,attach,3,
[{file,"src/rabbit_amqp1_0_outgoing_link.erl"},{line,41}]},
{rabbit_amqp1_0_session_process,with_disposable_channel,2,
[{file,"src/rabbit_amqp1_0_session_process.erl"},{line,377}]},
{rabbit_amqp1_0_session_process,handle_control,2,
[{file,"src/rabbit_amqp1_0_session_process.erl"},{line,197}]},
{rabbit_amqp1_0_session_process,handle_cast,2,
[{file,"src/rabbit_amqp1_0_session_process.erl"},{line,134}]},
{gen_server2,handle_msg,2,[{file,"src/gen_server2.erl"},{line,1049}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]]}}
I have enabled the AMQP 1.0 plugin in RabbitMQ.
Can someone help me resolve this?
This seems to be a bug in the RabbitMQ AMQP 1.0 plugin. An issue has been logged with RabbitMQ:
https://github.com/rabbitmq/rabbitmq-amqp1.0/issues/31
Redis version: 3.2.0
Jedis version: 2.8.1
Below is my Java code for connecting to Redis:
import redis.clients.jedis.Jedis;

public class TestRedis {
    public static void main(String[] args) {
        String host = args[0];
        int port = Integer.parseInt(args[1]);
        try (Jedis jedis = new Jedis(host, port)) {
            System.out.println("Connected to jedis " + jedis.ping());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I am running this program on the machine where Redis is installed. This machine's IP address is 192.168.1.57.
If I provide host="localhost" and port="6379" as arguments, the connection to Redis is established successfully.
However, if I pass host="192.168.1.57" and port="6379", I end up with the exception below:
redis.clients.jedis.exceptions.JedisConnectionException: java.net.ConnectException: Connection refused
at redis.clients.jedis.Connection.connect(Connection.java:164)
at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:80)
at redis.clients.jedis.Connection.sendCommand(Connection.java:100)
at redis.clients.jedis.Connection.sendCommand(Connection.java:95)
at redis.clients.jedis.BinaryClient.ping(BinaryClient.java:93)
at redis.clients.jedis.BinaryJedis.ping(BinaryJedis.java:105)
at TestRedis.main(TestRedis.java:14)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at redis.clients.jedis.Connection.connect(Connection.java:158)
... 6 more
Please help...
There are a few settings that would affect this: bind and protected-mode. They work together to provide a baseline of security with new installs.
Find the following in your redis.conf file and comment it out:
bind 127.0.0.1
By adding a # in front of it:
# bind 127.0.0.1
Or, if you would rather not comment it out, you can also add the IP of your eth0/em1 interface to it, like this:
bind 127.0.0.1 192.168.1.57
Also, unless you're using password authentication, you'll have to turn off protected mode by changing:
protected-mode yes
To:
protected-mode no
Make sure that you read the relevant documentation and understand the security implications of both of these changes.
After making these changes, restart redis.
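Putting both changes together, the relevant part of redis.conf would look roughly like this (a sketch only; the LAN address 192.168.1.57 is taken from the question and should match your own interface):
# Listen on loopback and the LAN interface (or comment the bind line out entirely)
bind 127.0.0.1 192.168.1.57
# Only safe to disable if the instance is reachable solely from trusted hosts,
# or if you configure requirepass
protected-mode no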
I am trying a basic SET operation against a Redis server installed on Red Hat Linux.
JedisPool pool = new JedisPool(new JedisPoolConfig(), HOST, PORT);
Jedis jedis = null;
try {
    jedis = pool.getResource();
    System.out.println(jedis.isConnected()); // prints true
    jedis.set("status", "online"); // gets exception
} finally {
    if (jedis != null) {
        jedis.close();
    }
}
pool.destroy();
Getting the following exception:
Exception in thread "main" redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketException: Connection reset
at redis.clients.util.RedisInputStream.ensureFill(RedisInputStream.java:201)
at redis.clients.util.RedisInputStream.readByte(RedisInputStream.java:40)
at redis.clients.jedis.Protocol.process(Protocol.java:132)
at redis.clients.jedis.Protocol.read(Protocol.java:196)
at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:288)
at redis.clients.jedis.Connection.getStatusCodeReply(Connection.java:187)
at redis.clients.jedis.Jedis.set(Jedis.java:66)
at com.revechat.spring.redis_test.App.main(App.java:28)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at redis.clients.util.RedisInputStream.ensureFill(RedisInputStream.java:195)
... 7 more
How can I resolve this issue?
I had a similar issue. Our production Redis required an encrypted connection over TLS, whereas our test system did not, so in production the java.net.SocketException: Connection reset appeared as soon as we tried to use the Jedis connection.
To fix it, use:
JedisPool pool = new JedisPool(new JedisPoolConfig(), HOST, PORT, true);
for connections that require TLS.
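For completeness, a short usage sketch built on that constructor (HOST and PORT as in the question; this assumes your Jedis version provides the SSL-enabled JedisPool constructor shown above):
JedisPool pool = new JedisPool(new JedisPoolConfig(), HOST, PORT, true); // ssl = true
try (Jedis jedis = pool.getResource()) {
    jedis.set("status", "online"); // the call that previously failed, now over TLS
    System.out.println(jedis.get("status"));
} finally {
    pool.destroy();
}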