Redis client Lettuce command timeout versus socket timeout

We have defined a Lettuce client connection factory to connect to Redis with a custom socket timeout and command timeout:
@Bean
LettuceConnectionFactory lettuceConnectionFactory() {
    final SocketOptions socketOptions = SocketOptions.builder()
            .connectTimeout(socketTimeout)
            .build();
    final ClientOptions clientOptions = ClientOptions.builder()
            .socketOptions(socketOptions)
            .build();
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .commandTimeout(redisCommandTimeout)
            .clientOptions(clientOptions)
            .build();
    RedisStandaloneConfiguration serverConfig =
            new RedisStandaloneConfiguration(redisHost, redisPort);
    final LettuceConnectionFactory lettuceConnectionFactory =
            new LettuceConnectionFactory(serverConfig, clientConfig);
    lettuceConnectionFactory.setValidateConnection(true);
    return lettuceConnectionFactory;
}
The Lettuce documentation defines the following defaults:
Default socket timeout: 10 seconds
Default command timeout: 60 seconds
If the Redis service is down, the application must receive a timeout within 300 ms. Which of the two values must be defined as the greater one?
GitHub example project:
https://github.com/cristianprofile/spring-data-redis-lettuce

In the socket options you specify the connect timeout. This is the maximum time the Redis client (Lettuce) is allowed to spend establishing a TCP/IP connection to the Redis server. This value should be relatively small (e.g. up to 1 minute).
If the client cannot establish a connection to the server within 1 minute, I guess it's safe to say the server is not available (the server is down, the address/port is wrong, network security such as a firewall prohibits the connection, etc.).
The command timeout is completely different. Once a connection is established, the client can send commands to the server and expects a response to each of them. The command timeout configures how long the client will wait for a response to a command from the server.
I think this timeout can be set to a bigger value (e.g. a few minutes) in case a command sends a lot of data to the server and it takes time to transfer and store that much data.
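For the 300 ms requirement in the question, a minimal sketch (a hypothetical bean with placeholder host/port, not taken from the linked project) could cap both timeouts at that budget:
import java.time.Duration;

import io.lettuce.core.ClientOptions;
import io.lettuce.core.SocketOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Bean
LettuceConnectionFactory fastFailLettuceConnectionFactory() {
    // Give up on establishing the TCP connection after 300 ms (server down, wrong host/port).
    SocketOptions socketOptions = SocketOptions.builder()
            .connectTimeout(Duration.ofMillis(300))
            .build();

    // Give up waiting for a command response after 300 ms on an established connection.
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .commandTimeout(Duration.ofMillis(300))
            .clientOptions(ClientOptions.builder().socketOptions(socketOptions).build())
            .build();

    return new LettuceConnectionFactory(
            new RedisStandaloneConfiguration("localhost", 6379), clientConfig);
}
With both values at 300 ms, neither a dead server (connect timeout) nor a hung connection (command timeout) keeps the application waiting longer than the stated budget.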

Related

mina-sshd server-side heartbeat not working

I'm trying to implement an SSH server with the mina-sshd framework and want each session to have a keep-alive heartbeat, but the mina-sshd setSessionHeartbeat API doesn't work.
Server Code:
SshServer sshd = SshServer.setUpDefaultServer();
sshd.setPort(3333);
sshd.setShellFactory(InteractiveProcessShellFactory.INSTANCE);
sshd.setSessionHeartbeat(SessionHeartbeatController.HeartbeatType.IGNORE, Duration.ofSeconds(5));
sshd.setKeyPairProvider(new ClassLoadableResourceKeyPairProvider(getClass().getClassLoader(), "rsa.key"));
sshd.setPasswordAuthenticator((username, password, session) -> username.equals(password));
sshd.start();
log.info("SSHD server started");
Client:
user/passwd = test
ssh test@127.0.0.1 -p3333

Testcontainers RabbitMQ with SSL/TLS fails to wait for a container to start

I have a test using RabbitMQ in Testcontainers. The test works when using HTTP:
@Container private static final RabbitMQContainer RABBITMQ_CONTAINER =
new RabbitMQContainer()
.withLogConsumer(new Slf4jLogConsumer(LOG))
.withStartupTimeout(Duration.of(5, ChronoUnit.MINUTES))
.waitingFor(Wait.forHttp("/api/vhosts")
.forPort(15672)
.withBasicCredentials("guest", "guest"));
and fails when I switch to HTTPS
@Container private static final RabbitMQContainer RABBITMQ_CONTAINER =
new RabbitMQContainer()
.withLogConsumer(new Slf4jLogConsumer(LOG))
.withStartupTimeout(Duration.of(5, ChronoUnit.MINUTES))
.waitingFor(Wait.forHttp("/api/vhosts")
.usingTls()
.forPort(15671)
.withBasicCredentials("guest", "guest"))
.withSSL(forClasspathResource("/certs/server_key.pem", 0644),
forClasspathResource("/certs/server_certificate.pem", 0644),
forClasspathResource("/certs/ca_certificate.pem", 0644),
VERIFY_NONE,
false);
In logs I see that container can not start:
...
18:53:21.274 [main] INFO - /brave_swirles: Waiting for 60 seconds for URL: https://localhost:50062/api/vhosts (where port 50062 maps to container port 15671)
...
18:54:21.302 [main] ERROR - Could not start container
org.testcontainers.containers.ContainerLaunchException: Timed out waiting for URL to be accessible (https://localhost:50062/api/vhosts should return HTTP 200)
What am I missing? I'd at least want Testcontainers' wait strategy to work.
Using RabbitMQContainer with custom SSL certificates makes it hard to also use HttpWaitStrategy. In this case, you probably need to configure the whole JVM to trust those SSL certificates.
Alternatively, just continue to use the default wait strategy (which will be a LogMessageWaitStrategy).
For further examples, just look at the RabbitMQContainer tests in Testcontainers itself:
https://github.com/testcontainers/testcontainers-java/blob/c3f53b3a63e6b0bc800a7f0fbce91ce95a8986b3/modules/rabbitmq/src/test/java/org/testcontainers/containers/RabbitMQContainerTest.java#L237-L264
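A minimal sketch of that alternative, keeping the withSSL configuration from the question but dropping the custom waitingFor so the container's default wait strategy (a LogMessageWaitStrategy, as noted above) applies; LOG and the certificate classpath resources are assumed from the original test:
import static org.testcontainers.utility.MountableFile.forClasspathResource;

import java.time.Duration;
import java.time.temporal.ChronoUnit;
import org.testcontainers.containers.RabbitMQContainer;
import org.testcontainers.containers.output.Slf4jLogConsumer;
import org.testcontainers.junit.jupiter.Container;

@Container
private static final RabbitMQContainer RABBITMQ_CONTAINER =
        new RabbitMQContainer()
                .withLogConsumer(new Slf4jLogConsumer(LOG))
                .withStartupTimeout(Duration.of(5, ChronoUnit.MINUTES))
                // No custom waitingFor(...): the default log-based wait strategy does not
                // require the test JVM to trust the server certificate.
                .withSSL(forClasspathResource("/certs/server_key.pem", 0644),
                        forClasspathResource("/certs/server_certificate.pem", 0644),
                        forClasspathResource("/certs/ca_certificate.pem", 0644),
                        RabbitMQContainer.SslVerification.VERIFY_NONE,
                        false);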

org.apache.ignite.client.ClientConnectionException: Ignite cluster is unavailable inside kubernetes environment

We have a setup in which one Ignite server node serves 15 to 20 thick client nodes and 40 to 50 thin client nodes; the thin client connection is a singleton.
In operation, we sometimes get the error below:
org.apache.ignite.client.ClientConnectionException: Ignite cluster is unavailable [sock=Socket[addr=hostnm19.hostx.com/10.13.10.19,port=30519,localport=57552]]
On the server node, we insert data into a third-party store using CacheStoreAdapter.
We don't know where it goes wrong, since roughly one operation out of 100 fails with the above error.
Also, please let me know what we can do to handle this failure.
Apache Ignite version: 2.8
Edit (code snippet):
ClientConfiguration cfg = new ClientConfiguration()
        .setAddresses("host:port");
IgniteClient client = Ignition.startClient(cfg); // this client is singleton
client.getOrCreateCache("ABC_CACHE").put(key, val);
Stack trace:
org.apache.ignite.client.ClientConnectionException: Ignite cluster is unavailable [sock=Socket[addr=hostnm19.hostx.com/10.13.10.19,port=30519,localport=57552]]
at org.apache.ignite.internal.client.thin.TcpClientChannel.handleIOError(TcpClientChannel.java:499)
at org.apache.ignite.internal.client.thin.TcpClientChannel.handleIOError(TcpClientChannel.java:491)
at org.apache.ignite.internal.client.thin.TcpClientChannel.access$100(TcpClientChannel.java:92)
at org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.read(TcpClientChannel.java:538)
at org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.readInt(TcpClientChannel.java:572)
at org.apache.ignite.internal.client.thin.TcpClientChannel.processNextResponse(TcpClientChannel.java:272)
at org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:234)
at org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:171)
at org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:160)
at org.apache.ignite.internal.client.thin.ReliableChannel.request(ReliableChannel.java:187)
at org.apache.ignite.internal.client.thin.TcpIgniteClient.getOrCreateCache(TcpIgniteClient.java:114)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.read(TcpClientChannel.java:535)
... 36 more
You probably have a network device or NAT in between that resets connections when they are idle, or even sporadically.
In this case, you will have to reconnect.
Another possibility: are you sure you are connecting to the thin client port and not some other port?
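A minimal sketch of the reconnect approach (the wrapper class and the retry-once policy are illustrative, not part of Ignite's API):
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientConnectionException;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ResilientIgniteClient {

    private final ClientConfiguration cfg = new ClientConfiguration().setAddresses("host:port");
    private volatile IgniteClient client = Ignition.startClient(cfg);

    /** Puts a value through the singleton thin client, reconnecting once if the connection was reset. */
    public void put(String cacheName, Object key, Object val) {
        try {
            client.getOrCreateCache(cacheName).put(key, val);
        } catch (ClientConnectionException e) {
            // The connection was dropped (e.g. by NAT or an idle timeout): recreate the client and retry once.
            client = Ignition.startClient(cfg);
            client.getOrCreateCache(cacheName).put(key, val);
        }
    }
}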

Lettuce redis brpop command

redisListCommands.brpop(0, queueName)
I have set the timeout to 0 (i.e. no timeout). Why does this command throw
io.lettuce.core.RedisCommandTimeoutException: Command timed out
at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:114)
at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:62)
at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
at com.sun.proxy.$Proxy113.brpop(Unknown Source)
There are two timeouts at play: the Redis server-side timeout you pass to BRPOP and your client's (Lettuce's) own timeout. When you build the connection, use withTimeout with a very high value. Unfortunately, you can't set 0 here.
RedisURI.builder().withHost(...).withPort(...)
.withTimeout(Duration.ofDays(10000)).build();
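A minimal sketch of that approach with plain Lettuce (host, port and queue name are placeholders): the 0 passed to brpop is the server-side blocking timeout, while withTimeout sets how long Lettuce itself waits for the reply.
import java.time.Duration;

import io.lettuce.core.KeyValue;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class BrpopExample {
    public static void main(String[] args) {
        RedisURI uri = RedisURI.builder()
                .withHost("localhost")
                .withPort(6379)
                // Client-side command timeout: must outlast the longest expected blocking time.
                .withTimeout(Duration.ofDays(10000))
                .build();

        RedisClient client = RedisClient.create(uri);
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisCommands<String, String> sync = connection.sync();
            // Server-side timeout 0: block until an element arrives on the list.
            KeyValue<String, String> popped = sync.brpop(0, "queueName");
            System.out.println(popped.getKey() + " -> " + popped.getValue());
        } finally {
            client.shutdown();
        }
    }
}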

ServiceStack.Redis.Sentinel Usage

I'm running a licensed version of ServiceStack and trying to get a sentinel cluster setup on Google Cloud Compute.
The cluster is basically GCE's click-to-deploy Redis solution (3 servers). Here is the code I'm using to initialize it...
var hosts = Settings.Redis.Host.Split(';');
var sentinel = new ServiceStack.Redis.RedisSentinel(hosts, "master");
redis = sentinel.Setup();
container.Register<IRedisClientsManager>(redis);
container.Register<ICacheClient>(redis.GetCacheClient());
The client works fine, but once I shut down one of the Redis instances everything craps the bed. The client complains about not being able to connect to the missing instance. Additionally, even when I bring the instance back up, it is in READ ONLY mode, so everything still fails. There doesn't seem to be a way to recover once you are in this state...
Am I doing something wrong? Is there some reason the RedisSentinel client doesn't figure out who the new master is? I feed it all 3 host IP addresses...
You should only be supplying the host of the Redis Sentinel Server to RedisSentinel as it gets the active list of other master/slave redis servers from the Sentinel host.
Some changes to RedisSentinel were recently added in the latest v4.0.37 (now available on MyGet) that include extra logging and callbacks for Redis Sentinel events. The new v4.0.37 API looks like:
var sentinel = new RedisSentinel(sentinelHost, masterName);
Starting the RedisSentinel will connect to the Sentinel host and return a pre-configured RedisClientManager (i.e. a Redis connection pool) with the active master/slave hosts:
var redisManager = sentinel.Start();
Which you can then register in the IOC with:
container.Register<IRedisClientsManager>(redisManager);
The RedisSentinel should then listen to master/slave changes from the Sentinel hosts and fail over the redisManager accordingly. The existing connections in the pool are then disposed and replaced with a new pool for the newly configured hosts. Any active connections outside of the pool will throw connection exceptions if used again; the next time a RedisClient is retrieved from the pool, it will be configured with the new hosts.
Callbacks and Logging
Here's an example of how you can use the new callbacks to introspect the RedisServer events:
var sentinel = new RedisSentinel(sentinelHost, masterName)
{
OnFailover = manager =>
{
"Redis Managers were Failed Over to new hosts".Print();
},
OnWorkerError = ex =>
{
"Worker error: {0}".Print(ex);
},
OnSentinelMessageReceived = (channel, msg) =>
{
"Received '{0}' on channel '{1}' from Sentinel".Print(channel, msg);
},
};
Logging of these events can also be enabled by configuring Logging in ServiceStack:
LogManager.LogFactory = new ConsoleLogFactory(debugEnabled:false);
There's also an additional explicit FailoverToSentinelHosts() that can be used to force RedisSentinel to re-lookup and failover to the latest master/slave hosts, e.g:
var sentinelInfo = sentinel.FailoverToSentinelHosts();
The new hosts are available in the returned sentinelInfo:
"Failed over to read/write: {0}, read-only: {1}".Print(
sentinelInfo.RedisMasters, sentinelInfo.RedisSlaves);