redisListCommands.brpop(0, queueName)
I have set the timeout to 0 (i.e. no timeout). So why does this command throw
io.lettuce.core.RedisCommandTimeoutException: Command timed out
at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:114)
at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:62)
at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
at com.sun.proxy.$Proxy113.brpop(Unknown Source)
There are two separate timeouts at play: the server-side timeout you pass to BRPOP (which you correctly set to 0) and your client's (Lettuce) command timeout, which is what fired here. When you build the connection, use withTimeout with a very high value; unfortunately, you can't set 0 here.
RedisURI.builder().withHost(...).withPort(...)
.withTimeout(Duration.ofDays(10000)).build();
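For context, a fuller sketch of the same idea (host, port, and queue name are placeholders), so that the client-side command timeout always outlasts the server-side block:

import io.lettuce.core.KeyValue;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;
import java.time.Duration;

public class BlockingPop {
    public static void main(String[] args) {
        RedisURI uri = RedisURI.builder()
                .withHost("localhost")               // placeholder host
                .withPort(6379)                      // placeholder port
                .withTimeout(Duration.ofDays(10000)) // effectively disables the client-side timeout
                .build();
        RedisClient client = RedisClient.create(uri);
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisCommands<String, String> commands = connection.sync();
            // BRPOP with timeout 0 blocks server-side until an element arrives;
            // the huge client-side timeout above keeps Lettuce from giving up first.
            KeyValue<String, String> item = commands.brpop(0, "queueName");
            System.out.println(item.getKey() + " -> " + item.getValue());
        } finally {
            client.shutdown();
        }
    }
}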
When I run pacman -Syu to update, it first shows no errors and I update everything normally. But when I run pacman -Syu again afterwards, it shows the following. What is the reason, and is there any solution?
:: Synchronizing package databases...
core is up to date
extra is up to date
community is up to date
error: failed retrieving file 'core.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5241 ms: Connection timed out
error: failed retrieving file 'extra.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5202 ms: Connection timed out
error: failed retrieving file 'community.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5202 ms: Connection timed out
warning: too many errors from mirror.erickochen.nl, skipping for the remainder of this transaction
:: Starting full system upgrade...
there is nothing to do
Sometimes mirrors go offline. It's recommended to have multiple mirrors so you don't have a single point of failure, and to keep your mirror list up to date. Using reflector helps, since it also finds fast candidates based on your location; see the example below.
For the time being, edit /etc/pacman.d/mirrorlist and uncomment a couple of mirrors, then try updating again.
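For example, a typical reflector invocation (run as root; the country is a placeholder, adjust it to your location) that rewrites the mirror list with the fastest recently synced HTTPS mirrors:

# keep the 20 most recently synced HTTPS mirrors, rank them by download speed,
# and overwrite the mirror list
reflector --country Germany --protocol https --latest 20 --sort rate --save /etc/pacman.d/mirrorlist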
Hello, I am trying to connect to a Redis database from an ASP.NET Core 3.1 application, and I keep getting this error when I issue a command.
> 'No connection is active/available to service this operation: SET a; A
> blocking operation was interrupted by a call to
> WSACancelBlockingCall., mc: 1/1/0, mgr: 10 of 10 available,
> clientName: [ClientName], IOCP: (Busy=2,Free=998,Min=8,Max=1000),
> WORKER:
I think it has something to do with the StackExchange.Redis library, since it worked until it randomly stopped working. I have updated to the latest version, restarted the PC, and so on, and nothing helps.
I can connect to my local Redis and issue commands with both redis-cli and telnet 127.0.0.1 6379, which is why I think the culprit is the library.
ConnectionString
localhost:6379,ssl=True,allowAdmin=True,abortConnect=False,defaultDatabase=0
How I use it:
var con = ConnectionMultiplexer.Connect(connectionString); // passes
con.GetDatabase().StringSet("a", "a"); // throws
If you are just using it for localhost development purposes, you can try disabling SSL: localhost:6379,ssl=false,allowAdmin=True,abortConnect=False,defaultDatabase=0
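The same toggle can be applied in code via ConfigurationOptions; a minimal C# sketch matching the question's snippet:

using StackExchange.Redis;

var options = ConfigurationOptions.Parse("localhost:6379,allowAdmin=True,abortConnect=False,defaultDatabase=0");
options.Ssl = false; // a local Redis does not serve TLS on 6379 unless explicitly configured
var con = ConnectionMultiplexer.Connect(options);
con.GetDatabase().StringSet("a", "a"); // should now succeed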
We have a setup in which one Ignite server node serves 15 to 20 thick client nodes and 40 to 50 thin client nodes; the thin client connection is a singleton.
In operation, we sometimes get the error below:
org.apache.ignite.client.ClientConnectionException: Ignite cluster is unavailable [sock=Socket[addr=hostnm19.hostx.com/10.13.10.19,port=30519,localport=57552]]
On the server node, we are inserting data into a third-party store using a CacheStoreAdapter.
I don't know where it goes wrong, since roughly one operation out of 100 fails with the above error.
Also, please let me know what we can do to handle this failure.
Apache Ignite version: 2.8
Edit (code snippet):
ClientConfiguration cfg = new ClientConfiguration()
.setAddresses("host:port");
IgniteClient client = Ignition.startClient(cfg); // this client is singleton
client.getOrCreateCache("ABC_CACHE").put(key, val);
Stack trace:
org.apache.ignite.client.ClientConnectionException: Ignite cluster is unavailable [sock=Socket[addr=hostnm19.hostx.com/10.13.10.19,port=30519,localport=57552]]
at org.apache.ignite.internal.client.thin.TcpClientChannel.handleIOError(TcpClientChannel.java:499)
at org.apache.ignite.internal.client.thin.TcpClientChannel.handleIOError(TcpClientChannel.java:491)
at org.apache.ignite.internal.client.thin.TcpClientChannel.access$100(TcpClientChannel.java:92)
at org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.read(TcpClientChannel.java:538)
at org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.readInt(TcpClientChannel.java:572)
at org.apache.ignite.internal.client.thin.TcpClientChannel.processNextResponse(TcpClientChannel.java:272)
at org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:234)
at org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:171)
at org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:160)
at org.apache.ignite.internal.client.thin.ReliableChannel.request(ReliableChannel.java:187)
at org.apache.ignite.internal.client.thin.TcpIgniteClient.getOrCreateCache(TcpIgniteClient.java:114)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.read(TcpClientChannel.java:535)
... 36 more
You probably have a network or NAT configuration that resets connections when they are idle, or even sporadically.
In this case, you will have to reconnect; a minimal sketch follows below.
Another thought: are you sure you are connecting to the thin client port (10800 by default) and not some other port?
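A reconnect-and-retry sketch, assuming a single shared thin client as in the question (the addresses, cache name, and the ResilientClient wrapper are placeholders of mine):

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientConnectionException;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ResilientClient {
    private final ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("host1:10800", "host2:10800"); // listing several addresses lets the client fail over
    private volatile IgniteClient client = Ignition.startClient(cfg);

    public void put(String cacheName, Object key, Object val) {
        try {
            client.getOrCreateCache(cacheName).put(key, val);
        } catch (ClientConnectionException e) {
            // the connection was reset (e.g. by an idle NAT/firewall): reconnect and retry once
            client = Ignition.startClient(cfg);
            client.getOrCreateCache(cacheName).put(key, val);
        }
    }
}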
We have defined a Lettuce client connection factory to connect to Redis with custom socket and command timeouts:
@Bean
LettuceConnectionFactory lettuceConnectionFactory() {
    final SocketOptions socketOptions = SocketOptions.builder().connectTimeout(socketTimeout).build();
    final ClientOptions clientOptions =
            ClientOptions.builder().socketOptions(socketOptions).build();
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .commandTimeout(redisCommandTimeout)
            .clientOptions(clientOptions).build();
    RedisStandaloneConfiguration serverConfig = new RedisStandaloneConfiguration(redisHost,
            redisPort);
    final LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(serverConfig,
            clientConfig);
    lettuceConnectionFactory.setValidateConnection(true);
    // return the configured instance instead of constructing a second, unconfigured factory
    return lettuceConnectionFactory;
}
The Lettuce documentation defines these default values:
Default socket timeout is 10 seconds
Default command timeout is 60 seconds
If the Redis service is down, the application must receive a timeout within 300 ms. Which of the two values should be set to the greater one?
GitHub example project:
https://github.com/cristianprofile/spring-data-redis-lettuce
In the socket options you specify the connect timeout. This is the maximum time the Redis client (Lettuce) may spend establishing a TCP/IP connection to the Redis server. This value should be relatively small (e.g. up to 1 minute).
If the client cannot establish a connection to the server within that time, it's safe to say the server is not available (the server is down, the address/port is wrong, network security such as a firewall prohibits the connection, etc.).
The command timeout is completely different. Once the connection is established, the client can send commands to the server and expects a response to each of them. This timeout configures how long the client will wait for a response to a command from the server.
I think this timeout can be set to a larger value (e.g. a few minutes) in case a command sends a lot of data to the server and it takes time to transfer and store it all. So the command timeout is the one that should be the greater value; see the sketch below.
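For example, reusing the bean from the question, both values could be set like this (the 300 ms figure comes from the question; the 2-minute command timeout is just an illustrative "greater" value):

final SocketOptions socketOptions = SocketOptions.builder()
        .connectTimeout(Duration.ofMillis(300)) // fail fast when the server is unreachable
        .build();
final ClientOptions clientOptions = ClientOptions.builder()
        .socketOptions(socketOptions)
        .build();
LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
        .commandTimeout(Duration.ofMinutes(2))  // the greater value: covers slow but successful commands
        .clientOptions(clientOptions)
        .build();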
I'm using monit and M/Monit to monitor my application infrastructure. But every once in a while, M/Monit will show a "No report" error for a server and mark it down. A few seconds later, the issue clears when the server next checks in to M/Monit.
The monit logs on some of the servers have these events in them:
Oct 14 12:19:11 ip-10-203-51-199 monit[30307]: M/Monit: cannot open a
connection to http://example.com:8080/collector -- Connection timed out
Oct 14 12:20:16 ip-10-203-51-199 monit[30307]: M/Monit: cannot open a
connection to http://example.com:8080/collector -- Connection timed out
Oct 14 12:22:21 ip-10-203-51-199 monit[30307]: M/Monit: cannot open a
connection to http://example.com:8080/collector -- Connection timed out
What config do I need to tune to increase the threshold before M/Monit considers the server actually down?
Here is the config from the server that has the most trouble:
set httpd port 2812 and
allow xxx:xxx
set mailserver xxx.xxx.xxx port xxx username "xxx" password "xxx" using tlsv1 with timeout 15 seconds
set daemon 30
with start delay 120
set logfile syslog facility log_daemon
set alert xxx
set mail-format {
subject: $EVENT $SERVICE on $HOST
from: monit#$HOST
message: Monit $ACTION $SERVICE at $DATE on $HOST: $DESCRIPTION.
}
set mmonit http://xxx:xxx#example.com:8080/collector
There doesn't appear to be any problem with the config file.
The intermittent problem you are experiencing happens because monit occasionally fails to open a socket to the collector port and times out. See the source code for reference (handle_mmonit()):
http://fossies.org/linux/privat/monit-5.6.tar.gz:a/monit-5.6/src/collector.c
Search for the string "M/Monit: cannot open a connection to".
The timeout value appears to be fixed at 5 seconds in the code. But 5 seconds is ample time to open a socket connection on that port.
How often does monit post events to mmonit?
I had the same problem:
[MST Apr 5 11:24:11] error : 'apache' failed protocol test [APACHESTATUS] at [phoenix.example.com]:80 [TCP/IP] -- APACHE-STATUS: error -- no scoreboard found
[MST Apr 5 11:24:16] error : Cannot create socket to [10x.xx.xx.x4]:8080 -- Connection timed out
We had another firewall on top of iptables. Opening up port 8080 on both the input and the output side fixed it!
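For reference, rules along these lines (a sketch; adapt chains, interfaces, and addresses to your own setup):

# allow outbound connections to the M/Monit collector port
iptables -A OUTPUT -p tcp --dport 8080 -j ACCEPT
# allow the established reply traffic back in
iptables -A INPUT -p tcp --sport 8080 -m state --state ESTABLISHED -j ACCEPT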