I have set the TTL and max idle time for my caches. Entries are evicted correctly while the app is running, but after I restart the app the EvictionScheduler no longer seems to work: entries are not cleared when their TTL expires.
@Bean
public CacheManager cacheManager(RedissonClient redissonClient) throws IOException {
    Map<String, CacheConfig> config = new HashMap<String, CacheConfig>();
    // CacheConfig(ttl, maxIdleTime), both in milliseconds
    config.put(CacheNames.XinLianService_getAllAgentData, new CacheConfig(60 * 1000, 60 * 1000));
    config.put(CacheNames.XinlianRoleService_getById, new CacheConfig(2 * 60 * 1000, 60 * 1000));
    return new RedissonSpringCacheManager(redissonClient, config);
}
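As an aside, the same per-cache settings can also be loaded from an external file. This is only a sketch, assuming a classpath resource named cache-config.yaml and that the CacheNames constants resolve to the literal strings shown; RedissonSpringCacheManager also accepts a config location string:

@Bean
public CacheManager cacheManager(RedissonClient redissonClient) {
    // Loads per-cache ttl/maxIdleTime (in milliseconds) from a YAML file.
    return new RedissonSpringCacheManager(redissonClient, "classpath:cache-config.yaml");
}

# cache-config.yaml (hypothetical file name and cache-name strings)
XinLianService_getAllAgentData:
  ttl: 60000
  maxIdleTime: 60000
XinlianRoleService_getById:
  ttl: 120000
  maxIdleTime: 60000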
I'm using Apache Ignite on Azure Kubernetes as a distributed cache, together with an Azure web API based on .NET 6.
I connect to Ignite through the IgniteClient class. I made it a singleton, but the connection closes about 5 seconds after starting.
I've tried
ReconnectDisabled = false
and
SocketTimeout = TimeSpan.FromMilliseconds(System.Threading.Timeout.Infinite)
but neither worked. How can I keep the singleton's connection alive the whole time?
Here is my Ignite configuration code:
public CacheManager()
{
    ConnectIgnite();
}

public void ConnectIgnite()
{
    _ignite = Ignition.StartClient(GetIgniteConfiguration());
}

public IgniteClientConfiguration GetIgniteConfiguration()
{
    var appSettingsJson = AppSettingsJson.GetAppSettings();
    var igniteEndpoints = appSettingsJson["AppSettings:IgniteEndpoint"];
    var igniteUser = appSettingsJson["AppSettings:IgniteUser"];
    var ignitePassword = appSettingsJson["AppSettings:IgnitePassword"];

    var nodeList = igniteEndpoints.Split(",");

    var config = new IgniteClientConfiguration
    {
        Endpoints = nodeList,
        UserName = igniteUser,
        Password = ignitePassword,
        EnablePartitionAwareness = true,
        SocketTimeout = TimeSpan.FromMilliseconds(System.Threading.Timeout.Infinite)
    };

    return config;
}
I have a problem connecting to Redis right after my instance starts.
I use:
runtime: java
env: flex
runtime_config:
  jdk: openjdk8
I get the following exception:
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: connect timed out
RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
java.net.SocketTimeoutException: connect timed out
After 2-3 minutes it works smoothly.
Do I need to add some check in my code, or how should I fix this properly?
P.S.
I also use Spring Boot, with the following configuration:
@Value("${spring.redis.host}")
private String redisHost;

@Bean
JedisConnectionFactory jedisConnectionFactory() {
    // https://cloud.google.com/memorystore/docs/redis/quotas
    RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(redisHost, 6379);
    return new JedisConnectionFactory(config);
}

@Bean
public RedisTemplate<String, Object> redisTemplate(
        @Autowired JedisConnectionFactory jedisConnectionFactory
) {
    RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(jedisConnectionFactory);
    template.setKeySerializer(new StringRedisSerializer());
    template.setValueSerializer(new GenericJackson2JsonRedisSerializer(newObjectMapper()));
    return template;
}
In pom.xml:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-redis</artifactId>
    <version>2.1.2.RELEASE</version>
</dependency>
I solved this problem as follows: in short, I added a "ping" method that tries to set and then get a value from Redis; if that succeeds, the application is ready.
Implementation:
First, update app.yaml by adding the following:
readiness_check:
  path: "/readiness_check"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 300
Second, in your REST controller:
@GetMapping("/readiness_check")
public ResponseEntity<?> readiness_check() {
    if (!cacheConfig.ping()) {
        return ResponseEntity.notFound().build();
    }
    return ResponseEntity.ok().build();
}
Third, the ping() method in class CacheConfig:
public boolean ping() {
    long prefix = System.currentTimeMillis();
    try {
        redisTemplate.opsForValue().set("readiness_check_" + prefix, Boolean.TRUE, 100, TimeUnit.SECONDS);
        Boolean val = (Boolean) redisTemplate.opsForValue().get("readiness_check_" + prefix);
        return Boolean.TRUE.equals(val);
    } catch (Exception e) {
        LOGGER.info("ping failed for " + System.currentTimeMillis());
        return false;
    }
}
P.S.
Also, if somebody needs the full implementation of CacheConfig:
@Configuration
public class CacheConfig {

    private static final Logger LOGGER = Logger.getLogger(CacheConfig.class.getName());

    @Value("${spring.redis.host}")
    private String redisHost;

    private final RedisTemplate<String, Object> redisTemplate;

    @Autowired
    public CacheConfig(@Lazy RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    @Bean
    JedisConnectionFactory jedisConnectionFactory(
            @Autowired JedisPoolConfig poolConfig
    ) {
        // https://cloud.google.com/memorystore/docs/redis/quotas
        RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(redisHost, 6379);
        JedisClientConfiguration clientConfig = JedisClientConfiguration
                .builder()
                .usePooling()
                .poolConfig(poolConfig)
                .build();
        return new JedisConnectionFactory(config, clientConfig);
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate(
            @Autowired JedisConnectionFactory jedisConnectionFactory
    ) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer(newObjectMapper()));
        return template;
    }

    /**
     * Example: https://github.com/PengliuIBM/pws_demo/blob/1becdca1bc19320c2742504baa1cada3260f8d93/redisData/src/main/java/com/pivotal/wangyu/study/springdataredis/config/RedisConfig.java
     */
    @Bean
    redis.clients.jedis.JedisPoolConfig jedisPoolConfig() {
        final redis.clients.jedis.JedisPoolConfig poolConfig = new redis.clients.jedis.JedisPoolConfig();
        // Maximum active connections to the Redis instance
        poolConfig.setMaxTotal(16);
        // Number of connections to Redis that just sit there and do nothing
        poolConfig.setMaxIdle(16);
        // Minimum number of idle connections to Redis - these can be seen as always open and ready to serve
        poolConfig.setMinIdle(8);
        // Tests whether a connection is dead when it is borrowed from the pool
        poolConfig.setTestOnBorrow(true);
        // Tests whether a connection is dead when it is returned to the pool
        poolConfig.setTestOnReturn(true);
        // Tests whether connections are dead during idle periods
        poolConfig.setTestWhileIdle(true);
        return poolConfig;
    }

    public boolean ping() {
        long prefix = System.currentTimeMillis();
        try {
            redisTemplate.opsForValue().set("readiness_check_" + prefix, Boolean.TRUE, 100, TimeUnit.SECONDS);
            Boolean val = (Boolean) redisTemplate.opsForValue().get("readiness_check_" + prefix);
            return Boolean.TRUE.equals(val);
        } catch (Exception e) {
            LOGGER.info("ping failed for " + System.currentTimeMillis());
            return false;
        }
    }
}
As the Jedis documentation states, the Jedis client is not thread-safe:
A single Jedis instance is not threadsafe!
So I am using JedisPool. I want to push data from the server to the browser's WebSocket client, and for this I am using Redis's Pub/Sub mechanism.
@ServerEndpoint(value = "/websocket/{channelName}", configurator = GetHttpSessionConfigurator.class)
public class WSEndpoint {

    private WSJedisPubSub wsJedisPubSub;
    private static JedisPool jedisPool = null;

    @OnOpen
    public void onOpen(Session session,
            @PathParam("channelName") String channelName) throws IOException,
            EncodeException {
        // FIXME proper synchronization is required here (see the sketch below)
        if (jedisPool == null) {
            initPool();
        }
        wsJedisPubSub = new WSJedisPubSub(session);
        try (Jedis redisClient = jedisPool.getResource()) {
            redisClient.subscribe(wsJedisPubSub, channelName);
        }
    }

    private void initPool() {
        JedisPoolConfig jedisConfiguration = new JedisPoolConfig();
        jedisPool = new JedisPool(jedisConfiguration, "localhost", 6379);
    }
}
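One way to address the FIXME above is to let the JVM's class-initialization guarantees do the synchronization instead of a lock. This is only a sketch, not part of the original post; JedisPoolHolder is a made-up name:

import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public final class JedisPoolHolder {
    // The class loader runs this initializer exactly once, and the JLS
    // guarantees that class initialization is thread-safe.
    private static final JedisPool POOL =
            new JedisPool(new JedisPoolConfig(), "localhost", 6379);

    private JedisPoolHolder() {
    }

    public static JedisPool get() {
        return POOL;
    }
}

The endpoint would then call JedisPoolHolder.get().getResource() and could drop the null check entirely.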
My application can have thousands of WebSockets connected to it. I have doubts about the following piece of code:
try (Jedis redisClient = jedisPool.getResource()) {
    redisClient.subscribe(wsJedisPubSub, channelName);
}
This redisClient should get closed after the try-with-resources block, but it keeps working (receiving subscribed events). How?
By default, the pool size is 8. I can set it to n, but eventually I will have n+1 WebSockets. What is the best way to deal with this? Should I have only one Jedis instance and do the message routing myself?
If the Jedis client gets disconnected, what is the recommended way to reconnect here?
I'm trying to test the AutomaticRecoveryEnabled property of the RabbitMQ ConnectionFactory. I'm connecting to a RabbitMQ instance on a local VM, and on the client I'm publishing messages in a loop. The problem is that if I intentionally break the connection, the client just waits forever and doesn't time out. How do I set the timeout value? RequestedConnectionTimeout doesn't appear to have any effect.
I'm using RabbitMQ client 3.5.4.
Rudimentary publish loop:
// Client is a wrapper around the RabbitMQ client
for (var i = 0; i < 1000; ++i)
{
    // Publish sequentially numbered messages
    client.Publish("routingkey", GetContent(i));
    Thread.Sleep(100);
}
The Publish method inside the wrapper:
public bool Publish(string routingKey, byte[] body)
{
    try
    {
        using (var channel = _connection.CreateModel())
        {
            var basicProps = new BasicProperties
            {
                Persistent = true,
            };
            channel.ExchangeDeclare(_exchange, _exchangeType);
            channel.BasicPublish(_exchange, routingKey, basicProps, body);
            return true;
        }
    }
    catch (Exception e)
    {
        _logger.Log(e);
    }
    return false;
}
The connection and connection factory:
_connectionFactory = new ConnectionFactory
{
    UserName = _userName,
    Password = _password,
    HostName = _hostName,
    Port = _port,
    Protocol = Protocols.DefaultProtocol,
    VirtualHost = _virtualHost,
    // Doesn't seem to have any effect on broken connections
    RequestedConnectionTimeout = 2000,
    // The behaviour appears to be the same with or without these included
    // AutomaticRecoveryEnabled = true,
    // NetworkRecoveryInterval = TimeSpan.FromSeconds(10),
};

_connection = _connectionFactory.CreateConnection();
It appears this is a bug in version 3.5.4. Version 3.6.3 does not wait indefinitely.
In my case, the RabbitMQ server ran out of disk space, as shown below:
Filesystem                         1K-blocks     Used  Available  Use%  Mounted on
/dev/mapper/ramonubuntu--vg-root     6299376  5956336          0  100%  /
The producer publishes a message to the server (the message needs to be persisted) and then blocks forever, waiting for the publish confirmation. Sure, we should avoid letting the server run out of space in the first place, but is there any timeout mechanism that lets the producer give up waiting?
I have tried heartbeats and SO_TIMEOUT; neither works, since the network itself is fine. Below is my producer:
protected void publish(byte[] message) throws Exception {
    // ConnectionFactory can be reused between threads.
    ConnectionFactory factory = new SoTimeoutConnectionFactory();
    factory.setHost(this.getHost());
    factory.setVirtualHost("te");
    factory.setPort(5672);
    factory.setUsername("amqp");
    factory.setPassword("amqp");
    factory.setConnectionTimeout(10 * 1000);
    // doesn't help if the server is out of space
    factory.setRequestedHeartbeat(1);
    final Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();
    // declare a 'topic' type of exchange
    channel.exchangeDeclare(this.exchangeName, "topic", true);
    channel.addReturnListener(new ReturnListener() {
        @Override
        public void handleReturn(int replyCode, String replyText, String exchange, String routingKey,
                AMQP.BasicProperties properties, byte[] body) throws IOException {
            logger.warn("[X]Returned message(replyCode:" + replyCode + ",replyText:" + replyText
                    + ",exchange:" + exchange + ",routingKey:" + routingKey + ",body:" + new String(body));
        }
    });
    channel.confirmSelect();
    channel.addConfirmListener(new ConfirmListener() {
        @Override
        public void handleAck(long deliveryTag, boolean multiple) throws IOException {
            logger.info("Ack: " + deliveryTag);
            // RabbitMessagePublishMain.this.release(connection);
        }

        @Override
        public void handleNack(long deliveryTag, boolean multiple) throws IOException {
            logger.info("Nack: " + deliveryTag);
            // RabbitMessagePublishMain.this.release(connection);
        }
    });
    channel.basicPublish(this.exchangeName, RabbitMessageConsumerMain.EXCHANGE_NAME + ".-1", true,
            MessageProperties.PERSISTENT_BASIC, message);
    channel.waitForConfirmsOrDie(10 * 1000);
    // now we can close the connection
    connection.close();
}
It blocks at channel.waitForConfirmsOrDie(10 * 1000). Here is the SoTimeoutConnectionFactory:
public class SoTimeoutConnectionFactory extends ConnectionFactory {
    @Override
    protected void configureSocket(Socket socket) throws IOException {
        super.configureSocket(socket);
        socket.setSoTimeout(10 * 1000);
    }
}
I also captured the network traffic between the producer and RabbitMQ.
Please help.
You need to handle the connection blocked/unblocked notifications.
This is basically a way of notifying the publisher that the server is running low on resources. The advantage is that the publisher will also be notified once it is safe to publish again.
I would recommend that you take a look at this article. A simple way of implementing this is to keep a flag that indicates whether it is safe to publish; if it is not, wait until it becomes safe again (see the sketch below).
As an example, you can take a look at how I implemented this in one of my Python examples.
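For the Java producer above, a minimal sketch of that flag-based approach using the RabbitMQ Java client's BlockedListener callback might look like this (the BlockAwarePublisher class and its names are made up for illustration):

import java.util.concurrent.atomic.AtomicBoolean;

import com.rabbitmq.client.BlockedListener;
import com.rabbitmq.client.Connection;

public class BlockAwarePublisher {
    // Flipped by the broker's connection.blocked/unblocked notifications.
    private final AtomicBoolean blocked = new AtomicBoolean(false);

    public void watchConnection(Connection connection) {
        connection.addBlockedListener(new BlockedListener() {
            @Override
            public void handleBlocked(String reason) {
                // Broker is low on resources (e.g. disk space): stop publishing.
                blocked.set(true);
            }

            @Override
            public void handleUnblocked() {
                // Broker has recovered: safe to publish again.
                blocked.set(false);
            }
        });
    }

    public boolean isSafeToPublish() {
        return !blocked.get();
    }
}

The publish path would then check isSafeToPublish() (or wait until it returns true) before calling basicPublish, rather than relying on socket timeouts, which never fire here because the TCP connection itself stays healthy.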