Apache Ignite: deploying a cluster singleton service

I'm creating a Spring Java application with Apache Ignite embedded.
When the application starts up, it deploys a service (cluster singleton) to the cluster. That works fine.
But when I start a second node, which deploys this service too, the deployment locks up.
Why does this happen?
Apache Ignite 2.5.0
Code of the service class:
public class TaskLockWatcher implements Service {

    @IgniteInstanceResource
    private Ignite ignite;

    private SchedulerFuture scheduler;

    @Override
    public void cancel(ServiceContext ctx) {
        LOGGER.info("schedule service cancel");
        if (scheduler != null) {
            scheduler.cancel();
        }
    }

    @Override
    public void init(ServiceContext ctx) throws Exception {
        LOGGER.info("schedule service init");
    }

    @Override
    public void execute(ServiceContext ctx) throws Exception {
        LOGGER.info("schedule service execute");
        scheduler = ignite.scheduler().scheduleLocal(() -> {
            LOGGER.info("schedule service call");
            List<Ticket> tickets = getTicketForUnlock();
            if (!tickets.isEmpty()) {
                changeTicketStatus(tickets, TicketStatus.CREATED);
            }
        }, "* * * * *"); // cron: fire every minute
    }
}
But the lock-up occurs even if I reduce all TaskLockWatcher methods to just the log statements.
Publishing code:
Ignite ignite = connector.getClient();
IgniteServices svcs = ignite.services();
svcs.deployClusterSingleton("taskLockWatcher", new TaskLockWatcher());
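A note on a possible cause, with a minimal sketch (this is an assumption, not a confirmed diagnosis): deployClusterSingleton() blocks until the service is deployed cluster-wide, and because one data region in the configuration below has native persistence enabled, the cluster starts deactivated, so a synchronous deploy issued during startup can wait indefinitely. Activating the cluster first and deploying asynchronously avoids both waits:

Ignite ignite = connector.getClient();

// With native persistence the cluster starts inactive; activate it first
// (this is a no-op when the cluster is already active).
if (!ignite.cluster().active())
    ignite.cluster().active(true);

// Deploy without blocking the startup thread; the future completes once the
// singleton is deployed somewhere in the cluster.
ignite.services()
    .deployClusterSingletonAsync("taskLockWatcher", new TaskLockWatcher())
    .listen(fut -> System.out.println("taskLockWatcher deployed"));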
Ignite configuration code:
IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
DataStorageConfiguration dataStorageConfiguration = new DataStorageConfiguration();
dataStorageConfiguration.setWalMode(WALMode.LOG_ONLY);
dataStorageConfiguration.getDefaultDataRegionConfiguration().setPersistenceEnabled(false);
dataStorageConfiguration.getDefaultDataRegionConfiguration().setMaxSize(128L * 1024 * 1024); // 128 MB (setMaxSize takes bytes)
List<DataRegionConfiguration> list = new ArrayList<>();
DataRegionConfiguration configuration = new DataRegionConfiguration();
configuration.setName("TASK_REGION");
configuration.setMaxSize(1024L * 1024 * 1024); // 1 GB (setMaxSize takes bytes)
configuration.setPersistenceEnabled(true);
list.add(configuration);
dataStorageConfiguration.setDataRegionConfigurations(list.toArray(new DataRegionConfiguration[0]));
igniteConfiguration.setDataStorageConfiguration(dataStorageConfiguration);
TcpCommunicationSpi tcpCommunicationSpi = new TcpCommunicationSpi();
tcpCommunicationSpi.setSlowClientQueueLimit(1000);
igniteConfiguration.setCommunicationSpi(tcpCommunicationSpi);
Map<String, String> attrs = Collections.singletonMap("group.node.filter", "CLUSTER_TASKS,CLUSTER_TICKETS");
igniteConfiguration.setUserAttributes(attrs);
TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryZookeeperIpFinder ipFinder = new TcpDiscoveryZookeeperIpFinder();
ipFinder.setBasePath("/datagrid");
ipFinder.setCurator(getCuratorFramework());
spi.setIpFinder(ipFinder);
igniteConfiguration.setDiscoverySpi(spi);
AtomicConfiguration atomicConfiguration = new AtomicConfiguration();
atomicConfiguration.setBackups(1);
atomicConfiguration.setCacheMode(CacheMode.REPLICATED);
igniteConfiguration.setAtomicConfiguration(atomicConfiguration);
igniteConfiguration.setIncludeEventTypes(EventType.EVTS_CACHE);
CacheConfiguration<UUID, Ticket> cacheConfiguration = new CacheConfiguration<>("TICKETS");
cacheConfiguration.setIndexedTypes(UUID.class, Ticket.class);
cacheConfiguration.setNodeFilter((ClusterNode clusterNode) -> {
String dataNode = clusterNode.attribute("group.node.filter");
return dataNode != null && dataNode.contains("CLUSTER_TICKETS");
});
cacheConfiguration.setStatisticsEnabled(true);
cacheConfiguration.setDataRegionName("TASK_REGION");
cacheConfiguration.setBackups(1);
cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
cacheConfiguration.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
IgniteCache<UUID, Ticket> ticketCache = ignite.getOrCreateCache(cacheConfiguration);
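For completeness, a sketch of an alternative deployment style, assuming the service should only run on the task nodes: since the cache already filters nodes on the group.node.filter attribute, the same predicate can restrict the service via a static ServiceConfiguration, which Ignite deploys automatically on node start (no explicit deployClusterSingleton call needed):

// Hypothetical static singleton deployment mirroring the cache node filter.
ServiceConfiguration svcCfg = new ServiceConfiguration();
svcCfg.setName("taskLockWatcher");
svcCfg.setService(new TaskLockWatcher());
svcCfg.setTotalCount(1);      // cluster singleton
svcCfg.setMaxPerNodeCount(1);
svcCfg.setNodeFilter((ClusterNode node) -> {
    String attr = node.attribute("group.node.filter");
    return attr != null && attr.contains("CLUSTER_TASKS");
});
igniteConfiguration.setServiceConfiguration(svcCfg);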

Related

How can I keep alive the connection of Apache Ignite all the time as a singleton on AKS?

I'm using Apache Ignite on Azure Kubernetes Service as a distributed cache. Also, I have a web API on Azure based on .NET 6.
I connect to Ignite with the IgniteClient class. I made it a singleton, but the connection closes five seconds after starting.
I've tried
ReconnectDisabled = false
and
SocketTimeout = TimeSpan.FromMilliseconds(System.Threading.Timeout.Infinite)
but neither worked. How can I keep the connection alive all the time as a singleton?
Here is my Ignite configuration code:
public CacheManager()
{
    ConnectIgnite();
}

public void ConnectIgnite()
{
    _ignite = Ignition.StartClient(GetIgniteConfiguration());
}

public IgniteClientConfiguration GetIgniteConfiguration()
{
    var appSettingsJson = AppSettingsJson.GetAppSettings();
    var igniteEndpoints = appSettingsJson["AppSettings:IgniteEndpoint"];
    var igniteUser = appSettingsJson["AppSettings:IgniteUser"];
    var ignitePassword = appSettingsJson["AppSettings:IgnitePassword"];

    var nodeList = igniteEndpoints.Split(",");

    var config = new IgniteClientConfiguration
    {
        Endpoints = nodeList,
        UserName = igniteUser,
        Password = ignitePassword,
        EnablePartitionAwareness = true,
        SocketTimeout = TimeSpan.FromMilliseconds(System.Threading.Timeout.Infinite)
    };

    return config;
}

Redisson Connection Listener not getting invoked in version 3.7.0

I am trying to set up a connection listener for my Redisson client. It is not getting invoked on either connect or disconnect.
I tried the code mentioned on the Redisson GitHub, which is as below:
public void createRedisClient(Handler<AsyncResult<Redis>> handler) {
    ConfigRetriever configRetriever = UDSFBootStrapper.getInstance().getConfigRetriever();
    configRetriever.getConfig(
        config -> {
            String redisUrl = config.result().getString("redisip");
            redisUrl += ":";
            redisUrl += config.result().getInteger("redisport");

            Config rconfig = new Config();
            rconfig.setTransportMode(TransportMode.EPOLL);
            rconfig.useClusterServers()
                   .addNodeAddress(UdsfConstants.REDIS_CONNECTION_PREFIX + redisUrl);
            rclient = Redisson.create(rconfig);

            rclient.getNodesGroup().addConnectionListener(new ConnectionListener() {
                @Override
                public void onConnect(InetSocketAddress inetSocketAddress) {
                    logger.info("Redis server connected");
                }

                @Override
                public void onDisconnect(InetSocketAddress inetSocketAddress) {
                    logger.info("Redis server disconnected");
                }
            });
        });
}
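If the listener callbacks simply never fire in 3.7.0, one workaround is to poll connectivity instead of waiting for callbacks. A minimal sketch using NodesGroup.pingAll(); the watcher class, interval, and log messages here are made up for illustration:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.redisson.api.RedissonClient;

public class RedisConnectivityWatcher {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private volatile boolean connected = true;

    public void watch(RedissonClient rclient) {
        scheduler.scheduleAtFixedRate(() -> {
            // pingAll() returns true only when every known node answers PING.
            boolean ok = rclient.getNodesGroup().pingAll();
            if (ok != connected) {
                connected = ok;
                System.out.println(ok ? "Redis server connected" : "Redis server disconnected");
            }
        }, 0, 5, TimeUnit.SECONDS);
    }
}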

Google Cloud Memory Store (Redis), can't connect to redis when instance is just started

I have a problem connecting to Redis when my instance has just started.
I use:
runtime: java
env: flex
runtime_config:
  jdk: openjdk8
I get the following exception:
RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: connect timed out
Caused by: java.net.SocketTimeoutException: connect timed out
After 2-3 minutes, it works smoothly.
Do I need to add some check in my code, or how should I fix this properly?
P.S.
I also use Spring Boot, with the following configuration:
@Value("${spring.redis.host}")
private String redisHost;

@Bean
JedisConnectionFactory jedisConnectionFactory() {
    // https://cloud.google.com/memorystore/docs/redis/quotas
    RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(redisHost, 6379);
    return new JedisConnectionFactory(config);
}

@Bean
public RedisTemplate<String, Object> redisTemplate(
        @Autowired JedisConnectionFactory jedisConnectionFactory
) {
    RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(jedisConnectionFactory);
    template.setKeySerializer(new StringRedisSerializer());
    template.setValueSerializer(new GenericJackson2JsonRedisSerializer(newObjectMapper()));
    return template;
}
In pom.xml:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-redis</artifactId>
    <version>2.1.2.RELEASE</version>
</dependency>
I solved this problem as follows: in short, I added a "ping" method that tries to set and get a value from Redis; if that succeeds, the application is ready.
Implementation:
First, you need to update app.yaml, adding the following:
readiness_check:
  path: "/readiness_check"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 300
Second, in your REST controller:
@GetMapping("/readiness_check")
public ResponseEntity<?> readiness_check() {
    if (!cacheConfig.ping()) {
        return ResponseEntity.notFound().build();
    }
    return ResponseEntity.ok().build();
}
Third, the CacheConfig class:
public boolean ping() {
    long prefix = System.currentTimeMillis();
    try {
        redisTemplate.opsForValue().set("readiness_check_" + prefix, Boolean.TRUE, 100, TimeUnit.SECONDS);
        Boolean val = (Boolean) redisTemplate.opsForValue().get("readiness_check_" + prefix);
        return Boolean.TRUE.equals(val);
    } catch (Exception e) {
        LOGGER.info("ping failed for " + System.currentTimeMillis());
        return false;
    }
}
P.S.
If somebody needs it, here is the full implementation of CacheConfig:
@Configuration
public class CacheConfig {
    private static final Logger LOGGER = Logger.getLogger(CacheConfig.class.getName());

    @Value("${spring.redis.host}")
    private String redisHost;

    private final RedisTemplate<String, Object> redisTemplate;

    @Autowired
    public CacheConfig(@Lazy RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    @Bean
    JedisConnectionFactory jedisConnectionFactory(
            @Autowired JedisPoolConfig poolConfig
    ) {
        // https://cloud.google.com/memorystore/docs/redis/quotas
        RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(redisHost, 6379);
        JedisClientConfiguration clientConfig = JedisClientConfiguration
                .builder()
                .usePooling()
                .poolConfig(poolConfig)
                .build();
        return new JedisConnectionFactory(config, clientConfig);
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate(
            @Autowired JedisConnectionFactory jedisConnectionFactory
    ) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer(newObjectMapper()));
        return template;
    }

    /**
     * Example: https://github.com/PengliuIBM/pws_demo/blob/1becdca1bc19320c2742504baa1cada3260f8d93/redisData/src/main/java/com/pivotal/wangyu/study/springdataredis/config/RedisConfig.java
     */
    @Bean
    redis.clients.jedis.JedisPoolConfig jedisPoolConfig() {
        final redis.clients.jedis.JedisPoolConfig poolConfig = new redis.clients.jedis.JedisPoolConfig();
        // Maximum active connections to the Redis instance
        poolConfig.setMaxTotal(16);
        // Number of connections to Redis that just sit there and do nothing
        poolConfig.setMaxIdle(16);
        // Minimum number of idle connections to Redis - these can be seen as always open and ready to serve
        poolConfig.setMinIdle(8);
        // Tests whether a connection is dead when it is borrowed from the pool
        poolConfig.setTestOnBorrow(true);
        // Tests whether a connection is dead when it is returned to the pool
        poolConfig.setTestOnReturn(true);
        // Tests whether connections are dead during idle periods
        poolConfig.setTestWhileIdle(true);
        return poolConfig;
    }

    public boolean ping() {
        long prefix = System.currentTimeMillis();
        try {
            redisTemplate.opsForValue().set("readiness_check_" + prefix, Boolean.TRUE, 100, TimeUnit.SECONDS);
            Boolean val = (Boolean) redisTemplate.opsForValue().get("readiness_check_" + prefix);
            return Boolean.TRUE.equals(val);
        } catch (Exception e) {
            LOGGER.info("ping failed for " + System.currentTimeMillis());
            return false;
        }
    }
}

ActiveMQ fail-over of producer and consumer with a shared directory doesn't happen

We have two ActiveMQ (version 5.10.0) instances running, and I am using shared storage to achieve HA.
However, I am unable to see failover happening for the producer and consumer(s).
ActiveMQ broker-1 runs on IP1 and broker-2 on IP2.
In the activemq.xml configuration I have modified the persistence adapter to use a shared directory which is present on IP1.
<persistenceAdapter>
    <kahaDB directory="\\IP1\shared-directory\for activemq\data"/>
</persistenceAdapter>
On both the producer and consumer sides I am using the following JNDI configuration to get connections and build sessions, etc.
jndi.properties
java.naming.factory.initial = ..........ActiveMQInitialContextFactory
java.naming.provider.url = failover:(tcp://IP1:61616,tcp://IP2:61616)?randomize=false
connectionFactoryNames = myConnectionFactory
queue.requestQ = my.RequestQ
Interesting part is:
When I start this broker pair, I see that one of the brokers becomes master.
When I start the producer, it puts messages on the queue (say the producer has put 100 messages on the queue). While my producer is still running, I shut down the master broker, so the slave broker acquires the file lock and becomes master. When I open the web console I see that the 100 messages are still on the queue, but even though the producer is running, it no longer puts any messages on the queue.
The same happens for the consumers.
The consumer was picking messages from the queue; say the queue had 100 unconsumed messages when the master failed. The master goes down, the slave becomes master, and I see the 100 messages are still unconsumed, but the consumer does not pick any message from the queue.
I waited for them to fail over for a long time (>10 min).
Can anyone please suggest what configuration I am missing?
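One way to check whether the failover transport is reconnecting at all is to attach a TransportListener to the JMS connection. This is a diagnostic sketch, not a fix; it assumes the connection can be cast to ActiveMQConnection, which holds for the producer and consumer below:

import java.io.IOException;

import javax.jms.Connection;

import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.transport.TransportListener;

public final class FailoverDiagnostics {
    /** Call right after factory.createConnection() to log failover activity. */
    public static void watch(Connection connection) {
        ((ActiveMQConnection) connection).addTransportListener(new TransportListener() {
            public void onCommand(Object command) {
                // Called for every inbound command; usually too noisy to log.
            }
            public void onException(IOException error) {
                System.out.println("Transport exception: " + error);
            }
            public void transportInterupted() { // (sic: method name as spelled in the API)
                System.out.println("Transport interrupted - failover in progress");
            }
            public void transportResumed() {
                System.out.println("Transport resumed - reconnected to a broker");
            }
        });
    }
}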
I am copy-pasting the producer and consumer as-is (copied from the ActiveMQ in Action book with minor modifications).
Producer
public class Producer {
    private static String brokerURL = "failover:(tcp://IP1:3389,tcp://IP2:3389)";
    private static transient ConnectionFactory factory;
    private transient Connection connection;
    private transient Session session;
    private transient MessageProducer producer;
    private static int count = 10;
    private static int total;
    private static int id = 1000000;
    private String jobs[] = new String[] { "suspend", "delete" };

    public Producer() throws JMSException {
        factory = new ActiveMQConnectionFactory(brokerURL);
        connection = factory.createConnection();
        connection.start();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        producer = session.createProducer(null);
    }

    public void close() throws JMSException {
        if (connection != null) {
            connection.close();
        }
    }

    public static void main(String[] args) throws JMSException {
        Producer producer = new Producer();
        while (total < 1000) {
            for (int i = 0; i < count; i++) {
                producer.sendMessage();
            }
            total += count;
            System.out.println("Sent '" + count + "' of '" + total
                    + "' job messages");
            try {
                Thread.sleep(1000);
            } catch (InterruptedException x) {
            }
        }
        producer.close();
    }

    public void sendMessage() throws JMSException {
        int idx = 0;
        while (true) {
            idx = (int) Math.round(jobs.length * Math.random());
            if (idx < jobs.length) {
                break;
            }
        }
        String job = jobs[idx];
        Destination destination = session.createQueue("JOBS." + job);
        Message message = session.createObjectMessage(id++);
        System.out.println("Sending: id: "
                + ((ObjectMessage) message).getObject() + " on queue: "
                + destination);
        producer.send(destination, message);
    }
}
Consumer
public class Consumer {
    private static String brokerURL = "failover:(tcp://IP1:3389,tcp://IP2:3389)";
    private static transient ConnectionFactory factory;
    private transient Connection connection;
    private transient Session session;
    private String jobs[] = new String[] { "suspend", "delete" };

    public Consumer() throws JMSException {
        factory = new ActiveMQConnectionFactory(brokerURL);
        connection = factory.createConnection();
        connection.start();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    }

    public void close() throws JMSException {
        if (connection != null) {
            connection.close();
        }
    }

    public static void main(String[] args) throws JMSException {
        Consumer consumer = new Consumer();
        for (String job : consumer.jobs) {
            Destination destination = consumer.getSession().createQueue(
                    "JOBS." + job);
            MessageConsumer messageConsumer = consumer.getSession()
                    .createConsumer(destination);
            messageConsumer.setMessageListener(new Listener(job));
        }
    }

    public Session getSession() {
        return session;
    }
}
Just one more thing: I am more interested in consumer failover than producer failover.
One more observation: the consumer stops and drops back to the command prompt abruptly.
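A plausible explanation for that last observation (an assumption, since the Listener class isn't shown): Consumer.main() returns as soon as the listeners are registered, so once the broker connection drops during failover there may be no non-daemon thread left and the JVM exits. A minimal sketch that keeps the consumer alive, to be placed at the end of Consumer.main() (requires java.util.concurrent.CountDownLatch):

// Block forever so the JVM stays up and the MessageListeners keep
// receiving after the failover transport reconnects.
CountDownLatch keepAlive = new CountDownLatch(1);
try {
    keepAlive.await();
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}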
Thank you.
-JE

What is the magic behind the scenes of Java EE JMS?

I've just started with Java EE 7, and here is something I couldn't grasp: how it magically works.
I followed the example from the book Beginning Java EE 7 by Antonio Goncalves. I managed to compile and deploy the code of chapter 13 (about JMS) without any problem. Messages are sent and received as expected, but that leaves me confused.
The source code consists of a consumer class, a producer class, a POJO, and an MDB class.
Here is the consumer:
public class OrderConsumer {
    public static void main(String[] args) throws NamingException {
        // Gets the JNDI context
        Context jndiContext = new InitialContext();

        // Looks up the administered objects
        ConnectionFactory connectionFactory = (ConnectionFactory) jndiContext.lookup("jms/javaee7/ConnectionFactory");
        Destination topic = (Destination) jndiContext.lookup("jms/javaee7/Topic");

        // Loops to receive the messages
        System.out.println("\nInfinite loop. Waiting for a message...");
        try (JMSContext jmsContext = connectionFactory.createContext()) {
            while (true) {
                OrderDTO order = jmsContext.createConsumer(topic).receiveBody(OrderDTO.class);
                System.out.println("Order received: " + order);
            }
        }
    }
}
The producer:
public class OrderProducer {
    public static void main(String[] args) throws NamingException {
        if (args.length != 1) {
            System.out.println("usage : enter an amount");
            System.exit(0);
        }
        System.out.println("Sending message with amount = " + args[0]);

        // Creates an orderDto with a total amount parameter
        Float totalAmount = Float.valueOf(args[0]);
        OrderDTO order = new OrderDTO(1234L, new Date(), "Serge Gainsbourg", totalAmount);

        // Gets the JNDI context
        Context jndiContext = new InitialContext();

        // Looks up the administered objects
        ConnectionFactory connectionFactory = (ConnectionFactory) jndiContext.lookup("jms/javaee7/ConnectionFactory");
        Destination topic = (Destination) jndiContext.lookup("jms/javaee7/Topic");

        try (JMSContext jmsContext = connectionFactory.createContext()) {
            // Sends an object message to the topic
            jmsContext.createProducer().setProperty("orderAmount", totalAmount).send(topic, order);
            System.out.println("\nOrder sent : " + order.toString());
        }
    }
}
The MDB:
@MessageDriven(mappedName = "jms/javaee7/Topic", activationConfig = {
    @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
    @ActivationConfigProperty(propertyName = "messageSelector", propertyValue = "orderAmount > 1000")
})
public class ExpensiveOrderMDB implements MessageListener {
    public void onMessage(Message message) {
        try {
            OrderDTO order = message.getBody(OrderDTO.class);
            System.out.println("Expensive order received: " + order.toString());
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}
The content of the message is encapsulated in a POJO that implements the Serializable interface.
ExpensiveOrderMDB and the POJO are packaged in a .jar file and deployed to a GlassFish server running locally. Connection and destination resources are created via asadmin.
Question is: how do the consumer and producer know that the connection and destination are available on the local GlassFish server, so they can connect and send/receive messages? (The lines that create the connection and destination say nothing about the local GlassFish server.)
Probably there is a jndi.properties file in which the connection to GlassFish is defined.
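For GlassFish's application client container that guess is usually right: the standalone client picks up a jndi.properties (typically via gf-client.jar on the classpath) that points the InitialContext at the server's ORB endpoint, which defaults to localhost:3700. A sketch of what the no-argument lookup resolves to, with the defaults spelled out explicitly (host and port are the GlassFish defaults, adjust as needed):

import java.util.Properties;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class GlassFishJndiExample {
    public static Context createContext() throws NamingException {
        Properties props = new Properties();
        // GlassFish's naming provider, shipped in gf-client.jar.
        props.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.enterprise.naming.SerialInitContextFactory");
        // Explicit defaults: the server's ORB listens on localhost:3700.
        props.put("org.omg.CORBA.ORBInitialHost", "localhost");
        props.put("org.omg.CORBA.ORBInitialPort", "3700");
        return new InitialContext(props);
    }
}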