Apache Geode CacheListener not executing - GemFire

This is a newbie question, thanks for reading it. I start a Geode locator like this:
gfsh>start locator --name=myLocator
and a server process
start server --cache-xml-file=D:\Geode\config\cache.xml --name=myGeode --locators=localhost[10334]
The cache.xml defines a replicated region called myRegion:
<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://geode.apache.org/schema/cache"
xsi:schemaLocation="http://geode.apache.org/schema/cache http://geode.apache.org/schema/cache/cache-1.0.xsd"
version="1.0">
<cache-server/>
<region name="myRegion" refid="REPLICATE"/>
</cache>
Then, in a second process, I use the Pivotal Native Client for .NET to start a client cache with a cache event listener, as follows:
CacheFactory cacheFactory = CacheFactory.CreateCacheFactory();
Cache cache = cacheFactory.SetSubscriptionEnabled(true).Create();
RegionFactory regionFactory = cache.CreateRegionFactory(RegionShortcut.CACHING_PROXY);
IRegion<string, string> region = regionFactory.Create<string, string>("myRegion");
region.AttributesMutator.SetCacheListener(new MyEventHandler<string, string>());
The MyEventHandler is:
public class MyEventHandler<TKey, TVal> : ICacheListener<TKey, TVal>
{
public void AfterCreate(EntryEvent<TKey, TVal> ev)
{
Console.WriteLine("Received AfterCreate event for: {0}", ev.Key.ToString());
}
...
}
Then, in a third process, I create another client cache and use it to put some data into myRegion. It's the same setup as the second process, just without the listener:
CacheFactory cacheFactory = CacheFactory.CreateCacheFactory();
Cache cache = cacheFactory.SetSubscriptionEnabled(true).Create();
RegionFactory regionFactory = cache.CreateRegionFactory(RegionShortcut.CACHING_PROXY);
IRegion<string, string> region = regionFactory.Create<string, string>("myRegion");
region["testKey"] = "testValue";
The problem is that after the third process puts the test data into myRegion (which I can see on the server, so that part works), the listener in the second process never fires. What am I missing?
Thanks...

On the client where you have the listener, you need to register interest in either all keys or a subset of keys so that the server knows to send updates to the client.
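With the .NET native client, a minimal sketch of that registration could look like the line below. This assumes the region and listener setup from the question and the client's subscription API (GetSubscriptionService()); RegisterAllKeys() subscribes to every key, while RegisterKeys()/RegisterRegex() cover a subset.
// In the second (listening) process, after attaching the listener:
region.GetSubscriptionService().RegisterAllKeys();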

Related

Apache ignite client node reconnect getting error org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed to perform cache operation

I have started an Ignite server, and my app joins as a client node using the following configuration:
public IgniteConfigurer config() {
return cfg -> {
// The node will be started as a client node.
cfg.setClientMode(true);
// Peer class loading is disabled, so custom classes will not be transferred over the wire.
cfg.setPeerClassLoadingEnabled(false);
// Setting up an IP Finder to ensure the client can locate the servers.
final TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Arrays.asList(ip));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
// Cache Metrics log frequency. If 0 then log print disable.
cfg.setMetricsLogFrequency(Integer.parseInt(cacheMetricsLogFrequency));
// setting up storage configuration
final DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
storageCfg.setStoragePath(cacheStorage);
// setting up data region for storage
final DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
defaultRegion.setName(cacheDefaultRegionName);
// Sets initial memory region size. When the used memory size exceeds this value, new chunks of memory will be allocated
defaultRegion.setInitialSize(Long.parseLong(cacheRegionInitSize));
storageCfg.setDefaultDataRegionConfiguration(defaultRegion);
cfg.setDataStorageConfiguration(storageCfg);
cfg.setWorkDirectory(cacheStorage);
final TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
// Sets message queue limit for incoming and outgoing messages
communicationSpi.setMessageQueueLimit(Integer.parseInt(cacheTcpCommunicationSpiMessageQueueLimit));
cfg.setCommunicationSpi(communicationSpi);
final CacheCheckpointSpi cpSpi = new CacheCheckpointSpi();
cfg.setCheckpointSpi(cpSpi);
final FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi();
// Execute all jobs sequentially by setting parallel job number to 1.
colSpi.setParallelJobsNumber(Integer.parseInt(cacheParallelJobs));
cfg.setCollisionSpi(colSpi);
// set failure handler for auto connection if ignite server stop/starts.
cfg.setFailureHandler(new StopNodeFailureHandler());
};
}
Everything works fine. Now I stop the Ignite server and restart it. After the restart, when I perform any cache operation I get an error like:
Caused by: class org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed to perform cache operation (cache is stopped): mycache1
... 63 more
When I look at the Ignite server logs, they show that the client is connected. See the logs below:
[17:25:41] ^-- Baseline [id=0, size=1, online=1, offline=0]
[17:25:42] Topology snapshot [ver=2, locNode=ea964803, servers=1, clients=1, state=ACTIVE, CPUs=8, offheap=6.3GB, heap=4.5GB]
[17:25:42] ^-- Baseline [id=0, size=1, online=1, offline=0]
So why is the application, which is running as a client node, not allowed to perform any cache operation?
It looks like you are creating your "mycache1" inside the default data region which is not configured to be persistent.
I.e. you first define a default region to be persistent:
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
storageCfg.setStoragePath(cacheStorage);
But further down you re-create it without calling setPersistenceEnabled:
final DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
defaultRegion.setName(cacheDefaultRegionName);
// Sets initial memory region size. When the used memory size exceeds this value, new chunks of memory will be allocated
defaultRegion.setInitialSize(Long.parseLong(cacheRegionInitSize));
storageCfg.setDefaultDataRegionConfiguration(defaultRegion);
So you need to enable persistence on the defaultRegion you actually install, i.e. call defaultRegion.setPersistenceEnabled(true) before storageCfg.setDefaultDataRegionConfiguration(defaultRegion). With persistence enabled there, I think you won't see the CacheStoppedException anymore.
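A sketch of the corrected block, reusing the variable names from the question:
final DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setStoragePath(cacheStorage);
final DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
defaultRegion.setName(cacheDefaultRegionName);
defaultRegion.setInitialSize(Long.parseLong(cacheRegionInitSize));
// Enable persistence on the region that is actually installed as the default.
defaultRegion.setPersistenceEnabled(true);
storageCfg.setDefaultDataRegionConfiguration(defaultRegion);
cfg.setDataStorageConfiguration(storageCfg);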
As for the in-memory configuration (which I think was applied here instead) and dynamically created caches, this is expected behavior: the restarted server knows nothing about the previously created caches, and you need to recreate them explicitly, doing something like:
try {
    ...
} catch (Exception exception) {
    if (exception instanceof IgniteException) {
        final Throwable rootCause = getRootCause(exception);
        if (rootCause instanceof CacheStoppedException) {
            ignite.cache("mycache1");
            mylogger.info("Connection re-established with the cache.");
        }
    }
}

Jedis behaves unexpectedly with multiple sentinels in Redis

I am using Spring 2.1.1 and Redis 4.0.1. I have configured two nodes: one with IP 192.168.20.40 running the master configuration, and the other with IP 192.168.20.55 running the slave configuration. I am running a Spring Boot application using Jedis (not using spring-jedis) on the two systems, and different conditions occur:
@Bean
public JedisSentinelPool jedisSentinelPool() {
Set<String> sentinels=new HashSet<>();
sentinels.add("192.168.20.40:26379");
sentinels.add("192.168.20.55:26379");
JedisSentinelPool jedisSentinelPool=new JedisSentinelPool("mymaster", sentinels);
return jedisSentinelPool;
}
When running the application on the master node (Redis configured as master), data gets entered into the cache.
When running the application on the slave node (Redis configured as slave), an exception occurs:
(i) I am able to get the Jedis object from the sentinel pool but unable to store data into Redis; the exception is "redis.clients.jedis.exceptions.JedisDataException: READONLY You can't write against a read only slave."
When running the application on another server (192.168.20.33), with the Redis servers hosted on 192.168.20.40 and 192.168.20.55, my application is unable to get the Jedis object from the sentinel pool:
public String addToCache(@PathVariable("cacheName") String cacheName, HttpEntity<String> httpEntity, @PathVariable("key") String key) {
try (Jedis jedis = jedisPool.getResource();) {
long dataToEnter = jedis.hset(cacheName.getBytes(), key.getBytes(), httpEntity.getBody().getBytes());
if (dataToEnter == 0)
log.info("data existed in cache {} get updated ",cacheName);
else
log.info("new data inserted in cache {}",cacheName);
} catch (Exception e) {
System.out.println(e);
}
return httpEntity.getBody();
}
Any input would be appreciated.
Can you please check your Redis configuration file (redis.conf)? The slave should have read-only mode enabled by default. You need to change the read-only mode to false.
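In Redis 4.x the relevant redis.conf directive on the slave is slave-read-only; a minimal sketch of the change (keep in mind that writes made directly to a slave are not replicated back to the master):
slave-read-only no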

S3A client and local S3 mock

To create end-to-end local tests of a data workflow I use a "mock S3" container (e.g. adobe/S3Mock). It seems to work just fine. However, some parts of the system rely on the S3A client. As far as I can see, its URL format does not allow pointing to a particular nameserver or endpoint.
Is it possible to make S3A work in a local environment?
You're talking about the ASF Hadoop S3A connector? Nobody has tested it against S3Mock AFAIK (never seen it before!), but it does work with non-AWS endpoints.
Set fs.s3a.endpoint to the URL of your S3 connection. There are also settings for switching from https to http (fs.s3a.connection.ssl.enabled = false) and moving from virtual-host-style to path-style access (fs.s3a.path.style.access = true), which will be needed as well.
further reading
Like I said: nobody has done this. We developers just go against the main AWS endpoints with all their problems (latency, inconsistency, error reporting, etc.), precisely because that's what you get in production. But for your local testing, it will simplify your life (and you can run it under Jenkins without having to give it any secrets).
The answer by @stevel worked for me. Here is the code if someone wants to refer to it:
class S3WriterTest {
private static S3Mock api;
private static AmazonS3 mockS3client;
@BeforeAll
public static void setUp() {
//start mock s3 service using findify
api = new S3Mock.Builder().withPort(8001).withInMemoryBackend().build();
api.start();
/* AWS S3 client setup.
* withPathStyleAccessEnabled(true) trick is required to overcome S3 default
* DNS-based bucket access scheme
* resulting in attempts to connect to addresses like "bucketname.localhost"
* which requires specific DNS setup.
*/
EndpointConfiguration endpoint = new EndpointConfiguration("http://localhost:8001", "us-west-2");
mockS3client = AmazonS3ClientBuilder
.standard()
.withEndpointConfiguration(endpoint)
.withPathStyleAccessEnabled(true)
.withCredentials(new AWSStaticCredentialsProvider(new AnonymousAWSCredentials()))
.build();
mockS3client.createBucket("test-bucket");
}
@AfterAll
public static void tearDown() {
api.shutdown();
}
@Test
void unitTestForHadoopCodeWritingUsingS3A() {
Configuration hadoopConfig = getTestConfiguration();
........
}
private static Configuration getTestConfiguration() {
Configuration config = new Configuration();
config.set("fs.s3a.endpoint", "http://127.0.0.1:8001");
config.set("fs.s3a.connection.ssl.enabled", "false");
config.set("fs.s3a.path.style.access", "true");
config.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider");
config.set("fs.s3a.access.key", "foo");
config.set("fs.s3a.secret.key", "bar");
return config;
}
}

Can't connect to a remote ZooKeeper from a Kafka producer

I've been playing with Apache Kafka for a few days, and here is my problem,
If I set up the local test described in the "quick start" section on the website, everything is fine: the Kafka producer/consumer, ZooKeeper server and Kafka broker work perfectly.
Now if I run on a remote server (let's call it node2) :
- Zookeeper - port 2181
- Kafka Broker - port 9092
- kafka consumer
And then if I run from my local computer :
- kafka producer
Assuming that there is no firewall on node2, the connection ends up with a timeout.
Here is the error log:
/etc/java/jdk1.6.0_41/bin/java -Didea.launcher.port=7533 -Didea.launcher.bin.path=/home/kevin/Documents/idea-IU-123.169/bin -Dfile.encoding=UTF-8 -classpath /etc/java/jdk1.6.0_41/lib/dt.jar:/etc/java/jdk1.6.0_41/lib/tools.jar:/etc/java/jdk1.6.0_41/lib/jconsole.jar:/etc/java/jdk1.6.0_41/lib/htmlconverter.jar:/etc/java/jdk1.6.0_41/lib/sa-jdi.jar:/home/kevin/Desktop/kafka-0.7.2/examples/target/scala_2.8.0/classes:/home/kevin/Desktop/kafka-0.7.2/project/boot/scala-2.8.0/lib/scala-compiler.jar:/home/kevin/Desktop/kafka-0.7.2/project/boot/scala-2.8.0/lib/scala-library.jar:/home/kevin/Desktop/kafka-0.7.2/core/target/scala_2.8.0/classes:/home/kevin/Desktop/kafka-0.7.2/core/lib_managed/scala_2.8.0/compile/jopt-simple-3.2.jar:/home/kevin/Desktop/kafka-0.7.2/core/lib_managed/scala_2.8.0/compile/log4j-1.2.15.jar:/home/kevin/Desktop/kafka-0.7.2/core/lib_managed/scala_2.8.0/compile/zookeeper-3.3.4.jar:/home/kevin/Desktop/kafka-0.7.2/core/lib_managed/scala_2.8.0/compile/zkclient-0.1.jar:/home/kevin/Desktop/kafka-0.7.2/core/lib_managed/scala_2.8.0/compile/snappy-java-1.0.4.1.jar:/home/kevin/Desktop/kafka-0.7.2/examples/lib_managed/scala_2.8.0/compile/jopt-simple-3.2.jar:/home/kevin/Desktop/kafka-0.7.2/examples/lib_managed/scala_2.8.0/compile/log4j-1.2.15.jar:/home/kevin/Documents/idea-IU-123.169/lib/idea_rt.jar com.intellij.rt.execution.application.AppMain kafka.examples.KafkaConsumerProducerDemo
log4j:WARN No appenders could be found for logger (org.I0Itec.zkclient.ZkConnection).
log4j:WARN Please initialize the log4j system properly.
Exception in thread "Thread-0" java.net.ConnectException: Connection timed out
at sun.nio.ch.Net.connect(Native Method)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:532)
at kafka.producer.SyncProducer.connect(SyncProducer.scala:173)
at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:196)
at kafka.producer.SyncProducer.send(SyncProducer.scala:92)
at kafka.producer.SyncProducer.send(SyncProducer.scala:125)
at kafka.producer.ProducerPool$$anonfun$send$1.apply$mcVI$sp(ProducerPool.scala:114)
at kafka.producer.ProducerPool$$anonfun$send$1.apply(ProducerPool.scala:100)
at kafka.producer.ProducerPool$$anonfun$send$1.apply(ProducerPool.scala:100)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
at kafka.producer.ProducerPool.send(ProducerPool.scala:100)
at kafka.producer.Producer.zkSend(Producer.scala:137)
at kafka.producer.Producer.send(Producer.scala:99)
at kafka.javaapi.producer.Producer.send(Producer.scala:103)
at kafka.examples.Producer.run(Producer.java:53)
Process finished with exit code 0
And here is my Producer's code:
import java.util.Properties;
import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;
public class Producer extends Thread{
private final kafka.javaapi.producer.Producer<String, String> producer;
private final String topic;
private final Properties props = new Properties();
public Producer(String topic)
{
props.put("zk.connect", "node2:2181");
props.put("connect.timeout.ms", "5000");
props.put("socket.timeout.ms", "30000");
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("producer.type", "sync");
props.put("conpression.codec", "0");
producer = new kafka.javaapi.producer.Producer<String, String>(new ProducerConfig(props));
this.topic = topic;
}
public void run() {
String messageStr = new String("Message_test");
producer.send(new ProducerData<String, String>(topic, messageStr));
}
}
So I also tested replacing
props.put("zk.connect", "node2:2181");
with
props.put("broker.list", "0:node2:9082");
and in that case I can connect successfully.
See item #3 in http://kafka.apache.org/faq.html
The workaround is to explicitly set the hostname property in Kafka's server.properties.
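A sketch of the relevant entry in server.properties on node2; the Kafka 0.7.x broker exposes a hostname property, and the value shown here is an assumption based on the question:
hostname=node2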
You can verify this using ZooKeeper. If you are using Kafka 0.7*, open the ZkCli console and run get /brokers/ids/0; you should get all the broker metadata. Make sure the IP address/hostname there matches the ZK connect string you are using in the producer code:
props.put("zk.connect", "node2:2181");
In my case, I was using a producer running on my local machine connecting to an Ubuntu VM (same box, different IP), and this workaround helped.

JBossCache eviction listener

I am new to JBossCache. The user documentation says that a listener can be added to the eviction class used, but I wasn't able to find out how to add one, either in the configuration file or programmatically.
I have tried to add a @CacheListener with a @NodeEvicted method:
@CacheListener
public class EvictionListener {
@NodeEvicted
public void nodeEvicted(NodeEvent ne) {
System.out.println("Se borro el nodo");
}
}
and added it to the cache instance:
CacheFactory factory = new DefaultCacheFactory();
this.cache = factory.createCache();
EvictionListener listener = new EvictionListener();
this.cache.create();
this.cache.addCacheListener(listener);
but the sysout is never executed. For testing it, I am just running a simple Main class.
This is the configuration file I am using:
<jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:jboss:jbosscache-core:config:3.2">
<transaction transactionManagerLookupClass="org.jboss.cache.transaction.GenericTransactionManagerLookup"/>
<eviction wakeUpInterval="20">
<default algorithmClass="org.jboss.cache.eviction.FIFOAlgorithm" wakeUpInterval="20">
<property name="maxNodes" value="20" />
</default>
</eviction>
</jbosscache>
The problem was that I wasn't reading the XML configuration file.
I was missing:
factory.createCache(file);
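A minimal sketch of the corrected startup, assuming the configuration above is saved as "jbosscache-config.xml" (the file name is illustrative):
CacheFactory<String, String> factory = new DefaultCacheFactory<String, String>();
// Read the eviction configuration so the eviction thread actually runs.
Cache<String, String> cache = factory.createCache("jbosscache-config.xml", false);
cache.create();
cache.addCacheListener(new EvictionListener());
cache.start();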