I have a client/server test that uses a custom CacheStore implementation. I want the custom CacheStore on the server nodes but not on the client nodes; however, Ignite is trying to load the custom implementation on the client. Is there a way to avoid this?
Server code:
IgniteConfiguration igniteCfg = new IgniteConfiguration();
Ignite ignite = Ignition.start(igniteCfg);
CacheConfiguration<String, String> cacheCfg = new CacheConfiguration<>("test");
cacheCfg.setReadThrough(true);
cacheCfg.setCacheStoreFactory(new CacheStoreFactory());
IgniteCache<String, String> cache = ignite.getOrCreateCache(cacheCfg);
CacheStore:
public class CacheStoreFactory implements Factory<CacheStore<? super String, ? super String>> {
@Override
public CacheStore<? super String, ? super String> create() {
return new CacheStoreAdapter<String, String>() {
@Override
public String load(String key) throws CacheLoaderException {
System.out.println("load: key=" + key);
return key;
}
@Override
public void write(Entry<? extends String, ? extends String> entry) throws CacheWriterException {
}
@Override
public void delete(Object key) throws CacheWriterException {
}
};
}
}
Client:
IgniteConfiguration igniteCfg = new IgniteConfiguration();
igniteCfg.setClientMode(true);
Ignite ignite = Ignition.start(igniteCfg);
CacheConfiguration<String, String> cacheCfg = new CacheConfiguration<>("test");
IgniteCache<String, String> cache = ignite.getOrCreateCache(cacheCfg);
String value = cache.get("someKey");
The CacheStore is called correctly on the server, and the client appears to work, but the client logs this error:
Caused by: java.lang.ClassNotFoundException: org.dalsing.ignite.server.test.CacheStoreFactory
The cache store on the client is required for TRANSACTIONAL caches. For ATOMIC caches (this is the default mode), it's not needed, so you can safely ignore the exception.
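If you want to make that explicit, here is a minimal sketch of the server-side cache configuration with the atomicity mode spelled out (ATOMIC is already the default, so this only documents the existing behaviour):
CacheConfiguration<String, String> cacheCfg = new CacheConfiguration<>("test");
// org.apache.ignite.cache.CacheAtomicityMode; ATOMIC is the default mode
cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheCfg.setReadThrough(true);
cacheCfg.setCacheStoreFactory(new CacheStoreFactory());
IgniteCache<String, String> cache = ignite.getOrCreateCache(cacheCfg);
With an ATOMIC cache the client does not need the store, so the factory class can stay off the client classpath.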
I want to store my WebSession in Redis. There is no problem with the put operation, but it throws an exception when retrieving the stored record. Here is an example stack trace:
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Unexpected token (START_OBJECT), expected START_ARRAY: need JSON Array to contain As.WRAPPER_ARRAY type information for class java.lang.Object
at [Source: (byte[])"{"attributes":["org.springframework.session.web.server.session.SpringSessionWebSessionStore$SpringSessionMap",{}],"id":"2a5c3d9b-3557-4bd6-bca8-9e221c3a5b41","lastAccessTime":{"nano":800305900,"epochSecond":1605268779},"creationTime":{"nano":800305900,"epochSecond":1605268779},"expired":false,"maxIdleTime":{"seconds":5400,"nano":0,"negative":false,"zero":false,"units":["java.util.ImmutableCollections$List12",[["java.time.temporal.ChronoUnit","SECONDS"],["java.time.temporal.ChronoUnit","NANOS"]]]"[truncated 18 bytes]; line: 1, column: 1]
How can I solve this problem? I don't understand why it is happening. Thanks.
Here is my session service.
@Component
@RequiredArgsConstructor
@Slf4j
public class SessionMappingStorage {
private static final String USR_TO_SESSION___KEY = "USR_SESSION_MAP";
private final ReactiveHashOperations<String, String, Object> hashOps;
public Mono<Boolean> addSession(String username, WebSession session) {
return hashOps.put(USR_TO_SESSION___KEY, username, session);
}
public Mono<WebSession> getSessionByUserId(String username) {
return hashOps.get(USR_TO_SESSION___KEY, username).cast(WebSession.class);
}
}
Here is my redis configuration.
@Bean
public ReactiveRedisTemplate<String, String> reactiveRedisTemplate() {
RedisSerializer keySerializer, hashKeySerializer, hashValueSerializer;
keySerializer = hashKeySerializer = new StringRedisSerializer();
hashValueSerializer = new GenericJackson2JsonRedisSerializer(objectMapper());
RedisSerializationContext.RedisSerializationContextBuilder<String, String> builder =
RedisSerializationContext.newSerializationContext(keySerializer);
RedisSerializationContext<String, String> context =
builder.hashKey(hashKeySerializer).hashValue(hashValueSerializer).build();
return new ReactiveRedisTemplate<>(reactiveRedisConnectionFactory(), context);
}
@Bean
public ReactiveHashOperations<String, String, Object> hashOps() {
return reactiveRedisTemplate().opsForHash();
}
private ObjectMapper objectMapper() {
ObjectMapper mapper = new ObjectMapper();
mapper.enableDefaultTyping();
return mapper;
}
I have the Spring Boot code below to receive values whenever a Redis stream is appended with a new record. The problem is that the listener never receives any message; also, the subscription, when checked with subscription.isActive(), is always inactive. What's wrong in this code? What did I miss? Doc for reference.
On Spring Boot startup, initialize the necessary Redis resources:
Lettuce connection factory
@Bean
public RedisConnectionFactory redisConnectionFactory() {
return new LettuceConnectionFactory("127.0.0.1", 6379);
}
RedisTemplate from the connection factory
@Bean
public RedisTemplate<String, String> redisTemplate(RedisConnectionFactory connectionFactory) {
RedisTemplate<String, String> redisTemplate = new RedisTemplate<>();
redisTemplate.setConnectionFactory(connectionFactory);
return redisTemplate;
}
REST controller to append data to the Redis stream
@PutMapping("/{name}")
public String post(@PathVariable String name) {
return redisTemplate.opsForStream().add(StreamRecords.newRecord().in("streamx").ofObject(name)).getValue();
}
JMS-style imperative message listener
@Component
public class MyStreamListener implements StreamListener<String, MapRecord<String, String, String>> {
@Override
public void onMessage(MapRecord<String, String, String> message) {
System.out.println("message received: " + message.getValue());
}
}
Initialize the listener
@Bean
public Subscription listener(MyStreamListener streamListener, RedisConnectionFactory redisConnectionFactory) throws InterruptedException {
StreamMessageListenerContainer<String, MapRecord<String, String, String>> container = StreamMessageListenerContainer
.create(redisConnectionFactory);
Subscription subscription = container.receive(Consumer.from("my-group-1", "consumer-1"),
StreamOffset.create("streamx", ReadOffset.latest()), streamListener);
System.out.println(subscription.isActive()); // always false
return subscription;
}
I am, however, able to append to the stream through the API.
The important step is to start the StreamMessageListenerContainer after the subscription is set up:
container.start();
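Applied to the listener bean from the question, a minimal sketch with the container started might look like this (same stream, group, and listener names as above; ReadOffset.lastConsumed() is used here on the assumption that the consumer group already exists on the stream):
@Bean
public Subscription listener(MyStreamListener streamListener, RedisConnectionFactory redisConnectionFactory) {
    StreamMessageListenerContainer<String, MapRecord<String, String, String>> container =
            StreamMessageListenerContainer.create(redisConnectionFactory);
    Subscription subscription = container.receive(
            Consumer.from("my-group-1", "consumer-1"),
            StreamOffset.create("streamx", ReadOffset.lastConsumed()),
            streamListener);
    // without this call the consumer loop never runs and isActive() stays false
    container.start();
    return subscription;
}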
My code is like this:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(500);
DataStream<String> stream = env.addSource(getConsumer(TOPIC_1));
Jedis jedis = new Jedis("master1");
stream.map(new RichMapFunction<String, String>() {
@Override
public String map(String value) throws Exception {
String result = jedis.hget("rtc", value);
return result;
}
});
I want to get some data from Redis in map(), but it cannot run because Jedis is not serializable.
How can I use a non-serializable class such as ZkClient or Jedis in map()?
All rich functions, such as RichMapFunction, have open(Configuration) and close() methods that you can override. These lifecycle methods are called once the function has been deployed to the TaskManager where it is executed.
class MyMapFunction extends RichMapFunction<String, String> {
private transient Jedis jedis;
@Override
public void open(Configuration parameters) {
// open connection to Redis once per parallel task instance, for example
jedis = new Jedis("master1");
}
@Override
public String map(String value) throws Exception {
// use the connection created in open()
return jedis.hget("rtc", value);
}
@Override
public void close() {
// close connection to Redis
jedis.close();
}
}
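Wired back into the pipeline, usage would then look roughly like this (a sketch reusing the getConsumer(TOPIC_1) source from the question):
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(500);
DataStream<String> stream = env.addSource(getConsumer(TOPIC_1));
// MyMapFunction opens its own Jedis connection in open(), so nothing non-serializable is captured by the closure
stream.map(new MyMapFunction());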
I have an app that is going to listen to multiple queues, which are declared on different vhosts. I used a SimpleRoutingConnectionFactory to store a connectionFactoryMap, and I hope to set up my listeners with @RabbitListener.
According to Spring AMQP doc:
Also starting with version 1.4, you can configure a routing connection
factory in a SimpleMessageListenerContainer. In that case, the list of
queue names is used as the lookup key. For example, if you configure
the container with setQueueNames("foo, bar"), the lookup key will be
"[foo,bar]" (no spaces).
I used @RabbitListener(queues = "some-key"). Unfortunately, Spring complained about "lookup key [null]". See below.
18:52:44.528 WARN --- [cTaskExecutor-1] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
java.lang.IllegalStateException: Cannot determine target ConnectionFactory for lookup key [null]
at org.springframework.amqp.rabbit.connection.AbstractRoutingConnectionFactory.determineTargetConnectionFactory(AbstractRoutingConnectionFactory.java:119)
at org.springframework.amqp.rabbit.connection.AbstractRoutingConnectionFactory.createConnection(AbstractRoutingConnectionFactory.java:97)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils$1.createConnection(ConnectionFactoryUtils.java:90)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.doGetTransactionalResourceHolder(ConnectionFactoryUtils.java:140)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.getTransactionalResourceHolder(ConnectionFactoryUtils.java:76)
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:472)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1306)
at java.lang.Thread.run(Thread.java:745)
Did I do something wrong? If the queues attribute is used as the lookup key (for the connection factory lookup), what am I supposed to use to specify which queue I'd like to listen to?
Ultimately, I hope to do programmatic/dynamic listener setup. If I use "Programmatic Endpoint Registration", am I supposed to drop "Annotation-driven listener endpoints"? I love "Annotation-driven listener endpoints", because a listener can have multiple message handlers with different incoming data types as arguments, which is very clean and tidy. If I use Programmatic Endpoint Registration, I would have to parse the Message input variable and call a particular custom message handler based on the message type/content.
EDIT:
Hi Gary,
I modified your second code example a little bit, so that it uses Jackson2JsonMessageConverter to serialize class objects (in the RabbitTemplate bean) and to deserialize them back into objects (in the inboundAdapter). I also removed @RabbitListener because all listeners will be added at runtime in my case. Now the fooBean can receive Integer, String, and TestData messages without any problem! The only issue left is that the program constantly reports this warning:
"[erContainer#0-1] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
java.lang.IllegalStateException: Cannot determine target ConnectionFactory for lookup key [null]". For the full stacktrace, please see the bottom.
Did I miss anything?
@SpringBootApplication
public class App2 implements CommandLineRunner {
public static void main(String[] args) {
SpringApplication.run(App2.class, args);
}
@Autowired
private IntegrationFlowContext flowContext;
@Autowired
private ConnectionFactory routingCf;
@Autowired
private RabbitTemplate template;
@Override
public void run(String... args) throws Exception {
// dynamically add a listener for queue qux
IntegrationFlow flow = IntegrationFlows.from(Amqp.inboundAdapter(this.routingCf, "qux").messageConverter(new Jackson2JsonMessageConverter()))
.handle(fooBean())
.get();
this.flowContext.registration(flow).register();
// now test it
SimpleResourceHolder.bind(this.routingCf, "[qux]");
this.template.convertAndSend("qux", 42);
this.template.convertAndSend("qux", "fizbuz");
this.template.convertAndSend("qux", new TestData(1, "test"));
SimpleResourceHolder.unbind(this.routingCf);
}
@Bean
RabbitTemplate rabbitTemplate() {
RabbitTemplate template = new RabbitTemplate(routingCf);
template.setMessageConverter(new Jackson2JsonMessageConverter());
return template;
}
@Bean
@Primary
public ConnectionFactory routingCf() {
SimpleRoutingConnectionFactory rcf = new SimpleRoutingConnectionFactory();
Map<Object, ConnectionFactory> map = new HashMap<>();
map.put("[foo,bar]", routedCf());
map.put("[baz]", routedCf());
map.put("[qux]", routedCf());
rcf.setTargetConnectionFactories(map);
return rcf;
}
@Bean
public ConnectionFactory routedCf() {
return new CachingConnectionFactory("127.0.0.1");
}
@Bean
public Foo fooBean() {
return new Foo();
}
public static class Foo {
@ServiceActivator
public void handleInteger(Integer in) {
System.out.println("int: " + in);
}
@ServiceActivator
public void handleString(String in) {
System.out.println("str: " + in);
}
@ServiceActivator
public void handleData(TestData data) {
System.out.println("TestData: " + data);
}
}
}
Full stack trace:
2017-03-15 21:43:06.413 INFO 1003 --- [ main] hello.App2 : Started App2 in 3.003 seconds (JVM running for 3.69)
2017-03-15 21:43:11.415 WARN 1003 --- [erContainer#0-1] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
java.lang.IllegalStateException: Cannot determine target ConnectionFactory for lookup key [null]
at org.springframework.amqp.rabbit.connection.AbstractRoutingConnectionFactory.determineTargetConnectionFactory(AbstractRoutingConnectionFactory.java:119) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.connection.AbstractRoutingConnectionFactory.createConnection(AbstractRoutingConnectionFactory.java:97) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:1430) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:1411) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:1387) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.core.RabbitAdmin.initialize(RabbitAdmin.java:500) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.core.RabbitAdmin$11.onCreate(RabbitAdmin.java:419) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.connection.CompositeConnectionListener.onCreate(CompositeConnectionListener.java:33) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createConnection(CachingConnectionFactory.java:571) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils$1.createConnection(ConnectionFactoryUtils.java:90) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.doGetTransactionalResourceHolder(ConnectionFactoryUtils.java:140) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.getTransactionalResourceHolder(ConnectionFactoryUtils.java:76) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:505) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1382) ~[spring-rabbit-1.7.1.RELEASE.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
Please show your configuration - it works fine for me...
@SpringBootApplication
public class So42784471Application {
public static void main(String[] args) {
SpringApplication.run(So42784471Application.class, args);
}
@Bean
@Primary
public ConnectionFactory routing() {
SimpleRoutingConnectionFactory rcf = new SimpleRoutingConnectionFactory();
Map<Object, ConnectionFactory> map = new HashMap<>();
map.put("[foo,bar]", routedCf());
map.put("[baz]", routedCf());
rcf.setTargetConnectionFactories(map);
return rcf;
}
@Bean
public ConnectionFactory routedCf() {
return new CachingConnectionFactory("10.0.0.3");
}
@RabbitListener(queues = { "foo", "bar" })
public void foobar(String in) {
System.out.println(in);
}
@RabbitListener(queues = "baz")
public void bazzer(String in) {
System.out.println(in);
}
}
Regarding your second question, you could build the endpoint manually but it's quite involved. It's probably easier to use a similar feature in a Spring Integration @ServiceActivator.
I will update this answer with details shortly.
EDIT
And here's the update using Spring Integration techniques to dynamically add a multi-method listener at runtime...
@SpringBootApplication
public class So42784471Application implements CommandLineRunner {
public static void main(String[] args) {
SpringApplication.run(So42784471Application.class, args);
}
@Autowired
private IntegrationFlowContext flowContext;
@Autowired
private ConnectionFactory routingCf;
@Autowired
private RabbitTemplate template;
@Override
public void run(String... args) throws Exception {
// dynamically add a listener for queue qux
IntegrationFlow flow = IntegrationFlows.from(Amqp.inboundAdapter(this.routingCf, "qux"))
.handle(fooBean())
.get();
this.flowContext.registration(flow).register();
// now test it
SimpleResourceHolder.bind(this.routingCf, "[qux]");
this.template.convertAndSend("qux", 42);
this.template.convertAndSend("qux", "fizbuz");
SimpleResourceHolder.unbind(this.routingCf);
}
@Bean
@Primary
public ConnectionFactory routingCf() {
SimpleRoutingConnectionFactory rcf = new SimpleRoutingConnectionFactory();
Map<Object, ConnectionFactory> map = new HashMap<>();
map.put("[foo,bar]", routedCf());
map.put("[baz]", routedCf());
map.put("[qux]", routedCf());
rcf.setTargetConnectionFactories(map);
return rcf;
}
@Bean
public ConnectionFactory routedCf() {
return new CachingConnectionFactory("10.0.0.3");
}
@RabbitListener(queues = { "foo", "bar" })
public void foobar(String in) {
System.out.println(in);
}
@RabbitListener(queues = "baz")
public void bazzer(String in) {
System.out.println(in);
}
@Bean
public Foo fooBean() {
return new Foo();
}
public static class Foo {
@ServiceActivator
public void handleInteger(Integer in) {
System.out.println("int: " + in);
}
@ServiceActivator
public void handleString(String in) {
System.out.println("str: " + in);
}
}
}
I would like to create a GemFire region in a Spring Boot application. Following this sample, it works well without adding database support. If I add the database, it shows an error like "Error creating bean with name 'dataSource'". However, the default GemFire cache bean works well with the DataSource integration.
@EnableAutoConfiguration
// Spring Boot auto-configuration
@ComponentScan(basePackages = "napo.demo")
@EnableCaching
@SuppressWarnings("unused")
public class Application extends SpringBootServletInitializer {
private static final Class<Application> applicationClass = Application.class;
private static final Logger log = LoggerFactory.getLogger(applicationClass);
public static void main(String[] args) {
SpringApplication.run(applicationClass, args);
}
/* The commented-out code below works well with the database.
@Bean
CacheFactoryBean cacheFactoryBean() {
return new CacheFactoryBean();
}
@Bean
ReplicatedRegionFactoryBean<Integer, Integer> replicatedRegionFactoryBean(final Cache cache) {
ReplicatedRegionFactoryBean<Integer, Integer> region= new ReplicatedRegionFactoryBean<Integer, Integer>() {{
setCache(cache);
setName("demo");
}};
return region;
} */
// This configuration causes the issue below:
// org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataSource' defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceAutoConfiguration$NonEmbeddedConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [javax.sql.DataSource]: Factory method 'dataSource' threw exception; nested exception is java.lang.NullPointerException
@Bean
GemfireCacheManager cacheManager(final Cache gemfireCache) {
return new GemfireCacheManager() {
{
setCache(gemfireCache);
}
};
}
// NOTE ideally, "placeholder" properties used by Spring's PropertyPlaceholderConfigurer would be externalized
// in order to avoid re-compilation on property value changes (so... this is just an example)!
@Bean
public Properties placeholderProperties() {
Properties placeholders = new Properties();
placeholders.setProperty("app.gemfire.region.eviction.action", "LOCAL_DESTROY");
placeholders.setProperty("app.gemfire.region.eviction.policy-type", "MEMORY_SIZE");
placeholders.setProperty("app.gemfire.region.eviction.threshold", "4096");
placeholders.setProperty("app.gemfire.region.expiration.entry.tti.action", "INVALIDATE");
placeholders.setProperty("app.gemfire.region.expiration.entry.tti.timeout", "300");
placeholders.setProperty("app.gemfire.region.expiration.entry.ttl.action", "DESTROY");
placeholders.setProperty("app.gemfire.region.expiration.entry.ttl.timeout", "60");
placeholders.setProperty("app.gemfire.region.partition.local-max-memory", "16384");
placeholders.setProperty("app.gemfire.region.partition.redundant-copies", "1");
placeholders.setProperty("app.gemfire.region.partition.total-max-memory", "32768");
return placeholders;
}
@Bean
public PropertyPlaceholderConfigurer propertyPlaceholderConfigurer(
@Qualifier("placeholderProperties") Properties placeholders) {
PropertyPlaceholderConfigurer propertyPlaceholderConfigurer = new PropertyPlaceholderConfigurer();
propertyPlaceholderConfigurer.setProperties(placeholders);
return propertyPlaceholderConfigurer;
}
@Bean
public Properties gemfireProperties() {
Properties gemfireProperties = new Properties();
gemfireProperties.setProperty("name", "SpringGemFireJavaConfigTest");
gemfireProperties.setProperty("mcast-port", "0");
gemfireProperties.setProperty("log-level", "config");
return gemfireProperties;
}
@Bean
@Autowired
public CacheFactoryBean gemfireCache(@Qualifier("gemfireProperties") Properties gemfireProperties) throws Exception {
CacheFactoryBean cacheFactory = new CacheFactoryBean();
cacheFactory.setProperties(gemfireProperties);
return cacheFactory;
}
@Bean(name = "ExamplePartition")
@Autowired
public ReplicatedRegionFactoryBean<Object, Object> examplePartitionRegion(Cache gemfireCache,
@Qualifier("partitionRegionAttributes") RegionAttributes<Object, Object> regionAttributes) throws Exception {
ReplicatedRegionFactoryBean<Object, Object> examplePartitionRegion =
new ReplicatedRegionFactoryBean<Object, Object>();
examplePartitionRegion.setAttributes(regionAttributes);
examplePartitionRegion.setCache(gemfireCache);
examplePartitionRegion.setName("demo");
return examplePartitionRegion;
}
@Bean
@Autowired
public RegionAttributesFactoryBean partitionRegionAttributes(
EvictionAttributes evictionAttributes,
@Qualifier("entryTtiExpirationAttributes") ExpirationAttributes entryTti,
@Qualifier("entryTtlExpirationAttributes") ExpirationAttributes entryTtl) {
RegionAttributesFactoryBean regionAttributes = new RegionAttributesFactoryBean();
regionAttributes.setEvictionAttributes(evictionAttributes);
regionAttributes.setEntryIdleTimeout(entryTti);
regionAttributes.setEntryTimeToLive(entryTtl);
return regionAttributes;
}
@Bean
public EvictionAttributesFactoryBean defaultEvictionAttributes(
@Value("${app.gemfire.region.eviction.action}") String action,
@Value("${app.gemfire.region.eviction.policy-type}") String policyType,
@Value("${app.gemfire.region.eviction.threshold}") int threshold) {
EvictionAttributesFactoryBean evictionAttributes = new EvictionAttributesFactoryBean();
evictionAttributes.setAction(EvictionActionType.valueOfIgnoreCase(action).getEvictionAction());
evictionAttributes.setThreshold(threshold);
evictionAttributes.setType(EvictionPolicyType.valueOfIgnoreCase(policyType));
return evictionAttributes;
}
@Bean
public ExpirationAttributesFactoryBean entryTtiExpirationAttributes(
@Value("${app.gemfire.region.expiration.entry.tti.action}") String action,
@Value("${app.gemfire.region.expiration.entry.tti.timeout}") int timeout) {
ExpirationAttributesFactoryBean expirationAttributes = new ExpirationAttributesFactoryBean();
expirationAttributes.setAction(ExpirationActionType.valueOfIgnoreCase(action).getExpirationAction());
expirationAttributes.setTimeout(timeout);
return expirationAttributes;
}
@Bean
public ExpirationAttributesFactoryBean entryTtlExpirationAttributes(
@Value("${app.gemfire.region.expiration.entry.ttl.action}") String action,
@Value("${app.gemfire.region.expiration.entry.ttl.timeout}") int timeout) {
ExpirationAttributesFactoryBean expirationAttributes = new ExpirationAttributesFactoryBean();
expirationAttributes.setAction(ExpirationActionType.valueOfIgnoreCase(action).getExpirationAction());
expirationAttributes.setTimeout(timeout);
return expirationAttributes;
}
@Bean
public PartitionAttributesFactoryBean defaultPartitionAttributes(
@Value("${app.gemfire.region.partition.local-max-memory}") int localMaxMemory,
@Value("${app.gemfire.region.partition.redundant-copies}") int redundantCopies,
@Value("${app.gemfire.region.partition.total-max-memory}") int totalMaxMemory) {
PartitionAttributesFactoryBean partitionAttributes = new PartitionAttributesFactoryBean();
partitionAttributes.setLocalMaxMemory(localMaxMemory);
partitionAttributes.setRedundantCopies(redundantCopies);
partitionAttributes.setTotalMaxMemory(totalMaxMemory);
return partitionAttributes;
}
@Override
protected SpringApplicationBuilder configure(
SpringApplicationBuilder application) {
return application.sources(applicationClass);
}}
demoService Java code:
@Service
public class demoService {
@Autowired
private demoMapper demoMapper;
@Cacheable("demo")
public Demo getDemo(String code) {
Demo demo = demoMapper.getDemo(code);
return demo;
}
}
Here is an example of setting entry-ttl among other attributes: https://github.com/spring-projects/spring-gemfire-examples/blob/master/basic/java-config/src/main/java/org/springframework/data/gemfire/example/SpringJavaBasedContainerGemFireConfiguration.java
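For quick reference, a minimal sketch of what setting entry-ttl looks like with the same factory beans already used in the question (the property placeholder and bean names here are illustrative):
@Bean
public ExpirationAttributesFactoryBean entryTtlExpirationAttributes(
        @Value("${app.gemfire.region.expiration.entry.ttl.timeout}") int timeout) {
    ExpirationAttributesFactoryBean expirationAttributes = new ExpirationAttributesFactoryBean();
    expirationAttributes.setAction(ExpirationActionType.DESTROY.getExpirationAction());
    expirationAttributes.setTimeout(timeout);
    return expirationAttributes;
}

@Bean
public RegionAttributesFactoryBean regionAttributes(
        @Qualifier("entryTtlExpirationAttributes") ExpirationAttributes entryTtl) {
    RegionAttributesFactoryBean regionAttributes = new RegionAttributesFactoryBean();
    // entry-ttl: how long an entry may live after creation/update before the action fires
    regionAttributes.setEntryTimeToLive(entryTtl);
    return regionAttributes;
}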