GlassFish 3.1.2 with Infinispan 5.1.2

I'm using GlassFish 3.1.2 and Infinispan 5.1.2.
I want to override the configuration of the default cache manager and start some named caches in advance. Then I want to inject that cache manager into a stateless EJB and use the named caches.
Code that I have:
Config.java:
@Produces
@ApplicationScoped
public EmbeddedCacheManager defaultEmbeddedCacheManager() {
    ConfigurationBuilder builder = new ConfigurationBuilder();
    builder.eviction().strategy(EvictionStrategy.NONE)
            .maxEntries(Integer.MAX_VALUE)
            .transaction().transactionManagerLookup(new GenericTransactionManagerLookup())
            .expiration().lifespan(-1L).maxIdle(-1L);
    Configuration config = builder.build();
    DefaultCacheManager returnCacheManager = new DefaultCacheManager(config);
    String[] cacheNames = {"cache1", "cache2"};
    returnCacheManager.startCaches(cacheNames);
    return returnCacheManager;
}
CacheBean.java:
@Stateless
public class CacheBean implements CacheBeanLocal {

    public static final String _rev = "$Id";

    @Inject
    private EmbeddedCacheManager defaultCacheManager;

    @Override
    public void addToCache1(String key, String value) {
        Cache<String, String> cache1 = defaultCacheManager.getCache("cache1", true);
        cache1.put(key, value);
    }
}
When I deploy the EAR to the GlassFish server I get this:
org.jboss.weld.exceptions.DeploymentException: WELD-001408 Unsatisfied dependencies for type [CacheContainer] with qualifiers [@Default] at injection point [[field] @Inject private org.infinispan.cdi.AdvancedCacheProducer.defaultCacheContainer]
at org.jboss.weld.bootstrap.Validator.validateInjectionPoint(Validator.java:274)
at org.jboss.weld.bootstrap.Validator.validateInjectionPoint(Validator.java:243)
at org.jboss.weld.bootstrap.Validator.validateBean(Validator.java:106)
at org.jboss.weld.bootstrap.Validator.validateBeans(Validator.java:347)
at org.jboss.weld.bootstrap.Validator.validateDeployment(Validator.java:330)
at org.jboss.weld.bootstrap.WeldBootstrap.validateBeans(WeldBootstrap.java:366)
at org.glassfish.weld.WeldDeployer.event(WeldDeployer.java:199)
at org.glassfish.kernel.event.EventsImpl.send(EventsImpl.java:128)
at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:313)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:461)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:240)
at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:389)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:348)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:363)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1085)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1200(CommandRunnerImpl.java:95)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1291)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1259)
at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:461)
at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:212)
at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:179)
at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:117)
at com.sun.enterprise.v3.services.impl.ContainerMapper$Hk2DispatcherCallable.call(ContainerMapper.java:354)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:849)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:746)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1045)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:228)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:722)
What am I doing wrong?


ActiveMQ Artemis 2.10.1 ignoring retry settings in TomEE

I downloaded the latest ActiveMQ Artemis 2.10.1 (Windows 10, JDK 8) and can't get the address-settings to take effect. After reading the (sparse) documentation online, I edited broker.xml by adding:
<address-settings>
    <address-setting match="BETATESTQ">
        <dead-letter-address>BETATESTQ_DLQ</dead-letter-address>
        <expiry-address>BETATESTQ_EXPIRY</expiry-address>
        <redelivery-delay>30000</redelivery-delay>
        <redelivery-delay-multiplier>1.5</redelivery-delay-multiplier>
        <redelivery-collision-avoidance-factor>0.15</redelivery-collision-avoidance-factor>
        <max-redelivery-delay>100000</max-redelivery-delay>
        <max-delivery-attempts>999</max-delivery-attempts>
    </address-setting>
</address-settings>
<addresses>
    <address name="BETATESTQ_DLQ">
        <anycast>
            <queue name="BETATESTQ_DLQ" />
        </anycast>
    </address>
    <address name="BETATESTQ_EXPIRY">
        <anycast>
            <queue name="BETATESTQ_EXPIRY" />
        </anycast>
    </address>
    <address name="BETATESTQ">
        <anycast>
            <queue name="BETATESTQ" />
        </anycast>
    </address>
</addresses>
Everything else is the default broker.xml generated when creating a broker.
It seems to always use the default values for redelivery-delay, max-redelivery-delay, and max-delivery-attempts. It does read the dead-letter and expiry values correctly. I can see the message getting retried in the TomEE logs; it shows up in the Artemis console and moves to the dead-letter queue when done.
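For reference, the address-settings above describe a backoff schedule: the first redelivery waits redelivery-delay, each further attempt multiplies the delay by redelivery-delay-multiplier, capped at max-redelivery-delay. A plain-Java sketch of that schedule (my reading of the Artemis docs; collision avoidance ignored, names are mine):

```java
// Sketch of the redelivery schedule implied by the address-setting above.
// delay(n) = min(max-redelivery-delay, redelivery-delay * multiplier^(n-1))
public class RedeliveryScheduleSketch {
    static long delayForAttempt(int attempt, long delayMs, double multiplier, long maxDelayMs) {
        double d = delayMs * Math.pow(multiplier, attempt - 1);
        return Math.min((long) d, maxDelayMs);
    }

    public static void main(String[] args) {
        // Values from the address-setting: 30000 ms, x1.5, capped at 100000 ms
        for (int attempt = 1; attempt <= 5; attempt++) {
            System.out.println("attempt " + attempt + " -> "
                    + delayForAttempt(attempt, 30000L, 1.5, 100000L) + " ms");
        }
    }
}
```

With these values the delays are 30000, 45000, 67500, then capped at 100000 ms, which is what the broker should apply if it is actually reading these settings.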
I am not passing in any delivery or retry data when initially putting the message on the queue (using bare-minimum Java EE code).
How can I get it to stop ignoring redelivery-delay, max-redelivery-delay, and max-delivery-attempts?
I connected to the queue with Apache TomEE (not using the built-in ActiveMQ 5.x in TomEE, but pointing it to ActiveMQ Artemis).
JMS listener:
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jms.*;

@MessageDriven(name = "BETATESTQ", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "BETATESTQ"),
    @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge")
})
public class BetaTestQueueListener implements MessageListener, java.io.Serializable {

    private static final long serialVersionUID = 1L;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void onMessage(Message rcvMessage) {
        System.out.println("onMessage throws runtime exception");
        throw new RuntimeException("trigger retry");
    }
}
12/20/19 - Working values I found for TomEE 8.0.0:
tomee.xml:
<?xml version="1.0" encoding="UTF-8"?>
<tomee>
    <Resource id="artemis" class-name="org.apache.activemq.artemis.ra.ActiveMQResourceAdapter">
        ConnectorClassName=org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory
        ConnectionParameters=host=127.0.0.1;port=61617;needClientAuth=false;sslEnabled=true;keyStorePath=../ssl/server_jks_keystore.jks;keyStorePassword=mypassword;trustStorePath=../ssl/client_jks_truststore.jks;trustStorePassword=mypassword;trustAll=true;verifyHost=false;wantClientAuth=false;needClientAuth=false;keyStoreProvider=JKS;trustSToreProvider=JKS
        UserName=admin
        Password=admin
        JndiParams=java.naming.factory.initial=org.apache.openejb.core.OpenEJBInitialContextFactory
    </Resource>
    <Resource id="MyJmsConnectionFactory"
              class-name="org.apache.activemq.artemis.jms.client.ActiveMQXAConnectionFactory"
              constructor="uri,username,password"
              type="javax.jms.ConnectionFactory">
        uri=tcp://localhost:61617?needClientAuth=false;sslEnabled=true;keyStorePath=C:/apache-tomee-plus-8.0.0/ssl/server_jks_keystore.jks;keyStorePassword=mypassword;trustStorePath=C:/apache-tomee-plus-8.0.0/ssl/client_jks_truststore.jks;trustStorePassword=mypassword;trustAll=true;verifyHost=false;wantClientAuth=false;needClientAuth=false;keyStoreProvider=JKS;trustSToreProvider=JKS
        username=admin
        password=admin
        TransactionSupport=xa
        PoolMaxSize=20
        PoolMinSize=0
    </Resource>
    <Resource id="BETATESTQ_DLQ"
              class-name="org.apache.activemq.artemis.api.jms.ActiveMQJMSClient"
              constructor="name"
              factory-name="createQueue"
              type="javax.jms.Queue">
        name=BETATESTQ_DLQ
    </Resource>
    <Resource id="BETATESTQ"
              class-name="org.apache.activemq.artemis.api.jms.ActiveMQJMSClient"
              constructor="name"
              factory-name="createQueue"
              type="javax.jms.Queue">
        name=BETATESTQ
    </Resource>
    <Container id="mdb" type="MESSAGE">
        InstanceLimit = -1
        ResourceAdapter = artemis
        ActivationSpecClass = org.apache.activemq.artemis.ra.inflow.ActiveMQActivationSpec
    </Container>
</tomee>
Java EE class to send JMS message:
import java.util.*;
import javax.jms.*;
import org.slf4j.*;

public final class JmsPublisherInstance2 implements java.io.Serializable {

    private static final long serialVersionUID = 1L;
    private static final Logger LOG = LoggerFactory.getLogger(JmsPublisherInstance2.class);

    public void send(String msg, ConnectionFactory connectionFactory, Queue queue) throws Exception {
        Session session = null;
        MessageProducer producer = null;
        Connection connection = null;
        try {
            connection = connectionFactory.createConnection();
            connection.start();
            session = connection.createSession(true, Session.SESSION_TRANSACTED);
            producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage(msg);
            producer.send(message);
            session.commit();
        } catch (Exception e) {
            if (session != null) { // guard: session may still be null if createConnection/createSession failed
                session.rollback();
            }
            LOG.error(e.getMessage(), e);
            throw e;
        } finally {
            if (session != null) {
                session.close();
            }
            if (connection != null) {
                connection.close();
            }
        }
    }
}
Java EE listener:
import java.io.*;
import javax.annotation.*;
import javax.ejb.*;
import javax.jms.*;

@MessageDriven(name = "BetaTESTQMDB", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "BetaTESTQ"),
    @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "5"),
    @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge")
})
public class BetaTestQueueListener implements MessageListener, java.io.Serializable {

    private static final long serialVersionUID = 1L;

    // The sender class shown above; declared here since onMessage references it
    private final JmsPublisherInstance2 jmsInstance = new JmsPublisherInstance2();

    @Resource(name = "MyJmsConnectionFactory")
    private ConnectionFactory connectionFactory;

    @Resource
    private MessageDrivenContext mdbContext;

    @Resource(name = "BETATESTQ")
    private javax.jms.Queue betaTestQ;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void onMessage(Message rcvMessage) {
        try {
            jmsInstance.send("test message", connectionFactory, betaTestQ);
        } catch (Throwable t) {
            t.printStackTrace();
            mdbContext.setRollbackOnly();
        }
    }
}
It seems likely that you're using the OpenWire JMS client. This complicates matters because, as I understand it, the OpenWire JMS client implements redelivery in the client itself and those redelivery semantics are configured on the client side not the broker side. The broker doesn't have a chance to apply any kind of redelivery policy because the OpenWire client handles everything and doesn't inform the broker of the delivery failures. This documentation may help you configure redelivery for your OpenWire client.
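If you do stay on the OpenWire (ActiveMQ 5.x) client, its client-side redelivery semantics are commonly set via jms.redeliveryPolicy.* options on the broker URL. The exact option names below are my assumption from the ActiveMQ 5.x redelivery-policy documentation and should be verified there; the sketch just shows how such a URL could be assembled from the broker values above:

```java
// Sketch: OpenWire clients configure redelivery on the client side, typically
// through jms.redeliveryPolicy.* options appended to the broker URL.
// Option names are my assumption from the ActiveMQ 5.x docs; verify before use.
public class OpenWireRedeliveryUriSketch {
    static String redeliveryUri(String base, long initialDelayMs, double multiplier,
                                long maxDelayMs, int maxRedeliveries) {
        return base
                + "?jms.redeliveryPolicy.initialRedeliveryDelay=" + initialDelayMs
                + "&jms.redeliveryPolicy.useExponentialBackOff=true"
                + "&jms.redeliveryPolicy.backOffMultiplier=" + multiplier
                + "&jms.redeliveryPolicy.maximumRedeliveryDelay=" + maxDelayMs
                + "&jms.redeliveryPolicy.maximumRedeliveries=" + maxRedeliveries;
    }

    public static void main(String[] args) {
        // Mirrors the broker-side values from the question
        System.out.println(redeliveryUri("tcp://localhost:61616", 30000L, 1.5, 100000L, 999));
    }
}
```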
That said, I recommend following this tutorial. It demonstrates how to integrate TomEE and ActiveMQ Artemis without building/deploying the JCA RA. This will allow you to use the Core JMS client rather than the OpenWire JMS client.
If you want to go the RA route you can build the ActiveMQ Artemis JCA RA by running mvn verify from the examples/features/sub-modules/artemis-ra-rar/ directory. The RA will be in the target directory named artemis-rar-<version>.rar.

Google Cloud Memorystore (Redis): can't connect to Redis when the instance has just started

I have a problem connecting to Redis when my instance has just started.
I use:
runtime: java
env: flex
runtime_config:
  jdk: openjdk8
I get the following exception:
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: connect timed out
RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
java.net.SocketTimeoutException: connect timed out
After 2-3 minutes, it works smoothly.
Do I need to add some check in my code, or how should I fix this properly?
P.S. I also use Spring Boot, with the following configuration:
@Value("${spring.redis.host}")
private String redisHost;

@Bean
JedisConnectionFactory jedisConnectionFactory() {
    // https://cloud.google.com/memorystore/docs/redis/quotas
    RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(redisHost, 6379);
    return new JedisConnectionFactory(config);
}

@Bean
public RedisTemplate<String, Object> redisTemplate(
        @Autowired JedisConnectionFactory jedisConnectionFactory
) {
    RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(jedisConnectionFactory);
    template.setKeySerializer(new StringRedisSerializer());
    template.setValueSerializer(new GenericJackson2JsonRedisSerializer(newObjectMapper()));
    return template;
}
In pom.xml:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-redis</artifactId>
    <version>2.1.2.RELEASE</version>
</dependency>
I solved this problem as follows: in short, I added a "ping" method which tries to set and get a value from Redis; if that succeeds, the application is ready.
Implementation:
First, update app.yaml by adding the following:
readiness_check:
  path: "/readiness_check"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 300
Second, in your rest controller:
@GetMapping("/readiness_check")
public ResponseEntity<?> readiness_check() {
    if (!cacheConfig.ping()) {
        return ResponseEntity.notFound().build();
    }
    return ResponseEntity.ok().build();
}
Third, in the CacheConfig class:
public boolean ping() {
    long prefix = System.currentTimeMillis();
    try {
        redisTemplate.opsForValue().set("readiness_check_" + prefix, Boolean.TRUE, 100, TimeUnit.SECONDS);
        Boolean val = (Boolean) redisTemplate.opsForValue().get("readiness_check_" + prefix);
        return Boolean.TRUE.equals(val);
    } catch (Exception e) {
        LOGGER.info("ping failed for " + System.currentTimeMillis());
        return false;
    }
}
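A complementary option for the original startup problem is to retry the first Redis call for a bounded time instead of failing fast while the instance warms up. A minimal generic retry helper (plain Java, names mine) that could wrap the first opsForValue() call:

```java
import java.util.function.Supplier;

// Minimal retry helper (names mine): retries a call that throws while a
// dependency (e.g. the Redis instance) is still starting up, backing off
// between attempts; rethrows the last failure when attempts are exhausted.
public class StartupRetry {
    public static <T> T retry(Supplier<T> call, int maxAttempts, long sleepMs) {
        RuntimeException last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e; // remember the failure and back off before retrying
                try {
                    Thread.sleep(sleepMs);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw last; // assumes maxAttempts >= 1
    }
}
```

For example, the first cache read at startup could become `StartupRetry.retry(() -> redisTemplate.opsForValue().get(key), 30, 5000L)`.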
P.S. If somebody needs the full implementation of CacheConfig:
@Configuration
public class CacheConfig {

    private static final Logger LOGGER = Logger.getLogger(CacheConfig.class.getName());

    @Value("${spring.redis.host}")
    private String redisHost;

    private final RedisTemplate<String, Object> redisTemplate;

    @Autowired
    public CacheConfig(@Lazy RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    @Bean
    JedisConnectionFactory jedisConnectionFactory(@Autowired JedisPoolConfig poolConfig) {
        // https://cloud.google.com/memorystore/docs/redis/quotas
        RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(redisHost, 6379);
        JedisClientConfiguration clientConfig = JedisClientConfiguration
                .builder()
                .usePooling()
                .poolConfig(poolConfig)
                .build();
        return new JedisConnectionFactory(config, clientConfig);
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate(@Autowired JedisConnectionFactory jedisConnectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer(newObjectMapper()));
        return template;
    }

    /**
     * Example: https://github.com/PengliuIBM/pws_demo/blob/1becdca1bc19320c2742504baa1cada3260f8d93/redisData/src/main/java/com/pivotal/wangyu/study/springdataredis/config/RedisConfig.java
     */
    @Bean
    redis.clients.jedis.JedisPoolConfig jedisPoolConfig() {
        final redis.clients.jedis.JedisPoolConfig poolConfig = new redis.clients.jedis.JedisPoolConfig();
        // Maximum active connections to the Redis instance
        poolConfig.setMaxTotal(16);
        // Maximum number of idle connections kept in the pool
        poolConfig.setMaxIdle(16);
        // Minimum number of idle connections - always open and ready to serve
        poolConfig.setMinIdle(8);
        // Test whether a connection is dead when borrowing it from the pool
        poolConfig.setTestOnBorrow(true);
        // Test whether a connection is dead when returning it to the pool
        poolConfig.setTestOnReturn(true);
        // Test whether connections are dead during idle periods
        poolConfig.setTestWhileIdle(true);
        return poolConfig;
    }

    public boolean ping() {
        long prefix = System.currentTimeMillis();
        try {
            redisTemplate.opsForValue().set("readiness_check_" + prefix, Boolean.TRUE, 100, TimeUnit.SECONDS);
            Boolean val = (Boolean) redisTemplate.opsForValue().get("readiness_check_" + prefix);
            return Boolean.TRUE.equals(val);
        } catch (Exception e) {
            LOGGER.info("ping failed for " + System.currentTimeMillis());
            return false;
        }
    }
}

Apache Ignite service deployed as cluster singleton

I am trying to create a Spring Java application with Apache Ignite inside.
When the application starts up, it deploys a service (cluster singleton) to the cluster. (This works.)
But when I start a second node, which deploys this service too, it locks up.
Why does this happen?
Apache Ignite 2.5.0.
Code of the service class:
public class TaskLockWatcher implements Service {

    @IgniteInstanceResource
    private Ignite ignite;

    private SchedulerFuture scheduler;

    @Override
    public void cancel(ServiceContext ctx) {
        LOGGER.info("schedule service cancel");
        if (scheduler != null) {
            scheduler.cancel();
        }
    }

    @Override
    public void init(ServiceContext ctx) throws Exception {
        LOGGER.info("schedule service init");
    }

    @Override
    public void execute(ServiceContext ctx) throws Exception {
        LOGGER.info("schedule service execute");
        scheduler = ignite.scheduler().scheduleLocal(() -> {
            LOGGER.info("schedule service call");
            List<Ticket> tickets = getTicketForUnlock();
            if (!tickets.isEmpty()) {
                changeTicketStatus(tickets, TicketStatus.CREATED);
            }
        }, "* * * * *");
    }
}
The locking occurs even if I strip the TaskLockWatcher methods down to just the log statements.
Publishing code:
Ignite ignite = connector.getClient();
IgniteServices svcs = ignite.services();
svcs.deployClusterSingleton("taskLockWatcher", new TaskLockWatcher());
Ignite configuration code:
IgniteConfiguration igniteConfiguration = new IgniteConfiguration();

DataStorageConfiguration dataStorageConfiguration = new DataStorageConfiguration();
dataStorageConfiguration.setWalMode(WALMode.LOG_ONLY);
dataStorageConfiguration.getDefaultDataRegionConfiguration().setPersistenceEnabled(false);
dataStorageConfiguration.getDefaultDataRegionConfiguration().setMaxSize(128); // MB
List<DataRegionConfiguration> list = new ArrayList<>();
DataRegionConfiguration configuration = new DataRegionConfiguration();
configuration.setName("TASK_REGION");
configuration.setMaxSize(1024);
configuration.setPersistenceEnabled(true);
list.add(configuration);
dataStorageConfiguration.setDataRegionConfigurations(list.toArray(new DataRegionConfiguration[0]));
igniteConfiguration.setDataStorageConfiguration(dataStorageConfiguration);

TcpCommunicationSpi tcpCommunicationSpi = new TcpCommunicationSpi();
tcpCommunicationSpi.setSlowClientQueueLimit(1000);
igniteConfiguration.setCommunicationSpi(tcpCommunicationSpi);

Map<String, String> attrs = Collections.singletonMap("group.node.filter", "CLUSTER_TASKS,CLUSTER_TICKETS");
igniteConfiguration.setUserAttributes(attrs);

TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryZookeeperIpFinder ipFinder = new TcpDiscoveryZookeeperIpFinder();
ipFinder.setBasePath("/datagrid");
ipFinder.setCurator(getCuratorFramework());
spi.setIpFinder(ipFinder);
igniteConfiguration.setDiscoverySpi(spi);

AtomicConfiguration atomicConfiguration = new AtomicConfiguration();
atomicConfiguration.setBackups(1);
atomicConfiguration.setCacheMode(CacheMode.REPLICATED);
igniteConfiguration.setAtomicConfiguration(atomicConfiguration);
igniteConfiguration.setIncludeEventTypes(EventType.EVTS_CACHE);

CacheConfiguration<UUID, Ticket> cacheConfiguration = new CacheConfiguration<>("TICKETS");
cacheConfiguration.setIndexedTypes(UUID.class, Ticket.class); // key/value types matching CacheConfiguration<UUID, Ticket>
cacheConfiguration.setNodeFilter((ClusterNode clusterNode) -> {
    String dataNode = clusterNode.attribute("group.node.filter");
    return dataNode != null && dataNode.contains("CLUSTER_TICKETS");
});
cacheConfiguration.setStatisticsEnabled(true);
cacheConfiguration.setDataRegionName("TASK_REGION");
cacheConfiguration.setBackups(1);
cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
cacheConfiguration.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
IgniteCache<UUID, Ticket> ticketCache = ignite.getOrCreateCache(cacheConfiguration);

Error while running Documentum code

I am getting an error while running the Documentum code at:
config.setString("primary_host", docbroker);
in the below code:
IDfClient client = DfClient.getLocalClient();
// getting the config object of local client
IDfTypedObject config = client.getClientConfig();
config.setString("primary_host", docbroker);
IDfLoginInfo li = new DfLoginInfo();
and the error I get is below:
java.lang.IllegalStateException: Reference count is already zero
at com.documentum.fc.impl.util.ReferenceCountManager.decrement(ReferenceCountManager.java:47)
at com.documentum.fc.client.impl.docbroker.DocbrokerMapUnion.decrementReferenceCount(DocbrokerMapUnion.java:43)
at com.documentum.fc.client.impl.docbroker.DocbrokerMapUnion.removeEntry(DocbrokerMapUnion.java:37)
at com.documentum.fc.client.impl.docbroker.DocbrokerMap.removeEntries(DocbrokerMap.java:176)
at com.documentum.fc.client.impl.docbroker.DocbrokerClient$PreferencesObserver.update(DocbrokerClient.java:251)
at com.documentum.fc.common.impl.preferences.TypedPreferences.notifyObservers(TypedPreferences.java:559)
at com.documentum.fc.common.impl.preferences.TypedPreferences.setString(TypedPreferences.java:168)
at com.gsk.rd.datacoe.dataspider.FetchDocumentumStats.connectToDocumentum(FetchDocumentumStats.java:357)
at com.gsk.rd.datacoe.dataspider.FetchDocumentumStats.run(FetchDocumentumStats.java:92)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Can anyone please help me? I am new to Documentum.
From my experience, it is best to use the IDfClientX interface and work your way from there. Perhaps this example will help you:
// This is just an example. You might want to encapsulate this functionality in a class of your own.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.IDfClient;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.client.IDfTypedObject;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfLoginInfo;

public class Main {

    public static void main(String[] args) throws DfException {
        IDfClientX clientX = new DfClientX();
        IDfClient client = clientX.getLocalClient();
        IDfTypedObject clientConfig = client.getClientConfig();
        IDfLoginInfo loginInfo = clientX.getLoginInfo();
        IDfSessionManager sessionManager = client.newSessionManager();
        IDfSession session = connect(clientConfig, loginInfo, sessionManager,
                "<your host here>", 1489, "<your docbase here>",
                "<your username here>", "<your password here>");
        // Do something with the session
    }

    public static IDfSession connect(IDfTypedObject clientConfig, IDfLoginInfo loginInfo,
                                     IDfSessionManager sessionManager, String host, int port,
                                     String docbase, String user, String password) throws DfException {
        clientConfig.setString("primary_host", host);
        clientConfig.setInt("primary_port", port);
        loginInfo.setUser(user);
        loginInfo.setPassword(password);
        sessionManager.clearIdentities();
        sessionManager.setIdentity(docbase, loginInfo);
        return sessionManager.getSession(docbase);
    }
}

Refreshing RedisTemplate

I am using a Spring Quartz job to connect to Redis and perform an operation. I have configured the RedisTemplate to connect only to the master node.
When failover happens in Redis and a new master is elected, I am notified via the client-reconfig-script in sentinel.conf. After this I try to rewire my RedisTemplate to talk to the new master. This rewiring is not working.
Spring Boot config for the Quartz job and RedisTemplate:
package com.XXX.XXX;

import org.quartz.JobDetail;
import org.quartz.SimpleTrigger;
import org.quartz.Trigger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Primary;
import org.springframework.core.io.ClassPathResource;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.scheduling.quartz.JobDetailFactoryBean;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;
import org.springframework.scheduling.quartz.SimpleTriggerFactoryBean;
import org.springframework.scheduling.quartz.SpringBeanJobFactory;
import de.chandre.quartz.spring.AutowiringSpringBeanJobFactory;
import de.chandre.quartz.spring.QuartzSchedulerAutoConfiguration;

@SpringBootApplication(exclude = { QuartzSchedulerAutoConfiguration.class })
public class OrderProcessorApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderProcessorApplication.class, args);
    }

    @Bean
    public JobDetailFactoryBean jobDetail() {
        JobDetailFactoryBean jobDetailFactory = new JobDetailFactoryBean();
        jobDetailFactory.setJobClass(OrderProcessorJob.class);
        jobDetailFactory.setDurability(true);
        return jobDetailFactory;
    }

    @Value("${job.interval}")
    private int interval;

    @Bean
    public SimpleTriggerFactoryBean trigger(JobDetail job) {
        SimpleTriggerFactoryBean trigger = new SimpleTriggerFactoryBean();
        trigger.setJobDetail(job);
        trigger.setRepeatInterval(interval);
        trigger.setRepeatCount(SimpleTrigger.REPEAT_INDEFINITELY);
        return trigger;
    }

    @Bean
    public SchedulerFactoryBean scheduler(Trigger trigger, JobDetail job) {
        SchedulerFactoryBean schedulerFactory = new SchedulerFactoryBean();
        schedulerFactory.setConfigLocation(new ClassPathResource("quartz.properties"));
        schedulerFactory.setJobFactory(springBeanJobFactory());
        schedulerFactory.setJobDetails(job);
        schedulerFactory.setTriggers(trigger);
        return schedulerFactory;
    }

    @Autowired
    private ApplicationContext ap;

    @Bean
    @Primary
    public SpringBeanJobFactory springBeanJobFactory() {
        AutowiringSpringBeanJobFactory jobFactory = new AutowiringSpringBeanJobFactory();
        jobFactory.setApplicationContext(ap);
        return jobFactory;
    }

    // @Bean
    // @Qualifier("sentinel")
    // public RedisTemplate<String, Object> createRedisTemplate() {
    //     RedisSentinelConfiguration sc = new RedisSentinelConfiguration()
    //             .master("redis-cluster")
    //             .sentinel("127.0.0.1", 26179)
    //             .sentinel("127.0.0.1", 26381);
    //
    //     JedisConnectionFactory jcf = new JedisConnectionFactory(sc);
    //     jcf.setUsePool(true);
    //     jcf.afterPropertiesSet();
    //
    //     RedisTemplate rt = new RedisTemplate();
    //     rt.setConnectionFactory(jcf);
    //     rt.afterPropertiesSet();
    //     return rt;
    // }

    @Bean
    @Primary
    public RedisTemplate<String, Object> createMasterOnlyRedisTemplate() {
        JedisConnectionFactory jcf = new JedisConnectionFactory();
        jcf.setHostName("127.0.0.1");
        jcf.setPort(6379);
        jcf.setUsePool(true);
        jcf.afterPropertiesSet();
        RedisTemplate rt = new RedisTemplate();
        rt.setConnectionFactory(jcf);
        rt.afterPropertiesSet();
        return rt;
    }
}
Refresh REST service invoked by sentinel:
package com.store.platform;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import com.kohls.store.platform.common.model.ui.UpdateRedisMaster;

@RestController
@RequestMapping(path = "/refresh", method = RequestMethod.POST, consumes = MediaType.APPLICATION_JSON_VALUE)
public class Test {

    @Autowired
    private RedisTemplate<String, Object> rt;

    @RequestMapping(method = RequestMethod.POST)
    public void refresh(@RequestBody UpdateRedisMaster um) {
        JedisConnectionFactory jcf = (JedisConnectionFactory) rt.getConnectionFactory();
        jcf.setHostName(um.getIp());
        jcf.setPort(um.getPort());
        jcf.setShardInfo(null);
        jcf.afterPropertiesSet();
    }
}
After successfully invoking refresh, the next time the Quartz job tries to connect to Redis I get the following exception:
org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:204) ~[spring-data-redis-1.8.4.RELEASE.jar:na]
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.getConnection(JedisConnectionFactory.java:348) ~[spring-data-redis-1.8.4.RELEASE.jar:na]
at org.springframework.data.redis.core.RedisConnectionUtils.doGetConnection(RedisConnectionUtils.java:129) ~[spring-data-redis-1.8.4.RELEASE.jar:na]
at org.springframework.data.redis.core.RedisConnectionUtils.getConnection(RedisConnectionUtils.java:92) ~[spring-data-redis-1.8.4.RELEASE.jar:na]
at org.springframework.data.redis.core.RedisConnectionUtils.getConnection(RedisConnectionUtils.java:79) ~[spring-data-redis-1.8.4.RELEASE.jar:na]
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:194) ~[spring-data-redis-1.8.4.RELEASE.jar:na]
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:169) ~[spring-data-redis-1.8.4.RELEASE.jar:na]
at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:91) ~[spring-data-redis-1.8.4.RELEASE.jar:na]
at org.springframework.data.redis.core.DefaultValueOperations.set(DefaultValueOperations.java:182) ~[spring-data-redis-1.8.4.RELEASE.jar:na]
at com.XXX.platform.OrderProcessorJob.execute(OrderProcessorJob.java:32) ~[classes/:na]
at org.quartz.core.JobRunShell.run(JobRunShell.java:202) ~[quartz-2.3.0.jar:na]
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) [quartz-2.3.0.jar:na]
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at redis.clients.util.Pool.getResource(Pool.java:53) ~[jedis-2.9.0.jar:na]
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:226) ~[jedis-2.9.0.jar:na]
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:16) ~[jedis-2.9.0.jar:na]
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:194) ~[spring-data-redis-1.8.4.RELEASE.jar:na]
... 11 common frames omitted
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.ConnectException: Connection refused (Connection refused)
at redis.clients.jedis.Connection.connect(Connection.java:207) ~[jedis-2.9.0.jar:na]
at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:93) ~[jedis-2.9.0.jar:na]
at redis.clients.jedis.BinaryJedis.connect(BinaryJedis.java:1767) ~[jedis-2.9.0.jar:na]
at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:106) ~[jedis-2.9.0.jar:na]
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:868) ~[commons-pool2-2.4.2.jar:2.4.2]
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435) ~[commons-pool2-2.4.2.jar:2.4.2]
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363) ~[commons-pool2-2.4.2.jar:2.4.2]
at redis.clients.util.Pool.getResource(Pool.java:49) ~[jedis-2.9.0.jar:na]
... 14 common frames omitted
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_121]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_121]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[na:1.8.0_121]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_121]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_121]
at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_121]
at redis.clients.jedis.Connection.connect(Connection.java:184) ~[jedis-2.9.0.jar:na]
... 21 common frames omitted
I want consistent data; that's why I want to use the master for reads and writes, and the slave for redundancy and failover.
I have been stuck on this issue for two days. Any help is highly appreciated.
I was able to resolve my issue by creating my own RedisConnectionFactory.
public class SPRedisConnectionFactory implements InitializingBean, RedisConnectionFactory, DisposableBean {

    private Jedis jedis;
    private String[] nodes;
    private Map<String, String[]> nodeMap;

    @Value("${master.node}")
    private String masterNode;

    @Override
    public void afterPropertiesSet() throws Exception {
        nodeMap = Arrays.stream(this.nodes)
                .map(n -> n.split(":"))
                .collect(toMap(n -> n[0] + ":" + n[1], n -> n));
        System.out.println(nodeMap);
    }

    @Override
    public RedisConnection getConnection() {
        String[] master = nodeMap.get(masterNode);
        jedis = new Jedis(master[0], parseInt(master[1]), 2000);
        jedis.connect();
        return new JedisConnection(jedis);
    }

    @Override
    public RedisClusterConnection getClusterConnection() {
        throw new UnsupportedOperationException();
    }

    @Override
    public boolean getConvertPipelineAndTxResults() {
        return false;
    }

    @Override
    public RedisSentinelConnection getSentinelConnection() {
        throw new UnsupportedOperationException();
    }

    @Override
    public DataAccessException translateExceptionIfPossible(RuntimeException e) {
        return null;
    }

    @Override
    public void destroy() throws Exception {
        jedis.disconnect();
    }
}
Got this idea from Connections to multiple Redis servers with Spring Data Redis.
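The node-map construction in afterPropertiesSet above can be sanity-checked in isolation. A standalone sketch of the same "host:port" parsing (plain Java; field wiring omitted, class name is mine):

```java
import java.util.Arrays;
import java.util.Map;
import static java.util.stream.Collectors.toMap;

// Standalone version of the nodeMap parsing used in SPRedisConnectionFactory:
// each "host:port" string is keyed by itself and mapped to its split parts,
// so the configured master.node value can be looked up directly.
public class NodeMapSketch {
    static Map<String, String[]> buildNodeMap(String[] nodes) {
        return Arrays.stream(nodes)
                .map(n -> n.split(":"))
                .collect(toMap(n -> n[0] + ":" + n[1], n -> n));
    }

    public static void main(String[] args) {
        Map<String, String[]> map = buildNodeMap(new String[]{"127.0.0.1:6379", "127.0.0.1:6380"});
        String[] master = map.get("127.0.0.1:6379");
        System.out.println(master[0] + ":" + master[1]);
    }
}
```

Note this lookup throws a NullPointerException in getConnection if master.node does not exactly match one of the configured "host:port" strings, which is worth guarding against.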