I am new to Redis and I am using the Jedis Java client to work with a Redis cluster.
I have the following code:
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPoolConfig;

import java.util.HashSet;
import java.util.Set;

public class HelloRedisCluster {
    public static void main(String[] args) {
        Set<HostAndPort> nodes = new HashSet<HostAndPort>();
        nodes.add(new HostAndPort("127.0.0.1", 6001));
        nodes.add(new HostAndPort("127.0.0.1", 6002));
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(10000);
        config.setMaxIdle(500);
        JedisCluster cluster = new JedisCluster(nodes, config);
        cluster.set("abc", "123");
        System.out.println(cluster.get("abc"));
        cluster.close();
    }
}
The above code simply opens the cluster connection, performs a set and a get against Redis, and then closes the cluster.
If this code runs as a service (e.g., in a servlet), it will frequently open and close the cluster connection, which would hurt performance.
How should JedisCluster be used effectively?
Thanks!
I have figured out how JedisCluster works. Internally, it already uses a Jedis pool.
The operations that JedisCluster provides all follow the same pattern; take set for example (a simplified sketch follows the list):
1. Borrow a Jedis object from Jedis Pool
2. Call Jedis#set method
3. Release the Jedis object back to the pool.
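In other words, each command roughly does the following, where pool stands for the internal JedisPool of the node that owns the key's slot (a simplified sketch only, not the exact Jedis internals):

// Simplified sketch; the real JedisCluster first resolves the slot for the key
// and picks the matching node's pool before borrowing a connection.
Jedis jedis = pool.getResource();   // 1. borrow a Jedis object from the pool
try {
    jedis.set("abc", "123");        // 2. call Jedis#set
} finally {
    jedis.close();                  // 3. release the connection back to the pool
}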
Therefore, we can hold a JedisCluster instance in a singleton and close it when the JVM exits, with the following code:
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPoolConfig;

import java.util.HashSet;
import java.util.Set;

public class JedisClusterUtil {

    private static JedisCluster cluster;

    static {
        Set<HostAndPort> nodes = new HashSet<HostAndPort>();
        nodes.add(new HostAndPort("127.0.0.1", 6001));
        nodes.add(new HostAndPort("127.0.0.1", 6002));
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(10000);
        config.setMaxIdle(500);
        cluster = new JedisCluster(nodes, config);

        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                if (cluster != null) {
                    cluster.close();
                }
            }
        });
    }

    public static JedisCluster getCluster() {
        return cluster;
    }
}
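With the singleton in place, request-handling code (for example in a servlet or service class) can reuse the shared instance instead of creating and closing a JedisCluster per request. A minimal usage sketch; the CounterService class and key names are just illustrative:

public class CounterService {

    public long countVisit(String page) {
        // Reuses the shared JedisCluster; no per-request open/close.
        return JedisClusterUtil.getCluster().incr("visits:" + page);
    }

    public String lookup(String key) {
        return JedisClusterUtil.getCluster().get(key);
    }
}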
Is there a way to add specific behavior that should only be executed on (and follow) the Ignite coordinator server node?
Is there an add-on hook to add some custom behavior?
Thanks in advance.
Greg
AFAIK, there is no built-in hook to achieve this.
However, it can be achieved with a node-singleton Ignite service together with the TcpDiscoverySpi.isLocalNodeCoordinator() API.
If a discovery mechanism other than TcpDiscovery is used, the approach mentioned by @Alexandr (checking the oldest alive server node) can be used to determine the coordinator node.
Define an Ignite service as follows. It schedules a task that runs periodically on every node of the cluster and executes certain logic only if the local node is the coordinator.
import java.util.Timer;
import java.util.TimerTask;
import java.util.UUID;

import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.internal.IgniteEx;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;
import org.apache.ignite.spi.discovery.DiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class IgniteServiceImpl implements Service {

    @IgniteInstanceResource
    Ignite ignite;

    @Override
    public void cancel(ServiceContext ctx) {
    }

    @Override
    public void init(ServiceContext ctx) throws Exception {
        System.out.println("Starting a service");
    }

    @Override
    public void execute(ServiceContext ctx) throws Exception {
        Timer timer = new Timer();
        // Check every 30 seconds whether the local node is the coordinator.
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                System.out.println("Inside a service");
                if (ignite != null) {
                    DiscoverySpi discoverySpi = ignite.configuration().getDiscoverySpi();
                    if (discoverySpi instanceof TcpDiscoverySpi) {
                        TcpDiscoverySpi tcpDiscoverySpi = (TcpDiscoverySpi) discoverySpi;
                        if (tcpDiscoverySpi.isLocalNodeCoordinator())
                            doSomething();
                    } else {
                        ClusterNode coordinatorNode = ((IgniteEx) ignite).context().discovery().discoCache().oldestAliveServerNode();
                        UUID localNodeId = ((IgniteEx) ignite).context().localNodeId();
                        if (localNodeId.equals(coordinatorNode.id()))
                            doSomething();
                    }
                } else {
                    System.out.println("Ignite is null");
                }
            }
        }, 5, (30 * 1000L));
    }

    private void doSomething() {
        System.out.println("Hi, I am the coordinator node");
    }
}
Start the above service as a node singleton using Ignite.services() as follows:
IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
igniteConfiguration.setIgniteInstanceName("ignite-node");
Ignite ignite = Ignition.start(igniteConfiguration);
IgniteServices services = ignite.services();
services.deployNodeSingleton("test-service",new IgniteServiceImpl());
For low-level logic, you can extend Ignite with custom plugins.
I'm not sure if there is an easy way to check if a node is indeed the coordinator, but you might check for the oldest one:
private boolean isCoordinator() {
    ClusterNode node = ((IgniteEx) ignite()).context().discovery().discoCache().oldestAliveServerNode();
    return node != null && node.isLocal();
}
Otherwise, just run custom initialization logic or a compute task once a node is started.
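For example, a one-off task can be sent to the oldest server node (the coordinator under TcpDiscovery) right after startup. A minimal sketch, assuming the standard cluster-group API rather than any dedicated coordinator hook; the printed message is just illustrative:

Ignite ignite = Ignition.start(new IgniteConfiguration());

// Run initialization logic once, on the oldest (coordinator) server node only.
ignite.compute(ignite.cluster().forServers().forOldest())
      .run(() -> System.out.println("Init logic running on the coordinator node"));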
I have the following solution in order to implement multiple IDistributedCache definitions:
public interface IDBCache : IDistributedCache
{
}

public class DBCacheOptions : RedisCacheOptions { }

public class DBCache : RedisCache, IDBCache
{
    public DBCache(IOptions<DBCacheOptions> optionsAccessor) : base(optionsAccessor)
    {
    }
}
And I have other definitions like the above pointing to different Redis instances.
I am registering the cache service at Startup.cs as:
services.Configure<DBCacheOptions>(options => options.Configuration = configuration.GetValue<string>("Cache:DB"));
services.Add(ServiceDescriptor.Singleton<IDBCache, DBCache>());
And then I am wrapping IDBCache as:
public class DBCacheManager
{
    private const string DB_CACHE_FORMAT = "DB:{0}";
    private const int DB_EXPIRATION_HOURS = 8;

    private readonly IDistributedCache _cache;

    public DBCacheManager(IDBCache cache)
    {
        _cache = cache;
    }

    public Task AddDBItem(string name, string value)
    {
        return _cache.SetStringAsync(string.Format(DB_CACHE_FORMAT, name), value,
            new DistributedCacheEntryOptions { AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(DB_EXPIRATION_HOURS) });
    }
}
When I check the clients connected to Redis (INFO clients command), connected_clients keeps increasing without stopping, and when I look at the client list (CLIENT LIST command) I see a large list of connections with high age and idle values.
Insights: I am using the Redis implementation of AWS ElastiCache, which has an unlimited idle timeout by default, but I guess I should not have to force these connections closed, should I? I suppose my application should be responsible for that.
This was a bad implementation of dependency injection. The IDistributedCache interface does not expose the Redis INCR command, so somewhere in our project we were connecting directly with StackExchange.Redis through a DI wrapper that was creating multiple ConnectionMultiplexers and IDatabases.
Bottom line: my bad
How do I get ActiveJDBC working with a HikariCP connection pool? I mostly want this for documentation purposes.
I've tried a few different methods but none have worked so far.
Figured it out for PostgreSQL:
public static final HikariConfig hikariConfig() {
    HikariConfig hc = new HikariConfig();
    hc.setDataSourceClassName("org.postgresql.ds.PGSimpleDataSource");
    hc.setJdbcUrl(DataSources.PROPERTIES.getProperty("jdbc.url"));
    hc.setUsername(DataSources.PROPERTIES.getProperty("jdbc.username"));
    hc.setPassword(DataSources.PROPERTIES.getProperty("jdbc.password"));
    hc.setMaximumPoolSize(10);
    return hc;
}

public static final HikariDataSource hikariDataSource = new HikariDataSource(hikariConfig());

public static final void dbInit() {
    Base.open(hikariDataSource); // get connection from pool
}

public static final void dbClose() {
    Base.close();
}
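For completeness, a typical call site opens a connection from the pool before the unit of work and closes it afterwards. A minimal sketch, assuming a hypothetical ActiveJDBC model class Person:

public static void main(String[] args) {
    dbInit();                      // Base.open(hikariDataSource): borrow a connection
    try {
        Person p = new Person();   // Person extends org.javalite.activejdbc.Model (hypothetical model)
        p.set("name", "Alice");
        p.saveIt();
    } finally {
        dbClose();                 // Base.close(): return the connection to the pool
    }
}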
Is there a connection pool manager available for RedisNativeClient? We are doing byte-level operations and use RedisNativeClient instead of RedisClient.
Here is the solution I implemented. RedisClient inherits from RedisNativeClient, so using PooledRedisClientManager and then casting the pooled client to RedisNativeClient works fine. It holds the same TCP socket.
P.S. I am using dependency injection, so I keep the lifestyle of this helper class as singleton.
// Lifestyle is singleton
public class RedisHelper : IRedisHelper
{
    private readonly PooledRedisClientManager _poolManager;

    public RedisHelper()
    {
        _poolManager = new PooledRedisClientManager("localhost:6379");
    }

    public void RedisSingleSet(string redisKey, byte[] redisValues)
    {
        using (var client = (RedisNativeClient)_poolManager.GetClient())
        {
            client.Set(redisKey, redisValues);
        }
    }
}
I want to test a JMS worker included in my GlassFish application using Arquillian (to have container services). My worker looks like the following:
package queue.worker;

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.MessageListener;

@MessageDriven(mappedName = "java:app/jms/MailQueue", activationConfig = {
        @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue") })
public class MailWorker implements MessageListener {

    public MailWorker() {
    }

    @Override
    public void onMessage(javax.jms.Message inMessage) {
    }
}
This is the test:
package queueTest.worker;

import java.io.File;

import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

import queue.worker.MailWorker;

@RunWith(Arquillian.class)
public class MailWorkerTest {

    @Deployment
    public static WebArchive createDeployment() {
        WebArchive archive = ShrinkWrap
                .create(WebArchive.class)
                .addClasses(MailWorker.class)
                .addAsWebInfResource(new File("src/test/resources/WEB-INF/glassfish-resources.xml"),
                        "glassfish-resources.xml")
                .addAsWebInfResource(new File("src/main/webapp/WEB-INF/beans.xml"), "beans.xml");
        return archive;
    }

    @Inject
    protected MailWorker mailWorker;

    @Test
    public void sendRegisterMail() {
        Assert.assertTrue(true);
    }
}
Executing this test, the GlassFish JMS queue is started [1], but I get the following error:
org.jboss.weld.exceptions.DeploymentException: WELD-001408 Unsatisfied dependencies for type [MailWorker] with qualifiers [@Default] at injection point [[field] @Inject protected queueTest.worker.MailWorkerTest.mailWorker]
When I remove @MessageDriven[...] from MailWorker.class and replace it with, e.g., @ApplicationScoped, everything works fine, so the problem does not seem to be with Arquillian in general but to be JMS-related.
How can I test the JMS/Queue-Worker?
[1]
Dez 23, 2012 12:42:08 AM com.sun.messaging.jms.ra.ResourceAdapter start
Information: MQJMSRA_RA1101: GlassFish MQ JMS Resource Adapter starting: broker is EMBEDDED, connection mode is Direct
Dez 23, 2012 12:42:10 AM com.sun.messaging.jms.ra.ResourceAdapter start
Information: MQJMSRA_RA1101: GlassFish MQ JMS Resource Adapter Started:EMBEDDED
Testing MDBs is harder than testing regular EJBs and CDI beans because they are executed asynchronously. Even if you were able to inject them into your test, you would only be able to test the onMessage() method by calling it synchronously.
My approach uses the MDB only to receive the message and extract its underlying payload (such as a String or Object), and then passes the extracted payload to a separate CDI bean for which a test alternative exists.
@MessageDriven(mappedName = "jms/queue/example", activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                propertyValue = "jms/queue/example")
})
public class ExampleMDB implements MessageListener {

    @Inject
    private ExampleMessageHandler exampleMessageHandler;

    @Override
    public void onMessage(Message message) {
        if (message instanceof TextMessage) {
            TextMessage textMessage = (TextMessage) message;
            try {
                exampleMessageHandler.doSomething(textMessage.getText());
            } catch (JMSException e) {
                throw new RuntimeException("That was unexpected!", e);
            }
        }
    }
}
The ExampleMessageHandler defines doSomething(String text).
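The interface itself is not shown in the answer; a minimal sketch of the shape it is assumed to have:

// Assumed shape of the handler interface used by the MDB and its test alternative.
public interface ExampleMessageHandler {

    void doSomething(String text);
}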
For the test scope, we need an implementation that captures the arguments passed to doSomething() and makes them accessible to the test class. You can achieve this with the following implementation:
@Alternative
@ApplicationScoped
public class ExampleMessageHandlerTestable implements ExampleMessageHandler {

    private BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

    public void doSomething(String text) {
        queue.add(text);
    }

    public String poll(int secondsUntilInterrupt) throws InterruptedException {
        return queue.poll(secondsUntilInterrupt, TimeUnit.SECONDS);
    }
}
This is a CDI alternative to the real implementation used by the production code. Now just let the Arquillian test use this alternative. Here's the test class:
@RunWith(Arquillian.class)
public class ExampleMDBGoodTest {

    @Resource(mappedName = "ConnectionFactory", name = "ConnectionFactory")
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "jms/queue/example", name = "jms/queue/example")
    private Queue queue;

    @Inject
    private ExampleMessageHandler exampleMessageHandler;

    @Deployment
    public static WebArchive createDeployment() {
        WebArchive archive = ShrinkWrap.create(WebArchive.class, "exampleMDB.war")
                .addPackages(true, ExampleMDB.class.getPackage())
                .addAsWebInfResource("hornetq-jms.xml", "hornetq-jms.xml")
                .addAsWebInfResource("beans-alternative.xml", "beans.xml");
        System.out.println(archive.toString(true));
        return archive;
    }

    @Test
    public void testOnMessage() throws Exception {
        Connection connection = connectionFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);
        TextMessage textMessage = session.createTextMessage("Hello world!");
        producer.send(textMessage);
        session.close();
        connection.close();

        // We cast to our configured handler defined in beans.xml
        ExampleMessageHandlerTestable testHandler =
                (ExampleMessageHandlerTestable) exampleMessageHandler;
        assertThat(testHandler.poll(10), is("Hello world!"));
    }
}
Some explanation of what is going on here: the test requests a JMS ConnectionFactory and the Queue on which the MDB listens; these are used to create the JMS messages sent to the MDB under test. Then we create a test deployment. The hornetq-jms.xml defines an ad hoc queue for the test. By including beans-alternative.xml as beans.xml, we ensure that our test alternative is used by the MDB.
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
          http://java.sun.com/xml/ns/javaee
          http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
    <alternatives>
        <class>com.github.mcs.arquillian.mdb.example.ExampleMessageHandlerTestable</class>
    </alternatives>
</beans>
The test case itself should be straightforward: a new JMS message is sent to the queue, and then we wait up to 10 seconds for the message to arrive in our test alternative. By using a blocking queue, we can define a timeout after which the test fails, but the test itself finishes as soon as the MDB calls the alternative bean.
I have uploaded a small Maven example project from which I copied the above code parts. Because I don't know much about GlassFish, it uses JBoss as a managed container. Depending on the JBoss version you use, you may need to change the version of jboss-as-arquillian-container-managed.
Hope that helps someone :-)
MDBs are not eligible for injection into other classes. You cannot inject them into your test case.