Is there a way to pass Redis commands in Jedis, without using the wrapper functions?

We are trying to build a console that processes Redis queries, but the back end has to use Jedis, so the commands given as input need to be executed through Jedis. For example, in redis-cli we use "keys *"; the equivalent in Jedis is jedis.keys("*"). I have no idea how to convert the raw string "keys *" into the jedis.keys("*") call. Any suggestions are welcome.

I know this is an old question, but hopefully the following will be useful for others.
Here's something I came up with because the most recent version of Jedis at the time (3.2.0) did not support the "memory usage" command, which is available on Redis >= 4. This code assumes a Jedis object has been created, probably from a Jedis resource pool:
import redis.clients.jedis.commands.ProtocolCommand;
import redis.clients.jedis.util.SafeEncoder;

// ... Jedis setup code ...

// MEMORY USAGE <key>: the two keywords must be sent as separate arguments.
byteSize = (Long) jedis.sendCommand(
        new ProtocolCommand() {
            @Override
            public byte[] getRaw() {
                return SafeEncoder.encode("memory");
            }
        },
        SafeEncoder.encode("usage"),
        SafeEncoder.encode(key));
This is a special case because the command has a primary keyword, memory, with a secondary action, usage (others are doctor, stats, purge, etc.). When sending multi-keyword commands to Redis, each keyword must be passed as a separate argument. My first attempt at specifying "memory usage" as a single argument failed with a Redis server error.
It seems the current Jedis implementation is geared toward single-keyword commands; under the hood there is a bunch of special-case code to handle multi-keyword commands such as debug object, which doesn't quite fit the original command-keyword framework.
Anyway, once my current project that required calling memory usage is complete, I'll try my hand at submitting a patch to the Jedis maintainers to implement the above command in a more official/conventional way, which would look something like:
Long byteSize = jedis.memoryUsage(key);
Finally, to address your specific need: your best bet is to use the scan() method of the Jedis class. There are posts here on SO that explain how to use it.
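For reference, here is a minimal sketch of the SCAN-based replacement for keys * (assuming Jedis 3.x; the match pattern and page size are placeholders):
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

// Iterate all keys matching a pattern without blocking the server
// the way KEYS does. Pattern and count are placeholders.
ScanParams params = new ScanParams().match("*").count(100);
String cursor = ScanParams.SCAN_POINTER_START; // "0"
do {
    ScanResult<String> page = jedis.scan(cursor, params);
    for (String key : page.getResult()) {
        System.out.println(key);
    }
    cursor = page.getCursor();
} while (!ScanParams.SCAN_POINTER_START.equals(cursor));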

You can do the same thing by using the following:
redis.clients.jedis.Connection.sendCommand(Command, String...)
Create a class that extends Connection.
Create an instance of it and call the connect() method.
Call super.sendCommand(Protocol.Command.valueOf(args[0].toUpperCase()), args[1..end]).
An example for you:
import java.util.Arrays;

import redis.clients.jedis.Connection;
import redis.clients.jedis.Protocol;

public class JedisConn extends Connection {

    public JedisConn(String host, int port) {
        super(host, port);
    }

    @Override
    protected Connection sendCommand(final Protocol.Command cmd, final String... args) {
        return super.sendCommand(cmd, args);
    }

    public static void main(String[] args) {
        // args[0] is the command name, the rest are its arguments,
        // e.g. "keys *" -> Command.KEYS with "*".
        JedisConn jedisConn = new JedisConn("host", 6379);
        jedisConn.connect();
        Connection connection = jedisConn.sendCommand(
                Protocol.Command.valueOf(args[0].toUpperCase()),
                Arrays.copyOfRange(args, 1, args.length));
        System.out.println(connection.getAll());
        jedisConn.close();
    }
}

I have found a way to do this. There is a function named eval(), which we can use as shown below.
// Read a raw command line, quote each token, and run it via EVAL:
// e.g. "get foo" becomes: return redis.call('get','foo')
Scanner s = new Scanner(System.in);
String query = s.nextLine();
String[] q = query.split(" ");
String cmd = '\'' + q[0] + '\'';
for (int i = 1; i < q.length; i++)
    cmd += ",'" + q[i] + '\'';
System.out.println(j.eval("return redis.call(" + cmd + ")"));
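A variation that avoids hand-quoting each token is to pass the tokens through ARGV and let the script unpack them. A sketch, assuming Jedis's eval(String script, int keyCount, String... params) overload; note this bypasses key-based cluster routing, so it only suits ad-hoc, single-node use:
// Forward the raw tokens as ARGV and let Lua expand them into the call.
String[] q = query.split(" ");
System.out.println(j.eval("return redis.call(unpack(ARGV))", 0, q));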

Related

Optaplanner: NullPointerException when calling scoreDirector.beforeVariableChanged in a simple custom move

I am building a Capacitated Vehicle Routing Problem with Time Windows, but with one small difference compared to the one provided in the documentation's examples: I don't have a depot. Instead, each order has a pickup step and a delivery step, in two different locations.
(As in the Vehicle Routing example from the documentation, the previousStep planning variable has the CHAINED graph type, and its valueRangeProviderRefs includes both Drivers and Steps.)
This difference adds a couple of constraints:
the pickup and delivery steps of a given order must be handled by the same driver
the pickup must be before the delivery
After experimenting with constraints, I have found that it would be more efficient to implement two types of custom moves:
assign both steps of an order to a driver
rearrange the steps of a driver
I am currently implementing that first custom move. My solver's configuration looks like this:
SolverFactory<RoutingProblem> solverFactory = SolverFactory.create(
    new SolverConfig()
        .withSolutionClass(RoutingProblem.class)
        .withEntityClasses(Step.class, StepList.class)
        .withScoreDirectorFactory(new ScoreDirectorFactoryConfig()
            .withConstraintProviderClass(Constraints.class)
        )
        .withTerminationConfig(new TerminationConfig()
            .withSecondsSpentLimit(60L)
        )
        .withPhaseList(List.of(
            new LocalSearchPhaseConfig()
                .withMoveSelectorConfig(CustomMoveListFactory.getConfig())
        ))
);
My CustomMoveListFactory looks like this (I plan on migrating it to a MoveIteratorFactory later, but for the moment this is easier to read and write):
public class CustomMoveListFactory implements MoveListFactory<RoutingProblem> {

    public static MoveListFactoryConfig getConfig() {
        MoveListFactoryConfig result = new MoveListFactoryConfig();
        result.setMoveListFactoryClass(CustomMoveListFactory.class);
        return result;
    }

    @Override
    public List<? extends Move<RoutingProblem>> createMoveList(RoutingProblem routingProblem) {
        List<Move<RoutingProblem>> moves = new ArrayList<>();
        // 1. Assign moves
        for (Order order : routingProblem.getOrders()) {
            Driver currentDriver = order.getDriver();
            for (Driver driver : routingProblem.getDrivers()) {
                if (!driver.equals(currentDriver)) {
                    moves.add(new AssignMove(order, driver));
                }
            }
        }
        // 2. Rearrange moves
        // TODO
        return moves;
    }
}
And finally, the move itself looks like this (never mind the undo or the isDoable for the moment):
@Override
protected void doMoveOnGenuineVariables(ScoreDirector<RoutingProblem> scoreDirector) {
    assignStep(scoreDirector, order.getPickupStep());
    assignStep(scoreDirector, order.getDeliveryStep());
}

private void assignStep(ScoreDirector<RoutingProblem> scoreDirector, Step step) {
    StepList beforeStep = step.getPreviousStep();
    Step afterStep = step.getNextStep();
    // 1. Insert step at the end of the driver's step list
    StepList lastStep = driver.getLastStep();
    scoreDirector.beforeVariableChanged(step, "previousStep"); // NullPointerException here
    step.setPreviousStep(lastStep);
    scoreDirector.afterVariableChanged(step, "previousStep");
    // 2. Remove step from current chained list
    if (afterStep != null) {
        scoreDirector.beforeVariableChanged(afterStep, "previousStep");
        afterStep.setPreviousStep(beforeStep);
        scoreDirector.afterVariableChanged(afterStep, "previousStep");
    }
}
The idea being that at no point am I doing an invalid chained-list manipulation.
However, as the title and the code comment indicate, I am getting a NullPointerException when I call scoreDirector.beforeVariableChanged. None of my variables are null (I've printed them to make sure). The NullPointerException doesn't occur in my code, but deep inside OptaPlanner's inner workings, making it difficult for me to fix:
Exception in thread "main" java.lang.NullPointerException
    at org.drools.core.common.NamedEntryPoint.update(NamedEntryPoint.java:353)
    at org.drools.core.common.NamedEntryPoint.update(NamedEntryPoint.java:338)
    at org.drools.core.impl.StatefulKnowledgeSessionImpl.update(StatefulKnowledgeSessionImpl.java:1579)
    at org.drools.core.impl.StatefulKnowledgeSessionImpl.update(StatefulKnowledgeSessionImpl.java:1551)
    at org.optaplanner.core.impl.score.stream.drools.DroolsConstraintSession.update(DroolsConstraintSession.java:49)
    at org.optaplanner.core.impl.score.director.stream.ConstraintStreamScoreDirector.afterVariableChanged(ConstraintStreamScoreDirector.java:137)
    at org.optaplanner.core.impl.domain.variable.inverserelation.SingletonInverseVariableListener.retract(SingletonInverseVariableListener.java:96)
    at org.optaplanner.core.impl.domain.variable.inverserelation.SingletonInverseVariableListener.beforeVariableChanged(SingletonInverseVariableListener.java:46)
    at org.optaplanner.core.impl.domain.variable.listener.support.VariableListenerSupport.beforeVariableChanged(VariableListenerSupport.java:170)
    at org.optaplanner.core.impl.score.director.AbstractScoreDirector.beforeVariableChanged(AbstractScoreDirector.java:430)
    at org.optaplanner.core.impl.score.director.AbstractScoreDirector.beforeVariableChanged(AbstractScoreDirector.java:390)
    at test.optaplanner.solver.AssignMove.assignStep(AssignMove.java:98)
    at test.optaplanner.solver.AssignMove.doMoveOnGenuineVariables(AssignMove.java:85)
    at org.optaplanner.core.impl.heuristic.move.AbstractMove.doMove(AbstractMove.java:35)
    at org.optaplanner.core.impl.heuristic.move.AbstractMove.doMove(AbstractMove.java:30)
    at org.optaplanner.core.impl.score.director.AbstractScoreDirector.doAndProcessMove(AbstractScoreDirector.java:187)
    at org.optaplanner.core.impl.localsearch.decider.LocalSearchDecider.doMove(LocalSearchDecider.java:132)
    at org.optaplanner.core.impl.localsearch.decider.LocalSearchDecider.decideNextStep(LocalSearchDecider.java:116)
    at org.optaplanner.core.impl.localsearch.DefaultLocalSearchPhase.solve(DefaultLocalSearchPhase.java:70)
    at org.optaplanner.core.impl.solver.AbstractSolver.runPhases(AbstractSolver.java:98)
    at org.optaplanner.core.impl.solver.DefaultSolver.solve(DefaultSolver.java:189)
    at test.optaplanner.OptaPlannerService.testOptaplanner(OptaPlannerService.java:68)
    at test.optaplanner.App.main(App.java:13)
Is there something I did wrong? It seems I am following the documentation for custom moves fairly closely, apart from the fact that I am using exclusively Java code instead of Drools.
The initial solution I feed to the solver has all of the steps assigned to a single driver. There are 15 drivers and 40 orders.
In order to bypass this error, I have tried a number of different things:
removing the shadow variable annotation, turning Driver into a problem fact, and handling the nextStep field myself => this makes no difference
using Simulated Annealing + First Fit Decreasing construction heuristics, and starting with steps not assigned to any driver (inspired by the example here, which is more complete than the one from the documentation) => the NullPointerException appears on afterVariableChanged instead, but it still appears
a number of other things which were probably not very smart
But without a more helpful error message, I can't think of anything else to try.
Thank you for your help

Is StreamTransformer safe for concurrent use?

Suppose I have an Ignite cluster with several nodes and a partitioned, non-empty IgniteCache named "TEST_CACHE". Then I run the following code on one of the nodes:
ignite.compute().run(new IgniteRunnable() {

    @IgniteInstanceResource
    private Ignite ignite;

    @Override
    public void run() {
        IgniteDataStreamer<String, Long> ds = ignite.dataStreamer("TEST_CACHE");
        ds.receiver(new StreamTransformer<String, Long>() {
            @Override
            public Object process(MutableEntry<String, Long> entry, Object... arguments)
                    throws EntryProcessorException {
                Long value = entry.getValue();
                entry.setValue(value == null ? 1L : (value.longValue() + 1L));
                return null;
            }
        });
        // loop for adding lots of String data
        while (...)
            ds.addData(...);
    }
});
This is similar to the official StreamTransformerExample code; what's different is that each node gets a DataStreamer instance for the same cache and invokes addData concurrently. In other words, for the same string key on different nodes, one node may have just read the value via entry.getValue() but not yet executed the next line that sets the value and updates the cache, while another node is executing entry.getValue(). So is it possible to end up with a wrong value in this concurrent StreamTransformer use case?
StreamReceiver.receive calls cache.invoke with your entry processor, so the entry is locked within this operation. So yes, it is safe for concurrent use.
BTW, did you enable allowOverwrite in your DataStreamer?
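For what it's worth, as far as I know a StreamTransformer only takes effect when the streamer runs in overwrite mode, so the snippet above would also need:
// Required for the receiver/transformer to be applied
// (overwriting is off by default).
ds.allowOverwrite(true);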

Compiler optimization causes original static final value to be used even when it's changed by JMockit

Consider the following code that uses JSch to create an SSH connection:
public class DoSsh {

    private static final int DEFAULT_PORT = 22;

    public DoSsh(String user, String pass) {
        JSch jsch = new JSch();
        Session sess = jsch.getSession(user, pass, DEFAULT_PORT);
        ...
And the following test code that uses JMockit:
@Test
public void testDoShs() {
    // Change the default port
    Deencapsulation.setField(DoSsh.class, "DEFAULT_PORT", 2222);
    DoSsh ssh = new DoSsh("me", "mypass");
    ...
The goal here is to make the SSH connection use an alternate port during the test (2222 in this case) to connect to an in-memory SSH server (Apache MINA SSHD).
When I debug this, I can see that the value of DEFAULT_PORT has indeed been changed (thank you, JMockit :-). The problem is that the compiler has already inlined the original value of 22 into the call to jsch.getSession, because a static final primitive initialized with a constant is a compile-time constant. So when I step into that call in the debugger, even though the value being passed in is 2222, the value inside the call is 22.
My question is, can anyone suggest a way to solve this that doesn't involve making DEFAULT_PORT non-final?
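(For illustration: per the JLS, a static final field initialized with a constant expression is a compile-time constant, so its value is copied into every use site at compile time. An initializer that is not a constant expression defeats that inlining while keeping the field final; the second field name below is purely illustrative.)
// Compile-time constant: the literal 22 is baked into call sites at
// compile time, so reflective changes to the field are never seen there.
private static final int DEFAULT_PORT = 22;

// Still final, but Integer.parseInt("22") is not a constant expression,
// so call sites load the field at runtime and do see reflective changes.
private static final int RUNTIME_PORT = Integer.parseInt("22");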
Found my own answer. It involves mocking out the call to 'jsch.getSession', but then calling the real version from within the mock, with the desired port number. This is basically an AOP approach. Deencapsulation is not used. Here's the code:
@MockClass(realClass = JSch.class)
public static class MockedJSch {

    public JSch it;

    @Mock(reentrant = true)
    public Session getSession(final String user, final String pass, final int port) throws JSchException {
        return it.getSession(user, pass, TESTING_PORT);
    }
}

@BeforeMethod
public void beforeMethod() {
    Mockit.setUpMocks(MockedJSch.class);
}
There are two key points to note here.
The mocked method is marked as 'reentrant'.
The mock has a public instance member called "it" that is used to call the "real" method. That instance member is initialized somewhere in the bowels of JMockit to refer to the instance upon which this method is invoked, and that reference has access to the "real" version of the method.

Glassfish - JEE6 - Use of Interceptor to measure performance

For measuring execution time of methods, I've seen suggestions to use
public class PerformanceInterceptor {

    @AroundInvoke
    Object measureTime(InvocationContext ctx) throws Exception {
        long beforeTime = System.currentTimeMillis();
        Object obj = null;
        try {
            obj = ctx.proceed();
            return obj;
        } finally {
            long time = System.currentTimeMillis() - beforeTime;
            // Log time
        }
    }
}
Then put
@Interceptors(PerformanceInterceptor.class)
before whatever method you want measured.
Anyway I tried this and it seems to work fine.
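For context, the binding looks something like this on a bean (OrderService and placeOrder are just illustrative names; a class-level @Interceptors would cover every business method instead):
import javax.ejb.Stateless;
import javax.interceptor.Interceptors;

@Stateless
public class OrderService {

    // Only this method is measured, because the binding is method-level.
    @Interceptors(PerformanceInterceptor.class)
    public void placeOrder(String orderId) {
        // ... business logic ...
    }
}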
I also added a
public static long countCalls = 0;
to the PerformanceInterceptor class and a
countCalls++;
to the measureTime() method, which also seems to work OK.
With my newbie hat on, I will ask if my use of countCalls is OK, i.e. whether Glassfish/JEE6 is fine with me using static variables in a Java class that is used as an interceptor, in particular with regard to thread safety. I know that normally you are supposed to synchronize writes to class variables in Java, but I don't know what the case is with JEE6/Glassfish. Any thoughts?
The container does not provide any additional thread safety in this case. Each bean instance has its own interceptor instance, so as a consequence multiple threads can access the static countCalls at the same time.
That's why you have to guard both reads and writes to it as usual. Another possibility is to use an AtomicLong:
private static final AtomicLong callCount = new AtomicLong();

private long getCallCount() {
    return callCount.get();
}

private void increaseCountCall() {
    callCount.getAndIncrement();
}
As expected, these solutions only work as long as all the instances are in the same JVM; for a cluster, shared storage is needed.

Apache Camel : GBs of data from database routed to JMS endpoint

I've done a few small projects in Camel now, but one thing I'm struggling to understand is how to deal with big data (that doesn't fit into memory) when consuming in Camel routes.
I have a database containing a couple of GBs worth of data that I would like to route using Camel. Obviously reading all the data into memory isn't an option.
If I were doing this as a standalone app, I would have code that paged through the data and sent chunks to my JMS endpoint. I'd like to use Camel as it provides a nice pattern. If I were consuming from a file, I could use the streaming() call.
Also, should I use camel-sql/camel-jdbc/camel-jpa, or a bean to read from my database?
Hope everyone is still with me. I'm more familiar with the Java DSL but would appreciate any help/suggestions people can provide.
Update: 2-MAY-2012
So I've had some time to play around with this and I think what I'm actually doing is abusing the concept of a Producer so that I can use it in a route.
public class MyCustomRouteBuilder extends RouteBuilder {

    public void configure() {
        from("timer:foo?period=60s").to("mycustomcomponent:TEST");

        from("direct:msg").process(new Processor() {
            public void process(Exchange ex) throws Exception {
                System.out.println("Receiving value: " + ex.getIn().getBody());
            }
        });
    }
}
My producer looks something like the following. For clarity I've not included the CustomEndpoint or CustomComponent, as they are just thin wrappers.
public class MyCustomProducer extends DefaultProducer {

    Endpoint e;
    CamelContext c;

    public MyCustomProducer(Endpoint epoint) {
        super(epoint);
        this.e = epoint;
        this.c = e.getCamelContext();
    }

    public void process(Exchange ex) throws Exception {
        Endpoint directEndpoint = c.getEndpoint("direct:msg");
        ProducerTemplate t = new DefaultProducerTemplate(c);
        // Simulate streaming operation / chunking of BIG data.
        for (int i = 0; i < 20; i++) {
            t.start();
            String s = "Value " + i;
            t.sendBody(directEndpoint, s);
            t.stop();
        }
    }
}
Firstly, the above doesn't seem very clean. It seems like the cleanest way to do this would be to populate a JMS queue (in place of direct:msg) via a scheduled Quartz job that my Camel route then consumes, so that I have more flexibility over the message size received within my Camel pipelines. However, I quite like the semantics of setting up time-based activations as part of the route.
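Roughly the alternative I have in mind, as a sketch (PagedDbReader is purely a placeholder for a bean that reads one page of rows per invocation):
// Every minute, read the next page from the database and push the rows
// onto a JMS queue; a second route consumes them one at a time.
from("quartz://pager?cron=0+0/1+*+*+*+?")
    .bean(PagedDbReader.class, "nextChunk") // placeholder: returns a List of rows
    .split(body())
    .to("jms:queue:chunks");

from("jms:queue:chunks")
    .to("log:received");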
Does anyone have any thoughts on the best way to do this?
In my understanding, all you need to do is:
from("jpa:SomeEntity" +
"?consumer.query=select e from SomeEntity e where e.processed = false" +
"&maximumResults=150" +
"&consumeDelete=false")
.to("jms:queue:entities");
maximumResults defines a limit of how many entities you get per query.
When you finish processing an entity instance, you need to set e.processed = true; and persist() it, so that the entity won't be processed again.
One way to do that is with the @Consumed annotation:
class SomeEntity {

    @Consumed
    public void markAsProcessed() {
        setProcessed(true);
    }
}
Another thing you need to be careful with is how you serialize the entity before sending it to the queue. You might need to use an enricher between the from and the to, for example:
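A minimal sketch of that last point, converting the managed entity into a simple serializable body before the JMS endpoint (getId() is assumed to exist on SomeEntity):
from("jpa:SomeEntity" +
        "?consumer.query=select e from SomeEntity e where e.processed = false" +
        "&maximumResults=150" +
        "&consumeDelete=false")
    .process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            SomeEntity e = exchange.getIn().getBody(SomeEntity.class);
            // Replace the managed entity with a plain, serializable body
            // (a DTO or Map would work equally well); getId() is assumed.
            exchange.getIn().setBody(String.valueOf(e.getId()));
        }
    })
    .to("jms:queue:entities");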