How to programmatically create a vertex in aws neptune using java - amazon-neptune

I am running the following Java code (a very small modification of the example posted here https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-java.html) and getting a NullPointerException. I can see that the vertex is created in Neptune, but the driver seems to choke on the response.
Am I doing something wrong here? Has anyone been successful in programmatically creating a vertex in Neptune using Java?
public class NeptuneMain {
    public static void main(String[] args) {
        Cluster.Builder builder = Cluster.build();
        builder.addContactPoint("<enter cluster url here>");
        builder.port(8182);
        Cluster cluster = builder.create();
        GraphTraversalSource g = EmptyGraph.instance().traversal().withRemote(DriverRemoteConnection.using(cluster));
        GraphTraversal t = g.addV("Aspect");
        t.forEachRemaining(
            e -> System.out.println(e)
        );
        cluster.close();
    }
}
The stack trace is:
Exception in thread "main" java.util.concurrent.CompletionException: java.lang.NullPointerException
at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
at org.apache.tinkerpop.gremlin.driver.ResultSet.one(ResultSet.java:107)
at org.apache.tinkerpop.gremlin.driver.ResultSet$1.hasNext(ResultSet.java:159)
at org.apache.tinkerpop.gremlin.driver.ResultSet$1.next(ResultSet.java:166)
at org.apache.tinkerpop.gremlin.driver.ResultSet$1.next(ResultSet.java:153)
at org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteTraversal$TraverserIterator.next(DriverRemoteTraversal.java:142)
at

You might be using an older version (3.2.x) of the gremlin-driver package. Try upgrading to >= 3.3.2 and let us know if you still observe this problem.
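For reference, here is a minimal sketch of the same vertex creation against a 3.3.2+ gremlin-driver. The endpoint placeholder and the enableSsl(true) call are assumptions (whether TLS is required depends on how your Neptune cluster is configured), so adjust them for your environment:
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.structure.util.empty.EmptyGraph;

public class NeptuneAddVertex {
    public static void main(String[] args) throws Exception {
        Cluster cluster = Cluster.build()
                .addContactPoint("<your-neptune-cluster-endpoint>") // placeholder
                .port(8182)
                .enableSsl(true)  // assumption: many Neptune clusters require TLS
                .create();
        GraphTraversalSource g = EmptyGraph.instance().traversal()
                .withRemote(DriverRemoteConnection.using(cluster));
        // next() submits the traversal and returns the created vertex.
        Vertex v = g.addV("Aspect").next();
        System.out.println("Created vertex with id " + v.id());
        g.close();
        cluster.close();
    }
}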

Related

Is there a way to pass redis commands in jedis, without using the functions?

We are trying to build a console to process Redis queries, but in the back end we need to use Jedis, so the commands given as input need to be executed through Jedis. For example, in redis-cli we use "keys *"; for the same thing we use jedis.keys("*") in Jedis. I have no idea how to convert "keys *" into jedis.keys("*"). Kindly give me some suggestions.
I know this is an old question, but hopefully the following will be useful for others.
Here's something I came up with because the most recent version of Jedis (3.2.0 as of this time) does not support the "memory usage" command, which is available on Redis >= 4. This code assumes a Jedis object has been created, probably from a Jedis resource pool:
import redis.clients.jedis.commands.ProtocolCommand;
import redis.clients.jedis.util.SafeEncoder;

// ... Jedis setup code ...
Long byteSize = (Long) jedis.sendCommand(new ProtocolCommand() {
        @Override
        public byte[] getRaw() {
            return SafeEncoder.encode("memory");
        }
    },
    SafeEncoder.encode("usage"),
    SafeEncoder.encode(key));
This is a special case command: it has a primary keyword, memory, with a secondary action, usage (others are doctor, stats, purge, etc.). When sending multi-keyword commands to Redis, the keywords must be passed as separate arguments; my first attempt at specifying "memory usage" as a single argument failed with a Redis server error.
It also seems the current Jedis implementation is geared toward single-keyword commands, as under the hood there's a bunch of special code to deal with multi-keyword commands such as debug object that doesn't quite fit the original command-keyword framework.
Anyway, once my current project that requires the ability to call memory usage is complete, I'll try my hand at providing a patch to the Jedis maintainers to implement the above command in a more official/conventional way, which would look something like:
Long byteSize = jedis.memoryUsage(key);
Finally, to address your specific need, your best bet is to use the scan() method of the Jedis class. There are articles here on SO that explain how to use the scan() method.
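For completeness, a minimal sketch of what that could look like (assuming Jedis 3.x; the match pattern "*" and the count of 100 are just illustrative values):
import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class ScanAllKeys {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            ScanParams params = new ScanParams().match("*").count(100);
            String cursor = ScanParams.SCAN_POINTER_START; // "0"
            do {
                // SCAN returns a batch of keys plus the cursor for the next call.
                ScanResult<String> page = jedis.scan(cursor, params);
                page.getResult().forEach(System.out::println);
                cursor = page.getCursor();
            } while (!ScanParams.SCAN_POINTER_START.equals(cursor));
        }
    }
}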
Hmm... you can achieve the same thing with redis.clients.jedis.Connection.sendCommand(Command, String...):
Create a class that extends Connection.
Create an instance of that class and call its connect() method.
Call super.sendCommand(Protocol.Command.valueOf(args[0].toUpperCase()), args[1..end]).
Example:
import java.util.Arrays;

import redis.clients.jedis.Connection;
import redis.clients.jedis.Protocol;

public class JedisConn extends Connection {
    public JedisConn(String host, int port) {
        super(host, port);
    }

    @Override
    protected Connection sendCommand(final Protocol.Command cmd, final String... args) {
        return super.sendCommand(cmd, args);
    }

    public static void main(String[] args) {
        JedisConn jedisConn = new JedisConn("host", 6379);
        jedisConn.connect();
        Connection connection = jedisConn.sendCommand(Protocol.Command.valueOf(args[0].toUpperCase()), Arrays.copyOfRange(args, 1, args.length));
        System.out.println(connection.getAll());
        jedisConn.close();
    }
}
I have found a way to do this. There is a function named eval(); we can use it as shown below.
Scanner s = new Scanner(System.in);
String query = s.nextLine();
String[] q = query.split(" ");
String cmd = '\'' + q[0] + '\'';
for (int i = 1; i < q.length; i++)
    cmd += ",'" + q[i] + '\'';
// j is the Jedis instance; for the input "keys *" this evaluates
// the Lua script: return redis.call('keys','*')
System.out.println(j.eval("return redis.call(" + cmd + ")"));

Get broken constraints in OptaPlanner with non-reversible accumulator

I am trying to obtain the list of broken constraints from a problem instance in OptaPlanner. I am using OptaPlanner version 7.0.0.Final and Drools as the rules engine (also 7.0.0.Final). The problem is solved correctly and without any error, but when I try to obtain the broken constraints I get a NullPointerException.
As far as I have researched, this only happens when I use a Drools accumulator without a reverse operation (like max or min). Furthermore, I made a custom accumulator which is an exact copy of org.drools.core.base.accumulators.LongSumAccumulateFunction, and everything works as expected, but as soon as I change the supportsReverse() method to return false, the NullPointerException appears.
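(For reference, a minimal sketch of the kind of custom accumulator described above; the class name and registration are illustrative, and it assumes LongSumAccumulateFunction can be subclassed directly:)
import org.drools.core.base.accumulators.LongSumAccumulateFunction;

// Behaves exactly like the built-in long sum accumulator, except that it
// reports that it cannot reverse. Registered (for example) via a
// drools.accumulate.function.* property -- registration details depend on your
// setup. With supportsReverse() returning true everything works; with false,
// getConstraintMatchTotals() triggers the NullPointerException.
public class NonReversibleLongSum extends LongSumAccumulateFunction {
    @Override
    public boolean supportsReverse() {
        return false;
    }
}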
I have managed to reproduce this problem in one of the provided examples - CloudBalancing. This is the change to CloudBalancingHelloWorld; its only purpose is to obtain the list of broken constraints as mentioned in this post.
public class CloudBalancingHelloWorld {
    public static void main(String[] args) {
        // Build the Solver
        SolverFactory<CloudBalance> solverFactory = SolverFactory.createFromXmlResource(
                "org/optaplanner/examples/cloudbalancing/solver/cloudBalancingSolverConfig.xml");
        Solver<CloudBalance> solver = solverFactory.buildSolver();
        // Load a problem with 400 computers and 1200 processes
        CloudBalance unsolvedCloudBalance = new CloudBalancingGenerator().createCloudBalance(400, 1200);
        // Solve the problem
        CloudBalance solvedCloudBalance = solver.solve(unsolvedCloudBalance);
        // Display the result
        System.out.println("\nSolved cloudBalance with 400 computers and 1200 processes:\n"
                + toDisplayString(solvedCloudBalance));
        //
        // A piece of code added - start
        //
        ScoreDirector<CloudBalance> scoreDirector = solver.getScoreDirectorFactory().buildScoreDirector();
        scoreDirector.setWorkingSolution(solvedCloudBalance);
        Collection<ConstraintMatchTotal> constraints = scoreDirector.getConstraintMatchTotals();
        System.out.println(constraints.size());
        //
        // A piece of code added - end
        //
    }

    public static String toDisplayString(CloudBalance cloudBalance) {
        StringBuilder displayString = new StringBuilder();
        for (CloudProcess process : cloudBalance.getProcessList()) {
            CloudComputer computer = process.getComputer();
            displayString.append(" ").append(process.getLabel()).append(" -> ")
                    .append(computer == null ? null : computer.getLabel()).append("\n");
        }
        return displayString.toString();
    }
}
And this is the change to the requiredCpuPowerTotal rule. Please note that I have done this only to demonstrate the problem. Basically, I have changed sum to max.
rule "requiredCpuPowerTotal"
when
$computer : CloudComputer($cpuPower : cpuPower)
accumulate(
CloudProcess(
computer == $computer,
$requiredCpuPower : requiredCpuPower);
$requiredCpuPowerTotal : max($requiredCpuPower);
(Integer) $requiredCpuPowerTotal > $cpuPower
)
then
scoreHolder.addHardConstraintMatch(kcontext, $cpuPower - (Integer) $requiredCpuPowerTotal);
end
I am really confused, because the error does not happen during the planning phase, but it does when the scoreDirector recomputes the score to obtain the broken constraints. I mean, the same calculations must have happened during the planning phase, right?
Anyway, here is the stack trace:
Exception in thread "main" Exception executing consequence for rule "requiredCpuPowerTotal" in org.optaplanner.examples.cloudbalancing.solver: java.lang.NullPointerException
at org.drools.core.runtime.rule.impl.DefaultConsequenceExceptionHandler.handleException(DefaultConsequenceExceptionHandler.java:39)
at org.drools.core.common.DefaultAgenda.handleException(DefaultAgenda.java:1256)
at org.drools.core.phreak.RuleExecutor.innerFireActivation(RuleExecutor.java:438)
at org.drools.core.phreak.RuleExecutor.fireActivation(RuleExecutor.java:379)
at org.drools.core.phreak.RuleExecutor.fire(RuleExecutor.java:135)
at org.drools.core.phreak.RuleExecutor.evaluateNetworkAndFire(RuleExecutor.java:88)
at org.drools.core.concurrent.AbstractRuleEvaluator.internalEvaluateAndFire(AbstractRuleEvaluator.java:34)
at org.drools.core.concurrent.SequentialRuleEvaluator.evaluateAndFire(SequentialRuleEvaluator.java:43)
at org.drools.core.common.DefaultAgenda.fireLoop(DefaultAgenda.java:1072)
at org.drools.core.common.DefaultAgenda.internalFireAllRules(DefaultAgenda.java:1019)
at org.drools.core.common.DefaultAgenda.fireAllRules(DefaultAgenda.java:1011)
at org.drools.core.impl.StatefulKnowledgeSessionImpl.internalFireAllRules(StatefulKnowledgeSessionImpl.java:1321)
at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1312)
at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1296)
at org.optaplanner.core.impl.score.director.drools.DroolsScoreDirector.getConstraintMatchTotals(DroolsScoreDirector.java:134)
at org.optaplanner.examples.cloudbalancing.app.CloudBalancingHelloWorld.main(CloudBalancingHelloWorld.java:52)
Caused by: java.lang.NullPointerException
at org.drools.core.base.accumulators.JavaAccumulatorFunctionExecutor$JavaAccumulatorFunctionContext.getAccumulatedObjects(JavaAccumulatorFunctionExecutor.java:208)
at org.drools.core.reteoo.FromNodeLeftTuple.getAccumulatedObjects(FromNodeLeftTuple.java:94)
at org.drools.core.common.AgendaItem.getObjectsDeep(AgendaItem.java:78)
at org.drools.core.reteoo.RuleTerminalNodeLeftTuple.getObjectsDeep(RuleTerminalNodeLeftTuple.java:359)
at org.optaplanner.core.api.score.holder.AbstractScoreHolder.extractJustificationList(AbstractScoreHolder.java:118)
at org.optaplanner.core.api.score.holder.AbstractScoreHolder.registerConstraintMatch(AbstractScoreHolder.java:88)
at org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScoreHolder.addHardConstraintMatch(HardSoftScoreHolder.java:53)
at org.optaplanner.examples.cloudbalancing.solver.Rule_requiredCpuPowerTotal1284553313.defaultConsequence(Rule_requiredCpuPowerTotal1284553313.java:14)
at org.optaplanner.examples.cloudbalancing.solver.Rule_requiredCpuPowerTotal1284553313DefaultConsequenceInvokerGenerated.evaluate(Unknown Source)
at org.optaplanner.examples.cloudbalancing.solver.Rule_requiredCpuPowerTotal1284553313DefaultConsequenceInvoker.evaluate(Unknown Source)
at org.drools.core.phreak.RuleExecutor.innerFireActivation(RuleExecutor.java:431)
... 13 more
Thank you for any help in advance.
That NPE sounds like a bug in Drools. The ConstraintMatch API should always just work. Verify that you still get it against the latest master version. If so, please create a JIRA for it with a minimal reproducer and we'll look into it.

How to do batch processing with Apache Apex?

How can I create a batch processing application with Apache Apex?
All the examples I've found were streaming applications, which means they never end, and I would like my app to exit once it has processed all the data.
Thanks
What is your use-case? Supporting batch natively is on the roadmap and is being worked on right now.
Alternatively, until then, once you are sure that your processing is done, the input operator can signal this by throwing a ShutdownException(), and that will propagate through the DAG and shut it down.
Let us know if you need further details.
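A minimal sketch of that pattern (untested; the operator and its field names are made up for illustration, and the exact package of ShutdownException depends on your Apex version, so check apex-core before relying on the import):
import java.util.Arrays;
import java.util.Iterator;

import com.datatorrent.api.DefaultOutputPort;
import com.datatorrent.api.InputOperator;
import com.datatorrent.common.util.BaseOperator;
// import for ShutdownException intentionally omitted -- its package varies by Apex release

public class FiniteInputOperator extends BaseOperator implements InputOperator {
    public final transient DefaultOutputPort<String> output = new DefaultOutputPort<>();

    // Illustrative in-memory "data set"; a real operator would read from a file, DB, etc.
    private final transient Iterator<String> data = Arrays.asList("a", "b", "c").iterator();

    @Override
    public void emitTuples() {
        if (data.hasNext()) {
            output.emit(data.next());
        } else {
            // All input consumed: ask the engine to shut the whole DAG down.
            throw new ShutdownException();
        }
    }
}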
You can also add an exit condition before running the app in local mode, for example:
public void testMapOperator() throws Exception
{
    LocalMode lma = LocalMode.newInstance();
    DAG dag = lma.getDAG();

    NumberGenerator numGen = dag.addOperator("numGen", new NumberGenerator());
    FunctionOperator.MapFunctionOperator<Integer, Integer> mapper
        = dag.addOperator("mapper", new FunctionOperator.MapFunctionOperator<Integer, Integer>(new Square()));
    ResultCollector collector = dag.addOperator("collector", new ResultCollector());

    dag.addStream("raw numbers", numGen.output, mapper.input);
    dag.addStream("mapped results", mapper.output, collector.input);

    // Create local cluster
    LocalMode.Controller lc = lma.getController();
    lc.setHeartbeatMonitoringEnabled(false);

    // Condition to exit the application
    ((StramLocalCluster) lc).setExitCondition(new Callable<Boolean>()
    {
        @Override
        public Boolean call() throws Exception
        {
            return TupleCount == NumTuples;
        }
    });

    lc.run();
    Assert.assertEquals(sum, 285);
}
For the complete code, refer to https://github.com/apache/apex-malhar/blob/master/stream/src/test/java/org/apache/apex/malhar/stream/FunctionOperator/FunctionOperatorTest.java

Executing an unused lambda expression in debug session throws ClassNotFoundException

This is a bit nitpicky; I wonder if it's a bug or a feature.
I have this main method in IntelliJ:
public static void main(String[] args) throws InterruptedException {
    Comparator<String> comp = (s1, s2) -> 1;
    System.out.println("Break here");
}
When I debug and break at the "System.out..." line, I see that comp is initialized. However, when I try to execute it from the "Expression Evaluation" window I get a ClassNotFoundException!
Of course, evaluating the same thing in code works perfectly. Is it somehow related to the way lambdas are implemented under the hood, or is it just a bug in the IDE?
I am using IntelliJ IDEA 13.1.4.
Evaluation of lambda expressions is supported only starting from version 14.
Taken from the What's New in IntelliJ IDEA 14 page.

Using Janino in YARN with Apache Twill causes "Imported class x.y could not be loaded"

I'm porting an open source project, which uses Janino for dynamic compilation of classes, to YARN using Apache Twill. This works great except for one last error. When Janino is used with Twill, I get an exception saying a class cannot be found, although the class is on the classpath and is even used.
The exception I'm getting is:
2014-06-09T18:30:40,093Z ERROR o.a.d.e.p.i.p.ProjectRecordBatch [zk1] [37daf04b-7d82-4d2f-987c-59851f2aeafe:frag:0:0] AbstractSingleRecordBatch:next(AbstractSingleRecordBatch.java:60) - Failure during query
org.apache.drill.exec.exception.SchemaChangeException: Failure while attempting to load generated class
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:243)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:57)
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.next(ProjectRecordBatch.java:83)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:45)
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.next(LimitRecordBatch.java:99)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:45)
at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.next(RemovingRecordBatch.java:94)
at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.next(ScreenCreator.java:80)
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:104)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.drill.exec.exception.ClassTransformationException: Failure Generating transformation classes for value:
package org.apache.drill.exec.test.generated;
import org.apache.drill.exec.exception.SchemaChangeException;
import org.apache.drill.exec.expr.holders.BitHolder;
import org.apache.drill.exec.expr.holders.VarCharHolder;
import org.apache.drill.exec.ops.FragmentContext;
import org.apache.drill.exec.record.RecordBatch;
import org.apache.drill.exec.vector.RepeatedVarCharVector;
import org.apache.drill.exec.vector.VarCharVector;
import org.apache.drill.exec.vector.complex.impl.RepeatedVarCharReaderImpl;
public class ProjectorGen0 {
RepeatedVarCharVector vv0;
RepeatedVarCharReaderImpl reader4;
VarCharVector vv5;
public boolean doEval(int inIndex, int outIndex)
throws SchemaChangeException
{
{
VarCharHolder out3 = new VarCharHolder();
complex:
vv0 .getAccessor().getReader().setPosition((inIndex));
reader4 .read(0, out3);
BitHolder out8 = new BitHolder();
out8 .value = 1;
if (!vv5 .getMutator().setSafe((outIndex), out3)) {
out8 .value = 0;
}
if (out8 .value == 0) {
return false;
}
}
{
return true;
}
}
public void doSetup(FragmentContext context, RecordBatch incoming, RecordBatch outgoing)
throws SchemaChangeException
{
{
int[] fieldIds1 = new int[ 1 ] ;
fieldIds1 [ 0 ] = 0;
Object tmp2 = (incoming).getValueAccessorById(RepeatedVarCharVector.class, fieldIds1).getValueVector();
if (tmp2 == null) {
throw new SchemaChangeException("Failure while loading vector vv0 with id: org.apache.drill.exec.record.TypedFieldId#1cf4a5a0.");
}
vv0 = ((RepeatedVarCharVector) tmp2);
reader4 = ((RepeatedVarCharReaderImpl) vv0 .getAccessor().getReader());
int[] fieldIds6 = new int[ 1 ] ;
fieldIds6 [ 0 ] = 0;
Object tmp7 = (outgoing).getValueAccessorById(VarCharVector.class, fieldIds6).getValueVector();
if (tmp7 == null) {
throw new SchemaChangeException("Failure while loading vector vv5 with id: org.apache.drill.exec.record.TypedFieldId#1ce776c0.");
}
vv5 = ((VarCharVector) tmp7);
}
}
}
at org.apache.drill.exec.compile.ClassTransformer.getImplementationClass(ClassTransformer.java:302)
at org.apache.drill.exec.ops.FragmentContext.getImplementationClass(FragmentContext.java:185)
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:240)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:57)
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.next(ProjectRecordBatch.java:83)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:45)
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.next(LimitRecordBatch.java:99)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:45)
at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.next(RemovingRecordBatch.java:94)
at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.next(ScreenCreator.java:80)
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:104)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.codehaus.commons.compiler.CompileException: Line 4, Column 8: Imported class "org.apache.drill.exec.exception.SchemaChangeException" could not be loaded
at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:9014)
at org.codehaus.janino.UnitCompiler.import2(UnitCompiler.java:192)
at org.codehaus.janino.UnitCompiler.access$000(UnitCompiler.java:104)
at org.codehaus.janino.UnitCompiler$1.visitSingleTypeImportDeclaration(UnitCompiler.java:166)
at org.codehaus.janino.Java$CompilationUnit$SingleTypeImportDeclaration.accept(Java.java:171)
at org.codehaus.janino.UnitCompiler.<init>(UnitCompiler.java:164)
at org.apache.drill.exec.compile.JaninoClassCompiler.getClassByteCode(JaninoClassCompiler.java:53)
at org.apache.drill.exec.compile.QueryClassLoader.getClassByteCode(QueryClassLoader.java:69)
at org.apache.drill.exec.compile.ClassTransformer.getImplementationClass(ClassTransformer.java:256)
at org.apache.drill.exec.ops.FragmentContext.getImplementationClass(FragmentContext.java:185)
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:240)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:57)
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.next(ProjectRecordBatch.java:83)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:45)
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.next(LimitRecordBatch.java:99)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:45)
at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.next(RemovingRecordBatch.java:94)
at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.next(ScreenCreator.java:80)
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:104)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
As you can see, the outer exception is a SchemaChangeException, but the root cause is that this very class could not be loaded by the compiler:
Line 4, Column 8: Imported class "org.apache.drill.exec.exception.SchemaChangeException" could not be loaded
So there is something wrong with the classloader, which changes when the application is run with Apache Twill. It works standalone, even though the underlying jars are identical in both cases.
Apache Twill also has a function to add additional resources, but adding my jar there didn't work either; instead I got an exception that the jar is already included:
Exception in thread "ServiceDelegate STARTING" java.lang.RuntimeException: java.util.zip.ZipException: duplicate entry: lib/drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:133)
at org.apache.twill.internal.AbstractZKServiceController.startUp(AbstractZKServiceController.java:82)
at org.apache.twill.internal.AbstractExecutionServiceController$ServiceDelegate.startUp(AbstractExecutionServiceController.java:109)
at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.util.zip.ZipException: duplicate entry: lib/drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar
at java.util.zip.ZipOutputStream.putNextEntry(ZipOutputStream.java:215)
at java.util.jar.JarOutputStream.putNextEntry(JarOutputStream.java:109)
at org.apache.twill.internal.ApplicationBundler.copyResource(ApplicationBundler.java:347)
at org.apache.twill.internal.ApplicationBundler.createBundle(ApplicationBundler.java:140)
at org.apache.twill.yarn.YarnTwillPreparer.createContainerJar(YarnTwillPreparer.java:388)
at org.apache.twill.yarn.YarnTwillPreparer.access$300(YarnTwillPreparer.java:106)
at org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:264)
at org.apache.twill.yarn.YarnTwillPreparer$1.call(YarnTwillPreparer.java:253)
at org.apache.twill.yarn.YarnTwillController.doStartUp(YarnTwillController.java:98)
... 4 more
The underlying classloader used is a URLClassLoader. It's initialized with an empty array, yet it works for the standalone application; the problem only occurs when running with Apache Twill. Where does it get the URLs it should look up classes from, and how could I check that?
The classloader definition:
public class QueryClassLoader extends URLClassLoader {
    static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(QueryClassLoader.class);

    private final ClassCompiler classCompiler;
    private AtomicLong index = new AtomicLong(0);
    private ConcurrentMap<String, byte[]> customClasses = new MapMaker().concurrencyLevel(4).makeMap();

    public QueryClassLoader(boolean useJanino) {
        super(new URL[0]);
        if (useJanino) {
            this.classCompiler = new JaninoClassCompiler(this);
        } else {
            throw new UnsupportedOperationException("Drill no longer supports using the JDK class compiler.");
        }
    }
    ...
Any ideas on where I could look into why the error occurs, or how to solve it?
The same question was asked on the Apache Twill mailing list. Here is the discussion and the proposed solution:
http://mail-archives.apache.org/mod_mbox/twill-dev/201406.mbox/%3CCAHqY-MOa8jBYs%3DEZENxxNZg-9YGMR5SASg76P_k6%2Bm6p2L9JuQ%40mail.gmail.com%3E
To repeat my answer from that mail thread:
I am not familiar with how Janino works, but it seems to me that it may not be using the context ClassLoader to load classes, or at least the thread that is compiling the generated class does not have the context ClassLoader set properly.
The way that Twill works is pretty straightforward. It creates a "launcher.jar", which has no dependency on any library, and starts the JVM in a YARN container like this:
java -cp launcher.jar ....
Hence the system classloader has no user/library classes, but only the Launcher class.
Then, in the Launcher.main() method, it creates a URLClassLoader using all the jars + .class files inside the "container.jar" file to load the user TwillRunnable. It also sets it as the context ClassLoader of the thread that calls the "run()" method. So, if you want to load classes manually (through ClassLoader or Class.forName) in a different thread than the "run()" thread, you'll have to set the context ClassLoader of that thread, or explicitly construct the ClassLoader with the correct parent ClassLoader.
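A minimal sketch of those two options (illustrative only; where you obtain the classloader that can see the Drill/Janino classes is an assumption about your setup):
// Option 1: point the compiling thread's context ClassLoader at a classloader
// that can actually see the Drill/Janino classes before triggering compilation.
ClassLoader drillClasses = QueryClassLoader.class.getClassLoader();
ClassLoader previous = Thread.currentThread().getContextClassLoader();
Thread.currentThread().setContextClassLoader(drillClasses);
try {
    // ... run the code that triggers the Janino compilation here ...
} finally {
    Thread.currentThread().setContextClassLoader(previous);
}

// Option 2: give QueryClassLoader an explicit parent instead of relying on the
// default (system) classloader, e.g. change its constructor to call
//     super(new URL[0], Thread.currentThread().getContextClassLoader());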