Cannot register new CQ query on Apache Geode - gemfire

I'm stuck trying to register a CQ query with the ClientCache. I keep getting this exception:
CqService is not available.
java.lang.IllegalStateException: CqService is not available.
at org.apache.geode.cache.query.internal.cq.MissingCqService.start(MissingCqService.java:171)
at org.apache.geode.cache.query.internal.DefaultQueryService.getCqService(DefaultQueryService.java:777)
at org.apache.geode.cache.query.internal.DefaultQueryService.newCq(DefaultQueryService.java:486)
The client cache is created as follows:
def client(): ClientCache = new ClientCacheFactory()
  .setPdxPersistent(true)
  .setPdxSerializer(new ReflectionBasedAutoSerializer(false, "org.geode.importer.domain.FooBar"))
  .addPoolLocator(ConfigProvider.locator.host, ConfigProvider.locator.port)
  .setPoolSubscriptionEnabled(true)
  .create()
and the suggested solution does not help. The library version in use is:
"org.apache.geode" % "geode-core" % "1.0.0-incubating"

You will have to pull in geode-cq as a dependency. In Gradle:
compile 'org.apache.geode:geode-cq:1.0.0-incubating'
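Since the question declares its dependencies in sbt style, the equivalent there would presumably be:
libraryDependencies += "org.apache.geode" % "geode-cq" % "1.0.0-incubating"
Once geode-cq is on the classpath, registering a CQ should work. A minimal sketch (in Java, assuming the Geode 1.0.0-incubating CQ API; the region name /FooBar, the CQ name, and the listener are illustrative placeholders, not taken from the question):
```
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.query.CqAttributesFactory;
import org.apache.geode.cache.query.CqEvent;
import org.apache.geode.cache.query.CqQuery;
import org.apache.geode.cache.query.QueryService;
import org.apache.geode.cache.util.CqListenerAdapter;

public class CqExample {
    public static void registerCq(ClientCache cache) throws Exception {
        QueryService queryService = cache.getQueryService();
        CqAttributesFactory cqf = new CqAttributesFactory();
        cqf.addCqListener(new CqListenerAdapter() {
            @Override
            public void onEvent(CqEvent event) {
                System.out.println("CQ event: " + event.getNewValue());
            }
        });
        // newCq is the call that previously failed with "CqService is not available."
        CqQuery cq = queryService.newCq("fooBarCq", "SELECT * FROM /FooBar", cqf.create());
        cq.execute();
    }
}
```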

Related

Upgrade to Java 17 throws java.lang.RuntimeException: Error creating extended parser class: Could not determine whether class has already been loaded

I am using the jtwig library and the code was working fine, but after we upgraded to Java 17 I am getting the runtime exception mentioned below. The method in question is shown here; the exception is thrown when calling template.render():
String renderDescription(String templatePath, String userId, String caseId) {
    JtwigTemplate template = JtwigTemplate.classpathTemplate(templatePath);
    JtwigModel model = JtwigModel.newModel()
            .with("userId", userId)
            .with("caseId", caseId)
            .with("statusPageUrlTemplate", config.getStatusPageUrlTemplate());
    return template.render(model);
}
java.lang.RuntimeException: Error creating extended parser class: Could not determine whether class 'org.jtwig.parser.parboiled.base.BooleanParser$$parboiled' has already been loaded
at org.parboiled.Parboiled.createParser(Parboiled.java:58)
at org.jtwig.parser.parboiled.ParserContext.instance(ParserContext.java:31)
at org.jtwig.parser.parboiled.ParboiledJtwigParser.parse(ParboiledJtwigParser.java:37)
at org.jtwig.parser.cache.InMemoryConcurrentPersistentTemplateCache.get(InMemoryConcurrentPersistentTemplateCache.java:39)
at org.jtwig.parser.CachedJtwigParser.parse(CachedJtwigParser.java:19)
at org.jtwig.JtwigTemplate.render(JtwigTemplate.java:98)
at org.jtwig.JtwigTemplate.render(JtwigTemplate.java:74)
I was facing a similar issue after upgrading the JVM version, and I found that setting this environment variable helped:
JDK_JAVA_OPTIONS=--add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED
I believe it has something to do with the stricter default restrictions on reflective access to built-in JDK classes in newer Java versions.
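If you prefer not to rely on an environment variable, the same flags can be passed directly on the java command line (the jar name below is a placeholder):
```
java --add-opens=java.base/java.lang=ALL-UNNAMED \
     --add-opens=java.base/java.time=ALL-UNNAMED \
     -jar myapp.jar
```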

nutch 1.16 crawl example from NutchTutorial returns NoSuchMethodError on org.apache.commons.cli.OptionBuilder (Windows 10)

I have been trying to run a Nutch 1.16 crawler using the code example and instructions from https://cwiki.apache.org/confluence/display/NUTCH/NutchTutorial, but no matter what, I get stuck when initiating the actual crawl.
I'm running it through Cygwin64 on a Windows 10 machine, using a binary installation (though I have tried compiling one, with the same results). Initially, Nutch would throw an UnsatisfiedLinkError (NativeIO$Windows.access0), which I fixed by adding libraries from several other answers for the same issue. After doing so, I could at least start a server, but trying to crawl through Nutch itself would return a NoSuchMethodError no matter what I did. nutch-site.xml only contains the http.agent.name and plugin.includes options, both taken from the same example.
The following is the error message (I also tried omitting seed.txt):
$ bin/nutch inject crawl/crawldb urls/seed.txt
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.commons.cli.OptionBuilder.withArgPattern(Ljava/lang/String;I)Lorg/apache/commons/cli/OptionBuilder;
at org.apache.hadoop.util.GenericOptionsParser.buildGeneralOptions(GenericOptionsParser.java:207)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:370)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:138)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:59)
at org.apache.nutch.crawl.Injector.main(Injector.java:534)
The following is the list of libraries currently present in the lib directory:
activation-1.1.jar
amqp-client-5.2.0.jar
animal-sniffer-annotations-1.14.jar
antlr-runtime-3.5.2.jar
antlr4-4.5.1.jar
aopalliance-1.0.jar
apache-nutch-1.16.jar
apacheds-i18n-2.0.0-M15.jar
apacheds-kerberos-codec-2.0.0-M15.jar
api-asn1-api-1.0.0-M20.jar
api-util-1.0.0-M20.jar
args4j-2.0.16.jar
ascii-utf-themes-0.0.1.jar
asciitable-0.3.2.jar
asm-3.3.1.jar
asm-7.1.jar
avro-1.7.7.jar
bootstrap-3.0.3.jar
cglib-2.2.1-v20090111.jar
cglib-2.2.2.jar
char-translation-0.0.2.jar
checker-compat-qual-2.0.0.jar
closure-compiler-v20130603.jar
commons-beanutils-1.7.0.jar
commons-beanutils-core-1.8.0.jar
commons-cli-1.2-sources.jar
commons-cli-1.2.jar
commons-codec-1.11.jar
commons-collections-3.2.2.jar
commons-collections4-4.2.jar
commons-compress-1.18.jar
commons-configuration-1.6.jar
commons-daemon-1.0.13.jar
commons-digester-1.8.jar
commons-el-1.0.jar
commons-httpclient-3.1.jar
commons-io-2.4.jar
commons-jexl-2.1.1.jar
commons-lang-2.6.jar
commons-lang3-3.8.1.jar
commons-logging-1.1.3.jar
commons-math3-3.1.1.jar
commons-net-3.1.jar
crawler-commons-1.0.jar
curator-client-2.7.1.jar
curator-framework-2.7.1.jar
curator-recipes-2.7.1.jar
cxf-core-3.3.3.jar
cxf-rt-bindings-soap-3.3.3.jar
cxf-rt-bindings-xml-3.3.3.jar
cxf-rt-databinding-jaxb-3.3.3.jar
cxf-rt-frontend-jaxrs-3.3.3.jar
cxf-rt-frontend-jaxws-3.3.3.jar
cxf-rt-frontend-simple-3.3.3.jar
cxf-rt-security-3.3.3.jar
cxf-rt-transports-http-3.3.3.jar
cxf-rt-transports-http-jetty-3.3.3.jar
cxf-rt-ws-addr-3.3.3.jar
cxf-rt-ws-policy-3.3.3.jar
cxf-rt-wsdl-3.3.3.jar
dom4j-1.6.1.jar
ehcache-3.3.1.jar
elasticsearch-0.90.1.jar
error_prone_annotations-2.1.3.jar
FastInfoset-1.2.16.jar
geronimo-jcache_1.0_spec-1.0-alpha-1.jar
gora-hbase-0.3.jar
gson-2.2.4.jar
guava-25.0-jre.jar
guice-3.0.jar
guice-servlet-3.0.jar
h2-1.4.197.jar
hadoop-0.20.0-ant.jar
hadoop-0.20.0-core.jar
hadoop-0.20.0-examples.jar
hadoop-0.20.0-test.jar
hadoop-0.20.0-tools.jar
hadoop-annotations-2.9.2.jar
hadoop-auth-2.9.2.jar
hadoop-common-2.9.2.jar
hadoop-core-1.2.1.jar
hadoop-core_0.20.0.xml
hadoop-core_0.21.0.xml
hadoop-core_0.22.0.xml
hadoop-hdfs-2.9.2.jar
hadoop-hdfs-client-2.9.2.jar
hadoop-mapreduce-client-common-2.2.0.jar
hadoop-mapreduce-client-common-2.9.2.jar
hadoop-mapreduce-client-core-2.2.0.jar
hadoop-mapreduce-client-core-2.9.2.jar
hadoop-mapreduce-client-jobclient-2.2.0.jar
hadoop-mapreduce-client-jobclient-2.9.2.jar
hadoop-mapreduce-client-shuffle-2.2.0.jar
hadoop-mapreduce-client-shuffle-2.9.2.jar
hadoop-yarn-api-2.9.2.jar
hadoop-yarn-client-2.9.2.jar
hadoop-yarn-common-2.9.2.jar
hadoop-yarn-registry-2.9.2.jar
hadoop-yarn-server-common-2.9.2.jar
hadoop-yarn-server-nodemanager-2.9.2.jar
hbase-0.90.0-tests.jar
hbase-0.90.0.jar
hbase-0.92.1.jar
hbase-client-0.98.0-hadoop2.jar
hbase-common-0.98.0-hadoop2.jar
hbase-protocol-0.98.0-hadoop2.jar
HikariCP-java7-2.4.12.jar
htmlparser-1.6.jar
htrace-core-2.04.jar
htrace-core4-4.1.0-incubating.jar
httpclient-4.5.6.jar
httpcore-4.4.9.jar
httpcore-nio-4.4.9.jar
icu4j-61.1.jar
istack-commons-runtime-3.0.8.jar
j2objc-annotations-1.1.jar
jackson-annotations-2.9.9.jar
jackson-core-2.9.9.jar
jackson-core-asl-1.9.13.jar
jackson-databind-2.9.9.jar
jackson-dataformat-cbor-2.9.9.jar
jackson-jaxrs-1.9.13.jar
jackson-jaxrs-base-2.9.9.jar
jackson-jaxrs-json-provider-2.9.9.jar
jackson-mapper-asl-1.9.13.jar
jackson-module-jaxb-annotations-2.9.9.jar
jackson-xc-1.9.13.jar
jakarta.activation-api-1.2.1.jar
jakarta.ws.rs-api-2.1.5.jar
jakarta.xml.bind-api-2.3.2.jar
jasper-compiler-5.5.12.jar
jasper-runtime-5.5.12.jar
java-xmlbuilder-0.4.jar
javassist-3.12.1.GA.jar
javax.annotation-api-1.3.2.jar
javax.inject-1.jar
javax.persistence-2.2.0.jar
javax.servlet-api-3.1.0.jar
jaxb-api-2.2.2.jar
jaxb-impl-2.2.3-1.jar
jaxb-runtime-2.3.2.jar
jcip-annotations-1.0-1.jar
jersey-client-1.19.4.jar
jersey-core-1.9.jar
jersey-guice-1.9.jar
jersey-json-1.9.jar
jersey-server-1.9.jar
jets3t-0.9.0.jar
jettison-1.1.jar
jetty-6.1.26.jar
jetty-client-6.1.22.jar
jetty-continuation-9.4.19.v20190610.jar
jetty-http-9.4.19.v20190610.jar
jetty-io-9.4.19.v20190610.jar
jetty-security-9.4.19.v20190610.jar
jetty-server-9.4.19.v20190610.jar
jetty-sslengine-6.1.26.jar
jetty-util-6.1.26.jar
jetty-util-9.4.19.v20190610.jar
joda-time-2.3.jar
jquery-2.0.3-1.jar
jquery-selectors-0.0.3.jar
jquery-ui-1.10.2-1.jar
jquerypp-1.0.1.jar
jsch-0.1.54.jar
json-smart-1.3.1.jar
jsp-2.1-6.1.14.jar
jsp-api-2.1-6.1.14.jar
jsp-api-2.1.jar
jsr305-3.0.0.jar
junit-3.8.1.jar
juniversalchardet-1.0.3.jar
leveldbjni-all-1.8.jar
log4j-1.2.17.jar
lucene-analyzers-common-4.3.0.jar
lucene-codecs-4.3.0.jar
lucene-core-4.3.0.jar
lucene-grouping-4.3.0.jar
lucene-highlighter-4.3.0.jar
lucene-join-4.3.0.jar
lucene-memory-4.3.0.jar
lucene-queries-4.3.0.jar
lucene-queryparser-4.3.0.jar
lucene-sandbox-4.3.0.jar
lucene-spatial-4.3.0.jar
lucene-suggest-4.3.0.jar
maven-parent-config-0.3.4.jar
metrics-core-3.0.1.jar
modernizr-2.6.2-1.jar
mssql-jdbc-6.2.1.jre7.jar
neethi-3.1.1.jar
netty-3.6.2.Final.jar
netty-all-4.0.23.Final.jar
nimbus-jose-jwt-4.41.1.jar
okhttp-2.7.5.jar
okio-1.6.0.jar
org.apache.commons.cli-1.2.0.jar
ormlite-core-5.1.jar
ormlite-jdbc-5.1.jar
oro-2.0.8.jar
paranamer-2.3.jar
protobuf-java-2.5.0.jar
reflections-0.9.8.jar
servlet-api-2.5-20081211.jar
servlet-api-2.5.jar
skb-interfaces-0.0.1.jar
slf4j-api-1.7.26.jar
slf4j-log4j12-1.7.25.jar
snappy-java-1.0.5.jar
spatial4j-0.3.jar
spring-aop-4.0.9.RELEASE.jar
spring-beans-4.0.9.RELEASE.jar
spring-context-4.0.9.RELEASE.jar
spring-core-4.0.9.RELEASE.jar
spring-expression-4.0.9.RELEASE.jar
spring-web-4.0.9.RELEASE.jar
ST4-4.0.8.jar
stax-api-1.0-2.jar
stax-ex-1.8.1.jar
stax2-api-3.1.4.jar
t-digest-3.2.jar
tika-core-1.22.jar
txw2-2.3.2.jar
typeaheadjs-0.9.3.jar
warc-hadoop-0.1.0.jar
webarchive-commons-1.1.5.jar
wicket-bootstrap-core-0.9.2.jar
wicket-bootstrap-extensions-0.9.2.jar
wicket-core-6.17.0.jar
wicket-extensions-6.13.0.jar
wicket-ioc-6.17.0.jar
wicket-request-6.17.0.jar
wicket-spring-6.17.0.jar
wicket-util-6.17.0.jar
wicket-webjars-0.4.0.jar
woodstox-core-5.0.3.jar
wsdl4j-1.6.3.jar
xercesImpl-2.12.0.jar
xml-apis-1.4.01.jar
xml-resolver-1.2.jar
xmlenc-0.52.jar
xmlParserAPIs-2.6.2.jar
xmlschema-core-2.2.4.jar
zookeeper-3.4.6.jar
This is my java version:
java version "1.8.0_241"
Java(TM) SE Runtime Environment (build 1.8.0_241-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.241-b07, mixed mode)
I'd also like to point out that, despite what another answer may have said, Nutch 1.4 (or any other version of Nutch, for that matter) did NOT resolve the issue, at least on Windows.
EDIT: The following answer worked for me, but I left the original one because it may still be useful to someone working with other versions of Nutch.
Again, thanks to Sebastian Nagel: to get around the NoSuchMethodError, just edit ivy\ivy.xml to reference a different version of the Hadoop libraries. In my case I installed Hadoop 3.1.3 and also added the corresponding 3.1.3 versions of winutils.exe and hadoop.dll to the hadoop\bin directory referenced by HADOOP_HOME. After running bin/crawl, everything seems to be working correctly.
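For reference, the change in ivy\ivy.xml looks roughly like the following; the exact list of Hadoop artifacts in Nutch's ivy.xml may differ, so treat this as a sketch:
```
<!-- bump each Hadoop dependency to the locally installed version, e.g.: -->
<dependency org="org.apache.hadoop" name="hadoop-common" rev="3.1.3" conf="*->default"/>
<dependency org="org.apache.hadoop" name="hadoop-hdfs" rev="3.1.3" conf="*->default"/>
<dependency org="org.apache.hadoop" name="hadoop-mapreduce-client-core" rev="3.1.3" conf="*->default"/>
```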
Outdated answer: Okay, after working on the source code itself (courtesy of https://github.com/apache/commons-cli) at the suggestion of Sebastian Nagel, I was able to find the (very simple) implementation of the method (https://github.com/marcelmaatkamp/EntityExtractorUtils/blob/master/src/main/java/org/apache/commons/cli/OptionBuilder.java):
/**
 * The next Option created will have an argument pattern and
 * the number of pattern occurrences.
 *
 * @param argPattern string representing a pattern regex
 * @param limit      the number of pattern occurrences in the argument
 * @return the OptionBuilder instance
 */
public static OptionBuilder withArgPattern(String argPattern, int limit)
{
    OptionBuilder.argPattern = argPattern;
    OptionBuilder.limit = limit;
    return instance;
}
Using Maven I was then able to compile the code into its own jar file, which I then added to the lib folder of Apache Nutch.
This still did not completely resolve my problem, as there seem to be deprecated functions in use throughout the Nutch framework, which will probably mean even more work under similar circumstances (for instance, right after adding the new jar I got a NoSuchMethodError for org.apache.hadoop.mapreduce.Job.getInstance).
I leave this answer here as a temporary solution for anyone who may have gotten stuck on the same issue, but I surely wish there were an easier way of finding out which methods appear in which jar file before exploring their entire file structure, although it may just be me overlooking it; a quick scan like the sketch below helps.
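One way to narrow this down, assuming a Unix-like shell such as the Cygwin environment used above, is to grep every jar's file listing for the class in question and then dump the method signatures a candidate jar actually provides:
```
# list every jar under lib/ that contains the OptionBuilder class
for f in lib/*.jar; do
  unzip -l "$f" | grep -q "org/apache/commons/cli/OptionBuilder.class" && echo "$f"
done

# then inspect the methods a given jar actually provides
javap -classpath lib/commons-cli-1.2.jar org.apache.commons.cli.OptionBuilder
```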

vertx hazelcast class serialization on OSGi karaf

I want to use a Vert.x cluster with Hazelcast on Karaf. When I try to write messages to the bus (after the cluster is formed) I get the serialization error below. I was thinking about adding a class definition to Hazelcast to tell it where to find the Vert.x server ID class (io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID), but I am not sure how.
On Karaf I had to wrap the vertx-hazelcast jar because it doesn't have a proper manifest file.
<bundle start-level="80">wrap:mvn:io.vertx/vertx-hazelcast/${vertx.version}</bundle>
Here is my error:
com.hazelcast.nio.serialization.HazelcastSerializationException: Problem while reading DataSerializable, namespace: 0, id: 0, class: 'io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID', exception: io.vertx.spi.cluster.hazelcast.impl.
HazelcastServerID
at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:130)[11:com.hazelcast:3.6.3]
at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:47)[11:com.hazelcast:3.6.3]
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:46)[11:com.hazelcast:3.6.3]
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:170)[11:com.hazelcast:3.6.3]
at com.hazelcast.map.impl.DataAwareEntryEvent.getOldValue(DataAwareEntryEvent.java:82)[11:com.hazelcast:3.6.3]
at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.entryRemoved(HazelcastAsyncMultiMap.java:147)[64:wrap_file__C__Users_gadei_development_github_effectus.io_effectus-core_core.test_core.test.exam_target_paxexam_unpack_
5bf4439f-01ff-4db4-bd3d-e3b6a1542596_system_io_vertx_vertx-hazelcast_3.4.0-SNAPSHOT_vertx-hazelcast-3.4.0-SNAPSHOT.jar:0.0.0]
at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatch0(MultiMapEventsDispatcher.java:111)[11:com.hazelcast:3.6.3]
at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatchEntryEventData(MultiMapEventsDispatcher.java:84)[11:com.hazelcast:3.6.3]
at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatchEvent(MultiMapEventsDispatcher.java:55)[11:com.hazelcast:3.6.3]
at com.hazelcast.multimap.impl.MultiMapService.dispatchEvent(MultiMapService.java:371)[11:com.hazelcast:3.6.3]
at com.hazelcast.multimap.impl.MultiMapService.dispatchEvent(MultiMapService.java:65)[11:com.hazelcast:3.6.3]
at com.hazelcast.spi.impl.eventservice.impl.LocalEventDispatcher.run(LocalEventDispatcher.java:56)[11:com.hazelcast:3.6.3]
at com.hazelcast.util.executor.StripedExecutor$Worker.process(StripedExecutor.java:187)[11:com.hazelcast:3.6.3]
at com.hazelcast.util.executor.StripedExecutor$Worker.run(StripedExecutor.java:171)[11:com.hazelcast:3.6.3]
Caused by: java.lang.ClassNotFoundException: io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)[:1.8.0_101]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)[:1.8.0_101]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)[:1.8.0_101]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)[:1.8.0_101]
at com.hazelcast.nio.ClassLoaderUtil.tryLoadClass(ClassLoaderUtil.java:137)[11:com.hazelcast:3.6.3]
at com.hazelcast.nio.ClassLoaderUtil.loadClass(ClassLoaderUtil.java:115)[11:com.hazelcast:3.6.3]
at com.hazelcast.nio.ClassLoaderUtil.newInstance(ClassLoaderUtil.java:68)[11:com.hazelcast:3.6.3]
at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:119)[11:com.hazelcast:3.6.3]
... 13 more
Any suggestions appreciated. Thanks.
This normally happens if an object's serialization is asymmetric (reading one property fewer or more than was written). In that case you end up at the wrong stream position, which means you read the wrong datatype.
Another possible cause is multiple different Hazelcast versions on the classpath (please check that), or different versions on different nodes.
The solution involved classloading magic!
.setClassLoader(HazelcastClusterManager.class.getClassLoader())
I ended up rolling my own Hazelcast instance and configuring it the way the Vert.x documentation instructs, with the additional classloader configuration trick:
```
ServiceReference serviceRef = context.getServiceReference(HazelcastOSGiService.class);
log.info("Hazelcast OSGi Service Reference: {}", serviceRef);
hazelcastOsgiService = context.getService(serviceRef);
log.info("Hazelcast OSGi Service: {}", hazelcastOsgiService);
hazelcastOsgiService.getClass().getClassLoader();

Map<String, SemaphoreConfig> semaphores = new HashMap<>();
semaphores.put("__vertx.*", new SemaphoreConfig().setInitialPermits(1));

Config hazelcastConfig = new Config("effectus-instance")
        .setClassLoader(HazelcastClusterManager.class.getClassLoader())
        .setGroupConfig(new GroupConfig("dev").setPassword("effectus"))
        // .setSerializationConfig(new SerializationConfig().addClassDefinition()
        .addMapConfig(new MapConfig()
                .setName("__vertx.subs")
                .setBackupCount(1)
                .setTimeToLiveSeconds(0)
                .setMaxIdleSeconds(0)
                .setEvictionPolicy(EvictionPolicy.NONE)
                .setMaxSizeConfig(new MaxSizeConfig().setSize(0).setMaxSizePolicy(MaxSizeConfig.MaxSizePolicy.PER_NODE))
                .setEvictionPercentage(25)
                .setMergePolicy("com.hazelcast.map.merge.LatestUpdateMapMergePolicy"))
        .setSemaphoreConfigs(semaphores);

hazelcastOSGiInstance = hazelcastOsgiService.newHazelcastInstance(hazelcastConfig);
log.info("New Hazelcast OSGI instance: {}", hazelcastOSGiInstance);
hazelcastOsgiService.getAllHazelcastInstances().stream().forEach(instance -> {
    log.info("Registered Hazelcast OSGI Instance: {}", instance.getName());
});

clusterManager = new HazelcastClusterManager(hazelcastOSGiInstance);
VertxOptions options = new VertxOptions().setClusterManager(clusterManager).setHAGroup("effectus");
Vertx.clusteredVertx(options, res -> {
    if (res.succeeded()) {
        Vertx v = res.result();
        log.info("Vertx is running in cluster mode: {}", v);
        // some more code...
    }
});
```
So the issue was that the Hazelcast instance didn't have access to the classes inside the vertx-hazelcast bundle.
I am sure there is a shorter, cleaner way somewhere; any better suggestions would be great.

Weblogic Exception after deploy: java.rmi.UnexpectedException

I just encountered a similar issue to the one described in the article below:
Question: Article with similar error description
java.rmi.UnmarshalException: cannot unmarshaling return; nested exception is:
java.rmi.UnexpectedException: Failed to parse descriptor file; nested exception is:
java.rmi.server.ExportException: Failed to export class
I found that the issue described is totally unrelated to any Java update and is rather an issue with the WebLogic bean cache: it seems to use old compiled versions of classes when updating a deployment. I was hunting a similar issue in a related question (Question: Interface-Implementation-mismatch).
How can I fix this properly to allow proper automatic deployment (with WLST)?
After some feedback from the Oracle community it now works like this:
1) Shutdown the remote Managed Server
2) Delete directory "domains/#MyDomain#/servers/#MyManagedServer#/cache/EJBCompilerCache"
3) Redeploy EAR/application
In WLST (which one would need in order to automate this) it is quite tricky:
import os
import shutil
servers = cmo.getServers()
domainPath = get('RootDirectory')
for thisServer in servers:
    pathToManagedServer = domainPath + "\\servers\\" + thisServer.getName()
    print ">Found managed server:" + pathToManagedServer
    pathToCacheDir = pathToManagedServer + "\\" + "cache\\EJBCompilerCache"
    if os.path.exists(pathToCacheDir) and os.path.isdir(pathToCacheDir):
        print ">Found a cache directory that will be deleted:" + pathToCacheDir
        # shutil.rmtree(pathToCacheDir)
Note: Be careful when testing this; the path returned in "pathToCacheDir" depends on the MBean context that is currently set. See the samples for the WLST command "cd()". You should first test the path output with "print domainPath" and only later add the "rmtree" Python command! (I commented out the delete command in my sample so that nobody accidentally deletes an entire domain!)
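For completeness, a rough sketch of how the whole cycle (shutdown, cache cleanup, redeploy) could be scripted in WLST online mode; the credentials, URL, server name, and application name below are placeholders, not values from the original setup:
```
# hypothetical end-to-end flow; adjust names, credentials, and paths
connect('weblogic', 'welcome1', 't3://adminhost:7001')
shutdown('MyManagedServer', 'Server', force='true', block='true')
# ... delete the EJBCompilerCache directory as shown above ...
start('MyManagedServer', 'Server')
redeploy('MyApplication')
disconnect()
```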

What are possible causes of Multiple ID-statement Exception in Sesame?

I would like to know the possible causes that can raise the following exception:
org.openrdf.repository.config.RepositoryConfigException: Multiple ID-statements for repository ID test_3
It is raised when I try to query the test_3 repository. Another observation: afterwards, two repositories with the same name are displayed on my web page http://localhost:8080/openrdf-workbench/repositories/NONE/repositories.
Any help is welcome!
EDIT: I'm using Sesame 2.7.7.
EDIT 2: Providing more details about the code which causes the exception:
public void connectToRepository() {
    RepositoryConnection connection;
    RemoteRepositoryManager repositoryManager = new RemoteRepositoryManager("http://localhost:8080/openrdf-sesame/");
    try {
        repositoryManager.initialize();
        SailImplConfig backendConfig = new NativeStoreConfig("spoc,sopc,posc,psoc,ospc,opsc");
        RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
        RepositoryConfig repConfig = new RepositoryConfig(repositoryID, repositoryTypeSpec);
        repositoryManager.addRepositoryConfig(repConfig);
        Repository myRepository = repositoryManager.getRepository(repositoryID);
        myRepository.initialize();
        connection = myRepository.getConnection();
    }
    catch (RepositoryException | RepositoryConfigException e) {
        e.printStackTrace();
    }
}
The exception is caused by the following line in the code:
repositoryManager.addRepositoryConfig(repConfig);
Here are the details:
log4j:WARN No appenders could be found for logger (org.openrdf.rio.DatatypeHandlerRegistry).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Custom NTriples/NQuads Parser
Custom NTriples/NQuads Parser
org.openrdf.repository.config.RepositoryConfigException: Multiple ID-statements for repository ID 10m7_m
at org.openrdf.repository.config.RepositoryConfigUtil.getIDStatement(RepositoryConfigUtil.java:269)
at org.openrdf.repository.config.RepositoryConfigUtil.hasRepositoryConfig(RepositoryConfigUtil.java:91)
at org.openrdf.repository.manager.RemoteRepositoryManager.createRepository(RemoteRepositoryManager.java:174)
at org.openrdf.repository.manager.RepositoryManager.getRepository(RepositoryManager.java:376)
at soctrace.repositories.OLDSesameRepositoryManagement.connectToRepository(OLDSesameRepositoryManagement.java:123)
at soctrace.repositories.OLDSesameRepositoryManagement.queryInRepository(OLDSesameRepositoryManagement.java:150)
at soctrace.views.Main.main(Main.java:692)
[sesame in memory] connection to repository 10m7_m done , 444, ms
Exception in thread "main" java.lang.NullPointerException
at soctrace.repositories.OLDSesameRepositoryManagement.runQuery(OLDSesameRepositoryManagement.java:250)
at soctrace.repositories.OLDSesameRepositoryManagement.queryInRepository(OLDSesameRepositoryManagement.java:155)
at soctrace.views.Main.main(Main.java:692)
Your code creates a new repository (by adding a new repository configuration to the repository manager) every time you execute the connectToRepository() method. Since you use the exact same repository configuration (including the actual ID of the repository) every time, this naturally causes an error the second time you execute it: you are trying to add a repository with an ID that already exists.
You should rewrite your code to make sure that it only tries to create the repository when a repository with that ID doesn't exist yet; if it already exists, it should just use the existing one, as in the sketch below.
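A minimal sketch of that guard, assuming the Sesame 2.7 RepositoryManager API (hasRepositoryConfig appears in the stack trace above, so the check is available):
```
// Only register the configuration if no repository with this ID exists yet.
if (!repositoryManager.hasRepositoryConfig(repositoryID)) {
    repositoryManager.addRepositoryConfig(repConfig);
}
Repository myRepository = repositoryManager.getRepository(repositoryID);
```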