I would like to know the possible causes of the following exception:
org.openrdf.repository.config.RepositoryConfigException: Multiple ID-statements for repository ID test_3
It is raised when I try to query the test_3 repository. Another fact: afterwards there are two repositories with the same name displayed on my web page at http://localhost:8080/openrdf-workbench/repositories/NONE/repositories
Any help is welcome!
EDIT
I'm using Sesame 2.7.7
EDIT 2
Providing more details about the code which causes the exception:
public void connectToRepository() {
    RepositoryConnection connection;
    RemoteRepositoryManager repositoryManager = new RemoteRepositoryManager("http://localhost:8080/openrdf-sesame/");
    try {
        repositoryManager.initialize();
        SailImplConfig backendConfig = new NativeStoreConfig("spoc,sopc,posc,psoc,ospc,opsc");
        RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
        RepositoryConfig repConfig = new RepositoryConfig(repositoryID, repositoryTypeSpec);
        repositoryManager.addRepositoryConfig(repConfig);
        Repository myRepository = repositoryManager.getRepository(repositoryID);
        myRepository.initialize();
        connection = myRepository.getConnection();
    }
    catch (RepositoryException | RepositoryConfigException e) {
        e.printStackTrace();
    }
}
The exception is caused by the following line in the code:
repositoryManager.addRepositoryConfig(repConfig);
Here are the details:
log4j:WARN No appenders could be found for logger (org.openrdf.rio.DatatypeHandlerRegistry).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Custom NTriples/NQuads Parser
Custom NTriples/NQuads Parser
org.openrdf.repository.config.RepositoryConfigException: Multiple ID-statements for repository ID 10m7_m
at org.openrdf.repository.config.RepositoryConfigUtil.getIDStatement(RepositoryConfigUtil.java:269)
at org.openrdf.repository.config.RepositoryConfigUtil.hasRepositoryConfig(RepositoryConfigUtil.java:91)
at org.openrdf.repository.manager.RemoteRepositoryManager.createRepository(RemoteRepositoryManager.java:174)
at org.openrdf.repository.manager.RepositoryManager.getRepository(RepositoryManager.java:376)
at soctrace.repositories.OLDSesameRepositoryManagement.connectToRepository(OLDSesameRepositoryManagement.java:123)
at soctrace.repositories.OLDSesameRepositoryManagement.queryInRepository(OLDSesameRepositoryManagement.java:150)
at soctrace.views.Main.main(Main.java:692)
[sesame in memory] connection to repository 10m7_m done , 444, ms
Exception in thread "main" java.lang.NullPointerException
at soctrace.repositories.OLDSesameRepositoryManagement.runQuery(OLDSesameRepositoryManagement.java:250)
at soctrace.repositories.OLDSesameRepositoryManagement.queryInRepository(OLDSesameRepositoryManagement.java:155)
at soctrace.views.Main.main(Main.java:692)
Your code creates a new repository (by adding a new repository configuration to the repository manager) every time you execute the connectToRepository() method. Since you use the exact same repository configuration (including the actual id of the repository) every time, naturally this causes an error the second time you execute it: you are trying to add a repository with an id that already exists.
You should rewrite your code to make sure that it only tries to create the repository when a repository with that id doesn't exist yet - if it already exists, it should just use the existing one.
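For example (a minimal sketch against the Sesame 2.7 API; hasRepositoryConfig is the existence check I'd expect to use here):

```java
// Only create the repository if no configuration with this ID exists yet.
if (!repositoryManager.hasRepositoryConfig(repositoryID)) {
    SailImplConfig backendConfig = new NativeStoreConfig("spoc,sopc,posc,psoc,ospc,opsc");
    RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
    repositoryManager.addRepositoryConfig(new RepositoryConfig(repositoryID, repositoryTypeSpec));
}
// Either way, look up the (now existing) repository and connect to it.
Repository myRepository = repositoryManager.getRepository(repositoryID);
myRepository.initialize();
connection = myRepository.getConnection();
```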
We can turn on all of the SQL-related logging with the following settings in Spring:
spring.jpa.properties.hibernate.show_sql=true
spring.jpa.properties.hibernate.use_sql_comments=true
spring.jpa.properties.hibernate.format_sql=true
logging.level.org.hibernate.type=trace
If we have a standalone Hibernate/Spring Data call like
myEntityRepository.save(myEntity);
OR
entityManager.persist(myEntity);
then it is easy to debug what happened just by reading the generated SQL from the log.
But, how would you debug when there isn't any explicit ORM action like here:
@Transactional
void doHundredOfTask(Long id) {
    MyEntity myEntity = myEntityRepository.findById(id);
    // here comes a ton of actions on the entity, like setting fields
    // and setting/adding to collections:
    // myEntity.setField1();
    // myEntity.setField2();
    // ...
    // myEntity.setField_N();
    // myEntity.getSomeList().get(0).setSomeField();
    // no ORM action
}
At the end we don't explicitly save anything, but after the transaction Hibernate will flush the changes, hence a massive amount of SQL will appear in the log. If you have a ton of actions on the entity and on its associations, it is extremely hard to debug why a given SQL statement was triggered.
Is there a way to assign the generated SQL to the triggering code in the log?
edit: Right now all I can do is split the code into smaller chunks or comment out parts of it. But this process is slow.
p6spy can print a stack trace for each executed SQL statement. Here is the configuration setting to enable this: stacktrace=true.
How to configure p6spy for a Maven project:
Add the p6spy dependency:
<dependency>
    <groupId>p6spy</groupId>
    <artifactId>p6spy</artifactId>
    <version>3.9.1</version>
</dependency>
Wrap the JDBC connection with p6spy:
spring.datasource.url=jdbc:p6spy:mysql://localhost:3306/xxx
spring.datasource.driver-class-name=com.p6spy.engine.spy.P6SpyDriver
Add a spy.properties config at src/main/resources/spy.properties:
stacktrace=true
appender=com.p6spy.engine.spy.appender.Slf4JLogger
logMessageFormat=com.p6spy.engine.spy.appender.MultiLineFormat
You can remove the properties below:
spring.jpa.properties.hibernate.show_sql=true
spring.jpa.properties.hibernate.use_sql_comments=true
spring.jpa.properties.hibernate.format_sql=true
With this configuration, p6spy will output the SQL and the stack trace for each statement. E.g.:
select x0_.id as id1_7_ from X x0_
15:10:16.166 default [main] INFO c.p.e.spy.appender.Slf4JLogger[logException]-39 -
java.lang.Exception: null
at com.p6spy.engine.common.P6LogQuery.doLog(P6LogQuery.java:126)
...
at org.hibernate.loader.Loader.getResultSet(Loader.java:2341)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:2094)
...
at com.springapp.Test.test(Test.java:36)
...
I want to use a Vert.x cluster with Hazelcast on Karaf. When I try to write messages to the bus (after the cluster is formed) I am getting the serialization error below. I was thinking about adding a class definition to Hazelcast to tell it where to find the Vert.x server ID class (io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID), but I am not sure how.
On Karaf I had to wrap the vertx-hazelcast jar because it doesn't have a proper manifest file.
<bundle start-level="80">wrap:mvn:io.vertx/vertx-hazelcast/${vertx.version}</bundle>
Here is my error:
com.hazelcast.nio.serialization.HazelcastSerializationException: Problem while reading DataSerializable, namespace: 0, id: 0, class: 'io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID', exception: io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID
at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:130)[11:com.hazelcast:3.6.3]
at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:47)[11:com.hazelcast:3.6.3]
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:46)[11:com.hazelcast:3.6.3]
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:170)[11:com.hazelcast:3.6.3]
at com.hazelcast.map.impl.DataAwareEntryEvent.getOldValue(DataAwareEntryEvent.java:82)[11:com.hazelcast:3.6.3]
at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.entryRemoved(HazelcastAsyncMultiMap.java:147)[64:wrap_file__C__Users_gadei_development_github_effectus.io_effectus-core_core.test_core.test.exam_target_paxexam_unpack_5bf4439f-01ff-4db4-bd3d-e3b6a1542596_system_io_vertx_vertx-hazelcast_3.4.0-SNAPSHOT_vertx-hazelcast-3.4.0-SNAPSHOT.jar:0.0.0]
at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatch0(MultiMapEventsDispatcher.java:111)[11:com.hazelcast:3.6.3]
at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatchEntryEventData(MultiMapEventsDispatcher.java:84)[11:com.hazelcast:3.6.3]
at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatchEvent(MultiMapEventsDispatcher.java:55)[11:com.hazelcast:3.6.3]
at com.hazelcast.multimap.impl.MultiMapService.dispatchEvent(MultiMapService.java:371)[11:com.hazelcast:3.6.3]
at com.hazelcast.multimap.impl.MultiMapService.dispatchEvent(MultiMapService.java:65)[11:com.hazelcast:3.6.3]
at com.hazelcast.spi.impl.eventservice.impl.LocalEventDispatcher.run(LocalEventDispatcher.java:56)[11:com.hazelcast:3.6.3]
at com.hazelcast.util.executor.StripedExecutor$Worker.process(StripedExecutor.java:187)[11:com.hazelcast:3.6.3]
at com.hazelcast.util.executor.StripedExecutor$Worker.run(StripedExecutor.java:171)[11:com.hazelcast:3.6.3]
Caused by: java.lang.ClassNotFoundException: io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)[:1.8.0_101]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)[:1.8.0_101]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)[:1.8.0_101]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)[:1.8.0_101]
at com.hazelcast.nio.ClassLoaderUtil.tryLoadClass(ClassLoaderUtil.java:137)[11:com.hazelcast:3.6.3]
at com.hazelcast.nio.ClassLoaderUtil.loadClass(ClassLoaderUtil.java:115)[11:com.hazelcast:3.6.3]
at com.hazelcast.nio.ClassLoaderUtil.newInstance(ClassLoaderUtil.java:68)[11:com.hazelcast:3.6.3]
at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:119)[11:com.hazelcast:3.6.3]
... 13 more
Any suggestions appreciated. Thanks.
This normally happens if an object's serialization is asymmetric (reading one property fewer or more than was written). In that case you end up at the wrong stream position, which means you read the wrong datatype.
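For illustration, this is the kind of read/write mismatch meant here (a hypothetical DataSerializable class, not one from the question):

```java
import java.io.IOException;
import com.hazelcast.nio.ObjectDataInput;
import com.hazelcast.nio.ObjectDataOutput;
import com.hazelcast.nio.serialization.DataSerializable;

class Point implements DataSerializable {
    private int x;
    private int y;

    @Override
    public void writeData(ObjectDataOutput out) throws IOException {
        out.writeInt(x);
        out.writeInt(y); // two ints are written...
    }

    @Override
    public void readData(ObjectDataInput in) throws IOException {
        x = in.readInt(); // ...but only one is read back: everything that
                          // follows in the stream is now read from the wrong
                          // position, and deserialization blows up as above.
    }
}
```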
Another possible reason is multiple different Hazelcast versions in the classpath (please check that) or different versions on different nodes.
The solution involved classloading magic!
.setClassLoader(HazelcastClusterManager.class.getClassLoader())
I ended up rolling my own Hazelcast instance and configuring it the way the Vert.x documentation instructs, with the additional classloader configuration trick.
```
ServiceReference<HazelcastOSGiService> serviceRef = context.getServiceReference(HazelcastOSGiService.class);
log.info("Hazelcast OSGi Service Reference: {}", serviceRef);
hazelcastOsgiService = context.getService(serviceRef);
log.info("Hazelcast OSGi Service: {}", hazelcastOsgiService);
hazelcastOsgiService.getClass().getClassLoader();

Map<String, SemaphoreConfig> semaphores = new HashMap<>();
semaphores.put("__vertx.*", new SemaphoreConfig().setInitialPermits(1));

Config hazelcastConfig = new Config("effectus-instance")
        .setClassLoader(HazelcastClusterManager.class.getClassLoader())
        .setGroupConfig(new GroupConfig("dev").setPassword("effectus"))
        // .setSerializationConfig(new SerializationConfig().addClassDefinition()
        .addMapConfig(new MapConfig()
                .setName("__vertx.subs")
                .setBackupCount(1)
                .setTimeToLiveSeconds(0)
                .setMaxIdleSeconds(0)
                .setEvictionPolicy(EvictionPolicy.NONE)
                .setMaxSizeConfig(new MaxSizeConfig().setSize(0).setMaxSizePolicy(MaxSizeConfig.MaxSizePolicy.PER_NODE))
                .setEvictionPercentage(25)
                .setMergePolicy("com.hazelcast.map.merge.LatestUpdateMapMergePolicy"))
        .setSemaphoreConfigs(semaphores);

hazelcastOSGiInstance = hazelcastOsgiService.newHazelcastInstance(hazelcastConfig);
log.info("New Hazelcast OSGI instance: {}", hazelcastOSGiInstance);
hazelcastOsgiService.getAllHazelcastInstances().stream().forEach(instance -> {
    log.info("Registered Hazelcast OSGI Instance: {}", instance.getName());
});

clusterManager = new HazelcastClusterManager(hazelcastOSGiInstance);
VertxOptions options = new VertxOptions().setClusterManager(clusterManager).setHAGroup("effectus");
Vertx.clusteredVertx(options, res -> {
    if (res.succeeded()) {
        Vertx v = res.result();
        log.info("Vertx is running in cluster mode: {}", v);
        // some more code...
    }
});
```
So the issue is that the Hazelcast instance doesn't have access to the classes inside the vertx-hazelcast bundle.
I am sure there is a shorter, cleaner way somewhere; any better suggestions would be great.
I'm stuck trying to register a CQ (continuous query) with a ClientCache. I keep getting this exception:
CqService is not available.
java.lang.IllegalStateException: CqService is not available.
at org.apache.geode.cache.query.internal.cq.MissingCqService.start(MissingCqService.java:171)
at org.apache.geode.cache.query.internal.DefaultQueryService.getCqService(DefaultQueryService.java:777)
at org.apache.geode.cache.query.internal.DefaultQueryService.newCq(DefaultQueryService.java:486)
The client cache is created as follows:
def client(): ClientCache = new ClientCacheFactory()
  .setPdxPersistent(true)
  .setPdxSerializer(new ReflectionBasedAutoSerializer(false, "org.geode.importer.domain.FooBar"))
  .addPoolLocator(ConfigProvider.locator.host, ConfigProvider.locator.port)
  .setPoolSubscriptionEnabled(true)
  .create()
and the suggested solution does not help. The actual library version is:
"org.apache.geode" % "geode-core" % "1.0.0-incubating"
You will have to pull in geode-cq as a dependency. In Gradle:
compile 'org.apache.geode:geode-cq:1.0.0-incubating'
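Since the question lists its dependencies in sbt style, the equivalent there would presumably be:

```scala
libraryDependencies += "org.apache.geode" % "geode-cq" % "1.0.0-incubating"
```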
One way is to directly call the HTable constructor; another is to call the getTable method on an HConnection. The second option requires the HConnection to be "unmanaged", which is not very good for me because my process will have many threads accessing HBase. I don't want to reinvent the wheel managing HConnections on my own.
Thanks for your help.
[Updates]:
We are stuck with 0.98.6, so ConnectionFactory is not available.
I found the JIRA issue below, which suggests creating an "unmanaged" connection and using a single ExecutorService to create HTables. Why can't we simply use the getTable method of the unmanaged connection to get an HTable? Is that because HTable is not thread safe?
https://issues.apache.org/jira/browse/HBASE-7463
I'm stuck with old versions (<0.94.11) in which you can still use HTablePool, but since it has been deprecated by HBASE-6580, I think requests from HTables to the RS are now automatically pooled by providing an ExecutorService:
ExecutorService executor = Executors.newFixedThreadPool(10);
Connection connection = ConnectionFactory.createConnection(conf, executor);
Table table = connection.getTable(TableName.valueOf("mytable"));
try {
    table.get(...);
    ...
} finally {
    table.close();
    connection.close();
}
I've been unable to find any good examples/docs about it, so please note this is untested code which may not work as expected.
For more information you can take a look at the ConnectionFactory documentation and the JIRA issue:
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html
https://issues.apache.org/jira/browse/HBASE-6580
Update: since you're using 0.98.6 and ConnectionFactory is not available, you can use HConnectionManager instead:
// You can also provide an ExecutorService if you want to override the default one.
// HConnection is thread safe.
HConnection connection = HConnectionManager.createConnection(config);
HTableInterface table = connection.getTable("table1");
try {
    // Use the table as needed, for a single operation and a single thread
} finally {
    table.close();
    connection.close();
}
HTable is not thread safe, so you must make sure you always get a new instance (it's a lightweight operation) with HTableInterface table = connection.getTable("table1") and close it afterwards with table.close().
The flow would be:
1. Start your process
2. Initialize your HConnection
3. Each thread (see the sketch below):
3.1 Gets a table from your HConnection
3.2 Writes/reads from the table
3.3 Closes the table
4. Close your HConnection when your process ends
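A minimal sketch of that flow against the 0.98 API (the table name "table1" and the row key are placeholders; untested, in the same spirit as the code above):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseFlowSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        Configuration config = HBaseConfiguration.create();
        // 2. One shared, thread-safe connection for the whole process
        final HConnection connection = HConnectionManager.createConnection(config);
        Runnable worker = new Runnable() {
            @Override
            public void run() {
                try {
                    // 3.1 Each thread gets its own lightweight table handle
                    HTableInterface table = connection.getTable("table1");
                    try {
                        // 3.2 Read/write from this thread only
                        table.get(new Get(Bytes.toBytes("some-row")));
                    } finally {
                        // 3.3 Close the table, not the shared connection
                        table.close();
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        };
        Thread t = new Thread(worker);
        t.start();
        t.join();
        // 4. Close the connection when the process ends
        connection.close();
    }
}
```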
HConnectionManager: http://archive.cloudera.com/cdh5/cdh/5/hbase/apidocs/org/apache/hadoop/hbase/client/HConnectionManager.html#createConnection(org.apache.hadoop.conf.Configuration)
HTable: http://archive.cloudera.com/cdh5/cdh/5/hbase/apidocs/org/apache/hadoop/hbase/client/HTable.html
So I have an exercise given by my lecturer to build a registration system.
My job is to test the program. My friends gave me the source code, but I can't seem to get it running, although one of my friends can open it without any problems.
Here is the error message:
java.io.InvalidClassException: javax.swing.JComponent; local class incompatible: stream classdesc serialVersionUID = -3424753864000836906, local class serialVersionUID = 3742318830738515599
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:621)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1623)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1623)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1623)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at java.beans.Beans.instantiate(Beans.java:199)
at java.beans.Beans.instantiate(Beans.java:80)
at gui.MainWindow.initComponents(MainWindow.java:208)
at gui.MainWindow.<init>(MainWindow.java:34)
at srs.Driver.main(Driver.java:17)
Exception in thread "main" java.lang.IllegalArgumentException: Component must be non-null
at javax.swing.GroupLayout$ComponentSpring.<init>(GroupLayout.java:2953)
at javax.swing.GroupLayout$ComponentSpring.<init>(GroupLayout.java:2933)
at javax.swing.GroupLayout$Group.addComponent(GroupLayout.java:1524)
at javax.swing.GroupLayout$ParallelGroup.addComponent(GroupLayout.java:2484)
at javax.swing.GroupLayout$ParallelGroup.addComponent(GroupLayout.java:2454)
at javax.swing.GroupLayout$Group.addComponent(GroupLayout.java:1505)
at javax.swing.GroupLayout$ParallelGroup.addComponent(GroupLayout.java:2476)
at gui.MainWindow.initComponents(MainWindow.java:1680)
at gui.MainWindow.<init>(MainWindow.java:34)
at srs.Driver.main(Driver.java:17)
Java Result: 1
In one of the packages I have a class called "MainWindow.java" and a file called "MainWindow_creditsField2.ser". This package is for GUI purposes.
I am assuming the error has something to do with the .ser file. When I asked my friend what that file is, he did not know and said that it's automatically generated.
When I clicked on the last three errors:
Driver points me to the line MainWindow mainWindow = new MainWindow();
MainWindow points me to the line initComponents();
I think that is all the leads I can give you.
The problem is with serialization: the serialVersionUID stored in the stream does not match the serialVersionUID of the local class. Your trace shows -3424753864000836906 in the stream versus 3742318830738515599 locally for javax.swing.JComponent, which means the .ser file was written with a different Swing/JDK version than the one you are running. Here is the link where it is described nicely why it occurs. Go through it thoroughly to understand how that field is used, so that the serialized version and the class the JVM wants to create from the serialized object are the same.
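To see the mechanism, here is a hypothetical serializable class that pins its serialVersionUID, which keeps old serialized instances readable after recompiling as long as the fields stay compatible. (This only illustrates the field the error message is complaining about; it won't change javax.swing.JComponent itself, which ships with the JDK, so for your .ser file the practical fix is to regenerate it, or to run with the same Java version your friend used.)

```java
import java.io.Serializable;

public class RegistrationRecord implements Serializable {
    // Pinning this value tells the JVM that newer builds of the class
    // are still compatible with instances serialized by older builds.
    // If it is omitted, the serialization runtime derives one from the
    // class's structure, so a class compiled under a different JDK can
    // end up with a different value and fail with InvalidClassException.
    private static final long serialVersionUID = 1L;

    private String studentName;
    private int credits;
}
```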