vertx hazelcast class serialization on OSGi karaf

I want to use a Vert.x cluster with Hazelcast on Karaf. When I try to write messages to the event bus (after the cluster is formed) I get the serialization error below. I was thinking about adding a class definition to Hazelcast to tell it where to find the Vert.x server ID class (io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID), but I am not sure how.
On Karaf I had to wrap the vertx-hazelcast jar because it doesn't have a proper manifest file.
<bundle start-level="80">wrap:mvn:io.vertx/vertx-hazelcast/${vertx.version}</bundle>
Here is my error:
com.hazelcast.nio.serialization.HazelcastSerializationException: Problem while reading DataSerializable, namespace: 0, id: 0, class: 'io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID', exception: io.vertx.spi.cluster.hazelcast.impl.
HazelcastServerID
at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:130)[11:com.hazelcast:3.6.3]
at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:47)[11:com.hazelcast:3.6.3]
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:46)[11:com.hazelcast:3.6.3]
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:170)[11:com.hazelcast:3.6.3]
at com.hazelcast.map.impl.DataAwareEntryEvent.getOldValue(DataAwareEntryEvent.java:82)[11:com.hazelcast:3.6.3]
at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.entryRemoved(HazelcastAsyncMultiMap.java:147)[64:wrap_file__C__Users_gadei_development_github_effectus.io_effectus-core_core.test_core.test.exam_target_paxexam_unpack_
5bf4439f-01ff-4db4-bd3d-e3b6a1542596_system_io_vertx_vertx-hazelcast_3.4.0-SNAPSHOT_vertx-hazelcast-3.4.0-SNAPSHOT.jar:0.0.0]
at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatch0(MultiMapEventsDispatcher.java:111)[11:com.hazelcast:3.6.3]
at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatchEntryEventData(MultiMapEventsDispatcher.java:84)[11:com.hazelcast:3.6.3]
at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatchEvent(MultiMapEventsDispatcher.java:55)[11:com.hazelcast:3.6.3]
at com.hazelcast.multimap.impl.MultiMapService.dispatchEvent(MultiMapService.java:371)[11:com.hazelcast:3.6.3]
at com.hazelcast.multimap.impl.MultiMapService.dispatchEvent(MultiMapService.java:65)[11:com.hazelcast:3.6.3]
at com.hazelcast.spi.impl.eventservice.impl.LocalEventDispatcher.run(LocalEventDispatcher.java:56)[11:com.hazelcast:3.6.3]
at com.hazelcast.util.executor.StripedExecutor$Worker.process(StripedExecutor.java:187)[11:com.hazelcast:3.6.3]
at com.hazelcast.util.executor.StripedExecutor$Worker.run(StripedExecutor.java:171)[11:com.hazelcast:3.6.3]
Caused by: java.lang.ClassNotFoundException: io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)[:1.8.0_101]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)[:1.8.0_101]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)[:1.8.0_101]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)[:1.8.0_101]
at com.hazelcast.nio.ClassLoaderUtil.tryLoadClass(ClassLoaderUtil.java:137)[11:com.hazelcast:3.6.3]
at com.hazelcast.nio.ClassLoaderUtil.loadClass(ClassLoaderUtil.java:115)[11:com.hazelcast:3.6.3]
at com.hazelcast.nio.ClassLoaderUtil.newInstance(ClassLoaderUtil.java:68)[11:com.hazelcast:3.6.3]
at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:119)[11:com.hazelcast:3.6.3]
... 13 more
Any suggestions appreciated.
Thanks.

This normally happens if an object has asymmetric serialization (reading one property fewer or more than was written). In that case you end up at the wrong stream position, which means you read the wrong data type.
Another possible reason is multiple different Hazelcast versions on the classpath (please check that), or different versions on different nodes.
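For illustration only (this class is hypothetical, not something from Vert.x or Hazelcast), an asymmetric DataSerializable looks like this: one field written during serialization is never read back, so every later read starts from the wrong offset.
```
import com.hazelcast.nio.ObjectDataInput;
import com.hazelcast.nio.ObjectDataOutput;
import com.hazelcast.nio.serialization.DataSerializable;

import java.io.IOException;

// Hypothetical example of asymmetric serialization: writeData emits two
// fields but readData consumes only one, leaving the stream mispositioned
// for whatever Hazelcast deserializes next.
public class BrokenServerId implements DataSerializable {
    private String host;
    private int port;

    @Override
    public void writeData(ObjectDataOutput out) throws IOException {
        out.writeUTF(host);
        out.writeInt(port);   // two fields written...
    }

    @Override
    public void readData(ObjectDataInput in) throws IOException {
        host = in.readUTF();  // ...but only one read back
    }
}
```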

The solution involved classloading magic!
.setClassLoader(HazelcastClusterManager.class.getClassLoader())
I ended up rolling my own Hazelcast instance and configuring it the way the Vert.x documentation instructs, plus the additional classloader configuration trick.
```
ServiceReference<HazelcastOSGiService> serviceRef = context.getServiceReference(HazelcastOSGiService.class);
log.info("Hazelcast OSGi Service Reference: {}", serviceRef);
hazelcastOsgiService = context.getService(serviceRef);
log.info("Hazelcast OSGi Service: {}", hazelcastOsgiService);
hazelcastOsgiService.getClass().getClassLoader();

Map<String, SemaphoreConfig> semaphores = new HashMap<>();
semaphores.put("__vertx.*", new SemaphoreConfig().setInitialPermits(1));

Config hazelcastConfig = new Config("effectus-instance")
        // the classloading trick: make Hazelcast load classes from the vertx-hazelcast bundle
        .setClassLoader(HazelcastClusterManager.class.getClassLoader())
        .setGroupConfig(new GroupConfig("dev").setPassword("effectus"))
        // .setSerializationConfig(new SerializationConfig().addClassDefinition()
        .addMapConfig(new MapConfig()
                .setName("__vertx.subs")
                .setBackupCount(1)
                .setTimeToLiveSeconds(0)
                .setMaxIdleSeconds(0)
                .setEvictionPolicy(EvictionPolicy.NONE)
                .setMaxSizeConfig(new MaxSizeConfig().setSize(0).setMaxSizePolicy(MaxSizeConfig.MaxSizePolicy.PER_NODE))
                .setEvictionPercentage(25)
                .setMergePolicy("com.hazelcast.map.merge.LatestUpdateMapMergePolicy"))
        .setSemaphoreConfigs(semaphores);

hazelcastOSGiInstance = hazelcastOsgiService.newHazelcastInstance(hazelcastConfig);
log.info("New Hazelcast OSGI instance: {}", hazelcastOSGiInstance);
hazelcastOsgiService.getAllHazelcastInstances().stream().forEach(instance -> {
    log.info("Registered Hazelcast OSGI Instance: {}", instance.getName());
});

clusterManager = new HazelcastClusterManager(hazelcastOSGiInstance);
VertxOptions options = new VertxOptions().setClusterManager(clusterManager).setHAGroup("effectus");
Vertx.clusteredVertx(options, res -> {
    if (res.succeeded()) {
        Vertx v = res.result();
        log.info("Vertx is running in cluster mode: {}", v);
        // some more code...
    }
});
```
So the issue is that the Hazelcast instance doesn't have access to the classes inside the vertx-hazelcast bundle.
I am sure there is a shorter, cleaner way somewhere.
Any better suggestions would be great.
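One possibly shorter variant (an untested sketch using the same classes as above; it relies on the HazelcastClusterManager(Config) constructor and may still hit the OSGi instance-creation issues that HazelcastOSGiService is meant to solve) would be to keep only the classloader trick and let vertx-hazelcast create the instance itself:
```
// Untested sketch: hand a Config carrying the bundle's classloader straight
// to the cluster manager; the setClassLoader(...) line is what lets Hazelcast
// find io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID.
Config hazelcastConfig = new Config("effectus-instance")
        .setClassLoader(HazelcastClusterManager.class.getClassLoader())
        .setGroupConfig(new GroupConfig("dev").setPassword("effectus"));

HazelcastClusterManager clusterManager = new HazelcastClusterManager(hazelcastConfig);

VertxOptions options = new VertxOptions()
        .setClusterManager(clusterManager)
        .setHAGroup("effectus");

Vertx.clusteredVertx(options, res -> {
    if (res.succeeded()) {
        log.info("Vertx is running in cluster mode: {}", res.result());
    }
});
```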

Related

In Apache Hadoop 2.x, does the current UserGroupInformation lose credentials in a child thread?

Here is the simple experiment I used to demonstrate the problem, written in Scala, with Hadoop 2.7.7 as the only dependency:
import org.apache.hadoop.security.UserGroupInformation
import scala.concurrent.duration.{Duration, MINUTES}
import scala.concurrent.{Await, ExecutionContext, Future}

object UGITest {
  val ugi = UserGroupInformation.getCurrentUser
  val credential = ugi.getCredentials

  val ff = Future {
    val _ugi = UserGroupInformation.getCurrentUser
    val _credential = _ugi.getCredentials
    require(ugi == _ugi, s"UGI is lost, before: $ugi, now ${_ugi}")
    require(credential == _credential, s"credential is lost, before: $credential, now ${_credential}")
  }(ExecutionContext.global)

  Await.result(ff, Duration.apply(1, MINUTES))
}
The first requirement, ugi == _ugi, passed successfully, indicating that the closure of the Future was indeed launched in a child thread.
The second requirement, credential == _credential, failed with the following information:
java.lang.IllegalArgumentException: requirement failed: credential is lost, before: org.apache.hadoop.security.Credentials#cb6e68f, now org.apache.hadoop.security.Credentials#6b746674
at scala.Predef$.require(Predef.scala:281)
at ...
It appears that the same UserGroupInformation is used, but all credentials are lost. What's the purpose of this design?
The above experiment was executed on a single computer, not in any cluster. I haven't tested it with any Hadoop authentication framework (e.g. Kerberos) enabled, but I think the result will be more or less the same.

How to disable Ignite baseline auto-adjust?

Ignite 2.8.0; I enabled persistence with code like this:
IgniteConfiguration igniteCfg = new IgniteConfiguration();
//igniteCfg.setClientMode(true);
DataStorageConfiguration dataStorageCfg = new DataStorageConfiguration();
dataStorageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
igniteCfg.setDataStorageConfiguration(dataStorageCfg);
Ignite ignite = Ignition.start(igniteCfg);
Then I get an exception like the one below:
Caused by: class org.apache.ignite.spi.IgniteSpiException: Joining persistence node to in-memory cluster couldn't be allowed due to baseline auto-adjust is enabled and timeout equal to 0
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1997)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1116)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:427)
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2099)
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
... 15 more
Can anyone help me?
Thanks.
After starting the first node, invoke ignite.cluster().baselineAutoAdjustEnabled(false).
You can also use bin/control.(sh|bat) --baseline auto_adjust [disable|enable] [timeout <timeoutMillis>] [--yes]
Please note that we don't recommend running mixed persistent/non-persistent clusters since they receive very little testing. If you must, make sure that data regions have the same persistenceEnabled settings on all nodes.
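For example, a minimal sketch of the first node (the class name FirstNode is made up; the Ignite configuration is the one from the question):
```
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class FirstNode {
    public static void main(String[] args) {
        // Persistent data region, as in the question.
        IgniteConfiguration igniteCfg = new IgniteConfiguration();
        DataStorageConfiguration dataStorageCfg = new DataStorageConfiguration();
        dataStorageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        igniteCfg.setDataStorageConfiguration(dataStorageCfg);

        Ignite ignite = Ignition.start(igniteCfg);

        // Disable baseline auto-adjust so that joining persistent/in-memory
        // nodes is no longer rejected with "timeout equal to 0".
        ignite.cluster().baselineAutoAdjustEnabled(false);
    }
}
```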

Using JNDI Connection pool as Datanucleus PersistenceManagerFactory

I am developing a web application using DataNucleus as the DAO layer (mainly for historical reasons). It runs inside a Payara server (a GlassFish 4 fork).
It works fine, but now I'd like to use a JNDI db connection pool to obtain the PersistenceManagerFactory for DataNucleus.
From the documentation, it seems that the following code would suffice:
pmf = JDOHelper.getPersistenceManagerFactory( "jdbc/HxWmDb", context );
but this way I get an error when starting the application (DbSession is the class that implements the DAO layer, and the error line is exactly the one above):
Caused by: java.lang.ClassCastException
at com.sun.corba.ee.impl.javax.rmi.PortableRemoteObject.narrow(PortableRemoteObject.java:262)
at javax.rmi.PortableRemoteObject.narrow(PortableRemoteObject.java:150)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:1791)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:1755)
at ejb.DbSession.<init>(DbSession.java:119)
...
Caused by: java.lang.ClassCastException: com.sun.gjc.spi.jdbc40.DataSource40 cannot be cast to org.omg.CORBA.Object
at com.sun.corba.ee.impl.javax.rmi.PortableRemoteObject.narrow(PortableRemoteObject.java:245)
Any suggestion?
Little update, as requested by DN1:
As a first approach, I tried exactly what is described in the link:
Properties properties = new Properties();
properties.setProperty("datanucleus.ConnectionFactoryName","jdbc/HxWmDb");
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(properties);
And the error is, as already said, that a connection URL is still required:
Caused by: org.datanucleus.exceptions.NucleusException: You haven't specified persistence property 'datanucleus.ConnectionURL'

Cannot register new CQ query on Apache Geode

I'm stuck trying to register a CQ query with the ClientCache. I keep getting this exception:
CqService is not available.
java.lang.IllegalStateException: CqService is not available.
at org.apache.geode.cache.query.internal.cq.MissingCqService.start(MissingCqService.java:171)
at org.apache.geode.cache.query.internal.DefaultQueryService.getCqService(DefaultQueryService.java:777)
at org.apache.geode.cache.query.internal.DefaultQueryService.newCq(DefaultQueryService.java:486)
The client cache is created as follows:
def client(): ClientCache = new ClientCacheFactory()
.setPdxPersistent(true)
.setPdxSerializer(new ReflectionBasedAutoSerializer(false, "org.geode.importer.domain.FooBar"))
.addPoolLocator(ConfigProvider.locator.host, ConfigProvider.locator.port)
.setPoolSubscriptionEnabled(true)
.create()
and the suggested solution does not help. The actual library version is:
"org.apache.geode" % "geode-core" % "1.0.0-incubating"
You will have to pull in geode-cq as a dependency. In Gradle:
compile 'org.apache.geode:geode-cq:1.0.0-incubating'

What are possible causes of Multiple ID-statement Exception in Sesame?

I would like to know what possible causes can raise the following exception:
org.openrdf.repository.config.RepositoryConfigException: Multiple ID-statements for repository ID test_3
It is raised when I try to query the test_3 repository. Another fact is that afterwards there are two repositories with the same name displayed on my web page http://localhost:8080/openrdf-workbench/repositories/NONE/repositories
Any help is welcome!
EDIT
I'm using Sesame 2.7.7
EDIT 2
Providing more details about the code which causes the exception:
public void connectToRepository() {
    RepositoryConnection connection;
    RemoteRepositoryManager repositoryManager = new RemoteRepositoryManager("http://localhost:8080/openrdf-sesame/");
    try {
        repositoryManager.initialize();
        SailImplConfig backendConfig = new NativeStoreConfig("spoc,sopc,posc,psoc,ospc,opsc");
        RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
        RepositoryConfig repConfig = new RepositoryConfig(repositoryID, repositoryTypeSpec);
        repositoryManager.addRepositoryConfig(repConfig);
        Repository myRepository = repositoryManager.getRepository(repositoryID);
        myRepository.initialize();
        connection = myRepository.getConnection();
    }
    catch (RepositoryException | RepositoryConfigException e) {
        e.printStackTrace();
    }
}
The exception is caused by the following line in the code:
repositoryManager.addRepositoryConfig(repConfig);
Here are the details:
log4j:WARN No appenders could be found for logger (org.openrdf.rio.DatatypeHandlerRegistry).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Custom NTriples/NQuads Parser
Custom NTriples/NQuads Parser
org.openrdf.repository.config.RepositoryConfigException: Multiple ID-statements for repository ID 10m7_m
at org.openrdf.repository.config.RepositoryConfigUtil.getIDStatement(RepositoryConfigUtil.java:269)
at org.openrdf.repository.config.RepositoryConfigUtil.hasRepositoryConfig(RepositoryConfigUtil.java:91)
at org.openrdf.repository.manager.RemoteRepositoryManager.createRepository(RemoteRepositoryManager.java:174)
at org.openrdf.repository.manager.RepositoryManager.getRepository(RepositoryManager.java:376)
at soctrace.repositories.OLDSesameRepositoryManagement.connectToRepository(OLDSesameRepositoryManagement.java:123)
at soctrace.repositories.OLDSesameRepositoryManagement.queryInRepository(OLDSesameRepositoryManagement.java:150)
at soctrace.views.Main.main(Main.java:692)
[sesame in memory] connection to repository 10m7_m done , 444, ms
Exception in thread "main" java.lang.NullPointerException
at soctrace.repositories.OLDSesameRepositoryManagement.runQuery(OLDSesameRepositoryManagement.java:250)
at soctrace.repositories.OLDSesameRepositoryManagement.queryInRepository(OLDSesameRepositoryManagement.java:155)
at soctrace.views.Main.main(Main.java:692)
Your code creates a new repository (by adding a new repository configuration to the repository manager) every time you execute the connectToRepository() method. Since you use the exact same repository configuration (including the actual id of the repository) every time, naturally this causes an error the second time you execute it: you are trying to add a repository with an id that already exists.
You should rewrite your code to make sure that it only tries to create the repository when a repository with that id doesn't exist yet - if it already exists, it should just use the existing one.
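A minimal sketch of that check inside connectToRepository() (it assumes RepositoryManager.hasRepositoryConfig(String), which should be available in Sesame 2.7; the rest is the code from the question):
```
// Only register the configuration if no repository with this ID exists yet;
// otherwise just reuse the existing one.
if (!repositoryManager.hasRepositoryConfig(repositoryID)) {
    SailImplConfig backendConfig = new NativeStoreConfig("spoc,sopc,posc,psoc,ospc,opsc");
    RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
    repositoryManager.addRepositoryConfig(new RepositoryConfig(repositoryID, repositoryTypeSpec));
}
Repository myRepository = repositoryManager.getRepository(repositoryID);
myRepository.initialize();
connection = myRepository.getConnection();
```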