Using a JNDI connection pool as the DataNucleus PersistenceManagerFactory - GlassFish

I am developing a web application that uses DataNucleus as its DAO layer (mainly for historical reasons). It runs inside Payara Server (a GlassFish 4 fork).
It works fine, but now I'd like to use a JNDI DB connection pool to obtain the PersistenceManagerFactory for DataNucleus.
From the documentation, it seems that the following code should suffice:
pmf = JDOHelper.getPersistenceManagerFactory( "jdbc/HxWmDb", context );
but this way I get an error when starting the application (DbSession is the class that implements the DAO layer, and the failing line is exactly the one above):
Caused by: java.lang.ClassCastException
at com.sun.corba.ee.impl.javax.rmi.PortableRemoteObject.narrow(PortableRemoteObject.java:262)
at javax.rmi.PortableRemoteObject.narrow(PortableRemoteObject.java:150)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:1791)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:1755)
at ejb.DbSession.<init>(DbSession.java:119)
...
Caused by: java.lang.ClassCastException: com.sun.gjc.spi.jdbc40.DataSource40 cannot be cast to org.omg.CORBA.Object
at com.sun.corba.ee.impl.javax.rmi.PortableRemoteObject.narrow(PortableRemoteObject.java:245)
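For reference, here is a minimal sketch of how the lookup is made; context is just the container's default JNDI environment (this is an illustrative sketch, not the exact application code):

import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class PmfLookup {
    PersistenceManagerFactory lookupPmf() throws NamingException {
        // Container-provided JNDI environment (Payara / GlassFish defaults)
        Context context = new InitialContext();
        // This is the call that fails with the ClassCastException shown above
        return JDOHelper.getPersistenceManagerFactory("jdbc/HxWmDb", context);
    }
}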
Any suggestions?
Little update, as requested by DN1:
As a first approach, I tried exactly what is described in the link:
Properties properties = new Properties();
properties.setProperty("datanucleus.ConnectionFactoryName","jdbc/HxWmDb");
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(properties);
And the error, as already mentioned, is that a connection URL is still required:
Caused by: org.datanucleus.exceptions.NucleusException: You haven't specified persistence property 'datanucleus.ConnectionURL'

Related

Upgrade to Java 17 throws java.lang.RuntimeException: Error creating extended parser class: Could not determine whether class has already been loaded

I am using the jtwig library and the code was working fine, but after we upgraded to Java 17 I am getting the runtime exception mentioned below.
Below is the method; it throws the RuntimeException when calling template.render():
String renderDescription(String templatePath, String userId, String caseId) {
    JtwigTemplate template = JtwigTemplate.classpathTemplate(templatePath);
    JtwigModel model = JtwigModel.newModel()
            .with("userId", userId)
            .with("caseId", caseId)
            .with("statusPageUrlTemplate", config.getStatusPageUrlTemplate());
    return template.render(model);
}
java.lang.RuntimeException: Error creating extended parser class: Could not determine whether class 'org.jtwig.parser.parboiled.base.BooleanParser$$parboiled' has already been loaded
at org.parboiled.Parboiled.createParser(Parboiled.java:58)
at org.jtwig.parser.parboiled.ParserContext.instance(ParserContext.java:31)
at org.jtwig.parser.parboiled.ParboiledJtwigParser.parse(ParboiledJtwigParser.java:37)
at org.jtwig.parser.cache.InMemoryConcurrentPersistentTemplateCache.get(InMemoryConcurrentPersistentTemplateCache.java:39)
at org.jtwig.parser.CachedJtwigParser.parse(CachedJtwigParser.java:19)
at org.jtwig.JtwigTemplate.render(JtwigTemplate.java:98)
at org.jtwig.JtwigTemplate.render(JtwigTemplate.java:74)
I was facing a similar issue after upgrading the JVM version, and I found that setting this environment variable helped:
JDK_JAVA_OPTIONS=--add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED
I believe it has to do with the stricter default restrictions on reflective access to JDK internals in newer Java versions, which break libraries that inspect built-in classes via reflection.
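For reference, JDK_JAVA_OPTIONS is only picked up by the java launcher on JDK 9 and later; the same flags can also be passed directly on the command line. A sketch, assuming a plain jar launch (my-app.jar is a placeholder):

java --add-opens=java.base/java.lang=ALL-UNNAMED \
     --add-opens=java.base/java.time=ALL-UNNAMED \
     -jar my-app.jar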

Apache Ignite : Transaction support and cache definition

We are experimenting with Apache Ignite as a read-through and write-through caching layer for distributed applications. The need is to weave a cache layer for the aggregates we depend on. The individual constituent entities that these aggregates comprise are managed entities maintained by an EntityManager.
Two questions:
Does Apache Ignite participate in container-managed transactions out of the box?
To understand the answer to Q1, I ran a small experiment, described below. Any insights into what induces the behaviour below?
Aggregate: Strategy and StrategyParam - a one-to-many mapping.
Individual entities: Strategy and StrategyParam (both managed by JPA/Hibernate).
CacheStore definition based on the EntityManager, e.g. the write method:
@Override
public void write(Cache.Entry<? extends Long, ? extends StrategyAggregate> entry) throws CacheWriterException {
    em.merge(entry.getValue().getStrategy());
    entry.getValue().getStrategyParamList().forEach(strategyParam -> em.merge(strategyParam));
}
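For context, here is a minimal self-contained sketch of the store described above (the @PersistenceContext wiring is an assumption based on the Spring shared-EntityManager proxy visible in the stack trace, not the exact project code):

import javax.cache.Cache;
import javax.cache.integration.CacheWriterException;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class StrategyAggregateCacheStore extends CacheStoreAdapter<Long, StrategyAggregate> {

    // Spring-managed shared EntityManager proxy; merge() needs an active
    // transaction on the calling thread (hence the TransactionRequiredException below)
    @PersistenceContext
    private EntityManager em;

    @Override
    public void write(Cache.Entry<? extends Long, ? extends StrategyAggregate> entry) throws CacheWriterException {
        em.merge(entry.getValue().getStrategy());
        entry.getValue().getStrategyParamList().forEach(strategyParam -> em.merge(strategyParam));
    }

    @Override
    public StrategyAggregate load(Long key) {
        return null; // read-through logic omitted in the original question
    }

    @Override
    public void delete(Object key) {
        // delete logic omitted in the original question
    }
}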
Now, when we start the first node with the above cache definition, the transactional behaviour works correctly: after the method completes I see both the cache and the database updated, and I can read the changes from the cache.
But as soon as a second node joins the cluster, the same API throws an error,
"no entitymanager available ...", followed by a stack trace saying the transaction has been rolled back. Reads from the cache and direct reads via the EntityManager still work fine.
Stack trace:
Caused by: javax.cache.integration.CacheWriterException: javax.persistence.TransactionRequiredException: No EntityManager with actual transaction available for current thread - cannot reliably process 'merge' call
... 79 common frames omitted
Caused by: javax.persistence.TransactionRequiredException: No EntityManager with actual transaction available for current thread - cannot reliably process 'merge' call
at org.springframework.orm.jpa.SharedEntityManagerCreator$SharedEntityManagerInvocationHandler.invoke(SharedEntityManagerCreator.java:285) ~[spring-orm-4.3.25.RELEASE.jar:4.3.25.RELEASE]
at com.sun.proxy.$Proxy102.merge(Unknown Source) ~[na:na]
at StrategyAggregateCacheStore.write(StrategyAggregateCacheStore.java:47) ~[classes/:na]
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:585) ~[ignite-core-2.11.0.jar:2.11.0]
... 78 common frames omitted

Apache Geode debug Unknown pdx type=2140705

I start a gfsh client and connect to Geode. There is a lot of data in myRegion, and to look through it I run:
query --query="select * from /myRegion"
I am getting the response:
Result : false
startCount : 0
endCount : 20
Message : Unknown pdx type=2140705
How does one troubleshoot / debug this problem?
UPDATE: The error in the Geode server log is:
[info 2018/07/04 10:53:07.275 BST IsGeode <Function Execution Processor1> tid=0x48] Exception occurred:
java.lang.IllegalStateException: Unknown pdx type=1318971
at org.apache.geode.internal.InternalDataSerializer.readPdxSerializable(InternalDataSerializer.java:3042)
at org.apache.geode.internal.InternalDataSerializer.basicReadObject(InternalDataSerializer.java:2859)
at org.apache.geode.DataSerializer.readObject(DataSerializer.java:2961)
at org.apache.geode.internal.util.BlobHelper.deserializeBlob(BlobHelper.java:90)
at org.apache.geode.internal.cache.EntryEventImpl.deserialize(EntryEventImpl.java:1911)
at org.apache.geode.internal.cache.EntryEventImpl.deserialize(EntryEventImpl.java:1904)
at org.apache.geode.internal.cache.PreferBytesCachedDeserializable.getDeserializedValue(PreferBytesCachedDeserializable.java:73)
at org.apache.geode.internal.cache.LocalRegion.getDeserialized(LocalRegion.java:1269)
at org.apache.geode.internal.cache.LocalRegion$NonTXEntry.getValue(LocalRegion.java:8771)
at org.apache.geode.internal.cache.EntriesSet$EntriesIterator.moveNext(EntriesSet.java:179)
at org.apache.geode.internal.cache.EntriesSet$EntriesIterator.next(EntriesSet.java:134)
at org.apache.geode.cache.query.internal.CompiledSelect.doNestedIterations(CompiledSelect.java:837)
at org.apache.geode.cache.query.internal.CompiledSelect.doIterationEvaluate(CompiledSelect.java:699)
at org.apache.geode.cache.query.internal.CompiledSelect.evaluate(CompiledSelect.java:423)
at org.apache.geode.cache.query.internal.CompiledSelect.evaluate(CompiledSelect.java:53)
at org.apache.geode.cache.query.internal.DefaultQuery.executeUsingContext(DefaultQuery.java:558)
at org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:385)
at org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:319)
at org.apache.geode.management.internal.cli.functions.DataCommandFunction.select(DataCommandFunction.java:247)
at org.apache.geode.management.internal.cli.functions.DataCommandFunction.select(DataCommandFunction.java:202)
at org.apache.geode.management.internal.cli.functions.DataCommandFunction.execute(DataCommandFunction.java:147)
at org.apache.geode.internal.cache.MemberFunctionStreamingMessage.process(MemberFunctionStreamingMessage.java:185)
at org.apache.geode.distributed.internal.DistributionMessage.scheduleAction(DistributionMessage.java:374)
at org.apache.geode.distributed.internal.DistributionMessage$1.run(DistributionMessage.java:440)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.geode.distributed.internal.DistributionManager.runUntilShutdown(DistributionManager.java:662)
at org.apache.geode.distributed.internal.DistributionManager$9$1.run(DistributionManager.java:1108)
at java.lang.Thread.run(Thread.java:748)
You can tell the immediate cause from the stack trace.
A PDX-serialized stream contains a type ID, which is a reference into a repository of type metadata maintained by the GemFire cluster. In this case, the serialized data of the object contained a type ID that is not in the cluster's metadata repository.
So the question becomes: what serialized that object, and why did it use an invalid type ID?
The only way I've seen this happen before is when a cluster is fully restarted and the PDX metadata goes away, either because it was not persistent or because it was deleted (for example, by clearing out the locator working directory).
GemFire clients cache the mapping between a type and its type ID. This allows them to serialize objects quickly without continually looking up the type ID from the server. Client connections can persist across cluster restarts, and when a client reconnects it does not flush the cached information; it continues to write objects using its cached type IDs.
So the combination of a cluster restart that loses the PDX metadata and a client that is not restarted (e.g. an app server) is the only way I have seen this happen before. Does this match your scenario?
If so, one of the best ways to avoid this is to persist your PDX metadata and never delete it.
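As an illustration of that last point, PDX persistence can be enabled when the server cache is created. A minimal sketch using the Java API (the disk store name is an assumption; leaving out setPdxDiskStore uses the default disk store):

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;

public class PdxPersistentServer {
    public static void main(String[] args) {
        // Persist the PDX type registry so type IDs survive a full cluster restart
        Cache cache = new CacheFactory()
                .setPdxPersistent(true)
                .setPdxDiskStore("pdxDiskStore")
                .create();
    }
}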

org.apache.ignite.IgniteCheckedException: Failed to read class name from file

I have a 3-node Apache Ignite cluster. I created a cache with Integer as the key and a 'Subscriber' POJO as the value. When I connect to the cluster from inside a Java program and access the cache, I get the exception mentioned above. I have the 'peerClassLoading' property set to false, and I have deployed the 'Subscriber' POJO binaries on all the nodes. Please find the complete stack trace below. What am I missing here? Why is it looking for some file inside my IGNITE_HOME when I am starting the client inside my Java program with Ignition.start()?
class org.apache.ignite.IgniteCheckedException: Failed to read class name from file [id=-1219769240, file=/home/benakaraj/Downloads/apache-ignite-fabric-1.5.0.final-bin/work/marshaller/-1219769240.classname]
at org.apache.ignite.internal.MarshallerContextImpl.className(MarshallerContextImpl.java:158)
at org.apache.ignite.internal.MarshallerContextAdapter.getClass(MarshallerContextAdapter.java:174)
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:483)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1443)
at org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:537)
at org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:117)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinary(CacheObjectContext.java:280)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:145)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:132)
at org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1748)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.setResult(GridPartitionedSingleGetFuture.java:598)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.onResult(GridPartitionedSingleGetFuture.java:454)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processNearSingleGetResponse(GridDhtCacheAdapter.java:153)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$1200(GridDhtAtomicCache.java:128)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$11.apply(GridDhtAtomicCache.java:295)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$11.apply(GridDhtAtomicCache.java:293)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:582)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:280)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:204)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(GridCacheIoManager.java:80)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:163)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:821)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1600(GridIoManager.java:103)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:784)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: /home/benakaraj/Downloads/apache-ignite-fabric-1.5.0.final-bin/work/marshaller/-1219769240.classname (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileReader.<init>(FileReader.java:72)
at org.apache.ignite.internal.MarshallerContextImpl.className(MarshallerContextImpl.java:154)
... 26 more
It looks like the cache tries to deserialize the value after retrieving it from the cache, but you don't have the class for it on the node where IgniteCache.get() was called. You can either deploy the class, or use IgniteCache.withKeepBinary() to avoid deserialization: https://apacheignite.readme.io/docs/binary-marshaller#binaryobject-cache-api
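A minimal sketch of the withKeepBinary() approach (the cache name and field name below are assumptions):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class KeepBinaryRead {
    public static void main(String[] args) {
        // Start a node with the default configuration (configure client mode as appropriate)
        try (Ignite ignite = Ignition.start()) {
            // Keep values in binary form, so the Subscriber class is not needed on this node
            IgniteCache<Integer, BinaryObject> cache =
                    ignite.<Integer, Object>cache("subscriberCache").withKeepBinary();

            BinaryObject subscriber = cache.get(1);
            if (subscriber != null) {
                // "msisdn" is an assumed field name on the Subscriber POJO
                String msisdn = subscriber.field("msisdn");
                System.out.println(msisdn);
            }
        }
    }
}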
The issue turned out to be pretty simple. Ignite looks for user-defined POJOs among the classes loaded by the default class loader; if it does not find a class there, it looks it up in the marshalled classes. In my case, my value POJO was inside the test resources, so the default class loader was not loading the class, causing Ignite to look inside the marshalled classes folder (IGNITE_HOME/work/marshaller/).

Caused by: java.lang.ClassCastException: weblogic.jndi.internal.WLEventContextImpl cannot be cast to javax.sql.DataSource

I am facing a ClassCastException while doing a lookup for a DataSource. We recently migrated from WebLogic 11 to WebLogic 12c. Below is the code with which I am looking up the DataSource:
ds = (javax.sql.DataSource) ctx.lookup("my_data_source_name");
This code gives a ClassCastException:
Caused by: java.lang.ClassCastException: weblogic.jndi.internal.WLEventContextImpl cannot be cast to javax.sql.DataSource
We have weblogic.jar in our classpath. I am not sure why it is returning an object of type WLEventContextImpl instead of a DataSource. Can someone suggest something?
I had this problem, and in my case I had created a datasource without associating it with a target. On the last page of the datasource configuration you can see:
Select Targets:
Servers
[ ] AdminServer
After I checked the AdminServer checkbox, I could use the datasource.