I am getting the exception below after deploying the application on WAS 8, using the Axis2 1.7.4 family with the Woden API and implementation 1.0M8. Will someone please help?
java.lang.NoClassDefFoundError: org.apache.woden.resolver.URIResolver
at java.lang.J9VMInternals.verifyImpl(Native Method)
at java.lang.J9VMInternals.verify(J9VMInternals.java:93)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:170)
at org.apache.axis2.deployment.ModuleDeployer.deploy(ModuleDeployer.java:65)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:815)
at org.apache.axis2.deployment.RepositoryListener.loadClassPathModules(RepositoryListener.java:222)
at org.apache.axis2.deployment.RepositoryListener.init2(RepositoryListener.java:71)
at org.apache.axis2.deployment.RepositoryListener.<init>(RepositoryListener.java:64)
at org.apache.axis2.deployment.DeploymentEngine.loadFromClassPath(DeploymentEngine.java:177)
at org.apache.axis2.deployment.FileSystemConfigurator.getAxisConfiguration(FileSystemConfigurator.java:135)
at org.apache.axis2.context.ConfigurationContextFactory.createConfigurationContext(ConfigurationContextFactory.java:64)
at org.apache.axis2.context.ConfigurationContextFactory.createConfigurationContextFromFileSystem(ConfigurationContextFactory.java:210)
at org.apache.axis2.client.ServiceClient.configureServiceClient(ServiceClient.java:151)
at org.apache.axis2.client.ServiceClient.<init>(ServiceClient.java:144)
Later on I am also getting a ClassNotFoundException:
Caused by: java.lang.ClassNotFoundException: org.apache.woden.resolver.URIResolver
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:506)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:650)
... 27 more
If you're going to bring your own web services implementation, you have to run with PARENT_LAST class loading (or, preferably, package your version of the web services implementation in a shared library with an isolated class loader) and disable the built-in web services engine in WebSphere. Based on the exception stack, what appears to be happening is that something in your web services engine is interacting with WebSphere's version and triggering a load for a dependency that WebSphere doesn't package. Setting the environment to use your version will resolve that.
Note that WebSphere already includes Axis2, so unless you are strictly dependent on that specific point release, there's a very good chance that the best solution for you is just to rely on WebSphere's web services provider, rather than bringing your own. That will greatly simplify your configuration, since you won't need to mess with class loading delegation settings or system properties disabling the web services provider.
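If you do end up keeping your own Axis2, a module-level switch for WebSphere's built-in JAX-WS engine is usually set in the WAR's META-INF/MANIFEST.MF; the exact entry below is quoted from memory and should be verified against the IBM documentation for your WAS 8 level:
Manifest-Version: 1.0
DisableIBMJAXWSEngine: true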
I am getting an exception as given below.
I want the Response class to be loaded from javax.ws.rs-api.jar,
but it is getting loaded from j2ee.jar instead and throwing the error below:
java.lang.NoSuchMethodError: javax/ws/rs/core/Response.readEntity(Ljava/lang/Class;)Ljava/lang/Object
Is there any way to skip loading j2ee.jar at WebSphere startup?
Your question is based on a misconception; what you probably really want is to replace the JAX-RS engine provided with WAS with a third-party one. Also, you should never package j2ee.jar with your application, as all the required classes are already loaded by the server.
Check the following posts and links:
JAX-RS Jersey 2.10 support in Websphere 8
Disabling the JAX-RS runtime environment
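For reference on the second link, disabling the built-in JAX-RS runtime comes down to a JVM custom property on the server; the property name below is from memory and should be treated as an assumption to verify against the IBM documentation for your WAS version:
com.ibm.websphere.jaxrs.server.DisableIBMJAXRSEngine=true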
I'm using Neo4j in a Glassfish server through a modified version of Alex Smirnov's Neo4j JCA connector.
My version is available here : https://github.com/Riduidel/neo4j-connector
I'm using this connector with Neo4j 1.8.
As a consequence, when I want to use it, I first install the connector in my Glassfish application server, then use it from the applications that need to connect to the store.
It works fine when using it with fresh stores.
But when using it with stores created by a previous version, I encounter weird bugs.
Typically, today I got the following stack trace:
javax.resource.spi.ResourceAllocationException: Error in allocating a connection. Cause: Failed to transition org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader@3bbd53b1 from NONE to STOPPED
...
...
.../* JCA internal exception stack */
...
...
Caused by: com.sun.appserv.connectors.internal.api.PoolingException: Failed to transition org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader@494b584c from NONE to STOPPED
at com.sun.enterprise.resource.pool.ConnectionPool.createSingleResource(ConnectionPool.java:924)
at com.sun.enterprise.resource.pool.ConnectionPool.createResource(ConnectionPool.java:1185)
at com.sun.enterprise.resource.pool.datastructure.RWLockDataStructure.addResource(RWLockDataStructure.java:98)
... 66 more
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Failed to transition org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader@494b584c from NONE to STOPPED
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:388)
at org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:82)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:116)
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:227)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:79)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:70)
at com.netoprise.neo4j.AbstractNeo4jManagedConnectionFactory.createDatabase(AbstractNeo4jManagedConnectionFactory.java:165)
at com.netoprise.neo4j.AbstractNeo4jManagedConnectionFactory.createDatabase(AbstractNeo4jManagedConnectionFactory.java:127)
at com.netoprise.neo4j.Neo4jManagedConnectionFactory.createManagedConnection(Neo4jManagedConnectionFactory.java:163)
at com.sun.enterprise.resource.allocator.ConnectorAllocator.createResource(ConnectorAllocator.java:160)
at com.sun.enterprise.resource.pool.ConnectionPool.createSingleResource(ConnectionPool.java:907)
... 68 more
Caused by: java.lang.AssertionError
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:265)
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:260)
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:260)
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:260)
at org.neo4j.index.impl.lucene.LuceneDataSource.<init>(LuceneDataSource.java:185)
at org.neo4j.index.lucene.LuceneIndexProvider.load(LuceneIndexProvider.java:72)
at org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader.loadIndexImplementations(InternalAbstractGraphDatabase.java:1171)
at org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader.init(InternalAbstractGraphDatabase.java:1143)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:382)
... 78 more
A quick inspection reveals that this exception is linked to an undeletable "write.lock" file. My write.lock file can't be deleted because, I guess, the migration is not over.
How can I make sure the migration is done before the store is used, without migrating it outside of Glassfish?
Is there a way to have exclusive store migrations in that context? And if so, how?
And is that the solution to my problem?
EDIT 1 Added exception message.
EDIT 2 All this only happens when the loaded graph was previously used with a Neo4j 1.5 connector and is now used with a Neo4j 1.8 one. When the graph is created by the connector, absolutely no error happens.
EDIT 3 Strangely enough, this only happens as long as there is no debugger attached to that code: as soon as I try to debug it, the issue stops appearing. This makes me think there may be a migration cleanup mechanism that removes the write lock once the migration is done, and that this cleanup is not performed when using my Neo4j JCA connector. Is that a valid observation?
I am not too familiar with the JCA connector, but to be sure, I would just write a very small migration Java class that opens the database, lets it migrate, and shuts it down, then try again with the JCA connector.
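A minimal sketch of such a migration class, assuming the store path is passed as the first argument and using the EmbeddedGraphDatabase(String, Map) constructor that, as far as I remember, is still available in Neo4j 1.8; the "allow_store_upgrade" setting is what lets Neo4j upgrade a store created by an older version:
import java.util.Collections;
import org.neo4j.kernel.EmbeddedGraphDatabase;

public class MigrateStore {
    public static void main(String[] args) {
        // args[0] is the path to the existing store directory;
        // "allow_store_upgrade" must be enabled for Neo4j to migrate
        // a store created by an older version (1.5 here)
        EmbeddedGraphDatabase db = new EmbeddedGraphDatabase(
                args[0],
                Collections.singletonMap("allow_store_upgrade", "true"));
        // opening the database triggers the migration; shutting down
        // cleanly afterwards releases the write locks
        db.shutdown();
    }
}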
After further investigation, the cause turned out not to be multiple calls to the EmbeddedGraphDatabase constructor, but rather multiple identical IndexProviders being loaded.
I use neo4j embedded in an open-source JCA connector.
In this connector, the org.neo4j.kernel.Service class is replaced by a custom one which contains a workaround for service loading with JBoss non-shared libraries.
Unfortunately, in our context, this workaround implies loading the index provider twice:
once using the EAR classloader
once using the Glassfish library classloader.
Why ?
Because our Neo4j instance is used both for application data AND for authentication, the Neo4j connector jar is put in ${domain}/lib. As a consequence, due to classloader delegation in the application server, the EAR classloader delegates to the Glassfish library classloader and finds the LuceneIndexProvider that way. Then the Glassfish library classloader is used directly to load the same LuceneIndexProvider class.
This leaves us with two LuceneIndexProvider objects, both trying to migrate the Lucene index. That leads to the AssertionError, as the write.lock file created by the first object should be deleted by the second one, which cannot do so.
I then slightly changed that very specific class to use the JBoss workaround only when the default loading mechanism does not return any class (see commit here). This small change worked like a charm, so I think you can consider this issue fixed.
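For illustration only, the fallback idea looks roughly like this; it is a generic sketch, not the actual connector code, and loadWithJBossVfsWorkaround is a made-up placeholder for the JBoss-specific scanning logic:
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class FallbackServiceLoading {
    static <T> List<T> load(Class<T> serviceType, ClassLoader loader) {
        List<T> found = new ArrayList<T>();
        // standard java.util.ServiceLoader lookup first
        for (T service : ServiceLoader.load(serviceType, loader)) {
            found.add(service);
        }
        if (found.isEmpty()) {
            // only when the default mechanism finds nothing do we fall back
            found.addAll(loadWithJBossVfsWorkaround(serviceType, loader));
        }
        return found;
    }

    static <T> List<T> loadWithJBossVfsWorkaround(Class<T> serviceType, ClassLoader loader) {
        // placeholder for the JBoss non-shared-libraries workaround
        return new ArrayList<T>();
    }
}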
I'm troubleshooting a Mapper problem and I'm running into an issue trying to use a Mapper class inside of the Scala/Lift console. Our MetaMappers have their datasource configured through a ConnectionIdentifier that points to a JDBC datasource configured in JNDI. This works great when bootstrapping through Jetty.
When loading the console and running (new bootstrap.liftweb.Boot).boot to initialize, Schemifier.schemify fails because the JNDI configuration is not available.
scala> (new bootstrap.liftweb.Boot).boot
java.lang.NullPointerException: Looking for Connection Identifier ConnectionIdentifier(jdbc/svcHub) but failed to find either a JNDI data source with the name jdbc/svcHub or a lift connection manager with the correct name
at net.liftweb.mapper.DB$$anonfun$7$$anonfun$apply$12.apply(DB.scala:141)
at net.liftweb.mapper.DB$$anonfun$7$$anonfun$apply$12.apply(DB.scala:141)
at net.liftweb.common.EmptyBox.openOr(Box.scala:465)
at net.liftweb.mapper.DB$$anonfun$7.apply(DB.scala:140)
at net.liftweb.mapper.DB$$anonfun$7.apply(DB.scala:140)
at net.liftweb.common.EmptyBox.openOr(Box.scala:465)
at net.liftweb.mapper.DB$.newConnection(DB.scala:134)
at net.liftweb.mapper.DB$.getConnection(DB.scala:230)
at net.liftweb.mapper.DB$.use(DB.scala:581)
at net.liftweb.mapper.Schemifier$.schemify(Sche...
Essentially, I'd like to have full MetaMapper functionality from within the console. My question is: What's the best way to bootstrap a Lift app from the console such that the JNDI-based dependencies can also be fulfilled outside of a JNDI-capable web container?
Under an application server it's likely that the server will provide a JNDI context for you. In a standalone application you must provide a JNDI context yourself. For that you can use a javax.naming.InitialContext.
There is a nice example using Apache's DBCP here: http://commons.apache.org/dbcp/guide/jndi-howto.html. Of course, you will have to adapt the DataSource objects to the implementation you are using.
This will be enough (not very elegant, though) for simple JNDI usage.
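Following the pattern from that DBCP how-to, a minimal sketch could look like the code below (Java for illustration; the same calls can be issued from the Scala console). The filesystem JNDI provider, the driver class, the URL and the credentials are all assumptions to adjust for your environment, and the provider directory must exist:
import javax.naming.Context;
import javax.naming.InitialContext;
import org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS;
import org.apache.commons.dbcp.datasources.SharedPoolDataSource;

public class StandaloneJndiSetup {
    public static void bind() throws Exception {
        // filesystem JNDI provider used in the DBCP how-to
        // (needs fscontext.jar and providerutil.jar on the classpath)
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.fscontext.RefFSContextFactory");
        System.setProperty(Context.PROVIDER_URL, "file:///tmp/jndi");
        InitialContext ctx = new InitialContext();

        // placeholder driver and URL -- replace with your database
        DriverAdapterCPDS cpds = new DriverAdapterCPDS();
        cpds.setDriver("org.postgresql.Driver");
        cpds.setUrl("jdbc:postgresql://localhost/svchub");
        cpds.setUser("user");
        cpds.setPassword("secret");
        ctx.rebind("jdbc/svcHubCPDS", cpds);

        SharedPoolDataSource ds = new SharedPoolDataSource();
        ds.setDataSourceName("jdbc/svcHubCPDS");
        ds.setMaxActive(10);
        // bind under the name the ConnectionIdentifier(jdbc/svcHub) looks up
        ctx.rebind("jdbc/svcHub", ds);
    }
}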
I'm trying to start the Spring embedded LDAP server using:
But I keep getting this exception:
2010-06-10 14:33:35,559 ERROR main ApacheDSContainer start - Server startup failed
java.lang.NullPointerException
at org.apache.directory.server.core.schema.DefaultSchemaService.initialize(DefaultSchemaService.java:382)
at org.apache.directory.server.core.DefaultDirectoryService.initialize(DefaultDirectoryService.java:1425)
at org.apache.directory.server.core.DefaultDirectoryService.startup(DefaultDirectoryService.java:907)
at org.springframework.security.ldap.server.ApacheDSContainer.start(ApacheDSContainer.java:160)
at org.springframework.security.ldap.server.ApacheDSContainer.afterPropertiesSet(ApacheDSContainer.java:113)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1469)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1409)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:563)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:872)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:423)
at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3764)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4212)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:760)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:740)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:544)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:626)
at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:553)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:488)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:120)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1022)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:736)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
at org.apache.catalina.core.StandardService.start(StandardService.java:448)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:700)
at org.apache.catalina.startup.Catalina.start(Catalina.java:552)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:295)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:433)
I'm using Spring 3.0.2 and added the following jars for LDAP:
spring-security-ldap-3.0.2.RELEASE.jar
spring-ldap-1.3.0.RELEASE-all.jar
apacheds-all-1.5.6.jar
shared-ldap-0.9.15.jar
slf4j-api-1.5.6.jar
slf4j-simple-1.5.6.jar
Help please....
@Zorkus: I'm not sure exactly what kind of problem you came across with Apache Directory, or what its root cause is, but if all you need is a working embedded Java LDAP server for integration testing with Spring Security, then you might want to consider some alternatives.
I recently started to investigate alternatives, because I couldn't achieve what I wanted with Apache Directory despite a lot of time and effort invested. (I basically wanted to replicate the schema and the user database of an Active Directory instance into an embedded server.)
What I found is that the UnboundID LDAP SDK is a nice replacement. Integrating with it requires a bit more effort than a one-liner in your Spring context (like <security:ldap-server/>), but not much more. Starting up an LDAP server requires just a few lines of code:
InMemoryDirectoryServerConfig config =
new InMemoryDirectoryServerConfig("dc=example, dc=com");
// schema config only necessary if the standard
// schema provided by the library doesn't suit your needs
config.setSchema(Schema.getSchema("your-custom-schema.schema"));
// listener config only necessary if you want to make sure that the
// server listens on port 33389, otherwise a free random port will
// be picked at runtime - which might be even better for tests btw.
config.setListenerConfigs(
new InMemoryListenerConfig("myListener", null, 33389, null, null, null));
InMemoryDirectoryServer ds = new InMemoryDirectoryServer(config);
ds.startListening();
// import your test data from ldif files
ds.importFromLDIF(true,"content.ldif");
The only dependency you will need for this to work is:
<dependency>
<groupId>com.unboundid</groupId>
<artifactId>unboundid-ldapsdk</artifactId>
<version>2.3.1</version>
</dependency>
It would be pretty easy to wrap the above code in a class that you can instantiate and configure from your Spring context.
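For example, a minimal sketch of such a wrapper could look like this (the class name, the property defaults and the init/destroy wiring are my own illustration, not part of the SDK):
import com.unboundid.ldap.listener.InMemoryDirectoryServer;
import com.unboundid.ldap.listener.InMemoryDirectoryServerConfig;
import com.unboundid.ldap.listener.InMemoryListenerConfig;

// use as a Spring bean with init-method="start" destroy-method="stop"
public class EmbeddedLdapServer {
    private String baseDn = "dc=example, dc=com";
    private int port = 33389;
    private String ldifFile = "content.ldif";
    private InMemoryDirectoryServer server;

    public void start() throws Exception {
        InMemoryDirectoryServerConfig config = new InMemoryDirectoryServerConfig(baseDn);
        config.setListenerConfigs(
                new InMemoryListenerConfig("myListener", null, port, null, null, null));
        server = new InMemoryDirectoryServer(config);
        server.startListening();
        // import the test data once the listener is up
        server.importFromLDIF(true, ldifFile);
    }

    public void stop() {
        if (server != null) {
            server.shutDown(true); // stop listening and close connections
        }
    }

    public void setBaseDn(String baseDn) { this.baseDn = baseDn; }
    public void setPort(int port) { this.port = port; }
    public void setLdifFile(String ldifFile) { this.ldifFile = ldifFile; }
}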
For documentation and code examples on the UnboundID LDAP SDK see: https://www.unboundid.com/products/ldap-sdk/docs/
(I'm not affiliated with UnboundID in any way.)
Check whether the authorization state used by the LDAP client has access to the schema.
I get this error:
StandardWrapperValve[Vaadin Servlet]: PWC1406: Servlet.service() for servlet Vaadin Servlet threw exception
java.lang.ClassCastException: com.delhi.entities.Category cannot be cast to com.delhi.entities.Category
when I try to run my webapps on Glassfish v2.
Category is a JPA entity object.
The offending code, according to the server log, is:
for (Category c : categories) {
mymethod();
}
categories is derived from:
List<Category> categories = q.getResultList();
Any idea what went wrong?
This is a class loader issue. If a class is loaded by different class loaders, its objects cannot be assigned to each other. You have probably passed an object from one WAR into another one. There are several options to resolve this:
Put all your code into a single WAR.
Use some form of remoting between your WARs. Serialization takes care of the class loader problem.
Try putting all your WARs into a single EAR. If that doesn't work, put all the code into JARs that are on the EAR's classpath via the MANIFEST.MF (see the sketch below).
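A minimal sketch of that last option, with made-up names: assuming a shared-entities.jar placed at the root of the EAR, each WAR's META-INF/MANIFEST.MF would reference it with the standard Class-Path attribute:
Manifest-Version: 1.0
Class-Path: shared-entities.jar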
I once had the same problem, and the environment I had was the following:
I had Glassfish v4
NetBeans with the following projects:
a webpage WAR project containing the entities
and an EAR project containing that webpage WAR project
The problem was that in the WAR's project settings I had checked [x] Run > Deploy on save. This caused the WAR project to be deployed every time I hit save. It sometimes led to PermGen (memory) problems and to the inability to deploy the EAR correctly (because, e.g., in between undeploying and deploying the EAR, this "crazy" NetBeans was deploying the WAR).
Solution: if using NetBeans && an EAR, uncheck Deploy on save in the project properties.
EDIT:
it seems that this error is connected with
SEVERE: The web application [/faces] created a ThreadLocal with key of type [org.glassfish.pfl.dynamic.codegen.impl.CurrentClassLoader$1] (value [org.glassfish.pfl.dynamic.codegen.impl.CurrentClassLoader$1@249ea63a]) and a value of type [org.glassfish.web.loader.WebappClassLoader] (value [WebappClassLoader (delegate=true; repositories=WEB-INF/classes/)]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
I had the same problem today. The solution was closing the EntityManagerFactory after use.
This answer helped me:
https://stackoverflow.com/a/13823219/2455506
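A minimal sketch of that pattern, assuming a persistence unit named "myPU" (the unit name and the DAO class are placeholders):
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class CategoryDao {
    public void withEntityManager() {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPU");
        try {
            EntityManager em = emf.createEntityManager();
            try {
                // ... run queries, e.g. em.createQuery("select c from Category c")
            } finally {
                em.close();
            }
        } finally {
            // closing the factory releases the cached metadata and class
            // references, so stale classes are not held across redeploys
            emf.close();
        }
    }
}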
I'm experiencing this problem too with Glassfish v2 and Glassfish v3.
Can I ask you a question: Are you attempting to initialize any persistence object when the application is deployed (through a servlet loaded on startup or a context listener)?
Like bguiz, I've noticed this problem only happens on redeploy. A fresh deploy to a freshly restarted Glassfish server never has this problem.
Like FelixM mentioned, I'm convinced this is a class loader issue; however, I don't believe it's an issue with multiple WARs (I only have one deployed to my server). In Glassfish 3, I can see that my WAR is utilizing two Glassfish "engines": one for the web (war) and one for the jpa. From what I understand, these are different containers, each with its own classloader. I'm guessing Glassfish v2 works in the same manner.
I'm using Spring and (re)initialize some persistence objects on (re)deploy. What I'm thinking is that while the web engine is reinitializing the WAR, the jpa engine is still using the old class definitions. Often, if I retry the redeploy after this initial failure, it may succeed (sometimes it takes more than one retry, but eventually I can get it to succeed without a restart; I'm having better success with Glassfish v3 than v2).
At this point I'm thinking that either these two classloaders are out of sync or there is some sort of race condition on redeploy that allows this operation to sometimes succeed. I've tried to force the classloader, writing code like this:
HashMap<Object, Object> properties = new HashMap<Object, Object>();
properties.put(PersistenceUnitProperties.CLASSLOADER, this.getClass().getClassLoader());
entityManagerFactory = Persistence.createEntityManagerFactory(jpaContext, properties);
but it didn't seem to have any effect.
I'm also wondering if eliminating the initialization at startup could fix the problem, giving the appserver time to resynchronize both engines before using any jpa classes (which is why I asked my follow up question).
My observation is that it only happens when using a hot redeploy or a static redeploy. This only applies, of course, if you get a class cast exception where both the to and from classes are the same.
Workarounds:
Use undeploy and deploy instead of redeploy
Restart app server
Remove static members of the affected classes
Use a remote interface (serialization makes this go away)
IMO the class loader was unable to reload the class, so the old version was reused, resulting in the error.
This article doesn't talk about this error directly, but it is good background info on how the class loader works.