Context: an existing Java SE application written in Swing that fires up an embedded server (so far it has been Jetty). We need to switch to Java EE, so we thought about bringing in an enterprise container (candidates: Payara, TomEE, WildFly).
The server should be able to run a web app based on dynamic input: a web context with its own web.xml and specific web resources that are not known at build time, so an uber JAR is not really an option for us.
We have successfully started a web app on Payara using code like the following (this is not working code, but it shows the steps we took for Payara):
GlassFish glassfish;
WebContainer container;
GlassFishRuntime glassfishRuntime = GlassFishRuntime.bootstrap();
glassfish = glassfishRuntime.newGlassFish();
glassfish.start();
// Access the embedded WebContainer
container = glassfish.getService(WebContainer.class);
WebContainerConfig config = new WebContainerConfig();
container.setConfiguration(config);
// Create a context from the dynamic location and remember it
Context context = container.createContext(contextPathLocation);
m_webAppContexts.put(p_contextName, context);
// Open an HTTP listener on the dynamically chosen port
WebListener listener = container.createWebListener("listener-1", HttpListener.class);
listener.setPort(myDynamicPortNumber);
container.addWebListener(listener);
// Register the context under the dynamic context path and add the servlet mapping
container.addContext(context, myDynamicContextPath);
context.addServlet(myDynamicMapping, myServletName);
This is all working and a basic web application starts in Payara when invoked from our Java SE application.
We also have a fragment of web.xml declaring additional servlets that we want to bring into this dynamic deployment if given conditions are satisfied.
What is the best way to override the existing web.xml with fragments from another web.xml? We need pointers to documentation or directions from more experienced Payara users.
This is not possible with Payara or WildFly, as they work very differently from how Jetty works.
However, it is possible with TomEE.
Related
I am using the R2DBC-H2 driver, and my URL is spring.r2dbc.url=r2dbc:h2:mem:///customer
Using this configuration, Spring Boot starts fine; however, I cannot access the h2-console.
Does anybody know why, and how I can fix it?
If I understand the source code of H2ConsoleAutoConfiguration correctly, the H2 console auto-configuration from Spring Boot does not work in a reactive environment.
...
@ConditionalOnWebApplication(type = Type.SERVLET)
...
public class H2ConsoleAutoConfiguration {
You can confirm this yourself by changing the type of your web application to SERVLET (for example, by adding spring-boot-starter-web as a dependency), which will activate the route to the H2 console (if it is enabled in the application properties). The h2-console endpoint will start working again.
As the whole code seems very servlet-specific, I don't know how to properly fix this problem.
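For reference, the servlet starter mentioned above can be pulled in like this with Maven (the version is managed by the Spring Boot parent):
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>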
The H2 Console depends on traditional JDBC drivers and is not compatible with the Spring WebFlux stack.
If you are developing a WebFlux application, you can use H2 as a standalone database and use the H2 Console freely.
Follow the official Getting Started guide to start the H2 database and the H2 Console.
Set your spring.r2dbc.url to the database url you are running in the first step.
NOTE: Do not use a Memory DB here.
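For example, a file-based URL instead of an in-memory one might look like this in application.properties (this is only a sketch assuming the r2dbc-h2 file protocol; the path and credentials are illustrative and should match the database you started in the first step):
spring.r2dbc.url=r2dbc:h2:file:///~/customer
spring.r2dbc.username=sa
spring.r2dbc.password=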
I am porting a suite of related applications from WebLogic to JBoss EAP v6.2.
I have set up a data source connection using the JBoss command line interface and hooked it up to an Oracle database. This data source has the name "mydatasource" and a JNDI name of "java:jboss/datasources/mydatasource", as per JBoss standards. I can test and validate this connection.
However, when I try to port the code and run it, the connection doesn't work. The code that worked in WebLogic was simply:
InitialContext ic = new InitialContext();
DataSource ds = (DataSource) ic.lookup(dataSource);
with a value in dataSource of "mydatasource".
This worked in WebLogic, but in JBoss it throws a NameNotFoundException:
javax.naming.NameNotFoundException: mydatasource-- service jboss.naming.context.java.mydatasource
Clearly there is a difference in how the InitialContext is set up between the two servers.
But this port involves a large number of small applications, all of which connect to the datasource via code like that above. I don't want to rewrite all that code.
Is there a way through configuration (InitialContextFactory, maybe) to define the initial context such that code like that above will work without rewriting, or perhaps is there another way of naming the datasource that JBoss will accept that would allow code like that above to work without rewriting?
Or must we bite the bullet and accept that this code needs a rewrite?
Update: Yes, I know that simply passing "java:jboss/datasources/mydatasource" to the InitialContext lookup solves the problem, but I am looking for a solution via configuration, rather than via coding if there is such a solution.
The way to do this correctly through configuration is to use
java:comp/env/jdbc/myDataSource
then use a resource-ref in web.xml to declare the data source, and use weblogic.xml or jboss-web.xml to map it to the real one.
In the WebLogic admin console, when you define the data source, it can be named jdbc/realDataSource.
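A minimal sketch of that mapping for the JBoss side (the logical name jdbc/myDataSource is illustrative; the jndi-name is the one defined above):
In web.xml:
<resource-ref>
    <res-ref-name>jdbc/myDataSource</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>
In jboss-web.xml:
<jboss-web>
    <resource-ref>
        <res-ref-name>jdbc/myDataSource</res-ref-name>
        <jndi-name>java:jboss/datasources/mydatasource</jndi-name>
    </resource-ref>
</jboss-web>
The code then looks up java:comp/env/jdbc/myDataSource on every server, and only the vendor-specific descriptor changes.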
JNDI path Tomcat vs. JBoss
For WebLogic: http://docs.oracle.com/cd/E13222_01/wls/docs103/jdbc_admin/packagedjdbc.html
I'm troubleshooting a Mapper problem and I'm running into an issue trying to use a Mapper class inside of the Scala/Lift console. Our MetaMappers have their datasource configured through a ConnectionIdentifier that points to a JDBC datasource configured in JNDI. This works great when bootstrapping through Jetty.
When loading the console and running (new bootstrap.liftweb.Boot).boot to initialize, Schemifier.schemify fails because the JNDI configuration is not available.
scala> (new bootstrap.liftweb.Boot).boot
java.lang.NullPointerException: Looking for Connection Identifier ConnectionIdentifier(jdbc/svcHub) but failed to find either a JNDI data source with the name jdbc/svcHub or a lift connection manager with the correct name
at net.liftweb.mapper.DB$$anonfun$7$$anonfun$apply$12.apply(DB.scala:141)
at net.liftweb.mapper.DB$$anonfun$7$$anonfun$apply$12.apply(DB.scala:141)
at net.liftweb.common.EmptyBox.openOr(Box.scala:465)
at net.liftweb.mapper.DB$$anonfun$7.apply(DB.scala:140)
at net.liftweb.mapper.DB$$anonfun$7.apply(DB.scala:140)
at net.liftweb.common.EmptyBox.openOr(Box.scala:465)
at net.liftweb.mapper.DB$.newConnection(DB.scala:134)
at net.liftweb.mapper.DB$.getConnection(DB.scala:230)
at net.liftweb.mapper.DB$.use(DB.scala:581)
at net.liftweb.mapper.Schemifier$.schemify(Sche...
Essentially, I'd like to have full MetaMapper functionality from within the console. My question is: What's the best way to bootstrap a Lift app from the console such that the JNDI-based dependencies can also be fulfilled outside of a JNDI-capable web container?
Under an application server it is likely that the server will provide a JNDI context for you. In a standalone application you must provide a JNDI context yourself. For that you can use a javax.naming.InitialContext.
There is a nice example using Apache's DBCP here: http://commons.apache.org/dbcp/guide/jndi-howto.html. Of course, you will have to adapt the DataSource objects to the implementation you are using.
This will be enough (not very elegant, though) for simple JNDI usage.
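As a rough sketch of that idea in a plain JVM (here using Tomcat's standalone JNDI provider from org.apache.naming and Commons DBCP; the driver class, URL and credentials are illustrative, and both libraries must be on the console's classpath):
import javax.naming.Context;
import javax.naming.InitialContext;
import org.apache.commons.dbcp.BasicDataSource;

public class ConsoleJndiBootstrap {
    public static void bindDataSource() throws Exception {
        // Tell JNDI to use an in-JVM provider instead of the (absent) container one.
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.apache.naming.java.javaURLContextFactory");
        System.setProperty(Context.URL_PKG_PREFIXES, "org.apache.naming");

        // Pooled DataSource that the MetaMappers will find under jdbc/svcHub.
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.h2.Driver");   // illustrative driver
        ds.setUrl("jdbc:h2:~/svcHub");            // illustrative JDBC URL
        ds.setUsername("sa");
        ds.setPassword("");

        // Bind it under the name from the error message.
        InitialContext ic = new InitialContext();
        ic.createSubcontext("jdbc");
        ic.bind("jdbc/svcHub", ds);
    }
}
Calling something like this before (new bootstrap.liftweb.Boot).boot in the console should let the lookup of jdbc/svcHub succeed.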
I am using an embedded version of Jetty 7 to load a web application using Apache MyFaces 1.2 in a JUnit 4 test class, on advice from another thread.
While running the test I get this error:
java.lang.IllegalStateException: No Factories configured for this Application. This happens if the faces-initialization does not work at all - make sure that you properly include all configuration settings necessary for a basic faces application and that all the necessary libs are included. Also check the logging output of your web application and your container for any exceptions!
If you did that and find nothing, the mistake might be due to the fact that you use some special web-containers which do not support registering context-listeners via TLD files and a context listener is not setup in your web.xml.
A typical config looks like this:
<listener>
<listener-class>org.apache.myfaces.webapp.StartupServletContextListener</listener-class>
</listener>
This application works fine with Tomcat, WebLogic and even OC4J!
How can I get this to work with Jetty?
I solved this by adding the myfaces-impl JAR, which contains the .tld file, to the WEB-INF/lib directory.
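For completeness, when you build the embedded Jetty context yourself you can also register the listener programmatically instead of relying on TLD scanning. A sketch against the Jetty 7 embedded API (the port and resource base are illustrative):
import org.apache.myfaces.webapp.StartupServletContextListener;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;

public class EmbeddedMyFacesTestServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);              // illustrative port

        WebAppContext webapp = new WebAppContext();
        webapp.setContextPath("/");
        webapp.setResourceBase("src/test/webapp");     // illustrative path to the web resources
        // Register the MyFaces startup listener explicitly, since it is not picked up from a TLD here.
        webapp.addEventListener(new StartupServletContextListener());

        server.setHandler(webapp);
        server.start();
        server.join();
    }
}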
I get this error:
StandardWrapperValve[Vaadin Servlet]: PWC1406: Servlet.service() for servlet Vaadin Servlet threw exception
java.lang.ClassCastException: com.delhi.entities.Category cannot be cast to com.delhi.entities.Category
when I try to run my web apps on Glassfish v2.
Category is a JPA entity object
the offending code according to the server log is:
for (Category c : categories) {
    mymethod();
}
categories is derived from:
List<Category> categories = q.getResultList();
Any idea what went wrong?
This is a class loader issue. If a class is loaded by different class loaders, its objects cannot be assigned to each other. You have probably passed an object from one WAR into another one. There are several options to resolve this:
Put all your code into a single WAR.
Use some form of remoting between your WARs. Serialization takes care of the class loader problem.
Try putting all your WARs into a single EAR. If that doesn't work, put all code into JARs that are on the EAR's classpath in the MANIFEST.MF.
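To verify that two different class loaders are really involved, a purely diagnostic check like this can help (it deliberately avoids the failing cast):
// If the two printed class loaders differ, the ClassCastException is a
// class loader problem rather than a real type mismatch.
Object fromQuery = ((List<?>) q.getResultList()).get(0);
System.out.println("Category as compiled against: " + Category.class.getClassLoader());
System.out.println("Category coming from JPA:     " + fromQuery.getClass().getClassLoader());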
I once had the same problem, and the environment I had was the following:
I had Glassfish v4
NetBeans with the following projects:
a web page WAR project containing the entities
an EAR project containing that WAR project
The problem was that in the WAR project's settings I had checked [x] Run > Deploy on Save. This caused the WAR project to be deployed every time I hit Save. It sometimes led to PermGen (memory) problems and an inability to deploy the EAR correctly (because, for example, in between undeploying and deploying the EAR, NetBeans was deploying this WAR).
Solution: if you are using NetBeans and an EAR, uncheck Deploy on Save in the project properties.
EDIT:
It seems that this error is connected with:
SEVERE: The web application [/faces] created a ThreadLocal with key of type [org.glassfish.pfl.dynamic.codegen.impl.CurrentClassLoader$1] (value [org.glassfish.pfl.dynamic.codegen.impl.CurrentClassLoader$1#249ea63a]) and a value of type [org.glassfish.web.loader.WebappClassLoader] (value [WebappClassLoader (delegate=true; repositories=WEB-INF/classes/)]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
I had the same problem today. The solution was closing the EntityManagerFactory after use.
This answer helped me:
https://stackoverflow.com/a/13823219/2455506
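In code that boils down to something like this (the persistence unit name is illustrative):
EntityManagerFactory emf = Persistence.createEntityManagerFactory("myUnit"); // illustrative unit name
try {
    EntityManager em = emf.createEntityManager();
    try {
        // ... work with the EntityManager ...
    } finally {
        em.close();
    }
} finally {
    // Closing the factory releases cached metadata and class references,
    // so stale classes are not kept around across redeploys.
    emf.close();
}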
I'm experiencing this problem too with Glassfish v2 and Glassfish v3.
Can I ask you a question: Are you attempting to initialize any persistence object when the application is deployed (through a servlet loaded on startup or a context listener)?
Like bguiz, I've noticed this problem only happens on redeploy. A new deploy to a freshly restarted Glassfish server, never has this problem.
Like FelixM mentioned, I'm convinced this is a class loader issue; however, I don't believe it's an issue with multiple WARs (I only have one deployed to my server). In Glassfish 3, I can see that my WAR is utilizing two Glassfish "engines": one for the web (WAR) and one for JPA. From what I understand, these are different containers, each with its own class loader. I'm guessing Glassfish v2 works in the same manner.
I'm using Spring and (re)initialize some persistence objects on (re)deploy. What I'm thinking, is that while the web engine is reinitializing the war, the jpa engine is still using the old class definitions. Often if I retry the redeploy after this initial failure, it may succeed (sometimes it may take more than one retry but eventually I can get it to succeed without a restart - having better success with Glassfish v3 than v2).
At this point I'm thinking that either these two class loaders are out of sync or there is some sort of race condition on redeploy that allows this operation to sometimes succeed. I've tried to force the class loader by writing code like this:
HashMap<Object, Object> properties = new HashMap<Object, Object>();
properties.put(PersistenceUnitProperties.CLASSLOADER, this.getClass().getClassLoader());
entityManagerFactory = Persistence.createEntityManagerFactory(jpaContext, properties);
but it didn't seem to have any effect.
I'm also wondering if eliminating the initialization at startup could fix the problem, giving the appserver time to resynchronize both engines before using any jpa classes (which is why I asked my follow up question).
My observation is that it only happens when using a hot redeploy or a static redeploy. This only applies, of course, if you get a class cast exception where both the to and from classes are the same.
Workarounds:
Don't use redeploy; undeploy and then deploy instead
Restart app server
Remove static members of the affected classes
Use a remote interface (serialization makes this go away)
IMO the class loader was unable to reload the class, so the old version was reused, resulting in the error.
This article doesn't talk about this error directly, but it is good background info on how the class loader works.