This is my first question on Stack Overflow, so I hope it is not too simple. I've been looking around the Internet for a good solution, but so far I haven't found one.
I am a complete beginner in EJB, JNDI and the Java EE world in general, but over the last months I've been able to do some acceptable things in this environment. Now I am facing a problem at work, and so far my solution is not as good as I would like.
The scenario is this: I have an EAR application running on GlassFish 3.1.2. Within this EAR I have declared stateless EJBs that offer methods through a remote interface.
This is my remote bean, running on a server called server1, for example:
package com.booreg;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import com.booreg.IMyRemoteBean;
@Stateless
@LocalBean
public class MyRemoteBean implements IMyRemoteBean
{
    @Override
    public String helloWorld()
    {
        return "Hi what's up boy";
    }
}
This is the interface for it
package com.booreg;
import javax.ejb.Remote;
@Remote
public interface IMyRemoteBean
{
    public String helloWorld();
}
Then I have a second EAR app that must necessarily run on another server, called server2. This second app uses JSF and managed beans. One managed bean acts as a remote client of MyRemoteBean, like this:
package com.nucleus;
import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;
import com.booreg.IMyRemoteBean;
@ManagedBean
@ViewScoped
public class MyManagedBean
{
    @EJB(name="TheRef") IMyRemoteBean myRemoteBean;
    public String getPhrase() { return myRemoteBean.helloWorld(); }
}
I've got to the point where this works by declaring an ejb-ref inside the WEB-INF/sun-web.xml file of my web project.
<ejb-ref>
<ejb-ref-name>TheRef</ejb-ref-name>
<jndi-name>corbaname:iiop:server1:3700#java:global/booreg/booreg.ejb/MyRemoteBean!com.booreg.IMyRemoteBean</jndi-name>
</ejb-ref>
I understand that with this sun-web.xml file the jndi-name tells the second app where to locate the EJB implementation that lives in the first app. But here I have some questions:
Is it necessary to declare one ejb-ref entry for each EJB interface I have in my project?
How can I avoid hard-coding the server/port (server1:3700 during development) inside sun-web.xml? When this goes into production, will I have to change the server name manually for each reference? That sounds bizarre. Can I use some kind of server-side variable to specify the server/port?
I hope I have explained myself well enough.
Many thanks
Update: finally, thanks to this link, I see that it is possible to reference the EJB server (server1) by creating a jndi.properties file on my classpath. This file should contain lines like these:
java.naming.factory.initial=com.sun.enterprise.naming.SerialInitContextFactory
java.naming.factory.url.pkgs=com.sun.enterprise.naming
java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl
org.omg.CORBA.ORBInitialHost=server1
org.omg.CORBA.ORBInitialPort=3700
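If I understand the link correctly, with that file on the classpath the lookup itself no longer needs any hard-coded host or port. A minimal standalone client would then look something like this (an untested sketch, reusing the portable JNDI name from the sun-web.xml above):
import javax.naming.InitialContext;
import com.booreg.IMyRemoteBean;

public class RemoteClientSketch
{
    public static void main(String[] args) throws Exception
    {
        // Host and port come from jndi.properties, so nothing is hard-coded here
        InitialContext ctx = new InitialContext();
        IMyRemoteBean myRemoteBean = (IMyRemoteBean) ctx.lookup(
                "java:global/booreg/booreg.ejb/MyRemoteBean!com.booreg.IMyRemoteBean");
        System.out.println(myRemoteBean.helloWorld());
    }
}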
But I am still facing problems. When deploying the application, the following exception appears on server1 and the deployment fails.
ADVERTENCIA: IOP00100006: Class com.sun.jersey.server.impl.cdi.CDIExtension is not Serializable
org.omg.CORBA.BAD_PARAM: ADVERTENCIA: IOP00100006: Class com.sun.jersey.server.impl.cdi.CDIExtension is not Serializable vmcid: SUN minor code: 6 completed: Maybe
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.sun.corba.ee.spi.orbutil.logex.corba.CorbaExtension.makeException(CorbaExtension.java:248)
at com.sun.corba.ee.spi.orbutil.logex.corba.CorbaExtension.makeException(CorbaExtension.java:95)
at com.sun.corba.ee.spi.orbutil.logex.WrapperGenerator.handleFullLogging(WrapperGenerator.java:387)
at com.sun.corba.ee.spi.orbutil.logex.WrapperGenerator.access$400(WrapperGenerator.java:107)
at com.sun.corba.ee.spi.orbutil.logex.WrapperGenerator$2.invoke(WrapperGenerator.java:511)
at com.sun.corba.ee.spi.orbutil.proxy.CompositeInvocationHandlerImpl.invoke(CompositeInvocationHandlerImpl.java:99)
at $Proxy117.notSerializable(Unknown Source)
at com.sun.corba.ee.impl.orbutil.ORBUtility.throwNotSerializableForCorba(ORBUtility.java:783)
at com.sun.corba.ee.impl.encoding.CDROutputStream_1_0.write_abstract_interface(CDROutputStream_1_0.java:697)
at com.sun.corba.ee.impl.encoding.CDROutputObject.write_abstract_interface(CDROutputObject.java:545)
at com.sun.corba.ee.impl.javax.rmi.CORBA.Util.writeAbstractObject(Util.java:493)
...
Does anybody have any idea?
@dgisbert
In your last comment you mentioned that one server is public and the other one is an internal server of your enterprise. Calling the application layer directly from a public server is not a best practice: it means you are giving direct access to your critical business layer. I would rather suggest putting a web service layer on top of your EJB calls, so that every call from the public site has to go through web server -> app server. This way you can greatly reduce the risk of attacks.
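For example, a thin JAX-WS facade on the public server could look roughly like this (a sketch; the class and package names are made up, only IMyRemoteBean comes from the question):
package com.booreg.ws;

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;
import com.booreg.IMyRemoteBean;

// Hypothetical facade: the public site talks only to this web service,
// which delegates every call to the internal EJB layer.
@Stateless
@WebService
public class HelloFacade
{
    @EJB
    private IMyRemoteBean myRemoteBean;

    @WebMethod
    public String helloWorld()
    {
        return myRemoteBean.helloWorld();
    }
}

That way the EJB container is never exposed directly to the outside.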
Related
I have an issue with Hangfire, most likely because of my ignorance about some topics.
I have a host/plugins infrastructure, where each plugin is loaded at runtime and registers its interfaces.
public void ConfigureServices(IServiceCollection services, IConfigurationRoot Configuration)
{
    services.AddTransient<IManager, Manager>();
    services.AddTransient<IAnotherManager, AnotherManager>();
    this.AddControllers(services);
}
Some plugins may add jobs using Hangfire, which are also registered at runtime:
public void ScheduleJobs()
{
    RecurringJob.AddOrUpdate<IManager>(n => n.SayHello(), Cron.Monthly);
}
The issue I have is that, while any service registered directly in the host is correctly resolved by Hangfire, the interfaces (e.g. IManager) that are defined in external assemblies aren't found.
I added a custom JobActivator to which I'm passing the IServiceCollection, and I can actually see that those external services are registered (and I can use them anywhere else except from Hangfire), but still, in the JobActivator, when Hangfire tries to resolve the external service, it fails.
public override object ActivateJob(Type type)
{
    // _serviceCollection contains the IManager service
    var _provider = _serviceCollection.BuildServiceProvider();
    // this will throw an Exception => No service for type '[...].IManager' has been registered.
    var implementation = _provider.GetRequiredService(type);
    return implementation;
}
In the same example, if I use the Default JobActivator, then the exception I get is System.MissingMethodException: Cannot create an instance of an interface.
I could enqueue the job using the class instead of the interface, but that's not the point, and anyway if the class has services injected, those will not be resolved either.
What am I missing?
The problem has been solved. The solution is to add a dedicated IoC container for Hangfire; I used Unity. That way dependencies are resolved correctly.
Thanks Matteo for making it clear that HF requires its own IoC container. This link makes the point too:
Hangfire needs to have its own container with dependencies registered independently of the global UnityContainer. The reason for this is twofold: Hangfire's dependencies need to be registered with the PerResolveLifetimeManager lifetime manager. This is so that you don't get concurrency issues between workers that have resolved a dependency to the same instance. For example, with the normal HierarchicalLifetimeManager, two workers needing the same repository dependency may resolve to the same instance and share a common db context. The workers are meant to each have their own db contexts. Secondly, when the OWIN bootstrapper is run, the global UnityContainer may or may not be initialised yet and Hangfire is unable to take in a reference to the container. So giving Hangfire its own managed container is a clear separation of purpose and behaviour in how our dependencies are resolved.
I am trying to use Spring LDAP in one of my Spring Boot projects but I am getting an 'Address already in use' error when running multiple tests.
I have cloned locally the sample project here:
https://spring.io/guides/gs/authenticating-ldap/
...and just added the boilerplate test normally created by Spring Boot to verify that the Application Context loads correctly:
@RunWith(SpringRunner.class)
@SpringBootTest
public class MyApplicationTests {

    @Test
    public void contextLoads() {
    }
}
If run alone, this test passes. As soon as LdapAuthenticationTests and MyApplicationTests are run together, I get the error above for the latter.
After debugging a bit, I've found out that this happens because the system tries to spawn a second instance of the embedded server.
I am sure I am missing something very stupid in the configuration.
How can I fix this problem?
I had a similar problem, and it looks like you have a static port configured (as was the case for me).
According to this article:
Spring Boot starts an embedded LDAP server for each application context. Logically, that means it starts an embedded LDAP server for each test class. Practically, this is not always true since Spring Boot caches and reuses application contexts. However, you should always expect that there is more than one LDAP server running while executing your tests. For this reason, you may not declare a port for your LDAP server. In this way, it will automatically use a free port. Otherwise, your tests will fail with "Address already in use".
Thus it might be a better idea not to define spring.ldap.embedded.port at all.
I ran into the same issue. I solved it with an additional TestExecutionListener, since you can get hold of the InMemoryDirectoryServer bean.
/**
 * @author slemoine
 */
public class LdapExecutionListener implements TestExecutionListener {

    @Override
    public void afterTestClass(TestContext testContext) {
        InMemoryDirectoryServer ldapServer = testContext.getApplicationContext().getBean(InMemoryDirectoryServer.class);
        ldapServer.shutDown(true);
    }
}
And on each @SpringBootTest class (or only once in an abstract superclass):
@RunWith(SpringRunner.class)
@SpringBootTest
@TestExecutionListeners(listeners = LdapExecutionListener.class,
        mergeMode = TestExecutionListeners.MergeMode.MERGE_WITH_DEFAULTS)
public class MyTestClass {
    ...
}
Also, do not forget
mergeMode = TestExecutionListeners.MergeMode.MERGE_WITH_DEFAULTS
to avoid disabling the whole @SpringBootTest auto-configuration.
Okay, I think I found a solution by adding a @DirtiesContext annotation to my test classes:
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
If you are using the Spring embedded LDAP server, try commenting out or removing the port value from the config file as below:
spring:
  ldap:
    embedded:
      base-dn: dc=example,dc=org
      credential:
        username: cn=admin,dc=example,dc=org
        password: admin
      ldif: classpath:test-schema.ldif
      # port: 12345
      validation:
        enabled: false
Try specifying the web environment type and the base configuration class (the one with @SpringBootApplication on it).
@RunWith(SpringRunner.class)
@SpringBootTest(
        classes = MyApplication.class,
        webEnvironment = RANDOM_PORT
)
public class MyApplicationTests {

    @Test
    public void contextLoads() {
    }
}
Do this for all your test classes.
I solved this problem by adding @DirtiesContext to each test class that requires the embedded LDAP server. In my case (and, I suspect, in many others), the embedded LDAP server was starting up for every @SpringBootTest, since I had added all the spring.ldap.embedded.* properties to the general application-test.properties. Therefore, when I ran a bunch of tests, the 'Address already in use' problem kept them from passing.
Steps I followed:
create an additional test profile (with a correspondingly named application properties file, e.g. 'application-ldaptest.properties')
move all the spring.ldap.embedded.* properties (with the fixed port value) to that file
on every @SpringBootTest that does require the embedded server, add the @ActiveProfiles("ldaptest") and @DirtiesContext annotations, as sketched below.
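Putting those steps together, a test class could look like this (a sketch; LdapDependentTests is a made-up name, the profile name matches application-ldaptest.properties):
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
@ActiveProfiles("ldaptest") // picks up application-ldaptest.properties with the spring.ldap.embedded.* settings
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
public class LdapDependentTests {

    @Test
    public void contextLoads() {
    }
}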
Hope that helps.
I have a singleton EJB (the javax.ejb.Singleton version, sigh) which has a CDI observer method on it. When I try to deploy this to GlassFish 3.1, the server fails to deploy the EAR file without any real explanation, simply saying there was an exception during deployment without any more details.
SEVERE: Exception while loading the app
SEVERE: Exception while shutting down application container
....
SEVERE: Exception while shutting down application container : java.lang.NullPointerException
This is the CDI event listener:
public void updateFromGranule(@Observes @CloudMask GranuleAvailableEvent granuleEvent) {
    LOG.info("updating cloud map");
    update(granuleEvent.getGranule(), CloudMask.class);
    fireUpdate();
}
If I change the singleton bean to just be an @ApplicationScoped bean, the app deploys fine. Similarly, if I remove the CDI event observer method, the application deploys fine.
I actually need the class to be an EJB singleton because I want the transactions, thread safety, etc. of EJBs, so just leaving this as an @ApplicationScoped POJO isn't much use to me. The problem doesn't seem to be limited to singleton beans though: I've experimented with changing the annotation to @Stateless and @Stateful and I get the same issue.
It seems to me that this might be a bug in Weld; perhaps Weld and EJB are fighting over how they proxy that method. Presumably EJB needs to add an interceptor class and wrap that method to ensure thread safety, and Weld is trying to do something else to make the event listener work?
Am I misunderstanding something here, and should CDI event handlers simply not be used on EJBs (in which case there should be better error messages from GlassFish), or is this actually just a bug in the CDI or EJB implementation?
I think this is the answer:
CDI observer methods must apparently either be static or declared in the local interface of an EJB, if the EJB declares a local interface. Normally, if you try to declare an observer method that isn't in the local interface, you get an exception from Weld like this:
org.jboss.weld.exceptions.DefinitionException: WELD-000088 Observer method must be static or local business method: [method] public org.stain.ObserverBean.testMethod(EventClass) on public@Singleton class org.stain.ObserverBean
For some reason GlassFish does not report this exception properly when loading my EAR file and simply says 'Exception while loading the app'.
Adding the method to the local interface (or removing the interface declaration on the class) fixes the problem and allows the application to load normally.
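For illustration, the shape that deploys for me looks roughly like this (a sketch; GranuleObserver is a made-up interface name, GranuleAvailableEvent and @CloudMask are the event and qualifier types from my question, and the two types would live in separate files):
import javax.ejb.Local;
import javax.ejb.Singleton;
import javax.enterprise.event.Observes;

// The observer method is also a local business method, so it is declared
// on the bean's local interface and Weld accepts it.
@Local
public interface GranuleObserver {
    void updateFromGranule(GranuleAvailableEvent granuleEvent);
}

@Singleton
public class ObserverBean implements GranuleObserver {

    @Override
    public void updateFromGranule(@Observes @CloudMask GranuleAvailableEvent granuleEvent) {
        // handle the event, e.g. update the cloud map as in the original listener
    }
}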
I noticed the same problem with the latest version of Weld. But if you add the @LocalBean annotation, it will work with @Singleton and @Singleton @Startup.
I get this error:
StandardWrapperValve[Vaadin Servlet]: PWC1406: Servlet.service() for servlet Vaadin Servlet threw exception
java.lang.ClassCastException: com.delhi.entities.Category cannot be cast to com.delhi.entities.Category
when I try to run my webapps on GlassFish v2.
Category is a JPA entity object.
The offending code, according to the server log, is:
for (Category c : categories) {
    mymethod();
}
categories is derived from:
List<Category> categories = q.getResultList();
Any idea what went wrong?
This is a class loader issue. If a class is loaded by different class loaders, its objects cannot be assigned to each other. You have probably passed an object from one WAR into another one. There are several options to resolve this:
Put all your code into a single WAR.
Use some form of remoting between your WARs. Serialization takes care of the class loader problem.
Try putting all your WARs into a single EAR. If that doesn't work, put all the code into JARs that are on the EAR's classpath in the MANIFEST.MF.
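A quick way to confirm the diagnosis is to compare the class loaders on both sides; if they differ, the "Category cannot be cast to Category" message is explained. A small debugging sketch, placed next to the failing loop:
// Debugging sketch: compare the loader of the locally visible class with the
// loader of the instances coming back from the JPA query.
ClassLoader local = com.delhi.entities.Category.class.getClassLoader();
ClassLoader fromQuery = categories.get(0).getClass().getClassLoader();
System.out.println("local loader:  " + local);
System.out.println("entity loader: " + fromQuery);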
I once had the same problem, and my environment was the following:
GlassFish v4
NetBeans with the following projects:
a webpage WAR project containing the entities
an EAR project containing that WAR project
The problem was that in the WAR's project settings I had checked [x] Run > Deploy on save. This caused the WAR project to be deployed every time I hit save. It sometimes led to PermGen (memory) problems and the inability to deploy the EAR correctly (because, e.g., in between undeploying and deploying the EAR, this "crazy" NetBeans was deploying the WAR).
Solution: if you use NetBeans and an EAR, uncheck 'Deploy on save' in the project properties.
EDIT:
It seems that this error is connected with:
SEVERE: The web application [/faces] created a ThreadLocal with key of type [org.glassfish.pfl.dynamic.codegen.impl.CurrentClassLoader$1] (value [org.glassfish.pfl.dynamic.codegen.impl.CurrentClassLoader$1@249ea63a]) and a value of type [org.glassfish.web.loader.WebappClassLoader] (value [WebappClassLoader (delegate=true; repositories=WEB-INF/classes/)]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
I had the same problem today. The solution was closing the EntityManagerFactory after use.
This answer helped me:
https://stackoverflow.com/a/13823219/2455506
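A minimal sketch of what that looks like (the persistence unit name "myPU" is made up here):
// Sketch: create, use and then close the factory instead of keeping it around,
// so it does not hold on to classes from a previous deployment.
EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPU");
try {
    EntityManager em = emf.createEntityManager();
    // ... work with em ...
    em.close();
} finally {
    emf.close();
}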
I'm experiencing this problem too with Glassfish v2 and Glassfish v3.
Can I ask you a question: Are you attempting to initialize any persistence object when the application is deployed (through a servlet loaded on startup or a context listener)?
Like bguiz, I've noticed this problem only happens on redeploy. A new deploy to a freshly restarted GlassFish server never has this problem.
As FelixM mentioned, I'm convinced this is a class loader issue; however, I don't believe it's an issue with multiple WARs (I only have one deployed to my server). In GlassFish 3, I can see that my WAR is using two GlassFish "engines", one for the web (war) and one for the jpa. From what I understand, these are different containers, each with its own class loader. I'm guessing GlassFish v2 works in the same manner.
I'm using Spring and (re)initialize some persistence objects on (re)deploy. What I'm thinking is that while the web engine is reinitializing the WAR, the JPA engine is still using the old class definitions. Often, if I retry the redeploy after this initial failure, it may succeed (sometimes it takes more than one retry, but eventually I can get it to succeed without a restart, with better success on GlassFish v3 than v2).
At this point I'm thinking that either these two class loaders are out of sync or there is some sort of race condition on redeploy that allows this operation to sometimes succeed. I've tried to force the class loader by writing code like this:
HashMap<Object, Object> properties = new HashMap<Object, Object>();
properties.put(PersistenceUnitProperties.CLASSLOADER, this.getClass().getClassLoader());
entityManagerFactory = Persistence.createEntityManagerFactory(jpaContext, properties);
but it didn't seem to have any effect.
I'm also wondering if eliminating the initialization at startup could fix the problem, giving the app server time to resynchronize both engines before using any JPA classes (which is why I asked my follow-up question).
My observation is that it only happens when using a hot redeploy or a static redeploy. This only applies, of course, if you get a class cast exception where both the to and from classes are the same.
Workarounds:
Use undeploy and deploy instead of redeploy
Restart app server
Remove static members of the affected classes
Use a remote interface (serialization makes this go away)
IMO I think the class loader was unable to reload the class and the old version was reused, resulting in the error.
This article doesn't talk about this error directly, but it is good background info on how the class loader works.
I have created an EJB with a remote interface:
@Stateless
public class TestSessionBean implements TestSessionRemote
{
    public void businessMethod()
    {
        System.out.println("***businessMethod");
    }
}
I want to access it from another component (e.g. a servlet) running on the server via:
ic = new InitialContext();
ic.lookup("myEJB");
I am using NetBeans 6.5.1 and GlassFish v2.
How can I do that?
Thanks,
Ido
Actually, EJB 3 uses a default naming convention, which I've not found a way to get around.
The name for your bean would be something like:
TestSessionBean#package.TestSessionBean
To access your remote service you can do something like this:
InitialContext ctx = new InitialContext();
ctx.lookup(interfaceClass.getSimpleName()+"#"+interfaceClass.getName());
where interfaceClass is the class of your remote interface.
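Applied to the bean from the question, the lookup would be something like this (a sketch, assuming TestSessionRemote is annotated with @Remote):
InitialContext ctx = new InitialContext();
TestSessionRemote bean = (TestSessionRemote) ctx.lookup(
        TestSessionRemote.class.getSimpleName() + "#" + TestSessionRemote.class.getName());
bean.businessMethod();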
Do note that you haven't defined a remote interface (or a local one, for that matter) for that servlet; you might not be able to access the EJB from another context.
As for changing the name, I don't think that is actually possible through annotations. Not sure, though.