Glassfish, Jackrabbit and JAAS

I'm running Jackrabbit 2.6.4 in Glassfish 4. I have deployed Jackrabbit as a connector resource using the provided rar.
I have got it up and running so that I can call the Jackrabbit repository from inside stateless EJBs and can create nodes etc. I am now trying to replace the default LoginModule mechanism that is provided out of the box with my own custom LoginModule.
So far I have:
Created a custom Realm and LoginModule that returns a user's principals (currently String values, e.g. admin, read, write) and deployed this to the domain/lib directory (a minimal sketch follows this list)
Configured my web.xml and sun-web.xml files with the roles to group mappings and enabled basic authentication. This is all working as expected and I can enforce roles on my EJBs.
Got Jackrabbit to use my custom login module instead of its own (I removed the login module configuration from repository.xml and changed the security app name to match my realm name)
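For reference, here is a minimal sketch of a plain JAAS LoginModule commit phase that adds String-backed principals. The class name and principal values are illustrative, not the actual realm code; a GlassFish custom realm would typically pair something like this with a Realm subclass deployed to domain/lib.

import java.security.Principal;
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

// Illustrative only: names and the principal values ("admin", "read", "write") are assumptions.
public class SimpleRolesLoginModule implements LoginModule {

    private Subject subject;
    private boolean succeeded;

    @Override
    public void initialize(Subject subject, CallbackHandler handler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
        this.subject = subject;
    }

    @Override
    public boolean login() throws LoginException {
        // Credential validation omitted for brevity.
        succeeded = true;
        return true;
    }

    @Override
    public boolean commit() throws LoginException {
        if (!succeeded) {
            return false;
        }
        // Plain String-backed principals; no Jackrabbit classes are required here.
        subject.getPrincipals().add(new NamedPrincipal("admin"));
        subject.getPrincipals().add(new NamedPrincipal("read"));
        subject.getPrincipals().add(new NamedPrincipal("write"));
        return true;
    }

    @Override
    public boolean abort() throws LoginException {
        return true;
    }

    @Override
    public boolean logout() throws LoginException {
        return true;
    }

    private static final class NamedPrincipal implements Principal {
        private final String name;
        NamedPrincipal(String name) { this.name = name; }
        @Override
        public String getName() { return name; }
    }
}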
I am now running into the following problems:
Jackrabbit does not find the existing Subject created by the application container when I log in. This appears to be a problem with the way Jackrabbit looks up the Subject:
AccessControlContext acc = AccessController.getContext();
subject = Subject.getSubject(acc);
This returns null in Glassfish. Instead it appears you need to use:
Subject subject = (Subject) PolicyContext.getContext("javax.security.auth.Subject.container");
I worked around this issue by getting the Subject using the above code and then logging in to the repository inside a Subject.doAs block, e.g.
Subject.doAs(subject, new PrivilegedAction<String>() {
    @Override
    public String run() {
        Session session = null;
        try {
            session = repository.login();
        } catch (RepositoryException e) {
            log.error("Failed", e);
        } finally {
            if (session != null) {
                session.logout();
            }
        }
        return null;
    }
});
This now works, but the next problem is that the Jackrabbit DefaultAccessManager expects the Subject to contain Jackrabbit-typed principals, e.g. org.apache.jackrabbit.core.security.SystemPrincipal, which I cannot return from my custom login module as it does not have access to the Jackrabbit classes.
My first attempt to work around this was to create my own AccessManager, but Jackrabbit can't instantiate it, as it is in my WAR and is not available to the Jackrabbit code inside the connector resource.
My next attempt was to programmatically add the principal to the Subject inside my EJB before passing it to Jackrabbit. This worked, but then I discovered that running Subject.doAs inside an EJB in Glassfish causes a number of issues and does not appear to be supported. There are also background threads inside Jackrabbit that need a Subject containing the Jackrabbit-typed principals.
I am now completely stumped on how to get a custom JAAS Glassfish login module to work with Jackrabbit inside Glassfish and am wondering if anyone out there has figured this out.
In the meantime I am considering giving up on Jackrabbit security, handling it all in my application layer, and just using the default login module under the covers to log into Jackrabbit.

I've finally got Glassfish, Jackrabbit and JAAS working together so that I can create a Subject using my custom LoginModule that Jackrabbit then uses to create a session. Below are the steps that I took to resolve the issues described in my original question:
Instead of using the Jackrabbit RAR (Model 2) I now include the Jackrabbit JARs inside my WAR (Model 1). This allowed me to implement my own custom AccessManager that does not rely on the Jackrabbit-typed principals. The biggest disadvantage of this approach is that I now have to create and shut down the repository myself. The solution I went with was an ApplicationScoped CDI producer that creates the repository and then shuts it down in the dispose method. This makes it easy to inject the repository into my classes.
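As a rough sketch of that producer/disposer pair (the repository.xml location and home directory below are placeholders, not the actual paths):

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Disposes;
import javax.enterprise.inject.Produces;
import javax.jcr.RepositoryException;
import org.apache.jackrabbit.core.RepositoryImpl;
import org.apache.jackrabbit.core.config.RepositoryConfig;

// Application-scoped producer: the repository is created once and shut down
// when the CDI context is destroyed (i.e. on application shutdown).
public class RepositoryProducer {

    @Produces
    @ApplicationScoped
    public RepositoryImpl createRepository() throws RepositoryException {
        RepositoryConfig config = RepositoryConfig.create(
                "/path/to/repository.xml", "/path/to/repository-home");
        return RepositoryImpl.create(config);
    }

    public void shutdownRepository(@Disposes RepositoryImpl repository) {
        repository.shutdown();
    }
}

Injection points can then simply @Inject the repository instead of managing its lifecycle themselves.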
I solved the issue with Jackrabbit finding the Subject in Glassfish by patching jackrabbit-core. It appears this issue has been around for some time (see JCR-3188), and a patch has been provided but never included in the source code. I applied the patch to 2.6.4 and Jackrabbit is now able to find and use the Subject in Glassfish.
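This is not the literal JCR-3188 patch, but the general shape of the fallback is roughly: try the plain JAAS lookup first, then fall back to the JACC PolicyContext key that Glassfish populates.

import java.security.AccessController;
import javax.security.auth.Subject;
import javax.security.jacc.PolicyContext;
import javax.security.jacc.PolicyContextException;

// Sketch of the lookup fallback; the class and method names are illustrative.
public final class ContainerSubjects {

    private ContainerSubjects() {
    }

    public static Subject currentSubject() {
        // Standard JAAS lookup; returns null in Glassfish.
        Subject subject = Subject.getSubject(AccessController.getContext());
        if (subject == null) {
            try {
                // JACC lookup used by Glassfish and other Java EE containers.
                subject = (Subject) PolicyContext.getContext("javax.security.auth.Subject.container");
            } catch (PolicyContextException e) {
                // No container subject available.
            }
        }
        return subject;
    }
}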

Related

Wildfly migrate authentication to Elytron

I am trying to migrate WildFly authentication to Elytron and got almost everything to work as I want, except for one problem.
We are using the Quartz scheduler to run jobs. These jobs are not bound to a caller principal. Using
SecurityContextAssociation.pushRunAsIdentity(new RunAsIdentity("My_Role", "My_User"));
I was able to propagate a principal to subsequent EJB calls. This is not working anymore; the principal is always "anonymous". Is there a way to do the same with Elytron?
Maybe you can use some variation of the following:
SecurityIdentity si = SecurityDomain.getCurrent().getCurrentSecurityIdentity();
si.createRunAsIdentity(...);
The current identity needs to have permission for this to succeed, so if you get an unauthorized exception you should add the RunAsPrincipal permission for that user: https://developer.jboss.org/people/fjuma/blog/2018/06/01/configuring-permissions-using-elytron-in-wildfly-13
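Something along these lines might work as a sketch; treat the exact overloads as assumptions and verify them against your wildfly-elytron version. The user name comes from the question, and the job body is a placeholder.

import java.util.concurrent.Callable;
import org.wildfly.security.auth.server.SecurityDomain;
import org.wildfly.security.auth.server.SecurityIdentity;

// Sketch: run a Quartz job body under a run-as identity. The current identity
// needs RunAsPrincipalPermission for the target user (see the linked blog post).
public class RunAsHelper {

    public static <T> T runJobAs(String user, Callable<T> jobBody) throws Exception {
        SecurityIdentity current = SecurityDomain.getCurrent().getCurrentSecurityIdentity();
        SecurityIdentity runAs = current.createRunAsIdentity(user);
        return runAs.runAs(jobBody);
    }
}

A job could then call RunAsHelper.runJobAs("My_User", ...) around the code that makes the EJB calls.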

Error creating bean named `conversionServicePostProcessor` when using spring-boot-admin-server

I was trying to enable the Spring Boot admin server for my application. The default settings work perfectly fine, but when I attempt to enable security I get the following error:
APPLICATION FAILED TO START
Description:
The bean 'conversionServicePostProcessor', defined in class path
resource
[org/springframework/security/config/annotation/web/configuration/WebSecurityConfiguration.class],
could not be registered. A bean with that name has already been
defined in class path resource
[org/springframework/security/config/annotation/web/reactive/WebFluxSecurityConfiguration.class]
and overriding is disabled.
Action:
Consider renaming one of the beans or enabling overriding by setting
spring.main.allow-bean-definition-overriding=true
Process finished with exit code 1
I am using the latest SNAPSHOT version of spring-boot-admin-starter-server (2.2.0-SNAPSHOT). Here is my security configuration:
@EnableAdminServer
@EnableWebFluxSecurity
@Configuration(proxyBeanMethods = false)
class AdminServerSecurityConfigurations(val adminServerProperties: AdminServerProperties) {

    @Bean
    fun adminServerSecurityWebFilterChain(http: ServerHttpSecurity): SecurityWebFilterChain = http
        // @formatter:off
        .authorizeExchange()
            .pathMatchers("${adminServerProperties.contextPath}/assets/**").permitAll()
            .pathMatchers("${adminServerProperties.contextPath}/login").permitAll()
            .anyExchange().authenticated().and()
        .formLogin().loginPage("${adminServerProperties.contextPath}/login").and()
        .logout().logoutUrl("${adminServerProperties.contextPath}/logout").and()
        .httpBasic().and()
        // @formatter:on
        .csrf().disable()
        .build()

    @Bean
    fun notifyLogger(instanceRepository: InstanceRepository) = LoggingNotifier(instanceRepository)
}
I found a pull request to update the documentation: https://github.com/spring-projects/spring-boot/issues/14069
For Reactive WebSockets,
{spring-reference}web-reactive.html#webflux-websocket[Spring WebFlux] offers rich support,
which is accessible through the spring-boot-starter-webflux module.
See the spring-boot-sample-websocket-reactive sample project to see how WebSockets may
be implemented using Spring WebFlux.
It turns out that using WebFlux and WebSockets together leads to conflicts. Also, in this pull request the resolution of the conflict was declined: https://github.com/spring-projects/spring-boot/issues/14810
For reactive WebSockets, see this sample: https://www.baeldung.com/spring-5-reactive-websockets
I had the same issue and was able to solve it by adding
spring.main.allow-bean-definition-overriding=true
to my application.properties.
It sounds like a workaround, and it was also only necessary when I deployed it as a WAR; as a standalone application the exception never occurred.
I also faced this error; after doing Reimport All Maven Projects (IntelliJ IDE) it worked fine for me. My detailed input on this issue is here.

Embed Payara in Java SE

Context: an existing Java SE application written in Swing which fires up an embedded server (so far it was Jetty), but we need to switch to Java EE, so we thought about bringing in an enterprise container (candidates are Payara, TomEE, WildFly).
The server should be able to run a web app based on dynamic input: a web context with its own web.xml and specific web resources which are not known at build time, so an uber JAR is not really an option for us.
We have successfully started a web app on Payara using code like the following (this is not working code, but it shows the steps we took for using Payara):
GlassFish glassfish;
WebContainer container;
GlassFishRuntime glassfishRuntime = GlassFishRuntime.bootstrap();
glassfish = glassfishRuntime.newGlassFish();
glassfish.start();
// Access WebContainer
container = glassfish.getService(WebContainer.class);
WebContainerConfig config = new WebContainerConfig();
container.setConfiguration(config);
Context context = container.createContext(contextPathLocation);
m_webAppContexts.put(p_contextName, context);
WebListener listener = container.createWebListener("listener-1", HttpListener.class);
listener.setPort(myDynamicPortNumber);
container.addWebListener(listener);
container.addContext(context, myDynamicContextPath);
context.addServlet(myDynamicMapping, myServletName);
This is all working and a basic web application starts in Payara when invoked from our Java SE application.
We also have a fragment of web.xml declaring additional servlets that we want to bring into this dynamic deployment if given conditions are satisfied.
What is the best way to override the existing web.xml with fragments from another web.xml? We need pointers to documentation, directions from more experienced Payara users.
This is not possible with Payara or Wildfly, as they work very differently from how Jetty works.
However, it is possible with Tomee.

How to address JNDI configuration when using mvn scala:console

I'm troubleshooting a Mapper problem and I'm running into an issue trying to use a Mapper class inside of the Scala/Lift console. Our MetaMappers have their datasource configured through a ConnectionIdentifier that points to a JDBC datasource configured in JNDI. This works great when bootstrapping through Jetty.
When loading the console and running (new bootstrap.liftweb.Boot).boot to initialize, Schemifier.schemify fails because the JNDI configuration is not available.
scala> (new bootstrap.liftweb.Boot).boot
java.lang.NullPointerException: Looking for Connection Identifier ConnectionIdentifier(jdbc/svcHub) but failed to find either a JNDI data source with the name jdbc/svcHub or a lift connection manager with the correct name
at net.liftweb.mapper.DB$$anonfun$7$$anonfun$apply$12.apply(DB.scala:141)
at net.liftweb.mapper.DB$$anonfun$7$$anonfun$apply$12.apply(DB.scala:141)
at net.liftweb.common.EmptyBox.openOr(Box.scala:465)
at net.liftweb.mapper.DB$$anonfun$7.apply(DB.scala:140)
at net.liftweb.mapper.DB$$anonfun$7.apply(DB.scala:140)
at net.liftweb.common.EmptyBox.openOr(Box.scala:465)
at net.liftweb.mapper.DB$.newConnection(DB.scala:134)
at net.liftweb.mapper.DB$.getConnection(DB.scala:230)
at net.liftweb.mapper.DB$.use(DB.scala:581)
at net.liftweb.mapper.Schemifier$.schemify(Sche...
Essentially, I'd like to have full MetaMapper functionality from within the console. My question is: What's the best way to bootstrap a Lift app from the console such that the JNDI-based dependencies can also be fulfilled outside of a JNDI-capable web container?
Under an application server it's likely that the server will provide a JNDI context for you. In a standalone application you must provide a JNDI context yourself. For that you can use a javax.naming.InitialContext.
There is a nice example using Apache's DBCP here: http://commons.apache.org/dbcp/guide/jndi-howto.html. Of course, you will have to adapt the DataSource objects to the implementation you are using.
This will be enough (not very elegant, though) for simple JNDI usage.
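For illustration, here is a minimal sketch in the spirit of the linked DBCP howto. The filesystem JNDI provider, driver class, and connection URL are assumptions; only the jdbc/svcHub name comes from the error above.

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.Reference;
import javax.naming.StringRefAddr;

// Minimal standalone JNDI setup; assumes Sun's filesystem JNDI provider
// (fscontext.jar, providerutil.jar) and commons-dbcp are on the classpath.
public class ConsoleJndiSetup {

    public static void bindDataSource() throws Exception {
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.fscontext.RefFSContextFactory");
        System.setProperty(Context.PROVIDER_URL, "file:///tmp/jndi");

        // Reference resolved by BasicDataSourceFactory, as in the DBCP howto.
        Reference ref = new Reference("javax.sql.DataSource",
                "org.apache.commons.dbcp.BasicDataSourceFactory", null);
        ref.add(new StringRefAddr("driverClassName", "org.h2.Driver"));   // hypothetical driver
        ref.add(new StringRefAddr("url", "jdbc:h2:mem:svcHub"));          // hypothetical URL
        ref.add(new StringRefAddr("username", "sa"));
        ref.add(new StringRefAddr("password", ""));

        new InitialContext().rebind("jdbc/svcHub", ref);  // the name Boot looks up
    }
}

Bind the reference once before calling (new bootstrap.liftweb.Boot).boot from the console and the jdbc/svcHub lookup should then resolve.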

Servlet Exception + Class Cast Exception + Glassfish + Netbeans + JPA Entities + Vaadin

I get this error:
StandardWrapperValve[Vaadin Servlet]: PWC1406: Servlet.service() for servlet Vaadin Servlet threw exception
java.lang.ClassCastException: com.delhi.entities.Category cannot be cast to com.delhi.entities.Category
when I try to run my webapps on glassfish v2.
Category is a JPA entity object
the offending code according to the server log is:
for (Category c : categories) {
mymethod();
}
categories is derived from:
List<Category> categories = q.getResultList();
Any idea what went wrong?
This is a class loader issue. If a class is loaded by different class loaders, its objects cannot be assigned to each other. You have probably passed an object from one WAR into another one. There are several options to resolve this:
Put all your code into a single WAR.
Use some form of remoting between your WARs. Serialization takes care of the class loader problem (a minimal sketch follows this list).
Try putting all your WARs into a single EAR. If that doesn't work, put all code into JARs that are on the EAR's classpath in the MANIFEST.MF.
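For the remoting option, a rough sketch of what the boundary could look like; the names (CategoryDto, CategoryService) are hypothetical. The point is that only serializable DTOs cross the boundary, so each WAR keeps using classes from its own class loader.

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import javax.ejb.Remote;
import javax.ejb.Stateless;

// Simple serializable DTO that crosses the remote boundary instead of the JPA entity.
public class CategoryDto implements Serializable {
    public String name;
}

@Remote
interface CategoryService {
    List<CategoryDto> findCategories();
}

@Stateless
class CategoryServiceBean implements CategoryService {
    @Override
    public List<CategoryDto> findCategories() {
        // Look up the JPA entities here and map them to DTOs
        // rather than returning the entities themselves.
        return new ArrayList<CategoryDto>();
    }
}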
I once had the same problem and my environment was the following:
I had Glassfish v4
Netbeans with following projects
webpage war project containing entities
and ear project with that webpage war project
The problem was that in the WAR's project settings I had checked [x] Run > Deploy on save. This caused the WAR project to be deployed every time I hit save. It sometimes led to PermGen (memory) problems and the inability to deploy the EAR correctly (because, for example, in between undeploying and deploying the EAR, Netbeans was deploying this WAR).
Solution: if using Netbeans and an EAR, uncheck Deploy on save in the project properties.
EDIT:
It seems that this error is connected with:
SEVERE: The web application [/faces] created a ThreadLocal with key of type [org.glassfish.pfl.dynamic.codegen.impl.CurrentClassLoader$1] (value [org.glassfish.pfl.dynamic.codegen.impl.CurrentClassLoader$1#249ea63a]) and a value of type [org.glassfish.web.loader.WebappClassLoader] (value [WebappClassLoader (delegate=true; repositories=WEB-INF/classes/)]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
I had the same problem today. The solution was closing the EntityManagerFactory after use.
This answer helped me:
https://stackoverflow.com/a/13823219/2455506
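If the factory is created manually rather than injected, one place to close it is a context listener, so the old class references are released on undeploy. A rough sketch, with a placeholder persistence-unit name:

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Sketch: create the factory on startup, close it on shutdown/undeploy.
// The persistence-unit name "myPU" is hypothetical; on Servlet 2.5 containers
// register the listener in web.xml instead of using @WebListener.
@WebListener
public class JpaLifecycleListener implements ServletContextListener {

    private EntityManagerFactory emf;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        emf = Persistence.createEntityManagerFactory("myPU");
        sce.getServletContext().setAttribute("emf", emf);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        if (emf != null && emf.isOpen()) {
            emf.close();
        }
    }
}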
I'm experiencing this problem too with Glassfish v2 and Glassfish v3.
Can I ask you a question: Are you attempting to initialize any persistence object when the application is deployed (through a servlet loaded on startup or a context listener)?
Like bguiz, I've noticed this problem only happens on redeploy. A new deploy to a freshly restarted Glassfish server never has this problem.
Like FelixM mentioned, I'm convinced this is a class loader issue, however I don't believe it's an issue with multiple WARs (I only have one deployed to my server). In Glassfish 3, I can see that my WAR is utilizing two Glassfish "engines": one for the web (war) and one for the jpa. From what I understand, these are different containers, each with their own class loader. I'm guessing Glassfish v2 works in the same manner.
I'm using Spring and (re)initialize some persistence objects on (re)deploy. What I'm thinking is that while the web engine is reinitializing the WAR, the jpa engine is still using the old class definitions. Often if I retry the redeploy after this initial failure it may succeed (sometimes it takes more than one retry, but eventually I can get it to succeed without a restart; I have better success with Glassfish v3 than v2).
At this point I'm thinking that either these two classloaders are out of sync or there is some sort of race condition on redeploy allowing this operation to sometimes succeed. I've tried to force the classloader, writing code like this
HashMap<Object, Object> properties = new HashMap<Object, Object>();
properties.put(PersistenceUnitProperties.CLASSLOADER, this.getClass().getClassLoader());
entityManagerFactory = Persistence.createEntityManagerFactory(jpaContext, properties);
but it didn't seem to have any effect.
I'm also wondering if eliminating the initialization at startup could fix the problem, giving the appserver time to resynchronize both engines before using any jpa classes (which is why I asked my follow up question).
My observation is that it only happens when using a hot redeploy or a static redeploy. This only applies, of course, if you get a class cast exception where both the to and from classes are the same.
Workarounds:
Don't use redeploy; undeploy and then deploy instead
Restart app server
Remove static members of the affected classes
Use a remote interface (serialization makes this go away)
IMO the class loader was unable to reload the class, so the old version was reused, resulting in the error.
This article doesn't talk about this error directly, but it is good background info on how the class loader works.