I am currently using a UserDetailsService to read user entries from a properties file:
<bean id="userDetailsService" class="org.springframework.security.userdetails.memory.InMemoryDaoImpl">
<property name="userProperties" value="users.properties"/>
</bean>
My properties file is meant to be edited by the admin, and the passwords in it are not encrypted:
bob=bobpassword
alice=alicepassword
Now, since I use a PasswordEncoder in my application, I need to encrypt the passwords before they are added to the UserDetails. This could be done somewhere in the code, but in my opinion that is not very convenient.
I found the PropertyPlaceholderConfigurer with the method convertPropertyValue(String value), which can be overridden.
From what I understand, it should be possible to load the properties file into the PropertyPlaceholderConfigurer, encrypt the values in the convertPropertyValue method, and then have them loaded by the UserDetailsService. Is that possible? If so, hints would help me; otherwise I'd appreciate an alternative solution.
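For illustration, the override I have in mind would look roughly like the sketch below (assuming Spring Security 2.x's PasswordEncoder interface and no salt); what I am unsure about is how the converted values would then reach the InMemoryDaoImpl's userProperties:
import org.springframework.beans.factory.config.PropertyPlaceholderConfigurer;
import org.springframework.security.providers.encoding.PasswordEncoder;

public class EncodingPropertyPlaceholderConfigurer extends PropertyPlaceholderConfigurer {

    private PasswordEncoder passwordEncoder; // the same encoder used by the authentication provider

    public void setPasswordEncoder(PasswordEncoder passwordEncoder) {
        this.passwordEncoder = passwordEncoder;
    }

    @Override
    protected String convertPropertyValue(String originalValue) {
        // Assumption: every value in users.properties is a plain-text password to be encoded.
        return passwordEncoder.encodePassword(originalValue, null);
    }
}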
Take a look at Jasypt. It is a Java library that allows developers to add basic encryption capabilities to their projects with minimum effort and without needing deep knowledge of how cryptography works.
You can see how to configure it with Spring here.
As an alternative, you may also implement your own PropertiesPersister to do the encryption/decryption:
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>classpath:com/foo/jdbc.properties</value>
</property>
<property name="propertiesPersister">
<bean class="com.mycompany.MyPropertyPersister" />
</property>
</bean>
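A minimal sketch of what com.mycompany.MyPropertyPersister could look like; Base64 decoding is used here only as a stand-in for whatever encryption/decryption you actually need (it assumes Java 8's java.util.Base64):
import java.io.IOException;
import java.io.InputStream;
import java.io.Reader;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Properties;
import org.springframework.util.DefaultPropertiesPersister;

public class MyPropertyPersister extends DefaultPropertiesPersister {

    @Override
    public void load(Properties props, InputStream is) throws IOException {
        super.load(props, is);
        transform(props);
    }

    @Override
    public void load(Properties props, Reader reader) throws IOException {
        super.load(props, reader);
        transform(props);
    }

    // Post-process every value right after it is read from the properties file.
    private void transform(Properties props) {
        for (String name : props.stringPropertyNames()) {
            String raw = props.getProperty(name);
            String converted = new String(Base64.getDecoder().decode(raw), StandardCharsets.UTF_8);
            props.setProperty(name, converted);
        }
    }
}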
Take a look at the example here.
Something similar to what you are after can be found at:
http://kayalvizhiameen.blogspot.in/2014/04/handling-obfuscated-property-values-in.html
Related
I want to disable the CANONICALIZE_FIELD_NAMES feature and would like to know how I can do that through XML (http://cxf.apache.org/schemas/jaxrs.xsd). For example, I would configure the providers as follows:
<jaxrs:providers>
<bean class="org.codehaus.jackson.jaxrs.JacksonJsonProvider"/>
</jaxrs:providers>
I assume the features would also be configured in a similar fashion:
<jaxrs:features>
----
</jaxrs:features>
I tried checking online but did not find any solution.
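For context, CANONICALIZE_FIELD_NAMES is a Jackson parser feature, so at the Java level disabling it means handing the provider a preconfigured ObjectMapper. A hedged sketch of what I mean (assuming Jackson 1.x, matching the org.codehaus package above, and a hypothetical factory class name):
import org.codehaus.jackson.JsonParser;
import org.codehaus.jackson.jaxrs.JacksonJsonProvider;
import org.codehaus.jackson.map.ObjectMapper;

public class JsonProviderFactory {

    // Builds a JacksonJsonProvider whose underlying factory no longer canonicalizes field names.
    public static JacksonJsonProvider createProvider() {
        ObjectMapper mapper = new ObjectMapper();
        mapper.getJsonFactory().configure(JsonParser.Feature.CANONICALIZE_FIELD_NAMES, false);
        return new JacksonJsonProvider(mapper);
    }
}
Such a factory method could back the provider bean registered under <jaxrs:providers> (for example via Spring's factory-method attribute) instead of the plain JacksonJsonProvider bean, but I would prefer a pure XML or <jaxrs:features> way if one exists.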
I am migrating a legacy Spring 3, Hibernate 3, JTA on JBoss 5 application to the latest versions (Spring 4.1.0.RELEASE, Hibernate 4.3.6.Final, JBoss Wildfly 8.1). It seems that Spring 4.1.0.RELEASE and Hibernate 4.3.6.Final do NOT work together in supporting transactions for write operations with the LocalSessionFactoryBean and the HibernateTransactionManager as configured below. Read-only get operations appear to be working ok.
To migrate, org.springframework.orm.hibernate3.support.HibernateDaoSupport has been updated to org.springframework.orm.hibernate4.support.HibernateDaoSupport. The code in question tries to save with getHibernateTemplate().saveOrUpdate(myObject);, where myObject is the object to save (this works in Spring 3 + Hibernate 3). The code compiles, but at runtime I see it throw an exception for the call at:
https://github.com/spring-projects/spring-framework/blob/master/spring-orm-hibernate4/src/main/java/org/springframework/orm/hibernate4/HibernateTemplate.java#L325
Questions:
Is the opening/closing of Hibernate sessions triggered by the getSessionFactory().getCurrentSession() call an issue (performance or otherwise)? If so, is there something in the configuration that can be set to avoid it?
HibernateTemplate always sets the newly opened session to FlushMode.MANUAL while handling the exception, and in the debugger I see that this fails the check for write operations at:
https://github.com/spring-projects/spring-framework/blob/master/spring-orm-hibernate4/src/main/java/org/springframework/orm/hibernate4/HibernateTemplate.java#L1134
Note that calling getHibernateTemplate().setCheckWriteOperations(false) bypasses the Spring check, but the getHibernateTemplate().saveOrUpdate(myObject) call then silently fails in the Hibernate code without throwing any exception, and nothing gets written to the database. What config change(s) do I need to make to get the write operations to commit?
Bean Definitions:
Here are the relevant bean definition snippets from the application-context.xml Spring config file:
<bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory"/>
</bean>
<tx:annotation-driven/>
<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean" lazy-init="false">
<property name="jndiName" value="java:jboss/datasources/jdbc/my-srvr"/>
<property name="cache">
<value>false</value>
</property>
<property name="proxyInterface">
<value>javax.sql.DataSource</value>
</property>
</bean>
<!-- Hibernate SessionFactory -->
<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean" lazy-init="true">
<property name="dataSource" ref="dataSource"/>
<property name="mappingResources">
<list>
<value>com/mydomain/dao/Hib.hib.xml</value>
</list>
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.MySQLInnoDBDialect</prop>
<prop key="hibernate.show_sql">false</prop>
<prop key="hibernate.generate_statistics">false</prop>
<!-- JTA -->
<prop key="hibernate.transaction.factory_class">org.hibernate.engine.transaction.internal.jta.JtaTransactionFactory</prop>
<prop key="hibernate.flushMode">AUTO</prop>
<prop key="jta.UserTransaction">java:jboss/UserTransaction</prop>
<prop key="jta.TransactionManager">java:jboss/TransactionManager</prop>
<prop key="hibernate.transaction.jta.platform">org.hibernate.engine.transaction.jta.platform.internal.JBossAppServerJtaPlatform</prop>
<prop key="hibernate.current_session_context_class">org.hibernate.context.internal.JTASessionContext</prop>
<!--prop key="hibernate.transaction.manager_lookup_class">
org.hibernate.transaction.JBossTransactionManagerLookup
</prop-->
<!-- Turn caching off to focus on JTA issues-->
<prop key="hibernate.cache.use_second_level_cache">false</prop>
<prop key="hibernate.cache.use_query_cache">false</prop>
<!--prop key="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</prop-->
<prop key="hibernate.cache.region.factory_class">org.hibernate.cache.ehcache.EhCacheRegionFactory</prop>
<prop key="net.sf.ehcache.configurationResourceName">sample-ehcache.xml</prop>
</props>
</property>
<!--No equivalent class in Spring4; comment out for now-->
<!--property name="eventListeners">
<map>
<entry key="merge">
<bean class="org.springframework.orm.hibernate3.support.IdTransferringMergeEventListener"/>
</entry>
</map>
</property-->
</bean>
Note: An important change from the legacy bean definition is the change from org.springframework.transaction.jta.JtaTransactionManager to org.springframework.orm.hibernate4.HibernateTransactionManager.
JNDI View
Once deployed, the JNDI View in JBoss Wildfly is as below (of course the object references change every deployment):
java:jboss
TransactionManager TransactionManagerDelegate#49e6e9c8
TransactionSynchronizationRegistry TransactionSynchronizationRegistryImple#40cd0746
UserTransaction UserTransaction
jaas java:jboss/jaas/ Context proxy
So I finally got the write operations to work in the legacy code. Here are the steps I followed:
Ensure that you are using the Hibernate specific transaction manager org.springframework.orm.hibernate4.HibernateTransactionManager and NOT the generic org.springframework.transaction.jta.JtaTransactionManager
Verify that the annotations are enabled.
Add the @Transactional annotation (org.springframework.transaction.annotation.Transactional) to the methods where you perform the save/update/delete operations (a sketch follows below). The legacy code worked without this annotation, but with the latest versions it is required to enable write operations.
In my case, I got a bunch of auto-wiring issues as soon as I added the annotation. The root cause turned out to be that some implementation classes, not interfaces, were being used to auto-wire properties in the @Service-level classes. Changing the references to use the interfaces fixed that issue. You can read more about it in other threads such as this one.
I had to search for and fix all such instances in the legacy code. Note that setting the FlushMode to AUTO globally via OpenSessionInViewFilter is not a clean solution; there is a good reason why Spring sets the FlushMode to MANUAL by default. Spring makes the necessary runtime tweaks to support write operations when the @Transactional annotation is present. I debugged all the way into Hibernate's org.hibernate.engine.transaction.internal.TransactionCoordinatorImpl class, and the JTA synchronization works fine with the setup above. Hope this helps anyone stuck trying to migrate legacy code to the latest versions.
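As an illustration of the @Transactional step, here is a minimal sketch of an annotated save method (MyObjectDao is a placeholder name; MyObject/myObject come from the question):
import org.springframework.orm.hibernate4.support.HibernateDaoSupport;
import org.springframework.transaction.annotation.Transactional;

public class MyObjectDao extends HibernateDaoSupport {

    // With @Transactional, Spring binds a transactional session whose flush mode
    // permits writes, so saveOrUpdate no longer fails the MANUAL flush-mode check.
    @Transactional
    public void save(MyObject myObject) {
        getHibernateTemplate().saveOrUpdate(myObject);
    }
}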
I have defined a property in "alfresco-global.properties". How can I access this property from the FTL file of my webscript in Alfresco Share?
I'm using Alfresco Community Version 4.2
You can't access properties files directly from the view (FTL); that would violate the Separation of Concerns principle.
Since alfresco-global.properties is actually exposed as a Spring bean of type java.util.Properties, you can inject the whole thing into the Java class backing your webscript:
<property name="properties">
<ref bean="global-properties"/>
</property>
Then you can access your property like this: properties.getProperty("my.custom.property").
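A minimal sketch of the Java-backed webscript, assuming a DeclarativeWebScript and a properties setter matching the XML snippet above (class and model key names are examples):
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.springframework.extensions.webscripts.Cache;
import org.springframework.extensions.webscripts.DeclarativeWebScript;
import org.springframework.extensions.webscripts.Status;
import org.springframework.extensions.webscripts.WebScriptRequest;

public class MyWebScript extends DeclarativeWebScript {

    private Properties properties; // injected via the <property name="properties"> element above

    public void setProperties(Properties properties) {
        this.properties = properties;
    }

    @Override
    protected Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache) {
        Map<String, Object> model = new HashMap<String, Object>();
        // Expose the global property to the FTL template as ${myCustomProperty}
        model.put("myCustomProperty", properties.getProperty("my.custom.property"));
        return model;
    }
}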
We have a Spring Batch component implemented as part of an EAR application deployed on WebLogic. We want to impose a maximum thread constraint on the Spring Batch component only, not on the web application as a whole, so we are thinking of doing it through a work manager. Before implementing, I have the following doubts:
1. I can create a global work manager with a maximum thread constraint in the WebLogic console.
2. I can then reference it in the Spring Batch component.
My doubt is: if I implement the above approach, will it affect all the applications deployed on WebLogic, or only the applications that actually reference the work manager?
I also know I can create the work manager through the webapp's weblogic.xml, but doing so may affect the whole webapp, and I need the max thread constraint only for one component of it.
Please suggest.
You can control the threads available to Spring Batch jobs by setting an appropriate TaskExecutor on the JobLauncher. For example:
<bean id="jobTaskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="corePoolSize" value ="5" />
<property name="maxPoolSize" value ="10" />
<property name="allowCoreThreadTimeOut" value="true" />
<property name="threadNamePrefix" value="batch-job-thread-" />
</bean>
<bean id="jobLauncher"
class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
<property name="taskExecutor" ref="jobTaskExecutor" />
</bean>
The above example is for Spring Batch 2.1.8.
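For completeness, a short usage sketch of launching a job through that launcher (the context file name and the "myJob" bean id are just examples):
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class JobRunner {

    public static void main(String[] args) throws Exception {
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("batch-context.xml");
        JobLauncher jobLauncher = context.getBean("jobLauncher", JobLauncher.class);
        Job job = context.getBean("myJob", Job.class);

        JobParameters params = new JobParametersBuilder()
                .addLong("run.id", System.currentTimeMillis())
                .toJobParameters();

        // With an async TaskExecutor the launch returns immediately and the job
        // runs on the "batch-job-thread-" pool, capped at maxPoolSize threads.
        JobExecution execution = jobLauncher.run(job, params);
        System.out.println("Status right after launch: " + execution.getStatus());
    }
}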
I'm trying to get integration testing working for a GlassFish 2.x project, using Maven2 and Cargo. I finally have Cargo attempting to deploy my EAR but it fails to start because the data source is not configured. The app also depends on a few JMS queues and a connection factory - how do I add these?
The Cargo GlassFish 2.x plugin says existing configurations are not supported, so I can't do that.
Using the maven-glassfish-plugin is an option, but we also run OC4J so a Cargo solution would be preferred.
edit: The resources are: 1 JDBC connection pool, 1 JDBC resource, 4 JMS queues, 2 JMS connection factories and a custom security realm (pear tree optional). The realm needs an entry in the login.conf like:
myRealm {
uk.co.mycom.MyGlassFishLoginModule required;
};
I'm not sure (I have never used this), but IIRC you should be able to put your datasource configuration in a sun-resources.xml file and package it as META-INF/sun-resources.xml in your EAR; GlassFish is supposed to create the resources at deploy time.
Here is an example sun-resources.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//Sun Microsystems Inc.//DTD Application Server 9.0 Domain//EN" "sun-resources_1_3.dtd">
<resources>
<jdbc-connection-pool name="SPECjPool" steady-pool-size="100"
max-pool-size="150" max-wait-time-in-millis="60000"
pool-resize-quantity="2" idle-timeout-in-seconds="300"
is-isolation-level-guaranteed="true"
is-connection-validation-required="false"
connection-validation-method="auto-commit"
fail-all-connections="false"
datasource-classname="oracle.jdbc.pool.OracleDataSource">
<property name="URL"
value="jdbc:oracle:thin:#iasperfsol12:1521:specdb"/>
<property name="User" value="spec"/>
<property name="Password" value="spec"/>
<property name="MaxStatements" value="200"/>
<property name="ImplicitCachingEnabled" value="true"/>
</jdbc-connection-pool>
<jdbc-resource enabled="true" pool-name="SPECjPool"
jndi-name="jdbc/SPECjDB"/>
</resources>
Give it a try.
Resources
The sun-resources.xml File
Thanks, that worked. The datasource seems to have gone in okay and the app has deployed. However, from the doc you linked I can't see how to add the other things I need (I've edited more detail into my question about these). This solution also means that I will have to (use profiles to?) build my EAR differently for integration testing, which is imperfect.
I somehow missed that you wanted to create resources other than datasources, and I've seen several threads reporting that the suggested approach won't work with GlassFish v2 for those other resource types (like JMS resources). My bad.
So, given the current state, your options are (IMO):
contribute to Cargo to provide an "existing" configuration implementation for GlassFish v2
use the maven-glassfish-plugin as you suggested
I don't have any better suggestions.