Accessing alfresco-global.properties from FTL

I have defined a property in "alfresco-global.properties". How can I access this property from the FTL file of my webscript in Alfresco Share?
I'm using Alfresco Community Version 4.2

You can't access properties files directly from the view (FTL); doing so would violate the separation-of-concerns principle.
Since alfresco-global.properties is actually exposed as a Spring bean of type java.util.Properties, you can inject the whole thing into the Java class backing your webscript:
<property name="properties">
    <ref bean="global-properties"/>
</property>
You can then read your property with properties.getProperty("my.custom.property").
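For illustration, here is a minimal sketch of a Java-backed webscript controller that receives the injected Properties bean and passes a value to the FTL model. The wiring matches the XML snippet above, while the class name and the key my.custom.property are assumptions, not taken from the original question.

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.springframework.extensions.webscripts.Cache;
import org.springframework.extensions.webscripts.DeclarativeWebScript;
import org.springframework.extensions.webscripts.Status;
import org.springframework.extensions.webscripts.WebScriptRequest;

// Hypothetical controller class; wire its "properties" setter to the
// global-properties bean in the webscript's bean definition.
public class MyPropertyWebScript extends DeclarativeWebScript {

    private Properties properties;

    public void setProperties(Properties properties) {
        this.properties = properties;
    }

    @Override
    protected Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache) {
        Map<String, Object> model = new HashMap<String, Object>();
        // Expose the value to the view; the FTL can then render ${myCustomProperty}
        model.put("myCustomProperty", properties.getProperty("my.custom.property"));
        return model;
    }
}

In the FTL file you then reference ${myCustomProperty} like any other model value.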

Related

How to configure external property files in OSGi

We are using JBoss Fuse 6.2 with a technical stack of Blueprint, Camel, ActiveMQ and MyBatis.
We need to know how to configure property files in OSGi.
As far as I know we can configure .cfg files, but is there a simpler way, similar to Spring's property configuration?
In our code we read from property files using the ext:propertyPlaceholder namespace, giving a bean id and the values.
Please help: is there a simpler way to read the property files?
There are several ways to provide configuration, because OSGi services can access configuration via the ConfigurationAdmin service. Blueprint can also resolve property values through it.
JBoss Fuse runs on Karaf, so you can use the following methods.
(Some quotes below are from http://www.liquid-reality.de/display/liquid/2011/09/23/Karaf+Tutorial+Part+2+-+Using+the+Configuration+Admin+Service)
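For the pure OSGi route, a service implements ManagedService and receives its configuration (including later updates) from the ConfigurationAdmin service. A minimal sketch, with the persistent id ConfigApp borrowed from the Blueprint example below:

import java.util.Dictionary;

import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedService;

// Register this service with the property "service.pid" = "ConfigApp";
// ConfigurationAdmin then calls updated() with the matching configuration.
public class ConfigAppService implements ManagedService {

    private volatile String title = "Default Title";

    @Override
    public void updated(Dictionary<String, ?> properties) throws ConfigurationException {
        if (properties != null) {
            Object value = properties.get("title");
            if (value != null) {
                this.title = value.toString();
            }
        }
        refresh();
    }

    // React once all changes have been applied.
    private void refresh() {
        System.out.println("Title is now: " + title);
    }
}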
Configuration with Blueprint
The integration with our bean class is mostly a simple bean definition where we define the title property and assign the placeholder, which will be resolved using the Config Admin service. The only special thing is the init-method, which gives us the chance to react after all changes have been made, as in the pure OSGi example.
For blueprint we do not need any maven dependencies as our Java Code is a pure Java bean. The blueprint context is simply activated by putting it in the OSGI-INF/blueprint directory and by having the blueprint extender loaded. As blueprint is always loaded in Karaf we do not need anything else.
<cm:property-placeholder persistent-id="ConfigApp" update-strategy="reload">
    <cm:default-properties>
        <cm:property name="title" value="Default Title"/>
    </cm:default-properties>
</cm:property-placeholder>

<!-- class attribute omitted in the original quote; point it at your bean class -->
<bean id="myApp" init-method="refresh">
    <property name="title" value="${title}"/>
</bean>
Afterwards you can put a .cfg file (which is a standard Java property file) into
Karaf's etc or deploy directory with the name of the given persistent-id, which is ConfigApp in our example (for example: /etc/ConfigApp.cfg):
title=Configured title
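For completeness, here is a minimal sketch of the bean class behind the Blueprint snippet above; the class name is an assumption (the original quote omits it), and refresh() matches the declared init-method.

// Plain Java bean wired by the Blueprint context above; no OSGi imports needed.
public class ConfigApp {

    private String title;

    public void setTitle(String title) {
        this.title = title;
    }

    // Called by Blueprint as the init-method; with update-strategy="reload"
    // the context is reloaded on configuration changes, so this runs again.
    public void refresh() {
        System.out.println("Configured title: " + title);
    }
}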

Multiple gemfire cache-server on same host machine

Hi, I want to start more than one GemFire cache server on the same host using Spring GemFire 8.1. Please find the GemFire configuration file below. I want to start GFServer1 and GFServer2 on the same host, i.e. HOSTNAME, using the Spring GemFire configuration. I want to avoid gfsh commands and start everything from Eclipse, then connect clients to these servers on the same host.
Thanks in advance
<util:properties id="gemfireProperties">
    <prop key="name">Locator_Dev</prop>
    <prop key="mcast-port">0</prop>
    <prop key="locators">HOSTNAME[1099]</prop>
    <prop key="log-level">warning</prop>
    <prop key="http-service-port">8181</prop>
    <prop key="jmx-manager">true</prop>
    <prop key="jmx-manager-port">1199</prop>
    <prop key="jmx-manager-start">true</prop>
    <prop key="start-locator">HOSTNAME[1099]</prop>
</util:properties>

<gfe:cache properties-ref="gemfireProperties"/>

<gfe:cache-server id="GFServer1" auto-startup="true"
    bind-address="HOSTNAME" port="40411" host-name-for-clients="HOSTNAME"
    load-poll-interval="2000" max-connections="22" max-threads="16"
    max-message-count="1000" max-time-between-pings="30000">
    <gfe:subscription-config eviction-type="ENTRY"
        capacity="1000" disk-store="diskStore1"/>
</gfe:cache-server>

<gfe:cache-server id="GFServer2" auto-startup="true"
    bind-address="HOSTNAME" port="40412" host-name-for-clients="HOSTNAME"
    load-poll-interval="2000" max-connections="22" max-threads="16"
    max-message-count="1000" max-time-between-pings="30000">
    <gfe:subscription-config eviction-type="ENTRY"
        capacity="1000" disk-store="diskStore1"/>
</gfe:cache-server>

<gfe:disk-store id="diskStore1" queue-size="50"
    auto-compact="true" max-oplog-size="10" time-interval="9999">
    <gfe:disk-dir
        location="D:\NP\WorkSpace\GemfireRegionSolutionNStart\disk-store\store_1"
        max-size="20"/>
    <gfe:disk-dir
        location="D:\NP\WorkSpace\GemfireRegionSolutionNStart\disk-store\store_2"
        max-size="20"/>
</gfe:disk-store>

<gfe:replicated-region id="customer" name="Customer"/>
<gfe:replicated-region id="bookMaster" name="BookMaster"/>

</beans>
The configuration you have posted will create two cache servers within the same JVM, i.e. it will open up two ports within the same process.
If this is not what you want, i.e. you want two distinct processes, then in Eclipse you will have to provide two run configurations to start the two servers.
Is there a specific question? As @Swapnil points out, this will start 2 GemFire "Cache Servers" (ServerSockets listening for cache clients), as you have appropriately configured, on the same host within the same JVM. This will work regardless of how it is executed (i.e. IDE, command-line, from Gfsh or from Spring Boot).
Let us know if you have a more specific question, thanks!
So you can configure the LocatorLauncherFactoryBean, for example, like so...
<util:properties id="gemfireProperties">
    <prop key="log-level">config</prop>
    <prop key="http-service-port">8181</prop>
    <prop key="jmx-manager">true</prop>
    <prop key="jmx-manager-port">1199</prop>
    <prop key="jmx-manager-start">true</prop>
    <prop key="locators">host1[10334],host2[11235],...,hostN[20668]</prop>
</util:properties>

<bean id="locator" class="org.spring.data.gemfire.config.LocatorLauncherFactoryBean">
    <property name="gemfireProperties" ref="gemfireProperties"/>
    <property name="memberName" value="SpringDataGemFireLocator"/>
    <property name="bindAddress" value="10.124.12.24"/>
    <property name="port" value="12480"/>
</bean>
As you may have noticed, this Locator can join other Locators in the GemFire Cluster, which were specified in the "gemfireProperties" bean with the "locators" GemFire System property.
NOTE: the "bindAddress" property to the LocatorLauncherFactoryBean is only necessary if the localhost where this Locator will be running has multiple NICs and you want to bind to a specific NIC.
Also, I have set the JMX Manager GemFire System properties to enable the Locator to become, and actually start, a Manager (on port 1199). This allows you to connect to this Locator from Gfsh with either gfsh>connect --locator=localhost[12480] or gfsh>connect --jmx-manager=localhost[1199].
Basically, the "gemfireProperties" bean allows you to configure any valid GemFire System property.
Now, since this Locator is running from within your IDE, you will need to configure the "run profile" with a $GEMFIRE environment variable pointing at a GemFire distribution downloaded from Pivotal's website in order to get Pulse running from this Locator. The GemFire Manager's ManagementAgent consults it when deciding 1. whether to start the embedded HTTP service (Jetty) hosting GemFire's out-of-the-box webapps (e.g. Pulse) and 2. whether it can find Pulse and start the webapp; the ManagementAgent looks for Pulse in the distribution.
For instance, I set my $GEMFIRE environment variable to...
/Users/jblum/Downloads/Pivotal/GemStone/Products/GemFire/Pivotal_GemFire_820_b17919_Linux
Now, to get your individual Spring-configured GemFire Servers to connect to the cluster, that is simple.
Again, you only need a "gemfireProperties" bean defined in each Spring GemFire Server XML configuration file with the "locators" GemFire System property defined, e.g. ...
<util:properties id="gemfireProperties">
    <prop key="log-level">config</prop>
    <prop key="locators">localhost[12480]</prop>
</util:properties>
<gfe:cache properties-ref="gemfireProperties"/>
This configuration will enable the GemFire Data Nodes to connect to the cluster, and this cluster will be visible from Pulse, if everything is setup correctly.
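Since the question also asks about connecting a client to these servers, here is a sketch of the plain-Java route using the GemFire 8 client API; the region name Customer comes from the posted config, while everything else (class name, key and value types) is made up for illustration.

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;

// Connects to the cluster through the Locator, not directly to the servers.
public class SimpleGemFireClient {

    public static void main(String[] args) {
        ClientCache cache = new ClientCacheFactory()
            .addPoolLocator("localhost", 12480)
            .create();

        // PROXY keeps no local state; all operations go to the servers.
        Region<String, String> customers = cache
            .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
            .create("Customer");

        customers.put("1", "John Doe");
        System.out.println(customers.get("1"));

        cache.close();
    }
}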
Again, hope this helps.
Cheers,
John
OK... so you have a few options from within your IDE (e.g. Eclipse).
If you break the Spring config above into 2 separate Spring (Data GemFire) XML configuration files, each containing 1 of the <gfe:cache-server> elements (call these files, for example, spring-gemfire-server1-context.xml and spring-gemfire-server2-context.xml), you could run the Spring-configured GemFire "data nodes" and Cache Servers with the following...
SimpleSpringApplication
Where the argument to the simple little Java "main" program is the file system path to 1 of the Spring XML configuration files noted above.
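The original answer links to the actual SimpleSpringApplication source; as a stand-in, a launcher of that shape might look roughly like this (a sketch, not the linked code):

import org.springframework.context.support.FileSystemXmlApplicationContext;

// Usage: java SimpleSpringApplication /path/to/spring-gemfire-server1-context.xml
public class SimpleSpringApplication {

    public static void main(String[] args) {
        // Bootstrapping the Spring container starts the GemFire cache
        // and the cache server(s) declared in the given XML file.
        FileSystemXmlApplicationContext context =
            new FileSystemXmlApplicationContext(args[0]);
        context.registerShutdownHook();
    }
}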
Better yet, you can launch these Spring (GemFire) configs with a SpringBoot application using something like...
UsefulSpringBootGemFireApplication
As you can see here, you can use either the @Import annotation, if you are configuring GemFire from Spring using Java-based configuration (for example, UsefulSpringBasedGemFireConfiguration), or, in your case with XML, the
@ImportResource("/classpath/to/spring-gemfire-server1-context.xml") annotation.
It may be possible (not sure) to pass a System property value into the @ImportResource annotation, like so...
@SpringBootApplication
@ImportResource("${spring-gemfire-context-xml-location}")
class SpringBootGemFireApplication {
...
}
Then, from within your IDE, you can create 2 separate "run profiles" using the same SpringBootGemFireApplication class and then setting the System property (e.g. -Dspring-gemfire-context-xml-location=/path/to/spring-gemfire-server1-context.xml) appropriately for each configuration.
Remember, you can use Spring's ResourceLoader path qualifiers (e.g. file:, http:, etc.) to resolve your Spring GemFire config from different source locations (CLASSPATH, file system, etc.)... see the table ("Table 7.1 Resource strings") in the hyperlinked section of the Spring Framework Reference Guide.
In the worst case, you need to create 2 separate, but nearly identical SpringBoot application Java main classes to launch your 2 configuration files for each GemFire Data Node (& CacheServer) JVM process. However, hopefully using the System property approach allows you to recycle the same class in 2 separate run profiles.
So, this leaves you with 2 GemFire Data Node JVM processes now. But what about the Locator?
Well, you can continue to embed the Locator in the GemFire Data Node Server JVM process as before, but if you really want a standalone Locator JVM process, then you can use the following, "experimental" class...
LocatorLauncherFactoryBean
This class uses GemFire's LocatorLauncher class to configure and bootstrap a GemFire Locator from Spring config. I created this class over 2 years ago for a customer POC as an example of how a developer might configure a GemFire Locator from Spring config.
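The experimental class itself is not reproduced in this thread; purely to illustrate the idea, a stripped-down FactoryBean wrapping GemFire's LocatorLauncher might look roughly like this (a sketch against the GemFire 8 LocatorLauncher.Builder API, omitting the gemfireProperties handling, and not the actual linked source):

import com.gemstone.gemfire.distributed.LocatorLauncher;

import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;

// Configures and bootstraps a GemFire Locator from Spring config.
public class SimpleLocatorLauncherFactoryBean implements InitializingBean, DisposableBean {

    private int port = 10334; // GemFire's default Locator port
    private String bindAddress;
    private String memberName;
    private LocatorLauncher locatorLauncher;

    @Override
    public void afterPropertiesSet() {
        LocatorLauncher.Builder builder = new LocatorLauncher.Builder()
            .setMemberName(memberName)
            .setPort(port);
        if (bindAddress != null) {
            // Only needed on hosts with multiple NICs (see the NOTE above).
            builder.setBindAddress(bindAddress);
        }
        this.locatorLauncher = builder.build();
        this.locatorLauncher.start();
    }

    @Override
    public void destroy() {
        this.locatorLauncher.stop();
    }

    public void setPort(int port) { this.port = port; }
    public void setBindAddress(String bindAddress) { this.bindAddress = bindAddress; }
    public void setMemberName(String memberName) { this.memberName = memberName; }
}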
The Spring XML configuration (used by the SpringBootGemFireApplication @ImportResource annotation, in yet another "run profile" and System property combination) would look similar to the following...
locator.xml
Now, you have effectively achieved 3 separate GemFire JVM processes (2 Servers and 1 Locator). You can scale this from within your IDE to however many Servers and Locators you want, and Pulse will show all of them in the cluster, provided the cluster is configured correctly (Servers pointed at Locators, etc.).
Hope this helps.
Cheers!
John

Spring4 + Hibernate4 + JTA Write Operations Fail

I am migrating a legacy Spring 3, Hibernate 3, JTA on JBoss 5 application to the latest versions (Spring 4.1.0.RELEASE, Hibernate 4.3.6.Final, JBoss Wildfly 8.1). It seems that Spring 4.1.0.RELEASE and Hibernate 4.3.6.Final do NOT work together in supporting transactions for write operations with the LocalSessionFactoryBean and the HibernateTransactionManager as configured below. Read-only get operations appear to be working ok.
To migrate, org.springframework.orm.hibernate3.support.HibernateDaoSupport has been updated to org.springframework.orm.hibernate4.support.HibernateDaoSupport. The code in question is trying to save with getHibernateTemplate().saveOrUpdate(myObject); where myObject is the object to save (that works in Spring3 + Hibernate 3). The code compiles but at runtime I see the code throw an exception for the call at:
https://github.com/spring-projects/spring-framework/blob/master/spring-orm-hibernate4/src/main/java/org/springframework/orm/hibernate4/HibernateTemplate.java#L325
Questions:
Is the opening/closing of Hibernate sessions triggered by the getSessionFactory().getCurrentSession() call an issue (performance or otherwise)? If so, is there something in the configuration that can be set to avoid it?
HibernateTemplate always sets the newly opened session to FlushMode.MANUAL while handling the exception. And, in the debugger, I see that this fails the check for write operations at:
https://github.com/spring-projects/spring-framework/blob/master/spring-orm-hibernate4/src/main/java/org/springframework/orm/hibernate4/HibernateTemplate.java#L1134
Note that setting getHibernateTemplate().setCheckWriteOperations(false); bypasses the Spring check but the getHibernateTemplate().saveOrUpdate(myObject) call silently fails in the Hibernate code without throwing any exceptions and nothing gets written to the database. What config change(s) do I need to make to get the write operations to commit?
Bean Definitions:
Here are the relevant bean definition snippets from the application-context.xml Spring config file:
<bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory"/>
</bean>
<tx:annotation-driven/>
<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean" lazy-init="false">
<property name="jndiName" value="java:jboss/datasources/jdbc/my-srvr"/>
<property name="cache">
<value>false</value>
</property>
<property name="proxyInterface">
<value>javax.sql.DataSource</value>
</property>
</bean>
<!-- Hibernate SessionFactory -->
<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean" lazy-init="true">
<property name="dataSource" ref="dataSource"/>
<property name="mappingResources">
<list>
<value>com/mydomain/dao/Hib.hib.xml</value>
</list>
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.MySQLInnoDBDialect</prop>
<prop key="hibernate.show_sql">false</prop>
<prop key="hibernate.generate_statistics">false</prop>
<!-- JTA -->
<prop key="hibernate.transaction.factory_class">org.hibernate.engine.transaction.internal.jta.JtaTransactionFactory</prop>
<prop key="hibernate.flushMode">AUTO</prop>
<prop key="jta.UserTransaction">java:jboss/UserTransaction</prop>
<prop key="jta.TransactionManager">java:jboss/TransactionManager</prop>
<prop key="hibernate.transaction.jta.platform">org.hibernate.engine.transaction.jta.platform.internal.JBossAppServerJtaPlatform</prop>
<prop key="hibernate.current_session_context_class">org.hibernate.context.internal.JTASessionContext</prop>
<!--prop key="hibernate.transaction.manager_lookup_class">
org.hibernate.transaction.JBossTransactionManagerLookup
</prop-->
<!-- Turn caching off to focus on JTA issues-->
<prop key="hibernate.cache.use_second_level_cache">false</prop>
<prop key="hibernate.cache.use_query_cache">false</prop>
<!--prop key="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</prop-->
<prop key="hibernate.cache.region.factory_class">org.hibernate.cache.ehcache.EhCacheRegionFactory</prop>
<prop key="net.sf.ehcache.configurationResourceName">sample-ehcache.xml</prop>
</props>
</property>
<!--No equivalent class in Spring4; comment out for now-->
<!--property name="eventListeners">
<map>
<entry key="merge">
<bean class="org.springframework.orm.hibernate3.support.IdTransferringMergeEventListener"/>
</entry>
</map>
</property-->
</bean>
Note: An important change from the legacy bean definition is the change from org.springframework.transaction.jta.JtaTransactionManager to org.springframework.orm.hibernate4.HibernateTransactionManager.
JNDI View
Once deployed, the JNDI View in JBoss Wildfly is as below (of course the object references change every deployment):
java:jboss
    TransactionManager                    TransactionManagerDelegate@49e6e9c8
    TransactionSynchronizationRegistry    TransactionSynchronizationRegistryImple@40cd0746
    UserTransaction                       UserTransaction
    jaas                                  java:jboss/jaas/ Context proxy
So I finally got the write operations to work in the legacy code. Here are the steps I followed to get the code working:
Ensure that you are using the Hibernate-specific transaction manager org.springframework.orm.hibernate4.HibernateTransactionManager and NOT the generic org.springframework.transaction.jta.JtaTransactionManager.
Verify that the annotations are enabled.
Add the @Transactional annotation (org.springframework.transaction.annotation.Transactional) to the methods where you perform save/update/delete etc. operations; a sketch follows this answer. The legacy code worked without this annotation, but with the latest versions it is required to enable write operations.
In my case, I got a bunch of auto-wiring issues as soon as I added the annotation. The root cause turned out to be that some implementation classes, rather than interfaces, were being used to auto-wire properties in the @Service level classes. Changing the references to use the interfaces fixed that issue. You can read more about it in other threads, such as this one.
I had to search and repeat the steps to fix all such instances in the legacy code. Note that setting the FlushMode to AUTO globally via OpenSessionInViewFilter is not a clean solution; there is a good reason why Spring sets the FlushMode to MANUAL by default. Spring makes the necessary runtime tweaks to support write operations when the @Transactional annotation is present. I debugged all the way into Hibernate's org.hibernate.engine.transaction.internal.TransactionCoordinatorImpl class, and the JTA synchronization works fine with the setup above. Hope this helps anyone stuck trying to migrate legacy code to the latest versions.
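To make the @Transactional step concrete, here is a minimal sketch of an annotated service method in the style of the question's code; the class name is made up, and the parameter is typed as Object just to keep the sketch self-contained.

import org.springframework.orm.hibernate4.support.HibernateDaoSupport;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical service; the inherited sessionFactory must still be wired
// to the sessionFactory bean from the config above.
@Service
public class MyObjectService extends HibernateDaoSupport {

    // With @Transactional present, Spring manages the session and flush
    // mode so the write operation is actually committed.
    @Transactional
    public void save(Object myObject) {
        getHibernateTemplate().saveOrUpdate(myObject);
    }
}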

Spring Encrypt Values from Properties File

I am currently using a UserDetailsService to get values from a user file:
<bean id="userDetailsService" class="org.springframework.security.userdetails.memory.InMemoryDaoImpl">
<property name="userProperties" value="users.properties"/>
</bean>
My properties file is meant to be edited by the admin, and the passwords are not encrypted:
bob=bobpassword
alice=alicepassword
Now, since I use a PasswordEncoder in my application, I need to encrypt the passwords and add them to the UserDetails. This could be done somewhere in the code, but that is not very handy in my opinion.
I found the PropertyPlaceholderConfigurer with the method convertPropertyValue(String value), which can be overridden.
From what I understand, it should be possible to load the properties file into the PropertyPlaceholderConfigurer, encrypt the values in the convertPropertyValue method, and have them then loaded by the UserDetailsService. Is that possible? If yes, hints would help me; otherwise I'd appreciate an alternative solution.
Take a look at Jasypt; it is a Java library which allows the developer to add basic encryption capabilities to projects with minimum effort, and without the need for deep knowledge of how cryptography works.
You can see how to configure it with Spring here
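As a quick illustration of the Jasypt API itself, independent of the Spring wiring (the master password below is a placeholder):

import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;

// Minimal Jasypt sketch: encrypt and decrypt a property value.
public class JasyptExample {

    public static void main(String[] args) {
        StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
        encryptor.setPassword("masterPassword"); // key from which the cipher is derived

        String encrypted = encryptor.encrypt("bobpassword");
        System.out.println(encryptor.decrypt(encrypted)); // prints "bobpassword"
    }
}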
As an alternative, you may also implement your own PropertiesPersister to do the encryption/decryption:
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <value>classpath:com/foo/jdbc.properties</value>
    </property>
    <property name="propertiesPersister">
        <bean class="com.mycompany.MyPropertyPersister" />
    </property>
</bean>
Take a look at the example here
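A sketch of what such a persister might look like; the convert() helper below is a placeholder for whatever encryption or decryption you plug in (e.g. Jasypt):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

import org.springframework.util.DefaultPropertiesPersister;

// Sketch of the com.mycompany.MyPropertyPersister referenced in the XML above.
public class MyPropertyPersister extends DefaultPropertiesPersister {

    @Override
    public void load(Properties props, InputStream is) throws IOException {
        super.load(props, is);
        // Post-process every value in place after the standard load.
        for (String name : props.stringPropertyNames()) {
            props.setProperty(name, convert(props.getProperty(name)));
        }
    }

    private String convert(String value) {
        return value; // placeholder: encrypt or decrypt as your use case requires
    }
}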
Something similar to what you expect can be found at
http://kayalvizhiameen.blogspot.in/2014/04/handling-obfuscated-property-values-in.html

Setup resources for GlassFish2.x Cargo deployment

I'm trying to get integration testing working for a GlassFish 2.x project, using Maven2 and Cargo. I finally have Cargo attempting to deploy my EAR but it fails to start because the data source is not configured. The app also depends on a few JMS queues and a connection factory - how do I add these?
The Cargo Glassfish 2.x plugin says existing configurations are not supported, so I can't do that.
Using the maven-glassfish-plugin is an option, but we also run OC4J so a Cargo solution would be preferred.
edit: The resources are: 1 JDBC connection pool, 1 JDBC resource, 4 JMS queues, 2 JMS connection factories and a custom security realm (pear tree optional). The realm needs an entry in the login.conf like:
myRealm {
uk.co.mycom.MyGlassFishLoginModule required;
};
I'm not sure (I never used this) but IIRC, you should be able to put your datasource configuration in a sun-resources.xml file and package it under META-INF/sun-resources.xml in your EAR, and GlassFish is supposed to create the resources at deploy time.
Here is an example sun-resources.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//Sun Microsystems Inc.//DTD Application Server 9.0 Domain//EN" "sun-resources_1_3.dtd">
<resources>
    <jdbc-connection-pool name="SPECjPool" steady-pool-size="100"
        max-pool-size="150" max-wait-time-in-millis="60000"
        pool-resize-quantity="2" idle-timeout-in-seconds="300"
        is-isolation-level-guaranteed="true"
        is-connection-validation-required="false"
        connection-validation-method="auto-commit"
        fail-all-connections="false"
        datasource-classname="oracle.jdbc.pool.OracleDataSource">
        <property name="URL"
            value="jdbc:oracle:thin:@iasperfsol12:1521:specdb"/>
        <property name="User" value="spec"/>
        <property name="Password" value="spec"/>
        <property name="MaxStatements" value="200"/>
        <property name="ImplicitCachingEnabled" value="true"/>
    </jdbc-connection-pool>
    <jdbc-resource enabled="true" pool-name="SPECjPool"
        jndi-name="jdbc/SPECjDB"/>
</resources>
Give it a try.
Resources
The sun-resources.xml File
Thanks, that worked. The datasource seems to have gone in okay and the app has deployed. However, from the doc you linked, I can't see how to add the other things I need (I've edited more detail about these into my question). This solution also means that I will have to (use profiles to?) build my EAR differently for IT, which is imperfect.
I somehow missed that you wanted to create resources other than datasources, and I've seen several threads reporting that the suggested approach won't work with GlassFish v2 for other resource types (like JMS resources). My bad.
So, given the current state, your options are (IMO):
contribute to Cargo to provide an "existing" configuration implementation for GlassFish v2
use the maven-glassfish-plugin as you suggested
I don't have any better suggestions.