I have googled this for the last half hour and found hits for Pentaho parameters and the like, but nothing that appears to ask or answer this question.
I have a set of reports that are the same for each customer, but need to connect to different databases depending upon the customer who is running the report.
So my idea is to pass the JNDI data source name to the report at runtime as a parameter, so that the customer will connect to the correct database.
Is this possible, or is there a better way of managing a common set of reports that are used by different customers running on different databases, all within the same single instance of the Pentaho engine?
OK, I have found a better solution using the little-documented multi-tenant feature.
1) Stop Pentaho
2) Modify (pentaho-solutions/system/pentahoObjects.spring.xml)
<!-- Original Code
<bean id="IDBDatasourceService" class="org.pentaho.platform.engine.services.connection.datasource.dbcp.DynamicallyPooledOrJndiDatasourceService" scope="singleton">
<property name="pooledDatasourceService" ref="pooledOrJndiDatasourceService" />
<property name="nonPooledDatasourceService" ref="nonPooledOrJndiDatasourceService" />
</bean>
-->
<!--Begin Tenant -->
<bean id="IDBDatasourceService" class="org.pentaho.platform.engine.services.connection.datasource.dbcp.tenantaware.TenantAwareLoginParsingDatasourceService"
scope="singleton">
<property name="requireTenantId" value="false" />
<property name="datasourceNameFormat" value="{1}-{0}" />
<property name="tenantSeparator" value="#" />
<property name="tenantOnLeft" value="false" />
</bean>
<!-- End Tenant -->
3) Add a suffix to the data sources (biserver-ce/tomcat/webapps/pentaho/META-INF/context.xml)
<Resource
name="jdbc/MYDBSRC-xxx"
auth="Container"
type="javax.sql.DataSource"
factory="org.apache.commons.dbcp.BasicDataSourceFactory"
maxActive="20"
maxIdle="5"
maxWait="10000"
username="XXXX"
password="XXXX"
driverClassName="net.sourceforge.jtds.jdbc.Driver"
url="jdbc:jtds:sqlserver://192.168.42.0:1433;DatabaseName=SOMEDB"
/>
<Resource
name="jdbc/MYDBSRC-aaa"
auth="Container"
type="javax.sql.DataSource"
factory="org.apache.commons.dbcp.BasicDataSourceFactory"
maxActive="20"
maxIdle="5"
maxWait="10000"
username="XXXX"
password="XXXX"
driverClassName="net.sourceforge.jtds.jdbc.Driver"
url="jdbc:jtds:sqlserver://192.168.42.0:1433;DatabaseName=AOTHERDB"
/>
4) Delete /tomcat/conf/Catalina/localhost/pentaho.xml
5) Restart Pentaho and create a user such as someone#xxx, etc.
6) Create a report using the JNDI Name "MYDBSRC"
7) Log in as someone#xxx and you will get a different report/datasource than you would by logging in as either user or user#aaa
Tadah!!
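For illustration, here is a rough, hypothetical sketch in plain Java (not Pentaho's actual code) of the name resolution the login-parsing service configured in step 2 is expected to perform: the tenant id is taken from the part of the login name after the "#" separator and combined with the report's JNDI name, which is how a report built against "MYDBSRC" ends up using the "MYDBSRC-xxx" resource for a user named someone#xxx. The argument order passed to the format is an assumption made for this sketch.
import java.text.MessageFormat;

public class TenantDatasourceNameSketch {
    public static void main(String[] args) {
        String login = "someone#xxx";        // user created in step 5
        String reportJndiName = "MYDBSRC";   // JNDI name used in the report (step 6)
        String tenantSeparator = "#";        // tenantSeparator from step 2

        // tenantOnLeft=false, so the tenant id is the part after the separator
        String tenantId = login.substring(login.indexOf(tenantSeparator) + 1);

        // datasourceNameFormat "{1}-{0}" combines the two parts (assumed argument order)
        String resolved = MessageFormat.format("{1}-{0}", tenantId, reportJndiName);

        System.out.println(resolved);        // MYDBSRC-xxx, matching context.xml
    }
}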
I have been using IntelliJ IDEA 15 for close to a year now, using the same project the whole time, and I have created Tasks to act as workspaces for various work assignments so I can group the files I've touched by assignment title. I've recently had an issue in my project workspace where I am basically being forced to create a new workspace, and thus a new project in IntelliJ. The problem is that this new project has none of my Task history in it.
Does anyone know if it's possible, and if so, how, to migrate this Task history from one project to another?
Thanks in advance!
Tasks are saved in YOUR_PROJECT/.idea/workspace.xml, so you can back up this file and, if needed, simply copy and paste the lines defining the tasks into the other workspace.xml file. Here is an example of one:
<configuration default="false" name="my-debug-task" type="JavascriptDebugType" factoryName="JavaScript Debug" uri="http://localhost:4200">
<mapping url="webpack:////home/marcin/Sprawozdania/Inzynierka/sqap/sqap-ui/src" local-file="$PROJECT_DIR$/sqap-ui/src" />
<mapping url="webpack:////home/marcin/Sprawozdania/Inzynierka/sqap/sqap-ui" local-file="$PROJECT_DIR$/sqap-ui" />
<method />
</configuration>
Additionally, in workspace.xml you need to add a reference to the copied task under:
<project>
<component>
<list>
like here:
<list size="3">
<item index="0" class="java.lang.String" itemvalue="JavaScript Debug.my-debug-task" />
<item index="1" class="java.lang.String" itemvalue="JavaScript Debug.Unnamed" />
<item index="2" class="java.lang.String" itemvalue="Maven.config start -devs" />
</list>
I'm using the latest version of the Jasig CAS server (4.0.0) with an LDAP server.
Users are stored under this LDAP structure: ou=Users,ou=SSOTEST,dc=mycompany,dc=com
What I want is to search for a user from a top level (example: ou=SSOTEST,dc=mycompany,dc=com).
The CAS server has an LdapPersonAttributeDao bean which looks for an object matching a search filter. Here is the code for this bean:
<bean id="ldapPersonAttributeDao"
class="org.jasig.cas.persondir.LdapPersonAttributeDao"
p:connectionFactory-ref="searchPooledLdapConnectionFactory"
p:baseDN="ou=SSOTEST,dc=company,dc=com"
p:searchControls-ref="searchControls"
p:searchFilter="uid={0}">
<property name="resultAttributeMapping">
<map>
<!--
| Key is LDAP attribute name, value is principal attribute name.
-->
<entry key="memberOf" value="userMemberOf" />
<entry key="cn" value="userCn" />
</map>
</property>
</bean>
And now the searchControls bean, which does a lookup at the SUBTREE_SCOPE (2) level (according to the SearchControls scope level values).
<bean id="searchControls"
class="javax.naming.directory.SearchControls"
p:searchScope="2"
p:countLimit="10" />
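For reference, a quick plain-Java check that the value 2 used for searchScope above is indeed the SUBTREE_SCOPE constant:
import javax.naming.directory.SearchControls;

public class ScopeCheck {
    public static void main(String[] args) {
        // SUBTREE_SCOPE is the javax.naming constant whose value is 2
        System.out.println(SearchControls.SUBTREE_SCOPE); // prints 2
    }
}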
When I run my CAS server and try to authenticate, everything works, but no extra attributes are returned.
I think the problem comes from searchScope, which doesn't seem to be set to the wanted value.
Here is the output log from the server:
<execute request=[org.ldaptive.SearchRequest#-1312441815::baseDn=ou=SSOTEST,dc=mycompany,dc=com, searchFilter=[org.ldaptive.SearchFilter#-3391
91059::filter=uid={0}, parameters={0=myuser}], returnAttributes=[], searchScope=null, timeLimit=0, sizeLimit=10 [...]
I know it's been some time since this question was asked, but I managed to fix the problem by adding:
<bean class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor" />
to deployerConfigContext.xml.
The cause of this issue was that the initialize method in LdapPersonAttributeDao was not being invoked, because the @PostConstruct annotation wasn't being processed. For this reason the searchScope variable was never set.
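To illustrate the mechanism with a minimal sketch (an assumed simplification, not the actual CAS class): a bean like the one below only applies the scope it was given inside an @PostConstruct method, so if no CommonAnnotationBeanPostProcessor is registered, that method is never called and the value configured in XML is never applied.
import javax.annotation.PostConstruct;
import javax.naming.directory.SearchControls;

public class PostConstructSketch {
    private final SearchControls searchControls = new SearchControls();
    private Integer searchScope; // injected from XML, e.g. p:searchScope="2"

    public void setSearchScope(Integer searchScope) {
        this.searchScope = searchScope;
    }

    @PostConstruct
    public void initialize() {
        // Without CommonAnnotationBeanPostProcessor, Spring never calls this method,
        // so the scope set in the XML never reaches the SearchControls object.
        if (searchScope != null) {
            searchControls.setSearchScope(searchScope);
        }
    }
}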
I'm using Spring 3.1.1, MyBatis 3.1.1, MySQL 5.0.67. My Spring configuration is below:
<bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource" destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.username}" />
<property name="password" value="${jdbc.password}" />
<property name="validationQuery" value="select 1"/>
<property name="testWhileIdle" value="true"/>
<property name="timeBetweenEvictionRunsMillis" value="14400000"/>
<property name="testOnBorrow" value="false"/>
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="configLocation" value="classpath:mybatis/myBatisConfig.xml"/>
</bean>
<bean id="sqlSessionTemplate" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg ref="sqlSessionFactory"/>
</bean>
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager"
p:dataSource-ref="dataSource"/>
<tx:annotation-driven transaction-manager="transactionManager"/>
And log4j.properties is below:
log4j.logger.org.springframework=DEBUG
log4j.logger.org.apache=DEBUG
log4j.logger.org.mybatis=DEBUG
log4j.logger.java.sql=DEBUG
log4j.logger.java.sql.Connection=DEBUG
log4j.logger.java.sql.Statement=DEBUG
log4j.logger.java.sql.PreparedStatement=DEBUG
log4j.logger.java.sql.ResultSet=DEBUG
With this configuration I can see the SQL statement that is executed and the parameters to that query, but I can't see the query result in the log. My log looks like this:
[org.mybatis.spring.SqlSessionUtils] - Creating a new SqlSession
[org.mybatis.spring.SqlSessionUtils] - SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#4ccdd1f] was not registered for synchronization because synchronization is not active
[org.springframework.jdbc.datasource.DataSourceUtils] - Fetching JDBC Connection from DataSource
[org.mybatis.spring.transaction.SpringManagedTransaction] - JDBC Connection [ProxyConnection[PooledConnection[com.mysql.jdbc.JDBC4Connection#3cfde82]]] will not be managed by Spring
[java.sql.Connection] - ooo Using Connection [ProxyConnection[PooledConnection[com.mysql.jdbc.JDBC4Connection#3cfde82]]]
[java.sql.Connection] - ==> Preparing: SELECT col FROM table WHERE col1=? AND col2=?
[java.sql.PreparedStatement] - ==> Parameters: 93(Integer), 4(Integer)
[org.mybatis.spring.SqlSessionUtils] - Closing non transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#4ccdd1f]
[org.springframework.jdbc.datasource.DataSourceUtils] - Returning JDBC Connection to DataSource
Is there any way to print a log that includes the query result?
Two ways:
log4j.logger.java.sql.ResultSet=TRACE
Or use the mapper namespaces to set logging. This is the only logging method in MyBatis 3.2:
http://mybatis.github.io/mybatis-3/logging.html
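For example, to get statement and result logging for a single mapper, you can add a logger keyed on the mapper namespace to log4j.properties (com.example.mapper.MyMapper below is a placeholder for your own mapper's fully qualified namespace):
log4j.logger.com.example.mapper.MyMapper=TRACE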
What works for me is:
log4j.logger.org.springframework.jdbc.core.StatementCreatorUtils=TRACE
This prints entries like these:
...
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 1, parameter value [12345], value class [java.math.BigDecimal], SQL type 3
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 2, parameter value [ADDRESS], value class [java.lang.String], SQL type 12
TRACE SimpleAsyncTaskExecutor-1 org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 3, parameter value [20130916], value class [java.math.BigDecimal], SQL type 3
...
In my case I am running a query inside a JdbcBatchItemWriter in a Spring Batch application, which uses named params, so it may not be a general solution.
It seems that I don't completely understand how an XA transaction works. I thought that it is atomic: that when I commit a transaction, new messages and new data become available at the same time.
This misunderstanding led me to the following issue:
New rows are inserted into the DB and a message is sent to a queue in a transactional route. In another route the message is received. This route then tries to perform some manipulations on the rows that were inserted in the previous route, but it doesn't see them!
The second route is configured to roll the message back to the queue when an exception happens. And I see that on the second attempt the route does see the rows!
In conclusion, I would ask the following questions:
Is an XA transaction really atomic?
If not, how can I configure the commit order for my transactional resources?
Additional note: the issue is found in Fuse ESB/ServiceMix 4.4.1
To Jake:
My Camel context configuration looks like the following:
<osgi:reference id="osgiPlatformTransactionManager" interface="org.springframework.transaction.PlatformTransactionManager"/>
<osgi:reference id="osgiJtaTransactionManager" interface="javax.transaction.TransactionManager"/>
<osgi:reference id="myDataSource"
interface="javax.sql.DataSource"
filter="(osgi.jndi.service.name=jdbc/postgresXADB)"/>
<bean id="PROPAGATION_MANDATORY" class="org.apache.camel.spring.spi.SpringTransactionPolicy">
<property name="transactionManager" ref="osgiPlatformTransactionManager"/>
<property name="propagationBehaviorName" value="PROPAGATION_MANDATORY"/>
</bean>
<bean id="PROPAGATION_REQUIRED" class="org.apache.camel.spring.spi.SpringTransactionPolicy">
<property name="transactionManager" ref="osgiPlatformTransactionManager"/>
<property name="propagationBehaviorName" value="PROPAGATION_REQUIRED"/>
</bean>
<bean id="jmstx" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="configuration" ref="jmsTxConfig" />
</bean>
<bean id="jmsTxConfig" class="org.apache.camel.component.jms.JmsConfiguration">
<property name="connectionFactory" ref="jmsXaPoolConnectionFactory"/>
<property name="transactionManager" ref="osgiPlatformTransactionManager"/>
<property name="transacted" value="false"/>
<property name="cacheLevelName" value="CACHE_NONE"/>
<property name="concurrentConsumers" value="${jms.concurrentConsumers}" />
</bean>
<bean id="jmsXaPoolConnectionFactory" class="org.apache.activemq.pool.XaPooledConnectionFactory">
<property name="maxConnections" value="${jms.maxConnections}" />
<property name="connectionFactory" ref="jmsXaConnectionFactory" />
<property name="transactionManager" ref="osgiJtaTransactionManager" />
</bean>
<bean id="jmsXaConnectionFactory" class="org.apache.activemq.ActiveMQXAConnectionFactory">
<property name="brokerURL" value="${jms.broker.url}"/>
<property name="redeliveryPolicy">
<bean class="org.apache.activemq.RedeliveryPolicy">
<property name="maximumRedeliveries" value="-1"/>
<property name="initialRedeliveryDelay" value="2000" />
<property name="redeliveryDelay" value="5000" />
</bean>
</property>
</bean>
The DB data source is configured as follows:
<bean id="myDataSource" class="org.postgresql.xa.PGXADataSource">
<property name="serverName" value="${db.host}"/>
<property name="databaseName" value="${db.name}"/>
<property name="portNumber" value="${db.port}"/>
<property name="user" value="${db.user}"/>
<property name="password" value="${db.password}"/>
</bean>
<service ref="myDataSource" interface="javax.sql.XADataSource">
<service-properties>
<entry key="osgi.jndi.service.name" value="jdbc/postgresXADB"/>
<entry key="datasource" value="postgresXADB"/>
</service-properties>
</service>
I'm not an expert in this stuff, but my view would be that the atomicity XA provides guarantees only that:
Either the entire commit occurs or the entire commit rolls back.
That the whole commit/rollback completes before the commit request returns to whoever called it.
I don't think any guarantee is made regarding the individual participants completing at the same instant, nor is there any kind of 'commit dependency tree' maintained to guarantee that subsequent processing only happens on participants that have committed.
I think to achieve what you want, you might need to put the message queue outside the main transaction... Which destroys the whole point of the transaction in the first place :(
I think you might just have to put a retry/timeout loop in your downstream processing. The alternative might be to explore the concurrency options to see if you can allow that downstream transaction to 'see' the upstream one.
Hopefully this answer will prompt someone with more knowledge of this stuff to chip in!
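A rough sketch of that retry/timeout idea (every name here is a placeholder, not part of any framework API): keep polling for the rows the upstream route inserted, giving the database branch of the XA commit a bounded amount of time to become visible before failing so the message is redelivered.
import java.util.List;

public class RetryUntilVisible {

    // Hypothetical lookup for the rows the upstream route should have inserted.
    interface RowLookup {
        List<String> findRowsForMessage(String messageId);
    }

    static List<String> waitForRows(RowLookup lookup, String messageId, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        List<String> rows = lookup.findRowsForMessage(messageId);
        while (rows.isEmpty() && System.currentTimeMillis() < deadline) {
            Thread.sleep(500); // small back-off between attempts
            rows = lookup.findRowsForMessage(messageId);
        }
        if (rows.isEmpty()) {
            // Failing here lets the transactional consumer roll back and redeliver,
            // which is effectively what the question describes happening today.
            throw new IllegalStateException("Rows not visible within timeout for " + messageId);
        }
        return rows;
    }
}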
For XA transactions, when you commit, Camel (via the transaction manager) will run an XA commit for each DB, ensuring that all the XA branches are committed eventually, but not simultaneously. Since the data in each DB is committed separately, the overall commit is not instantaneous, and it is impossible to make data modifications across databases appear atomic to other transactions.
For your application, there may be two ways to avoid the problem.
Don't use XA transactions; instead use the Outbox pattern. You can update part of the data, send a message to the queue, and then return. Any order-dependent operation is put into the queue, where you can easily control the order.
Based on your XA solution, when you want to read the newest data, you issue a SELECT ... FOR UPDATE, which will wait for the row lock held by the unfinished XA transaction. When the XA transaction has completed, the SELECT ... FOR UPDATE will return the newest data (see the sketch below).
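A minimal JDBC sketch of that second option (the table and column names are placeholders): the SELECT ... FOR UPDATE blocks on the row lock held by the still-committing XA branch and returns the newest data once that branch has finished.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class SelectForUpdateSketch {

    static String readCommittedStatus(DataSource dataSource, long orderId) throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT status FROM orders WHERE id = ? FOR UPDATE")) {
            ps.setLong(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                // Blocks until the row lock held by the unfinished XA branch is released
                return rs.next() ? rs.getString("status") : null;
            }
        }
    }
}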
Your AMQ_SCHEDULED_DELAY header is a workaround, but it does not work when exceptions happen.
Hi, I have an issue with hbm2ddl.import_files: it doesn't seem to work and nothing about it appears in the log.
This is my configuration:
<property name="hibernateProperties">
<value>
hibernate.dialect=${hibernate.dialect}
hibernate.default_schema=${hibernate.default_schema}
hibernate.jdbc.batch_size=${hibernate.jdbc.batch_size}
hibernate.show_sql=${hibernate.show_sql}
hibernate.hbm2ddl.auto=${hibernate.hbm2ddl.auto}
hibernate.id.new_generator_mappings=${hibernate.id.new_generator_mappings}
hibernate.hbm2ddl.import_files=${hibernate.hbm2ddl.import_files}
<!-- Auto Generated Schemas and tables not good for production
hibernate.hbm2ddl.auto=update-->
</value>
</property>
hibernate.hbm2ddl.import_files is set to /import.sql, and the file is:
insert into DEPARTAMENTO (NOMBRE_DEPART,REFERENCIA_DEPART) values ('AMAZONAS')
The jdbc.properties:
#org.hibernate.dialect.PostgreSQLDialect
hibernate.default_schema = "DBMERCANCIAS"
hibernate.show_sql = true
hibernate.id.new_generator_mappings = true
hibernate.hbm2ddl.auto = create
hibernate.jdbc.batch_size = 5
#Default the factory to use to instantiate transactions org.transaction.JDBCTransactionFactory
hibernate.transaction.factory_class=org.transaction.JDBCTransactionFactory
#Initialize values statements only on create-drop or create
hibernate.hbm2ddl.import_files = /import.sql
The database is PostgreSQL 9.1.1, with Spring 3.1.0.RELEASE and Hibernate 4.1.2.Final. hibernate.hbm2ddl.auto is set to "create", and the tables and the schema are created, but the insert SQL statement does not run. Why? I cannot see anywhere in the log where this command runs.
My error was the location in the Hibernate properties.
hibernate.hbm2ddl.import_files = /META-INF/spring/import.sql
is the correct location.
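A side note on the import.sql shown in the question: the insert lists two columns but supplies only one value, so even once the file is picked up that statement would fail. Something like the following (the second value is just a placeholder) matches the column list:
insert into DEPARTAMENTO (NOMBRE_DEPART, REFERENCIA_DEPART) values ('AMAZONAS', 'REF-001');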
You could put import.sql on the classpath (/classes/import.sql) and remove the hibernate.hbm2ddl.import_files property from the Hibernate configuration/properties.
NOTE: hibernate.hbm2ddl.auto must be set to create
<bean id="sessionFactory"
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">${hibernate.dialect}</prop>
<prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
<prop key="hibernate.hbm2ddl.auto">create</prop>
</props>
</property>
</bean>