SQL Connection in Spring ServiceMix Camel
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
<property name="url" value="jdbc:sqlserver://localhost:1433/orderdb"/>
<property name="username" value="abc"/>
<property name="password" value="pqr"/>
</bean>
When I try to make a connection using dataSource.getConnection(), it is not allowed. Please help.
*****Connection Code **********
import java.sql.SQLException;
import java.sql.Statement;

import javax.sql.DataSource;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DatabaseBeanH2 {

    private DataSource dataSource;

    private static final Logger LOGGER = LoggerFactory.getLogger(DatabaseBeanH2.class);

    public DatabaseBeanH2() {
    }

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void create() throws SQLException {
        Statement sta = dataSource.getConnection().createStatement();
        try {
            sta.executeUpdate("CREATE TABLE orders ( id INT NOT NULL PRIMARY KEY AUTO_INCREMENT, item VARCHAR(50), amount INT, description VARCHAR(300), processed BOOLEAN, consumed BOOLEAN);");
        } catch (SQLException e) {
            LOGGER.info("Table orders already exists");
        }
    }

    public void destroy() throws SQLException {
        dataSource.getConnection().close();
    }
}
You have to set up your database using the following code:
<!-- this is the JDBC data source which uses an in-memory only Apache Derby database -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="org.apache.derby.jdbc.EmbeddedDriver"/>
<property name="url" value="jdbc:derby:memory:orders;create=true"/>
<property name="username" value=""/>
<property name="password" value=""/>
</bean>
<!-- bean which creates/destroys the database table for this example -->
<bean id="initDatabase" class="org.apache.camel.example.sql.DatabaseBean"
init-method="create" destroy-method="destroy">
<property name="dataSource" ref="dataSource"/>
</bean>
<!-- configure the Camel SQL component to use the JDBC data source -->
<bean id="sql" class="org.apache.camel.component.sql.SqlComponent">
<property name="dataSource" ref="dataSource"/>
</bean>
Please check this link http://camel.apache.org/sql-example.html
You have to inject the dataSource bean into your DatabaseBeanH2 in the Camel/Spring context, something like this:
<bean id="databaseBean" class="my.package.DatabaseBeanH2">
<property name="dataSource" ref="dataSource" />
</bean>
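With the dataSource, the sql component and the databaseBean wired up like that, a route can then use the sql endpoint. As a rough sketch (the timer name, period, query and log endpoint are placeholders for illustration, not taken from the linked example):
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <!-- Sketch only: poll every 5 seconds and select unprocessed orders via the sql component. -->
    <route>
        <from uri="timer://pollOrders?period=5000"/>
        <to uri="sql:select * from orders where processed = false"/>
        <to uri="log:orders"/>
    </route>
</camelContext>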
Related
I have a query that stalls/hangs over large argument inputs. The same code works well on smaller SQL argument inputs. The code is as follows:
subEntries = dataRDD.sql("SELECT v.id,v.sub,v.obj FROM VPRow v JOIN table(id bigint = ?) i ON v.id = i.id",new Object[] {subKeyEntries.toArray()});
LOG.debug("Reading : "+subEntries.count());
Please note that the Ignite documentation mentions that the input argument can be of any size: "Here you can provide object array (Object[]) of any length as a parameter". The parameter passed to the query in the stalling case was an array of 23641 long values.
My Spring configuration file is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<!-- Set a cache name. -->
<property name="name" value="dataRDD"/>
<!-- Set a cache mode. -->
<property name="cacheMode" value="PARTITIONED"/>
<!-- Index Integer pairs used in the example. -->
<property name="indexedTypes">
<list>
<value>java.lang.Long</value>
<value>sample.VPRow</value>
</list>
</property>
<property name="backups" value="0"/>
</bean>
</list>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<value>[IP1]</value>
<value>[...]</value>
<value>[IP5]</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
The VPRow class is defined as follows
public class VPRow implements Serializable {

    @QuerySqlField
    private long id;

    @QuerySqlField
    private String sub;

    @QuerySqlField
    private String obj;

    public VPRow(long id, String sub, String obj) {
        this.id = id;
        this.sub = sub;
        this.obj = obj;
    }

    ...
}
Databases usually have limitations on the "IN" operator.
You can divide this SQL query into parts and run them simultaneously.
Please see paragraph 2 here: https://apacheignite.readme.io/docs/sql-performance-and-debugging#sql-performance-and-usability-considerations
The IN operator doesn't use indexes, so with a long list like this the query will do too many scans. Changing the query as described there should help.
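If you want to try the splitting approach, a rough sketch of chunking the key array could look like this (the chunk size is an assumption, and the chunks could also be submitted to an executor to run them simultaneously):
// Sketch only: run the same query per chunk of keys instead of one huge array.
// Requires java.util.Arrays; dataRDD.sql(...) is the call from the question.
Object[] allKeys = subKeyEntries.toArray();
int chunkSize = 1000; // assumption: tune for your data set
long total = 0;
for (int start = 0; start < allKeys.length; start += chunkSize) {
    Object[] chunk = Arrays.copyOfRange(allKeys, start,
            Math.min(start + chunkSize, allKeys.length));
    total += dataRDD.sql(
            "SELECT v.id,v.sub,v.obj FROM VPRow v JOIN table(id bigint = ?) i ON v.id = i.id",
            new Object[] { chunk }).count();
}
LOG.debug("Reading : " + total);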
I have worked on a Spring Batch Admin example, taking the Spring Batch Talk as a reference. The example ran exactly how I wanted. Using a RabbitMQ server, I have established communication between master and slave. But how can I know which partition is running on the master and which is running on the slave? Is there any way to view it from the Spring Batch Admin UI?
While doing the partitioning in ColumnRangePartitioner, I have added one more value, partitionId, to the ExecutionContext.
value.putLong("minValue", start);
value.putLong("maxValue", end);
value.putLong("partitionId", number);
I have added a new column called PARTITIONINFO to TARGET, which stores details such as which port has been scanned in which partition. While reading ports, I have explicitly added the values.
<bean id="targetItemReader" class="org.springframework.batch.item.database.JdbcPagingItemReader" scope="step">
<property name="dataSource" ref="dataSource" />
<property name="queryProvider">
<bean
class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="selectClause" value="ID, IP, PORT, CONNECTED, BANNER, :partitionId as PARTITIONINFO" />
<property name="fromClause" value="FROM TARGET" />
<property name="whereClause" value="ID >= :minId AND ID <= :maxId AND CONNECTED IS NULL"/>
<property name="sortKey" value="ID" />
</bean>
</property>
<property name="pageSize" value="10" />
<property name="parameterValues">
<map>
<entry key="minId" value="#{stepExecutionContext[minValue]}"/>
<entry key="maxId" value="#{stepExecutionContext[maxValue]}"/>
<entry key="partitionId" value="#{stepExecutionContext[partitionId]}" />
</map>
</property>
<property name="rowMapper">
<bean class="com.michaelminella.springbatch.domain.TargetRowMapper"/>
</property>
</bean>
Later, I ran an update query at the end and saved the details in the table.
<bean id="targetWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
<property name="assertUpdates" value="true" />
<property name="itemSqlParameterSourceProvider">
<bean class="org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider" />
</property>
<property name="sql" value="UPDATE TARGET SET CONNECTED = :connected, BANNER = :banner, PARTITIONINFO = :partitionId WHERE ID = :id" />
<property name="dataSource" ref="dataSource" />
</bean>
I have added a new field to the Target bean which stores the partitionId information.
private int partitionId;

public int getPartitionId() {
    return partitionId;
}

public void setPartitionId(int partitionId) {
    this.partitionId = partitionId;
}
DB Changes in business-schema-mysql.sql
DROP TABLE IF EXISTS TARGET;
CREATE TABLE TARGET (
ID BIGINT NOT NULL PRIMARY KEY ,
IP VARCHAR(15) NOT NULL,
PORT INT NOT NULL,
CONNECTED BOOLEAN NULL,
BANNER VARCHAR(255),
PARTITIONINFO INT
) ENGINE=InnoDB;
I'm getting an odd error. If I pass a valid user/password to my Shiro LDAP realm, all is OK, but if the combination is not valid it throws an exception and keeps looping through the Shiro realm code. In the debugger it just stays in Shiro code, except for my one overridden method:
public class MyJndiLdapRealm extends JndiLdapRealm {

    public MyJndiLdapRealm() {
        super();
    }

    @Override
    protected AuthenticationInfo queryForAuthenticationInfo(AuthenticationToken token,
            LdapContextFactory ldapContextFactory) throws NamingException {

        Object principal = token.getPrincipal();
        Object credentials = token.getCredentials();
        principal = getLdapPrincipal(token);

        LdapContext ctx = null;
        try {
            ctx = ldapContextFactory.getLdapContext(principal, credentials);
            // Context was opened successfully, which means the credentials were valid.
            // Return the AuthenticationInfo:
            return createAuthenticationInfo(token, principal, credentials, ctx);
        } finally {
            LdapUtils.closeContext(ctx);
        }
    }
}
<bean id="shiroFilter" class="org.apache.shiro.spring.web.ShiroFilterFactoryBean">
<property name="securityManager" ref="securityManager"/>
<property name="loginUrl" value="/ldapLogin"/>
<property name="unauthorizedUrl" value="/ldapLogin"/>
<property name="successUrl" value="/ldapLogin"/>
<property name="filterChainDefinitions">
<value>
[urls]
/** = ssl[8443],authc, customAuthFilter
[main]
/logout = logout
</value>
</property>
</bean>
<bean id="securityManager" class="org.apache.shiro.web.mgt.DefaultWebSecurityManager">
<property name="realms">
<list>
<ref bean="authenticateLdapRealm"/>
<ref bean="authenticateDbRolesRealm"/>
<ref bean="DbAuthorizingRealm"/>
</list>
</property>
<property name="authenticator.authenticationStrategy">
<bean class="org.apache.shiro.authc.pam.AllSuccessfulStrategy"/>
</property>
</bean>
<bean id="lifecycleBeanPostProcessor" class="org.apache.shiro.spring.LifecycleBeanPostProcessor"/>
<bean id="authenticateLdapRealm" class="security.MyJndiLdapRealm">
<property name="contextFactory" ref="contextFactory" />
<property name="userDnTemplate" value="cn={0},ou=REMOTE,o=OFF" />
</bean>
<bean id="contextFactory" class="org.apache.shiro.realm.ldap.JndiLdapContextFactory">
<property name="url" value="ldap://172.25.3.91:389"/>
</bean>
<bean id="authenticateDbRolesRealm" class="security.DbRolesRealm">
</bean>
<bean id="SwiDbAuthorizingRealm" class="security.DbAuthorizingRealm">
</bean>
<bean class="org.springframework.aop.framework.autoproxy.DefaultAdvisorAutoProxyCreator" depends-on="lifecycleBeanPostProcessor"/>
<bean class="org.apache.shiro.spring.security.interceptor.AuthorizationAttributeSourceAdvisor">
<property name="securityManager" ref="securityManager"/>
</bean>
Somehow my custom filter was the problem. I switched to PassThruAuthenticationFilter and the problem was solved.
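For reference, swapping in the built-in filter is only a small change to the shiroFilter bean; something along these lines (the bean id and the chain definition are assumptions based on the config above):
<!-- Sketch only: register Shiro's PassThruAuthenticationFilter under the authc key
     instead of the custom filter; bean id and chain paths are assumptions. -->
<bean id="passthruAuthFilter" class="org.apache.shiro.web.filter.authc.PassThruAuthenticationFilter"/>

<bean id="shiroFilter" class="org.apache.shiro.spring.web.ShiroFilterFactoryBean">
    <property name="securityManager" ref="securityManager"/>
    <property name="loginUrl" value="/ldapLogin"/>
    <property name="filters">
        <map>
            <entry key="authc" value-ref="passthruAuthFilter"/>
        </map>
    </property>
    <property name="filterChainDefinitions">
        <value>
            /** = ssl[8443], authc
        </value>
    </property>
</bean>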
I'm trying to set up Spring's DefaultMessageListenerContainer class to redeliver messages after an exception is thrown or session.rollback() is called. I am also trying to get this running on the GlassFish 3.1.2 web profile.
When calling session.rollback() in the onMessage() method of my SessionAwareMessageListener, I get an exception with the message saying: MessageDispatcher - [C4024]: The session is not transacted. I don't see this problem with ActiveMQ, but of course that configuration is different because I'm not using it in an application server.
Has anyone here gotten this working? My configuration follows:
<bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate">
<property name="environment">
<props>
<prop key="java.naming.factory.initial">com.sun.enterprise.naming.SerialInitContextFactory</prop>
<prop key="java.naming.provider.url">${jms.jndicontext.url}</prop>
<prop key="java.naming.factory.state">com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl</prop>
<prop key="java.naming.factory.url.pkgs">com.sun.enterprise.naming</prop>
</props>
</property>
</bean>
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="jndiTemplate" />
<property name="jndiName" value="${jms.connection.factory}" />
</bean>
<bean id="jmsTemplate"
class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="jmsConnectionFactory"/>
<property name="defaultDestination" ref="jmsServiceQueue"/>
</bean>
<bean id="jmsServiceProducer"
class="net.exchangesolutions.services.messaging.service.jms.JmsMessageServiceProducerImpl">
<property name="serviceTemplate" ref="jmsTemplate"/>
<property name="serviceDestination" ref="jmsServiceQueue"/>
</bean>
<bean id="myMessageListener"
class="com.myorg.jms.MessageDispatcher"/>
<bean id="jmsServiceContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="jmsConnectionFactory"/>
<property name="destination" ref="jmsServiceQueue"/>
<property name="messageListener" ref="myMessageListener"/>
<property name="errorHandler" ref="jmsErrorHandler" />
<property name="receiveTimeout" value="180000"/>
<property name="concurrentConsumers" value="1"/>
<property name="cacheLevelName" value="CACHE_NONE"/>
<property name="pubSubNoLocal" value="true"/>
<property name="sessionTransacted" value="true"/>
<property name="sessionAcknowledgeMode" value="2" />
<property name="transactionManager" ref="jmsTransactionManager"/>
</bean>
<bean id="jmsTransactionManager" class="org.springframework.jms.connection.JmsTransactionManager">
<property name="connectionFactory" ref="jmsConnectionFactory"/>
</bean>
With acknowledge="auto", the message is acknowledged before listener execution, so the message is deleted from the queue.
I have also achieved the DLQ scenario in a Spring application by making the following changes to your code.
First, we set acknowledge="transacted", since we want guaranteed redelivery in case an exception is thrown, and transacted acknowledgement for successful listener execution:
<jms:listener-container container-type="default" connection-factory="connectionFactory" acknowledge="transacted">
Next, since we want to throw a JMSException, we implement SessionAwareMessageListener:
public class MyMessageQueueListener implements SessionAwareMessageListener {

    public void onMessage(Message message, Session session) throws JMSException {
        // Do something
        if (success) {
            // Do nothing - the transaction commits and the message is acknowledged
        } else {
            // Throw an exception so the message is redelivered
            throw new JMSException("..exception");
        }
    }
}
I have tested this, and it seems to work fine.
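If you want to stay with the plain DefaultMessageListenerContainer from your original config, the equivalent of acknowledge="transacted" is the sessionTransacted flag; here is a trimmed sketch keeping only the properties relevant to redelivery (whether the GlassFish-managed connection factory honours local transactions is a separate question):
<!-- Sketch only: local JMS transactions on the container itself. An exception in
     onMessage (or session.rollback()) rolls the local transaction back and the
     message is redelivered; no separate sessionAcknowledgeMode is needed. -->
<bean id="jmsServiceContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="jmsConnectionFactory"/>
    <property name="destination" ref="jmsServiceQueue"/>
    <property name="messageListener" ref="myMessageListener"/>
    <property name="sessionTransacted" value="true"/>
</bean>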
I have a problem with setting permissions on an existing node (the "Sites" folder). I have a group and I need to give it FullControl permission on the "Sites" folder. I used the following XML for this:
<cm:folder view:childName="cm:Sites">
<view:acl>
<view:ace view:access="ALLOWED">
<view:authority>GROUP_NOTEBOOK_PROJECT_CREATOR_GROUP</view:authority>
<view:permission>FullControl</view:permission>
</view:ace>
</view:acl>
<view:properties>
<cm:name>Sites</cm:name>
<sys:node-uuid>1e6f0610-a018-4966-ab37-c71e809dc6ed</sys:node-uuid>
</view:properties>
</cm:folder>
and the following config context:
<bean id="com.agilent.datastore.notebook.server.systemBootstrap" class="org.alfresco.repo.module.ImporterModuleComponent"
parent="module.baseComponent">
<property name="moduleId" value="${artifactId}" />
<property name="name" value="${name}" />
<property name="description" value="${description}" />
<property name="sinceVersion" value="${noSnapshotVersion}.${buildNumber}" />
<property name="appliesFromVersion" value="${noSnapshotVersion}.${buildNumber}" />
<!-- Uncomment next line if you want to execute bootstrap again -->
<!-- property name="executeOnceOnly" value="false" / -->
<property name="importer" ref="spacesBootstrap" />
<property name="bootstrapViews">
<list>
<props>
<prop key="uuidBinding">UPDATE_EXISTING</prop>
<prop key="path">/${spaces.company_home.childname}</prop>
<prop key="location">alfresco/extension/agilent/sites.acp</prop>
But when I bootstrap this folder, I get the exception: Cannot insert duplicate key row in object 'dbo.alf_child_assoc' with unique index 'parent_node_id'.; nested exception is java.sql.SQLException: Cannot insert duplicate key row in object 'dbo.alf_child_assoc' with unique index 'parent_node_id'.
The best way to achieve what you want is to write a patch, that is, a Java class that extends the Alfresco AbstractPatch class.
In the applyInternal method you first get hold of the Sites folder, preferably with an XPath search, since this uses the nodeService in the background. Solr won't be available during the execution of this code since the patch is run during bootstrap.
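A minimal sketch of that lookup and the permission change inside applyInternal could look like this (the XPath, the service fields and the "FullControl" string are assumptions for this use case; the complete class in the answer further down shows the full pattern):
// Sketch only: resolve the Sites folder by XPath and grant the group FullControl.
// Assumes nodeService, searchService and namespaceService are available on the patch
// and that a permissionService has been injected.
NodeRef rootRef = nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
List<NodeRef> refs = searchService.selectNodes(
        rootRef, "/app:company_home/cm:Sites", null, namespaceService, false);
permissionService.setPermission(refs.get(0),
        "GROUP_NOTEBOOK_PROJECT_CREATOR_GROUP", "FullControl", true);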
Declare your patch in a Spring context file like this:
<bean id="patch.setPermissionsOnSitesFolderPatch" class="org.yourdomain.alfresco.patch.SetPermissionOnSitesFolderPatch" parent="basePatch">
<property name="id">
<value>patch.patch.setPermissionsOnSitesFolderPatch</value>
</property>
<property name="description">
<value>patch.setPermissionsOnSitesFolderPatch.description</value>
</property>
<property name="fixesFromSchema">
<value>0</value>
</property>
<property name="fixesToSchema">
<value>${version.schema}</value>
</property>
<property name="targetSchema">
<value>10000</value>
</property>
<property name="force" value="true" />
<property name="repository" ref="repositoryHelper"/>
</bean>
To complete the answer by @billerby, you will also need a Java class to go along with that snippet. The Alfresco docs contain a good example. Using that, this is what I came up with for my use case:
Note that I'm using Lombok, but that's just for convenience.
@Slf4j // assumption: Lombok's @Slf4j provides the "log" field used below
public class UpdatePermissionsPatch extends AbstractPatch {

    /**
     * The Alfresco Service Registry that gives access to all public content services in Alfresco.
     */
    @Setter private ServiceRegistry serviceRegistry;

    /* Properties */
    @Setter private String path;
    @Setter private String authority;
    @Setter private String permission;
    @Setter private boolean allowed;

    /** This will clear permissions for the specified authority if set to true */
    @Setter private boolean clearPermissions;

    private String getSuccessId() {
        return getId() + ".result";
    }

    private String getErrorId() {
        return getId() + ".error";
    }

    @Override
    protected String applyInternal() throws Exception {
        log.info("Starting execution of patch: {}", I18NUtil.getMessage(getId()));

        // Get the store reference for the Repository store that contains live content
        StoreRef store = StoreRef.STORE_REF_WORKSPACE_SPACESSTORE;

        // Get root node for store
        NodeRef rootRef = serviceRegistry.getNodeService().getRootNode(store);

        // Do the patch work
        setPermissions(getWipNodeRef(rootRef));

        log.info("Finished execution of patch: {}", I18NUtil.getMessage(getId()));
        return I18NUtil.getMessage(getSuccessId());
    }

    private void setPermissions(NodeRef nodeRef) {
        PermissionService permsService = serviceRegistry.getPermissionService();
        if (clearPermissions) {
            permsService.clearPermission(nodeRef, authority);
        }
        permsService.setPermission(nodeRef, authority, permission, allowed);
    }

    private NodeRef getWipNodeRef(NodeRef rootNodeRef) {
        NamespaceService nsService = serviceRegistry.getNamespaceService();
        List<NodeRef> refs = searchService.selectNodes(rootNodeRef, path, null, nsService, false);
        if (refs.size() != 1) {
            throw new AlfrescoRuntimeException(I18NUtil.getMessage(getErrorId(),
                    String.format("Node could not be found, XPATH query %s returned %d nodes.", path, refs.size())
            ));
        }
        return refs.get(0);
    }
}
And your bootstrap context XML will need to include something like this:
<bean
id="org.tutorial.folderUpdateWipPermissions"
class="org.tutorial.patch.UpdatePermissionsPatch"
parent="basePatch"
>
<property name="id" value="org.tutorial.bootstrap.patch.folderUpdateWipPermissions" />
<property name="description" value="org.tutorial.bootstrap.patch.folderUpdateWipPermissions.description" />
<property name="fixesFromSchema" value="0" />
<property name="fixesToSchema" value="${version.schema}" />
<property name="targetSchema" value="100003" />
<property name="serviceRegistry">
<ref bean="ServiceRegistry"/>
</property>
<property name="path" value="/${spaces.company_home.childname}/cm:Work_x0020_In_x0020_Progress" />
<property name="authority" value="GROUP_MyGroup" />
<property name="permission" value="Consumer" />
<property name="allowed" value="true" />
<property name="clearPermissions" value="true" />
</bean>