How to read properties in a SQL file to replace placeholders/environment variables?

I have a Flyway SQL script which inserts some data that is environment specific (like deployment URLs). I keep these values in a properties file and want to use the same properties in the SQL file. I can do this with an Ant replace task or sed from a bash script, but then I need to run the script/Ant target manually. Is there any other way to read properties in a SQL file, as environment variables or by replacing placeholders?

I found a solution for this using Flyway: we can use Flyway placeholders.
<bean id="flyway" class="com.googlecode.flyway.core.Flyway"
init-method="migrate" scope="singleton">
<property name="dataSource" ref="dataSource" />
<property name="disableInitCheck" value="true"></property>
<property name="locations">
<list>
<value>migration-sql</value>
</list>
</property>
<property name="placeholders">
<map>
<entry key="key1" value="${value1}"></entry>
<entry key="key2" value="${value2}"></entry>
</map>
</property>
</bean>
properties file:
value1= value of key 1
value2= value of key 2
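A migration script under migration-sql can then reference the placeholders directly, and Flyway substitutes them at migration time. A minimal sketch (the table and column names are illustrative):
INSERT INTO app_config (config_key, config_value) VALUES ('deployment.url', '${key1}');
INSERT INTO app_config (config_key, config_value) VALUES ('support.contact', '${key2}');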
References: Stack Overflow, Flyway feature request.

Related

Ignite QueryEntity Based Configuration for C++?

<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="mycache"/>
<!-- Configure query entities -->
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<!-- Setting indexed type's key class -->
<property name="keyType" value="java.lang.Long"/>
<!-- Setting indexed type's value class -->
<property name="valueType"
value="org.apache.ignite.examples.Person"/>
<!-- Defining fields that will be either indexed or queryable.
Indexed fields are added to 'indexes' list below.-->
<property name="fields">
<map>
<entry key="id" value="java.lang.Long"/>
<entry key="name" value="java.lang.String"/>
<entry key="salary" value="java.lang.Long "/>
</map>
</property>
<!-- Defining indexed fields.-->
<property name="indexes">
<list>
<!-- Single field (aka. column) index -->
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="id"/>
</bean>
<!-- Group index. -->
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg>
<list>
<value>id</value>
<value>salary</value>
</list>
</constructor-arg>
<constructor-arg value="SORTED"/>
</bean>
</list>
</property>
</bean>
</list>
</property>
</bean>
I understand that the above XML configuration can be used to define a SQL entity in Ignite with indexes. The documentation is easier to follow from a code perspective in Java or .NET because an API is available. Since we do most of our development in C++, where the API is not available, we would like to know a few more details before using the XML configuration. Could anyone please answer the points below?
1. Where can this configuration file be used? On the server side, the client side (thin and thick), or both?
2. Is it possible to change the field names, types, and indexes once the entity has been created and data has been loaded?
3. <property name="valueType" value="org.apache.ignite.examples.Person"/> — if we are not mistaken, the value here is taken from a namespace in a DLL (for example, in C#), but how does Ignite know the location of the DLL or the namespace to load it from? Where should the binaries be kept?
4. In the case of C++, what binary file can be used to define the value type? A .lib, a .dll, or something else?
The C++ thick client can use the XML config; see IgniteConfiguration.springCfgPath.
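A minimal sketch of starting a C++ thick client from such a file (the config path is an assumption; whether the node is a client or a server is controlled by the clientMode property in the Spring XML):
#include <ignite/ignition.h>

using namespace ignite;

int main()
{
    IgniteConfiguration cfg;

    // Point the node at the Spring XML holding the CacheConfiguration above
    // (the file path is a placeholder).
    cfg.springCfgPath = "/path/to/cache-config.xml";

    // Starts the node and joins the cluster.
    Ignite node = Ignition::Start(cfg);

    return 0;
}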
Think of the CacheConfiguration as the "starting" config for a cache. Most of it can't be changed later. A few things, like the set of SQL columns or indexes, can be changed via SQL DDL: ALTER TABLE ..., CREATE INDEX ..., etc. If something isn't available in the DDL, assume it can't be changed without recreating the cache.
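For example (table and column names are illustrative; recent Ignite versions are assumed):
ALTER TABLE Person ADD COLUMN middle_name VARCHAR;
CREATE INDEX person_salary_idx ON Person (salary);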
The value type name will be mapped by each platform component (Java, C++, .NET) according to the binary marshaller configuration. For example, it's common to use a BinaryBasicNameMapper that maps all platform type names (with namespaces/packages) to simple names, so that different namespace/package naming conventions don't create a problem. When a class is needed to deserialize a value, it is loaded via the regular platform-specific mechanism for loading code: for Java the classpath, for C++ presumably LD_LIBRARY_PATH. In any case, Ignite has nothing to do with that.
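A sketch of that mapper inside the IgniteConfiguration bean (surrounding configuration omitted):
<property name="binaryConfiguration">
    <bean class="org.apache.ignite.configuration.BinaryConfiguration">
        <property name="nameMapper">
            <bean class="org.apache.ignite.binary.BinaryBasicNameMapper">
                <!-- Map fully-qualified type names to simple names on all platforms. -->
                <property name="simpleName" value="true"/>
            </bean>
        </property>
    </bean>
</property>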
Again, Ignite has nothing to do with that. Whatever works on your platform to load code can be used.
After a few experiments I found the solution, and it is actually easy.
The value given in the valueType property maps directly to the binary object's type name when the object is created from code.
For example, given the configuration below,
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="TEST"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="atomicityMode" value="TRANSACTIONAL"/>
<property name="writeSynchronizationMode" value="FULL_SYNC"/>
<!-- Configure type metadata to enable queries. -->
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<property name="keyType" value="java.lang.Long"/>
<property name="valueType" value="TEST"/>
<property name="fields">
<map>
<entry key="ID" value="java.lang.Long"/>
<entry key="DATE" value="java.lang.String"/>
</map>
</property>
</bean>
</list>
</property>
</bean>
the following C++ code works:
template<>
struct BinaryType<examples::TEST> : BinaryTypeDefaultAll<examples::TEST>
{
    static void GetTypeName(std::string& dst)
    {
        // Must match the 'valueType' of the QueryEntity.
        dst = "TEST";
    }

    static void Write(BinaryWriter& writer, const examples::TEST& obj)
    {
        writer.WriteInt64("ID", obj.Id);
        writer.WriteString("DATE", obj.dt);
    }

    static void Read(BinaryReader& reader, examples::TEST& dst)
    {
        // Field names must match those used in Write().
        dst.Id = reader.ReadInt64("ID");
        dst.dt = reader.ReadString("DATE");
    }
};

Cannot resolve property tag with job parameter

I am trying to concatenate the job parameter #{jobParameters['arg1']} with myfeed.query to dynamically pick the right query from the properties file, but it is not getting resolved.
Below is the exception log:
Caused by: org.springframework.jdbc.BadSqlGrammarException: Executing query; bad SQL grammar [${myfeed.queryZONE1}]
Below is the code snippet from the XML file:
<bean id="itemReader" class="org.springframework.batch.item.database.JdbcCursorItemReader" scope="step">
<property name="dataSource" ref="dataSource" />
<property name="sql">
<value>${myfeed.query#{jobParameters['arg1']}}</value>
</property>
<property name="rowMapper">
<bean class="com.sgcib.loa.matrix.mapper.MyFeedRowMapper" />
</property>
</bean>
To do that, you will need to declare explicit properties for your PropertyPlaceholderConfigurer:
<bean id="propertiesConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="properties" ref="properties" />
</bean>
<bean id="properties" class="org.springframework.beans.factory.config.PropertiesFactoryBean">
<property name="location">
<value>file:xxxxxxx.properties</value>
</property>
</bean>
Then, using the Spring Expression Language (SpEL), you can get the right property with:
<property name="sql" value="#{properties.getProperty('myfeed.query' + jobParameters['arg1'])}" /></property>
Note that this solution maintains compatibility with ${...} syntax.
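With a properties file along these lines (the keys follow the question; the SQL itself is illustrative), a job parameter arg1=ZONE1 resolves to myfeed.queryZONE1:
myfeed.queryZONE1=SELECT * FROM feed WHERE zone = 1
myfeed.queryZONE2=SELECT * FROM feed WHERE zone = 2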
The above solution did not work for me; a tested solution is described in "One ItemReader, 2 SQL Query, jdbcTemplate?":
http://incomplete-code.blogspot.in/2013/06/dynamically-switch-sql-statements-in.html

Adding new facet to DSpace has no effect (DSpace 4.1)

I changed the discovery.xml file as described in the documentation to add a new facet over dc.type to our DSpace. After reindexing and deleting the cache, I see the new search filter in the advanced search, but not as a facet.
These are the changes I made to discovery.xml:
I added the filter to sidebarFacets and searchFilters:
<ref bean="searchFilterType" />
and this is the filter:
<bean id="searchFilterType" class="org.dspace.discovery.configuration.DiscoverySearchFilterFacet">
<property name="indexFieldName" value="type"/>
<property name="metadataFields">
<list>
<value>dc.type</value>
</list>
</property>
</bean>
Thanks in advance
The following modifications to discovery.xml on the latest DSpace master branch worked on my local setup:
https://github.com/bram-atmire/DSpace/commit/3f084569cf1bbc6c6684d114a09a1617c8d3de5d
One reason why the facet wouldn't appear in your setup could be that you omitted to add it to both the "defaultConfiguration" and the specific configuration for the DSpace homepage.
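As a sketch (bean ids as found in a stock DSpace 4 discovery.xml; verify them against your file), the ref has to be listed in the sidebarFacets of both configurations:
<bean id="defaultConfiguration" class="org.dspace.discovery.configuration.DiscoveryConfiguration">
    <property name="sidebarFacets">
        <list>
            <!-- existing facets... -->
            <ref bean="searchFilterType" />
        </list>
    </property>
    <!-- searchFilters and other properties omitted -->
</bean>

<bean id="homepageConfiguration" class="org.dspace.discovery.configuration.DiscoveryConfiguration">
    <property name="sidebarFacets">
        <list>
            <ref bean="searchFilterType" />
        </list>
    </property>
    <!-- other properties omitted -->
</bean>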
After building and deploying, a forced discovery re-index using the following command made the facet appear:
./dspace index-discovery -f
Here is an example facet that I have configured in our instance. Try setting facetLimit, sortOrder, and splitter, then re-index and see if that resolves the issue.
<bean id="searchFilterGeographic"
class="org.dspace.discovery.configuration.HierarchicalSidebarFacetConfiguration">
<property name="indexFieldName" value="geographic-region"/>
<property name="metadataFields">
<list>
<value>dc.coverage.spatial</value>
</list>
</property>
<property name="facetLimit" value="5"/>
<property name="sortOrder" value="COUNT"/>
<property name="splitter" value="::"/>
</bean>

BoneCP config in Spring-based application for Cloudbees

I use BoneCP in my Spring-based application.
<bean id="dataSource" class="com.jolbox.bonecp.BoneCPDataSource" destroy-method="close">
<property name="driverClass" value="com.mysql.jdbc.Driver" />
<property name="jdbcUrl" value="jdbc:mysql://ec2-23-21-211-???.compute-1.amazonaws.com:3306/?????" />
<property name="username" value="*****"/>
<property name="password" value="********"/>
<property name="idleConnectionTestPeriod" value="60"/>
<property name="idleMaxAge" value="240"/>
<property name="maxConnectionsPerPartition" value="3"/>
<property name="minConnectionsPerPartition" value="1"/>
<property name="partitionCount" value="1"/>
<property name="acquireIncrement" value="5"/>
<property name="statementsCacheSize" value="100"/>
<property name="releaseHelperThreads" value="3"/>
</bean>
Is there a way to avoid hard-coding the jdbcUrl value?
You can inject it via an environment variable through the CloudBees SDK.
1. Inject the datasource and the following environment variables via bees app:bind.
With the CloudBees SDK:
bees app:bind -a appName -db dbName -as mydb
It will automatically inject a datasource and will create these three environmental variables:
${DATABASE_URL_DB}
${DATABASE_USERNAME_DB}
${DATABASE_PASSWORD_DB}
Please be aware that this way you will use one active connection of the maxActive: '20' configured by default on the Tomcat JDBC Connection Pool.
2. Enable the property placeholder in the Spring framework and set system-properties-mode to "OVERRIDE":
<context:property-placeholder location="classpath:spring/data-access.properties" system-properties-mode="OVERRIDE"/>
3. In your datasource.xml configuration file, you could then use something like this:
<property name="jdbcUrl" value="jdbc:${DATABASE_URL_DB}" />
Be aware that the recommended way to get the datasource on CloudBees is always via JNDI.
That way you use CloudBees' own implementation of the datasource, so you don't have to write the username, the password, or the URL of the database. Instead of all those lines, you can replace them with just this one:
<jee:jndi-lookup id="dataSource" jndi-name="jdbc/mydb" resource-ref="true"/>

Running transformation script in Alfresco

Why isn't my transformation script running on any uploaded files beyond the first file?
I set up a transformation rule in Alfresco that watches a folder. When a new file is placed in the folder, the rule triggers a script that takes a PDF without a text layer, breaks it into JPEGs, OCRs the JPEGs, converts the JPEGs back into PDFs, and merges those PDFs, returning an OCRed PDF with a text layer; it then copies the result into another folder so we know it got done.
Running the script at the command line works. The first time I drop a file into the Alfresco folder (upload), the script runs and the file is copied. But on any subsequent drop the script isn't run, although the file is still copied to the target folder. So I know the rule is being called, but the script isn't; the script has logging, so I can tell it isn't even being invoked. The rule is applied to all new and modified files in the folder with no filters, and it runs the Transform and Copy command using our custom OCR script, with the target folder defined as the parent folder.
Below is my Alfresco transformation extension:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
    <bean id="transformer.worker.PdfOCRTool" class="org.alfresco.repo.content.transform.RuntimeExecutableContentTransformerWorker">
        <property name="mimetypeService">
            <ref bean="mimetypeService"/>
        </property>
        <property name="transformCommand">
            <bean name="transformer.pdftoocr.Command" class="org.alfresco.util.exec.RuntimeExec">
                <property name="commandMap">
                    <map>
                        <entry key=".*">
                            <value>/opt/ocr/ocr.sh ${source} ${target}</value>
                        </entry>
                    </map>
                </property>
                <property name="errorCodes">
                    <value>1,2</value>
                </property>
            </bean>
        </property>
        <property name="explicitTransformations">
            <list>
                <bean class="org.alfresco.repo.content.transform.ExplictTransformationDetails">
                    <property name="sourceMimetype">
                        <value>application/pdf</value>
                    </property>
                    <property name="targetMimetype">
                        <value>application/pdf</value>
                    </property>
                </bean>
            </list>
        </property>
    </bean>
    <bean id="transformer.proxy.PdfOCRTool" class="org.alfresco.repo.management.subsystems.SubsystemProxyFactory">
        <property name="sourceApplicationContextFactory">
            <ref bean="thirdparty"/>
        </property>
        <property name="sourceBeanName">
            <value>transformer.worker.PdfOCRTool</value>
        </property>
        <property name="interfaces">
            <list>
                <value>org.alfresco.repo.content.transform.ContentTransformerWorker</value>
            </list>
        </property>
    </bean>
    <bean id="transformer.PdfOCRTool" class="org.alfresco.repo.content.transform.ProxyContentTransformer" parent="baseContentTransformer">
        <property name="worker">
            <ref bean="transformer.proxy.PdfOCRTool"/>
        </property>
    </bean>
</beans>
The transformation service is intended for converting items from one mimetype to another, so I am not sure that converting a PDF into a second PDF is valid. You would be better off implementing a custom Java repository action which in turn uses an org.alfresco.util.exec.RuntimeExec bean to fire off the command.
Since your Spring config already defines a RuntimeExec bean, you could re-use that definition but wrap it in your own custom class extending org.alfresco.repo.action.executer.ActionExecuterAbstractBase. In fact, the source of org.alfresco.repo.action.executer.TransformActionExecuter might give you some clues on how to go about implementing things.
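A minimal sketch of such an action (the class name, package, and temp-file handling are hypothetical; the RuntimeExec bean is the one already defined in the config above and would be injected via Spring):
package com.example.ocr;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.alfresco.error.AlfrescoRuntimeException;
import org.alfresco.repo.action.executer.ActionExecuterAbstractBase;
import org.alfresco.service.cmr.action.Action;
import org.alfresco.service.cmr.action.ParameterDefinition;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.util.exec.RuntimeExec;

public class PdfOcrActionExecuter extends ActionExecuterAbstractBase
{
    private RuntimeExec transformCommand;

    // Wired in Spring to the existing 'transformer.pdftoocr.Command' bean.
    public void setTransformCommand(RuntimeExec transformCommand)
    {
        this.transformCommand = transformCommand;
    }

    @Override
    protected void executeImpl(Action action, NodeRef actionedUponNodeRef)
    {
        // Real code would stage the node's content to a temp file via
        // ContentService and write the OCRed result back; these paths are placeholders.
        Map<String, String> props = new HashMap<String, String>();
        props.put("source", "/tmp/in.pdf");
        props.put("target", "/tmp/out.pdf");

        RuntimeExec.ExecutionResult result = transformCommand.execute(props);
        if (!result.getSuccess())
        {
            throw new AlfrescoRuntimeException("OCR command failed: " + result);
        }
    }

    @Override
    protected void addParameterDefinitions(List<ParameterDefinition> paramList)
    {
        // No action parameters in this sketch.
    }
}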