How do I initialize an HSQL in-memory DB with populated Logback tables - hsqldb

I am trying to set up a DBAppender with Logback pointed at HSQL, but Logback needs three tables to exist before it can log.
Logback appender:
<appender name="data-db" class="ch.qos.logback.classic.db.DBAppender">
<connectionSource class="ch.qos.logback.core.db.DriverManagerConnectionSource">
<driverClass>org.hsqldb.jdbcDriver</driverClass>
<url>jdbc:hsqldb:mem:.</url>
<user>sa</user>
<password></password>
</connectionSource>
</appender>
Is there a way of passing a script to HSQL, or getting HSQL to pick one up, when Logback is initialized? I am able to do this as part of a Java app, but when I try it from Logback it creates an empty DB.
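One workaround, sketched below, is to create the tables yourself before the first log statement fires, since an HSQLDB in-memory database stays alive for the life of the JVM once created. This is a minimal sketch, not a definitive fix: the path to the DDL script (Logback's distribution ships one per database, e.g. ch/qos/logback/classic/db/script/hsqldb.sql under logback-classic) is an assumption you will need to adjust.
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LogbackDbInit {
    // Assumed location of Logback's hsqldb.sql DDL script; adjust to your setup.
    private static final String DDL_SCRIPT = "src/main/resources/hsqldb.sql";

    public static void init() throws Exception {
        // Connect to the same named in-memory database the appender uses.
        // The "mem:." database remains in this JVM after the connection closes,
        // so the tables stay visible to DriverManagerConnectionSource.
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:.", "sa", "");
             Statement stmt = conn.createStatement()) {
            String script = new String(Files.readAllBytes(Paths.get(DDL_SCRIPT)));
            // Naive split on ";"; adjust if your script embeds semicolons in statements.
            for (String ddl : script.split(";")) {
                if (!ddl.trim().isEmpty()) {
                    stmt.execute(ddl);
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        init(); // must run before the first logger call so DBAppender finds its tables
        org.slf4j.LoggerFactory.getLogger(LogbackDbInit.class).info("tables ready");
    }
}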

Related

How can we inject properties into WSO2 Micro Integrator and Enterprise Integrator using file.properties?

I want to use the .car file on another server without using Integration Studio, so I want to be able to change the hostname and port dynamically using a configuration file. My endpoint URL has variables in it ({uri.var.x}), which is why I can't use $FILE:x to get the complete URL from file.properties.
I have already tried How to read a property injected from file.properties in WSO2 - micro integrator? but it did not work.
You can simply read the property from the file and assign it to the variable you desire, then use it in your endpoint configuration.
<property expression="get-property('file', 'x')" name="uri.var.x"/>
You can store the values in a properties file called file.properties in the MI_HOME/conf folder and it will be loaded automatically. If you are using a different file name, you can pass it to the server startup script, e.g. -Dproperties.file.path=/home/dev/dev.properties. Then you can read the values through a Property mediator.
Further, if you want to construct the full URL from multiple properties, you can use XPath functions.
<property expression="concat('https://', get-property('file', 'host'), ':', get-property('file', 'port'))" name="uri.var.x" scope="default" type="STRING" />
If the properties are not picked up from the default file, pass the file path as below.
sh micro-integrator.sh -Dproperties.file.path=./conf/file.properties
Update on WSO2 EI
It seems the file scope is not supported in EI. Instead, you can read values from environment variables with get-property('env', 'NAME_OF_VARIABLE').
<property expression="concat('https://', get-property('env', 'host'), ':', get-property('env', 'port'))" name="uri.var.x" scope="default" type="STRING"/>
If you want to read them from a properties file, you can do something like the following. Assume you have a properties file like this:
stockQuoteEP=http://localhost:9000/services/SimpleStockQuoteService
ycr=test1234
host=localycr
port=6676
Add the following script to integrator.sh to export the properties as environment variables. You can improve the script as you require.
# Export each key=value line of the properties file as an environment variable
while read -r line; do
echo "Exporting $line"
export "$line"
done < /home/wso2/wso2ei-6.6.0/conf/file.properties
Then in your integration read them as below.
<property expression="concat('https://', get-property('env', 'host'), ':', get-property('env', 'port'))" name="uri.var.x" scope="default" type="STRING"/>
Update 2 on File Scope in Property mediator
As Sanoj mentioned, file scope in the Property mediator is only available in vanilla packs from MI 4.0 onward. If you have a WSO2 subscription, you can get it as an update for both MI and EI.

How to execute a CustomSqlChange manually

I'm writing a CustomSqlChange for the first time and want to test the outcome by running it on my current database. Of course I could start up the application and execute all changeSets via Liquibase (including the one that executes my CustomSqlChange), but that takes a lot of time.
Is there a way to manually execute the Java class implementing CustomSqlChange from my IDE (IntelliJ), as if it were run by Liquibase? Could one maybe even debug that execution?
You can create a separate changelog file that includes only your custom change, and point Liquibase to it instead of the base one. This also gives you the ability to debug it.
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog .....>
<changeSet id="custom-change" author="author" runOnChange="true" >
<customChange param="..." />
</changeSet>
</databaseChangeLog>
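To drive that changelog straight from the IDE, you can also invoke Liquibase programmatically and set a breakpoint inside your CustomSqlChange. A minimal sketch, assuming the pre-4.x Liquibase Java API; the JDBC URL and changelog name are placeholders:
import java.sql.Connection;
import java.sql.DriverManager;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class CustomChangeRunner {
    public static void main(String[] args) throws Exception {
        // Placeholder connection; point it at the database you want to test against.
        Connection conn = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
        Database database = DatabaseFactory.getInstance()
                .findCorrectDatabaseImplementation(new JdbcConnection(conn));
        // Changelog that contains only the changeSet wrapping the custom change.
        Liquibase liquibase = new Liquibase("custom-change-only.xml",
                new ClassLoaderResourceAccessor(), database);
        // Breakpoints inside the CustomSqlChange are hit during update().
        liquibase.update("");
    }
}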

Websphere datasource configuration in IntelliJ

I'm trying to migrate a very heavy, old-school J2EE application from RAD 8.5.5.1 to IntelliJ 2016.1.1. The DataSource is built using JNDI.
I have compiled and configured all components (correctly, for now) except the DataSource.
In RAD the DataSource is configured like this, in resource.xml:
<resources.jdbc:JDBCProvider xmi:id="JDBCProvider_1163951110780" name="DB2 DataSource" description="DB2 Universal JDBC Driver Provider" implementationClassName="com.ibm.db2.jcc.DB2ConnectionPoolDataSource">
<classpath>${DB2UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc.jar</classpath>
<classpath>${UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc_license_cu.jar</classpath>
<classpath>${DB2UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc_license_cisuz.jar</classpath>
<nativepath>${DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH}</nativepath>
<factories xmi:type="resources.jdbc:DataSource" xmi:id="DataSource_1163951270521" name="pensionjndi" jndiName="pensionjndi" description="DB2 Universal Driver Datasource" category="" authDataAlias="sec" relationalResourceAdapter="builtin_rra" statementCacheSize="150" datasourceHelperClassname="com.ibm.websphere.rsadapter.DB2UniversalDataStoreHelper">
<propertySet xmi:id="J2EEResourcePropertySet_1163951270522">
<resourceProperties xmi:id="J2EEResourceProperty_1163951270523" name="databaseName" type="java.lang.String" value="value" description="This is a required property. This is an actual database name, and its not the locally catalogued database name. The Universal JDBC Driver does not rely on information catalogued in the DB2 database directory." required="true"/>
<resourceProperties xmi:id="J2EEResourceProperty_1163951270524" name="driverType" type="java.lang.Integer" value="4" description="The JDBC connectivity-type of a data source. If you want to use type 4 driver, set the value to 4. If you want to use type 2 driver, set the value to 2. On WAS z/OS, driverType 2 uses RRS and supports 2-phase commit processing." required="true"/>
<resourceProperties xmi:id="J2EEResourceProperty_1163951270525" name="serverName" type="java.lang.String" value="serverName" description="The TCP/IP address or host name for the DRDA server. If custom property driverType is set to 4, this property is required." required="false"/>
<resourceProperties xmi:id="J2EEResourceProperty_1163951270526" name="portNumber" type="java.lang.Integer" value="50000" description="The TCP/IP port number where the DRDA server resides. If custom property driverType is set to 4, this property is required." required="false"/>
...
...
...
<resourceProperties xmi:id="J2EEResourceProperty_1175088739299" name="webSphereDefaultIsolationLevel" type="java.lang.Integer" value="2" description="" required="false"/>
</propertySet>
<connectionPool xmi:id="ConnectionPool_1163951270521" connectionTimeout="15" maxConnections="200" minConnections="5" reapTime="180" unusedTimeout="1800" agedTimeout="0" purgePolicy="EntirePool"/>
<mapping xmi:id="MappingModule_1163951296456" mappingConfigAlias="DefaultPrincipalMapping" authDataAlias="sec"/>
</factories>
I tried to define a DataSource with the same name (pensionjndi) using IntelliJ's DataSource and Driver window:
[Screenshot: IntelliJ DataSource settings]
No luck! The application doesn't recognize the DataSource (but it is looking for the RIGHT DS name, "pensionjndi").
The question is: what is the right way to configure a DataSource for IntelliJ artifacts (using an existing DataSource)?
If additional information is required, I'll edit the post.
I haven't found any example or guide on DataSource configuration for WebSphere.
Please HELP!?
The problem is solved by defining the data source in the WebSphere administrative console: WAS console | Resources | Data Sources.
See the IBM topic "Configuring a data source using the administrative console".
Here is a discussion about the problem with IntelliJ support.
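For context, the lookup the application performs presumably looks something like the sketch below (the class is illustrative; the JNDI name is the one from the question). Once the data source is defined under that name in the WAS console, the lookup resolves.
import java.sql.Connection;

import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PensionDao {
    public Connection getConnection() throws Exception {
        InitialContext ctx = new InitialContext();
        // Direct JNDI name as configured in the WAS console; an application that
        // declares a resource-ref would look up "java:comp/env/pensionjndi" instead.
        DataSource ds = (DataSource) ctx.lookup("pensionjndi");
        return ds.getConnection();
    }
}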

Referencing a sibling changeset in a rollback section

I'm having an issue trying to roll back a changeSet by referring to a changeSet in a sibling changelog.
master-changelog.xml
includes v1.changes.xml (here the table is created)
includes v2.changes.xml (here the table is dropped, and I would like to refer to a changeSet from v1.changes.xml as the rollback)
However, no matter how I reference the changeSet in v1.changes.xml, it's not visible to v2.changes.xml, and I'm getting liquibase.exception.SetupException: liquibase.parser.core.ParsedNodeException: Change set not found.
master-changelog.xml
<include file="v1/v1.changes.xml" relativeToChangelogFile="true"/>
<include file="v2/v2.changes.xml" relativeToChangelogFile="true"/>
v1.changes.xml
<changeSet id="1" author="dima">
<createTable tableName="test-table">
<column name="test" type="number"></column>
</createTable>
</changeSet>
v2.changes.xml
<changeSet id="1" author="dima">
<dropTable tableName="test"/>
<rollback changeSetAuthor="dima" changeSetId="1" changeSetPath="src/main/resources/std/v1/v1.changes.xml"/>
</changeSet>
It appears that you're using Maven, so this answer will be Maven specific, as I have not been able to reproduce the solution on the command line.
First of all, it's possible that Liquibase is confused because you're using the same changeSet id in both files, and I'm not sure whether it correctly scopes those ids to the containing changelog file or the ids need to be global. You might first try changing the id on the second changeSet and see if that clears it up for you.
If that's not the issue, then the trick to getting this to work is to make sure your relative references are all in the context of the Java classpath. As I interpret your example, the classpath resources of your files would be:
std/master-changelog.xml
std/v1/v1.changes.xml
std/v2/v2.changes.xml
When running your migration, your changeLogFile setting should reference the classpath resource, not the disk file; i.e. std/master-changelog.xml instead of src/main/resources/std/master-changelog.xml. This puts the origin changelog in a classpath context rather than a file context.
In your v2.changes.xml, you then refer to the first change using the classpath resource name: v1/v1.changes.xml. This should allow Liquibase to find it correctly.
If you have more than one level of changeLog file inclusion, you might be running into this issue which prevents Liquibase from finding sibling changeLogs below the first level of inclusion. Until the pull request is merged and released, you'll be limited to a single level of file inclusion.
This solution assumes you're using the Maven plugin; Liquibase will still find the changelog, since Maven puts your resource files on the classpath by default. I also attach the plugin to the process-resources phase (or later) so that the resources are in the target/classes directory when the migration is run.

Unable to bulk insert using NHibernate

I've tried adding bulk inserts to my application, but the Batcher is still a NonBatchingBatcher with a BatchSize of 1.
This is using C# 3, NH3 RC1, and MySQL 5.1.
I've added this to my SessionFactory
<property name="adonet.batch_size">100</property>
And my code goes pretty much like this
var session = SessionManager.GetStatelessSession(type);
var tx = session.BeginTransaction();
session.Insert(instance);
I'm using HiLo identity generation for the instances in question, but not for all entities in the database. SessionFactory.OpenStatelessSession doesn't take a type, so it can't really know it can do batching on this type, or...?
After some digging into NHibernate, I found something in SettingsFactory.CreateBatcherFactory that might give some additional info
// It defaults to the NonBatchingBatcher
System.Type tBatcher = typeof (NonBatchingBatcherFactory);
// Environment.BatchStrategy == "adonet.factory_class", but I haven't
// defined this in my config file
string batcherClass = PropertiesHelper.GetString(Environment.BatchStrategy, properties, null);
if (string.IsNullOrEmpty(batcherClass))
{
if (batchSize > 0)
{
// MySqlDriver doesn't implement IEmbeddedBatcherFactoryProvider,
// so it still uses NonBatchingFactory
IEmbeddedBatcherFactoryProvider ebfp = connectionProvider.Driver as IEmbeddedBatcherFactoryProvider;
Could my configuration be wrong?
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2" >
<session-factory name="my application name">
<property name="adonet.batch_size">100</property>
<property name="connection.driver_class">NHibernate.Driver.MySqlDataDriver</property>
<property name="connection.connection_string">my connection string
</property>
<property name="dialect">NHibernate.Dialect.MySQL5Dialect</property>
<property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>
<!-- To avoid "The column 'Reserved Word' does not belong to the table : ReservedWords" -->
<property name="hbm2ddl.keywords">none</property>
</session-factory>
</hibernate-configuration>
I know this question is a year old, but there is a NuGet package that adds MySQL batching functionality to NHibernate. The reason that it's not baked directly into NHibernate is that the functionality required a reference to the MySQL.Data assembly, and the dev team didn't want the dependency.
IIRC, batching is currently supported for Oracle and SqlServer only.
As with almost any other aspect of NH, this is extensible, so you can write your own IBatcher/IBatcherFactory and inject them via configuration (the adonet.factory_class property checked in the snippet above).
Sidenote: current version of NH is 3.0 GA.
Really old question, but let's be completely correct:
Another reason for batching not working can be the use of a stateless session (as in your case). Stateless sessions do not support batching. From the documentation:
The insert(), update() and delete() operations defined by the
StatelessSession interface are considered to be direct database
row-level operations, which result in immediate execution of a SQL
INSERT, UPDATE or DELETE respectively. Thus, they have very different
semantics to the Save(), SaveOrUpdate() and Delete() operations
defined by the ISession interface.