Handle sqlplus substitution variables (&&vars) in Liquibase

My project is trying to migrate to Liquibase, but the lack of support for SQL*Plus substitution variables is making this difficult.
During our deployment we have SQL scripts containing sqlplus substitution variables, for example:
-- load_seed.sql ---
insert into <table>
values('&&host', '&&port', '&&user');
The values of these variables differ per environment, so we define profiles like these:
<DEV_profile.sql>
DEFINE host='dev.company.org'
DEFINE port=4008
..
<UAT_profile.sql>
DEFINE host='uat.company.org'
...
and then we run the deployment like this:
./deploy.ksh DEV
---- deploy.ksh ---
sqlplus <<END
<connection>
@$1_profile
@load_seed
END
The correct profile is picked up at execution time and the variables replaced.
Could you please suggest how to handle a case like this with Liquibase?

The equivalent functionality in Liquibase is provided by changelog parameters.
In your changelog you define parameters, which are essentially key-value pairs, and Liquibase decides which value to use based on a context, a label, or a dbms.
When you want to apply the changesets to a given environment, you specify the context or label on the command line or in liquibase.properties; Liquibase determines the dbms from the connection URL.
Here's an example that is somewhat similar to what you describe:
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:ext="http://www.liquibase.org/xml/ns/dbchangelog-ext"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.6.xsd
        http://www.liquibase.org/xml/ns/dbchangelog-ext http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-ext.xsd">

    <property name="host" value="dev.company.org" context="DEV"/>
    <property name="port" value="4008" context="DEV"/>
    <property name="user" value="DEV_USER" context="DEV"/>

    <property name="host" value="uat.company.org" context="UAT"/>
    <property name="port" value="4321" context="UAT"/>
    <property name="user" value="UAT_USER" context="UAT"/>

    <changeSet id="1" author="joe">
        <insert tableName="someTableName">
            <column name="host" type="varchar(255)" value="${host}"/>
            <column name="port" type="varchar(8)" value="${port}"/>
            <column name="user" type="varchar(255)" value="${user}"/>
        </insert>
    </changeSet>
</databaseChangeLog>
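To pick up the DEV values at deployment time, you would then pass the matching context when running the update, roughly like this (a sketch; the changelog file name and connection settings are placeholders and could equally live in liquibase.properties):
liquibase --changeLogFile=changelog.xml --contexts=DEV update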

Per https://docs.liquibase.com/concepts/basic/changelog-property-substitution.html, property substitution is not supported in SQL changelogs; you would have to migrate those changelogs to one of the other formats (XML, YAML, JSON).

Related

NHibernate and Integer Columns as Version

I'm trying to create a DB table, using an NHibernate *.hbm.xml mapping file, that will have a versioning column for concurrency checks. The versioning column should be a nullable integer.
Although the database is created just fine using the mapping file as a reference, the following happens:
* The first record is inserted with a NULL value as the Version
* The update of the previously inserted record fails with a "Stale Data" exception
In other words, no matter what I do, the Version column is always NULL.
I'm somewhat new to concurrency control with NHibernate, so I don't quite understand what I'm doing wrong.
If I use a Timestamp as the version, everything works just fine. However, my requirement is to use an Integer, hence my problem.
This is my Mapping File:
<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-lazy="true" auto-import="false" assembly="New1.Backend" namespace="New1.BO">
<class name="Natrio" table="`Natrios`" schema="`dbo`">
<cache usage="read-write" />
<id name="Id" column="`Id`" type="System.Int32">
<generator class="NHibernate.Id.Enhanced.TableGenerator">
<param name="increment_size">200</param>
<param name="segment_value">Natrios</param>
<param name="optimizer">pooled-lo</param>
</generator>
</id>
<version name="Version" column="`Version`" type="System.Nullable`1[[System.Int32, mscorlib]], mscorlib" generated="always" unsaved-value="0">
<column name="`Version`" not-null="false" sql-type="int" />
</version>
<property name="Attribute" column="`Attribute`" type="String" not-null="false" length="100" />
</class>
</hibernate-mapping>
Any thoughts and/or suggestions would be greatly appreciated!
Why do you need a nullable version column? In any case, I believe the issue is caused by unsaved-value="0" in your mapping. As the default value for a nullable column is null, NHibernate thinks the value has already been generated, so it is never assigned. You need to set it to null (unsaved-value="null") to make it work with nullable columns; unsaved-value="0" makes sense for non-nullable types. But it is better to omit this attribute completely and let NHibernate choose a suitable default.
Another issue is the generated attribute. It is about DB generation, so always means that the value is generated automatically by the DB. You should remove it or specify generated="never".
I believe the following mapping should work for you:
<version name="Version">
<column name="`Version`" not-null="false" sql-type="int" />
</version>
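If you prefer to keep the attributes explicit rather than relying on the defaults, a variant along these lines should behave the same (a sketch based on the points above):
<version name="Version" generated="never" unsaved-value="null">
    <column name="`Version`" not-null="false" sql-type="int" />
</version>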

Liquibase loadData as string, not CLOB resource

The Problem
I recently upgraded Liquibase to 3.6.2 from 3.4.2.
Loading seed data from a CSV into text fields now results in a CLOB resource error. Before it would simply insert the text as a value.
The Setup
I'm using Liquibase to manage migrations of my data.
I have a table with a code and a description column. description is of type TEXT.
<changeSet author="" id="create-table-degrees">
<createTable tableName="degrees">
<column name="code"
type="varchar(2)">
<constraints primaryKey="true"/>
</column>
<column name="description"
type="text">
<constraints unique="true"/>
</column>
</createTable>
<rollback>
<dropTable tableName="degrees"/>
</rollback>
</changeSet>
I have seed data in a CSV:
code,description
"D1","MASTERS"
"D2","DOCTORATE"
I load it using loadData:
<changeSet author="" id="seed-degrees">
<loadData file="seeds/degrees.csv"
tableName="degrees" />
</changeSet>
The Error
Unexpected error running Liquibase: CLOB resource not found: MASTERS
The Question
Is there a way to keep Liquibase from interpreting seed values as file paths instead of strings, or do I need to manually define the column types as String in loadData?
E.g. I would like to avoid having to modify the old changeSet to:
<changeSet author="" id="seed-degrees">
<loadData file="seeds/degrees.csv"
tableName="roles">
<column name="description" type="string" />
</loadData>
</changeSet>
The workaround listed in CORE-3287 (Anver S, December 3, 2018):
While adding an explicit column type definition, as described in the original Stack Overflow post,
<column name="description" type="string" />
does the trick, for me it effectively requires updating already-applied changesets, which ideally I'd try to avoid.

Liquibase table name prefix

I'm currently developing a prototype for a Spring-based plugin system. The idea is that plugins can use JPA entities and a Liquibase changelog to maintain the database structure. In order to separate the tables created by plugins from the tables of the core system, the plugins should be forced to use a prefix for their table names.
For JPA/Hibernate that can easily be achieved by using a naming strategy, but I've found no way to achieve it for the Liquibase changesets.
For example, the plugin defines a changelog as follows:
<changeSet id="2015-03-17-00-01" author="foo">
<createTable tableName="fooentity">
<column name="id" type="INT">
<constraints primaryKey="true" nullable="false" />
</column>
<column name="name" type="VARCHAR(100)">
<constraints nullable="false" />
</column>
</createTable>
</changeSet>
The table should be created with the name "plugin_fooentity". The plugin itself should not know anything about the prefix, since the prefix is given by the plugin/core system.
It would be great if someone could give me a hint towards a possible solution.
Maybe you can use modifySql for this?
You would have to copy this into every changeset you define, but it should be possible.
It has a subtag called regExpReplace which you could use to define a general pattern like create table (\w*?) .* and replace it with create table plugin_$1.
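A minimal sketch of that idea, applied to the fooentity changeSet from the question (the regular expression and the plugin_ prefix are illustrative and may need adjusting for schema-qualified names):
<changeSet id="2015-03-17-00-01" author="foo">
    <createTable tableName="fooentity">
        <column name="id" type="INT">
            <constraints primaryKey="true" nullable="false" />
        </column>
    </createTable>
    <!-- rewrite the generated SQL so the table is created as plugin_fooentity -->
    <modifySql>
        <regExpReplace replace="CREATE TABLE (\w+)" with="CREATE TABLE plugin_$1"/>
    </modifySql>
</changeSet>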
For me it worked like this, using the modifySql statement. I created two properties:
<property name="table.prefix" value="TBL_"/>
<property name="schema.name" value="PUBLIC"/>
Then added the following statements:
<modifySql>
<regExpReplace replace="CREATE\ TABLE\ ${schema.name}.([\w]*)\ (.*)" with="CREATE TABLE ${schema.name}.${table.prefix}$1 $2"/>
</modifySql>
<modifySql>
<regExpReplace replace="CREATE\ UNIQUE\ INDEX\ ${schema.name}.([\w]*)\ ON\ PUBLIC.([\w]*)\((.*)\)" with="CREATE UNIQUE INDEX ${schema.name}.$1 ON ${schema.name}.${table.prefix}$2($3)"/>
</modifySql>
<modifySql>
<regExpReplace replace="CREATE\ INDEX\ ${schema.name}.([\w]*)\ ON\ ${schema.name}.([\w]*)\((.*)\)" with="CREATE INDEX ${schema.name}.$1 ON ${schema.name}.${table.prefix}$2($3)"/>
</modifySql>

How does one access analytical views using odata?

I have an analytical view and an .xsodata file to expose it to the web. The question is: how is the access URL formed? The HANA documentation is insufficient here, and the same goes for the moderated SCN.
Here is my func_x_cview.xsodata:
service namespace "CTag" {
"MyPackage::FUNC_X_CALC_VIEW" as "CView" keys generate local "ID"
parameters via entity "InputParams" ;
}
http://awshana:8000/package/path/to/xsodata/file/$metadata shows:
<EntityType Name="InputParamsType">
<Key>
<PropertyRef Name="ATTRIBUTE"/>
<PropertyRef Name="ATTRIBUTE_VALUE"/>
<PropertyRef Name="category"/>
<PropertyRef Name="from_date"/>
<PropertyRef Name="process"/>
<PropertyRef Name="to_date"/>
</Key>
<Property Name="ATTRIBUTE" Type="Edm.String" Nullable="false" MaxLength="50"/>
<Property Name="ATTRIBUTE_VALUE" Type="Edm.String" Nullable="false" MaxLength="100"/>
<Property Name="category" Type="Edm.String" Nullable="false" MaxLength="50"/>
<Property Name="from_date" Type="Edm.DateTime" Nullable="false"/>
<Property Name="process" Type="Edm.String" Nullable="false" MaxLength="50"/>
<Property Name="to_date" Type="Edm.DateTime" Nullable="false"/>
<NavigationProperty Name="Results" Relationship="CTag.InputParams_CViewType"
FromRole="InputParamsPrincipal"
ToRole="CViewDependent"/>
</EntityType>
What should the access URL be? Does the xsodata need any tweaking?
Thanks
--EDIT--
When trying a URL like the one suggested by ongis-nade, http://awshana:8000/Pkg/Proj_X/services/tagA.xsodata/InputParams%28%27category%27=%27abcd%27%29/Results?$select=exception_name, I get an error like the following:
<error>
<code/>
<message xml:lang="en-US">
No property ''category'' exists in type 'CTag.InputParamsType'.
</message>
</error>
This is confusing, as we can see a property named category in the InputParamsType entity in the $metadata output.
Removing the single quotes around category (I also tried double-quoting) gives
http://awshana:8000/Pkg/Proj_X/services/tagA.xsodata/InputParams%28category=%27abcd%27%29/Results?$select=exception_name
<error>
<code/>
<message xml:lang="en-US">
The number of keys specified in the URI at position 27 does not match number of key properties for the resource 'CTag.InputParamsType'.
</message>
</error>
So a single quote is needed.
A step closer but still the same question. Do I need to qualify each parameter name somehow?
Thanks.
I believe the URL will be formed as:
http://awshana:8000/Pkg/Proj_X/services/tagA.xsodata/InputParams(category='abcd')/Results?
The "InputParams" name is of course reflected in your service definition
I also found a good example here: http://scn.sap.com/community/developer-center/hana/blog/2013/01/22/rest-your-models-on-sap-hana-xs
Maybe a bit late, but you have to specify a value for each key parameter.
Note also that timestamps must be specified in OData's Edm.DateTime format.
Example for your service:
http://server:8080/pathToService/tagA.xsodata/InputParams(ATTRIBUTE='?',ATTRIBUTE_VALUE='?',category='?',from_date=datetime'2014-01-01T00:00:00',process='?',to_date=datetime'2014-09-01T00:00:00')/Results
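Combining that full key list with the $select from the question would then look roughly like this (a sketch; server, path, and the '?' values are placeholders):
http://server:8080/pathToService/tagA.xsodata/InputParams(ATTRIBUTE='?',ATTRIBUTE_VALUE='?',category='abcd',from_date=datetime'2014-01-01T00:00:00',process='?',to_date=datetime'2014-09-01T00:00:00')/Results?$select=exception_name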

NHibernate - Trying to get it to use a SQL Server Row version

As per the StackOverflow question 'NHibernate and sql timestamp columns as version', I use the following mapping:
<version name="RowNumber" generated="always" unsaved-value="null" type="BinaryBlob">
<column name="RowNumber" not-null="false" sql-type="timestamp" />
</version>
<property name="CreateDate" column="CreateDate" type="DateTime" update="false" insert="false" />
(Other properties follow after this last one.)
But when I run my ASP.NET MVC app I get:
[Path]\Clients.hbm.xml(7,90): XML validation error: The element 'urn:nhibernate-mapping-2.2:version' cannot contain child element 'urn:nhibernate-mapping-2.2:column' because the parent element's content model is empty.
But as far as I can see, 2.2 is the latest version of the mapping schema, so how can anyone put a column element inside the version element?
Sorry if this is really basic.
In case anyone else has this problem:
It works as Ayende Rahien describes in his blog post on NHibernate, but only (AFAIK) on version 2.1.x; I was using 2.0.x. I also think you need the object's field/property to be byte[], not System.Linq.Binary, as that type has no default constructor (but I'm not sure about this; I seemed to have to do it).
Example (excuse the names):
<version name="RowKludge" type="BinaryBlob" generated="always" unsaved-value="null" >
<column name="RowNumber"
not-null="false"
sql-type="timestamp"/>
</version>
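On the class side, the version property then needs to be a byte array (a sketch; the class name and the other members are assumed, only the version property matters here):
public class Client
{
    public virtual int Id { get; set; }

    // Maps to the SQL Server timestamp/rowversion column above.
    // Must be byte[]: Linq's Binary type has no default constructor.
    public virtual byte[] RowKludge { get; set; }
}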
A SQL Server 'timestamp' is not your regular timestamp, hence the requirement that the type be a binary blob.
Note that if you do migrate, you will need to change the NHibernate configuration in your Web/App config; most tutorials currently available seem to be for v2.0 (or earlier), so you need an up-to-date reference or tutorial for 2.1.
A quick look in the documentation reveals that your mapping is not correct. It should be something like this:
<version name="RowNumber" column="RowNumber"
generated="always" unsaved-value="null"
type="Timestamp" />
Best Regards,
Oliver Hanappi