jOOQ text array generated as object array

Given the following SQL:
create table something(
    id BIGSERIAL,
    something TEXT[] NOT NULL DEFAULT '{}',
    PRIMARY KEY (id)
);
and instructing the code generator to use the DDLDatabase, the generated field is of the form
public final TableField<JSomethingRecord, Object[]> SOMETHING_
Looking around in the documentation, I cannot find how this can be mapped to a String[].
The same applies to varchar and varchar(255).
I shouldn't have to use a forced type here, as at least one of the three should be a valid data type
and not fall back to OTHER, as happens with UUID (for which I saw there is an example using forced types).
Am I doing / understanding something wrong, or is this expected behaviour?
The database I am using is PostgreSQL, and the generator configuration is the following:
<generator>
    <database>
        <name>org.jooq.meta.extensions.ddl.DDLDatabase</name>
        <inputCatalog/>
        <inputSchema>PUBLIC</inputSchema>
        <properties>
            <property>
                <key>use-attribute-converters</key>
                <value>true</value>
            </property>
            <property>
                <key>scripts</key>
                <value>src/main/resources/db/migration/*</value>
            </property>
        </properties>
    </database>
    <target>
        <clean>true</clean>
        <packageName>my.other.package</packageName>
        <directory>target/generated-sources/jooq</directory>
    </target>
</generator>
Thank you in advance

As of jOOQ 3.13, PostgreSQL's typed arrays are not yet supported by the DDLDatabase, because its current implementation translates your DDL to H2 behind the scenes, and H2 1.4.200's ARRAY type does not support any array type other than Object[].
This will change in the future, as:
H2 1.4.201 will support typed arrays like PostgreSQL: https://github.com/h2database/h2database/issues/1390
jOOQ will support running your DDL on an actual PostgreSQL database in test containers: https://github.com/jOOQ/jOOQ/issues/6551
jOOQ will support interpreting the DDL instead of running it on a third party database product: https://github.com/jOOQ/jOOQ/issues/7034
Until then, in order to use such PostgreSQL-specific features, I recommend using the classic approach of connecting to an actual PostgreSQL database instance.
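For reference, a minimal sketch of that classic approach (connection details and credentials below are placeholders, assuming a locally running PostgreSQL instance and the jOOQ code generation plugin) could look like this:
<configuration>
    <!-- Connect code generation to a live PostgreSQL instance
         instead of interpreting the DDL scripts via H2 -->
    <jdbc>
        <driver>org.postgresql.Driver</driver>
        <url>jdbc:postgresql://localhost:5432/mydb</url>
        <user>postgres</user>
        <password>secret</password>
    </jdbc>
    <generator>
        <database>
            <name>org.jooq.meta.postgres.PostgresDatabase</name>
            <inputSchema>public</inputSchema>
        </database>
        <target>
            <packageName>my.other.package</packageName>
            <directory>target/generated-sources/jooq</directory>
        </target>
    </generator>
</configuration>
With a real PostgreSQL dialect backing the code generator, the TEXT[] column should then come out as a TableField<JSomethingRecord, String[]> rather than Object[].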

Related

List of supported dbms values

In Liquibase I can set properties based on what DBMS I'm using, as mentioned here: Liquibase changeset by dbms type
For example:
<property name="val" dbms="postgresql" value="x"/>
<property name="val" dbms="h2" value="y"/>
My question is: where can I find a list of all valid/possible dbms values? In the Liquibase docs it just points to a page saying which databases are supported, but it does not give the corresponding dbms value. I know MySQL is 'mysql', Oracle is 'oracle', and so on, but where is the canonical list of values?
I searched the GitHub repo for Liquibase core, but can't find the magic class or enum that defines all these values.
Does anyone know where they are?
I don't think there is a canonical list of accepted values. From what I can tell, Liquibase takes the value you provide for dbms and compares it to the short name of the connected database, so it isn't being compared against a fixed list.
If you are not connected to the database, you can usually find the dbms short name via the connection URL or by looking for the getShortName() function in the database files. For example, to connect to AWS Redshift, the URL is jdbc:redshift://endpoint:port/database, and the dbms value you'd set is just redshift. This can also be confirmed by looking at the Redshift extension for Liquibase: its getShortName() function returns redshift.
If you are connected to the database, you can easily find out what the value is by running liquibase status against a changelog containing a precondition with dbms set to some value. For example (in XML),
<preConditions>
    <dbms type="mongo"/>
</preConditions>
results in Unexpected error running Liquibase: Validation Failed: 1 preconditions failed changelog_mongo.xml : DBMS Precondition failed: expected mongo, got mongodb
The expected value is the value you provide for dbms, and the got value is the connected database short name.
But to have a "list", I looked around and found the following ones listed at some point:
cockroachdb, db2, derby, edb, firebird, h2, hsqldb, informix, ingres, mariadb, mock, mssql, mysql, postgresql, sqlite, sybase
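As a rough illustration (the changeset id and SQL below are made up), any of those short names can be used in the dbms attribute of a changeset or precondition:
<!-- Hypothetical changeset: runs only when the connected database's
     short name matches one of the listed dbms values -->
<changeSet id="example-1" author="example" dbms="postgresql,h2">
    <sql>select 1</sql>
</changeSet>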

Liquibase Single Precondition for multiple Changesets

I have a number of changesets that I would like to run only if a specific condition is met. For example, run changesets 1, 2, and 3 only if a sqlCheck returns the expected results.
I can copy the precondition into each changeset; however, it feels like there should be a more efficient way of doing this. As the number of changesets grows, the files accumulate a lot of these duplicates.
The preConditions element directly under databaseChangeLog seems to only configure dbms and runAs.
Is there a way to define a single preCondition that will be used by multiple change sets?
Any help is appreciated.
Unfortunately, this is not possible, but sometimes you can avoid preconditions by declaring a property tag instead. For example, when you have preconditions for different databases such as Oracle and SQL Server, which use different data types (NUMBER in Oracle, FLOAT in SQL Server), you can use a property tag instead of a precondition for each database:
<property dbms="oracle" name="DECIMALTYPE" value="NUMBER" />
<property dbms="mssql" name="DECIMALTYPE" value="FLOAT" />

How do I differentiate between databases when using e.g. sequence

I just started using Liquibase and stumbled upon the problem of differentiating between the capabilities of different databases.
We would like to support multiple databases (Oracle, MySQL, Derby - to name three).
They all have different capabilities. Specifically, Oracle supports sequences, whereas MySQL and Derby do not.
When I let Hibernate generate the DDL, I can choose different dialects, and it will take these capabilities into account: it generates a sequence when using Oracle and uses a plain table (for ID generation) when using Derby or MySQL.
Now, I know I can constrain changesets by specifying 'oracle' in the dbms attribute. But then how can I do the plain-table solution for the other databases? There does not seem to be a 'not oracle' value for dbms.
How does everyone else handle this? (I could not find anything about it on the Liquibase pages or the forum.)
Try using a precondition on the changeset. Boolean operations are supported.
For example:
<preConditions onFail="CONTINUE">
    <or>
        <dbms type="oracle" />
        <dbms type="mysql" />
    </or>
</preConditions>
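For the 'not oracle' case from the question, negation is also supported, so the plain-table fallback changeset could be guarded with something like this (a sketch, with the changeset content omitted):
<preConditions onFail="MARK_RAN">
    <not>
        <dbms type="oracle" />
    </not>
</preConditions>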
An alternative approach is to put all your sequences in a changelog file which you include in your main changelog, and then do something like this:
<changeSet
dbms="oracle,db2,db2i"
author="mccallim (generated)"
id="1419011907193-1"
>
<createSequence
schemaName="${main.schema}"
...
That changeset only gets executed for the DBMSs listed.
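Along the same lines, the plain-table fallback for the databases without sequence support could be a separate changeset restricted to those DBMSs; a made-up sketch (table and column names are illustrative only):
<changeSet dbms="mysql,derby" author="example" id="plain-table-id-fallback">
    <!-- Emulate a sequence with a single-row counter table -->
    <createTable tableName="id_generator">
        <column name="next_val" type="bigint"/>
    </createTable>
    <insert tableName="id_generator">
        <column name="next_val" valueNumeric="1"/>
    </insert>
</changeSet>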

How to map small binary objects properly in SQLite/NHibernate combo (wrong type affinity)?

I'm trying to store a property of C#/.NET type byte[] in SQLite. Here is my mapping:
<class name="MyClass" lazy="false" table="MyTable">
<property name="MyProperty" type ="BinaryBlob" access="property" />
</class>
In SQL Server it works like a charm, even without the explicit type="BinaryBlob" in the mapping. In SQLite I've tried various combinations of types between the SQL CREATE TABLE statements and the NHibernate data types, but each time I get either an exception that "the mapping can't be compiled" (because of type incompatibility) or an exception that a cast from the fetched data type to the mapping type is impossible.
The value of MyProperty in insert statement looks like this: 0x7C87541FD3F3EF5016E12D411900C87A6046A8E8.
Update: continuing to debug System.Data.SQLite.SQLiteDataReader, it looks like no matter what the SQL type is (I tried decimal, blob, unsigned big int), the type affinity is always text.
What am I doing wrong, please (either technically or in general)? Any suggestion is welcomed.
The reason for the text affinity was that the data was imported into the table from a CSV (comma-separated values) file. Switching to a SQL file with a proper INSERT statement solved the problem.
Did you look at: How do i store and retrieve a blob from sqlite? There is an article on ayende.com as well here: Lazy loading BLOBS and the like in NHibernate. These links might help push you in the right direction to see what is going on.

NHibernate "database" schema confusion [.\hibernate-mapping\#schema]

I'm using NHibernate primarily against an MSSQL database, where I've used MSSQL schemas for the various tables.
In my NH mapping (HBM) files, I've specified the schema for each table in the mapping as follows:
<?xml version="1.0"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
auto-import="true"
schema="xyz"> <!-- schema specified -->
<class name="Customer">
<id name="Id">
<generator class="native" />
</id>
<property name="Name" />
</class>
</hibernate-mapping>
For my unit testing, I've been experimenting with SQLite; however, my mappings now fail, as NH reports that the database "xyz" cannot be found.
I understand there is a difference in the interpretation of schema, so what is NH's interpretation/implementation, and what is the best approach when using schemas?
BTW: Searching the web using keywords like "nhibernate database schema" didn't yield anything relevant.
The "standard" interpretation is that a table has a three-part name: "CATALOG.SCHEMA.TABLE" : these are the names used in the standard (ISO-SQL standard?) "information_schema" views. Hibernate (presumably also NHibernate) follows this convention, you can specify catalog and schema in a class mapping, and default_catalog and default_schema in a configuration.
In my own unit test environment (using Hypersonic), I massaged the Hibernate Configuration before building the SessionFactory: I myself did this to do things like setting HSQL-compatible IdentifierGenerators, but you can problably go through clearing the schema properties of the mapped classes.
In general, I try to avoid specifying schemas and catalogs in applications at all. In Oracle, I generally create synonyms so users see the objects in their own namespace; in PostgreSQL, set the search_path in the database configuration; in SQL Server, put all tables into 'dbo'.
The NHibernate.Mapping.Table class has a GetQualifiedName(NHibernate.Dialect.Dialect dialect) method, which is defined as follows:
public string GetQualifiedName(NHibernate.Dialect.Dialect dialect)
{
string quotedName = this.GetQuotedName(dialect);
return ((this.schema == null) ?
quotedName :
(this.GetQuotedSchemaName(dialect) + '.' + quotedName));
}
So there's basically no way you can make SQLite ignore the schema name other than having a separate set of mappings for every scenario (or preprocessing them before compilation).
You can specify the schema (if you need it) in the configuration file using the property default_schema. You can use multiple configuration files, or alter the one you're using: one for production and another for test.
It's possible you can simply ignore the schema setting and use different credentials.
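To make the default_schema suggestion above concrete, a hedged sketch of a production configuration (the dialect and connection string are placeholders) might look like this, while the SQLite test configuration would simply omit the property:
<?xml version="1.0" encoding="utf-8"?>
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
    <session-factory>
        <property name="dialect">NHibernate.Dialect.MsSql2008Dialect</property>
        <property name="connection.connection_string">Server=.;Database=MyDb;Integrated Security=True</property>
        <!-- default_schema only applies to mappings that don't set schema explicitly,
             so the schema="xyz" attribute would be removed from the HBM files -->
        <property name="default_schema">xyz</property>
    </session-factory>
</hibernate-configuration>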