I'm using NHibernate primarily against an MSSQL database, where I've used MSSQL schemas for the various tables.
In my NH mapping (HBM) files, I've specified the schema for each table in the mapping as follows:
<?xml version="1.0"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
auto-import="true"
schema="xyz"> <!-- schema specified -->
<class name="Customer">
<id name="Id">
<generator class="native" />
</id>
<property name="Name" />
</class>
</hibernate-mapping>
For my unit testing, I've been experimenting with SQLite; however, my mappings now fail, as NH reports that the database "xyz" cannot be found.
I understand that databases interpret "schema" differently, so what is NH's interpretation/implementation, and what is the best approach when using schemas?
BTW: Searching the web using keywords like "nhibernate database schema" didn't yield anything relevant.
The "standard" interpretation is that a table has a three-part name, "CATALOG.SCHEMA.TABLE": these are the names used in the standard (ISO SQL) "information_schema" views. Hibernate (and presumably also NHibernate) follows this convention: you can specify catalog and schema in a class mapping, and default_catalog and default_schema in a configuration.
In my own unit test environment (using Hypersonic), I massaged the Hibernate Configuration before building the SessionFactory. I did this to set HSQL-compatible IdentifierGenerators, but you could probably go through the mapped classes and clear their schema properties in the same way.
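In NHibernate terms, that massaging step might look something like this (a sketch against the NHibernate 2.x API; the configuration setup is assumed):

```csharp
// Sketch: clear the schema from every mapped table before the
// SessionFactory is built, so SQLite sees unqualified table names.
var cfg = new NHibernate.Cfg.Configuration().Configure();

foreach (var persistentClass in cfg.ClassMappings)
{
    // Table.Schema is what GetQualifiedName() prepends to the table name
    persistentClass.Table.Schema = null;
}

var sessionFactory = cfg.BuildSessionFactory();
```

Doing this only in the test bootstrap leaves the production mappings (with schema="xyz") untouched.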
In general, I try to avoid specifying schemas and catalogs in applications at all. In Oracle, I generally create synonyms so users see the objects in their own namespace; in PostgreSQL, I set the search_path in the database configuration; in SQL Server, I put all tables into 'dbo'.
The NHibernate.Mapping.Table class has a GetQualifiedName(NHibernate.Dialect.Dialect dialect) method, which is defined as follows:
public string GetQualifiedName(NHibernate.Dialect.Dialect dialect)
{
    string quotedName = this.GetQuotedName(dialect);
    return (this.schema == null)
        ? quotedName
        : (this.GetQuotedSchemaName(dialect) + '.' + quotedName);
}
So there's basically no way to make SQLite ignore the schema name, other than maintaining a separate set of mappings for each scenario (or preprocessing them before compilation).
You can specify the schema (if you need it) in the configuration file using property default_schema. You can use multiple configuration files, or alter the one you're using - one for production and the other for test.
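For instance, the production configuration could set the schema like this (a sketch of the relevant hibernate.cfg.xml fragment; the schema name "xyz" is taken from the mapping above):

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- applied to all mappings that don't set schema explicitly -->
    <property name="default_schema">xyz</property>
  </session-factory>
</hibernate-configuration>
```

A second configuration file for the SQLite tests would simply omit the property.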
It's possible you can simply ignore the schema setting and use different credentials.
Given the following SQL:
create table something(
id BIGSERIAL,
something TEXT[] NOT NULL DEFAULT '{}',
PRIMARY KEY (id)
);
and instructing the code generator to use DDLDatabase,
the generated field is of the form
public final TableField<JSomethingRecord, Object[]> SOMETHING_
Looking around in the documentation, I cannot find how this can be mapped to a String[].
The same applies for varchar and varchar(255).
I shouldn't have to use a forced type here, as at least one of the three should be a valid datatype
and not fall back to OTHER, as happens with UUID (for which I saw there is an example using forced types).
Am I doing / understanding something wrong, or is this expected behaviour?
The database I am using is PostgreSQL, and the generator configuration is the following:
<generator>
<database>
<name>org.jooq.meta.extensions.ddl.DDLDatabase</name>
<inputCatalog/>
<inputSchema>PUBLIC</inputSchema>
<properties>
<property>
<key>use-attribute-converters</key>
<value>true</value>
</property>
<property>
<key>scripts</key>
<value>src/main/resources/db/migration/*</value>
</property>
</properties>
</database>
<target>
<clean>true</clean>
<packageName>my.other.package</packageName>
<directory>target/generated-sources/jooq</directory>
</target>
</generator>
Thank you in advance
As of jOOQ 3.13, PostgreSQL's typed arrays are not yet supported by the DDLDatabase, because the current implementation of the DDLDatabase translates your DDL to H2 behind the scenes, and H2 1.4.200's ARRAY type does not support any other type of array than Object[].
This will change in the future, as:
H2 1.4.201 will support typed arrays like PostgreSQL: https://github.com/h2database/h2database/issues/1390
jOOQ will support running your DDL on an actual PostgreSQL database in test containers: https://github.com/jOOQ/jOOQ/issues/6551
jOOQ will support interpreting the DDL instead of running it on a third party database product: https://github.com/jOOQ/jOOQ/issues/7034
Until then, in order to use such PostgreSQL-specific features, I recommend using the classic approach of connecting to an actual PostgreSQL database instance.
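In configuration terms, that classic approach might look roughly like this (a sketch; the JDBC URL, credentials, and schema name are placeholders, and org.jooq.meta.postgres.PostgresDatabase is the code generator used for a live PostgreSQL connection):

```xml
<jdbc>
  <driver>org.postgresql.Driver</driver>
  <url>jdbc:postgresql://localhost:5432/mydb</url>
  <user>user</user>
  <password>password</password>
</jdbc>
<generator>
  <database>
    <name>org.jooq.meta.postgres.PostgresDatabase</name>
    <inputSchema>public</inputSchema>
  </database>
  <target>
    <packageName>my.other.package</packageName>
    <directory>target/generated-sources/jooq</directory>
  </target>
</generator>
```

With a live PostgreSQL connection, the generator reads the native array types directly, so a text[] column should come out typed as String[] rather than Object[].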
I just started using Liquibase and stumbled upon the problem of differentiating between the capabilities of different databases.
We would like to support multiple databases (Oracle, MySQL, Derby - to name three).
They all have different capabilities. Specifically, Oracle supports sequences whereas MySQL and Derby do not.
When I let Hibernate generate the DDL, I can choose different dialects, and it will take these different capabilities into account, generating a sequence when using Oracle and a plain table (for ID generation) when using Derby or MySQL.
Now, I know I can constrain changesets by specifying 'oracle' in the dbms attribute. But then how can I do the plain-table solution for the other databases? There does not seem to be a 'not oracle' attribute for dbms.
How does anyone else handle this? (I could not find anything about it on the liquibase pages nor on the forum.)
Try using a precondition on the changeset. Boolean operations are supported.
For example
<preConditions onFail="CONTINUE">
<or>
<dbms type="oracle" />
<dbms type="mysql" />
</or>
</preConditions>
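For the 'not oracle' case from the question, preconditions also support negation, so the plain-table fallback could be guarded like this (a sketch; the changeset id, author, and table name are made up):

```xml
<changeSet id="id-table-fallback" author="example">
  <preConditions onFail="MARK_RAN">
    <not>
      <dbms type="oracle"/>
    </not>
  </preConditions>
  <!-- plain table used for ID generation on non-Oracle databases -->
  <createTable tableName="id_generator">
    <column name="next_val" type="bigint"/>
  </createTable>
</changeSet>
```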
An alternative approach is to put all your sequences in a changelog file which you include in your main changelog, and then do something like this:
<changeSet
dbms="oracle,db2,db2i"
author="mccallim (generated)"
id="1419011907193-1"
>
<createSequence
schemaName="${main.schema}"
...
That changeset only gets executed for the DBMSs listed.
Hey Guys,
I'm using nhibernate 2.2 and ran into a problem that I can't seem to find an answer to. My program is using a default schema assigned in the hibernate.cfg.xml file like this:
<property name="default_schema">MY_SCHEMA</property>
which works as advertised for all generated SQL statements, however I have statements in a formula that need to be assigned the default schema as well:
<property name="Count" type="int" formula="SELECT COUNT(*) FROM DETAILS WHERE DETAILS.ID = ID" />
MY_SCHEMA changes relatively often, so I need the SQL to be interpreted as <property name="Count" type="int" formula="SELECT COUNT(*) FROM MY_SCHEMA.DETAILS WHERE DETAILS.ID = ID" />
Is this possible without resorting to hardcoded schemas? Thanks!
Kevin
You can change your mappings on the fly when building the session factory.
Of course that's easier to do if you use a code-based mapping solution, like Fluent or ConfORM.
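One way to sketch the on-the-fly approach without leaving hbm.xml files (hypothetical: the {schema} placeholder, file name, and substitution value are made up for illustration):

```csharp
// Sketch: keep a {schema} placeholder in the formula inside the hbm.xml,
// and substitute the real schema before handing the mapping to NHibernate.
var cfg = new NHibernate.Cfg.Configuration().Configure();

string hbm = System.IO.File.ReadAllText("Customer.hbm.xml");
cfg.AddXml(hbm.Replace("{schema}", "MY_SCHEMA"));

var sessionFactory = cfg.BuildSessionFactory();
```

The schema value itself can then come from configuration, so nothing is hardcoded in the mapping.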
Trying to store property of C#/.NET type byte[] in SQLite. Here is my mapping:
<class name="MyClass" lazy="false" table="MyTable">
<property name="MyProperty" type ="BinaryBlob" access="property" />
</class>
In SQL Server it works like a charm, even without the explicit type="BinaryBlob" in the mapping. In SQLite I've tried various combinations of SQL types in the CREATE TABLE statement and NHibernate data types, but each time I get either an exception that "the mapping can't be compiled" (because of type incompatibility) or an exception that a cast from the fetched datatype to the mapping type is impossible.
The value of MyProperty in insert statement looks like this: 0x7C87541FD3F3EF5016E12D411900C87A6046A8E8.
Update: continuing to debug System.Data.SQLite.SQLiteDataReader - it looks like no matter what the declared SQL type is (I tried decimal, blob, unsigned big int), the type affinity is always text.
What am I doing wrong, please (either technically or in general)? Any suggestion is welcomed.
The reason for the text affinity was that the data had been imported into the table from a CSV (comma-separated values) file. Switching to an SQL file with a proper INSERT statement solved the problem.
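Such an import script might look like this (a sketch; the table and column names follow the mapping above, and SQLite's X'...' syntax produces a true blob value rather than a text string):

```sql
CREATE TABLE MyTable (
    Id INTEGER PRIMARY KEY,
    MyProperty BLOB
);

-- X'...' is a hex blob literal, not text
INSERT INTO MyTable (Id, MyProperty)
VALUES (1, X'7C87541FD3F3EF5016E12D411900C87A6046A8E8');
```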
Did you look at: How do i store and retrieve a blob from sqlite? There is an article on ayende.com as well here: Lazy loading BLOBS and the like in NHibernate. These links might help push you in the right direction to see what is going on.
I was reading a blog-post of Ben Scheirman about some NHibernate tweaks he made in order to increase the performance.
In the end of article there is:
Lesson #7: Always make sure you’ve set hibernate.default_schema
What does he mean by hibernate.default_schema?
I'm not a dba so I can't give you a good definition of schema... (to me it is just 'database' in SQL Server).
In NHibernate you can specify the schema two places: in the mapping files, in the configuration.
The mapping file lets you specify a schema per class. This is good when you have classes coming from different schemas on the same server.
The SessionFactory configuration lets you specify a default schema (the default_schema option) that is applied to all class mappings that don't explicitly set their schema. So it's a catch-all.
From reading your link, it seems this is beneficial for performance because when you query table "Bar" without specifying the schema (say the database is "Foo", so the schema is "Foo.dbo" in SQL Server), the query plan isn't cached. This is probably because SQL Server has to resolve which schema to use from your connection string (Initial Catalog, Database, etc.) instead of having it explicit in the query ("Bar" implicit - not cached; "Foo.dbo.Bar" explicit - cached).
Again, I'm not a dba so these definitions suck :)
edit:
Here is a link to the configuration stuff (for NH 1.2 ... which is old ... but the default_schema option is there):
https://www.hibernate.org/hib_docs/nhibernate/1.2/reference/en/html/session-configuration.html