Liquibase unable to execute against a Snowflake schema with a mixed-case name, e.g. (This_Schema)

I have tried using the Liquibase tool with our Snowflake DB. Everything works when the schema name is all uppercase, but Liquibase is not picking up any of my schemas with mixed-case names, e.g. (This_Schema).
I have tried adding this, but it didn't help.
<defaultSchemaName>This_Schema</defaultSchemaName>
pom.xml configuration example:
<driver>net.snowflake.client.jdbc.SnowflakeDriver</driver>
<url>jdbc:snowflake://${env.SNOWFLAKE_ACCOUNT}.eu-central-1.snowflakecomputing.com/?db=${env.SNOWFLAKE_DB}&schema=${env.SNOWFLAKE_SCHEMA}&warehouse=${env.SNOWFLAKE_WH}&role=${env.SNOWFLAKE_ROLE}</url>
<username>${env.SNOWFLAKE_USERNAME}</username>
<password>${env.SNOWFLAKE_PASSWORD}</password>
Error setting up or running Liquibase: liquibase.exception.DatabaseException: SQL compilation error:
[ERROR] Schema 'LIQUIBASE_DB.THIS_SCHEMA' does not exist. [Failed SQL: CREATE TABLE THIS_SCHEMA.DATABASECHANGELOGLOCK (ID INT NOT NULL, LOCKED BOOLEAN NOT NULL, LOCKGRANTED TIMESTAMP_NTZ, LOCKEDBY VARCHAR(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID))]
NOTE: "This_Schema" is the name of my schema as it is showing here, but upon executing liquibase update this automatically changes to UPPERCASE value as in error above.

I found this comment in the README file of the Liquibase Snowflake extension.
The Snowflake JDBC drivers implementation of
DatabaseMetadata.getTables() hard codes quotes around the catalog,
schema and table names, resulting in queries of the form:
show tables like 'DATABASECHANGELOG' in schema "sample_db"."sample_schema"
This results in the DATABASECHANGELOG table not being found, even
after it has been created. Since Snowflake stores catalog and schema
names in upper case, the getJdbcCatalogName returns an upper case
value.
Could this explain your problems?...
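You can see this case-folding behaviour in Snowflake itself (a sketch, reusing the This_Schema name from the question; not the extension's actual code):
-- Snowflake folds unquoted identifiers to upper case, so this statement looks for
-- a schema named THIS_SCHEMA and fails if only "This_Schema" exists:
CREATE TABLE This_Schema.DATABASECHANGELOGLOCK (ID INT NOT NULL);
-- Only a quoted identifier preserves the mixed case and matches the schema
-- that was created as "This_Schema":
CREATE TABLE "This_Schema".DATABASECHANGELOGLOCK (ID INT NOT NULL);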

Related

Flyway: Can't insert the value NULL in 'installed_on', table 'schema_version' column does not allow nulls. INSERT fails

Using flyway-core 4.1.2 for database migration. I added a new DDL file for Flyway to execute. Flyway executes the DDL correctly and makes the corresponding changes to tables and columns. (We're adding a table and altering some previous columns in the new DDL.) But Flyway fails to register this attempt in the schema_version table; I get the following error:
Current version of schema [dbo]: 2.1
Unable to insert row for version '3.0' in metadata table [dbo].[schema_version]
Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]: Invocation of init method failed; nested exception is org.flywaydb.core.internal.dbsupport.FlywaySqlException:
Message : Cannot insert the value NULL into column 'installed_on', table 'dbo.schema_version'; column does not allow nulls. INSERT fails.
Flyway successfully executes the DDL; however, it fails to log it to the schema_version table due to the NULL in installed_on. Any help will be greatly appreciated. Thanks in advance.
In my case the error was that the database table flyway_schema_history had the column installed_on defined as DATETIME NOT NULL while it should have been DATETIME DEFAULT GETDATE() NOT NULL.
The issue was resolved when I manually altered the column to include the default value definition.
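On SQL Server the manual fix might look roughly like this (a sketch; the constraint name is made up, and the table may be named schema_version or flyway_schema_history depending on the Flyway version):
-- Add a default so inserts that omit installed_on get the current timestamp.
ALTER TABLE dbo.flyway_schema_history
    ADD CONSTRAINT DF_flyway_schema_history_installed_on
    DEFAULT GETDATE() FOR installed_on;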
My company has a number of databases which were created over the last 3 years, and I have noticed that the oldest and the youngest of them have the column set properly, while the ones from around 1.5 years ago have the column defined without the default. Perhaps it was a bug in some older versions of Flyway?

How to tell HSQLDB to allow identity definition for SERIAL?

I'm currently writing tests for a Spring Boot application which uses a PostgreSQL database. During tests I want to replace the database with some in-memory variant like H2 or HSQLDB. Sadly, neither behaves the same as the PostgreSQL database.
I have migrations that look like
CREATE TABLE foo(id BIGSERIAL PRIMARY KEY, ...)
This results in HSQLDB telling me
SQL State : 42525
Error Code : -5525
Message : identity definition not allowed: FOO_ID
So apparently creating the matching sequence for the primary key is forbidden. Is there a way to tell hsqldb to accept this?
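For context, in PostgreSQL the BIGSERIAL shorthand roughly expands to a plain BIGINT column backed by an implicitly created sequence (a sketch, not the exact DDL PostgreSQL generates):
CREATE SEQUENCE foo_id_seq;
CREATE TABLE foo(id BIGINT NOT NULL DEFAULT nextval('foo_id_seq') PRIMARY KEY);
ALTER SEQUENCE foo_id_seq OWNED BY foo.id;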
You need to set PostgreSQL compatibility mode in HSQLDB.
SET DATABASE SQL SYNTAX PGS TRUE
Your table definition is then accepted and converted internally to the SQL Standard equivalent.
CREATE TABLE FOO(ID BIGINT GENERATED BY DEFAULT AS IDENTITY(START WITH 1) NOT NULL PRIMARY KEY, ..
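If you would rather not execute a SET statement in every test, HSQLDB also accepts the equivalent connection property in the JDBC URL (the database name testdb is just an example):
jdbc:hsqldb:mem:testdb;sql.syntax_pgs=true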

Modify SQL changeset

Liquibase provides a <validCheckSum> tag to allow for a new checksum to be specified in case we want to modify an existing changeset.
However, this tag is not a valid attribute for SQL-formatted changesets. There is runOnChange, but that's different.
Is there any way of achieving that?
Basically, I made a mistake on a changeset, and I can't even add a rollback command because liquibase detects the checksum change and spits an error, so I'm stuck.
EDIT
The concrete changeset that I'm trying to change is:
--liquibase formatted sql
--changeset myname:0
ALTER TABLE `customers`
CHANGE COLUMN `name` `firstName` VARCHAR(45) NULL;
--changeset myname:1
ALTER TABLE `customers`
ADD COLUMN `lastName` VARCHAR(45) NULL AFTER `firstName`;
And I keep it in a file changelog_1.05.sql. Finally, I include that file in my changelog.xml:
<include file="changelog_1.05.sql" relativeToChangelogFile="true"/>
I can't add <validCheckSum> because it is a SQL-formatted file, so no XML tags can be added there.
Even though it is not documented, looking at the source it appears that validCheckSum is a valid attribute in a formatted SQL changelog. You can see that line 105 in FormattedSqlChangeLogParser.java has code to look for this attribute.
I ended up here trying to use the validCheckSum with SQL files in Liquibase 3.9.0.
It works, but only when "--validCheckSum" is on its own line, without other attributes (as opposed to attributes such as "--runAlways"):
--changeset me:test --runAlways:true --splitStatements:false
--validCheckSum: 1:any
This seems to be due to the regex for parsing the attribute:
https://github.com/liquibase/liquibase/blob/17fcfe4f8dae96ddb4caa6793051e47ce64ad933/liquibase-core/src/main/java/liquibase/parser/core/formattedsql/FormattedSqlChangeLogParser.java#L87
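Applied to the first changeset from the question, that might look like this (1:any tells Liquibase to accept any checksum; alternatively, put the specific checksum value reported in the error message):
--liquibase formatted sql
--changeset myname:0
--validCheckSum: 1:any
ALTER TABLE `customers`
CHANGE COLUMN `name` `firstName` VARCHAR(45) NULL;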

Does Firebird Database support Schema? If so, how can I create a schema in Firebird DB through ISQL?

Does Firebird support schemas? If so, how can I create a schema in a Firebird DB through ISQL? Please help me to create schemas in Firebird. I have tried to retrieve the schemas using
AbstractDatabaseMetaData.getSchemas()
but it always returns an empty result set. Can anyone please help me in retrieving schemas, at least a SYSTEM schema when there is no schema?
Firebird currently doesn't have schemas, and therefore Jaybird doesn't return any. This complies with the JDBC specification, which says:
If a given form of metadata is not available, an empty ResultSet will be returned.
Note that Firebird does have a CREATE SCHEMA, but that is simply an alias for CREATE DATABASE.
No, schemas are not supported, but you can create multiple database files. See the manual for creating a database with the Firebird tools; you can also create a database with IBExpert or a similar tool.
You can also write an SQL script to automate it:
$ isql -q -i create-db.sql
You can also run it from Java code.
--Contents of create-db.sql
CREATE DATABASE '/my/path/my-db.fdb' page_size 8192 USER 'SYSDBA' PASSWORD 'masterkey';
CREATE EXCEPTION EX_SOME_EXCEPTION 'Some exception message';
CREATE TABLE ROOMS (
ID integer NOT NULL PRIMARY KEY,
Number char(10),
Name char(100),
Network char(100),
Memo char(100)
);
CREATE GENERATOR ROOMS_IDGEN;
SET TERM !! ;
CREATE TRIGGER ON_ROOMS_INS FOR ROOMS BEFORE INSERT AS
BEGIN
IF (NEW.ID IS NULL) THEN NEW.ID=GEN_ID(ROOMS_IDGEN, 1);
END !!
SET TERM ; !!
....

Purpose of uploading a schema file

I'm attempting to make a table for the first time using postgres and the examples I'm seeing are kind of throwing me off. When it comes to creating a schema, I have a schema.sql file that contains my schema as follows:
CREATE TABLE IF NOT EXISTS orders
(
order_id INTEGER NOT NULL,
order_amount INTEGER NOT NULL
);
COMMENT ON COLUMN orders.order_id IS 'The order ID';
COMMENT ON COLUMN orders.order_amount IS 'The order amount';
Now I'd upload that schema by doing the following:
psql -d mydb -f /usr/share/schema.sql
Now when it comes time to create the table, I'm supposed to do something like this:
create table schema.orders(
order_id INT NOT NULL,
order_amount INT NOT NULL
);
The uploading of the schema.sql file is what confuses me. What is all the information inside the file used for? I thought that by uploading the schema I'm providing the model to create the table, but running create table schema.orders seems to be doing just that.
What you call "upload" is actually executing a script file (with SQL DDL commands in it).
I thought by uploading the schema i'm providing the model to create the table
You are creating the table by executing that script. The second CREATE TABLE command is almost but not quite doing the same. Crucial difference (besides the missing comments): A schema-qualified table name. And your schema happens to be named "schema", which is a pretty bad idea, but allowed.
Now, the term "schema" is used for two different things:
The general database structure created with SQL DDL commands.
A SCHEMA which is similar to a directory in a file system.
The term just happens to be the same for either, but one has nothing to do with the other.
Depending on the schema search path, the first invocation of CREATE TABLE may or may not have created another table in a different schema. You need to understand the role of the search path in Postgres:
How does the search_path influence identifier resolution and the "current schema"
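A short illustration of the difference (a sketch; the schema name myschema and the search_path shown are just examples):
-- Unqualified: the table ends up in the first existing schema on the search_path,
-- typically "public".
SHOW search_path;   -- e.g. "$user", public
CREATE TABLE IF NOT EXISTS orders (order_id INTEGER NOT NULL);
-- Schema-qualified: the table is created in that specific schema,
-- regardless of the search_path.
CREATE SCHEMA IF NOT EXISTS myschema;
CREATE TABLE IF NOT EXISTS myschema.orders (order_id INTEGER NOT NULL);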