flyway:init doesn't find initial SQL script - HSQLDB

I'm experimenting with the flyway-maven-plugin. My database is not empty, so I need an initial DDL script. I followed the instructions in the Flyway wiki:
I put the SQL script, named V1__Base_Migration.sql, in src/main/resources/db/migration.
The configuration of the flyway-maven-plugin looks like this:
<build>
  <plugins>
    <plugin>
      <groupId>com.googlecode.flyway</groupId>
      <artifactId>flyway-maven-plugin</artifactId>
      <version>1.7</version>
      <configuration>
        <driver>org.hsqldb.jdbcDriver</driver>
        <url>jdbc:hsqldb:hsql://localhost:9001/testdb</url>
        <user>SA</user>
        <password></password>
        <schemas>PUBLIC</schemas>
        <initialVersion>1</initialVersion>
        <initialDescription>Base Migration</initialDescription>
      </configuration>
    </plugin>
  </plugins>
</build>
When I run mvn install flyway:init on the command line and inspect the database afterwards, I can find Flyway's version table, but not the table whose DDL is in the SQL script.
When I look at Maven's debug log, I cannot find any hint that the SQL script was run.
[DEBUG] Excluded: classworlds:classworlds:jar:1.1
[DEBUG] Configuring mojo com.googlecode.flyway:flyway-maven-plugin:1.7:init from plugin realm ClassRealm[plugin>com.googlecode.flyway:flyway-maven-plugin:1.7, parent: sun.misc.Launcher$AppClassLoader@11799e7]
[DEBUG] Configuring mojo 'com.googlecode.flyway:flyway-maven-plugin:1.7:init' with include-project-dependencies configurator -->
[DEBUG] (f) driver = org.hsqldb.jdbcDriver
[DEBUG] (f) initialDescription = Base Migration
[DEBUG] (f) initialVersion = 1
[DEBUG] (f) schemas = PUBLIC
[DEBUG] (f) settings = org.apache.maven.execution.SettingsAdapter@1aa246e
[DEBUG] (f) url = jdbc:hsqldb:hsql://localhost:9001/testdb
[DEBUG] (f) user = SA
[DEBUG] -- end configuration --
[DEBUG] Database: HSQL Database Engine
[INFO] Hsql does not support locking. No concurrent migration supported.
[DEBUG] Schema: PUBLIC
[INFO] Creating Metadata table: schema_version (Schema: PUBLIC)
[DEBUG] Found statement at line 17: CREATE TABLE PUBLIC.schema_version (
    version VARCHAR(20) PRIMARY KEY,
    description VARCHAR(100),
    type VARCHAR(10) NOT NULL,
    script VARCHAR(200) NOT NULL,
    checksum INT,
    installed_by VARCHAR(30) NOT NULL,
    installed_on TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    execution_time INT,
    state VARCHAR(15) NOT NULL,
    current_version BIT NOT NULL,
    CONSTRAINT PUBLIC.schema_version_script_unique UNIQUE (script)
);
[DEBUG] Found statement at line 30: CREATE INDEX PUBLIC.schema_version_current_version_index ON PUBLIC.schema_version (current_version);
[DEBUG] Executing SQL: CREATE TABLE PUBLIC.schema_version (
    version VARCHAR(20) PRIMARY KEY,
    description VARCHAR(100),
    type VARCHAR(10) NOT NULL,
    script VARCHAR(200) NOT NULL,
    checksum INT,
    installed_by VARCHAR(30) NOT NULL,
    installed_on TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    execution_time INT,
    state VARCHAR(15) NOT NULL,
    current_version BIT NOT NULL,
    CONSTRAINT PUBLIC.schema_version_script_unique UNIQUE (script)
)
[DEBUG] Executing SQL: CREATE INDEX PUBLIC.schema_version_current_version_index ON PUBLIC.schema_version (current_version)
[DEBUG] Metadata table created: schema_version (Schema: PUBLIC)
Am I doing anything wrong? It would be great if somebody could give me a hint about what I'm missing.
You can find the whole Maven test project on [github](https://github.com/skosmalla/flyway-maven-test).
Best regards,
Sandra

flyway:init is useful when you have existing tables in your production schema (say ABC and XYZ) and you decide to start using Flyway to manage your DB's lifecycle.
You can then dump the structure of the production schema into an SQL script, say V0_9__Prod.sql, to execute locally. This way you can align your dev DB with the current structure from PROD. As you add functionality, you can then add further migrations like V1__Base_Migration.sql.
When deploying to PROD, though, you do not want V0_9__Prod.sql to run again.
To avoid this, you can flyway:init the PROD schema with version 0.9.
When it runs the migrations it will then skip V0_9__Prod.sql and move straight to V1__Base_Migration.sql.
If this situation does not apply to you, you can simply run flyway:migrate. There is no need for flyway:init first.
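To make the two workflows concrete, the command sequence might look like this (a sketch; the flyway.* system properties are an assumption for this plugin version; alternatively, set initialVersion in the POM as shown above):

```shell
# On the existing PROD schema: mark it as already at version 0.9,
# so the baseline script V0_9__Prod.sql is never applied there.
mvn flyway:init -Dflyway.initialVersion=0.9 -Dflyway.initialDescription="Prod baseline"

# On any schema: apply all pending migrations. A fresh dev DB gets
# V0_9__Prod.sql and V1__Base_Migration.sql; the init'ed PROD schema
# starts directly at V1__Base_Migration.sql.
mvn flyway:migrate
```

Note that flyway:init only records a starting version in the metadata table; it never executes migration scripts itself, which is why only flyway:migrate creates the tables from the SQL script.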

Related

DBT RUN - Getting Database Error using VS Code BUT Not Getting Database Error using DBT Cloud

I'm using dbt connected to Snowflake. I currently use dbt Cloud, but we are moving to VS Code for our dbt project work.
I have an incremental dbt model that compiles and runs without error when I issue the dbt run command in dbt Cloud. Yet when I attempt to run the exact same model from the same git branch using dbt run from the terminal in VS Code, I get the following error:
Database Error in model dim_cifs (models\core_data_warehouse\dim_cifs.sql)
16:45:31 040050 (22000): SQL compilation error: cannot change column LOAN_MGMT_SYS from type VARCHAR(7) to VARCHAR(3) because reducing the byte-length of a varchar is not supported.
The table in Snowflake defines this column as VARCHAR(50). I have no idea why DBT is attempting to change the data length or why it only happens when the command is run from VS Code Terminal. There is no need to make this DDL change to the table.
When I view the compiled SQL in the Target folder there is nothing that indicates a DDL change.
When I look in the logs I find the following, but don't understand what is triggering the DDL change:
describe table "DEVELOPMENT_DW"."DBT_XXXXXXXX"."DIM_CIFS"
16:45:31.354314 [debug] [Thread-9 (]: SQL status: SUCCESS 36 in 0.09 seconds
16:45:31.378864 [debug] [Thread-9 (]:
In "DEVELOPMENT_DW"."DBT_XXXXXXXX"."DIM_CIFS":
Schema changed: True
Source columns not in target: []
Target columns not in source: []
New column types: [{'column_name': 'LOAN_MGMT_SYS', 'new_type': 'character varying(3)'}]
16:45:31.391828 [debug] [Thread-9 (]: Using snowflake connection "model.xxxxxxxxxx.dim_cifs"
16:45:31.391828 [debug] [Thread-9 (]: On model.xxxxxxxxxx.dim_cifs: /* {"app": "dbt", "dbt_version": "1.1.1", "profile_name": "xxxxxxxxxx", "target_name": "dev", "node_id": "model.xxxxxxxxxx.dim_cifs"} */
alter table "DEVELOPMENT_DW"."DBT_XXXXXXXX"."DIM_CIFS" alter "LOAN_MGMT_SYS" set data type character varying(3);
16:45:31.546962 [debug] [Thread-9 (]: Snowflake adapter: Snowflake query id: 01a5bc8d-0404-c9c1-0000-91b5178ac72a
16:45:31.548895 [debug] [Thread-9 (]: Snowflake adapter: Snowflake error: 040050 (22000): SQL compilation error: cannot change column LOAN_MGMT_SYS from type VARCHAR(7) to VARCHAR(3) because reducing the byte-length of a varchar is not supported.
Any help is greatly appreciated.

Full SQL statement logging on Dropwizard

I have a Dropwizard application using JDBI and SQL Server. I would like all SQL statements to be logged with their parameters, but I haven't been able to make that happen.
This is what's usually recommended:
logging:
  level: INFO
  loggers:
    "org.skife": TRACE
    "com.microsoft.sqlserver.jdbc": TRACE
But this only logs statements, without the parameters:
TRACE [2016-07-08 16:40:27,711] org.skife.jdbi.v2.DBI: statement:[/* LocationDAO.detail */ EXEC [api].[GetCountryCodes] @CountryId = ?] took 487 millis
DEBUG [2016-07-08 16:37:44,499] com.microsoft.sqlserver.jdbc.Connection: ENTRY /* LocationDAO.detail */ EXEC [api].[GetCountryCodes] @CountryId = ?
Is there any way to get the actual statement run against the database?
Using p6spy seems to be the easiest way to go. Just add the dependency:
<dependency>
  <groupId>p6spy</groupId>
  <artifactId>p6spy</artifactId>
  <version>2.3.1</version>
</dependency>
In the database config, use the p6spy driver class instead and slightly modify your connection URL:
database:
  driverClass: com.p6spy.engine.spy.P6SpyDriver
  url: jdbc:p6spy:sqlserver://10.0.82.95;Database=psprd1
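For completeness, p6spy is configured via a spy.properties file on the classpath. A minimal sketch might look like this (property names as in p6spy 2.x; the driver entry is an assumption for the SQL Server driver used here):

```properties
# spy.properties (on the classpath)
# Register only the real driver that p6spy should wrap.
driverlist=com.microsoft.sqlserver.jdbc.SQLServerDriver
# Route p6spy's output through SLF4J so it lands in Dropwizard's logging.
appender=com.p6spy.engine.spy.appender.Slf4JLogger
```

With this in place, each executed statement is logged with its bound parameter values substituted in.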

Liquibase diff without indexes fails when diffChangeLog declared

I'm new to Liquibase and was playing around with the diff command. It works perfectly fine, but I recently ran into something I can't figure out in this specific context.
The main problem is that I want to compare two databases without indexes. The indexes are generated dynamically on primary keys and get different names, but are in fact equivalent. Liquibase does not understand this, so I want to run diff without indexes.
So I add this to my pom.xml:
<diffTypes>tables, views, columns, primaryKeys, foreignKeys, uniqueconstraints</diffTypes>
It runs as expected; Liquibase does not compare indexes.
In the next step, I want to generate the diff as a changelog, so I add a diffChangeLogFile:
<diffTypes>tables, views, columns, primaryKeys, foreignKeys, uniqueconstraints</diffTypes>
<diffChangeLogFile>src/main/diffs/diff_test.xml</diffChangeLogFile>
When running liquibase:diff, it fails:
[ERROR] Failed to execute goal org.liquibase:liquibase-maven-plugin:3.4.1:diff (default-cli) on project liquibase_artifactID: Error setting up or running Liquibase: liquibase.command.CommandExecutionException: liquibase.exception.UnexpectedLiquibaseException: Could not resolve MissingObjectChangeGenerator dependencies due to dependency cycle. Dependencies:
[ERROR] [] -> Catalog -> []
[ERROR] [] -> Schema -> []
[ERROR] [Index] -> ForeignKey -> []
[ERROR] [] -> UniqueConstraint -> []
[ERROR] [] -> Column -> []
[ERROR] [] -> Table -> []
[ERROR] [] -> PrimaryKey -> []
[ERROR] [] -> View -> []
[ERROR] -> [Help 1]
Why does Liquibase act like this? Is it "illegal" to generate a diffChangeLog without indexes?
When indexes are included in diffTypes it works, but the generated changelog is unusable, because Liquibase wants to change the indexes with createIndex and dropIndex. Those statements are not executable (it fails to drop an index on a primary key, and it can't create an index that already exists).
Any ideas how to generate a usable changelog without indexes? Or did I just miss something?
The answer to the question is there in the exception message:
Could not resolve MissingObjectChangeGenerator dependencies due to dependency cycle.
It then lists the dependencies.
Internally, Liquibase generates a directed graph of dependencies and makes sure that all dependencies are satisfied. If you would like to see the code that does this, look at the class DiffToChangeLog and its internal private class DependencyGraph.
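The underlying mechanism is easy to reproduce: ordering object types is a topological sort, and it fails as soon as the dependency edges form a cycle. A generic sketch of that check (not Liquibase's actual code; the type names mirror the ones in the error above):

```java
import java.util.*;

public class DependencySort {
    /** Orders nodes so every prerequisite precedes its dependents;
     *  throws IllegalStateException on a cycle (Kahn's algorithm). */
    static List<String> sort(Map<String, List<String>> prereqsOf) {
        Map<String, Integer> inDegree = new HashMap<>();
        Map<String, List<String>> dependents = new HashMap<>();
        for (Map.Entry<String, List<String>> e : prereqsOf.entrySet()) {
            inDegree.putIfAbsent(e.getKey(), 0);
            for (String p : e.getValue()) {
                inDegree.merge(e.getKey(), 1, Integer::sum);
                inDegree.putIfAbsent(p, 0);
                dependents.computeIfAbsent(p, k -> new ArrayList<>()).add(e.getKey());
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : inDegree.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String n = ready.remove();
            order.add(n);
            for (String d : dependents.getOrDefault(n, List.of()))
                if (inDegree.merge(d, -1, Integer::sum) == 0) ready.add(d);
        }
        // Nodes left with a positive in-degree are stuck in a cycle.
        if (order.size() < inDegree.size())
            throw new IllegalStateException("dependency cycle");
        return order;
    }

    public static void main(String[] args) {
        // Table depends on Schema; Index and ForeignKey depend on Table,
        // plus an Index <-> ForeignKey cycle, which makes the sort fail.
        Map<String, List<String>> deps = new LinkedHashMap<>();
        deps.put("Table", List.of("Schema"));
        deps.put("Index", List.of("Table", "ForeignKey"));
        deps.put("ForeignKey", List.of("Table", "Index"));
        try {
            sort(deps);
            System.out.println("resolved");
        } catch (IllegalStateException e) {
            System.out.println("unresolvable: " + e.getMessage());
            // prints "unresolvable: dependency cycle"
        }
    }
}
```

Removing indexes from diffTypes apparently leaves the generator's graph in such an unsatisfiable state, which is what the exception reports.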

Liquibase - Error on SYSTEM.DATABASECHANGELOGLOCK while re-executing a migration

I'm using Liquibase 3.0.2, Ant task updateDatabase and change sets defined directly inside SQL scripts using comments like
--liquibase formatted sql
--changeset com.noemalife:1 dbms:oracle
etc.
The first run works fine, all change sets are executed and DB objects (oracle) are deployed. I can see DATABASECHANGELOG and DATABASECHANGELOGLOCK tables filled up.
Then I try to re-run the Ant task with the exact same configuration, expecting Liquibase to say something like "OK, everything is already deployed, nothing to do here."
But I get this instead:
C:\Users\dmusiani\Desktop\liquibase-test>ant migrate
Buildfile: build.xml
migrate:
[copy] Copying 1 file to C:\Users\dmusiani\Desktop\liquibase-test
BUILD FAILED
liquibase.exception.LockException: liquibase.exception.DatabaseException: Error executing SQL CREATE
TABLE SYSTEM.DATABASECHANGELOGLOCK (ID INTEGER NOT NULL, LOCKED NUMBER(1) NOT NULL, LOCKGRANTED TIM
ESTAMP, LOCKEDBY VARCHAR2(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID)); on jdbc:oracl
e:thin:@localhost:1521:WBMDINSERT INTO SYSTEM.DATABASECHANGELOGLOCK (ID, LOCKED) VALUES (1, 0): ORA-
00955: nome già utilizzato da un oggetto esistente
at liquibase.lockservice.LockServiceImpl.acquireLock(LockServiceImpl.java:122)
at liquibase.lockservice.LockServiceImpl.waitForLock(LockServiceImpl.java:62)
at liquibase.Liquibase.update(Liquibase.java:123)
at liquibase.integration.ant.DatabaseUpdateTask.executeWithLiquibaseClassloader(DatabaseUpda
teTask.java:45)
at liquibase.integration.ant.BaseLiquibaseTask.execute(BaseLiquibaseTask.java:70)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:357)
at org.apache.tools.ant.Target.performTasks(Target.java:385)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337)
at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1189)
at org.apache.tools.ant.Main.runBuild(Main.java:758)
at org.apache.tools.ant.Main.startAnt(Main.java:217)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:104)
Caused by: liquibase.exception.DatabaseException: Error executing SQL CREATE TABLE SYSTEM.DATABASECH
ANGELOGLOCK (ID INTEGER NOT NULL, LOCKED NUMBER(1) NOT NULL, LOCKGRANTED TIMESTAMP, LOCKEDBY VARCHAR
2(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID)); on jdbc:oracle:thin:@localhost:1521:W
BMDINSERT INTO SYSTEM.DATABASECHANGELOGLOCK (ID, LOCKED) VALUES (1, 0): ORA-00955: nome già utilizza
to da un oggetto esistente
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:56)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:98)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:64)
at liquibase.database.AbstractJdbcDatabase.checkDatabaseChangeLogLockTable(AbstractJdbcDatab
ase.java:771)
at liquibase.lockservice.LockServiceImpl.acquireLock(LockServiceImpl.java:95)
... 21 more
Caused by: java.sql.SQLException: ORA-00955: nome già utilizzato da un oggetto esistente
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:113)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:754)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:210)
at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:963)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1192)
at oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1731)
at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1701)
at liquibase.executor.jvm.JdbcExecutor$1ExecuteStatementCallback.doInStatement(JdbcExecutor.
java:86)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:49)
... 25 more
Total time: 1 second
C:\Users\dmusiani\Desktop\liquibase-test>
It seems to me that Liquibase is trying to re-create the DATABASECHANGELOGLOCK table (the ORA-00955 message is Italian for "name already in use by an existing object").
I have this problem when I run Liquibase as the Oracle "system" user (my patch takes care of creating a couple of other users, so for testing purposes I use system directly to do that).
The other strange thing is that after the system user's patch runs successfully, the lock is still marked as active in the lock table.
When I run other patches in other schemas (e.g. the ones created by the system user's patch), the patch completes successfully and the lock is released in the lock table; relaunching that patch behaves as expected: Liquibase detects the patch is already in place and does nothing.
That said, my doubt now is whether Liquibase has problems, in the system schema, detecting that the lock table already exists (and fails trying to deploy it), or whether there is some kind of locking/commit problem.
Any suggestion is welcome.
Thanks,
Davide
I'm facing the same issue as you.
From what I can see in the sources, when running as SYSTEM, the following condition (in DatabaseSnapshot#include) evaluates to true:
if (database.isSystemObject(example)) {
    return null;
}
Because of that, the creation will always be attempted.
I'm going to work further on a patch and keep you updated.
And here is a patch proposal.

Trouble running liquibase with different agent

I need to execute the same db-changelog first with Ant and then with Spring. I expect Ant to run the changelog, and that when Spring runs it will not do anything and just finish normally. Ant runs the db-changelog successfully, but when Spring runs it throws an exception; part of the stack trace:
Reason: liquibase.exception.JDBCException: Error executing SQL CREATE TABLE action (action_id int8 NOT NULL, action_name VARCHAR(255), version_no int8, reason_required BOOLEAN, comment_required BOOLEAN, step_id int8, CONSTRAINT action_pkey PRIMARY KEY (action_id)):
Caused By: Error executing SQL CREATE TABLE action (action_id int8 NOT NULL, action_name VARCHAR(255), version_no int8, reason_required BOOLEAN, comment_required BOOLEAN, step_id int8, CONSTRAINT action_pkey PRIMARY KEY (action_id)):
Caused By: ERROR: relation "action" already exists; nested exception is org.springframework.beans.factory.BeanCreationException....
Any help will be much appreciated.
Regards,
It does sound like it is trying to run the changelog again. Each changeSet in the changelog is identified by a combination of its id, author, and the changelog path/filename. If you run "select * from databasechangelog" you can see the values used.
Your problem may be that you are referencing the changelog file differently from Ant and Spring, therefore generating different filename values. Usually you will want to include the changelog in the classpath, so that no matter where and how you run it, it has the same path (like "com/example/db.changelog.xml").
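To verify which filename values the two runners actually recorded, you can inspect the identifying columns directly (a sketch; table and column names are Liquibase's defaults):

```sql
-- Each changeSet is keyed by (id, author, filename). If Ant and Spring
-- recorded different FILENAME values for the same changeSet, Liquibase
-- treats the second run as brand-new changeSets and re-executes them.
SELECT id, author, filename, dateexecuted
FROM databasechangelog
ORDER BY dateexecuted;
```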
I ran into this same problem and was able to fix it by altering the filename column of DATABASECHANGELOG to reference the spring resource path. In my case, I was using a ServletContextResource under the WEB-INF directory:
update DATABASECHANGELOG set FILENAME = 'WEB-INF/path/to/changelog.xml' where FILENAME = 'changelog.xml'