I'm currently experimenting with Liquibase to generate SQL for our database migrations. Due to some constraints within our environment, we need to generate the SQL "offline" and then have that executed against the target database(s) by a DBA.
I've been able to use updateSQL / rollbackSQL with the Maven plugin to generate the SQL and that seems to work fine.
However, the output does not include any of the metadata information - i.e. there are no creates for the DATABASECHANGELOG table and none of the inserts for that table are included in the generated script.
Is it possible to include the metadata information in the generated SQL?
I'm using Liquibase 3.1.1 (Maven plugin is the same version). I've also tried this from the command line and the behaviour is consistent - i.e. I get the actual changes generated, but not the metadata.
There is currently no support for this in 3.1.1. It will hopefully be added as a feature in 3.2: https://liquibase.jira.com/browse/CORE-1726.
Are you able to run updateSQL against a backup database that matches production? That still will not execute anything, but it will include the metadata statements as well. The backup would really only need the DATABASECHANGELOG table, because that is all Liquibase reads unless you are using preconditions.
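As a rough sketch, assuming a SQL Server-style syntax (the database names here are placeholders), seeding a scratch database with just the production change history would be enough for updateSQL to compute the correct delta:
-- copy only the change history from a restored production backup
-- into the scratch database that updateSQL will run against
SELECT *
INTO scratch_db.dbo.DATABASECHANGELOG
FROM prod_backup.dbo.DATABASECHANGELOG;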
Running the main method with the option "outputLiquibaseSql=true" as shown here:
liquibase.integration.commandline.Main.main(new String[]{
        "--changeLogFile=src/test/resources/db.changelog.xml",
        "--outputFile=target/updateSql.txt",
        "--url=offline:unknown?outputLiquibaseSql=true",
        "updateSQL"});
Generates SQL like:
-- *********************************************************************
-- Update Database Script
-- *********************************************************************
-- Change Log: src/test/resources/db.changelog.xml
-- Ran at: 12/04/20 11:51
-- Against: null#offline:unknown?outputLiquibaseSql=true
-- Liquibase version: 3.8.9
-- *********************************************************************
CREATE TABLE DATABASECHANGELOG (ID VARCHAR(255) NOT NULL, AUTHOR VARCHAR(255) NOT NULL, FILENAME VARCHAR(255) NOT NULL, DATEEXECUTED datetime NOT NULL, ORDEREXECUTED INT NOT NULL, EXECTYPE VARCHAR(10) NOT NULL, MD5SUM VARCHAR(35), DESCRIPTION VARCHAR(255), COMMENTS VARCHAR(255), TAG VARCHAR(255), LIQUIBASE VARCHAR(20), CONTEXTS VARCHAR(255), LABELS VARCHAR(255), DEPLOYMENT_ID VARCHAR(10));
-- Changeset src/test/resources/db.changelog.xml::createTable-example::liquibase-docs
CREATE TABLE public.person (address VARCHAR(255));
INSERT INTO DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('createTable-example', 'liquibase-docs', 'src/test/resources/db.changelog.xml', CURRENT_TIMESTAMP, 1, '8:49e8eb557129b33d282c4ad2fdc5d4d9', 'createTable tableName=person', '', 'EXECUTED', NULL, NULL, '3.8.9', '6688703163');
Because it is running in "offline:unknown" mode, it also writes a CSV file containing the entries to record in the DATABASECHANGELOG table (offline runs use this file as their change history):
"ID","AUTHOR","FILENAME","DATEEXECUTED","ORDEREXECUTED","EXECTYPE","MD5SUM","DESCRIPTION","COMMENTS","TAG","LIQUIBASE","CONTEXTS","LABELS","DEPLOYMENT_ID"
"createTable-example","liquibase-docs","src/test/resources/db.changelog.xml","2020-04-12T11:51:43.178","2","EXECUTED","8:49e8eb557129b33d282c4ad2fdc5d4d9","createTable tableName=person",,"","3.8.9","()","","6688703163"
This is the first version of my changelog.sql:
-- liquibase formatted sql
-- changeset kh:1
CREATE TABLE test_table (test_id INT, test_column VARCHAR(256), PRIMARY KEY (test_id))
-- changeset kh:2
INSERT INTO test_table (test_id, test_column) VALUES(3,'saket');
This is an updated version of my changelog.sql (I added a column to the first changeset):
-- liquibase formatted sql
-- changeset kh:1
CREATE TABLE test_table (test_id INT, test_column VARCHAR(256), test_column2 VARCHAR(256), PRIMARY KEY (test_id))
-- changeset kh:2
INSERT INTO test_table (test_id, test_column) VALUES(3,'saket');
I execute a liquibase update with the following command:
docker run --rm -v /changelog:/liquibase/changelog liquibase/liquibase \
--url=jdbc:postgresql://xxxxxxxxxx:5432/postgres \
--changelog-file=changelog.sql --username=xxxx \
--password=xxxx update
I get this error:
Caused by: liquibase.exception.ValidationFailedException: Validation Failed:
1 changesets check sum
changelog.sql::1::kh was: 8:46ea95d67274343c559a1c5ddc8ee33 but is now: 8:ab7361c532323a6a32bc79d230a46574
I understand that in a production environment it should fail, so that scripts are not re-run by mistake, but how should this work in a non-production environment?
I can imagine three solutions in a DevOps (non-prod) scenario:
Restore the database as a first step and execute the changelog for the specific version.
For non-prod, ignore the checksum validation.
Modify the changelog and add a new changeset with the change (for my taste, this is the least correct).
In a non-production environment, it may be necessary to make changes to SQL until the new functionality is validated. What is the best practice, and is there any other solution?
The checksum error indicates that you are modifying a changeset that has already been executed against that particular database target, so Liquibase cannot tell whether the modification is intentional.
The SQL statement CREATE TABLE test_table cannot be executed again, so adding the extra column to that changeset has no effect.
If you want to add a column to a table that has already been created, you really only have 2 options:
Drop and recreate the table to include the new column
Alter the table to add the new column
Option 1 can be accomplished by using Liquibase rollback, followed by modifying the changeset, and then running Liquibase update again.
Option 2 can be accomplished by adding a new changeset (see the sketch below).
Both are 100% valid options.
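As a rough sketch, option 2 in the formatted-SQL syntax would be a new changeset appended to the existing changelog.sql (the id kh:3 is just an example):
-- changeset kh:3
ALTER TABLE test_table ADD COLUMN test_column2 VARCHAR(256);
For option 1, keep in mind that a formatted-SQL changeset only supports rollback if you declare it explicitly, e.g. by adding --rollback DROP TABLE test_table; under changeset kh:1.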
I have tried using the Liquibase tool with our Snowflake DB. It all works where the schema name is in all capitals (UPPERCASE), but Liquibase is not picking up any of my schemas with mixed-case names, e.g. This_Schema.
I have tried setting this, but it didn't help:
<defaultSchemaName>This_Schema</defaultSchemaName>
pom.xml configuration example:
<driver>net.snowflake.client.jdbc.SnowflakeDriver</driver>
<url>jdbc:snowflake://${env.SNOWFLAKE_ACCOUNT}.eu-central-1.snowflakecomputing.com/?db=${env.SNOWFLAKE_DB}&amp;schema=${env.SNOWFLAKE_SCHEMA}&amp;warehouse=${env.SNOWFLAKE_WH}&amp;role=${env.SNOWFLAKE_ROLE}</url>
<username>${env.SNOWFLAKE_USERNAME}</username>
<password>${env.SNOWFLAKE_PASSWORD}</password>
Error setting up or running Liquibase: liquibase.exception.DatabaseException: SQL compilation error:
[ERROR] Schema 'LIQUIBASE_DB.THIS_SCHEMA' does not exist. [Failed SQL: CREATE TABLE THIS_SCHEMA.DATABASECHANGELOGLOCK (ID INT NOT NULL, LOCKED BOOLEAN NOT NULL, LOCKGRANTED TIMESTAMP_NTZ, LOCKEDBY VARCHAR(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID))]
NOTE: "This_Schema" is the name of my schema exactly as shown here, but on executing liquibase update it is automatically converted to the uppercase value seen in the error above.
Found this comment in the README file of the Liquibase Snowflake extension:
The Snowflake JDBC drivers implementation of
DatabaseMetadata.getTables() hard codes quotes around the catalog,
schema and table names, resulting in queries of the form:
show tables like 'DATABASECHANGELOG' in schema "sample_db"."sample_schema"
This results in the DATABASECHANGELOG table not being found, even
after it has been created. Since Snowflake stores catalog and schema
names in upper case, the getJdbcCatalogName returns an upper case
value.
Could this explain your problem?
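For reference, a quick illustration of Snowflake's identifier case folding (the statements below are illustrative):
-- unquoted identifiers are folded to upper case
CREATE SCHEMA This_Schema;     -- stored and matched as THIS_SCHEMA
-- quoted identifiers keep their case and must be quoted on every reference
CREATE SCHEMA "This_Schema";   -- stored as This_Schema
So if the schema was created with quotes, a tool that upper-cases unquoted names, as Liquibase does in the failing SQL above, will not find it.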
I have an SQL server database with a lot of tables and data. I need to reproduce it locally in a docker container.
I have successfully exported the schema and reproduced it. However, when I dump the data to an SQL file, it does not export automatically generated fields (like ids or uuids, for example).
Here is the schema for the user table:
create table user (
id_user bigint identity constraint PK_user primary key,
uuid uniqueidentifier default newsequentialid() not null,
id_salarie bigint constraint FK_user_salarie references salarie,
date_creation datetime,
login nvarchar(100)
)
When it exports an element from this table, I get this kind of insert:
INSERT INTO user(id_salarie, date_creation, login) VALUES (1, null, 'example')
As a consequence, most of my inserts give me foreign key errors, because the ids generated by my new database are not the same as the ones in the old database. I can't change everything manually as there is way too much data.
Instead, I would like to have this kind of insert:
INSERT INTO user(id_user, uuid, id_salarie, date_creation, login) VALUES (1, 'manuallyentereduuid', 1, null, 'example')
Is there any way to do this with DataGrip directly? Or maybe a SQL Server-specific way of generating insert statements like this?
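For what it's worth, I know that SQL Server only accepts explicit values for an identity column while IDENTITY_INSERT is switched on, so I'd expect the generated script to need a wrapper like this (using the user table above; the uuid value is a placeholder):
SET IDENTITY_INSERT [user] ON;
INSERT INTO [user](id_user, uuid, id_salarie, date_creation, login)
VALUES (1, 'manuallyentereduuid', 1, null, 'example'); -- placeholder uuid, not a valid GUID
SET IDENTITY_INSERT [user] OFF;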
Don't hesitate to ask for more details in comments.
You need the option 'Skip generated columns' when configuring the INSERT extractor.
It seems like DataGrip does not give you that possibility, so I used something else: DBeaver. It is free and based on the Eclipse environment.
The method is simple:
Select all the tables you want to export
Right click -> Export table data
From there you just have to follow the instructions. It outputs one file per table, which is helpful when you have a large volume of data; I had trouble executing the whole script as a single file in DataGrip and had to split it.
Hope this helps anyone encountering the same problem. If you find the solution directly in DataGrip, I would like to know too.
EDIT: See the answer above.
I'm kicking the tires on Snowflake DB and wanted to see how it works with Liquibase. I'm running into an issue when creating the DATABASECHANGELOG table, as Snowflake has a TIMESTAMP data type but Liquibase tries to issue SQL with a data type of DATETIME.
I followed the idea on http://www.liquibase.org/databases.html and just created the DATABASECHANGELOG table outside of the Liquibase deployment:
CREATE TABLE bruces.DATABASECHANGELOG (ID VARCHAR(255) NOT NULL, AUTHOR VARCHAR(255) NOT NULL, FILENAME VARCHAR(255) NOT NULL, DATEEXECUTED timestamp NOT NULL, ORDEREXECUTED INT NOT NULL, EXECTYPE VARCHAR(10) NOT NULL, MD5SUM VARCHAR(35), DESCRIPTION VARCHAR(255), COMMENTS VARCHAR(255), TAG VARCHAR(255), LIQUIBASE VARCHAR(20), CONTEXTS VARCHAR(255), LABELS VARCHAR(255))
And then I started the liquibase deployment via maven.
WARNING 1/24/17 5:03 PM: liquibase: Unknown database: Snowflake
[INFO] Executing on Database: jdbc:snowflake://*****.snowflakecomputing.com/?db=BRUCE_DB&warehouse=BRUCE_WH
INFO 1/24/17 5:03 PM: liquibase: Successfully acquired change log lock
INFO 1/24/17 5:03 PM: liquibase: Creating database history table with name: bruces.DATABASECHANGELOG
INFO 1/24/17 5:03 PM: liquibase: Successfully released change log lock
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 15.432 s
[INFO] Finished at: 2017-01-24T17:03:56-06:00
[INFO] Final Memory: 16M/305M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.liquibase:liquibase-maven-plugin:3.4.0:update (default) on project snowflake.snowflake_app: Error setting up or running Liquibase: SQL compilation error:
[ERROR] Unsupported data type 'TOK_DATETIME'. [Failed SQL: CREATE TABLE bruces.DATABASECHANGELOG (ID VARCHAR(255) NOT NULL, AUTHOR VARCHAR(255) NOT NULL, FILENAME VARCHAR(255) NOT NULL, DATEEXECUTED datetime NOT NULL, ORDEREXECUTED INT NOT NULL, EXECTYPE VARCHAR(10) NOT NULL, MD5SUM VARCHAR(35), DESCRIPTION VARCHAR(255), COMMENTS VARCHAR(255), TAG VARCHAR(255), LIQUIBASE VARCHAR(20), CONTEXTS VARCHAR(255), LABELS VARCHAR(255))]
[ERROR] -> [Help 1]
It would appear that Liquibase can't find the DATABASECHANGELOG table, so it tries to create it and fails.
Not knowing anything at all about SnowflakeDB, I would suggest that the best approach is to write a new database implementation for SnowflakeDB. SQL dialects vary quite a bit, and if you are having issues early, you are likely just going to run into more issues as you move along.
The problem is that today Snowflake does not support the DATETIME data type. It does support DATE and TIMESTAMP, which are standard SQL.
There's an ongoing effort to add it to Snowflake; I will ask the team working on it to post updates here.
I see that it has been added. Snowflake, however, converts the data type to TIMESTAMP_NTZ when creating the column. Try using the Snowflake extension and creating the table via the XML changelog, providing either TIMESTAMP_NTZ, TIMESTAMP_NTZ(9), or DATETIME; all appear to be the same in Snowflake.
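A quick way to see this for yourself (the table and column names are made up):
CREATE TABLE ts_demo (created DATETIME); -- DATETIME is accepted by Snowflake...
DESCRIBE TABLE ts_demo;                  -- ...but CREATED is reported as TIMESTAMP_NTZ(9)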
In my project, the database model changes periodically, but since the database contains test data, the data has to be re-entered each time.
A script to insert the data quickly becomes relevant; at the moment it is written manually. How can this be done using SQL Server Management Studio?
I need a script with the data from the tables (to insert the existing data into a new table); the script for the database model I already have.
For example: I have the table [dbo].[Users], which has 3 columns, [Id], [Login], [Email], and currently contains only one user (Id = 1, Login = 'Anton', Email = 'fake@mail.com'). I create a script for my database and the result should be something like this:
CREATE TABLE [dbo].[Users] (
    [Id] int IDENTITY(1,1) NOT NULL,
    [Login] nvarchar(1024) NOT NULL,
    [Email] nvarchar(1024) NOT NULL
);
INSERT INTO Users([Id],[Login],[Email]) VALUES(1, 'Anton', 'fake@mail.com')
There's actually a standalone official tool with which you can create schema and data dumps as SQL scripts. Just follow the wizard and make sure you've ticked the checkbox to export data. As far as I know, it comes included with SQL Server Management Studio 2008 and later.