Trouble running Liquibase with different agents

I need to run the same db-changelog first with Ant and then with Spring. My hope is that Ant will apply the changelog and that when Spring runs, it will find nothing to do and finish normally. Ant runs the db-changelog successfully, but when Spring runs it throws an exception. Part of the stack trace:
Reason: liquibase.exception.JDBCException: Error executing SQL CREATE TABLE action (action_id int8 NOT NULL, action_name VARCHAR(255), version_no int8, reason_required BOOLEAN, comment_required BOOLEAN, step_id int8, CONSTRAINT action_pkey PRIMARY KEY (action_id)):
Caused By: Error executing SQL CREATE TABLE action (action_id int8 NOT NULL, action_name VARCHAR(255), version_no int8, reason_required BOOLEAN, comment_required BOOLEAN, step_id int8, CONSTRAINT action_pkey PRIMARY KEY (action_id)):
Caused By: ERROR: relation "action" already exists; nested exception is org.springframework.beans.factory.BeanCreationException....
Any help will be much appreciated.
Regards,

It does sound like it is trying to run the changelog again. Each changeSet in the changeLog is identified by a combination of its id, author, and the changelog path/filename. If you run "select * from databasechangelog" you can see the values used.
Your problem may be that you are referencing the changelog file differently from Ant and from Spring, therefore generating different filename values. Usually you will want to load the changelog from the classpath so that no matter where and how you run it, it has the same path (like "com/example/db.changelog.xml").
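For example, querying the log table makes a path mismatch visible (the two rows below are hypothetical, just showing what differing filename values would look like):

select id, author, filename, dateexecuted
from databasechangelog
order by dateexecuted;

-- Hypothetical output when the two runners reference the file differently:
--   1 | bob | db.changelog.xml            <- recorded by the Ant run
--   1 | bob | WEB-INF/db.changelog.xml    <- what the Spring run computes, so it re-runs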

I ran into this same problem and was able to fix it by altering the filename column of DATABASECHANGELOG to reference the Spring resource path. In my case, I was using a ServletContextResource under the WEB-INF directory:
update DATABASECHANGELOG set FILENAME = 'WEB-INF/path/to/changelog.xml' where FILENAME = 'changelog.xml'

Related

SeaORM column "owner_id" referenced in foreign key constraint does not exist

I have an error with SeaORM, Rust, and Postgres. I'm new to the world of databases, so sorry if this is a newbie question. I wanted to refresh the migrations with sea-orm-cli, but I got this error: Execution Error: error returned from database: column "owner_id" referenced in foreign key constraint does not exist.
The full output of the command is this:
Running `cargo run --manifest-path ./migration/Cargo.toml -- fresh -u postgres://postgres@localhost:5432/task_manager`
Finished dev [unoptimized + debuginfo] target(s) in 0.54s
Running `migration/target/debug/migration fresh -u 'postgres://postgres@localhost:5432/task_manager'`
Dropping table 'seaql_migrations'
Table 'seaql_migrations' has been dropped
Dropping table 'owner'
Table 'owner' has been dropped
Dropping all types
Applying all pending migrations
Applying migration 'm20221106_182043_create_owner_table'
Migration 'm20221106_182043_create_owner_table' has been applied
Applying migration 'm20221106_182552_create_comment_table'
Execution Error: error returned from database: column "owner_id" referenced in foreign key constraint does not exist
I thought it could be some problem with my code in the file called m20221106_182552_create_comment_table.rs, but I took a look at it and it doesn't seem to be the problem.
use sea_orm_migration::prelude::*;

use super::m20221106_182043_create_owner_table::Owner;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .create_table(
                Table::create()
                    .table(Comment::Table)
                    .if_not_exists()
                    .col(
                        ColumnDef::new(Comment::Id)
                            .integer()
                            .not_null()
                            .auto_increment()
                            .primary_key(),
                    )
                    .col(ColumnDef::new(Comment::Url).string().not_null())
                    .col(ColumnDef::new(Comment::Body).string().not_null())
                    .col(ColumnDef::new(Comment::IssueUrl).string().not_null())
                    .col(ColumnDef::new(Comment::CreatedAt).string())
                    .col(ColumnDef::new(Comment::UpdatedAt).string())
                    .foreign_key(
                        ForeignKey::create()
                            .name("fk-comment-owner_id")
                            .from(Comment::Table, Comment::OwnerId)
                            .to(Owner::Table, Owner::Id),
                    )
                    .to_owned(),
            )
            .await
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .drop_table(Table::drop().table(Comment::Table).to_owned())
            .await
    }
}

#[derive(Iden)]
pub enum Comment {
    Table,
    Id,
    Url,
    Body,
    IssueUrl,
    CreatedAt,
    UpdatedAt,
    OwnerId,
}
I was following the official tutorial from the SeaQL team, but I don't know if something in it is outdated. I hope you can help me with this.
You are missing the OwnerId column in the comment table. You have to create it first and then create the FK constraint on it.
See, in the tutorial there is:
.col(ColumnDef::new(Chef::BakeryId).integer().not_null())
.foreign_key(
    ForeignKey::create()
        .name("fk-chef-bakery_id")
        .from(Chef::Table, Chef::BakeryId)
        .to(Bakery::Table, Bakery::Id),
)
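Applied to your migration, that means adding the column definition just before the foreign key. A sketch (the integer() type is an assumption, matching an auto-increment integer Owner::Id):

// create the owner_id column first, then the FK that references it
.col(ColumnDef::new(Comment::OwnerId).integer().not_null())
.foreign_key(
    ForeignKey::create()
        .name("fk-comment-owner_id")
        .from(Comment::Table, Comment::OwnerId)
        .to(Owner::Table, Owner::Id),
)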

DBT RUN - Getting Database Error using VS Code BUT Not Getting Database Error using DBT Cloud

I'm using DBT connected to Snowflake. I use DBT Cloud, but we are moving to using VS Code for our DBT project work.
I have an incremental DBT model that compiles and works without error when I issue the DBT RUN command in DBT Cloud. Yet when I attempt to run the exact same model from the same git branch using the DBT RUN command from the terminal in VS Code I get the following error:
Database Error in model dim_cifs (models\core_data_warehouse\dim_cifs.sql)
16:45:31 040050 (22000): SQL compilation error: cannot change column LOAN_MGMT_SYS from type VARCHAR(7) to VARCHAR(3) because reducing the byte-length of a varchar is not supported.
The table in Snowflake defines this column as VARCHAR(50). I have no idea why DBT is attempting to change the column length, or why it only happens when the command is run from the VS Code terminal. There is no need to make this DDL change to the table.
When I view the compiled SQL in the Target folder there is nothing that indicates a DDL change.
When I look in the logs I find the following, but I don't understand what is triggering the DDL change:
describe table "DEVELOPMENT_DW"."DBT_XXXXXXXX"."DIM_CIFS"
16:45:31.354314 [debug] [Thread-9 (]: SQL status: SUCCESS 36 in 0.09 seconds
16:45:31.378864 [debug] [Thread-9 (]:
In "DEVELOPMENT_DW"."DBT_XXXXXXXX"."DIM_CIFS":
Schema changed: True
Source columns not in target: []
Target columns not in source: []
New column types: [{'column_name': 'LOAN_MGMT_SYS', 'new_type': 'character varying(3)'}]
16:45:31.391828 [debug] [Thread-9 (]: Using snowflake connection "model.xxxxxxxxxx.dim_cifs"
16:45:31.391828 [debug] [Thread-9 (]: On model.xxxxxxxxxx.dim_cifs: /* {"app": "dbt", "dbt_version": "1.1.1", "profile_name": "xxxxxxxxxx", "target_name": "dev", "node_id": "model.xxxxxxxxxx.dim_cifs"} */
alter table "DEVELOPMENT_DW"."DBT_XXXXXXXX"."DIM_CIFS" alter "LOAN_MGMT_SYS" set data type character varying(3);
16:45:31.546962 [debug] [Thread-9 (]: Snowflake adapter: Snowflake query id: 01a5bc8d-0404-c9c1-0000-91b5178ac72a
16:45:31.548895 [debug] [Thread-9 (]: Snowflake adapter: Snowflake error: 040050 (22000): SQL compilation error: cannot change column LOAN_MGMT_SYS from type VARCHAR(7) to VARCHAR(3) because reducing the byte-length of a varchar is not supported.
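For reference, those "Schema changed" lines come from dbt's incremental on_schema_change handling, which is what emits the alter; a hypothetical model config that enables this type-syncing behavior (not necessarily what dim_cifs is actually set to) looks like:

{{ config(
    materialized = 'incremental',
    on_schema_change = 'sync_all_columns'  -- compares source/target schemas and ALTERs the target to match
) }}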
Any help is greatly appreciated.

Create table name using username in Hive query running in Oozie workflow?

I've got a Hive SQL script/action as part of an Oozie workflow. I'm doing a CREATE TABLE AS SELECT to output the results. I want to name the table using the username plus an appended string (e.g. "User123456_output_table"), but can't seem to get the correct syntax.
set tablename=${hivevar:current_user()};
CREATE TABLE `${hiveconf:tablename}_output_table` AS SELECT ...
That doesn't work and gives:
Error while compiling statement: FAILED: IllegalArgumentException java.net.URISyntaxException: Relative path in absolute URI: ${hivevar:current_user()%7D_output_table
Or changing the first line to set tablename=${current_user()}; starts running the SELECT query but eventually stops with:
Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.hadoop.hive.ql.metadata.HiveException: [${current_user()}_output_table]: is not a valid table name
Or changing the first line to set tablename=current_user(); starts running the SELECT query but eventually stops with:
Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.hadoop.hive.ql.metadata.HiveException: [current_user()_output_table]: is not a valid table name
Alternatively, is there a way to pass the username from the Oozie workflow via a parameter?
I'm using Hue to do all this rather than the command line.
Thanks
This is wrong: set tablename=${hivevar:current_user()}; - it will not be resolved; it is substituted as is.
Hive does not evaluate variables before substitution; it substitutes them as is, and functions inside variables are NOT calculated. Variables are just text replacement.
This:
set tablename=current_user();
CREATE TABLE `${hiveconf:tablename}_output_table` ...
gets resolved as
CREATE TABLE `current_user()_output_table` ...
And functions are not supported in table names, so it will not work this way.
The solution is to calculate functions outside the script and pass them as parameters.
See this blog: https://prodlife.wordpress.com/2013/12/06/parameterizing-hive-actions-in-oozie-workflows/
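The same idea in a runnable sketch (shown outside Oozie for illustration; in the workflow you would compute the value in an earlier action and hand it to the Hive action as a parameter, as the blog describes; file and variable names here are made up):

# computed outside Hive, e.g. by a wrapper script or an Oozie shell action
hive --hivevar tablename="$(whoami)" -f create_output.hql

-- create_output.hql: by the time Hive sees it, the variable is plain text
CREATE TABLE `${hivevar:tablename}_output_table` AS
SELECT ...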

How to fix org.apache.kafka.common.config.ConfigException: Missing required configuration "group.id" which has no default value

I have a Kafka topic setup and am attempting to create an external table in Hive to query the Kafka stream.
However, when querying the external table I get the error message
Error: java.io.IOException: org.apache.kafka.common.config.ConfigException: Missing required configuration "group.id" which has no default value. (state=,code=0)
I tried putting group.id in server.properties when starting the Kafka server.
I tried putting group.id in the external table properties:
CREATE EXTERNAL TABLE kafka_table2
  (`timestamp` timestamp, `page` string, `newPage` boolean,
   added int, deleted bigint, delta double)
STORED BY 'org.apache.hadoop.hive.kafka.KafkaStorageHandler'
TBLPROPERTIES
  ("kafka.topic" = "connect-test", "kafka.bootstrap.servers" = "mykafka:9092", "kafka.group.id" = "1")
INFO : Completed compiling command(queryId=hive_20190426082255_729f8adb-bb23-4317-8f3f-2f9049b62bd7); Time taken: 0.6 seconds
INFO : Executing command(queryId=hive_20190426082255_729f8adb-bb23-4317-8f3f-2f9049b62bd7): select * from kafka_table2
INFO : Completed executing command(queryId=hive_20190426082255_729f8adb-bb23-4317-8f3f-2f9049b62bd7); Time taken: 0.018 seconds
INFO : OK
Error: java.io.IOException: org.apache.kafka.common.config.ConfigException: Missing required configuration "group.id" which has no default value. (state=,code=0)
You should put "kafka.consumer.group.id"="1" and not "kafka.group.id"="1" in TBLPROPERTIES.
See: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/integrating-hive/content/hive_set_consumer_producer.html
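Applied to the table above, the corrected DDL would look like this (same topic and broker as in the question):

CREATE EXTERNAL TABLE kafka_table2
  (`timestamp` timestamp, `page` string, `newPage` boolean,
   added int, deleted bigint, delta double)
STORED BY 'org.apache.hadoop.hive.kafka.KafkaStorageHandler'
TBLPROPERTIES (
  "kafka.topic" = "connect-test",
  "kafka.bootstrap.servers" = "mykafka:9092",
  "kafka.consumer.group.id" = "1"  -- consumer-side settings take the kafka.consumer. prefix
);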

Liquibase - Error on SYSTEM.DATABASECHANGELOGLOCK while re-executing a migration

I'm using Liquibase 3.0.2, the Ant task updateDatabase, and change sets defined directly inside SQL scripts using comments like
--liquibase formatted sql
--changeset com.noemalife:1 dbms:oracle
etc.
The first run works fine: all change sets are executed and the DB objects (Oracle) are deployed. I can see the DATABASECHANGELOG and DATABASECHANGELOGLOCK tables filled up.
Then I try to re-run the Ant task with the same exact configuration, expecting Liquibase to say something like "OK, everything is already deployed, nothing to do here."
But I get this instead:
C:\Users\dmusiani\Desktop\liquibase-test>ant migrate
Buildfile: build.xml
migrate:
[copy] Copying 1 file to C:\Users\dmusiani\Desktop\liquibase-test
BUILD FAILED
liquibase.exception.LockException: liquibase.exception.DatabaseException: Error executing SQL CREATE TABLE SYSTEM.DATABASECHANGELOGLOCK (ID INTEGER NOT NULL, LOCKED NUMBER(1) NOT NULL, LOCKGRANTED TIMESTAMP, LOCKEDBY VARCHAR2(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID)); on jdbc:oracle:thin:@localhost:1521:WBMD INSERT INTO SYSTEM.DATABASECHANGELOGLOCK (ID, LOCKED) VALUES (1, 0): ORA-00955: nome già utilizzato da un oggetto esistente ("name already used by an existing object")
    at liquibase.lockservice.LockServiceImpl.acquireLock(LockServiceImpl.java:122)
    at liquibase.lockservice.LockServiceImpl.waitForLock(LockServiceImpl.java:62)
    at liquibase.Liquibase.update(Liquibase.java:123)
    at liquibase.integration.ant.DatabaseUpdateTask.executeWithLiquibaseClassloader(DatabaseUpdateTask.java:45)
    at liquibase.integration.ant.BaseLiquibaseTask.execute(BaseLiquibaseTask.java:70)
    at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
    at org.apache.tools.ant.Task.perform(Task.java:348)
    at org.apache.tools.ant.Target.execute(Target.java:357)
    at org.apache.tools.ant.Target.performTasks(Target.java:385)
    at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337)
    at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
    at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
    at org.apache.tools.ant.Project.executeTargets(Project.java:1189)
    at org.apache.tools.ant.Main.runBuild(Main.java:758)
    at org.apache.tools.ant.Main.startAnt(Main.java:217)
    at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257)
    at org.apache.tools.ant.launch.Launcher.main(Launcher.java:104)
Caused by: liquibase.exception.DatabaseException: Error executing SQL CREATE TABLE SYSTEM.DATABASECHANGELOGLOCK (ID INTEGER NOT NULL, LOCKED NUMBER(1) NOT NULL, LOCKGRANTED TIMESTAMP, LOCKEDBY VARCHAR2(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID)); on jdbc:oracle:thin:@localhost:1521:WBMD INSERT INTO SYSTEM.DATABASECHANGELOGLOCK (ID, LOCKED) VALUES (1, 0): ORA-00955: nome già utilizzato da un oggetto esistente
    at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:56)
    at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:98)
    at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:64)
    at liquibase.database.AbstractJdbcDatabase.checkDatabaseChangeLogLockTable(AbstractJdbcDatabase.java:771)
    at liquibase.lockservice.LockServiceImpl.acquireLock(LockServiceImpl.java:95)
    ... 21 more
Caused by: java.sql.SQLException: ORA-00955: nome già utilizzato da un oggetto esistente
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:113)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
    at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:754)
    at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:210)
    at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:963)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1192)
    at oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1731)
    at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1701)
    at liquibase.executor.jvm.JdbcExecutor$1ExecuteStatementCallback.doInStatement(JdbcExecutor.java:86)
    at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:49)
    ... 25 more
Total time: 1 second
C:\Users\dmusiani\Desktop\liquibase-test>
It seems to me that Liquibase is trying to re-create the DATABASECHANGELOGLOCK table.
I have this problem when I run Liquibase as the Oracle SYSTEM user (my patch takes care of creating a couple of other users, so for testing purposes I use SYSTEM directly to do that).
The other strange thing is that after the SYSTEM patch runs successfully, I can still see an active lock in the lock table.
When I run other patches in other schemas (e.g. the ones created by the SYSTEM patch), the patch completes successfully and the lock is released in the lock table; relaunching that patch behaves as expected: Liquibase detects the patch is already in place and does nothing.
This said, my doubt is whether Liquibase has problems, in the SYSTEM schema, detecting that the lock table already exists (and fails trying to create it), or whether there is some kind of locking/commit problem.
Any suggestion is welcome
Thanks
Davide
I'm facing the same issue as you.
From what I see in the sources, when running as SYSTEM, the following condition (in DatabaseSnapshot#include) evaluates to true:
if (database.isSystemObject(example)) {
    return null;
}
Because of that, the snapshot never reports the lock table as existing, so the creation will always be attempted.
I'm gonna work further on a patch and keep you updated.
And here is a patch proposal.
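Until a fix lands, a workaround consistent with that diagnosis (and with your observation that patches in ordinary schemas behave correctly) is simply not to run the update as SYSTEM: connect as a dedicated non-SYSTEM user so the lock table lives in an ordinary schema. An illustrative Ant setup (the task class is the one from your stack trace; user, password, and file names are placeholders):

<taskdef name="updateDatabase"
         classname="liquibase.integration.ant.DatabaseUpdateTask"
         classpathref="liquibase.classpath"/>

<!-- placeholder credentials: a dedicated schema owner, not SYSTEM -->
<updateDatabase changeLogFile="db-changelog.sql"
                driver="oracle.jdbc.OracleDriver"
                url="jdbc:oracle:thin:@localhost:1521:WBMD"
                username="LIQUIBASE_ADMIN"
                password="***"
                classpathref="liquibase.classpath"/>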