MyBatis Migrations: does "migrate new" automatically generate the changes? - sql

The migrate new command generates a SQL file like this:
-- // create blog table
-- Migration SQL that makes the change goes here.
-- //#UNDO
-- SQL to undo the change goes here.
Does the command automatically detect my table and data changes and fill in the script (for example, an ALTER TABLE statement), or do I have to write the scripts manually?

It was answered here:
Migrations does not do any introspection or reverse engineering of
your database. You have to enter the DDL changes yourself.
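For illustration, a manually filled-in migration could look like the sketch below. The blog table definition is hypothetical; the comment markers are the ones from the generated template shown above:

```sql
-- // create blog table
-- Migration SQL that makes the change goes here.
CREATE TABLE blog (
    id INT NOT NULL,
    title VARCHAR(255),
    PRIMARY KEY (id)
);

-- //#UNDO
-- SQL to undo the change goes here.
DROP TABLE blog;
```

Running migrate up applies the first section; migrate down applies the UNDO section.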

Related

Liquibase Update Command doesn't drop elements (tables, functions, procedure...) from my database despite SQL script absent from my solution

I use the Liquibase tool to manage a Postgres database. My workflow is as follows:
I have a solution composed of different folders containing SQL scripts responsible for schema creation, table creation, type creation, procedure creation, etc. Then I have a changelog file in XML format, containing the following entries:
<includeAll path="#{Server.WorkingDirectory}#/02 - Schema" relativeToChangelogFile="false"/>
<includeAll path="#{Server.WorkingDirectory}#/03 - Types" relativeToChangelogFile="false"/>
<includeAll path="#{Server.WorkingDirectory}#/04 - Tables" relativeToChangelogFile="false"/>
<includeAll path="#{Server.WorkingDirectory}#/05 - Fonctions" relativeToChangelogFile="false"/>
<includeAll path="#{Server.WorkingDirectory}#/06 - Stored Procedures" relativeToChangelogFile="false"/>
I run liquibase via command line :
liquibase --changeLogFile=$(Changelog.File.Name) --driver=$(Driver.Name) --classpath=$(Driver.Classpath) --url=$(BDD.URL) --username=$(BDD.Login) --password=$(BDD.Password) update
This enables Liquibase to take all the SQL scripts in the folders listed in the changelog file, compare them with the current database at $(BDD.URL), and generate a delta script containing all the SQL queries to be executed so the database corresponds to my solution.
This works well when I add new scripts (new tables or procedures) or modify existing scripts, my database is correctly updated by the command line, as expected. BUT it does not do anything when I delete a script from my solution.
To be more factual, here is what I want :
I have a SQL file containing the query "CREATE TABLE my_table" located in the folder "04 - Tables".
I execute the update command above, and it creates the table "my_table" in my database.
Finally, I decide I no longer want this table in my database. I would like to simply remove the corresponding SQL script from my solution and run the "update" command again, expecting Liquibase to automatically generate and run a "DROP TABLE my_table" statement. But this does not work: Liquibase does not record any change when I remove a SQL file (whereas it does when I add or modify a file).
Does anyone know a solution to this? Is there a specific command to drop an element when there is no "CREATE" query for it in a SQL solution?
Many thanks in advance for your help :)
You will need to explicitly write a script to drop the table.
Another option is to roll back the change, IF you specified the rollback SQL as part of your original SQL script.
There is a Pro-version option to roll back a single update; with the free/community version, you can roll back the last few changes in sequence.
For example, "liquibase rollbackCount 5" rolls back the last 5 changes that were applied, ONLY IF the needed rollback SQL was coded as part of the script.
My sample SQL script that includes the rollback code is:
--rollback drop TABLE test.user1 ; drop table test.cd_activity;
CREATE TABLE test.user1 (
user_type_id int NOT NULL
);
CREATE TABLE test.cd_activity (
activity_id Integer NOT NULL,
userid int);
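For the original question's case (dropping my_table), the explicit drop script mentioned above could be its own changeset. A sketch, assuming a Liquibase formatted-SQL changelog; the author:id values and the rollback column list are hypothetical:

```sql
--liquibase formatted sql

--changeset someauthor:drop-my-table
-- Explicitly drop the table, since removing the original CREATE script
-- does not make Liquibase generate a DROP.
DROP TABLE my_table;
--rollback CREATE TABLE my_table (id int);
```

Running update then drops the table, and the --rollback line lets a later rollback recreate it.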

Flyway always execute repeatable migrations

Is it possible to execute repeatable migrations in Flyway even when the checksum is the same? The problem I am facing is that I have a view which extends another table with additional columns, and the view doesn't get updated automatically.
An example here:
R__person_view.sql
CREATE OR REPLACE VIEW person_view AS
SELECT p.*, e.name FROM person p, entity e
WHERE /* not relevant here ... */;
If this migration is executed first, it works fine. But if I add another migration that modifies the table person, the changes are not picked up, because the view migration's checksum did not change.
Yes, since Flyway 6.3.0 it has been possible to have repeatable migrations run on every migrate by using the timestamp placeholder in a comment, which makes Flyway see the file as changed each time. For example:
R__UtilityProcedures.sql
-- ${flyway:timestamp}
create or replace procedure my_important_proc
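Applied to the view from the question, the same trick would look like this (a sketch; the placeholder makes the checksum differ on every run, so the view is recreated even when the table changes):

```sql
-- R__person_view.sql
-- ${flyway:timestamp}
CREATE OR REPLACE VIEW person_view AS
SELECT p.*, e.name FROM person p, entity e
WHERE /* not relevant here ... */;
```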

Wait for CockroachDB Command to Finish

I currently have a .sql file with the likes of:
DROP VIEW IF EXISTS vw_example;
CREATE VIEW vw_example as
SELECT a FROM b;
When running this command as part of a flyway migration, if the view already exists, it fails, as if the create command is not waiting for the DROP IF EXISTS to finish.
I know SQL Server has a GO-style batch separator. Is there a way to tell CockroachDB to wait for the first command to finish?
As per the issue mentioned in the link, it is better to put the DROP and CREATE statements in different migration files, since Flyway runs each migration in a single transaction.
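A sketch of that split, assuming Flyway's versioned-migration naming convention (the version numbers and file names are illustrative):

```sql
-- V2__drop_vw_example.sql  (first migration file)
DROP VIEW IF EXISTS vw_example;

-- V3__create_vw_example.sql  (second migration file)
CREATE VIEW vw_example AS
SELECT a FROM b;
```

Each file runs in its own transaction, so the CREATE only starts after the DROP has committed.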

Redgate migration scripts not running on deployment

I've been reading through the Redgate documentation on migration scripts and I'm trying to add a new column to a table that has a foreign key from another table.
Here's what I have done:
I added the new column, made it nullable, and created the relationship to a new table, then committed the changes.
I then added static data to the new table so that the migration can run, and committed this static data.
Next I added a blank migration script that sets all null values on the column created in the last commit to the Id of one of the records in the related table, and committed this change.
I then ran a deployment of both commits to my testing environment, where records already exist.
The problem I'm having is that the column gets created, but the script doesn't seem to run: the column values stay null. I've verified that the script should actually change the columns, as it executes successfully when I run it manually.
Am I doing something wrong when using these scripts? Thanks.
I was creating blank migration scripts, which led SQL Compare to set the column as NOT NULL. You have to specifically create the migration script on the schema change that requires it, or SQL Compare will override all your changes.

How to set Database Audit Specification for all the tables in db

I need to create an audit to track all CRUD events for all the tables in a database.
I have more than 100 tables in the DB; is there a way to create a specification that includes all the tables in the DB?
P.S.: I am using SQL Server 2008.
I had the same question. The answer is actually simpler than expected and doesn't need a custom C# app to generate lots of SQL to cover all the tables; example SQL is below. The important point is to specify DATABASE and [public] for INSERT/UPDATE/DELETE.
USE [master]
GO
CREATE SERVER AUDIT [CancerStatsAudit]
TO FILE
( FILEPATH = N'I:\CancerStats\Audit\'
,MAXSIZE = 128 MB
,MAX_ROLLOVER_FILES = 64
,RESERVE_DISK_SPACE = OFF
)
WITH
( QUEUE_DELAY = 1000
,ON_FAILURE = CONTINUE
,AUDIT_GUID = '5a0a18cf-fe42-4171-ad01-5e19af9e27d1'
)
ALTER SERVER AUDIT [CancerStatsAudit] WITH (STATE = ON)
GO
USE [CancerStats]
GO
CREATE DATABASE AUDIT SPECIFICATION [CancerStatsDBAudit]
FOR SERVER AUDIT [CancerStatsAudit]
ADD (INSERT ON DATABASE::[CancerStats] BY [public]),
ADD (UPDATE ON DATABASE::[CancerStats] BY [public]),
ADD (DELETE ON DATABASE::[CancerStats] BY [public]),
ADD (EXECUTE ON DATABASE::[CancerStats] BY [public]),
ADD (DATABASE_OBJECT_CHANGE_GROUP),
ADD (SCHEMA_OBJECT_CHANGE_GROUP)
WITH (STATE = ON)
GO
NB: DATABASE_OBJECT_CHANGE_GROUP and SCHEMA_OBJECT_CHANGE_GROUP are not needed for auditing INSERT, UPDATE and DELETE - see additional notes below.
Additional notes:
The example above also includes the DATABASE_OBJECT_CHANGE_GROUP and the SCHEMA_OBJECT_CHANGE_GROUP. These were included since my requirement was to also track CREATE/ALTER/DROP actions on database objects. It is worth noting that the documentation is wrong for these.
https://learn.microsoft.com/en-us/sql/relational-databases/security/auditing/sql-server-audit-action-groups-and-actions
The above page states that DATABASE_OBJECT_CHANGE_GROUP tracks CREATE, ALTER and DROP. This is not true (I've tested in SQL Server 2016); only CREATE is tracked, see:
https://connect.microsoft.com/SQLServer/feedback/details/370103/database-object-change-group-audit-group-does-not-audit-drop-proc
In fact, to track CREATE, ALTER and DROP, use SCHEMA_OBJECT_CHANGE_GROUP. Despite the above learn.microsoft.com documentation page suggesting this only works for schemas, it actually works for objects within the schema as well.
Change Data Capture
You can use the Change Data Capture mechanism provided by SQL Server 2008.
http://msdn.microsoft.com/en-us/library/bb522489.aspx
Note that this will only do Create, Update and Delete.
Triggers and Audit tables
Even for 100 tables, you can use a single script that generates the audit tables and the necessary triggers. Note that this is not a very good mechanism: it slows DML down, since control is not returned until the trigger execution is complete.
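A minimal sketch of the trigger approach for a single hypothetical table dbo.person with an id column (the generator script would emit one audit table and trigger like this per table):

```sql
-- Hypothetical audit table and trigger for dbo.person
CREATE TABLE dbo.person_audit (
    audit_id   int IDENTITY PRIMARY KEY,
    person_id  int,
    action     char(1),                         -- 'I', 'U' or 'D'
    changed_at datetime2 DEFAULT SYSUTCDATETIME()
);
GO
CREATE TRIGGER trg_person_audit ON dbo.person
AFTER INSERT, UPDATE, DELETE AS
BEGIN
    SET NOCOUNT ON;
    -- Row in both inserted and deleted => UPDATE;
    -- only in inserted => INSERT; only in deleted => DELETE.
    INSERT dbo.person_audit (person_id, action)
    SELECT COALESCE(i.id, d.id),
           CASE WHEN i.id IS NOT NULL AND d.id IS NOT NULL THEN 'U'
                WHEN i.id IS NOT NULL THEN 'I'
                ELSE 'D' END
    FROM inserted i
    FULL OUTER JOIN deleted d ON i.id = d.id;
END;
```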
Found a way to create the Database Audit Specification:
I wrote C# code that dynamically generated the SQL statements for all the tables and all the actions I needed, and executed the resulting string.
Frankly, the provided wizard is no help at all if you are creating a Database Audit Specification for more than a couple of tables.
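The same generation step can be sketched in T-SQL instead of C#, building one ADD clause per user table from sys.tables. The specification and audit names here are hypothetical, and the server audit is assumed to already exist:

```sql
-- Build a per-table audit specification covering INSERT/UPDATE/DELETE by [public]
DECLARE @sql nvarchar(max) =
    N'CREATE DATABASE AUDIT SPECIFICATION [PerTableAudit]
FOR SERVER AUDIT [CancerStatsAudit]';

SELECT @sql += CHAR(10) +
    N'ADD (INSERT, UPDATE, DELETE ON OBJECT::' +
    QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N' BY [public]),'
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id;

-- Drop the trailing comma and enable the specification
SET @sql = LEFT(@sql, LEN(@sql) - 1) + CHAR(10) + N'WITH (STATE = ON);';
EXEC sys.sp_executesql @sql;
```

With the DATABASE::...BY [public] form shown in the accepted answer this per-table generation is unnecessary, but the pattern is useful when different tables need different actions or principals.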