Maintaining order of executing SQL scripts from different folders in Flyway - sql

I am executing Flyway scripts as part of my project setup.
The directory contains two folders, DDL and DML. To set up, I first ran all the scripts in the DDL folder, then cleared the Flyway history table, and then ran all the scripts in the DML folder.
Problem: if I run the scripts in the DML folder without clearing the Flyway history table, it errors out and says the files have been modified.
I want to preserve the history when executing the scripts. How can I achieve that?
Error:
ERROR: Validate failed: Migrations have failed validation
Migration checksum mismatch for migration version 2
-> Applied to database : 1962665489
-> Resolved locally : -223568245
Either revert the changes to the migration, or run repair to update the schema history.
The error makes sense because there are files with the same version number.
Desired result:
flyway_schema_history table:

installed_rank | version | description  | type | script               | checksum   | installed_by | installed_on | execution_time | success
---------------+---------+--------------+------+----------------------+------------+--------------+--------------+----------------+---------
             1 | 1       | DDL Script 1 | SQL  | V1__DDL_Script_1.sql | 885232507  | postgres     | 43:06.9      | 30             | TRUE
             2 | 2       | DDL Script 2 | SQL  | V2__DDL_Script_2.sql | 1962665489 | postgres     | 43:07.0      | 22             | TRUE
             3 | 3       | DDL Script 3 | SQL  | V3__DDL_Script_3.sql | 1491548605 | postgres     | 43:07.0      | 28             | TRUE
             4 | 1       | DML Script 1 | SQL  | V1__DML_Script_1.sql | 9491548656 | postgres     | 43:07.0      | 28             | TRUE
             5 | 2       | DML Script 2 | SQL  | V2__DML_Script_2.sql | 1436548605 | postgres     | 43:07.0      | 24             | TRUE
             6 | 3       | DML Script 3 | SQL  | V3__DML_Script_3.sql | 2691548605 | postgres     | 43:07.0      | 28             | TRUE
The data in the two folders looks something like this:
DDL
-- V1__DDL_Script_1.sql
-- V2__DDL_Script_2.sql
-- V3__DDL_Script_3.sql
DML
-- V1__DML_Script_1.sql
-- V2__DML_Script_2.sql
-- V3__DML_Script_3.sql
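Note that Flyway resolves all configured locations into a single ordered list of migrations and requires each version number to appear only once, so the repeated V1-V3 across DDL and DML is exactly why validation fails against the retained history. A minimal sketch of one way to keep both folders and preserve the full history, assuming a flyway.conf and a folder layout like the one above (the renumbered DML file names are illustrative):

flyway.locations=filesystem:sql/DDL,filesystem:sql/DML

DDL
-- V1__DDL_Script_1.sql
-- V2__DDL_Script_2.sql
-- V3__DDL_Script_3.sql
DML
-- V4__DML_Script_1.sql
-- V5__DML_Script_2.sql
-- V6__DML_Script_3.sql

A single flyway migrate then runs all six scripts in version order and records all six rows in flyway_schema_history, though the DML rows will carry versions 4-6 rather than the repeated 1-3 shown in the desired result above.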

Related

pglogical replication slot creation fails

I am using Flyway to execute SQL commands for database migrations. For our Postgres cluster we want to create a replication slot with pglogical. The command used is:
SELECT * FROM pg_create_logical_replication_slot('test_replication_slot', 'pglogical');
The migration SQL file is as follows:
CREATE EXTENSION pglogical;
SELECT * FROM pg_create_logical_replication_slot('test_replication_slot', 'pglogical');
These commands fail with the error:
ERROR: cannot create logical replication slot in transaction that has performed writes
Location : /opt/amazon/migration/V0003__migration.sql (/opt/amazon/migration/V0003__migration.sql)
Line : 5
Statement : SELECT * FROM pg_create_logical_replication_slot('test_replication_slot', 'pglogical')
at org.flywaydb.core.internal.command.DbMigrate.doMigrateGroup(DbMigrate.java:345)
at org.flywaydb.core.internal.command.DbMigrate.access$900(DbMigrate.java:53)
But when I try to execute these statements by logging into the cluster using pgAdmin/psql, I am able to create the replication slot. How do I fix this? Any help would be appreciated.
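A workaround sketch: Flyway runs each migration inside a transaction by default, and CREATE EXTENSION counts as a write, so the slot creation in the same file runs in a transaction that has already performed writes. Splitting the two statements into separate migration files gives the slot creation a transaction of its own with no prior writes (the file names and version numbers below are illustrative):

-- V0003__create_pglogical_extension.sql
CREATE EXTENSION pglogical;

-- V0004__create_replication_slot.sql
SELECT * FROM pg_create_logical_replication_slot('test_replication_slot', 'pglogical');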

Sql batch processing in nodejs, should revoke changes if any one query fails to execute

In Node.js I want to insert data into 3 different tables; if any one query fails to execute, I want all the changes done by the other queries to be reverted.
See this part of the mysql package documentation: https://github.com/mysqljs/mysql#transactions
Only once you have executed all 3 queries can you call connection.commit() to apply the changes or connection.rollback() to revert them, as shown in the example at the link.
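For reference, a minimal sketch of the SQL that this commit/rollback flow amounts to (the table and column names are placeholders):

START TRANSACTION;
INSERT INTO table_one (id, val) VALUES (1, 'a');
INSERT INTO table_two (id, val) VALUES (1, 'b');
INSERT INTO table_three (id, val) VALUES (1, 'c');
COMMIT;      -- connection.commit(): applies all three inserts
-- ROLLBACK; -- connection.rollback(): issue instead of COMMIT if any insert failed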

Can't load an SQL dump into Oracle 11g

I was given an SQL dump from an Oracle 11g database. It contains all of the statements to reproduce the database. Unfortunately, it fails at the very first one:
CREATE SEQUENCE "XXX"."YYY"
INCREMENT BY 1
START WITH 129004
MAXVALUE 1000000000000000000000000000
NOMINVALUE
NOCYCLE
CACHE 20
NOORDER
GO
It fails with the error "SQL command not properly ended". If I throw out the GO at the end, the error becomes "user or role '' does not exist". What am I doing wrong? I tried this both in the sqlplus command-line client and in Oracle SQL Developer.
Needless to say, I'm very new to Oracle, so please be easy on me :D.
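One thing worth checking: GO is a SQL Server batch separator, not Oracle syntax, so sqlplus and SQL Developer reject it; Oracle statements end with a semicolon (or a / on its own line in sqlplus). A sketch of the same statement with an Oracle-style terminator:

CREATE SEQUENCE "XXX"."YYY"
  INCREMENT BY 1
  START WITH 129004
  MAXVALUE 1000000000000000000000000000
  NOMINVALUE
  NOCYCLE
  CACHE 20
  NOORDER;

The "user or role '' does not exist" error most likely comes from a later statement in the dump (a GRANT with a mangled grantee, for example) rather than from the sequence itself, but without the rest of the file that is only a guess.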

How to make PREPARE TRANSACTION work

As per the Postgres documentation: "Once prepared, a transaction can later be committed or rolled back with COMMIT PREPARED or ROLLBACK PREPARED, respectively. Those commands can be issued from any session, not only the one that executed the original transaction."
I am trying to import data from CSV into database tables, and for this I am using
COPY tablename [ ( column [, ...] ) ]
FROM { 'filename' }
All of this is done in a shell script.
Now the issue is that I am executing the psql command and passing this command as a parameter via the -c option (I start the transaction via PREPARE TRANSACTION 'some-id' in that command).
I want to create a savepoint and roll back to it in case of any errors.
After a few other tasks in the shell script, I check for errors that the previous psql statement has produced, and when I then try to roll back using the command
Prepared Rollback 'transaction-id' (in a separate psql command with SQL statements)
it reports "No "transaction-id" found".
Am I getting the concept wrong or missing something in the process?
Is this happening because I am issuing the psql command multiple times and each invocation results in a new transaction?
For your prepare to work, the COPY and PREPARE must be in the same session. Since your question lacks concrete commands, I'm assuming that when you write
Prepared Rollback 'transaction-id' (in a separate psql command with SQL statements)
you are using different psql commands for the COPY and the PREPARE. This is wrong; combine the COPY and the PREPARE into the same session.
E.g.
$ psql -c "BEGIN; COPY tablename FROM '/tmp/sql'; PREPARE TRANSACTION 'foobar';" db
$ while /bin/not-ready-to-commit ; do sleep 1 ; done
$ psql -c "COMMIT PREPARED 'foobar';" db
PREPARE TRANSACTION works by writing the current transaction to disk and exiting the transaction in the current session. This is why you need a BEGIN: it starts the transaction you want to prepare. All commands you want to be affected by the prepare must come after the transaction has been started (in your case the COPY command). When PREPARE TRANSACTION is issued, the transaction you are currently in is written to disk under the identifier you give. Any statements issued after the transaction is prepared are no longer part of it, so doing BEGIN; PREPARE ...; COPY runs the COPY operation without a transaction.
Here's an example in psql shell:
demo=# DELETE FROM foo;
DELETE 4
demo=# BEGIN; -- start a transaction
BEGIN
demo=# COPY foo FROM '/tmp/sql'; -- do what you want to commit later
COPY 4
demo=# PREPARE TRANSACTION 'demo'; -- prepare the transaction
PREPARE TRANSACTION
demo=# ROLLBACK; -- this is just to show that there is no longer a transaction
NOTICE: there is no transaction in progress
ROLLBACK
demo=# SELECT * FROM foo; -- the table is empty, copy waiting for commit
a | b
---+---
(0 rows)
demo=# COMMIT PREPARED 'demo'; -- do the commit
COMMIT PREPARED
demo=# SELECT * FROM foo; -- data is visible
a | b
---+---
1 | 2
3 | 4
5 | 6
7 | 8
(4 rows)
Edit: You must enable prepared transactions in postgresql.conf:
max_prepared_transactions = 1 # or more, zero (default) disables this feature.
If max_prepared_transactions is zero, psql reports that the transaction id is not found, but does not warn you about this feature being disabled. Psql gives a warning for PREPARE TRANSACTION but it's easy to miss if your shell scripts print stuff after the prepare statement.
PREPARE TRANSACTION is for distributed transactions across multiple servers, usually used by transaction monitors or similar application servers (e.g. EJB).
Simply wrap your copy in a regular transaction block:
START TRANSACTION;
COPY ....;
COMMIT;
If you want a savepoint in the middle, use SAVEPOINT some_name and then you can rollback to that savepoint.
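A minimal sketch of that pattern (the table name, file path and follow-up statement are placeholders):

START TRANSACTION;
COPY tablename FROM '/tmp/data.csv';
SAVEPOINT after_copy;
UPDATE tablename SET processed = true;  -- some follow-up work
ROLLBACK TO SAVEPOINT after_copy;       -- undoes the UPDATE, keeps the COPY
COMMIT;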

How to Manage SQL Source Code?

I am in charge of a database.
It has around 126 sprocs, some 20 views and some UDFs. There are some tables that store fixed configuration data for our various applications.
I have been using one big text file that contained IF EXISTS ... DELETE / GO / CREATE PROCEDURE ... blocks for all the sprocs, UDFs, views and all the inserts/updates for the configuration scripts.
In the course of time, new sprocs were added and existing sprocs were changed.
The biggest mistake (as far as I am aware) I made with this BIG single text file was to put the code for new/changed sprocs at the beginning of the file while forgetting to remove the previous code for those sprocs. Let's illustrate this:
Say my BIG script (version 1) contains script to create sprocs
sp 1
sp 2
sp 3
view 1
view 2
The database's version table gets updated to version 1.
Now there is some change in sp 2, so version 2 of the BIG script is now:
sp2 --> (newly added)
sp1
sp2
sp3
view 1
view 2
So, obviously, running version 2 of the BIG script is not going to update my sp 2: the old block for sp 2 later in the file overwrites the new one at the top.
I realised this rather late, with 100+ sprocs already in the file.
Remedial Action:
I have created a folder structure, with one subfolder for each sproc/view.
I have gone through the latest version of the BIG script from the beginning and placed the code for each script into its respective folder. Some scripts are repeated more than once in the BIG script. If there is more than one block of code for a specific sproc, I put the earlier versions into another subfolder called "old" within that sproc's folder. Luckily, I have always documented every change I made to the sprocs/views etc. - I write down the date, a version number and a description of the changes as a comment in the sproc's code. This has helped me a lot in figuring out the latest version of the code for a sproc when there is more than one block of code for it.
I have created a DOS batch process to concatenate all the individual scripts to rebuild the BIG script. I tried using a .NET StreamReader/StreamWriter, but it messed up the encoding and the "£" sign, so I am sticking with the DOS batch for the time being.
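For what it is worth, the plain copy /b command also concatenates files byte for byte, which sidesteps the encoding and "£" problem; a one-line sketch, assuming one script per file under a sprocs folder (paths are illustrative):

copy /b sprocs\*.sql BIG_script.sql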
Is there any way I can improve the whole process?
At the moment I am after some way to document the versioning of the BIG script along with the versions of its individual sprocs. For example, I would like to be able to record:
Big Script (version 1) contains
sp 1 version 1
sp 2 version 1
sp 3 version 3
view 1 version 1
view 2 version 1
Big script (version 2) has
sp 1 version 1
sp 2 version 2
sp 3 version 3
view 1 version 1
view 2 version 1
Any feedback is welcomed.
Have you looked at Visual Studio Team System Database Edition (now folded into Developer Edition)?
One of the things it will do is let you maintain the SQL to build the whole database, and then apply only the changes needed to bring the target to the new state. I believe that, given a reference database, it will also create a script to bring a database matching the reference schema up to the current model (e.g. to deploy to production without developers having access to production).
The way we do it is to have separate files for tables, stored procedures, views etc., and store them in their own directories as well. For execution we just have a script which executes all the files. It's definitely a lot easier to read than having one huge file.
To update each table for example, we use this template:
if not exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[MyTable]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
begin
    CREATE TABLE [dbo].[MyTable](
        [ID] [int] NOT NULL,
        [Name] [varchar](255) NULL
    ) ON [PRIMARY]
end
else begin
    -- MyTable.Name
    IF (SELECT COL_LENGTH('MyTable','Name')) IS NULL BEGIN
        ALTER TABLE MyTable ADD [Name] [varchar](255) NULL
        PRINT 'MyTable.Name CREATED.'
    END
    -- etc.
end
When I had to handle a handful of SQL tables, procedures and triggers, I did the following:
All files under version control (CVS at that time, but look at SVN or Bazaar, for example)
One file per object, named after the object
A makefile stating the dependencies between files
It was an Oracle project, and every time you change a table you have to recompile its triggers. And my triggers used several modules, so they also had to be recompiled when their dependent modules were updated...
The makefile avoids the "big file" approach: you don't have to execute ALL your code for every change.
Under Windows you can download NMAKE.exe to use makefiles.
HTH
Please see my answer to a similar question, which may help:
Database schema updates
Some additional points:
When we make a Release, e.g. for Version 2, we concatenate together all the Sprocs that have a modified date more recent than the previous Release.
We are careful to add at least one blank line to the bottom of each Sproc script, and to start each Sproc script with a comment - otherwise concatenation can yield "GOCREATE NextSproc" - which is a bore!
When we run the concatenated script we sometimes find that we get conflicts - e.g. calling sub-Sprocs that don't already exist. We duplicate the code for such Sprocs at the bottom of the script - so they are recreated a second time - to ensure that SQL Server's dependency table is correct. (i.e. we sort this out at the QA stage for the Release)
Also, we put a GRANT permissions statement at the bottom of each Sproc script, so that when we Drop/Create an Sproc we re-grant the permissions. However, if your permissions are allocated per user, or are assigned differently on each server, it may be better to use ALTER rather than CREATE - but that is a problem if the Sproc does not already exist, so in that case it is best to do something like this (the grantee name below is illustrative):
IF OBJECT_ID(N'dbo.MySproc', N'P') IS NULL
    EXEC ('CREATE PROCEDURE dbo.MySproc AS SELECT ''STUB''')
GO
GRANT EXECUTE ON dbo.MySproc TO some_role  -- grantee name is illustrative
and then that stub is immediately replaced by the real ALTER PROCEDURE statement.