Deploying to multiple schemas using Flyway

I have a question regarding Flyway and managing multiple schemas. I have multiple schemas (schema1, schema2, schema3) with different deployment schedules and different folder locations (sql/schema1, sql/schema2, sql/schema3) containing different code.
I want Flyway to create the schemas before the code deployment, but how do I set this up in a single config file? I read the Flyway doc (https://flywaydb.org/documentation/faq#multiple-schemas), but is that example using a single config file, or do I need to create multiple config files (one per schema)?
Can I achieve the same by setting a comma-delimited schema list? Will "Schema1" only look in the "sql/Schema1" location? I really don't want Schema1 pulling code from a different folder, i.e. sql/Schema2, etc.
Thanks in advance!

When using Flyway with multiple schemas, you need to state explicitly in the SQL which schema each statement targets. You can do this by putting an ALTER SESSION SET CURRENT_SCHEMA=schema1 at the top of each migration file, or by prefixing all your statements, e.g. CREATE TABLE schema1.bananas.
If this is not practical, it would be best to create a number of config files, each specifying a single schema and a single location, e.g.
flyway.schemas=schema1
flyway.locations=filesystem:sql/schema1
Then you can run Flyway with each config file individually to migrate that particular schema.
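For example, with one config file per schema, the invocations could look like this (a sketch; the config file names are placeholders, and the flag is -configFiles on recent Flyway versions, -configFile on older ones):
flyway -configFiles=conf/schema1.conf migrate
flyway -configFiles=conf/schema2.conf migrate
flyway -configFiles=conf/schema3.conf migrate
Because each config file names exactly one schema and one location, schema1 will never pick up migrations from sql/schema2.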

Related

Generate changeLog for single sql files with liquibase

I have a huge database with many SQL files. Is there any way to generate a changelog for single SQL files and not for the whole database? I have stored some SQL files locally on my hard drive and use Liquibase via the command line. If there is no way to do that with local SQL files, is there a way to generate a changelog for single tables of my database?
What you are looking for is not possible. A database does not remember the SQL that was executed to get it into a certain state. Here is a really simple example. Say that you first run some SQL to create a table with two columns 'name' and 'id'. Then you run some more SQL to add a third column 'active'. The database does not remember that two separate operations were run to get into that state. When Liquibase generates a changelog for that database, it basically has to ask the database 'what is the current state of things?', and so it would have a changeset that creates the table with all three columns.
It is possible to have liquibase generate smaller changelog files, but you should probably take a step back and ask yourself why you want to do that.
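If what you actually want is a changelog limited to particular tables, newer Liquibase versions let you filter what generateChangeLog looks at, along these lines (a hedged sketch; the file and table names are placeholders, and the exact parameter syntax depends on your Liquibase version):
liquibase --changeLogFile=partial.changelog.xml --includeObjects="table:my_table" generateChangeLog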

Run an initial Liquibase script

This is my 2nd day using Liquibase.
I have a 'backup' or 'repository' with the database that I need to create locally on my PC.
I have looked at the documentation, but I'm really not 100% clear on how to run it.
I've updated the liquibase.properties file to reflect the correct paths, username and password.
How do you run the update command to generate the tables and test data?
Windows 7
The Liquibase documentation on 'Adding Liquibase to an existing project' is probably the best place to start. Basically, you want to set the properties file so that it refers to the existing 'backup' database, and then run liquibase generateChangeLog.
This will connect to the existing database and generate a file that contains the structure of the existing database, expressed (typically) in an XML file called a changelog. You then create a new properties file that connects to your local database and use liquibase update to apply the changelog to the local database and populate the structure. Note that this does not typically transfer the data from the existing database to the new database, just the structure - the tables, keys, indexes, etc. If you want to have test data as well, you can either export that data from the existing database, or you might look into crafting the changesets manually. To export the data, a command like this would be used:
java -jar liquibase.jar --changeLogFile="./data/<insert file name>" --diffTypes="data" generateChangeLog
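To then apply the generated changelog to your local database, something like this should work (a sketch; local.liquibase.properties is a placeholder name for a properties file that points at the local database and at the generated changelog file):
java -jar liquibase.jar --defaultsFile=local.liquibase.properties update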

executing a common sql file using liquibase

I have a situation to handle. I have my Liquibase structured as per the recommended best practices, with the changelog XML structured as given below:
Master XML
-->Release XML
-->Feature XML
-->changelog XML
In our application group, we run updateSQL to generate the consolidated SQL file and get the changes executed through our DBA group.
However, the real problem I have is executing a common set of SQL statements during every iteration, like
ALTER SESSION SET CURRENT_SCHEMA=APPLNSCHEMA
since the DBA executes the changes as SYSTEM but the target schema is APPLNSCHEMA.
How can I include such common repeating statements in the Liquibase changelog?
You would be able to write an extension (http://liquibase.org/extensions) that injects it in. If you need to do it per changeLog, it may work best to extend XMLChangeLogParser to automatically create and add a new changeSet that runs the needed SQL.
You could make a changeSet with the attribute 'runAlways' set to true and include the SQL.
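A minimal sketch of such a changeSet (the id and author are placeholders; assuming an XML changelog):
<changeSet id="set-current-schema" author="dba" runAlways="true">
    <sql>ALTER SESSION SET CURRENT_SCHEMA=APPLNSCHEMA</sql>
</changeSet>
Placed at the top of the master changelog, it will be included in every updateSQL run.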
As far as I know, there isn't a way to have Liquibase itself do this. I suggest that you wrap Liquibase with your favorite scripting language such that you run a command "generateSQLforThoseCrazyDBAs" that runs Liquibase and then prepends the SQL you need to the output created by Liquibase.

How to store/organize DDL script?

First of all, I'm using MySQL on the cloud (Amazon RDS). My database definition script has statements to create views, triggers, stored procedures, users, grant permissions to users, plus insert some data (e.g. look-up tables), etc. This script has 2000 lines of SQL code. I keep this script in just one file and I execute it using: mysql --user=myusername --password=mypassword < my.script.sql. This file is kept under SVN.
The issue with having all the SQL code in one file is that it's difficult to see the SVN history for just one item (say I want to see the SVN history for the table Task and the view TaskView). So my question is: how do people store such scripts? Do professional people store each item (table, view, stored procedure) in its own file in a directory? If so, does one have to make a script that deploys all the mini SQL scripts in a folder? Do people just make a script that looks for every .SQL file and dumps it on the DB? Do people use various folders to organize such a script? E.g. one folder for views, one folder for tables, one folder for stored procedures?
Cheers!
We have following folder structure
+ddl
....group1_ddl.sql
....group2_ddl.sql
+procedures
---level1
......single_sp.sql
......another_sp.sql
---level2
......another_uses_level1_sp.sql
---leveln
......remaining_sp.sql
+views
---level1
......group_of_views.sql
As you can see, we have three top-level folders: one each for DDL, stored procedures, and views.
DDL
90% of the time we have one DDL script for all the tables.
Sometimes we maintain separate DDL scripts where they can be split logically,
ex: staging_ddl.sql, aggregate_ddl.sql
The DDL scripts include PK and FK constraints and also additional indexes.
Stored Procedures
Note the multiple folders (level1, level2): since our entire ETL & business logic is implemented in stored procedures, we have a lot of SPs (dozens) with hundreds of lines of code each. Since we wrote modular code, some SPs depend on other SPs, and the SPs that depend on other SPs go to a higher level.
ex: In our scenario main_sp.sql is the SP that runs the entire workflow; it in turn calls the rest of the SPs in sequential order, and they in turn may or may not call other SPs. So main_sp.sql goes to level3, child_sp.sql goes to level2, and grand_child_sp.sql goes to level1.
The file name is the same as the SP name.
Views:
If your views are less complex and you think you can maintain them easily, you can manage them in a single script.
But in our case some views are nearly 2000 lines long, so we maintain one script per view.
Mostly we try to avoid using a view inside another view; where we do, we maintain the multi-level hierarchy explained above, otherwise we keep a single script per view.
The file name is the same as the view name.
This is how I have been managing the scripts successfully for over 7 years.
Hope this helps
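If you also want the kind of deploy script the question asks about, here is a minimal sketch that applies every .sql file in such a tree in order (the credentials and database name are the placeholders from the question; it assumes files within each folder can be run in name order):
for f in ddl/*.sql procedures/level*/*.sql views/level*/*.sql; do
    echo "Applying $f"
    mysql --user=myusername --password=mypassword mydatabase < "$f" || exit 1
done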

Bteq Scripts to copy data between two Teradata servers

How do I copy data from multiple tables within one database to another database residing on a different server?
Is this possible through a BTEQ Script in Teradata?
If so, provide a sample.
If not, are there other options to do this other than using a flat-file?
This is not possible using BTEQ, since you have mentioned that the two databases reside on different servers.
There are two solutions for this.
Arcmain - You need to use Arcmain backup first, which creates files containing the data from your tables. Then you need to use Arcmain restore, which restores the data from those files.
TPT - Teradata Parallel Transporter. This is a very advanced tool. It does not create any files like Arcmain; it directly moves the data between two Teradata servers. (Wikipedia)
If I am understanding your question, you want to move a set of tables from one DB to another.
You can use the following syntax in a BTEQ Script to copy the tables and data:
CREATE TABLE <NewDB>.<NewTable> AS <OldDB>.<OldTable> WITH DATA AND STATS;
Or just the table structures:
CREATE TABLE <NewDB>.<NewTable> AS <OldDB>.<OldTable> WITH NO DATA AND NO STATS;
If you get really savvy you can create a BTEQ script that dynamically builds the above statement with a SELECT, exports the results, and then runs the newly exported file, all within a single BTEQ script.
There are a bunch of other options that you can do with CREATE TABLE <...> AS <...>;. You would be best served reviewing the Teradata Manuals for more details.
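A rough sketch of that dynamic approach (OldDB/NewDB are placeholder database names as above; the export formatting options may need adjusting for your BTEQ version):
.LOGON tdpid/username,password
.EXPORT REPORT FILE = copy_tables.sql
SELECT 'CREATE TABLE NewDB.' || TRIM(TableName) ||
       ' AS OldDB.' || TRIM(TableName) || ' WITH DATA AND STATS;' (TITLE '')
FROM DBC.TablesV
WHERE DatabaseName = 'OldDB' AND TableKind = 'T';
.EXPORT RESET
.RUN FILE = copy_tables.sql
.LOGOFF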
There are a few more options which will allow you to copy from one table to another.
Possibly the simplest way would be to write a smallish program which uses one of their communication layers (ODBC, .NET Data Provider, JDBC, CLI, etc.) and uses that to run a SELECT statement and an INSERT statement. This would require some work, but it would have less overhead than trying to learn how to write TPT scripts. You would not need any 'DBA' permissions to write your own.
Teradata also sells other applications which hide the complexity of some of these tools. Teradata Data Mover provides an abstraction layer over tools like Arcmain and TPT. Access to this tool is most likely restricted to DBA types.
If you want to move data from one server to another, you can do it with a flat file.
First, fetch the data from the source table into a flat file using a utility such as BTEQ or FastExport.
Then load that data into the target table with the help of MultiLoad, FastLoad or BTEQ scripts.
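A minimal BTEQ sketch of that flat-file approach (the table and column names are hypothetical; FastExport/MultiLoad would follow the same export-then-load pattern):
.LOGON source_tdpid/username,password
.EXPORT DATA FILE = employee.dat
SELECT emp_id, emp_name FROM SourceDB.Employee;
.EXPORT RESET
.LOGOFF

.LOGON target_tdpid/username,password
.IMPORT DATA FILE = employee.dat
.REPEAT *
USING (emp_id INTEGER, emp_name VARCHAR(100))
INSERT INTO TargetDB.Employee (emp_id, emp_name) VALUES (:emp_id, :emp_name);
.LOGOFF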