Can Liquibase automatically pick up new changeset files like Flyway?

I am new to both Liquibase and Flyway and was trying out some Hello World examples. I successfully ran basic SQL (create, insert, etc.) using both Liquibase and Flyway, and I was interested in running them from the command line.
Flyway:
was kind of easy to start with
I just had to put an SQL file with the correct naming format 'V1_xxxx.sql' in the correct folder 'flyway/sql' and run 'flyway migrate'
the best part was that it automatically picked up any new SQL file, given the correct file name.
Liquibase:
I had to spend some time to understand and use it
I need to give the correct changelog file name each time:
liquibase --driver=com.mysql.jdbc.Driver --classpath=/path/to/classes --changeLogFile=com/example/db.changelog1.xml --url="jdbc:mysql://localhost/example" --username=dev migrate
liquibase --driver=com.mysql.jdbc.Driver --classpath=/path/to/classes --changeLogFile=com/example/db.changelog2.xml --url="jdbc:mysql://localhost/example" --username=dev migrate
Is there a way for Liquibase to automatically pick up new XML files? As with Flyway, could I just give a folder name and have Liquibase use its DATABASECHANGELOG table to find the deltas and execute them?
Second question, for Liquibase only:
On Windows, in order to run the command successfully, I had to change the changeLogFile parameter from
liquibase --driver=com.mysql.jdbc.Driver --classpath=/path/to/classes --changeLogFile=com/example/db.changelog1.xml --url="jdbc:mysql://localhost/example" --username=dev migrate
to
liquibase --driver=com.mysql.jdbc.Driver --classpath=/path/to/classes --changeLogFile=./db.changelog1.xml --url="jdbc:mysql://localhost/example" --username=dev migrate
i.e. I changed my working directory to com/example, modified the changeLogFile parameter to point to a file in the current folder, and then executed the command.
Is there a way I can point changeLogFile at a file in another folder (apart from the current folder)?

One thing you can do to make Liquibase a bit easier to use from the command line is to create a file named liquibase.properties and save it in the directory where you run the command. If I remember correctly, the command line tool will look for a file with that name and use the properties in it rather than requiring all the options on the command line. See http://www.liquibase.org/documentation/liquibase.properties.html and http://www.liquibase.org/documentation/command_line.html#using_a_liquibase.properties_file for more details. The docs there say:
If you do not want to always specify options on the command line, you can create a properties file that contains default values. By default, Liquibase will look for a file called “liquibase.properties” in the current working directory, but you can specify an alternate location with the --defaultsFile flag. If you have specified an option in a properties file and specify the same option on the command line, the value on the command line will override the properties file value.
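For example, a minimal liquibase.properties matching the command line from the question might look like this (the property names mirror the command-line options; the password line is only there for illustration):
# liquibase.properties - picked up automatically from the current working directory
driver: com.mysql.jdbc.Driver
classpath: /path/to/classes
changeLogFile: com/example/db.changelog1.xml
url: jdbc:mysql://localhost/example
username: dev
password: changeme
With that file in place, the call shrinks to just liquibase migrate.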
Yes, you can have Liquibase automatically load files from a directory. You have to have a simple changelog.xml that is referenced from the command line or your properties file, but that changelog can then just reference another directory that contains more changelog files. The <includeAll> tag is used for this purpose (see http://www.liquibase.org/documentation/includeall.html for more details).
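As a sketch, assuming your individual changelog files live in a com/example/changelogs/ directory on the classpath (adjust the schema version to your Liquibase release), the master changelog referenced from the command line could be as small as this:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
    <!-- Runs every changelog found in this directory, in alphabetical order -->
    <includeAll path="com/example/changelogs/"/>
</databaseChangeLog>
New files dropped into that folder are picked up on the next run, and the DATABASECHANGELOG table keeps track of which changesets have already been applied, which gives you the Flyway-like behaviour from the question.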
Also, yes, you can put the changelog file wherever you like.

Related

How can I conditionally include large scripts in my ssdt post deployment script?

In our SSDT project we have a script that is huge and contains a lot of INSERT statements for importing data from an old system. Using sqlcmd variables, I'd like to be able to conditionally include the file in the post deployment script.
We're currently using the :r syntax which includes the script inline:
IF '$(ImportData)' = 'true'
BEGIN
:r .\Import\OldSystem.sql
END
This is a problem because the script is being included inline regardless of whether $(ImportData) is true or false, and the file is so big that it's slowing the build down by about 15 minutes.
Is there another way to conditionally include this script file so it doesn't slow down the build?
Rather than muddy up my prior answer with another one: there is a special case with a VERY simple option.
Create separate SQLCMD input files for each execution possibility.
The key here is to name the execution input files using the value of your control variable.
So, for example, your publish script defines variable 'Config' which may have one of these values: 'Dev','QA', or 'Prod'.
Create 3 post deployment scripts named 'DevPostDeploy.sql', 'QAPostDeploy.sql' and 'ProdPostDeploy.sql'.
Code your actual post deploy file like this:
:r ".\$(Config)PostDeploy.sql"
This is very much like the build event mechanism where you overwrite scripts with appropriate ones except you don't need a build event. But you are dependent upon naming your scripts very specifically.
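In case it helps, here is roughly how the 'Config' SQLCMD variable can be declared inside an SSDT publish profile (.publish.xml); the value 'Dev' is just an example, and it can also be overridden at deploy time, e.g. with SqlPackage's /v:Config=Prod switch:
<ItemGroup>
  <SqlCmdVariable Include="Config">
    <Value>Dev</Value>
  </SqlCmdVariable>
</ItemGroup>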
The scripts referenced using :r are always included. You have a couple of options, but I would first verify that taking the script out actually improves the performance to where you want it to be.
The simplest approach is to just keep it outside of the whole build process and change your deploy process so it becomes a two-step thing (deploy the DAC, then deploy the script). The positive is that you can do things outside of the SSDT process, but the negative is that you don't get things like automatic disabling of constraints on tables changing in the deployment.
The second way is to not include the script in the deploy when you build, but to create an AfterBuild msbuild task that adds the script as a post-deploy script in the dacpac. The dacpac is a zip file, so you can use the .NET packaging API to add a part called postdeploy.sql which will then be included in the deployment process.
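For the second option, a rough sketch of what such an AfterBuild step could execute, using the System.IO.Packaging API (the dacpac and script paths are placeholders; verify the part name and content type against a dacpac that was built with a post-deploy script before relying on this):
// Sketch: add a post-deployment part to an already-built dacpac.
// Requires a reference to WindowsBase (System.IO.Packaging).
using System;
using System.IO;
using System.IO.Packaging;

class AddPostDeployToDacpac
{
    static void Main()
    {
        // A dacpac is an OPC/zip package, so open it with the packaging API.
        using (Package package = Package.Open(@"bin\Release\MyDatabase.dacpac",
                                              FileMode.Open, FileAccess.ReadWrite))
        {
            Uri partUri = PackUriHelper.CreatePartUri(
                new Uri("postdeploy.sql", UriKind.Relative));
            PackagePart part = package.CreatePart(partUri, "text/plain");

            // Copy the big import script into the new part.
            using (Stream target = part.GetStream(FileMode.Create))
            using (FileStream source = File.OpenRead(@"Import\OldSystem.sql"))
            {
                source.CopyTo(target);
            }
        }
    }
}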
Both of these ways mean you lose verification, so you might want to keep the script in a separate SSDT project which has a "same database" reference to your main project; it will slow down the build when it changes but should be quick the rest of the time.
Here is the way I had to do it.
1) Create a dummy post-deploy script.
2) Create build configurations in your project for each deploy scenario.
3) Use a pre-build event to determine which post deploy configuration to use.
You can either create separate scripts for each configuration or dynamically build the post-deploy script in your pre-build event. Either way you base what you do on the value of $(configuration) which always exists in a build event.
If you use separate static scripts, your build event only needs to copy the appropriate static file, overwriting the dummy post-deploy with whichever script is useful in that deploy scenario.
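For the static-script variant, the pre-build event can be a single copy command along these lines (file names are made up for the example; Script.PostDeployment.sql is the dummy script actually marked as Post-Deploy in the project):
copy /Y "$(ProjectDir)PostDeploy.$(Configuration).sql" "$(ProjectDir)Script.PostDeployment.sql"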
In my case I had to use dynamic generation because the decision about which scripts to include required knowing the current state of the database being deployed to. So I used the configuration variable to tell me which environment was being deployed to and then used an SQLCMD script with :OUT set to my Post-Deploy script location. Thus my pre-build script would then write the post-deploy script dynamically.
Either way, once build completed and the normal deploy process started the Post-Deploy script contained exactly the :r commands that I wanted.
Here's an example of the SQLCMD script I invoke in pre-build.
:OUT .\Script.DynamicPostDeployment.sql
PRINT ' /*';
PRINT ' DO NOT MANUALLY MODIFY THIS SCRIPT. ';
PRINT ' ';
PRINT ' It is overwritten during build. ';
PRINT ' Content IS based on the Configuration variable (Debug, Dev, Sit, UAT, Release...) ';
PRINT ' ';
PRINT ' Modify Script.PostDeployment.sql to effect changes in executable content. ';
PRINT ' */';
PRINT 'PRINT ''PostDeployment script starting at''+CAST(GETDATE() AS nvarchar)+'' with Configuration = $(Configuration)'';';
PRINT 'GO';
IF '$(Configuration)' IN ('Debug','Dev','Sit')
BEGIN
IF (SELECT IsNeeded FROM rESxStage.StageRebuildNeeded)=1
BEGIN
-- These get a GO statement after every file because most are really HUGE
PRINT 'PRINT ''ETL data was needed and started at''+CAST(GETDATE() AS nvarchar);';
PRINT ' ';
PRINT 'EXEC iESxETL.DeleteAllSchemaData ''pExternalETL'';';
PRINT 'GO';
PRINT ':r .\PopulateExternalData.sql ';
....
I ended up using a mixture of our build tool (Jenkins) and SSDT to accomplish this. This is what I did:
Added a build step to each environment-specific Jenkins job that writes to a text file. I either write a SQLCMD command that includes the import file or else I leave it blank depending on the build parameters the user chooses.
Include the new text file in the Post Deployment script via :r.
That's it! I also use this same approach to choose which pre and post deploy scripts to include in the project based on the application version, except that I grab the version number from the code and write it to the file using a pre-build event in VS instead of in the build tool. (I also added the text file name to .gitignore so it doesn't get committed)
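A minimal sketch of that Jenkins step, assuming a Windows batch build step, a boolean build parameter named IMPORT_DATA, and a generated file called ConditionalImport.sql (all of these names are placeholders):
REM Jenkins "Execute Windows batch command" build step
IF "%IMPORT_DATA%"=="true" (
    echo :r .\Import\OldSystem.sql> ConditionalImport.sql
) ELSE (
    echo -- data import skipped for this build> ConditionalImport.sql
)
The post-deployment script then contains a single unconditional include of that file:
:r .\ConditionalImport.sql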

Execute scripts by relative path in Oracle SQL Developer

First, this question relates to Oracle SQL Developer 3.2, not SQL*Plus or iSQL, etc. I've done a bunch of searching but haven't found a straight answer.
I have several collections of scripts that I'm trying to automate (and btw, my SQL experience is pretty basic and mostly MS-based). The trouble I'm having is executing them by a relative path. For example, assume this setup:
scripts/
    A/
        runAll.sql
        A1.sql
        A2.sql
    B/
        runAll.sql
        B1.sql
        B2.sql
I would like to have a file scripts/runEverything.sql containing something like this:
@@/A/runAll.sql
@@/B/runAll.sql
scripts/A/runAll.sql:
@@/A1.sql
@@/A2.sql
where "@@", I gather, means relative path in SQL*Plus.
I've fooled around with making variables but without much luck. I have been able to do something similar using '&1' and passing in the root directory. I.e.:
scripts/runEverything.sql:
@'&1/A/runAll.sql' '&1/A'
@'&1/B/runAll.sql' '&1/B'
and call it by executing this:
@'c:/.../scripts/runEverything.sql' 'c:/.../scripts'
But the problem here has been that B/runAll.sql gets called with the path: c:/.../scripts/A/B.
So, is it possible with SQL Developer to make nested calls, and how?
This approach has two components:
- Set up the active SQL Developer worksheet's folder as the default directory.
- Open a driver script, e.g. runAll.sql (which then changes the default directory to the active working directory), and use relative paths within the runAll.sql script to call sibling scripts.
Set up your default scripts folder. In SQL Developer, use this navigation:
Tools > Preferences
In the preference dialog box, navigate to Database > Worksheet > Select default path to look for scripts.
Enter the default path to look for scripts as the active working directory:
"${file.dir}"
Create a script folder and place all the associated scripts in it:
runAll.sql
A1.sql
A2.sql
The content of runAll.sql would include:
@A1.sql;
@A2.sql;
To test this approach, in SQL Developer, click File, then navigate to and open the script\runAll.sql file.
Next, select all (on the worksheet), and execute.
Through the act of navigating and opening the runAll.sql worksheet, the default file folder becomes "script".
I don't have access to SQL Developer right now so I can't experiment with the relative paths, but with the substitution variables I believe the problem you're seeing is that the positional variables (i.e. &1) are redefined by each start or @. So after your first @runAll, the parent script sees the same &1 that the last child saw, which now includes the /A.
You can avoid that by defining your own variable in the master script:
define path=&1
@'&path/A/runAll.sql' '&path/A'
@'&path/B/runAll.sql' '&path/B'
As long as runAll.sql, and anything it runs, does not also (re)define path, this should work; you just need to choose a unique name if there is a risk of a clash.
Again I can't verify this but I'm sure I've done exactly this in the past...
You need to provide the path of the file as a string; put the path in double quotes and it will work.
For example:
@"C:\Users\Arpan Saini\Zions R2\Reports Statements and Notices\Patch\08312017_Patch_16.2.3.17\DB Scripts\snsp.sql";
Execution of SQL:
@yourPath\yourFileName.sql
How to pass parameters to a file:
@A1.sql; (Parameter)
@A2.sql; (Parameter)
This is not an absolute or relative path issue. It's a SQL interpreter issue: by default it will look for files that have the .sql extension.
Please make sure to rename the file to file_name.sql.
For example, if the workspace has a file called "A", rename it from A to "A.sql".

How to configure liquibase not to include file path or name for calculating checksum?

I found that liquibase uses the full path of the change log file to calculate the checksum.
This behavior makes it awkward to rename changelog files, because Liquibase tries to reapply the changesets again once the file is renamed.
Is there a way to configure Liquibase to use only the changelog id to calculate the checksum?
Please provide your valuable thoughts.
Use the attribute logicalFilePath of the databaseChangeLog tag.
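For example (the path value is whatever you want recorded, not necessarily the physical location of the file):
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    logicalFilePath="com/example/db.changelog1.xml">
    ...
</databaseChangeLog>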
Upstream developers recommend using logicalFilePath, and they suggest performing a direct update on the DATABASECHANGELOG.FILENAME column to fix existing entries that contain full paths:
https://forum.liquibase.org/t/why-does-the-change-log-contain-the-file-name/481
If you set the DATABASECHANGELOG.MD5SUM hashes to null, the hashes are recalculated on the next Liquibase run. That is necessary because the hash calculation takes the moving parts into account as well.
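Put together, the cleanup described above amounts to something like this (the path literal is illustrative and must match your logicalFilePath; back up the table before touching it):
-- Point the recorded file names at the new logical path (value is illustrative)
UPDATE DATABASECHANGELOG
   SET FILENAME = 'com/example/db.changelog1.xml'
 WHERE FILENAME LIKE '%db.changelog1.xml';

-- Null the checksums so Liquibase recalculates them on the next run
UPDATE DATABASECHANGELOG
   SET MD5SUM = NULL;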
One really similar issue: you may just want to ignore the portion of the path before the changelog-master.xml file. In my scenario, I've checked out a project in C:\DEV\workspace and my colleague has the project checked out in C:\another_folder\TheWorkspace.
I'd recommend reading through http://forum.liquibase.org/topic/changeset-uniqueness-causing-issues-with-branched-releases-overlapped-changes-not-allowed-in-different-files first.
Like others have suggested, you'll want the logicalFilePath property set on the <databaseChangeLog> element.
You'll also need to specify the changeLogFile property in a certain way when calling liquibase. I'm calling it from the command line. If you specify an absolute or relative path to the changeLogFile without the classpath, like this, it will include the whole path in the DATABASECHANGELOG table:
liquibase.bat ^
--changeLogFile=C:\DEV\more\folders\schema\changelog-master.xml ^
...
then liquibase will break if you move your migrations to any folder other than that one listed above. To fix it (and ensure that other developers can use whatever workspace location they want), you need to reference the changelogFile from the classpath:
liquibase.bat ^
--classpath=C:\DEV\more\folders ^
--changeLogFile=schema/changelog-master.xml ^
...
The first way, my DATABASECHANGELOG table had FILENAME values (I might have the slash backwards) like
C:\DEV\more\folders\schema\subfolder\script.sql
The second way, my DATABASECHANGELOG table has FILENAME values like
subfolder/script.sql
I'm content to go with filenames like that. Each developer can run liquibase from whatever folder they want. If we decide we want to rename or move an individual SQL file later on, then we can specify the old value in the logicalFilePath property of the <changeSet> element.
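For instance (id, author, and path invented for the example), a changeset whose file has been moved can keep its original identity like this:
<changeSet id="42" author="dev" logicalFilePath="subfolder/script.sql">
    <!-- change content unchanged; only the physical file location moved -->
</changeSet>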
For reference, my changelog-master.xml just consists of elements like
<include file="subfolder/script.sql" relativeToChangelogFile="true"/>
I have faced the same problem and found the solution below.
If you are using the Liquibase SQL format, simply put the following at the top of your SQL file (with your own relative path, e.g. liquibase/changes.sql):
--liquibase formatted sql logicalFilePath:liquibase/changes.sql
If you are using the Liquibase XML format, simply put the following in your XML file (again with your own relative path, e.g. liquibase/changes.xml):
<databaseChangeLog logicalFilePath="liquibase/changes.xml" ...>
...
</databaseChangeLog>
After adding the logicalFilePath attribute above, run the liquibase update command.
It will put whatever relative path you set in logicalFilePath into the FILENAME column of the DATABASECHANGELOG table.

How to force STORE (overwrite) to HDFS in Pig?

When developing Pig scripts that use the STORE command, I have to delete the output directory for every run or the script stops and reports:
2012-06-19 19:22:49,680 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 6000: Output Location Validation Failed for: 'hdfs://[server]/user/[user]/foo/bar More info to follow:
Output directory hdfs://[server]/user/[user]/foo/bar already exists
So I'm searching for an in-Pig solution to automatically remove the directory, also one that doesn't choke if the directory is non-existent at call time.
In the Pig Latin Reference I found the shell command invoker fs. Unfortunately the Pig script breaks whenever anything produces an error. So I can't use
fs -rmr foo/bar
(i.e. remove recursively) since it breaks if the directory doesn't exist. For a moment I thought I might use
fs -test -e foo/bar
which is a test and shouldn't break, or so I thought. However, Pig again interprets test's return code on a non-existent directory as a failure code and breaks.
There is a JIRA ticket for the Pig project addressing my problem and suggesting an optional parameter OVERWRITE or FORCE_WRITE for the STORE command. Anyway, I'm using Pig 0.8.1 out of necessity and there is no such parameter.
At last I found a solution on grokbase. Since finding the solution took too long I will reproduce it here and add to it.
Suppose you want to store your output using the statement
STORE Relation INTO 'foo/bar';
Then, in order to delete the directory, you can call at the start of the script
rmf foo/bar
No ";" or quotations required since it is a shell command.
I cannot reproduce it now, but at some point I got an error message (something about missing files) and I can only assume that rmf interfered with map/reduce. So I recommend putting the call before any relation declaration; after SETs, REGISTERs and defaults should be fine.
Example:
SET mapred.fairscheduler.pool 'inhouse';
REGISTER /usr/lib/pig/contrib/piggybank/java/piggybank.jar;
%default name 'foobar'
rmf foo/bar
Rel = LOAD 'something.tsv';
STORE Rel INTO 'foo/bar';
Once you use the fs command, there are a lot of ways to do this. For an individual file, I wound up adding this to the beginning of my scripts:
-- Delete file (won't work for output, which will be a directory,
-- but will work for a file that gets copied or moved during
-- the script.)
fs -touchz top_100
rm top_100
For a directory
-- Delete dir
fs -rm -r out

NUnit results

I am using NUnit results to give management a view of all the tests. The NUnit docs say the results XML file is automatically updated after running the tests. But in my case it keeps showing me the old results in the index file, whereas the actual results file is updated. Any idea how I can update the index file according to the latest results?
According to NUnit-Console 2.4.8 command line docs the xml output is written by default to TestResult.xml in the working directory. You can use the /xml command line option to specify a different file name. For example:
nunit-console /xml:console-test.xml nunit.tests.dll
My guess is that either you are specifying a filename with the /xml flag but looking in vain for updated results in TestResult.xml or you are not using the /xml flag and looking in vain for updated results in a file with some other name. Probably the former.