Maven execution to perform several plugins in sequence - maven-2

I'm working on something that is using Hibernate for database access. I've got everything set up and working so that I can use mvn hibernate3:hbm2ddl to build the database schema, and I'm using mvn liquibase:update to populate initial data into the database (DBUnit was my first try but I couldn't get it to work with Oracle and Liquibase just worked first time).
My problem is that if I execute hbm2ddl to drop and re-create the schema, the Liquibase DATABASECHANGELOG tables are left intact, so Liquibase won't re-create the data the next time it runs. To get around this I've configured mvn sql:execute to drop the two tables in question, but it means that to rebuild the database from scratch safely I now need to run "mvn hibernate3:hbm2ddl sql:execute liquibase:update".
What I'd really like is to configure something that executes the sql:execute goal whenever hibernate3:hbm2ddl is run, so that the one command leaves me in a nice clean database state. Failing that, a configuration that runs a number of goals in sequence automatically, so I could set up, for example, "mvn execute:db-rebuild" to run the three commands above automatically.
I've seen mention of mojo-executor but no examples of how to actually use it. I'm not even sure it's the right tool for what I want...

Why don't you bind these different goals to a particular lifecycle phase, such as integration-test? The order of the plugin declarations then defines the order of execution, and you get rid of calling mvn by hand for each goal.
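For example, a rough sketch of the POM wiring, not a drop-in config: the pre-integration-test phase, the execution ids and the plugin coordinates shown here are assumptions to adapt to the hibernate3, sql and liquibase plugin configuration you already have:
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>hibernate3-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>rebuild-schema</id>
          <phase>pre-integration-test</phase>
          <goals><goal>hbm2ddl</goal></goals>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>sql-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>drop-liquibase-tables</id>
          <phase>pre-integration-test</phase>
          <goals><goal>execute</goal></goals>
          <!-- reuse your existing sql:execute configuration (the DATABASECHANGELOG drop) here -->
        </execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>org.liquibase</groupId>
      <artifactId>liquibase-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>reload-data</id>
          <phase>pre-integration-test</phase>
          <goals><goal>update</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
With something like that in place, running mvn integration-test should execute all three goals in the declared order (along with the earlier lifecycle phases).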

Related

How to run Sequelize down migration for single file

I'm working on a migration using Sequelize. If the migration's up method throws an error, the migration is not logged in the database as having completed. So, if I run db:migrate:undo, it instead runs down on the previous (and working) migration. As a result, I have a half-executed migration whose schema changes remain in the database, because the corresponding down method is never run by Sequelize. So I need to either somehow force a single down method to run (which I'm not seeing an option for), or manually clean up my database every time I run a failing migration, which can be a real pain for complicated migrations where I'm constantly going through trial and error. Is there an easier way to do this?
sequelize db:migrate:undo --name 20200409091823-example_table.js
Use this command to undo any particular migration
Manually insert the migration into your migrations table so that Sequelize thinks it has completed.
To verify, check the status of your migrations before and after you edit the table:
db:migrate:status
Everything listed as "up" is something that can go "down", and vice versa.
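For example, assuming sequelize-cli's default "SequelizeMeta" table (adjust the name if you configured a different one) and using the file name from the command above, the manual insert is just:
-- mark the migration as already applied so Sequelize skips it
INSERT INTO "SequelizeMeta" (name) VALUES ('20200409091823-example_table.js');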
There is no way to do it as of now; there is an open issue for this in the Sequelize CLI repo.
I tried something and it worked for me:
Rename your migration file so that it comes first alphabetically (i.e. make it the first of the migration files).
Comment out the code in the second migration file.
Run sequelize db:migrate.
This will run only the first migration file.
Don't forget to uncomment the migration file you commented out afterwards.

Liquibase: Deleting SQL file used in a changeset

We are using Liquibase for our DB deployment and are currently doing some refactoring. There is an old changeset which runs an SQL script; it was executed in the past. Now this SQL script (or the statement in it) is outdated and I would like to delete it from our VCS. I did that and tried to deploy again, but I am getting an exception saying that the SQL file is not there, although my changeset has runAlways="false" and runOnChange="false".
Does this mean that I always have to keep this specific SQL script in my VCS, even though it is deprecated?
Given how checksums work in Liquibase, you can't delete it. You ran that changeset once, it changed the database, and Liquibase recorded a checksum for your file. If you now change (or remove) any part of it, Liquibase will complain that the file has been modified.
So, in summary, your file is part of the history of your application build, even if it is no longer useful, and you can't get rid of it.
If you are not yet in production, one option is to create a dump of your current database as an init.sql and start over from that.
I hope this helps.

How to exclude changesets in build trigger?

I have a build plan in Bamboo (5.10.0, build 51017), and at the end of my build process I push changes to my Git repository (Bitbucket Server) with a commit message in the following format: <build key>: Commit performed by the build server.
My build plan key is AAB-AC, and the commit message always starts with the key of the build, such as AAB-AC7-JOB1-75 (${bamboo.buildResultKey}).
I have tried many different regular expressions in the Exclude changesets field of the Advanced options of my repository, but a new build is always triggered whenever a build completes.
Here are a few examples of the patterns I have tried:
^AAB-\S*-\S*-\d*:
^AAB-AC\S*-\S*-\d*:.*$
^AAB-AC
AAB-AC
^AAB-AC\S*-\S*-\d*:.*\n
^AAB-AC.*$
^AAB-AC.*-.*-.*:
Commit performed by the build server
For each of these regular expressions, whenever I run a build manually, a new build is started right after my build finishes and the Bamboo server enters an infinite loop and endlessly builds my app.
How can I make Bamboo ignore the commits performed by my build plan?
Thank you for your help!
Most of those should work; we use ^Tag:.*$ in "Exclude changesets" to exclude commits like "Tag: v1.0.0" from triggering a build.
This worked in Bamboo 4, but it hasn't worked since we upgraded to 5.10.2 (build 51019), so my guess is that this is a bug in Bamboo.
I finally managed to make this work...
I have branches on my plan and I found that branch plans have their own repository definition with their own Exclude changesets setting.
So I tried all the regex combinations for nothing since the value specified on the branch plan was used, and not the value I defined on the main plan...
It seems that the ^ character is used to negate the regex, instead of matching the beginning of the string...
So the pattern ^AAB-AC seems to match everything that does not contain AAB-AC
All the commits I pushed to my server that didn't contain AAB-AC were ignored
All the commits pushed by my build plan triggered a new build
So I fixed my regex and I updated all my branch plans and everything seems to work properly.
Thank you for your help,
Best regards!

How to automate source control with Oracle database

I work in an Oracle instance that has hundreds of schemas and multiple developers. We have a development instance where developers can integrate their work before test or production.
We want to have source control for all the DDL run in this integrated development database. Currently this is done through a Red Gate product which we run manually after we make a change to the database. Red Gate finds the differences between what is in the schema and what was last checked into source control, generates a script of those differences and puts it into source control.
The problem, of course, is that running Red Gate takes some time, so people run it infrequently or not at all for small changes. Also, Red Gate only looks at one schema at a time, and it would be very time consuming to run it manually against every schema to guarantee they are all up to date. If the source-controlled code cannot be relied upon, it becomes far less useful...
What would seem to be ideal would be to have some software that could periodically (even once a day), or when triggered by DDL being run, update the source control (preferably github as this is used by other teams) from all the schemas.
I cannot seem to see any existing software which can be simply used to do this.
Is there a problem with doing this? (there is no need to address multiple developers overwriting each others work on the same day as we have this covered in a separate process) Is anyone doing this? Can anyone recommend a way to do this?
We do this with the help of a PL/SQL function, a Python script and a shell script:
The PL/SQL function generates the DDL of a whole schema and returns it as a CLOB.
The Python script connects to the database, fetches the DDL and stores it in files.
The shell script runs the source control tool to add and commit the modifications (we use Bazaar here).
You can see the scripts on PasteBin:
The PL/SQL function is here: http://pastebin.com/AG2Fa9zL
The python program (schema_exporter.py): http://pastebin.com/nd8Lf0gK
The shell script:
#!/bin/sh
# export the DDL of the schemas into files
python schema_exporter.py
d=$(date +%Y-%m-%d__%H_%M_%S)
# add any new files, then commit only if something was added or modified
bzr add
bzr st | grep -q -E 'added|modified' && bzr commit -m "Database objects on $d"
exit 0
This shell script is configured to run from cron every day.
Being in the database version control space for 5 years (as director of product management at DBmaestro) and having worked as a DBA for over two decades, I can tell you the simple fact that you cannot treat the database objects as you treat your Java, C# or other files and save the changes in simple DDL scripts.
There are many reasons and I'll name a few:
Files are stored locally on the developer's PC, and the changes one developer makes do not affect other developers, and vice versa. In a database this is (usually) not the case: developers share the same database environment, so any change committed to the database affects the others.
Publishing code changes is done via Check-In / Submit Changes / etc. (depending on which source control tool you use). At that point the code from the developer's local directory is inserted into the source control repository, and a developer who wants the latest code has to request it from the source control tool. In a database the change already exists and impacts other developers even if it was never checked into the repository.
During file check-in, the source control tool performs a conflict check to see whether the same file was modified and checked in by another developer while you were modifying your local copy. Again, there is no such check in the database: if you alter a procedure from your PC and at the same time I modify the same procedure from mine, we override each other's changes.
The build process for code consists of getting the label / latest version of the code into an empty directory and performing a build (compile). The output is a set of binaries with which we simply replace the existing ones; we don't care what was there before. With a database we cannot recreate it from scratch, because we need to preserve the data. Instead, deployment executes SQL scripts that were generated in the build process.
When executing those SQL scripts (with DDL, DCL and DML commands for static content) you assume the current structure of the environment matches the structure at the time the scripts were created. If it doesn't, your scripts can fail, for example when adding a new column that already exists.
Treating SQL scripts as code and generating them manually leads to syntax errors, database dependency errors and scripts that are not reusable, which complicates developing, maintaining and testing them. In addition, those scripts may run on an environment that differs from the one you thought they would run on.
Sometimes the script in the version control repository does not match the structure of the object that was tested, and then errors happen in production!
There are many more, but I think you got the picture.
What I found that works is the following:
Use an enforced version control system that enforces check-out/check-in operations on the database objects. This ensures the version control repository matches the code that was checked in, because it reads the object's metadata during the check-in operation rather than relying on a separate manual step. It also allows several developers to work in parallel on the same database while preventing them from accidentally overriding each other's code.
Use impact analysis that utilizes baselines as part of the comparison, to identify conflicts and to determine whether a difference (when comparing an object's structure between the source control repository and the database) is a real change that originates from development, or a difference that originates from another path, such as a different branch or an emergency fix, and should therefore be skipped.
Use a solution that can perform impact analysis for many schemas at once, via a UI or an API, so that you can eventually automate the build and deploy process.
An article I wrote on this was published here, you are welcome to read it.
To me it seems like your way of working is backwards: developers run DDL against the DB in an unordered fashion, and then you need an automated tool to infer the changes (and the DDL) that were run.
The process would be in better control if you did the following instead:
Developers write DDL as SQL scripts, preferably using a migration tool such as Flyway (http://flywaydb.org/documentation/migration/sql.html).
Migration scripts are checked into version control
Migration scripts are periodically run against the DB (e.g. by the migration tool)
In this workflow, the DB would only get altered through automated migration scripts and no-one is allowed to do changes manually. Could this work for you?
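For illustration, a Flyway migration is just a versioned SQL file checked into version control; the file name V1__add_customer_table.sql and the table below are made-up examples, not part of your schema:
-- V1__add_customer_table.sql (illustrative only)
CREATE TABLE customer (
  id   NUMBER PRIMARY KEY,
  name VARCHAR2(100) NOT NULL
);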
(I develop the Oracle tools for Redgate)
Actually, using the tools you can already do what I think you're asking for, with Schema Compare for Oracle.
You can compare multiple schemas either in the UI or via the command line - I think what you're after is automating the command line tool which can create difference scripts, sync between source and destination (live, snapshot or scripts) and generate reports.
You can automate the command line to sync to a scripts folder which is your source code checkout and then subsequently run a command to commit the changes.
I think that's all good :)
We built a commercial tool that bridges Oracle with Git. It helps you manage your database objects with Git: basically, the database becomes the working directory for the developer. You can perform Git operations in the database such as reset, commit, branch, merge etc., and the database code is updated automatically. It might be worth taking a look: https://www.gitora.com

Can Liquibase detect if it has already run?

I have a small set of scripts that manage the build/test/deployment of an app. Recently I decided to switch to Liquibase for DB schema management. These scripts will run both on developer machines, where they regularly blow away and rebuild the database, and in deployed environments, where we will only be adding new changesets.
When this program first runs in a deployed environment I need to detect whether Liquibase has run or not, and then run changelogSync to sync with the existing tables.
Other than manually checking if the database changelog table exists is there a way for the Liquibase API to let me know that it has already run at least once?
I'm using the Java core library in Groovy
The easiest way is probably ((StandardChangeLogHistoryService) ChangeLogHistoryServiceFactory.getInstance().getChangeLogService(database)).hasDatabaseChangeLogTable()
The ChangeLogHistoryService interface returned by liquibase.changelog.ChangeLogHistoryServiceFactory doesn't have a method to check if the table exists, but the StandardChangeLogHistoryService implementation does.
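A rough Groovy sketch of how that check could be combined with changelogSync, assuming Liquibase 3.x; jdbcConnection (a java.sql.Connection) and "db.changelog.xml" are placeholders for whatever your scripts already use:
import liquibase.Liquibase
import liquibase.changelog.ChangeLogHistoryServiceFactory
import liquibase.changelog.StandardChangeLogHistoryService
import liquibase.database.DatabaseFactory
import liquibase.database.jvm.JdbcConnection
import liquibase.resource.ClassLoaderResourceAccessor

def database = DatabaseFactory.instance.findCorrectDatabaseImplementation(new JdbcConnection(jdbcConnection))
def historyService = ChangeLogHistoryServiceFactory.instance.getChangeLogService(database)
def liquibase = new Liquibase('db.changelog.xml', new ClassLoaderResourceAccessor(), database)

if (historyService instanceof StandardChangeLogHistoryService && !historyService.hasDatabaseChangeLogTable()) {
    liquibase.changeLogSync('')   // first run against an existing schema: mark changesets as applied
} else {
    liquibase.update('')          // normal run: apply any new changesets
}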