I am working on a JHipster Spring application using AngularJS, with Liquibase managing the database. Why do we need to delete the whole database when we have made a change in our db-changelog.xml? If I add one field to an existing table in the database, I get an exception saying the t_user table already exists, which means we have to remove the t_user table and lose our data. Please help and suggest any other way to change our database without deleting the whole database.
Thanks in advance
Yesterday we released version 0.11, which supports generating changelogs that contain only your changes. The changes are applied automatically to the database, so there is no need to drop your database anymore.
Try it. http://jhipster.github.io/2014/02/19/jhipster-release-0.11.0.html
I have not used JHipster at all, but normal Liquibase usage is not to keep dropping the database but rather to append new changeSets to your db-changelog.xml file. For example, if you originally had a changeSet that created a table and you later need to add a column, you append a new changeSet for that column, as sketched below. That way you don't lose data, and Liquibase keeps track of which changes have already been run against your databases.
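For illustration, an appended changeSet of that kind might look like this (the table and column names are made up):

<changeSet id="20140301-1" author="jdoe">
    <!-- Appended after the changeSet that originally created t_user.
         Liquibase records the id/author pair in DATABASECHANGELOG,
         so this runs exactly once per database. -->
    <addColumn tableName="t_user">
        <column name="phone_number" type="varchar(20)"/>
    </addColumn>
</changeSet>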
We are trying to apply Liquibase to our project. In its current state, our database has been recorded by Liquibase, but what happens if a developer edits the database directly instead of going through Liquibase (editing tables that were created by Liquibase)? Sometimes he wants to use other tools to change the database quickly (such as PL/SQL, SQL Developer, ...). How can we create an update changelog file (containing only the new changeSets) in this case?
Thank you very much!
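(To illustrate what we are after: something along these lines, where the connection details are placeholders, devdb is the database he edited by hand, and cleandb is a copy built only from the existing changelog, so the diff should capture just his manual edits.)

liquibase --changeLogFile=db-changelog.xml \
          --url=jdbc:oracle:thin:@localhost:1521:cleandb \
          --username=scott --password=tiger \
          --referenceUrl=jdbc:oracle:thin:@localhost:1521:devdb \
          --referenceUsername=scott --referencePassword=tiger \
          diffChangeLog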
I have a requirement where, if a table of a DB gets mistakenly dropped, we need it back, with or without the data. We already use Flyway for migrations; is there any way we can achieve this using Flyway or otherwise?
I think you could hack a solution in place using callbacks (SQL or Java), but you have to ask how a table can get deleted in the first place if you are using Flyway to control migrations and amendments to your database.
This is fundamentally what Flyway is intended to prevent, as the following snippet from the Flyway FAQ confirms, and the solution may be to close off the possibility of external amendments being applied at all.
Can I make structure changes to the DB outside of Flyway?
No. One of the prerequisites for being able to rely on the metadata in the database and having reliable migrations is that ALL database changes are made by Flyway. No exceptions. The price for this reliability is discipline. Ad hoc changes have no room here as they will literally sabotage your confidence. Even simple things like adding an index can trip over a migration if it has already been added manually before.
It seems not to be possible with versioned migrations, since they are applied only once, nor with repeatable migrations, because they are reapplied only if their checksum changes.
Another option is to create a callback which runs after migration.
For example, the afterMigrate callback could do it; you just need to create a script named afterMigrate.sql in the location used to load migrations. Then you just need an SQL script that recreates the table if it does not exist.
Some vendors support such an option; for example, with PostgreSQL you can use a CREATE TABLE statement with the IF NOT EXISTS option to create a table only if it doesn't already exist.
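A minimal sketch of such a callback for PostgreSQL, where the table definition is only a placeholder:

-- afterMigrate.sql, placed in the same location Flyway loads migrations from.
-- Flyway runs this after every successful migrate; the IF NOT EXISTS guard
-- means the table is only recreated when it has gone missing.
CREATE TABLE IF NOT EXISTS audit_log (
    id         BIGSERIAL PRIMARY KEY,
    created_at TIMESTAMP NOT NULL DEFAULT now(),
    message    TEXT
);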
I have an existing database and have used the generateChangeLog command line to create the initial changelog. This works fine :-)
But now I want the developers to use all the tools/processes they know/use already to develop the database and code and use a script to generate any incremental change sets as appropriate.
That is: do a diff against the current state of the developer's database (url/username/password in the properties file) using the current changelog (changeLogFile in the properties file) as the base reference.
There seems to be no easy way to do this - the best I've come up with is:
Create a new temporary database.
Use liquibase to initialise the temp database (to what is currently in the changelog) by overriding the connection url: liquibase --url=jdbc:mysql://localhost:3306/tempbase update
Use liquibase to generate a changeset in the changelog by diff'ing the two databases:
liquibase --referenceUrl=jdbc:mysql://localhost:3306/tempbase --referenceUsername=foo --referencePassword=baz diffChangeLog
Drop the temporary database.
Synchronise the changeset: liquibase changelogSync
but there must be a better way...
You are right that Liquibase cannot compare a changelog file with a database directly. The only real option is to compare your developer database with an actual Liquibase-managed database, or at least one created temporarily, as you describe.
What I would suggest as the better way is to consider shifting the developers to authoring Liquibase changeSets in the first place. It is different tooling than they may be used to, but it has the huge advantage that they will know the change they wanted to make is the one that makes it all the way to production. Any diff-based process (such as using diffChangeLog) will usually guess right about what changed, but not always, and those differences are often not noticed until they reach production.
Liquibase has various features, such as formatted SQL changelogs, that are designed to ease the transition from developers working directly against their database to tracking changes through Liquibase, because once that transition is made many things get much easier.
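For illustration, a formatted SQL changelog lets a developer keep writing plain SQL while Liquibase still tracks it (the author, id, and table name below are made up):

--liquibase formatted sql

--changeset jdoe:add-phone-number
ALTER TABLE t_user ADD COLUMN phone_number VARCHAR(20);
--rollback ALTER TABLE t_user DROP COLUMN phone_number;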
With Liquibase Pro you can create a snapshot file that accomplishes the same thing, and then use that snapshot file as the reference when comparing your database for updates.
https://www.liquibase.org/documentation/snapshot.html
I mention Pro because it takes care of stored logic comparisons as well.
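Roughly, and assuming Liquibase 3.x-style command line options and made-up file and database names, that workflow looks like:

# Capture the current state of the reference database into a snapshot file.
liquibase --url=jdbc:mysql://localhost:3306/proddb --username=foo --password=baz --outputFile=prod-snapshot.json snapshot --snapshotFormat=json

# Later, compare another database against the stored snapshot (offline reference).
liquibase --url=jdbc:mysql://localhost:3306/devdb --username=foo --password=baz --referenceUrl=offline:mysql?snapshot=prod-snapshot.json diff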
I'm writing an application that is using a database (currently MySQL 4) to store data.
It is likely that I will make changes to this later in the form of updates that add additional data. Updating the application is simple; it essentially comes down to overwriting the program files with the new ones. However, how do I go about updating the database schema?
The database is remote and so my application might exist in several places, so simply dumping the ALTER and CREATE statements in an installer would result in the changes being made multiple times, and I have been asked explicitly for an automatic solution that allows for the application copies to be updated over a transition period, and for schema updates to be automatic.
I considered examining the schema at start-up to look for missing tables and columns and adding them as needed, however this does not seem like a clean solution. I also considered putting some kind of “schema version” number on the database, but I can’t see any way to do this short of a single-row table with an int “Version” column, which doesn’t seem like a good way either.
I can highly recommend Liquibase. It really does work - I've used it and was very impressed.
Essentially, it keeps its own log of the statements that have been run against a database and runs new ones only if they haven't already been applied. It is XML-driven and allows you to use optional pre- and post-execution statements and conditions (see the sketch below). You check your XML files into source control and invoke Liquibase from your build tool. It's even suitable for driving production releases.
It's magic.
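As an illustration of those conditions, a changeSet can guard itself with a precondition; the table and index names here are made up:

<changeSet id="2" author="jdoe">
    <preConditions onFail="MARK_RAN">
        <!-- If the index was already created (e.g. by hand), record the
             changeSet as run instead of failing the update. -->
        <not>
            <indexExists indexName="idx_user_email" tableName="t_user"/>
        </not>
    </preConditions>
    <createIndex tableName="t_user" indexName="idx_user_email">
        <column name="email"/>
    </createIndex>
</changeSet>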
Rather than rolling your own system for versioning your database, it's probably worth looking into an existing framework that will manage it for you.
I use Liquibase and have integrated it into my build using the Maven plugin, as sketched below. Worth checking out!
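A minimal sketch of that integration in the pom.xml (the version, paths, and credentials are placeholders):

<plugin>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-maven-plugin</artifactId>
    <version>3.6.3</version>
    <configuration>
        <!-- Placeholder values; these can also live in a liquibase.properties file. -->
        <changeLogFile>src/main/resources/db-changelog.xml</changeLogFile>
        <url>jdbc:mysql://localhost:3306/mydb</url>
        <username>foo</username>
        <password>baz</password>
    </configuration>
</plugin>

With that in place, mvn liquibase:update applies any changeSets that have not yet been run.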
Just as you proposed, add a table where you store the current version of the database schema. Then you only have to apply the changes between your last schema update and the new release, and set the new version number accordingly. I've done this to update our production database about 300 times, and it just works.
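A bare-bones sketch of that idea (the table and column names are just one possible convention):

-- Created once by the initial installer.
CREATE TABLE schema_version (version INT NOT NULL);
INSERT INTO schema_version (version) VALUES (1);

-- On each release the updater reads the stored version, runs every
-- migration script with a higher number, then bumps the row, e.g.:
ALTER TABLE customer ADD COLUMN phone VARCHAR(20);  -- migration 2 (example change)
UPDATE schema_version SET version = 2;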
I've read about how you can generate changelog.xml from an existing schema. That's fine, but I have existing systems that I don't want to touch, except to bring in new changes. I also have completely new systems which require all changes be applied.
So, I want to get Liquibase to only perform migrations from changeset X onwards when running against an existing system. I.e. that system's DB is at revision X-1 (but has no Liquibase system tables), and I don't want any preceding migrations applied.
Many thanks,
Pat
I would recommend a slightly different approach, as commented in this Liquibase forum thread:
Generate a changelog from your existing schema. The Liquibase CLI (generateChangeLog) can do that for you. I usually take the resulting XML and smooth it out a bit (group related changes into single changelogs, do vendor-specific cleanups and so on), but Liquibase does most of the legwork.
Run that changelog against the existing database with the changelogSync command, which only marks it as applied (without actually modifying the schema).
Use Liquibase for applying new changes from that point on, as sketched below.
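Concretely, and assuming the connection details live in liquibase.properties, the bootstrap for an existing system looks roughly like this:

# 1. Existing system only: capture the current schema as the baseline changelog.
liquibase --changeLogFile=db-changelog.xml generateChangeLog

# 2. Existing system only: mark every changeSet as already applied without running it.
liquibase --changeLogFile=db-changelog.xml changelogSync

# 3. Everywhere (existing and new systems): apply whatever is still pending.
liquibase --changeLogFile=db-changelog.xml update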
I think the easiest would be to execute the initial setup on an empty database first and export the entry (or entries) Liquibase inserts into the DATABASECHANGELOG table. Then insert those entries manually into the DATABASECHANGELOG table of each target database, so Liquibase does not execute the "change" there again.
Of course I'd test all that with test dumps on a test machine... :)
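For instance, assuming MySQL and made-up database names, the bookkeeping rows could be carried over like this:

# Dump only the rows Liquibase wrote during the initial setup (no CREATE TABLE statements).
mysqldump --no-create-info emptydb DATABASECHANGELOG > databasechangelog-rows.sql

# Replay them into a target database that already has the same schema,
# so Liquibase treats those changeSets as executed there.
mysql targetdb < databasechangelog-rows.sql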