On our build server, a number of feature branches get deployed against one database. The problem is that sometimes a buggy script in one branch causes Liquibase to exit without releasing the lock, and there is no easy way to find out which branch caused it. We may have up to 30 branches being deployed constantly as new changes land on them.
Is there any way (or could a new feature be added to Liquibase) to set an instance name that gets stored in the LOCKEDBY column of the DATABASECHANGELOGLOCK table, so we can easily find out which branch/instance caused the issue?
Currently, LOCKEDBY contains only the IP address, which is the same for all instances.
You can specify a system property which gets inserted into the LOCKEDBY column:
System.setProperty("liquibase.hostDescription", "some value");
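In a CI setup you could set that property from the branch name before Liquibase runs. A minimal sketch, assuming a programmatic invocation and a GIT_BRANCH environment variable provided by the build server (both are assumptions about your setup):

// Sketch: tag the lock with the current branch before invoking Liquibase.
// GIT_BRANCH is an assumed CI variable; adjust to whatever your build server exposes.
public class TagLockWithBranch {
    public static void main(String[] args) {
        String branch = System.getenv("GIT_BRANCH");
        if (branch != null) {
            System.setProperty("liquibase.hostDescription", branch);
        }
        // ...then run Liquibase programmatically, or pass the same property to the
        // CLI through its JVM options (e.g. -Dliquibase.hostDescription=<branch>).
    }
}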
I think to achieve this you would need to patch Liquibase somewhere around here:
https://github.com/liquibase/liquibase/blob/ed4bd55c36f52980a43f1ac2c7ce8f819e606e38/liquibase-core/src/main/java/liquibase/lockservice/DatabaseChangeLogLock.java
https://github.com/liquibase/liquibase/blob/ed4bd55c36f52980a43f1ac2c7ce8f819e606e38/liquibase-core/src/main/java/liquibase/lockservice/StandardLockService.java
to fetch an additional variable somehow (property file, environment variable, etc.) and store it in the table.
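As a rough, purely illustrative sketch of the kind of value such a patch could compute for LOCKEDBY (the LIQUIBASE_INSTANCE_NAME variable is a made-up name, and where exactly to plug this into StandardLockService depends on the Liquibase version):

import java.net.InetAddress;

// Hypothetical helper: host address plus an instance/branch name taken from the environment.
public class LockedByValue {
    public static String build() throws Exception {
        String host = InetAddress.getLocalHost().getHostAddress();
        String instance = System.getenv("LIQUIBASE_INSTANCE_NAME"); // assumed variable name
        return instance == null ? host : host + " (" + instance + ")";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(build()); // e.g. "10.0.0.5 (feature/login-fix)"
    }
}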
By the way, be careful when deploying multiple branches against the same database instance: one branch can make a change to the DB structure that breaks another.
I've been reading through the Redgate documentation on migration scripts, and I'm trying to add a new column to a table, with a foreign key to another table.
Here's what I have done:
1. Added the new column, made it nullable, and created the relationship to a new table, then committed the changes.
2. Added static data to the new table so that the migration can run, and committed this static data.
3. Added a blank migration script and set all null values on the column I created earlier to the Id of one of the records in the related table, then committed this change.
I then run a deployment of both commits to my testing environment, where records already exist.
The problem I'm having is that the column gets created, but the script doesn't seem to run: the column values stay null. I've verified that the script should actually change the column values, as I've run it manually and it executes successfully.
Am I doing something wrong when using these scripts? Thanks.
I was creating blank migration scripts, which led SQL Compare to set the column as not null. You have to specifically create the migration script on the schema change that requires it, or SQL Compare will override all changes.
I accidentally deleted my database tables and I need to get them back. I have tried running update-database, but I only get:
Cannot find the object "dbo.ArticleComments" because it does not exist or you do not have permissions.
I also tried running Update-Database -TargetMigration:"name_of_migration" with the migration name, but it resulted in the same error:
Cannot find the object "dbo.ArticleComments" because it does not exist or you do not have permissions.
I need to know how to get my database tables back with their columns (empty or not, I don't care).
This may be the issue in your situation.
Check the problematic table dbo.ArticleComments. If you renamed or deleted it, you'll get this kind of error, because the table existed when the old migration script was created and it no longer does. When you try to run that same old migration script, the table is not in your DbSet any more, or it is there under a different name.
Solution :
If that is the case, then you have to manually edit your migration file to reflect the current table changes.
Is there any way to reference the changelog id in your SQL? We're in the middle of moving our database over to Liquibase, but we already have an Environment table with key/value pairs, one of which is "database_version". I want all our Liquibase changes to also update the value of "database_version" with the changelog id.
There is nothing built in, but the extension system (http://liquibase.org/extensions) will allow you to implement it yourself.
The easiest approach would probably be to add a liquibase.changelog.visitor.ChangeExecListener implementation that updates your table after each change is executed.
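A rough sketch of such a listener, assuming a Liquibase 3.x-style API (AbstractChangeExecListener, JdbcConnection) and assumed KEY_NAME/VALUE column names on the Environment table; method signatures may differ between Liquibase versions:

import java.sql.PreparedStatement;

import liquibase.changelog.ChangeSet;
import liquibase.changelog.DatabaseChangeLog;
import liquibase.changelog.visitor.AbstractChangeExecListener;
import liquibase.database.Database;
import liquibase.database.jvm.JdbcConnection;

// Records the id of the last executed changeset in the existing key/value table.
public class DatabaseVersionListener extends AbstractChangeExecListener {

    @Override
    public void ran(ChangeSet changeSet, DatabaseChangeLog changeLog,
                    Database database, ChangeSet.ExecType execType) {
        try {
            JdbcConnection conn = (JdbcConnection) database.getConnection();
            try (PreparedStatement ps = conn.getUnderlyingConnection().prepareStatement(
                    "UPDATE environment SET value = ? WHERE key_name = 'database_version'")) {
                ps.setString(1, changeSet.getId());
                ps.executeUpdate();
            }
        } catch (Exception e) {
            throw new RuntimeException("Could not update database_version", e);
        }
    }
}

The listener then has to be registered when Liquibase runs; how you do that depends on whether you call Liquibase from code, Maven, or the CLI.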
At work we have a table to hold settings which essentially contains the following columns:
PARAMNAME
VALUE
Most of the time new settings are added, but on rare occasions settings are removed. Unfortunately, this means that any script which previously updated a removed setting will continue to do so, even though the update results in "0 rows updated", and that leads to unexpected behaviour.
This situation was picked up recently by a regression test failure but only after much investigation into why the data in the system was different.
So my question is: Is there a way to generate an error condition when an update results in zero rows updated?
Here are some options I have thought of, but none of them are really all that desirable:
PL/SQL wrapper which notices the failed update and throws an exception.
Not ideal as it doesn't stop anyone/a script from manually doing an update.
A trigger on the table which throws an exception.
Goes against our current policy of phasing out triggers.
Requires updating trigger every time a setting is removed and maintaining a list of obsolete settings (if doing exclusion).
Might have problems with mutating table (if doing inclusion by querying what settings currently exist).
A PL/SQL wrapper seems like the best option to me. Triggers are a great thing to phase out, with the exception of generating sequences and inserting history records.
If you're concerned about someone manually updating rather than using the PL/SQL wrapper, just restrict the user role so that it does not have UPDATE privileges on the table but has EXECUTE privileges on the procedure.
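A minimal sketch of such a wrapper, assuming the table is called SETTINGS with the PARAMNAME/VALUE columns from the question (the procedure and role names are placeholders):

-- Wrapper procedure that fails loudly when the setting no longer exists.
CREATE OR REPLACE PROCEDURE set_param(p_name IN VARCHAR2, p_value IN VARCHAR2) AS
BEGIN
  UPDATE settings
     SET value = p_value
   WHERE paramname = p_name;

  IF SQL%ROWCOUNT = 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Unknown setting: ' || p_name);
  END IF;
END set_param;
/

-- Then let scripts call only the procedure:
-- GRANT EXECUTE ON set_param TO script_role;  -- and do not grant UPDATE on settings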
Not really a solution but a method to organize things a bit:
Create a separate table with the parameter definitions and link to that table from the parameter value table. Make the reference to the parameter definition required (nulls not allowed).
Definition table PARAMS (ID, NAME)
Actual settings table PARAM_VALUES (PARAM_ID, VALUE)
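A sketch of that structure in Oracle-style DDL (column sizes are arbitrary):

CREATE TABLE params (
  id   NUMBER        PRIMARY KEY,
  name VARCHAR2(100) NOT NULL UNIQUE
);

CREATE TABLE param_values (
  param_id NUMBER NOT NULL REFERENCES params (id),  -- required reference, nulls not allowed
  value    VARCHAR2(4000)
);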
(changing your table structure is also a very effective way to evoke errors in scripts that have not been updated...)
Maybe you can use the MERGE statement.
Here is a link about it:
http://www.oracle-developer.net/display.php?id=203
The MERGE statement allows you to combine an insert and an update in the same query, so if the desired row does not exist you can insert a record (for example into a buffer table, to flag that the row was missing), and otherwise update the required record.
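One possible reading, as a hedged Oracle sketch against the SETTINGS table from the question, is a plain upsert on the settings table itself, so a removed parameter shows up as a newly inserted row instead of a silent 0-row update:

MERGE INTO settings s
USING (SELECT 'SOME_PARAM' AS paramname, 'new value' AS paramvalue FROM dual) src
   ON (s.paramname = src.paramname)
 WHEN MATCHED THEN
   UPDATE SET s.value = src.paramvalue
 WHEN NOT MATCHED THEN
   INSERT (paramname, value) VALUES (src.paramname, src.paramvalue);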
Hope it helps
I need help writing a TSQL script to modify two columns' data type.
We are changing two columns:
uniqueidentifier -> varchar(36) (this column has a primary key constraint)
xml -> nvarchar(4000)
My main concern is production deployment of the script...
The table is actively used by a public website that gets thousands of hits per hour. Consequently, we need the script to run quickly, without affecting service on the front end. Also, we need to be able to automatically rollback the transaction if an error occurs.
Fortunately, the table only contains about 25 rows, so I am guessing the update will be quick.
This database is SQL Server 2005.
(FYI - the type changes are required because of a 3rd-party tool which is not compatible with SQL Server's xml and uniqueidentifier types. We've already tested the change in dev and there are no functional issues with the change.)
As David said, executing a script against a production database without taking a backup or stopping the site is not the best idea. That said, if you only need to change one table with a small number of rows, you can prepare a script to:
Begin transaction
Create a new table with the final structure you want
Copy the data from the original table to the new table
Rename the old table to, for example, original_name_old
Rename the new table to original_table_name
End transaction
This leaves you with a table that has the original name but the new structure you want, while the original table is kept under a backup name; if you need to roll back the change, a simple script can drop the new table and rename the original one back. If the table has foreign keys the script will be a little more complicated, but it is still possible without much work.
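A minimal T-SQL sketch of those steps; the table name and columns are placeholders based on the question's two type changes (uniqueidentifier -> varchar(36) primary key, xml -> nvarchar(4000)):

BEGIN TRANSACTION;

-- New table with the target types
CREATE TABLE dbo.MyTable_new (
    Id      VARCHAR(36)    NOT NULL CONSTRAINT PK_MyTable_new PRIMARY KEY,
    Payload NVARCHAR(4000) NULL
);

-- Copy and convert the existing rows
INSERT INTO dbo.MyTable_new (Id, Payload)
SELECT CONVERT(VARCHAR(36), Id), CONVERT(NVARCHAR(4000), Payload)
FROM dbo.MyTable;

-- Swap the names, keeping the old table as a backup
EXEC sp_rename 'dbo.MyTable', 'MyTable_old';
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';

COMMIT TRANSACTION;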
Consequently, we need the script to run quickly, without affecting service on the front end.
This is just an opinion, but it's based on experience: That's a bad idea. It's better to have a short, (pre-announced if possible) scheduled downtime than to take the risk.
The only exception is if you really don't care if the data in these tables gets corrupted, and you can be down for an extended period.
In this situation, based on the types of changes you're making and the testing you've already performed, it sounds like the risk is very minimal: you've tested the changes, and you SHOULD be able to do it safely, but nothing is guaranteed.
First, you need to have a fall-back plan in case something goes wrong. The short version of a MINIMAL reasonable plan would include:
Shut down the website
Make a backup of the database
Run your script
test the DB for integrity
bring the website back online
It would be very unwise to attempt such an update while the website is live. You run the risk of being down for an extended period if something goes wrong.
A GOOD plan would also have you testing this against a copy of the database and a copy of the website (a test/staging environment) first and then taking the steps outlined above for the live server update. You have already done this. Kudos to you!
There are even better methods for making such an update, but the trade-off of down time for safety is a no-brainer in most cases.
And if you absolutely need to do this in live then you might consider this:
1) Build an offline version of the table with the new datatypes and copied data.
2) Build all the required keys and indexes on the offline tables.
3) Swap the tables out in a transaction. You could rename the old table to something else as an emergency backup.
sp_help 'sp_rename'
But TEST all of this FIRST in a prod-like environment. And make sure your backups are up to date. AND do this when you are least busy.