What happens when two app servers in a cluster start a Liquibase update (via Grails)? - grails-orm

I am planning to use Liquibase via the Grails database-migration plugin. When I start a cluster of two servers with a new version of the DB schema, and both servers attempt to start the schema upgrade, what will happen?
Does either the Grails database-migration plugin or Liquibase itself have protection against concurrent upgrade attempts?

In addition to your database tables, Liquibase creates a databasechangelog and a databasechangeloglock table to manage its state. databasechangelog records the migrations that have already been run, and databasechangeloglock guards against concurrent attempts at running migrations.
When the first cluster instance starts up, it will acquire the lock by flagging the row in the databasechangeloglock table and run any missing migrations. When the second one starts, it will be blocked until the lock is freed; it will then acquire the lock itself, and since there won't be any un-run migrations, it won't do anything.
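The lock handshake described above is essentially an atomic compare-and-set on the lock table. Here is a minimal sketch of that pattern using Python and sqlite3 — the table and column names mirror Liquibase's, but the SQL is illustrative, not Liquibase's exact statements:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE databasechangeloglock (
    id INTEGER PRIMARY KEY, locked INTEGER, lockedby TEXT)""")
conn.execute("INSERT INTO databasechangeloglock VALUES (1, 0, NULL)")
conn.commit()

def try_acquire(conn, node):
    # Atomic compare-and-set: succeeds only if nobody holds the lock.
    cur = conn.execute(
        "UPDATE databasechangeloglock SET locked = 1, lockedby = ? "
        "WHERE id = 1 AND locked = 0", (node,))
    conn.commit()
    return cur.rowcount == 1

def release(conn):
    conn.execute(
        "UPDATE databasechangeloglock SET locked = 0, lockedby = NULL "
        "WHERE id = 1")
    conn.commit()

assert try_acquire(conn, "server-a") is True    # first node wins the lock
assert try_acquire(conn, "server-b") is False   # second node must wait
release(conn)                                   # migrations done, lock freed
assert try_acquire(conn, "server-b") is True    # now the second node proceeds
```

Because the `UPDATE` only matches when `locked = 0`, at most one server can win, no matter how many start at once.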

Related

Adding table(s) to replicated DB

I recently added two new tables to a DB that is currently being transactionally replicated. Short of dropping and recreating the entire publication, is there a way to quickly add these two new tables to the existing publication? Will I have to take an entirely new snapshot? I only ask because this is a production DB and can't be stopped until nighttime; lockups would cause major issues.
Thanks - Travis
DISCLAIMER: All replication has the potential to lock entire databases as it reads the entire log. Changes should be thoroughly tested outside of production and implemented off-hours.
For basic transactional replication, you can use sp_addarticle and sp_addsubscription for each table without affecting the existing subscriptions. If you initialized the current subscription with sp_addsubscription @article = 'all' (the default), it may not let you add additional articles, in which case you will have to drop the existing subscriptions or create a new publication.
You won't necessarily have to take a snapshot for the existing subscriptions even if you do have to drop them, but you take responsibility for keeping the data in sync. You should use triggers or other methods to lock down changes before dropping subscriptions, then recreate them using sp_addsubscription @sync_type = 'replication support only'. If all subscriptions are created this way, a snapshot will not be generated. If only the new articles are subscribed with @sync_type = 'automatic', then only those articles will be present in the new snapshot. Afterwards, you should verify data integrity between publisher and subscriber.
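A sketch of the two calls for one new table might look like this — the publication, table, subscriber, and database names here are hypothetical placeholders, and you should check the existing publication's settings before running anything like it in production:

```sql
-- Add the new table as an article on the existing publication.
EXEC sp_addarticle
    @publication   = N'MyPub',
    @article       = N'NewTable',
    @source_owner  = N'dbo',
    @source_object = N'NewTable';

-- 'replication support only' assumes you have synced the data yourself;
-- no snapshot will be generated for this subscription.
EXEC sp_addsubscription
    @publication    = N'MyPub',
    @subscriber     = N'SubServer',
    @destination_db = N'SubDb',
    @article        = N'NewTable',
    @sync_type      = N'replication support only';
```

Using @sync_type = N'automatic' instead would let the snapshot agent initialize just this article, at the cost of generating a partial snapshot.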

Will Redis lock all available db when you run `KEYS` command on a specific db?

As you know, in Redis, when you run the KEYS * command, Redis will block until KEYS has returned all the keys.
I want to create 2 separate DBs in Redis and create some keys in each of them, then select one of them and run the KEYS command on that DB.
Will Redis block all available DBs until the answer is ready, or only the selected DB?
TL;DR: yes.
Redis doesn't lock - it blocks on (almost[1]) all commands because it is single-threaded. When the server executes a command, be it a simple GET or the evil KEYS, it is busy serving it and does nothing else. The longer a command takes to complete, the longer the server is blocked.
KEYS is a long-running command because it always traverses the entire keyspace (regardless of the pattern), not to mention the potentially huge reply it builds.
That same single thread of execution also handles numbered, a.k.a. shared, databases. Any operation you perform on one of the databases blocks the entire server, all databases included. More information can be found at: https://redislabs.com/blog/benchmark-shared-vs-dedicated-redis-instances/
[1] BGSAVE, for example, is one of the few commands that do not block. As of v4, there's also UNLINK, and more are planned to be added.
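The effect of a single command thread can be sketched with a toy simulation (this is an illustrative model, not Redis code): commands against any numbered database go through one loop, so a slow command on db0 delays a fast command on db1.

```python
import time
import queue

# Commands from all clients, against any numbered database, land in one queue
# and are served by one loop - the essence of Redis's single-threaded model.
commands = queue.Queue()
commands.put(("db0", "KEYS *", 0.05))   # slow full-keyspace scan of db0
commands.put(("db1", "GET x", 0.0))     # fast read of db1, still has to wait

started = []  # (command, start timestamp) in execution order

def event_loop():
    while not commands.empty():
        db, cmd, cost = commands.get()
        started.append((cmd, time.monotonic()))
        time.sleep(cost)  # simulate the time the command occupies the thread

event_loop()

# GET on db1 could not start until KEYS on db0 finished.
assert started[1][1] - started[0][1] >= 0.04
```

In real Redis the usual advice is the same for this reason: prefer the incremental SCAN command over KEYS in production, since SCAN yields the keyspace in small batches between other commands.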

Inserted data is not shown in Oracle db using a direct query

I was not able to find a solution for this question online, so I hope I can find help here.
I've inherited a Java web application that performs changes to an Oracle database and displays data from it. The application uses a 'pamsdb' user ID. The application inserted a new row into one of the tables (TTECHNOLOGY). When the application later queries the DB, the result set includes the new row (I can see it in printouts and in the application screens).
However, when I query the database directly using sqldeveloper (using the same user id 'pamsdb'), I do not see the new row in the modified table.
A couple of notes:
1) I read here and in other locations that all INSERT operations should be followed by a COMMIT; otherwise other users cannot see the changes. The Java application does not COMMIT, which I thought could be the source of the problem, but since I'm using the same user ID in SQL Developer, I'm surprised I can't see the changes there.
2) I tried doing COMMIT WORK from SQL Developer, but it didn't change the situation.
Can anyone suggest what's causing the discrepancy and how can it be resolved?
Thanks in advance!
You're using the same user, but in a different session. One session can't see uncommitted changes made in another session, for any user - they are independent.
You have to commit from the session that did the insert - i.e. your Java code has to commit for its changes to be visible anywhere else. You can't make the Java session's changes commit from elsewhere, and committing from SQL Developer - even as the same user - only commits any changes made in that session.
You can read more about connections and sessions, and about transactions; the COMMIT documentation summarises it as:
Use the COMMIT statement to end your current transaction and make permanent all changes performed in the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. This statement also erases all savepoints in the transaction and releases transaction locks.
Until you commit a transaction:
You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
You can roll back (undo) any changes made during the transaction with the ROLLBACK statement (see ROLLBACK).
The "other users cannot see the changes" really means other user sessions.
If the changes are being committed and are visible from a new session via your Java code (after the web application and/or its connection pool have been restarted), but are still not visible from SQL Developer; or changes made directly in SQL Developer (and committed there) are not visible to the Java session - then the changes are being made either in different databases, or in different schemas for the same database, or (less likely) are being hidden by VPD. That should be obvious from the connection settings being used by the two sessions.
From comments it seems that was the issue here, with the Java web application and SQL Developer accessing different schemas which both had the same tables.

Master data services deployment

What is the best approach to keep the Production, Dev, and Test environments in sync?
We have a Master Data Services database in our Development, Test, and Production environments. Data is being entered into Production, and we need to keep our Test and Development servers in sync. I couldn't find documentation covering this.
I am not sure if this process is correct -
For moving updated data from Development we are following this process:
create a second version of the model, make the changes in it, and then deploy the second version to Test and Production.
Can we follow this same process from Production to Test and Development to keep them in sync?
Thanks
Two options come to mind:
Snapshot replication
Snapshot replication distributes data exactly as it appears at a specific moment in time and does not monitor for updates to the data. When synchronization occurs, the entire snapshot is generated and sent to Subscribers.
Log shipping
SQL Server Log shipping allows you to automatically send transaction log backups from a primary database on a primary server instance to one or more secondary databases on separate secondary server instances. The transaction log backups are applied to each of the secondary databases individually.
MDS has a tool called MDSModelDeploy. You can create a package with all business rules, schema, and data, ship it over to another machine, and then either:
clone the model (preserving keys, etc.)
update the model
More information here
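A hedged sketch of that workflow from the command line might look like the following - the service, model, version, and package names are hypothetical placeholders, and the exact options should be checked against your MDS version:

```shell
# List the MDS service instances available on this machine.
MDSModelDeploy listservices

# On the source (e.g. Production): package the model, including its data.
MDSModelDeploy createpackage -service MDS1 -model CustomerModel -version VERSION_1 -package CustomerModel.pkg -includedata

# On the target (e.g. Test): the first deployment clones the model,
# preserving keys so later updates line up.
MDSModelDeploy deployclone -package CustomerModel.pkg -service MDS1

# Subsequent refreshes update the existing clone in place.
MDSModelDeploy deployupdate -package CustomerModel.pkg -version VERSION_1 -service MDS1
```

Because deployclone preserves identifiers, repeating createpackage on Production and deployupdate on Test/Dev is one way to keep the lower environments in sync with Production.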

How is liquibase rollback supposed to work?

I am just getting started with liquibase and it seems quite useful. My biggest issue is with rollback.
I'm baking my liquibase changelog into the jar that has my data layer in it and on app startup, I'm migrating automatically using the changelog in the jar in the app. If I'm only moving forward, this works fine.
But if I have two branches each working on that data layer jar and I want to switch back and forth between them using the same DB, it doesn't work because the changelog in one branch has different changesets than the other. In and of itself, that isn't a problem, but when I swap branches and start my app, it doesn't know how to rollback the changesets from the other branch because they are not in the changelog yet.
Is the answer here to just be careful? Always use separate DBs?
Why not put rollback into the DATABASECHANGELOG table in the DB so unknown changesets can be rolled back without the changelog file?
You are right that rollback simply looks at the applied changes in the DATABASECHANGELOG table and rolls changeSets back based on what is in the changelog. It could store the rollback info in the DATABASECHANGELOG table, but it doesn't, for a variety of reasons including simplicity, space, and security. There are also times when it can be nice to roll back changes based on updated changeSet rollback info rather than what was set when the changeSet was first executed.
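Since the rollback instructions live in the changelog rather than the database, a changeSet carries them explicitly. A minimal sketch (ids, authors, and table/column names are illustrative):

```xml
<changeSet id="20240101-add-status-column" author="dev">
    <addColumn tableName="orders">
        <column name="status" type="varchar(20)"/>
    </addColumn>
    <!-- For changes like addColumn, Liquibase can auto-generate the
         inverse; raw <sql> changes always need an explicit <rollback>. -->
    <rollback>
        <dropColumn tableName="orders" columnName="status"/>
    </rollback>
</changeSet>
```

This is also why switching to a branch whose changelog lacks a changeSet leaves Liquibase unable to roll it back: the DATABASECHANGELOG row records that the changeSet ran, but the instructions for undoing it only exist in the other branch's changelog file.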
In your case, rollback is more complex because you're looking to switch branches frequently. What I've generally found is that feature branches tend to make relatively independent changes, so even if you change between branches you can often leave the database changes in place, because they created new tables or columns which the other code simply ignores. There are definitely times this is not true, but within the development environment you find the problems right away and can address them as needed. When you do need to roll back changes from another branch, you can remember to roll them back before switching branches. Some groups don't bother with rollback at all and just rebuild their development database when needed (Liquibase "contexts" are very helpful for managing test/dev data).
As you move from development to QA and production, you normally don't have to deal with the same level of branch churn, so there is normally no difference between the changeSets you are looking to roll back and what is in the changelog.