We use Liquibase for configuration management across multiple MS SQL Server DB instances. If all DBs have executed all available Liquibase scripts, is the following a reliable query to confirm that all DBs are in sync? I am looking for a way to do this from a DB script and not Maven or any other command line utility.
select top 1 ID from DATABASECHANGELOG
order by DATEEXECUTED desc
That might work for really simple scenarios, but if your changelogs have any kind of 'conditional' parts you are going to need some more logic than that.
What you really want to know is whether the set of changes applied to each database is the same. Since changesets are identified by id+author, you should get both those columns from DATABASECHANGELOG and then do set comparison on those to see that the sets were exactly equal.
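For example, a minimal sketch on SQL Server, assuming both databases live on the same instance (the names DbA and DbB are hypothetical):

SELECT ID, AUTHOR FROM DbA.dbo.DATABASECHANGELOG
EXCEPT
SELECT ID, AUTHOR FROM DbB.dbo.DATABASECHANGELOG;

SELECT ID, AUTHOR FROM DbB.dbo.DATABASECHANGELOG
EXCEPT
SELECT ID, AUTHOR FROM DbA.dbo.DATABASECHANGELOG;

If both queries return zero rows, the two databases have applied exactly the same set of changesets.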
I would suggest tagging your database. Two supported mechanisms:
tagDatabase refactor command, contained in a changeset
command line tag option
I personally would favor the first option so that the versioning is built into the changeset files. The second option is useful when performed as part of your application's upgrade process (Create a rollback marker).
Finally, once your database is tagged the latest version can be retrieved using SQL:
SELECT TOP 1 cl.tag
FROM DATABASECHANGELOG cl
WHERE cl.tag is not null
ORDER BY 1 DESC
Obviously this approach assumes that your tag strings have a numeric component, so that they sort as expected.
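If your tags do not have such a numeric component, one alternative sketch is to order by the execution timestamp instead (DATEEXECUTED is a standard DATABASECHANGELOG column):

SELECT TOP 1 cl.tag
FROM DATABASECHANGELOG cl
WHERE cl.tag IS NOT NULL
ORDER BY cl.DATEEXECUTED DESC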
Besides the conditional changeSets that #SteveDonnie mentioned, the order that changeSets execute in can vary depending on how and when you are updating your changelog and database. For example, if you have changeSets A,B and then developer X adds C and developer Y adds D, then X's order is A,B,C and Y's order is A,B,D. When they both merge their changes together the final order may be A,B,C,D, and when X runs the final version his order will be A,B,C,D but Y's final order will be A,B,D,C.
Y's database will be fully up to date, but a SELECT TOP 1 will return "C" for him even though you were expecting "D". It may or may not be a scenario you run into, but is another reason why a simple single row select will not tell you if databases are up to date or not.
Liquibase does have a "status" command you can run against a database and a changelog and it will return whether the database is up to date or not.
At last BigQuery supports using ; in queries, so I can write more than one query in one "block" if I separate them with semicolons.
If I run the code manually, it works. But I cannot schedule that.
When I want to schedule, I have two choices:
(New) Web UI: I must give a destination table; if I don't, I cannot save the scheduled query. But all my queries are updates and inserts with different "destination tables", like these:
UPDATE project.exampledataset.a
SET date = current_date()
WHERE TRUE
;
INSERT INTO project.otherdataset.b
SELECT c,d
FROM project.otherdataset.c
So I cannot even make a scheduling in the Web UI.
Classic UI: I tried this because the official documentation states that I should leave the "destination table" blank, and the Classic UI allows it. I can set up the scheduling, but it doesn't run when it should. I get the error message in an email: "Error status: Dataset specified in the query ('') is not consistent with Destination dataset 'exampledataset'."
AFAIK scripting (and using semicolons) is a very new feature in BigQuery, but I hope someone can help me.
Yes, I know that I could schedule every query one by one, but I would like to resolve it with one big script.
It looks like the scheduled query was defined earlier with a destination dataset and an APPEND/TRUNCATE write disposition. When updating the same scheduled query to a DML query, the GUI doesn't expose the dataset/table name fields so they can be reset to NULL. Hence this error: the previously set dataset and table name are still attached to the scheduled query.
Hence the fix is to delete the scheduled query and create it from scratch with the DML query option. It worked for me.
Scripting is now supported in scheduled queries. However, a scripting query, when scheduled, does not currently support setting a destination table. You still need to use DDL/DML to make changes to an existing table.
E.g.:
CREATE OR REPLACE TABLE destinationTable AS
SELECT *
FROM sourceTable
WHERE date >= maxDate
As of 2022, the BQ Console UI will let you create a new scheduled query without a destination dataset, but it won't let you update a prior SELECT to use DDL/DML block syntax. However, you can use the BigQuery Data Transfer API to update the destinationDatasetId field, via transferconfigs/patch. Use transferconfigs/list to get the configId for a given scheduled query.
Note that you can either use the in-browser API Explorer, if you have the appropriate credentials, or write a programmatic solution. This also seems useful for setting/updating any other fields, including renaming scheduled queries.
I have used MySQL for some projects and recently I moved to PostgreSQL. In MySQL, when I alter a table or a field, the corresponding query is displayed on the page. But I could not find such a feature in PostgreSQL (kindly excuse me if I'm wrong). Since the query was readily available, it was very helpful for me to test something in the local database (without explicitly typing the query), copy the printed query and run it on the server. Now it seems I have to do all of that by hand. Even though I'm familiar with the query operations, at times it can be a pretty time-consuming process. Can anybody help me? How can I get the corresponding query displayed in PostgreSQL (like in MySQL) whenever a change is made to a table?
If you use SELECT * FROM ... there should not be any reason for your output not to include newly added columns, no matter how you get your results - be it psql on the command line, PgAdmin3 or any other IDE.
After you add new columns, it is possible that these changes are still in an open transaction in another window or SQL session - be sure to COMMIT such a transaction. Note that your changes to data or schema will not be visible to any other database clients until the transaction commits.
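For example (table and column names are hypothetical):

BEGIN;
ALTER TABLE customer ADD COLUMN middle_name text;
COMMIT; -- until this runs, other sessions will not see the new column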
If your IDE still does not show the changes, you may need to refresh the list of tables or, if that option is not available, restart your IDE. If that still does not work, maybe you should use a better IDE.
If you have used SELECT field1, field2, ... FROM ... then you must add new fields into your SELECT statement(s) - but this would be true for any other SQL implementation, MySQL included.
You could use the LISTEN / NOTIFY mechanism in PostgreSQL to notify your client on altering the database schema.
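For example, a minimal sketch using an event trigger (PostgreSQL 9.3+; the function and channel names are hypothetical):

CREATE OR REPLACE FUNCTION notify_ddl() RETURNS event_trigger AS $$
BEGIN
  -- broadcast the command tag (e.g. 'ALTER TABLE') to any listeners
  PERFORM pg_notify('ddl_events', tg_tag);
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER ddl_notify ON ddl_command_end
  EXECUTE PROCEDURE notify_ddl();

A client that has run LISTEN ddl_events; will then be notified whenever a DDL command completes, although this tells you that a change happened, not the full SQL text of the change.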
I'm using SQL Server 2005 and I want to synchronize two tables which have the same definition but exist in different databases. MERGE INTO only exists in 2008 and I'd prefer a syntax where I don't have to specify columns in the UPDATE. So I stumbled upon various posts using the following syntax:
UPDATE Destination FROM (Source INTERSECT Destination)
INSERT INTO Destination FROM (Source EXCEPT Destination)
But when I try to execute it I get:
Incorrect syntax near the keyword 'FROM'.
How can I get this working? I have multiple tables which I need to synchronize and I don't want to specify all the columns in every statement.
Thanks for any hint!
According to Books Online, the UPDATE command requires the SET keyword, and it must come before the optional FROM keyword. The INSERT command doesn't have a standalone FROM keyword; the FROM only exists as part of a SELECT statement, either as a derived table source or within a common table expression.
The link you reference is not showing valid SQL Server 2005 syntax.
"How can I get this working? I have multiple tables which I need to synchronize and I don't want to specify all the columns in every statement."
For an update, you must specify all the columns. For an insert, if the source and destination have the same structure then you can use INSERT INTO TARGET_TABLE_NAME SELECT * FROM SOURCE_TABLE_NAME, BUT that is not recommended for production code; if the source or destination changes, the statement would break. If source and destination differ, then you must specify columns on at least one side of the insert.
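For example, a 2005-compatible sketch (table names are hypothetical; it assumes identical structures and a key column ID):

-- insert only the rows that do not exist in the destination yet
INSERT INTO Destination
SELECT s.*
FROM Source s
WHERE NOT EXISTS (SELECT 1 FROM Destination d WHERE d.ID = s.ID);

-- the update still has to list every column explicitly
UPDATE d
SET d.Col1 = s.Col1,
    d.Col2 = s.Col2
FROM Destination d
INNER JOIN Source s ON s.ID = d.ID;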
I'm sorry if this doesn't answer your question, but assuming the whole reason for this is in the interest of saving time, can't you just right-click the source table and generate the INSERT script, then right-click the destination table and generate a blank SELECT script, then combine the two? This will only work if a kill-and-fill is acceptable in your environment.
My scenario:
I want to change the ordinal position of a column in a table. Is there a way to do this without recreating the table?
No, you have to recreate the table if you wish to achieve this (SQL Server).
Even when you do this in SSMS, you will see that the script that is generated also recreates the table.
Not in SQL Server - Not sure about other RDBMSs.
You can create a View with the desired ordinal positions but the only time I can think that would be useful is if you are using SELECT * which is a practice that should be avoided anyway.
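For example, a minimal sketch (table and column names are hypothetical):

CREATE VIEW dbo.CustomerReordered
AS
SELECT City, Name, CustomerID -- columns exposed in the desired order
FROM dbo.Customer;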
It depends on the database system you use.
For example, in some systems it is possible to drop and re-add a column, and you can do it in a procedure that also refills the data.
But in general it shouldn't matter, as you can define the order of the returned data in your SELECT statement. Isn't that enough for you?
Without recreating the table it is not possible. However, if your concern is about losing the data, here is an option provided by SQL Server Management Studio.
Note: I have used SQL Server 2019 Developer Edition.
Right-click on the table name and choose the Design option
Using your cursor, drag the column to your desired position
(Screenshot: SQL Server table design options)
If you want to do it at the script level, you can see the idea below, provided by SSMS
Enable the "Auto Generate Change Script" option, available under Tools --> Options --> Designers --> Table and Database Designers.
(Screenshot: enabling the "Auto Generate Change Script" option)
When you drag the column in SSMS, it automatically creates the script for you.
The high-level idea in the auto-generated script is:
Creating a table Temp_YourTableName with the desired order of columns
Copying all the data from the original table to the new Temp_YourTableName
Dropping the original table
Renaming Temp_YourTableName to the original YourTableName
Of course, it does all of this within a transaction scope to avoid any data loss while the script is executing.
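A condensed sketch of what the generated script looks like (table and column names are hypothetical):

BEGIN TRANSACTION;

-- 1. new table with the desired column order
CREATE TABLE dbo.Tmp_YourTableName (ColB int, ColA int);

-- 2. copy all the data across
INSERT INTO dbo.Tmp_YourTableName (ColB, ColA)
SELECT ColB, ColA FROM dbo.YourTableName;

-- 3. drop the original and rename the copy
DROP TABLE dbo.YourTableName;
EXECUTE sp_rename N'dbo.Tmp_YourTableName', N'YourTableName', 'OBJECT';

COMMIT;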
I found a good reason here for why we sometimes need to do this. Interestingly, it is based on context and has nothing to do with technical constraints.
Say, for example, an original Address table contains Street Address 1, City, State, Zip and Country columns. If the requirements change to include a new column like Street Address 2, this ordering would be meaningful.
I made an update statement on a table in SQL 2008 which updated the table with some wrong data.
I didn't have a backup of the DB.
It's some important dates which got updated.
Is there any way I can recover the old data from the table?
Thanks
SNA
Basically no unless you want to use a commercial log reader and try go through it with a fine tooth comb. No backup of the database can be an 'update resume, leave town' scenario - harsh but it just should not happen.
Andrew basically has called it. I just want to add a few ideas you can consider if you are desperate:
Are there any reports or printouts lying around? Perhaps you can reconstruct the data from there.
Was this data entered via a web application? If so, there is a remote chance you can find the original data in the web server logs, depending upon how the app was constructed, etc.
Does this app interface (pass data to) any other applications? They may have a buffered copy of data...
Can the data be derived from any other existing data? Is there an audit log table, or another date in your schema based on this one, from which you can reconstruct the original date?
Edit:
Some commenters are mentioning that it is a good idea to test your update/delete statements before running them. For this to become a habit, it helps if you have an easy method. I usually create my DELETE statements like this:
--delete --select *
from [User]
where UserID=27
To run the select in order to test your query, highlight everything from select onwards. To then run the delete if you are satisfied with the filter criteria, highlight everything from delete onwards. The two dashes in front of delete are so that if the query accidentally gets run, it will just crash due to invalid syntax.
You can use a similar construct for UPDATE statements, although it is not quite as clean.
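For example, the UPDATE version might look like this (table and column names are hypothetical):

--update [User] set Email='test' --select *
from [User]
where UserID=27

Highlighting from select onwards previews the affected rows; highlighting from update onwards runs the update, because the trailing --select * on that line is commented out.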
SQL Server keeps a log of every transaction, so you can recover your modified data from the log even without a backup.
SELECT [PAGE ID], [Slot ID], [AllocUnitId], [Transaction ID],
       [RowLog Contents 0], [RowLog Contents 1], [RowLog Contents 3], [RowLog Contents 4],
       [Log Record]
FROM sys.fn_dblog(NULL, NULL)
WHERE AllocUnitId IN
    (SELECT [Allocation_unit_id]
     FROM sys.allocation_units allocunits
     INNER JOIN sys.partitions partitions
         ON (allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
         OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
     WHERE object_id = OBJECT_ID('dbo.student'))
  AND Operation IN ('LOP_MODIFY_ROW', 'LOP_MODIFY_COLUMNS')
  AND [Context] IN ('LCX_HEAP', 'LCX_CLUSTERED')
Here is the article that explains, step by step, how to do it:
http://raresql.com/2012/02/01/how-to-recover-modified-records-from-sql-server-part-1/
Imran
Thanks for all the responses.
The problem was actually an accident --- I missed selecting the WHERE condition when I ran the update statement.
It was a quick 5-minute task -- just changing the date to test one customer's data -- so we didn't think of taking a backup.
Yes, of course you are right. This is a lesson.
From now on I will be careful to write my update statements in a transaction, or to test my update statements first, e.g. something like the sketch below.
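For example, a minimal pattern (table and column names are hypothetical):

BEGIN TRANSACTION;

UPDATE dbo.Customer
SET SomeDate = '2010-01-01'
WHERE CustomerID = 27;

-- check the affected rows, then run either:
-- ROLLBACK TRANSACTION; -- to undo the change
-- COMMIT TRANSACTION;   -- to keep it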
Thanks once again for spending your time to give some insight rather than ignoring the question, since the only answer is "NO".
Thanks
SNA
Always take a backup before major UPDATE statements; even if it's never used, there's the peace of mind.
Especially with Red Gate's Object Level Restore, one can now restore an individual table or row from a backup file.
Good luck, I'd suggest finding an old copy elsewhere (DEV/QA) etc...
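For example (database name and path are hypothetical):

BACKUP DATABASE YourDb
TO DISK = 'C:\Backups\YourDb_before_update.bak';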
Isn't it possible to do a rollback on an UPDATE statement?
Late one but hopefully useful…
If the database is in full recovery mode then all transactions are logged in the transaction log and can be retrieved. The problem is that this is not natively supported, because that is not the main purpose of the transaction log.
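You can check the recovery model like this (database name hypothetical):

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'YourDb';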
Options are:
Commercial tools such as ApexSQL Log (more expensive, more options) or Quest Toad (less expensive, fewer options for this purpose; its main focus is SQL Server management)
Trying to do this yourself, as user1059637 pointed out. The problem with this approach is that it can't read transaction log backups and is more tedious.
It comes down to how much your data is worth to you in terms of time and $.