Flyway database script logging - SQL

I am currently evaluating Flyway as a deployment option for our
company. We run our database deployments on an Oracle database and
currently spool the output from a SQL*Plus session for logging purposes. We
use this to verify feedback such as whether objects were created
successfully, whether packages, functions, etc. compiled without errors,
how many records were inserted, and so forth.
Is there similar logging functionality in Flyway? Currently the only
logging we have found is in the server logs. We can tell from these logs
that a script has completed successfully or has triggered an ORA error,
but we are curious whether this is the extent of the database logging
options.
Thank you,

We used the command line method for running Flyway and turned on debug output (-X). Along with a lot of other output, it also logs more information about the SQL migrations run (e.g. the content of repeatable migrations) and the number of records affected. This is not perfect, but it helped us a lot in capturing more information about what was applied.
See https://flywaydb.org/documentation/commandline/ (the flag is not documented under each individual command because it applies to Flyway itself).
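If you also want an audit trail inside the database itself, one option (not something from this thread, just a sketch) is a Flyway SQL callback such as afterEachMigrate.sql placed alongside your migrations. Here it writes to a hypothetical DEPLOYMENT_LOG audit table that you would have to create yourself:
-- afterEachMigrate.sql : Flyway runs this after each applied migration
-- deployment_log is an assumed audit table, not something Flyway creates
INSERT INTO deployment_log (logged_at, logged_by, note)
VALUES (SYSTIMESTAMP, USER, 'Flyway migration step completed');
You would still rely on the flyway_schema_history table and the -X output for the detail of what ran; the callback just gives you a timestamped record on the database side.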

Related

Environment specific migrations cause issues with copying a database to different environments

We've made use of environment-specific migrations for things like seeding data, data correction, and applying table grants. There are times when we'd like to take a copy of production, for example, and import it into another, lower environment, either as a periodic refresh or to start a new test environment. However, as expected, we end up with various failures like Detected applied migration not resolved locally and Detected resolved migration not applied to database. I see there are various flags (ignoreIgnoredMigrations, ignoreMissingMigrations and outOfOrder) that allow us to bypass these issues.
Are there best practices for handling scenarios like the one I described? Is there a way to run an environment-specific migration that doesn't record an entry in the flyway_schema_history table? Are there other approaches to this issue that I haven't mentioned?
Thanks in advance for any insights.
We have used ignoreMissingMigrations as one way to work around this issue.

SQL Server 2012 installed copy showing problems:

My problem is like this: I had a copy of SQL Server 2012 installed on my machine. It's been there for over 3 years without any glitches at all. Just 4-5 days ago, a problem sprouted up. When I started Management Studio it told me that
msdb got corrupted so it cannot be opened.
The complete message is something like this:
Cannot display policy health state at the server level, because the user doesn't have permission. Permission to the database msdb is required for this feature to work correctly.
So what could be wrong here? What sudden changes/anomalies could have crept in that have made this unstable? Someone told me it could be due to a wide range of possibilities. The reason could be anything. Even some NuGet packages affect the database. Initially I thought this could have been an issue with login, permissions, etc., so I tried running as administrator as well. No, it did not cure the problem. If I try to create a new database it simply tells me that I can't do it. The message is something like this:
An exception occurred while executing a T-SQL statement or batch.[Microsoft.SqlServer.ConnectionInfo]. Database msdb cannot be opened. It has been marked as SUSPECT by recovery. [Microsoft Sql Server, Error:926]
How do I recover from this? Can you provide me some guidance, or a clue where precisely to look for the source of the problem? All my work is stalled. Any kind of assistance in recovering my ailing SQL Server installation will be humbly received.
So, I'm requesting you all to show me the way.
Thanks in anticipation.
I fixed mine with Solution C from the following website. My MSDB was corrupt and not loading, so I stopped the services and replaced it with the files from the template in the SQL Server directory.
https://www.mssqltips.com/sqlservertip/3191/how-to-recover-a-suspect-msdb-database-in-sql-server/
"The templates are saved in "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\Templates" (the path varies by version and install choices, this is the default for SQL Server 2012). By shutting down the instance and replacing the bad MSDB data (msdbdata.mdf) and transaction log (msdblog.ldf) files with the template files I was able to restart the instance without error!" (just incase the website link doesn't work I have quoted it here).
Fissh
If your MSDB is corrupted, restore from your most recent backup. That's the safest thing to do and that's why we have backups to begin with.
If you do not have a backup of MSDB, you have a couple of options.
Recreate it. Detailed instructions here: https://msdn.microsoft.com/en-us/library/dd207003(v=sql.110).aspx#CreateMSDB. This is the best way to ensure you get a clean, functional MSDB and is the fastest way to get up and running again. IMPORTANT: doing this means you lose all jobs, backup history, etc. that are stored in MSDB. Remember to recreate all maintenance jobs after you're done, or else you're just waiting for the next thing to fall over (e.g. transaction log backups no longer run, the transaction logs grow until you run out of disk space, and then you can't run any queries that commit transactions).
DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS is another option, which you'll probably find if you Google/Bing the issue. This might work, but it is not recommended. The problem is you don't really know what will be lost: it works by deleting what it can't read and then fixing the links to get the database physically functional again. Once that's done, you'll have to go back and figure out what remains and is still functional. That is tedious and error prone. Besides, if you're going to do a very thorough manual check to ensure all your jobs are intact, you're better off just re-creating them on a new, clean MSDB.
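For reference, the repair path for a SUSPECT msdb is usually written roughly like this (a sketch only, and a last resort as noted above; stop SQL Server Agent first and take file-level copies of msdbdata.mdf and msdblog.ldf before touching anything):
-- Last-resort sketch: put msdb into emergency/single-user mode and attempt repair
ALTER DATABASE msdb SET EMERGENCY;
ALTER DATABASE msdb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (msdb, REPAIR_ALLOW_DATA_LOSS);  -- may silently discard unreadable pages
ALTER DATABASE msdb SET MULTI_USER;
If the repair completes, read the CHECKDB output to see what was touched, then verify jobs, backup history and alerts before trusting the instance again.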

How to automate source control with Oracle database

I work in an Oracle instance that has hundreds of schemas and multiple developers. We have a development instance where developers can integrate their work before test or production.
We want to have source control for all the DDL run in this integrated development database. Currently this is done through a Redgate product which we run manually after we make a change to the database. Redgate finds the changes between what is in the schema and what was last checked into source control, builds a script of the differences and puts this into source control.
The problem, however, is of course that running Redgate can take some time, and people run it infrequently or not at all for small changes. Also, Redgate will only look at one schema at a time, and it would be VERY time consuming to manually run it against all schemas to guarantee that they are up to date. However, if the source-controlled code cannot be relied upon it becomes less useful...
What would seem to be ideal would be some software that could periodically (even once a day), or when triggered by DDL being run, update source control (preferably GitHub, as this is used by other teams) from all the schemas.
I cannot seem to find any existing software that can simply be used to do this.
Is there a problem with doing this? (There is no need to address multiple developers overwriting each other's work on the same day, as we have this covered in a separate process.) Is anyone doing this? Can anyone recommend a way to do it?
We do this with the help of a PL/SQL function, a Python script and a shell script:
The PL/SQL function can generate the DDL of a whole schema and returns it as a CLOB.
The Python script connects to the database, fetches the DDL and stores it in files.
The shell script runs the source control tool to add the modifications (we use Bazaar here).
You can see the scripts on PasteBin:
The PL/SQL function is here: http://pastebin.com/AG2Fa9zL
The python program (schema_exporter.py): http://pastebin.com/nd8Lf0gK
The shell script:
#!/bin/sh
# Export the schema DDL to files, then commit any changes to Bazaar
python schema_exporter.py
d=$(date +%Y-%m-%d__%H_%M_%S)
bzr add
bzr st | grep -q -E 'added|modified' && bzr commit -m "Database objects on $d"
exit 0
This shell script is configured to run from cron every day.
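The heart of the PL/SQL step (the full function is in the PasteBin link above) is Oracle's DBMS_METADATA package. A minimal sketch of the idea, with 'MY_SCHEMA' as a placeholder and the object-type list trimmed for brevity, looks like this:
-- Sketch: emit the DDL for the main object types in one schema as CLOBs
-- (object types containing spaces, e.g. PACKAGE BODY, need extra handling)
SELECT dbms_metadata.get_ddl(object_type, object_name, owner) AS ddl
FROM   all_objects
WHERE  owner = 'MY_SCHEMA'
AND    object_type IN ('TABLE', 'VIEW', 'SEQUENCE', 'PROCEDURE', 'FUNCTION', 'PACKAGE');
The Python script then only has to spool each CLOB to its own file, so the commits show up as readable per-object diffs.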
Having been in the database version control space for 5 years (as director of product management at DBmaestro) and having worked as a DBA for over two decades, I can tell you the simple fact that you cannot treat database objects the way you treat your Java, C# or other files and simply save the changes as DDL scripts.
There are many reasons, and I'll name a few:
Files are stored locally on the developer's PC, and the changes he or she makes do not affect other developers; likewise, the developer is not affected by changes made by colleagues. In a database this is usually not the case: developers share the same database environment, so any change committed to the database affects the others.
Publishing code changes is done with Check-In / Submit Changes / etc. (depending on which source control tool you use). At that point the code from the developer's local directory is inserted into the source control repository, and a developer who wants the latest code has to request it from the source control tool. In a database the change already exists and impacts other people's data even if it was never checked in to the repository.
During a file check-in, the source control tool performs a conflict check to see whether the same file was modified and checked in by another developer while you were modifying your local copy. There is no such check in the database: if you alter a procedure from your PC and at the same time I modify the same procedure from mine, we overwrite each other's changes.
The build process for code gets the label / latest version of the code into an empty directory and then performs a build and compile. The output is a set of binaries that simply replaces what was there before; we don't care what existed previously. With a database we cannot recreate it from scratch, because we need to keep the data. Deployment instead executes SQL scripts that were generated in the build process.
When executing those SQL scripts (the DDL, DCL and DML (for static content) commands), you assume the current structure of the environment matches the structure that existed when the scripts were created. If not, your scripts can fail, for example because you are trying to add a new column that already exists.
Treating SQL scripts as code and generating them manually causes syntax errors, database dependency errors and scripts that are not reusable, which complicates developing, maintaining and testing those scripts. In addition, those scripts may run on an environment that is different from the one you thought they would run on.
Sometimes the script in the version control repository does not match the structure of the object that was tested, and then errors happen in production!
There are many more reasons, but I think you get the picture.
What I found that works is the following:
Use an enforced version control system that requires check-out/check-in operations on the database objects. This makes sure the version control repository matches the code that was checked in, because it reads the metadata of the object during the check-in operation rather than in a separate manual step. It also allows several developers to work in parallel on the same database while preventing them from accidentally overwriting each other's code.
Use impact analysis that utilizes baselines as part of the comparison to identify conflicts and to determine whether a difference (when comparing the object's structure between the source control repository and the database) is a real change that originated in development, or a difference that originated from another path, such as a different branch or an emergency fix, and should therefore be skipped.
Use a solution that knows how to perform impact analysis for many schemas at once, through a UI or an API, so that you can eventually automate the build and deploy process.
An article I wrote on this was published here; you are welcome to read it.
To me it seems like your way of working is backwards: developers run DDL against the DB in an unordered fashion, and then you need an automated tool to infer the changes (and the DDL) that were run.
The process would be in better control if you did the following instead:
Developers write DDL as SQL scripts, preferably using a migration tool such as Flyway (http://flywaydb.org/documentation/migration/sql.html).
Migration scripts are checked into version control
Migration scripts are periodically run against the DB (e.g. by the migration tool)
In this workflow, the DB would only get altered through automated migration scripts and no-one is allowed to do changes manually. Could this work for you?
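To make the first step concrete, a migration checked in this way is just a plain SQL file whose name carries the version. The file name and table below are invented for illustration:
-- V2__add_customer_email.sql  (Flyway reads the version from the file name)
ALTER TABLE customer ADD (email VARCHAR2(320));
COMMENT ON COLUMN customer.email IS 'Primary contact email address';
Running flyway migrate then applies any such files not yet recorded in flyway_schema_history, which gives you both the ordering and the audit trail.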
(I develop the Oracle tools for Redgate)
Actually, using the tools you can already do what I think you're asking for, using Schema Compare for Oracle.
You can compare multiple schemas either in the UI or via the command line - I think what you're after is automating the command line tool which can create difference scripts, sync between source and destination (live, snapshot or scripts) and generate reports.
You can automate the command line to sync to a scripts folder which is your source code checkout and then subsequently run a command to commit the changes.
I think that's all good :)
We built a commercial tool that bridges Oracle with Git. It helps you manage your database objects with Git. Basically, the database becomes the working directory for the developer. You can perform Git operations in the database such as reset, commit, branch, merge, etc., and the database code is updated automatically. It might be worth taking a look: https://www.gitora.com

SQL Server batch database alters, batch database changes - best and safest way

We have a small development team of 5 developers working on a large enterprise level web based asp.net/c# system.
We do a lot of database updates which include stored procedure creations and alters as well as new table creation, column creation, record inserts, record updates and so on and so forth.
Today all of the developers place all change scripts in one large SQL change script file that gets run on our Test and Production environments. So this single file contains stored proc alters as well as record inserts, updates, etc. The file can end up being quite lengthy, as we may only do a test or production release every 1 to 2 months.
The problem that I am currently facing is this:
Once in a while there is a script error that may occur at any given location in this large "batch change script". Perhaps an insert fails or perhaps an alter fails for a proc for instance.
When this occurs, it is very difficult to tell what changes succeeded and what failed on the database.
Sometimes, even if one alter fails for instance, the code will continue to execute throughout the script, and sometimes it will stop execution and nothing further gets run.
So I end up manually checking procs and records today to see what actually worked and what actually did not and this is a bit painstaking.
I was hoping I could roll up this entire change script into one big transaction so that if any problem occurred I could just roll every change back, but that does not appear to be possible with batch scripts like this in SQL Server.
So then I tried to back up the databases before I ran the scripts, so that if an error occurred I could simply restore the db, fix the problem and then re-run the fixed script. However, in order to restore a database I have to turn off our database mirroring, so this is also not totally ideal.
So my question is, what is the safest way to run batch scripts on a production database?
Is there some way that I can wrap the entire script in a transaction that I can roll back that I am not seeing?
Would it possibly be better for us to track and run separate script files, so that if one file fails we can just shove it off into a failed directory to be looked at and continue running all the other files?
Looking for advice and expertise.
thank you for your time.
Matt
The batch script should be run on your QC database first so that any errors are picked up before production.
The QC database should be identical to production or as close as it can be to identical.
Each script should trap for errors and report the name of the script along with the location of the error using PRINT statements; then, if an error occurs when applying to production, you at least have the name of the script and the location of the error within it.
If your QC database is identical or very close, production errors should be very rare.
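As a rough sketch of both the error trapping and the transaction wrapping asked about above (script and object names are invented; statements that must start their own batch, such as CREATE PROCEDURE, would need separate batches or EXEC wrappers, and THROW needs SQL Server 2012 or later):
-- XACT_ABORT makes most runtime errors abort the batch and doom the open transaction
SET XACT_ABORT ON;
BEGIN TRANSACTION;
BEGIN TRY
    PRINT 'Running 001_create_status_codes_table.sql';   -- invented script name
    CREATE TABLE dbo.StatusCodes (StatusCode INT PRIMARY KEY, Description NVARCHAR(50) NOT NULL);
    PRINT 'Running 002_seed_status_codes.sql';            -- invented script name
    INSERT INTO dbo.StatusCodes (StatusCode, Description) VALUES (1, N'Open'), (2, N'Closed');
    COMMIT TRANSACTION;
    PRINT 'All changes committed.';
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    PRINT 'Deployment failed at line ' + CONVERT(VARCHAR(10), ERROR_LINE()) + ': ' + ERROR_MESSAGE();
    THROW;  -- re-raise so the calling tool or job reports the failure
END CATCH;
Everything before the error is rolled back, so the database is never left half-migrated and you do not have to hunt for which statements succeeded.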

Another Oracle SQL monitoring tool

This has probably been asked before, but I'm looking for a utility which can:
Identify a particular session and record all activity.
Identify the SQL that was executed under that session.
Identify any stored procedures/functions/packages that were executed.
Show what was passed as parameters into the procs/funcs.
I'm looking for an IDE that's lightweight, fast, available and won't take two days to install, i.e. something I can download, install and use within the next hour.
Bob.
If you have a license for the Oracle Diagnostics/Tuning Packs, you can use the Oracle Active Session History (ASH) feature.
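A minimal sketch of what that looks like (v$active_session_history needs the Diagnostics Pack license; :sid is a placeholder bind for the session you care about):
-- Recent sampled activity for one session, newest first
SELECT sample_time, sql_id, event, program
FROM   v$active_session_history
WHERE  session_id = :sid
ORDER  BY sample_time DESC;
Joining sql_id to v$sql gives you the statement text; ASH does not capture bind values, though, which is where the tracing approach below comes in.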
The easiest way I can think of to do this is probably already installed in your database - it's the DBMS_MONITOR package, which writes trace files to the location identified by user_dump_dest. As such, you'd need help from someone with access to the database server to access the trace files.
But once you've identified the SID and SERIAL# of the session you want to trace, you can just call:
EXEC dbms_monitor.session_trace_enable (:sid, :serial#, FALSE, TRUE);
to capture all the SQL statements being run, including the values passed in as binds.
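Putting it together, a rough end-to-end sketch (the username filter is a placeholder; you switch tracing off the same way you switched it on):
-- 1. Find the session to trace ('APP_USER' is a placeholder)
SELECT sid, serial#, username, program FROM v$session WHERE username = 'APP_USER';
-- 2. Enable tracing for that session: waits off, binds on
EXEC dbms_monitor.session_trace_enable(:sid, :serial#, FALSE, TRUE);
-- 3. ... let the session run the workload you want to capture ...
-- 4. Switch tracing off and collect the trace file from user_dump_dest
EXEC dbms_monitor.session_trace_disable(:sid, :serial#);
The raw trace file can then be turned into a readable report with the tkprof utility that ships with the database.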