How to use liquibase:diff to migrate production database - liquibase

The development database is managed by Liquibase. The production database is still empty. Following the documentation, I ran mvn liquibase:diff to get the differences between the development and production databases. The command generates a database changelog in XML containing a list of changesets.
I guess the next step is to use that diff changelog and apply it to the production database, but I can't find the correct Maven command to run in the documentation.

You want to use the update command as documented here: http://www.liquibase.org/documentation/maven/maven_update.html
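As a sketch, assuming the diff changelog has been included in your master changelog and the production connection details are supplied as plugin properties (the URL, credentials, and file path below are placeholders), the update might look like:

```shell
# Apply all pending changesets to the production database.
mvn liquibase:update \
  -Dliquibase.url="jdbc:mysql://prod-host:3306/appdb" \
  -Dliquibase.username=deploy_user \
  -Dliquibase.password="$DB_PASSWORD" \
  -Dliquibase.changeLogFile=src/main/resources/db/changelog-master.xml
```

Note that since the production database is empty, running update with the full changelog will create the schema from scratch; the diff changelog is mainly useful when both databases already exist and have drifted apart.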

Related

ASP.NET Core MVC: once I have published a project, how do I force migrations to update the database?

I have a project I have been working on in ASP.NET Core MVC, and now I have published it.
I want to be able to force all migrations onto the database active on our server (my local database is just a copy, so the naming, table names, etc. are the same).
Once I copy the files across to the server folder and run the app, none of the migrations are applied to the database, and I get errors about missing columns and tables.
I have tried using cmd on the published file with
dotnet AppName.dll database update
as many people have suggested, but this didn't work at all; the command doesn't seem to be recognized.
Can someone please give me a straight answer? I have been struggling to find one; every answer either doesn't really explain what to do or uses old methods that no longer work.
You can use
app.UseDatabaseErrorPage();
in your Startup.Configure method.
This will display the Apply Migrations page you know from development.
Another way would be to create a migration script during development and run it against the production database. You can create an idempotent script from the command line in the project directory (where Startup.cs is located).
dotnet ef migrations script --output "migration.sql" --idempotent
This creates a new file, migration.sql, that will bring any database at an earlier point in the migration history up to the current level.
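Because the script is idempotent, it is safe to run regardless of which migration the target database is currently at. One way to apply it, assuming SQL Server and placeholder server/database/user names, is sqlcmd:

```
sqlcmd -S prod-sql-server -d AppDb -U deploy_user -i migration.sql
```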
You might generate a script of your migrations and run it on your deployment environment. The advantage is that you can store the migration scripts, for example in a folder, and keep track of what has been installed and what needs to be installed in the next release.
A migration script is generated with command Script-Migration:
Script-Migration -From <PreviousMigration> -To <LastMigration>
Be sure to check the EF Documentation (Entity Framework Core tools reference) before you run the script.

VSTS build doesn't pickup the dacpac file (hosted agent in cloud)

I'm trying to use VSTS to deploy to my database. The problem is that in one of the steps I need to pick up the dacpac file and deploy it to the Azure SQL server, but it fails.
In that step I'm using "Execute Azure SQL: DacpacTask", which is provided by Microsoft in VSTS.
There is a field for it called "DACPAC File", and the documentation says to use it like this:
$(agent.releaseDirectory)\AdventureWorksLT.dacpac
but it gave me the error below:
No files were found to deploy with search pattern
d:\a\1\s\$(agent.releaseDirectory)\AdventureWorksLT.dacpac
so as a workaround I hard-coded the value:
d:\a\1\s\AdventureWorksLT.dacpac
That works, but obviously it won't work forever, since I need to use an environment variable, something like:
$(agent.releaseDirectory)\AdventureWorksLT.dacpac
Any suggestions?
I've had this same problem. I wasn't able to find detailed documentation, but from experimenting, this is what I found.
I'm assuming that your DACPAC is created as part of a Build Solution task. After the build completes and the DACPAC is created, it exists in a sub-folder of the $(System.DefaultWorkingDirectory) directory.
Apparently, the Azure SQL Database Deployment task cannot access the $(System.DefaultWorkingDirectory) folder, so the file must be copied somewhere it can access. Here's what I did:
The Visual Studio Build task builds the solution, including the DACPAC. The resulting DACPAC is placed in a $(System.DefaultWorkingDirectory) sub-folder.
Add a Copy Files task as your next step. The Source Folder property should be "$(System.DefaultWorkingDirectory)". The Contents property should be "**/YourDacPacFilename.dacpac". The Target folder should be $(build.artifactstagingdirectory). The "**/" tells VSTS to search all subfolders for matching file(s).
Add an Azure SQL Database Deployment task to deploy the actual DACPAC. The DACPAC file will be in the $(build.artifactstagingdirectory).
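For illustration, the two added steps might look like this in a YAML pipeline (the task and input names are the standard ones for these tasks, but the file name, service connection, and server details are placeholders; the classic editor takes the same inputs):

```yaml
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(System.DefaultWorkingDirectory)'
    Contents: '**/AdventureWorksLT.dacpac'   # "**/" searches all subfolders
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'my-service-connection'      # placeholder
    ServerName: 'myserver.database.windows.net'     # placeholder
    DatabaseName: 'AdventureWorksLT'
    SqlUsername: 'deploy_user'
    SqlPassword: '$(SqlPassword)'                   # secret pipeline variable
    DacpacFile: '$(Build.ArtifactStagingDirectory)/AdventureWorksLT.dacpac'
```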
I had the same problem and solved it by removing the old artifact from the release and adding it again, so that it picked up the correct alias of the new artifact.
That's also why the Azure SQL Database Deployment task says it doesn't have access to the $(System.DefaultWorkingDirectory) folder: the artifact has changed, and you must make sure you're using the new one saved in the Azure pipeline.

Run sql script automatically when building Spring boot project

I have a simple Spring Boot REST API with a MySQL database. It currently has only one table, so to create the table if it doesn't exist, I just have some Java code that does the job when the server is initialized.
But I don't think that's the right way to go. I believe a better way would be to have an external SQL file that Spring runs each time I start my project.
So, let's say I have a file called TABLES.sql with all the db tables. How can I configure Spring to run it automatically each time it boots?
Thanks!
EDIT:
Just for further clarification, I have configured my project to run on a Docker db on a "dev" environment, and on a real instance on a "prod" environment. And the db user, pass etc are all configurable. I'm just messing around basically to learn stuff. :)
What you need is a schema migration tool. We use Flyway in our project and it works great.
With this you write incremental SQL scripts, which all run to give you the final version of the database. To actually run the migrations you use Flyway's migrate goal.
mvn -P<profile> flyway:migrate
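A sketch of what the incremental scripts look like under Flyway's default conventions (the file names and DDL are illustrative; Flyway applies them in version order and records each one in its schema history table):

```
src/main/resources/db/migration/
  V1__create_tables.sql      -- e.g. CREATE TABLE customer (id BIGINT PRIMARY KEY, ...);
  V2__add_email_column.sql   -- e.g. ALTER TABLE customer ADD email VARCHAR(255);
```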
If you use Hibernate, a file named import.sql in the root of the classpath will be executed on startup. This can be useful for demos and for testing if you are careful, but probably not something you want to be on the classpath in production. It is a Hibernate feature (nothing to do with Spring).
Get more info here: Database initialization.
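If you want to stay with plain Spring Boot instead, the convention-based alternative is a schema.sql (or schema-mysql.sql) file on the classpath, enabled with properties like these (a sketch; the property names below are for Spring Boot 2.x, and newer versions use the spring.sql.init.* properties instead):

```properties
# src/main/resources/application.properties
spring.datasource.platform=mysql               # picks up schema-mysql.sql
spring.datasource.initialization-mode=always   # also run against a real (non-embedded) db
```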
OK, I think I've found what I was looking for. I can have a schema-mysql.sql file with all my table creates etc., according to this Spring guide:
Initialize a database using Spring JDBC
This is basically what I want.
Thanks for your help though!

Post deployment parameters in DacPac

I'm creating a DacPac in TeamCity by building a sql project. The resulting DacPac has a post deployment script that I would like to update either on deployment or before it is created with a version number. Is it possible to set this parameter either in TeamCity or on deployment of the DacPac?
The sqlpackage.exe command line looks like
"C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:#{SourceFolder} /TargetDatabaseName:DBName /TargetServerName:#{SqlServer}
Where "#{}" is a parameter on octopus deploy server. The post deployment script in the SQL Project looks like :
declare @version varchar(10)
set @version = 'z'
IF EXISTS (SELECT * FROM VersionTable)
UPDATE VersionTable SET Version = @version
ELSE
INSERT INTO VersionTable VALUES (@version)
The way I have been doing it is by using the File Content Replacer on TeamCity to replace 'z' with a version number, but this method is not ideal. It could lead to errors in the future if another dev checked in the file with a different value that didn't fit the regular expression used in the File Content Replacer build feature.
You have a couple of different approaches. The first is the easiest: define a SQLCMD variable in your .sqlproj (properties of the project, SQLCMD Variables tab) and reference it in your post-deploy script. When you deploy, you can override the variable using /v:variable_name=value. (If you aren't using sqlpackage.exe to deploy, what are you using? Octopus Deploy?)
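A sketch of the first approach, assuming a SQLCMD variable named Version (the variable name and values are illustrative). The .sqlproj declares the variable:

```xml
<ItemGroup>
  <SqlCmdVariable Include="Version">
    <DefaultValue>0.0.0</DefaultValue>
    <Value>$(SqlCmdVar__1)</Value>
  </SqlCmdVariable>
</ItemGroup>
```

The post-deploy script then references '$(Version)' instead of the hard-coded 'z', and the publish command overrides it at deploy time:

```
SqlPackage.exe /Action:Publish /SourceFile:MyDb.dacpac /TargetDatabaseName:DBName /TargetServerName:#{SqlServer} /v:Version=#{ReleaseNumber}
```

Here #{ReleaseNumber} stands in for whatever Octopus variable holds your version number.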
The second way is more involved but still straightforward: a dacpac can be read from and written to using the .NET packaging API. There is a stream (file) called postdeploy.sql inside it (open the dacpac as a zip file and it is obvious which one is the post-deploy file); you can read it, change your specific value, and write it back again.
For more manual editing of a dacpac see:
https://github.com/GoEddie/Dacpac-References
Ed

Instead of automatically updating the db, generate a SQL script with FluentMigrator for the production environment

I've looked over the various documentation but have not found anything that addresses this. I'm looking at using FluentMigrator for future projects, but for staging/production our practice is to do schema updates through a DBA. I am allowed to do what I want in other environments such as testing, dev, and local.
The purpose of the tool is entirely defeated if I have to write the scripts for the changes anyway. But what if I didn't have to? So my question is this: is it possible to have FluentMigrator emit a SQL script to a file instead of actually running the transaction?
In my experimentation I created a console app that uses the same DAL assembly as the main project and leverages the migrator logic, so that whenever I run the console app it updates the db from scratch, or from the nearest point, depending on my choice. We use TeamCity, so I thought it could be cool to have the build run the app and place the script (as an artifact) in a folder as a step in the build process, for our DBA to use in the environments where updating the schema myself would be frowned upon.
The FluentMigrator Command Line Runner will generate SQL scripts without applying the changes to the database if you use:
--preview=true
--output=true
--outputFilename=output1.sql
And, if you install the FluentMigrator.Tools package in addition to the FluentMigrator package, you will have access to the Command Line Runner from the migration project's build output directory (Migrate.exe).
Note:
The generated scripts will contain inserts into the FluentMigrator version table.
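Putting those options together, a full invocation might look like this (the provider, connection string, and assembly name are placeholders, and flag spellings vary between runner versions, so check the options page for yours):

```
Migrate.exe /db sqlserver2008 ^
  /connection "server=.;database=AppDb;trusted_connection=true" ^
  /target MyProject.Migrations.dll ^
  --preview=true --output=true --outputFilename=output1.sql
```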
After reading the Command Line Runner Options, I'd use --verbose=true to output the SQL script and --output to save it to a file. There seems to be no 'dry run' option, however; you'll need to run the migration in some preproduction environment to obtain the script.
Give it a shot as I've admittedly never tried it.