I have a simple Spring Boot REST API with a MySQL database. It currently has only one table, so to create the table if it doesn't exist, I just have some Java code that does the job when the server is initialized.
But I don't think that's the right way to go. I believe a better way would be to have an external SQL file that Spring runs each time I start my project.
So, let's say I have a file called TABLES.sql with all the db tables. How can I configure Spring to run this automatically each time it boots?
Thanks!
EDIT:
Just for further clarification, I have configured my project to run against a Docker database in a "dev" environment, and against a real instance in a "prod" environment. The DB user, password, etc. are all configurable. I'm just messing around basically to learn stuff. :)
What you need is a schema migration tool. We use Flyway in our project and it works great.
With this you write incremental SQL scripts, which are all applied in order to give you the final version of the database. To actually run the migrations you use Flyway's migrate goal:
mvn -P<profile> flyway:migrate
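By convention Flyway picks up versioned scripts from src/main/resources/db/migration, named V<version>__<description>.sql (the file names below are just examples):

src/main/resources/db/migration/
    V1__create_initial_tables.sql
    V2__add_index_on_users.sql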
If you use Hibernate, a file named import.sql in the root of the classpath will be executed on startup. This can be useful for demos and for testing if you are careful, but probably not something you want to be on the classpath in production. It is a Hibernate feature (nothing to do with Spring).
Get more info here: Database initialization.
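For example, an import.sql at the classpath root could look like this (note that Hibernate only runs it when it is creating the schema, e.g. with hibernate.hbm2ddl.auto=create or create-drop; the table and rows here are made up):

-- import.sql (root of the classpath)
INSERT INTO customer (id, name) VALUES (1, 'Alice');
INSERT INTO customer (id, name) VALUES (2, 'Bob');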
OK, I think I've found what I was looking for. I can have a schema-mysql.sql file with all my table creation statements etc., according to this Spring guide:
Initialize a database using Spring JDBC
This is basically what I want
Thanks for your help though!
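For anyone finding this later, here is a minimal sketch of that setup (the table is just an example, and the property names vary with the Spring Boot version, so check them against the guide):

-- src/main/resources/schema-mysql.sql
CREATE TABLE IF NOT EXISTS customer (
    id   BIGINT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);

# application.properties: makes Spring pick schema-mysql.sql over plain schema.sql
spring.datasource.platform=mysql
# on newer Spring Boot versions the equivalents are spring.sql.init.platform and spring.sql.init.mode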
Related
I have a project I have been working on in ASP.NET Core MVC, and now that I have published it:
I want to be able to force all migrations onto the database active on our server (my local database is just a copy, so naming, table names, etc. are the same).
Once I copy the files across to the server folder and run it, currently none of the migrations update the database, and I get errors about missing columns, tables, etc.
I have tried using cmd on the published file with
dotnet AppName.dll database update
as many people have suggested, but this didn't work; it doesn't seem to recognize the command at all.
Can someone please give me one straight answer? I have been struggling to find an answer that actually explains what to do without relying on old, no-longer-working methods.
You can use
app.UseDatabaseErrorPage();
in your StartUp.Configure.
This will display the Apply Migrations page you know from development.
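In context it might look roughly like this (a sketch; it assumes the Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore package is referenced):

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Shows the "Apply Migrations" page when a pending-migrations
    // error occurs. Normally used only in development.
    app.UseDatabaseErrorPage();

    // ... rest of your pipeline (static files, MVC, etc.)
}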
Another way would be to create a migration script in development and use it against the production database. You can create an idempotent script from cmd while in the project directory (where StartUp.cs is located):
dotnet ef migrations script --output "migration.sql" --idempotent
This will create a script (a new file, migration.sql) that will bring any database that is at a prior level in the migration history up to the current level.
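You can then run the generated script against the production database with whatever tool you prefer, e.g. (assuming SQL Server; server and database names are placeholders):

sqlcmd -S prod-sql-server -d MyAppDb -i migration.sql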
You might generate a script of your migrations and install the script in your deployment environment. The advantage is that you can store the migration scripts, for example in a folder, and keep track of what has been installed and what needs to be installed in the next release.
A migration script is generated with the Script-Migration command:
Script-Migration -From <PreviousMigration> -To <LastMigration>
Be sure to check the EF Documentation (Entity Framework Core tools reference) before you run the script.
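For example, from the Package Manager Console (the migration names here are hypothetical; -Idempotent and -Output are documented options):

Script-Migration -From InitialCreate -To AddCustomerTable -Idempotent -Output migration.sql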
The development database is managed by Liquibase. The production database is still empty. Based on the documentation I ran mvn liquibase:diff to get the differences between the development and production databases. The command generates a database changelog in XML containing a list of changesets.
I guess the next step is to use that diff changelog and apply it to the production database. But I can't find the correct Maven command in the documentation.
You want to use the update command as documented here: http://www.liquibase.org/documentation/maven/maven_update.html
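Assuming the diff changelog is reachable from your project, an invocation could look roughly like this (paths and connection values are placeholders; they can also be set in the plugin configuration or a liquibase.properties file instead):

mvn liquibase:update \
    -Dliquibase.changeLogFile=src/main/resources/db/changelog-diff.xml \
    -Dliquibase.url=jdbc:mysql://prod-host:3306/mydb \
    -Dliquibase.username=produser \
    -Dliquibase.password=secret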
I've glanced over the documentation but have not found anything that addresses this. I'm looking at using FluentMigrator for future projects, though for staging/production our practice is to do schema updates through a DBA. I am allowed to do what I want for other environments such as testing, dev, and local.
The purpose of the tool is entirely defeated if I have to write the scripts for the changes anyway. However, it occurred to me: what if I didn't? So my question is this: is it possible to have FluentMigrator spit out a SQL script to a file, instead of actually running the transaction?
In my experimentation I created a console app that uses the same DAL assembly as the main project and leverages the migrator logic, so that whenever I run the console app, it updates the db from scratch, or from the nearest point, depending on my choice. We use TeamCity, so I thought it could be cool to have it run the app and place the script (as an artifact) in a folder as a step in the build process for our DBA, for the environments where updating the schema myself would be frowned upon.
The FluentMigrator Command Line Runner will generate SQL scripts, without applying the changes to the database, if you use:
--preview=true
--output=true
--outputFilename=output1.sql
And, if you install the FluentMigrator.Tools package in addition to the FluentMigrator package, you will have access to the Command Line Runner from the migration project's build output directory (Migrate.exe).
Note:
The generated scripts will contain inserts into the FluentMigrator version table.
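Put together, an invocation might look something like this (the --conn/--provider/--assembly/--task flags are from memory, so verify them against the runner's help output for your version):

Migrate.exe --conn "Server=.;Database=MyDb;Trusted_Connection=True;" ^
            --provider sqlserver2012 ^
            --assembly "MyProject.Migrations.dll" ^
            --task migrate ^
            --preview=true --output=true --outputFilename=output1.sql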
After reading Command Line Runner Options I'd use --verbose=true to output the SQL script and --output to save it to a file. There seems to be no 'dry run' option, however - you'll need to run the migration in some preproduction environment to obtain the script.
Give it a shot as I've admittedly never tried it.
Please don't be too harsh, because I do not grasp this entirely correctly still, but msbuild/msdeploy is giving me some headaches lately.
Hopefully someone can provide a textual aspirin of some kind? So here is what I want to do:
I have a web application project, that has multiple configurations, thus multiple web.config-transforms.
I would like to deploy this project from command line.
I would rather not modify its project file. (I want to be able to do this for several web applications, so as little editing as possible is much appreciated.)
I would like to be able to build it only once and then deploy the different configurations from it.
So far I deployed from command line using something like this:
msbuild D:\pathToFile\DeployVariation01.csproj
/p:Configuration=Debug;
Platform=AnyCpu;
DeployOnBuild=true;
DeployTarget=MSDeployPublish;
MSDeployServiceURL="localhost";
DeployIisAppPath="DeployApp/DeployThis01";
MSDeployPublishMethod=InProc
And this does just what I want, except it only deploys the "Debug" configuration.
How can I, with minimal adjustments, make it deploy my other configurations as well?
I was thinking maybe I could build a package that includes all my configurations and then deploy from that and decide "while deploying" which configuration to deploy?
Unfortunately I am pretty much stuck here; the approaches I have read about all seem to require some modifications to project files. Is there a way around that?
UPDATE:
I am still not really where I want to be here :).
But I looked into the PackageWeb approach (there is also an interesting video about it here) and it seems pretty nice; I can now build a package that includes all my transforms and then deploy from it as often as I want into multiple configurations.
One thing that I dislike about this is that I have to store my password in plain text in the generated parameters file for the PowerShell script. Does someone know a way around this? I would really rather have the password encrypted.
Also other approaches to solve my original problem are still appreciated.
I am working on the same problem and am taking two paths using Microsoft Web Deploy (MSDeploy), which is now at version 3.0.
I first compile the project with MSBUILD using the Package target, passing in the Configuration and PackageLocation properties. The Package target generates a set of package files, including a {PackageName}.SetParameters.xml file. The SetParameters.xml file by default allows on-publish changes to ConnectionStrings without recompiling when using msdeploy.exe to publish the package. The publish transformation process can also be customized by adding a parameters.xml file to the project, defining additional parameterized web.config settings which can be changed at deploy time.
After the initial build I use the {PackageName}.deploy.cmd file generated by MSBUILD during the Package process to deploy the package to the target website. The Package process essentially duplicates what you are currently doing from MSBUILD, in that I can publish one build-configuration web.config transform from one compile. The process provides a consistent deployment process that can target remote servers from a central CI environment, which is great from a pure deployment standpoint. The package build/deploy process is parameterized within TeamCity, requiring changes to only a few parameters to set up a new deployment.
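For illustration, the two steps look roughly like this (project, path, and server names are placeholders):

rem 1) build the package once
msbuild MyWebApp.csproj /t:Package /p:Configuration=Release /p:PackageLocation="D:\drops\MyWebApp.zip"

rem 2) deploy it, after editing MyWebApp.SetParameters.xml for the target environment
MyWebApp.deploy.cmd /Y /M:https://targetserver:8172/msdeploy.axd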
Like you, I cannot, however, compile a single version of code and deploy to multiple servers using the process as it exists today - which is my current focus. I want to parameterize the transform in a Continuous Deployment, build-once-deploy-many pattern to Dev, QA, User Testing, Staging, and Production.
I anticipate using one of two methods:
Create a Parameters.xml file for each project defining the variable deployment parameters, along with a custom {ServerName}.SetParameters.xml for each target deployment, both to be used in conjunction with msdeploy.exe (a sketch of these two files appears below, after this list).
a. I am not sure defining a parameters.xml is a flexible enough process for my needs, as the current project inserts and removes a variable number of web.config settings. Implementing a parameters file incorporating all of the variables could be too complex for my taste. I would also end up creating all of the target transformations myself, instead of the current developer-initiated process. Not ideal.
I am following up on very recent updates to VS2012 Web Tools 2012.2 which allow tying a web.config transform to the publish profiles (profile.pubxml) now stored under SolutionName/Properties/PublishProfiles in VS2012.
VS2012 release 2012.2 adds the capability to create a second transform tied to the publish profile. The resulting transform process first runs the build configuration transformation, followed by the publish transformation, i.e. Release Transform followed by TargetServer Transform. Sayed Hashimi has a great YouTube video demonstrating the entire process using MSBUILD.
What is not entirely clear is whether the second transform is supported separately from the build using MSDeploy in a Continuous Deployment, build-once-deploy-many Pattern, or if the publish transformation is only supported during a separate Package/Build for each target transformation.
Option 1 will definitely work for some environments and was my first plan for tackling a Continuous Deployment process. I would much rather use Web Transforms to accomplish the process if possible.
An outside third possibility is using one of several CodePlex commandline projects that are capable of transforming web.config using the XDT transform engine. Unfortunately, using these tools would mean splicing the results into the Build/Package MSBUILD process in order to get the resulting web.config transformation into the deployment package - something I've not yet been successful in accomplishing. Sayed Hashimi also has a PackageWeb project from 2012 that might work as well. I am hoping his more recent work replaces the need for the extra steps involved in the packageweb solution.
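For reference on option 1, the two files would look roughly like this (the setting name is made up):

<!-- parameters.xml, in the web project root: declares a deploy-time parameter -->
<parameters>
  <parameter name="ApiUrl" defaultValue="http://localhost/api">
    <parameterEntry kind="XmlFile"
                    scope="\\web.config$"
                    match="/configuration/appSettings/add[@key='ApiUrl']/@value" />
  </parameter>
</parameters>

<!-- Staging.SetParameters.xml, one per target server, passed to msdeploy/deploy.cmd -->
<parameters>
  <setParameter name="ApiUrl" value="https://staging.example.com/api" />
</parameters>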
Let me know if you decide on a solution - as I am definitely interested.
Does anyone have experience with loading JARs dynamically for an XPages application?
We would like to keep some calculation code that changes quite often in external JAR files, and load those dynamically when they are needed. Does anyone know if that's possible with Domino?
You could create a tasklet which you can roll out as an OSGi plugin. This way you can execute the calculations in the tasklet, which you can update independently of your application. That way you only need to update your update site, and all applications that use that code will have the latest version installed.
You can find more info about it here: http://xpag.es/?1926
Another solution would be to put the JAR file on your server in the java/ext/lib directory. Every time a new release is created you can update that file on the server. A server / HTTP task restart would be necessary, of course.