I'm working on a multi-tenant MVC 4 application and I'm going with the one schema per customer approach. I'd like to run the database migrations in code when a customer signs up. Is this possible using EF 5/Code First Migrations?
So when a customer signs up, I'll create an account in dbo. I'll then check if their subdomain exists as a schema in the database, if not, I'll create the schema and ideally run the migrations.
Thanks!
Clarification
When I create the new schema for the customer in the database, I want to run the migrations for that new schema. So for example,
I'll have schema1.Products and schema2.Products.
If I'm understanding what you want correctly...
You could use something like this
var migrator = new DbMigrator(new Configuration());
// Only run Update() when there are migrations that haven't been applied yet.
if (migrator.GetPendingMigrations().Any())
    migrator.Update();
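For the sign-up flow in the question, a minimal sketch could look like the following (Configuration is your migrations configuration class; the tenant connection string, provider name and class names are just assumptions for illustration):
using System.Data.Entity.Infrastructure;
using System.Data.Entity.Migrations;
using System.Linq;

public static class TenantProvisioner
{
    // Runs any pending Code First migrations for a newly signed-up customer.
    public static void MigrateTenant(string tenantConnectionString)
    {
        var configuration = new Configuration
        {
            // Point the migrator at the tenant's connection rather than
            // whatever is in the config file.
            TargetDatabase = new DbConnectionInfo(tenantConnectionString, "System.Data.SqlClient")
        };

        var migrator = new DbMigrator(configuration);
        if (migrator.GetPendingMigrations().Any())
            migrator.Update();
    }
}
Bear in mind this targets a connection, not a schema; with EF 5 the schema your tables end up in (schema1.Products vs schema2.Products) still comes from your mappings (e.g. ToTable("Products", "schema1")), so per-schema tenancy in a single database is the harder part.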
Or even better you may want to create your own initializer - e.g. check these posts of mine...
What is the correct use of IDatabaseInitializer in EF?
How to create initializer to create and migrate mysql database?
The problem I see with your approach is that you'd have to 'separate' the account model/database from the one that you're trying to migrate. Since you mentioned multi-tenant, that may already be the case.
But I guess you could also create the 'base entities' for accounts etc., and then migrate the rest on top. That's a bit of a complex scenario: the model is created once (per connection) on first use and cached from then on. So the only way around that would be a restart/reload, I think (don't hold me to it, just thinking out loud here).
Not sure if this is what you're asking for, but you can run Code First migrations from the command line:
packages\EntityFramework.5.0.0\tools\migrate.exe Example.dll /startUpDirectory:Example\bin
So in theory you could call this whenever a new customer signed up.
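A rough sketch of shelling out to it from your sign-up code (paths and assembly names copied from the command above, so adjust to your layout):
using System.Diagnostics;

var startInfo = new ProcessStartInfo
{
    FileName = @"packages\EntityFramework.5.0.0\tools\migrate.exe",
    Arguments = @"Example.dll /startUpDirectory:Example\bin",
    UseShellExecute = false,
    RedirectStandardOutput = true
};

using (var process = Process.Start(startInfo))
{
    var output = process.StandardOutput.ReadToEnd();  // capture migrate.exe output for logging
    process.WaitForExit();                            // block until the migration run finishes
}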
Related
I have a requirement where, if a table in a DB gets mistakenly dropped, we need it back, with or without the data. We already use Flyway for migrations; is there any way we can achieve this using Flyway or otherwise?
I think you could hack a solution into place using callbacks (SQL or Java), but you've got to ask how a table can get dropped if you are using Flyway to control migrations and amendments to your database in the first place.
This is fundamentally what Flyway is intended to prevent, as the following snippet from the Flyway FAQ confirms, and the solution may be to close off the possibility of external amendments being applied in the first place.
Can I make structure changes to the DB outside of Flyway?
No. One of the prerequisites for being able to rely on the metadata in the database and having reliable migrations is that ALL database changes are made by Flyway. No exceptions. The price for this reliability is discipline. Ad hoc changes have no room here as they will literally sabotage your confidence. Even simple things like adding an index can trip over a migration if it has already been added manually before.
It seems not to be possible with versioned migrations, since they are applied only once, or with repeatable migrations, because they are reapplied only if the checksum changes.
Another option is to create a callback which will run after migration.
For example, the afterMigrate callback could do it: you just need to create a script named afterMigrate.sql in the location used to load migrations, and have that SQL script recreate the table if it does not exist.
Some vendors support such an option; for example, with PostgreSQL you can use a CREATE TABLE query with the IF NOT EXISTS option to create a table only if it doesn't exist.
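For example, an afterMigrate.sql along these lines (the table and its columns are made up here) would quietly restore a dropped table on every migration run:
-- afterMigrate.sql: runs after every successful migration.
-- Recreate the table only when it is missing (PostgreSQL syntax).
CREATE TABLE IF NOT EXISTS customer (
    id   BIGINT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);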
When I remove tables used in my Azure database (of course after removing the entities), I just use DROP TABLE TABLENAME. This has a bad effect. When I run the mobile service by just starting the browser, I get an Error 500 when I add a new record (of an existing table of course) with my TableControllers. Apparently, I did something wrong. It can be "solved" by creating a completely new database and use this one in my mobile service. The Seed method ensures that the right tables exist (and only the right tables) and everything works fine.
What is the best way (to prevent errors) when removing tables in a database used in Azure Mobile Services? Creating a completely new database seems to be a bit overdone and unneeded.
My first instinct is that it's an issue with Entity Framework. It doesn't generally play nicely with people touching the database. If you looked through your log, you'd probably see Entity Framework issues.
Take a look at this Azure Doc: http://azure.microsoft.com/en-us/documentation/articles/mobile-services-dotnet-backend-how-to-use-code-first-migrations/
It discusses how to enable code first migrations - I won't elaborate here because there are a couple of steps.
Essentially, the problem is that Entity Framework takes a number of dependencies on the database schema, and when those dependencies change it just falls over on itself. Let me know if that doesn't help you.
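To make the linked steps a little more concrete, the pattern the article describes boils down to something like this at startup (MobileServiceContext and Configuration are the usual scaffolded names; treat this as a sketch rather than the exact code from the article):
using System.Data.Entity;
using System.Data.Entity.Migrations;

public static class WebApiConfig
{
    public static void Register()
    {
        // Stop the mobile service from validating/recreating the database on its own...
        Database.SetInitializer<MobileServiceContext>(null);

        // ...and bring the schema up to date with Code First migrations instead.
        var migrator = new DbMigrator(new Configuration());
        migrator.Update();
    }
}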
This might be too broad, but it's a problem I'm having a bear of a time dealing with. We have an application that we distribute to our end users. It's running on top of a Derby back end. We can push out code changes fairly easily: it'll go out to our server, see there's a new version, download, overwrite the old code, and reboot.
But, as we change our code, we also alter the schema of the Derby database. We don't have great methods to update this. Currently we can push SQL updates via FTP. When the program is connected to the internet, it looks for new SQL files, downloads them and runs them.
Unfortunately, a lot of our clients have limited Internet access, so they get these updates intermittently. Sometimes, because the changes are big enough, their local DB schema gets out of sync with what we want. Or they get the code changes via CD but not the SQL changes (someone mails them the CD).
What I've been trying to do is create a SOAP service that can serve up XML representations of the schema. It's been a huge PITA to develop so far.
What are some methods people are currently using to maintain databases like this? I feel like I'm not the first to do this, so there might be better ways than what I'm doing.
Based on some comments here, here's an update:
Basically, I think we screwed ourselves early on by not adhering to a strict versioning of the DB, so I don't know what state everyone's DB is in. A lot of people got custom installs built (groan at will). I need a tool that can tell the differences between their DB and an "official" copy.
I have a tool built, it kind of works, but there's so…many…things to keep track of.
Can you distribute the DB changes as part of the code changes? Then, when the app restarts, it checks if it needs to run any updates on the DB.
Obviously, you'll need to version the DB schema to avoid applying the same update more than once.
I know some applications that do this (mostly in Ruby, but also in Java).
If you already have an update mechanism in place in your application that can download a program to alter the installed source code, why not package and run the schema changes as a part of that upgrade process? I would just run the updates as a part of the Java application then.
My team at work handles these changes by using the MyBatis Migration tool, which represents each schema change as a single migration script which contains the "make change" and "rollback" steps. A changelog table is stored in the database which lists which updates have been applied to that database, which makes it easy for the migrate command to determine which updates it needs to apply when run. This specific tool is probably only really useful when you control the database and have the ability to run shell commands and scripts to alter the database, but you can use the same concepts in your approach - package each schema change as an atomic unit and run them from within your program to bring the schema up to the current version, which you can track in the db itself.
You'll need a table containing the version of the database that the user is running, and then you'll need code to upgrade from version n to version n+1. Assuming you have a database user that has access to do schema changes, you can apply schema changes the same way you're now applying code changes.
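Since this is a Java application, a stripped-down sketch of that idea (and of the changelog concept in the previous answer) might look like the following. The table name, version numbers and the DDL in each step are purely illustrative, and a one-row schema_version table is assumed to already exist:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class SchemaUpgrader {

    private static final int CURRENT_VERSION = 3;   // the version this build of the code expects

    // Called at application startup, right after the code update has been applied.
    public static void upgrade(Connection conn) throws Exception {
        int installed = readInstalledVersion(conn);
        for (int v = installed; v < CURRENT_VERSION; v++) {
            applyStep(conn, v + 1);                  // run the upgrade from v to v+1 exactly once
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("UPDATE schema_version SET version = " + (v + 1));
            }
        }
    }

    private static int readInstalledVersion(Connection conn) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT version FROM schema_version")) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }

    private static void applyStep(Connection conn, int to) throws Exception {
        // In the real application each step would be a bundled SQL script;
        // a hard-coded switch keeps the sketch self-contained.
        try (Statement st = conn.createStatement()) {
            switch (to) {
                case 2: st.executeUpdate("ALTER TABLE customer ADD COLUMN email VARCHAR(255)"); break;
                case 3: st.executeUpdate("CREATE INDEX idx_customer_email ON customer (email)"); break;
                default: break;
            }
        }
    }
}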
In my code I am trying to check if my entity framework Code First model and Sql Azure database are in sync by using the "mycontext.Database.CompatibleWithModel(true)". However when there is an incompatibility this line falls over with the following exception.
"The model backing the 'MyContext' context has changed since the database was created. Either manually delete/update the database, or call Database.SetInitializer with an IDatabaseInitializer instance. For example, the DropCreateDatabaseIfModelChanges strategy will automatically delete and recreate the database, and optionally seed it with new data."
This seems to defeat the purpose of the check as the very check itself is falling over as a result of the incompatibility.
For various reasons I don't want to use the Database.SetInitializer approach.
Any suggestions?
Is this a particular Sql Azure problem?
Thanks
Martin
Please check out the ScottGu blog below:
http://weblogs.asp.net/scottgu/archive/2010/08/03/using-ef-code-first-with-an-existing-database.aspx
Here is what is going on and what to do about it:
When a model is first created, we run a DatabaseInitializer to do things like create the database if it's not there or add seed data. The default DatabaseInitializer tries to compare the database schema needed to use the model with a hash of the schema stored in an EdmMetadata table that is created with a database (when Code First is the one creating the database). Existing databases won’t have the EdmMetadata table and so won’t have the hash…and the implementation today will throw if that table is missing. We'll work on changing this behavior before we ship the final version since it is the default. Until then, existing databases do not generally need any database initializer so it can be turned off for your context type by calling:
Database.SetInitializer<Production>(null);
Using the above code you are not recreating the database but using the existing one, so I don't think using Database.SetInitializer is a concern unless you have some serious reservations about using it.
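In other words, something along these lines should let the compatibility check run without the initializer getting in the way (MyContext stands in for your context type; whether you pass true or false to CompatibleWithModel depends on how you want a missing metadata table treated):
using System.Data.Entity;

// Turn the default initializer off so first use of the context doesn't
// try to validate/recreate the database on its own.
Database.SetInitializer<MyContext>(null);

using (var context = new MyContext())
{
    // With false, a missing metadata/migration-history table is treated
    // as compatible instead of throwing.
    bool inSync = context.Database.CompatibleWithModel(false);
}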
More info: Entity Framework Code Only error: the model backing the context has changed since the database was created
In this question, I was facing an issue where I was writing an update for a deployed application to bring the database up to date with the newer version we are deploying. Basic outline as follows:
Began with currently deployed version of application
Added new functionality that used existing database
Added new database tables and relationships
Added new functionality that depended on the new database structure
Testing complete, ready for deployment
The issue here is that the currently deployed application has been in use for a few months and has a lot of data that would need to be preserved, so simply replacing the old with the new was not viable (at least not for the database, but of course it works for the code). So I used the following steps to write a script in SQL for the updated version of the application to run the first time it starts up to make the necessary changes to the database without touching existing data (aside from populating the new tables):
Use VS2010's "Generate database from model" functionality to create a .sql (the model was originally created using the "Generate model from database" functionality)
Remove all parts of the .sql that act on the existing tables, except for those that add FKs between new and old tables
Use the resulting script to build the new database
Sounds pretty clean and done, right? Wrong. The mapping from the model to the database was all wrong for the new tables. Long story short, the database that generated the model had tables named in the plural (and the mapping was correct and the application worked), and the database generated from the model also created tables named in the plural (identical names to the tables in the database the model came from), yet the model did not map to them. The solution ended up being to change the script to name the new tables in the singular, and then everything worked flawlessly.
What happened here? The code remained untouched, no changes were made to the model, and the old tables continued to work fine the entire time, yet somewhere in the process of
Generate script
Delete "new" tables and constraints (those that don't yet exist in the deployed version)
Run script to re-add the tables
the mapping ended up pointing to singularly named tables (User instead of Users, Address instead of Addresses, etc.).
Can anyone explain to me how/why this would happen this way?
You might want to look at some of the tools that Redgate supplies: good tools for comparing two DB structures and generating a script to update one to match the other.
http://www.red-gate.com/