I'm working on an application that has a data publication mechanism that I designed and implemented. It allows one master instance of the app to feed data to many subscriber instances. This is implemented by loading the master's data into a set of temporary import tables on the subscriber that have the exact same schema. The merge process uses these import tables to do its work.
This whole publication thing is working fine. It is performed outside of NHibernate, using ADO.NET for batch loading and sets of stored procedures for diffing and merging (autogenerated by a custom tool). Also, we only have an HTTP link between master and subscriber to download the data; we can't connect directly to the master's SQL Server.
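For illustration, the load step on the subscriber is essentially this (a simplified sketch; the table name, connection string, and the already-downloaded DataTable are placeholders):

    // Bulk-load rows (previously downloaded over the HTTP link) into one
    // of the temporary import tables. "Import_Customer" is a placeholder.
    using (var bulk = new SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "Import_Customer";
        bulk.BatchSize = 1000;
        bulk.WriteToServer(downloadedRows); // downloadedRows is a DataTable
    }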
The problem I face is visually showing the diff to the user before they actually merge the new data. In the application, I'd like NHibernate to load our business objects directly from these temporary import tables.
Can we do this without having to maintain two sets of almost identical mapping files?
In our last version, we built up business objects using custom code that loaded from these import tables. It only loaded simple properties and didn't handle relations. This sucks big time from a coding/maintenance point of view.
Are you trying to do something like this where you need to dynamically change the table names?
You may also want to look at Fluent NHibernate for a solution.
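For example, with Fluent NHibernate you could keep a single set of ClassMap<T> definitions and apply a table-name convention only to the session factory that reads the import tables. A rough, untested sketch (the Import_ prefix and CustomerMap are assumptions about your naming):

    // Redirects every mapped class to its import table. Assumes the import
    // tables are named "Import_<EntityName>".
    public class ImportTableConvention : IClassConvention
    {
        public void Apply(IClassInstance instance)
        {
            instance.Table("Import_" + instance.EntityType.Name);
        }
    }

    // A second session factory: same mappings, plus the convention.
    ISessionFactory importFactory = Fluently.Configure()
        .Database(MsSqlConfiguration.MsSql2008.ConnectionString(connectionString))
        .Mappings(m => m.FluentMappings
            .AddFromAssemblyOf<CustomerMap>()           // the existing maps
            .Conventions.Add<ImportTableConvention>())  // import factory only
        .BuildSessionFactory();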
Another possibility would be to have the temporary tables in a separate database and just change the connection string.
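Sketched out, that option needs nothing more than a second factory built from the same mapping assembly ("StagingDb" is a placeholder connection-string name):

    // Same mappings, different connection string pointing at the database
    // that holds the import tables.
    ISessionFactory stagingFactory = Fluently.Configure()
        .Database(MsSqlConfiguration.MsSql2008
            .ConnectionString(c => c.FromConnectionStringWithKey("StagingDb")))
        .Mappings(m => m.FluentMappings.AddFromAssemblyOf<CustomerMap>())
        .BuildSessionFactory();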
I'm building two applications that need to share some similar data, but each will also have unique data. Should I build a separate database for each app, or let both apps access the same database?
I need the shared data to update automatically in one app if it is changed in the other. I'm also using PostgreSQL with React and Express, with the intent of having both apps be progressive web apps and eventually React Native apps.
In general, I would think of this as:
Databases are the unit of backup and recovery.
Databases can contain multiple schemas ("schemata"?), which are the unit for managing users and objects.
Based on your question:
I need the shared data to update automatically on one app if it is changed in another.
It sounds like you want one database and separate schemas for each application.
It sounds as if you will need to join data from both applications in a single SQL query. In that case, use one database and multiple schemas to separate the data.
You could have one common schema that contains the data shared between all applications, and then one schema per application.
Both approaches have pros and cons, but I think keeping them separate is better. A pro for one can be a con for the other.
Pros -
A separate DB makes maintenance better, faster, and easier.
Performance-wise, a separate DB is better.
Code migrations will be easier.
Cons -
Automatic sync-up can be tricky if the tables etc. are different.
If one process needs to use tables from both DBs, it will be an issue.
In my code I am trying to check whether my Entity Framework Code First model and SQL Azure database are in sync by using "mycontext.Database.CompatibleWithModel(true)". However, when there is an incompatibility, this line falls over with the following exception:
"The model backing the 'MyContext' context has changed since the database was created. Either manually delete/update the database, or call Database.SetInitializer with an IDatabaseInitializer instance. For example, the DropCreateDatabaseIfModelChanges strategy will automatically delete and recreate the database, and optionally seed it with new data."
This seems to defeat the purpose of the check, as the very check itself is falling over as a result of the incompatibility.
For various reasons I don't want to use the Database.SetInitializer approach.
Any suggestions?
Is this a SQL Azure-specific problem?
Thanks
Martin
Please check out the ScottGu blog below:
http://weblogs.asp.net/scottgu/archive/2010/08/03/using-ef-code-first-with-an-existing-database.aspx
Here is what is going on and what to do about it:
When a model is first created, we run a DatabaseInitializer to do things like create the database if it's not there or add seed data. The default DatabaseInitializer tries to compare the database schema needed to use the model with a hash of the schema stored in an EdmMetadata table that is created with the database (when Code First is the one creating the database). Existing databases won't have the EdmMetadata table and so won't have the hash… and the implementation today will throw if that table is missing. We'll work on changing this behavior before we ship the final version, since it is the default. Until then, existing databases generally don't need any database initializer, so it can be turned off for your context type by calling:
Database.SetInitializer<Production>(null);
Using the code above you are not recreating the database but using the existing one, so I don't think using Database.SetInitializer is a concern unless you have serious reservations about it.
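Putting it together, it would look something like this (a sketch only; MyContext stands in for the context type from the question):

    // Turn off database initialization for this context type so the
    // compatibility check itself no longer triggers the initializer.
    Database.SetInitializer<MyContext>(null);

    using (var context = new MyContext())
    {
        // Now the check returns false on a mismatch instead of throwing
        // the "model backing the context has changed" exception.
        bool inSync = context.Database.CompatibleWithModel(true);
    }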
More info: Entity Framework Code Only error: the model backing the context has changed since the database was created
I have a SQL view to integrate with my application. I have been using Entity Framework until now. But the problem is that when I add a view to Entity Framework, it starts treating my view as a table.
What I really want to know is: am I missing something? Also, if I use NHibernate, will this problem be resolved? Will it treat the view as a view only?
This view is a very complex query which has multiple joins and aggregation. That is why I am using a view.
But the problem is that when I add a view to Entity Framework it starts treating my view as a table.
No it doesn't. If you add the view to your model through the wizard (the EDMX designer), it will internally handle the view as a defining query, which makes a read-only entity. At the entity level (the conceptual model) you don't see a difference, because it is just another entity/class, but if you try to make changes to an instance of that class and save them, you will get an exception (unless you map stored procedures or custom SQL commands to the insert, update, and delete operations for that entity).
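As for the NHibernate part of the question: NHibernate doesn't treat views specially either. You map a view exactly as if it were a table and, if you want it strictly read-only, mark the class immutable. A minimal sketch with Fluent NHibernate (CustomerSummary and vw_CustomerSummary are made-up names):

    // The view is mapped like a table; ReadOnly() marks the entity
    // immutable so NHibernate never issues INSERT/UPDATE/DELETE against it.
    public class CustomerSummaryMap : ClassMap<CustomerSummary>
    {
        public CustomerSummaryMap()
        {
            Table("vw_CustomerSummary");
            ReadOnly();
            Id(x => x.CustomerId);
            Map(x => x.TotalOrders);
        }
    }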
Edit:
Database views, as well as other database-specific features like stored procedures or SQL functions, are only for the database-first scenario (when you use Update Model from Database in the designer).
Using Generate Database from Model is for the model-first scenario, where you tell VS: "Here is my model and I want some database to store it." Only information from the conceptual model is used (the original mapping and database description are replaced with new ones every time you run this command, so even the mapping to the original database can be broken). It cannot create database-specific features for you, because it doesn't know that a class should be mapped to a view, and moreover it doesn't know how the view should be created (the query from the original view is unknown).
You can force VS to create the view for you, but it is a lot of work in T4 templates, where you would have to somehow provide the SQL creation script for the view.
In this question, I was facing an issue where I was writing an update for a deployed application to bring the database up to date with the newer version we are deploying. Basic outline as follows:
Began with currently deployed version of application
Added new functionality that used existing database
Added new database tables and relationships
Added new functionality that depended on the new database structure
Testing complete, ready for deployment
The issue here is that the currently deployed application has been in use for a few months and has a lot of data that needs to be preserved, so simply replacing the old with the new was not viable (at least not for the database; it works for the code, of course). So I used the following steps to write a SQL script for the updated version of the application to run the first time it starts up, making the necessary changes to the database without touching existing data (aside from populating the new tables):
Use VS2010's "Generate database from model" functionality to create a .sql (the model was originally created using the "Generate model from database" functionality)
Remove all parts of the .sql that act on the existing tables, except for those that add FKs between new and old tables
Use the resulting script to build the new database
Sounds pretty clean and done, right? Wrong. The mapping from the model to the database was all wrong for the new tables. Long story short: the database that generated the model had tables named in the plural (and the mapping was correct and the application worked), and the database generated by the model also created tables named in the plural (names identical to the tables in the database that generated the model), yet the model did not map to them. The solution ended up being to change the script to name the tables in the singular, and then everything worked flawlessly.
What happened here? The code remained untouched, no changes were made to the model, and the old tables continued to work fine the entire time, yet somewhere in the process of
Generate script
Delete "new" tables and constraints (those that don't yet exist in the deployed version)
Run script to re-add the tables
the mapping decided to point at singularly named tables (User instead of Users, Address instead of Addresses, etc.).
Can anyone explain to me how/why this would happen this way?
You might want to look at some of the tools that Red Gate supplies; they are good for comparing two database structures and generating a script to bring one up to date with the other.
http://www.red-gate.com/
So, I need to synchronize two data stores, one of which is a SQL database, and I felt it was natural to use the built-in provider for that side. But unfortunately I started running into trouble, because the SqlSyncProvider doesn't use the ChangeDataRetriever and NotifyingChangeApplier but instead communicates through some DbSyncContext object. Therefore, I had to derive from SqlSyncProvider and override mainly the GetChangeBatch and ProcessChangeBatch methods so they become compatible with the rest of the Sync Framework.
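Roughly, the shape of what I did (a simplified skeleton, not the working code; the wrapping helper is a placeholder for the real translation, and ProcessChangeBatch is overridden similarly):

    public class BridgingSqlSyncProvider : SqlSyncProvider
    {
        public override ChangeBatch GetChangeBatch(
            uint batchSize,
            SyncKnowledge destinationKnowledge,
            out object changeDataRetriever)
        {
            // Let SqlSyncProvider enumerate the changes, then swap the
            // DbSyncContext it hands back for an IChangeDataRetriever the
            // custom provider on the other side can consume.
            ChangeBatch batch = base.GetChangeBatch(
                batchSize, destinationKnowledge, out changeDataRetriever);
            changeDataRetriever = WrapRetriever(changeDataRetriever);
            return batch;
        }

        // Placeholder: translate the DbSyncContext into an IChangeDataRetriever.
        private object WrapRetriever(object dbSyncContext)
        {
            throw new NotImplementedException();
        }
    }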
But the trouble is that I believe I'm missing something in that transformation. The result is that when I create a row in the SQL database, synchronize to the other store, and then delete (or update) the row in that other store, the change doesn't appear in the SQL database after syncing. The problem is probably caused by the bulkdelete stored procedure, which filters the delete table and separates rows that were created locally from rows created elsewhere.
Does anybody know what could cause this problem? I would really like to see some samples or documentation regarding synchronization between a SQL provider and a custom provider.