Entity mapping disappears when solution is uploaded, why? (CRM 2013)

I have a custom field in my test environment which is mapped from the quotedetail to the salesorderdetail, and then again from the salesorderdetail to the invoicedetail. When I import this solution into the production environment, though, only the mapping between the quotedetail and the salesorderdetail exists; the mapping to the invoicedetail is gone.
I used Jason Lattimer's attribute-mapping tool, which is quite helpful, and with it I can confirm that the missing mapping really doesn't exist there.
Why is that, and how can I create this mapping? The field is crucial to production logic.
Thanks in advance,
Georgi Borisov!
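One way to recreate a lost mapping programmatically: attribute mappings are stored as attributemap records tied to an entitymap record, both of which can be created through the SDK. A minimal sketch, assuming an authenticated IOrganizationService from the CRM 2013 SDK and a placeholder field name new_myfield:

```csharp
// Sketch of recreating a missing attribute mapping via the CRM 2013 SDK.
// "new_myfield" is a placeholder; service is an authenticated IOrganizationService.
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class MappingFix
{
    public static void CreateMapping(IOrganizationService service)
    {
        // Find the entity map from salesorderdetail to invoicedetail
        // (assumed to exist, since it ships with the system entities).
        var query = new QueryExpression("entitymap")
        {
            ColumnSet = new ColumnSet("entitymapid")
        };
        query.Criteria.AddCondition("sourceentityname", ConditionOperator.Equal, "salesorderdetail");
        query.Criteria.AddCondition("targetentityname", ConditionOperator.Equal, "invoicedetail");
        Entity map = service.RetrieveMultiple(query).Entities[0];

        // Add the attribute mapping for the custom field on both sides.
        var attributeMap = new Entity("attributemap");
        attributeMap["entitymapid"] = new EntityReference("entitymap", map.Id);
        attributeMap["sourceattributename"] = "new_myfield";
        attributeMap["targetattributename"] = "new_myfield";
        service.Create(attributeMap);
    }
}
```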

Related

How to prevent errors when removing tables in the database used in Azure Mobile Services?

When I remove tables from my Azure database (after removing the corresponding entities, of course), I just run DROP TABLE TABLENAME. This has a bad side effect: when I then run the mobile service and add a new record to one of the remaining tables through my TableControllers, I get an Error 500. Apparently I did something wrong. It can be "solved" by creating a completely new database and pointing the mobile service at it; the Seed method then ensures that exactly the right tables exist, and everything works fine.
What is the best way to remove tables from a database used by Azure Mobile Services without causing errors? Creating a completely new database seems overdone and unnecessary.
My first instinct is that it's an Entity Framework issue. EF generally doesn't play nicely with people touching the database behind its back; if you look through your logs, you'll probably see Entity Framework errors.
Take a look at this Azure Doc: http://azure.microsoft.com/en-us/documentation/articles/mobile-services-dotnet-backend-how-to-use-code-first-migrations/
It discusses how to enable code first migrations - I won't elaborate here because there are a couple of steps.
Essentially, the problem is that Entity Framework records the database schema it expects, and when the actual schema diverges from the model it just falls over on itself. Let me know if that doesn't help you.
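A minimal sketch of what the linked article sets up, assuming EF Code First Migrations; MyMobileServiceContext and TodoItem are placeholder names:

```csharp
// Sketch of enabling Code First Migrations so EF upgrades the schema in place
// instead of throwing when the model and the database drift apart.
using System.Data.Entity;
using System.Data.Entity.Migrations;

public class TodoItem
{
    public string Id { get; set; }
    public string Text { get; set; }
}

public class MyMobileServiceContext : DbContext
{
    public DbSet<TodoItem> TodoItems { get; set; }
}

// Generated by running Enable-Migrations in the Package Manager Console.
public class Configuration : DbMigrationsConfiguration<MyMobileServiceContext>
{
    public Configuration()
    {
        // Add an explicit migration with Add-Migration for each schema change.
        AutomaticMigrationsEnabled = false;
    }
}

public static class WebApiConfig
{
    public static void Register()
    {
        // Replace the default database initializer with one that applies
        // pending migrations on startup.
        Database.SetInitializer(
            new MigrateDatabaseToLatestVersion<MyMobileServiceContext, Configuration>());
    }
}
```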

Multiple databases with NHibernate 3.x

I found a couple of articles about how to use NHibernate with multiple databases, for example this one:
http://codebetter.com/karlseguin/2009/03/30/using-nhibernate-with-multiple-databases/
But all of these articles are very old; maybe there is some newer approach in NH 3.x? I looked in the documentation but did not find anything, though maybe I missed something.
Does anybody know a better (native NH 3.x) way to use NHibernate with multiple databases than the one described in that article?
Thanks,
Alexander.
AFAIK, there is nothing new in NH 3. But there are still more options for using several databases than the blog post you linked describes (a sketch of the first two follows below).
You can open your own connection and pass it to NH when opening a session.
You can open a session and switch to another database on the same server (e.g. by executing a USE database statement on SQL Server).
You can provide a schema (database) name on each table you map in the mapping file. It is not useful to have it hard coded, but you can replace it after loading the mapping files, or use mapping by code.
The article you linked is still the way to go: each SessionFactory is responsible for a single connection (connection string) and schema.
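A minimal sketch of the first two options, assuming NHibernate 3.x and SQL Server; the configuration file and connection strings are placeholders:

```csharp
// Sketch of two ways to reach a second database with NHibernate 3.x.
using System.Data.SqlClient;
using NHibernate;
using NHibernate.Cfg;

public static class MultiDbExample
{
    public static void Run()
    {
        ISessionFactory factory = new Configuration().Configure().BuildSessionFactory();

        // Option 1: open your own connection and hand it to NHibernate.
        using (var conn = new SqlConnection("Server=.;Database=OtherDb;Integrated Security=true"))
        {
            conn.Open();
            using (ISession session = factory.OpenSession(conn))
            {
                // Queries on this session run against OtherDb.
            }
        }

        // Option 2: open a normal session and switch databases on the same server.
        using (ISession session = factory.OpenSession())
        {
            session.CreateSQLQuery("USE OtherDb").ExecuteUpdate();
            // Subsequent queries on this session now hit OtherDb.
        }
    }
}
```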
There is one special case where you split the database into multiple databases with the same schema to load-balance. This is called sharding, and there is a contrib project, NHibernate.Shards, to deal with it.

Expected database model is inconsistent in real-time

In this question, I was facing an issue where I was writing an update for a deployed application to bring the database up to date with the newer version we are deploying. Basic outline as follows:
Began with currently deployed version of application
Added new functionality that used existing database
Added new database tables and relationships
Added new functionality that depended on the new database structure
Testing complete, ready for deployment
The issue here is that the currently deployed application has been in use for a few months and has a lot of data that needs to be preserved, so simply replacing the old with the new was not viable (at least not for the database; it works for the code, of course). So I used the following steps to produce a SQL script, which the updated version of the application runs the first time it starts up, to make the necessary changes to the database without touching existing data (aside from populating the new tables):
Use VS2010's "Generate database from model" functionality to create a .sql (the model was originally created using the "Generate model from database" functionality)
Remove all parts of the .sql that act on the existing tables, except for those that add FKs between new and old tables
Use the resulting script to build the new database
Sounds pretty clean and done, right? Wrong. The mapping from the model to the database was all wrong for the new tables. Long story short: the database that originally generated the model had tables named in the plural (and that mapping was correct and the application worked), and the script generated from the model also created the new tables with plural names, yet the model did not map to them. The solution ended up being to change the script to name the new tables in the singular, and then everything worked flawlessly.
What happened here? The code remained untouched, no changes were made to the model, and the old tables continued to work fine the entire time, yet somewhere in the process of
Generate script
Delete "new" tables and constraints (those that don't yet exist in the deployed version)
Run script to re-add the tables
the mapping ended up pointing at singularly named tables (User instead of Users, Address instead of Addresses, etc.).
Can anyone explain to me how/why this would happen this way?
You might want to look at some of the tools Red Gate supplies - they're good for comparing two database structures and generating a script to bring one up to date.
http://www.red-gate.com/
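As for the pluralization itself: EF's table naming is convention-driven. A hedged illustration using Code First (EF 4.1+, a different workflow from the model-first project above; Blog and BloggingContext are hypothetical names) shows the convention that pluralizes or singularizes table names:

```csharp
// Illustration only: in EF Code First, table-name pluralization is an explicit,
// removable convention. Model-first (EDMX) applies similar rules during DDL
// generation, which is where the plural/singular mismatch above can creep in.
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // With this line, EF maps Blog to a table named "Blog" (singular);
        // without it, the default convention pluralizes the name to "Blogs".
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
    }
}
```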

Schema Compare - How to take referenced projects into account?

I've created a database project and several database projects that reference that project. I would like to use VS2008 Schema Compare to compare the schema of one of the databases to my development database.
So far, so good. But when I check the result, it says it will skip all references!?
Question: How can I include the referenced database into my compare?
PS: Comparing the "base" database first and then the other database won't work either, because it will result in drops.
Each project must be compared individually. More info about pros and cons here. Hopefully it will be better in the future...
According to the MSDN page on "Compare and Synchronize Database Schemas", the meaning of Skip Referenced is:
The object exists in a referenced database and does not need to be dropped or created
That basically means those objects already exist in the database and won't be created or deleted; they're unchanged.

Keeping a database schema up to date

I'm writing an application that is using a database (currently MySQL 4) to store data.
It is likely that I will later make changes to it in the form of updates that add additional data. Updating the application is simple; it essentially comes down to overwriting the program files with the new ones. However, how do I go about updating the database schema?
The database is remote and so my application might exist in several places, so simply dumping the ALTER and CREATE statements in an installer would result in the changes being made multiple times, and I have been asked explicitly for an automatic solution that allows for the application copies to be updated over a transition period, and for schema updates to be automatic.
I considered examining the schema at start-up to look for missing tables and columns and adding them as needed, but that does not seem like a clean solution. I also considered putting some kind of “schema version” number on the database, but I can’t see any way to do this short of a single-row table with an int “Version” column, which doesn’t seem a good way either.
I can highly recommend Liquibase. It really does work - I've used it and was very impressed.
Essentially, it keeps its own log of statements run on a database and runs them only if not already run/needed. It is XML driven and allows you to use optional pre- and post-execution statements and conditions. You check your XML files into your source control and invoke it from your build tool. It's even suitable for driving production releases.
It's magic.
Rather than rolling your own system for versioning your database, it's probably worth looking into an existing framework that will manage it for you.
I use Liquibase and have integrated it into my build using the Maven plugin. Worth checking out!
Just as you proposed, add a table where you store the current version of the database schema. Then you only have to apply the changes between your last schema update and the new release, and set the new version number accordingly. I've done this to update our production database about 300 times; it just works.
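A minimal sketch of that approach, assuming MySQL with the Connector/NET driver (MySql.Data); the schema_version table name and the migration statements are hypothetical placeholders:

```csharp
// Sketch of a version-table schema upgrader, run once at application start-up.
using System;
using System.Collections.Generic;
using MySql.Data.MySqlClient;

public static class SchemaUpgrader
{
    // Key: the schema version a statement upgrades the database TO.
    private static readonly SortedDictionary<int, string> Migrations =
        new SortedDictionary<int, string>
        {
            { 1, "ALTER TABLE customer ADD COLUMN email VARCHAR(255)" },
            { 2, "CREATE TABLE audit_log (id INT NOT NULL, message TEXT)" },
        };

    public static void Run(string connectionString)
    {
        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            EnsureVersionTable(conn);
            int current = CurrentVersion(conn);

            foreach (var step in Migrations)
            {
                if (step.Key <= current) continue; // change already applied

                using (var cmd = new MySqlCommand(step.Value, conn))
                    cmd.ExecuteNonQuery();
                using (var bump = new MySqlCommand(
                    "UPDATE schema_version SET version = " + step.Key, conn))
                    bump.ExecuteNonQuery();
            }
        }
    }

    private static void EnsureVersionTable(MySqlConnection conn)
    {
        using (var cmd = new MySqlCommand(
            "CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL)", conn))
            cmd.ExecuteNonQuery();

        // Seed a single row on a brand-new database.
        using (var count = new MySqlCommand("SELECT COUNT(*) FROM schema_version", conn))
            if (Convert.ToInt64(count.ExecuteScalar()) == 0)
                using (var seed = new MySqlCommand(
                    "INSERT INTO schema_version VALUES (0)", conn))
                    seed.ExecuteNonQuery();
    }

    private static int CurrentVersion(MySqlConnection conn)
    {
        using (var cmd = new MySqlCommand("SELECT version FROM schema_version", conn))
            return Convert.ToInt32(cmd.ExecuteScalar());
    }
}
```

Since several copies of the application may start up at the same time, a real implementation should also serialize the upgrade, for example with MySQL's GET_LOCK().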