Table not removed in EDMX after deleting in SQL

I'm working with MVC 4, VS 2012, and EF 5.
I created an EDMX from the database, and the POCO (.tt) classes were generated automatically.
I followed Entity Framework 5 and Visual Studio 2012 POCO Classes in Different Project to move the POCO classes to a different project.
Now when I change a data type in SQL Server (int to string, or allowing nulls) or delete a table, and then update my model (EDMX), the changes are not reflected.
Am I doing something wrong, or missing something?
Thanks

I don't think you are missing anything.
I'm unsure of the logic the designer uses when it decides what to update. From experience, it seems to mostly add new items; I haven't seen much updating.
I tend to just delete the table from the model and add it again, as the least painful route. This will remove any custom mapping, so it's not ideal if your entities differ significantly from your tables.

Related

How should I model a database with Entity Framework?

I'm just starting to use the Entity Framework designer, and I'd like to ask how I should create my entity files. I'll have about 10 tables, and all of them will be related to at least one other table. Should I create just one file and put all my models there, or create a separate file for each model?
I don't know if this is even a proper question, but I couldn't find my answer on Google. I didn't really know how to phrase it, actually... :D
So if you have any tips on how I should model my database, that would be awesome. Also, any more information on when I should use separate entity files would be useful too.
I have used the MySQL designer in the past, and there, as far as I can remember, you just move the model into the designer and you can create relations. So I'm kind of keen on doing that (all models in one entity file), but I wanted to check with you guys first.
Just try this plugin for VS to generate your classes from an existing DB: http://visualstudiogallery.msdn.microsoft.com/72a60b14-1581-4b9b-89f2-846072eff19d

Solution For Updating LINQ to SQL Files After Database Schema Change

I recently started using LINQ to SQL in my database layer for a C# Windows Forms project. Until now, I have been very impressed with how quickly I was able to implement the data access layer. The problem I am facing is similar to this post from 2008:
Best way to update LINQ to SQL classes after database schema change
In short, I am struggling to find an efficient way to update the LINQ to SQL files after making minor changes to the database, such as constraints, foreign keys, new columns, etc.
Thus far, I have simply been deleting tables in the LINQ to SQL designer and dragging them back onto it. However, I now need to rename many of the associations in the designer, and each time I re-create the LINQ to SQL files I lose the changes I made manually. Can someone tell me whether there are any newer solutions and/or methods for solving this problem? The post I have included, as well as many other dated sources, mentions that SQLMetal and Huagati are good tools. Additionally, I have read that you can create your LINQ to SQL files manually rather than auto-generating them with the designer (this is what I had to do when using Hibernate with Java).
I know that manually creating the domain classes and mapping files would be time-consuming, and I am not familiar with SQLMetal or Huagati. Can anyone recommend the most elegant or preferred way to deal with this issue? I know I could use Entity Framework, but I have inherited this project and am under a very tight deadline. I can refactor it to another framework once this phase is complete.
After much research and reading, I have determined that the best solution for updating my DBML after minor database changes is to edit the file manually. The procedure I use is:
1. Right-click the DBML file
2. Open it with the XML editor
3. Add or change the columns in the affected table
4. Add or change any associations
5. Save the DBML
6. Rebuild the project
This is not ideal, but once it has been done a few times it is pretty painless for the kinds of changes I occasionally need to make, such as changing data types, adding keys, etc.
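For illustration, here is roughly what such an edit looks like inside the .dbml XML; the Customer table and Email column are hypothetical, and the exact attributes depend on your schema:

```xml
<!-- Inside the <Database> element of the .dbml file -->
<Table Name="dbo.Customer" Member="Customers">
  <Type Name="Customer">
    <Column Name="Id" Type="System.Int32" DbType="Int NOT NULL IDENTITY"
            IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" />
    <!-- A newly added column: adjust Type, DbType, and CanBeNull
         to match the change you made in the database -->
    <Column Name="Email" Type="System.String" DbType="NVarChar(100)" CanBeNull="true" />
  </Type>
</Table>
```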
I don't touch the DBML or the LINQ to SQL generated files, because there is a risk that my changes would be overwritten; I put my own code only in partial classes. When the database schema changes, I remove the old table from the DBML designer and drag the new one onto it.

Entity Framework - Schema Upgrade, Multiple DBMS, and Code First

I'm looking into using Microsoft's Entity Framework in an upcoming project, which is a point release of an existing product. Our current product supports two DBMSes (Oracle and SQL Server); the schema for each is maintained in separate .sql script files.
Entity Framework (4.1) looks appealing because it allows various scenarios to be implemented automatically via code generation, reflection, etc. However, as far as I can tell, some of these benefits appear to be mutually exclusive.
For example, to support multiple DBMSes, I infer that I would need to use a model-first or code-first design, in which case EF would generate the schema for each according to the model (I have seen few posts and little documentation on this, so I may be wrong). This means that our existing schema would need to be either abandoned (model-first) or mapped (code-first). Additionally, updating the schema would require manual scripts, as EF does not appear to support schema upgrades (without wiping out data).
Are model-first and code-first the only viable means of supporting multiple DBMSes in EF? I realize that technically it would be impossible to guarantee that two arbitrary schemas are the same, so I suspect this is true.
Are there any potential pitfalls of code-first and mapping to multiple DBMSes? For example, Oracle does not have auto-increment columns; you have to use sequences. How is this mapped in the DbContext? Do I need to create separate mappings for each DBMS?
Does EF support any mechanism for upgrading an existing DBMS schema to one that matches the EF model (schema recreation =/= upgrade), or am I limited to doing this manually?
I did come up with one possible way to use database-first and support multiple DBMSes, but it is a maintenance nightmare: add another layer of abstraction over the two generated data models and create converter classes for each of the EF-generated models. That way each DBMS could potentially have its own model, and my code would handle the mapping. But in doing this, what am I really gaining from EF? Maybe query generation, but is that worth it?
Actually, both model-first and database-first have the same constraint. Both approaches use an EDMX file, which contains an SSDL part (a description of the store, i.e. the database layer) tied directly to a single database provider, so if you want two different database providers you must have two different SSDL parts and keep them in sync. You can use a single CSDL (the description of the conceptual layer, i.e. your model classes) and either one or two MSLs (the description of the mapping between SSDL and CSDL; a single file is possible only if tables and columns have exactly the same names in both SSDLs). As far as I know, an EDMX file can contain only one SSDL, one CSDL, and one MSL part, so I expect the designer has no support for this scenario; you would have to modify the second SSDL manually or use two EDMXs and model each change twice.
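For reference, a stripped-down sketch of where those three parts live inside an EDMX file:

```xml
<edmx:Edmx Version="2.0" xmlns:edmx="http://schemas.microsoft.com/ado/2008/10/edmx">
  <edmx:Runtime>
    <edmx:StorageModels>
      <!-- SSDL: provider-specific description of the store (tied to one provider) -->
    </edmx:StorageModels>
    <edmx:ConceptualModels>
      <!-- CSDL: the conceptual model your code works against -->
    </edmx:ConceptualModels>
    <edmx:Mappings>
      <!-- MSL: the mapping between the SSDL and the CSDL -->
    </edmx:Mappings>
  </edmx:Runtime>
</edmx:Edmx>
```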
The code-first approach can make this much simpler, but the question is how good the Oracle provider is when using code-first and database generation. The provider is responsible for correctly interpreting required features, such as sequences for auto-increment columns.
EF itself currently has no support for upgrading an existing DB. When using an EDMX, the database-generation process is controlled by either a T4 template or a Workflow, so it can be customized, and there is already a separate feature called Entity Designer Database Generation Power Pack which allows incremental building of the database with the model-first approach. The problem is that this feature uses the VS Database tools, and I think those tools work only with SQL Server. I have never liked these automated tools, so I still think a database upgrade should be controlled manually, with the help of some tool that produces a difference script between the current and the last deployed database versions. You should need a diff script only when deploying the new version to a production environment; in testing and development environments you can always recreate the whole database.
No abstraction should be needed when working with two EDMX models. The models must produce the same conceptual layer; in that case you need only a single set of POCO classes, mapped by convention (same class name as the entity, same properties with the same types and accessibility), and they will work with both models.
Edit:
Based on @Tridus's answer, I'll just add that you can create the databases first and use the fluent API from EF 4.1 to map them. Your databases must have exactly the same schema (table names, column names, etc.) and can't use any provider-specific features (I hope sequences will not be a problem, because they are just how Oracle implements auto-increment columns).
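A minimal sketch of that setup, assuming a hypothetical Customer table that exists with identical names in both databases; only the connection string (and thus the provider) differs:

```csharp
using System.Data.Entity; // EF 4.1

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    // The provider (SqlClient vs. Oracle) is picked by the connection
    // string this name points at, not by the mapping code.
    public ShopContext(string connectionStringName)
        : base(connectionStringName) { }

    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // A single set of fluent mappings serves both databases,
        // because the schemas are assumed identical.
        modelBuilder.Entity<Customer>().ToTable("CUSTOMERS");
        modelBuilder.Entity<Customer>().HasKey(c => c.Id);
        modelBuilder.Entity<Customer>()
            .Property(c => c.Name).HasMaxLength(100).IsRequired();
    }
}
```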
This is actually fairly doable with a database-first design, but there are some caveats you won't be able to get around easily, because the databases handle things differently.
Sequences are one (EF simply ignores them entirely). You can fake it in Oracle by putting a trigger on the table that populates the column on insert, but I found that if you later update the model, EF "forgets" that the column is an identity column and tries to stick a 0 in it again. I also found it unreliable in Oracle to retrieve the new ID when using a trigger. We wound up selecting from the sequence and setting the ID on the object before doing the insert, because that's how you usually do it in Oracle. You could also use a stored procedure that handles it.
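A rough sketch of that workaround, assuming a database-first generated ObjectContext; MyEntities, Orders, and the MY_SEQ sequence are hypothetical names:

```csharp
using System.Linq;

// MyEntities is the hypothetical generated ObjectContext.
using (var ctx = new MyEntities())
{
    // EF ignores Oracle sequences, so fetch the next value ourselves
    // and set the key before the insert.
    var nextId = ctx.ExecuteStoreQuery<decimal>(
        "SELECT MY_SEQ.NEXTVAL FROM DUAL").First();

    var order = new Order { Id = (long)nextId };
    ctx.Orders.AddObject(order);
    ctx.SaveChanges();
}
```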
Numbers aren't handled the same way. SQL Server uses number formats that map to Int32, Int64, etc. Oracle's number type is totally different: a full-range Int32 in SQL Server becomes a Number(10,0) in Oracle... which EF maps to an Int64, because it can hold values bigger than an Int32. I also found that Oracle's EF provider likes to use Decimal a lot even when it doesn't have to, but that's probably just a beta issue.
Stored procedures in Oracle require some values to be put in app.config/web.config in order to work in EF. I'm not sure whether that will just be clutter in SQL Server or will cause problems.
Finally, EF code-first is pretty immature and, according to the docs, doesn't support changing the database structure in this version. I'm not sure whether Oracle's provider supports it either (it might; I haven't tried).
Most of this is stuff you can get around, but you're going to need to do some work to hide the differences from the rest of your code, and it will probably take a wrapper layer to do it.
Edit - Regarding your #4: EF 4.1 can generate partial POCO classes. Instead of writing a wrapper around each of the generated models to hide the differences, you can create another partial-class code file that won't be regenerated when you update the model, and add properties/methods there that hide the differences. Your app code just has to use those instead, and they handle the issue (like the number issue I mentioned: you could completely hide it with another property that does the necessary casting for Oracle).
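A minimal sketch of that idea, with a hypothetical Invoice entity whose Total comes back from Oracle as a decimal:

```csharp
// In the generated file (overwritten on every model update):
//   public partial class Invoice { public decimal Total { get; set; } ... }

// In a separate, hand-written file that the generator never touches:
public partial class Invoice
{
    // Hides the Oracle NUMBER -> Decimal quirk behind an Int32 view;
    // app code uses this property instead of Total directly.
    public int TotalAsInt32
    {
        get { return (int)Total; }
        set { Total = value; }
    }
}
```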

NHibernate and code first

Do you use SchemaExport and SchemaUpdate in real applications? Do you first create the model and then generate the schema from it? Does that work? Or do you use them only for tests?
Usually I create the DB first (using a Visual Studio database project) and then the mappings and persistent classes, or EF entities using the designer. But now I want to try the code-first approach with Fluent NHibernate.
I have researched SchemaExport and SchemaUpdate and found some issues. For example, update doesn't delete DB objects, creates NOT NULL columns as nullable if the table already exists, doesn't generate a primary key on many-to-many tables, and so on. This means I have to recreate the DB very often. But what about the data? And how do I deploy changes to the production DB, and so on?
I want to know whether you really use code first and SchemaExport/SchemaUpdate in your applications. Maybe you can give me some advice...
I use SchemaUpdate in production. It is safe precisely because it never performs destructive operations such as deleting columns. However, it is not a comprehensive solution for updating your database: if you use it, you will still have to supplement it with scripts for things like deletions (as you mention), indexes, column type changes, table data, etc. But SchemaUpdate covers the 90% case for me.
The only downside I've discovered is that over time it occasionally seems to add duplicate foreign-key constraints to my tables.
One more thing: you should run SchemaUpdate manually from a build tool, not from your app itself. It is not safe to give your application the rights to modify your DB schema!
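A minimal sketch of such a standalone updater; BuildConfiguration() is a hypothetical helper that loads your NHibernate config and mappings:

```csharp
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

class SchemaUpdater
{
    static void Main()
    {
        Configuration cfg = BuildConfiguration();

        // First flag: echo the generated DDL to stdout;
        // second flag: actually execute it against the database.
        new SchemaUpdate(cfg).Execute(true, true);
    }

    // Hypothetical helper: load hibernate.cfg.xml (or your Fluent
    // NHibernate setup) so the mappings match the application's.
    static Configuration BuildConfiguration()
    {
        return new Configuration().Configure();
    }
}
```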
I use SchemaUpdate/SchemaExport for rapid evolution of my model, but they are not a replacement for a database migration tool. As you mention, data cannot be migrated sensibly in many cases; the tool simply does not have enough context. (For example, how could it automatically migrate a FullName column to FirstName/LastName?) I answered a similar question here, where I discuss DB migration tools in the context of NHibernate:
NHibernate, ORM : how is refactoring handled? existing data?
Yes, you can use these in real applications; I do.
Of course, almost all the work happens in that first pass. My practice has been to create a separate project that references the mappings in my main project's assembly and handles database creation and the initial data import, if any.
Once the project is in production, I usually unload that project from the solution, but I keep it around for reference, or in case I ever need to switch from create scripts to update scripts.
As for the way NHibernate creates the database, you have to be a little more explicit in your Fluent mappings than you otherwise might be. I like to specify null/not null, foreign-key constraint names, etc., to have maximum control over how the database gets created.
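For example, a sketch of that kind of explicit Fluent NHibernate mapping (the Customer/Order classes are hypothetical):

```csharp
using System;
using FluentNHibernate.Mapping;

public class Customer
{
    public virtual int Id { get; protected set; }
}

public class Order
{
    public virtual int Id { get; protected set; }
    public virtual DateTime PlacedOn { get; set; }
    public virtual Customer Customer { get; set; }
}

public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap() { Table("Customers"); Id(x => x.Id).GeneratedBy.Identity(); }
}

public class OrderMap : ClassMap<Order>
{
    public OrderMap()
    {
        Table("Orders");
        Id(x => x.Id).GeneratedBy.Identity();
        Map(x => x.PlacedOn).Not.Nullable();            // explicit NOT NULL
        References(x => x.Customer)
            .Not.Nullable()
            .ForeignKey("FK_Orders_Customer");          // explicit FK name
    }
}
```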
I don't think you'd ever want to use automapping in this scenario.
As with any generated code, whether it's POCO generation from a tool or database generation as in your question, it will probably get you 80% of the way there. From there, it is wise to tweak the remaining 20% by hand: add your indexes and any other performance tweaks to get it just right.

Entity framework model creation

When using the Entity Framework, there are basically two ways to create your model: you either create the model in SQL Server or in the Visual Studio EF designer. Both are outlined below.
Start with Database
You first create the model in your SQL Server DB, then point EF at it to create the .edmx file for you. With this approach you can use SQL Server Management Studio to create all of your models and relationships.
Start with Visual Studio EF Designer
Here you create the model first in Visual Studio and generate your database from it. With this approach it seems you don't have to be so concerned with tables and relationships.
Here is what I do, and why:
I start by creating my model using SQL Server Management Studio, because I think it's easier to create and modify tables with that tool and I know exactly what is being created. I then create my EF model by pointing it at my existing database. After that, I create a Visual Studio database project so that my database is scripted into files that I put under version control. When I need to make changes, I change the database and then update both my .edmx file and my database project.
I was wondering what the pros and cons of these approaches are, and what the criteria should be for deciding which to use. Am I doing it wrong? Should I be creating my model first in Visual Studio?
I don't think there's a "right" or "wrong" way to do this; a lot depends on how you deploy your code, where it goes, etc. There is also a third way, code-first, which Scott Guthrie blogged about recently:
http://weblogs.asp.net/scottgu/archive/2010/07/16/code-first-development-with-entity-framework-4.aspx
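In a nutshell, code-first looks roughly like this (a sketch with a hypothetical Blog class, not the exact code from the post):

```csharp
using System.Data.Entity;

// With code-first there is no .edmx: the classes are the model,
// and EF creates the database from them on first use.
public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}

class Program
{
    static void Main()
    {
        using (var db = new BlogContext())
        {
            db.Blogs.Add(new Blog { Title = "Hello" });
            db.SaveChanges();
        }
    }
}
```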
As a side note, even if you start with the model designer, I think you always have to think about your tables/relationships, as getting these wrong in the database can cause you big problems further down the line.
I don't think there is a right or wrong way.
At our company, we develop the database changes directly first and then apply them to the EDMX model, for existing models.
For new models, we create the EDMX model first and then generate the database from it. From that point on, we usually update the database directly. After we have tested our code internally and it runs correctly, and we know our SQL database is correct (and of course before checking in), we apply the changes to the database project by doing a SQL compare between the database and the database project.
This has worked very well for us.