I have created an n-tier solution where I am retrieving related data from a WCF service, updating it within a Windows Forms application, and then returning the updated data via WCF to be persisted to the database. The Application, WCF Service and Database are all on different machines.
The data being retrieved consists of an object and child objects...
public Product Select(string catalogueNumber) {
    // Eager-load the child Tracks so the whole graph crosses the wire
    return (from p in this.ProductEntities.Products.Include("Tracks")
            where p.vcCatalogueNumber == catalogueNumber
            select p).FirstOrDefault() ?? new Product();
}
As well as updating existing content, the client application can also insert additional "Track" objects.
When I receive the Product object back from the client application, I can see all of the updates correctly. However, in order to save all of the changes correctly, I have to jump through a few hoops...
public void Save(Product product) {
    Product original = this.Select(product.vcCatalogueNumber);
    if (original.EntityKey != null) {
        // Existing product: copy the scalar changes onto the attached copy
        this.ProductEntities.ApplyPropertyChanges(product.EntityKey.EntitySetName, product);
        // There must be a better way to sort out the child objects...
        foreach (Track track in product.Tracks.ToList()) {
            if (track.EntityKey == null) {
                // New track: add it to the attached product
                original.Tracks.Add(track);
            }
            else {
                // Existing track: copy its scalar changes too
                this.ProductEntities.ApplyPropertyChanges(track.EntityKey.EntitySetName, track);
            }
        }
    }
    else {
        // Product not found: treat the whole graph as an insert
        this.ProductEntities.AddToProducts(product);
    }
    this.ProductEntities.SaveChanges();
}
Surely, there has to be an easier way to do this?
Note: I have spent the better part of the afternoon investigating the EntityBag project, but found that it has not been updated to work with EF RTM. In particular, whilst it will successfully update existing data, exceptions are thrown when mixing in new objects.
I don't have a ready-made answer for your particular scenario - just a question: have you checked out ADO.NET Data Services (f.k.a. "Astoria")?
They're built on top of Entity Framework and WCF's RESTful capabilities, they offer a client-side experience, and they also seem to have a decent story not just for querying, but also for updating and inserting records into databases.
Could this be an option?
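To give a flavor of that client-side experience, here's a rough sketch (the service URI and entity-set names are assumptions, loosely matching your Product/Track model):

using System;
using System.Data.Services.Client; // the Astoria client library
using System.Linq;

public class CatalogueClient
{
    public void UpdateCatalogue()
    {
        // Point the context at the (hypothetical) data service endpoint
        var ctx = new DataServiceContext(new Uri("http://server/Catalogue.svc"));

        // Query: the Where clause is translated into a RESTful GET ($filter)
        Product product = ctx.CreateQuery<Product>("Products")
            .Where(p => p.vcCatalogueNumber == "CAT-001")
            .ToList()
            .FirstOrDefault();

        if (product != null)
        {
            // Update: mark the tracked entity as modified
            ctx.UpdateObject(product);

            // Insert: queue a new entity for creation
            ctx.AddObject("Tracks", new Track());

            // One SaveChanges pushes all pending operations back over HTTP
            ctx.SaveChanges();
        }
    }
}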
Check them out on MSDN, at David Hayden's blog, on Channel9, or see some of the excellent sessions at MIX08 and MIX09.
Marc
You should probably take a look at Danny Simmons' EntityBag sample.
It is designed to simplify these sorts of issues:
http://code.msdn.microsoft.com/entitybag/
As CatZ says, things will be a lot easier in .NET 4.0.
One of the things we are planning to help with is a T4 template that generates self-tracking classes for you, plus some extra surface to make it simple for these self-tracking entities to ApplyChanges() to the context when they get back to the server tier.
Hope this helps
Cheers
Alex (PM on the Entity Framework team at Microsoft).
I see that this thread is quite closely followed, so I'll allow myself a little update...
Weeeeee!
Self-tracking entities have arrived in EF 4!
Check this out:
http://blogs.msdn.com/efdesign/archive/2009/03/24/self-tracking-entities-in-the-entity-framework.aspx
Explanation of the self-tracking mechanism by the Entity Framework team.
http://aleembawany.com/2009/05/17/new-features-in-entity-framework-40-v2/
Announcement of new features in EF 4.
http://msdn.microsoft.com/en-us/magazine/ee321569.aspx
Comparison of several N-Tier patterns for disconnected entities.
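To give a flavor of the pattern, the server-side save ends up looking roughly like this (a sketch assuming the ApplyChanges extension generated by the self-tracking entities template):

public void Save(Product product)
{
    using (var context = new ProductEntities())
    {
        // Each self-tracking entity carries its own change state
        // (Added/Modified/Deleted), so ApplyChanges can replay the whole
        // graph - new Tracks included - onto the context in one call
        context.Products.ApplyChanges(product);
        context.SaveChanges();
    }
}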
Enjoy!
In Entity Framework 4 you can use the method "ApplyCurrentValues" to update a detached entity.
In your scenario it would be something like this:
this.ProductEntities.Products.ApplyCurrentValues(product);
foreach (Track track in product.Tracks.ToList()) {
    if (track.EntityKey != null)
    {
        // Existing entity: copy the detached entity's scalar values onto
        // the attached copy (which must already be loaded in the context)
        this.ProductEntities.Tracks.ApplyCurrentValues(track);
    }
    else
    {
        // New entity: add it so SaveChanges inserts it
        // (Attach would register it as Unchanged, so nothing would be saved)
        this.ProductEntities.Tracks.AddObject(track);
    }
}
I hope it will be useful
One of the limitations of Entity Framework v1.0 is updating detached entities. Unfortunately, I think you are out of luck until version 2 arrives.
Related
I'm working on a requirement to change an existing ASP.NET MVC application to become multi-tenant ready. The application was built for "only one customer" at a time; for each client there's a new installation of the MVC app. The application's database structure is already prepared to host "multiple" websites inside one MVC app, so all the database queries already take the site into consideration (siteId).
I have several questions regarding multi-tenancy applications and I'm still studying the topic. Today I started making changes to the MVC app and came across one thing: the application has a table with several configuration settings, things like AppSMTPServer, AppShowLoginBox, etc. These are parameters created to make the app dynamic.
All these configurations are currently stored in the ApplicationState inside a static class, something like this:
public static IDictionary<String, String> Configurations
{
    get
    {
        // Lazily load the settings into application state on first access
        if (HttpContext.Current.Application[CONFIGURATIONS] == null)
        {
            LoadConfiguration();
        }
        return (IDictionary<String, String>)HttpContext.Current.Application[CONFIGURATIONS];
    }
    private set
    {
        HttpContext.Current.Application[CONFIGURATIONS] = value;
    }
}
My question is: if I change the MVC app to become multi-tenant ready, each tenant will have its own configuration values, so I can no longer store them in the ApplicationState, which is populated on Application_Start and stays there for good.
What are the options for storing tenant-specific configuration data? I looked on several sites and couldn't find any good practices on this. If I missed something that would help, please leave a comment. Thanks!
In my experience building multi-tenant apps, this use case can be handled as follows:
Data remains in the DB.
Upon a tenant login, when we need their config values, we fetch them from the DB store and add them to a cache [Redis - a distributed cache].
Similarly, we can cache on each tenant hit; this way, as the application is used repeatedly, more of the static data moves into the cache, lowering the load on the app and the DB and improving response times.
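As a minimal sketch of that cache-aside idea (using an in-process MemoryCache for brevity in place of Redis; TenantConfigRepository is a hypothetical stand-in for your own DB access):

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public static class TenantConfiguration
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static IDictionary<string, string> For(int siteId)
    {
        string key = "config:" + siteId;
        var config = Cache.Get(key) as IDictionary<string, string>;
        if (config == null)
        {
            // Cache miss: load this tenant's settings from the DB,
            // then keep them warm for subsequent hits
            config = TenantConfigRepository.LoadConfiguration(siteId);
            Cache.Set(key, config, DateTimeOffset.Now.AddMinutes(30));
        }
        return config;
    }
}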
Is there any published guidance for handling transient fault scenarios in SQL Azure when using WCF RIA Services backed by Entity Framework and/or Linq to SQL?
We have studied the CAT retry library (Transient Fault Handling Framework for SQL Azure) and documentation (Retry Logic for Transient Failures in SQL Azure), particularly the sections as they relate to Entity Framework and Linq to SQL.
For example, in the case of Linq to SQL, we are instructed to wrap our query/update code into an ExecuteAction and execute it with a RetryPolicy.
This article (Silverlight 4, EF 4, RIA Services & Windows Azure together) suggests that the best we can hope for is to add resiliency to the connection. However, it appears that we may get the result we want by overriding the PersistChangeSet method on LinqToEntitiesDomainService and LinqToSqlDomainService and adding our retry there.
e.g. (pseudocode):
protected override bool PersistChangeSet()
{
    // Wrap the base persistence call in the retry policy
    return retry.ExecuteAction(() =>
    {
        return base.PersistChangeSet();
    });
}
Any thoughts to this approach? Is there any documentation or guidance out there, specifically as relates to RIA Services?
This was originally going to be a comment but...
I recently used the RetryPolicy class with entity framework and encountered no problems.
Adding the retries is extremely simple and low impact.
The linked article is dated (May 30), which is before Azure support was added to the Enterprise Library (see this announcement).
So based on the scenario described above I would say that the RetryPolicy will satisfy the requirements described.
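For reference, wrapping an EF SaveChanges in the retry library looks roughly like this (a sketch; MyEntities is a hypothetical context, and the exact type and namespace names depend on which version of the retry library you're on):

using System;
// RetryPolicy and SqlAzureTransientErrorDetectionStrategy come from the CAT
// "Transient Fault Handling Framework for SQL Azure"; the namespace differs
// between that library and the later Enterprise Library block.

public class ResilientPersistence
{
    public void Save()
    {
        // 3 retries, 1 second apart, retrying only on errors the detection
        // strategy classifies as transient
        var retryPolicy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(
            3, TimeSpan.FromSeconds(1));

        retryPolicy.ExecuteAction(() =>
        {
            using (var context = new MyEntities()) // hypothetical EF context
            {
                // ... apply changes to the context ...
                context.SaveChanges();
            }
        });
    }
}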
Clarified Updated Question - Start
In the official MVC 3 Getting Started tutorial, it seems to me that all we have to do to get the ORM working are two steps.
First, add the simple MovieDBContext code as described at the end of part 4...
public class MovieDBContext : DbContext
{
    public DbSet<Movie> Movies { get; set; }
}
... and second, at the beginning of part 5, with a simple right-click on the Controllers folder we can auto-generate a MoviesController that implements CRUD functionality using Entity Framework, simply by telling it which Model to use.
Now, when using the web application, we can already write to and read from the database.
What would be the simplest (or a simple) way to get this done for our Movie model with NHibernate instead of Entity Framework?
Clarified Updated Question - End
Original question (only for additional background-info):
I'm trying to create an ASP.Net MVC 3 application that uses NHibernate and Postgres.
Background Info
Development is done on Windows with Visual Web Developer Express, the production environment will be/should be Linux+Mono.
Steps that have worked so far:
An ASP.Net Dynamic Data Entities Web Application using Npgsql and Postgres as the DB.
Successfully run on Windows development machine.
(Following this tutorial)
An ASP.Net MVC 3 application without using a database/model yet:
Successfully run on my Windows development machine and deployed to a Linux production environment using Mono and Nginx. (Only as a proof of concept for myself, not as a web app used by the public.)
An ASP.Net MVC 3 application with a model using SQL Server Express as the DB.
Successfully run on my Windows development machine.
(Following the MVC 3 Getting Started-tutorial)
Question
So far I've managed to get Postgres to work with a "Dynamic Data Entities Web Application", but with an MVC 3 web app I'm stuck on where/how to start. For the last-mentioned MVC-3-Movie web app, I want to switch the DB from SQL Server Express to Postgres using NHibernate and Npgsql (NHibernate, since Mono doesn't support Entity Framework).
When you look at the end of part 4, there's the simple MovieDBContext code
public class MovieDBContext : DbContext
{
    public DbSet<Movie> Movies { get; set; }
}
and at the beginning of part 5, we auto-generate the CRUD stuff using Entity Framework, simply by telling it which Model to use.
(MoviesController.cs, Create.cshtml, Delete.cshtml, Details.cshtml, Edit.cshtml, and Index.cshtml)
So I have that working with Entity Framework and SQL Server Express, but how would I achieve the same result using NHibernate? (It doesn't have to be with Postgres immediately; sticking with SQL Server as a first step would be fine.) (Hopefully with similar simplicity, but getting the result at all would be great.)
I found a lot of old material on how to map things manually, but what would be a good, up-to-date, standard way of achieving this with NHibernate for MVC 3?
(The closest thing I found was the source code mentioned in this thread, but it's 64 MB unzipped, I got several "Projects not loaded successfully" errors, and the author says he uses MVC 2, so I think it's a little over my head as a complete NHibernate noob.)
I think showing how this is done could be very useful for others as well, since the original tutorial is very easy to follow and is linked as the official starting point for MVC 3 app development on http://www.asp.net/mvc ("Your First ASP.NET MVC App").
So I think this would be a great up-to-date example of how to use NHibernate with MVC 3.
Actually, those automated scaffolding features aren't helpful enough in real-world applications. We have to separate concerns, and using a DataContext in the UI layer is not good practice, because that dependency causes problems such as poor testability and violations of best practices. I think your project needs the following:
Separation of concerns (layered architecture: UI layer, service layer, domain layer, infrastructure layer)
A generic repository and unit-of-work wrapper around the database functionality and ORM (EF, NHibernate, etc.) - see the sketch after the references below
In your service layer, process repositories and units of work, and expose data transfer objects or your domain objects (POCOs) to the UI layer
Use IoC to inject dependencies; this will help you minimize coupling
Create unit tests and integration tests
Use continuous integration and source control (preferably distributed: Mercurial)
Useful References:
(Sharp Architecture) http://sharparchitecture.codeplex.com/
(IOC Container) http://www.castleproject.org/container/
(Generic repository) http://code.google.com/p/genericrepository/
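As a rough sketch of that generic repository idea over NHibernate (the interface and per-request session wiring are illustrative, not taken from any particular library):

using NHibernate;

public interface IRepository<T> where T : class
{
    T Get(int id);
    void Save(T entity);
    void Delete(T entity);
}

public class NHibernateRepository<T> : IRepository<T> where T : class
{
    private readonly ISession _session;

    // The ISession would be injected per request by your IoC container
    public NHibernateRepository(ISession session)
    {
        _session = session;
    }

    public T Get(int id)         { return _session.Get<T>(id); }
    public void Save(T entity)   { _session.SaveOrUpdate(entity); }
    public void Delete(T entity) { _session.Delete(entity); }
}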
NuGet is your friend. Here's a good example of using NuGet to wire in your dependencies and configuration pretty much automatically.
Hope this helps.
A suggestion: don't get hung up on all the automatic stuff the tutorials show you. Microsoft is just trying to show that you can easily get things started if you don't try to do anything unique.
Now for your situation. When you make a controller, you want to bind that controller to a type of model that you created somewhere. With NHibernate, I'm thinking that you'll have created these POCOs manually and that you're using one of the many ways to map those POCOs through NHibernate to your database.
You won't be able to use the Entity Framework options, because they depend on features of that framework to provide information about the object, the database, etc. The easiest thing is to just make a controller that either gives you the CRUD options, or an empty controller in which you build up your own ActionResults.
Hope this helps some and good luck with your project.
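To make that concrete for the tutorial's Movie model, here is a minimal sketch using NHibernate 3.2's built-in mapping-by-code (earlier versions would use an hbm.xml file or Fluent NHibernate instead; the configuration details here are assumptions):

using System;
using System.Linq;
using System.Web.Mvc;
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Linq;
using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

// The POCO: members are virtual so NHibernate can proxy them
public class Movie
{
    public virtual int Id { get; set; }
    public virtual string Title { get; set; }
    public virtual DateTime ReleaseDate { get; set; }
}

// Hand-written mapping, replacing what EF's designer/conventions did for you
public class MovieMap : ClassMapping<Movie>
{
    public MovieMap()
    {
        Id(x => x.Id, m => m.Generator(Generators.Identity));
        Property(x => x.Title);
        Property(x => x.ReleaseDate);
    }
}

// Built once at startup (e.g. in Global.asax), reading hibernate.cfg.xml /
// web.config for the connection string and dialect
public static class NHibernateHelper
{
    public static readonly ISessionFactory SessionFactory = Build();

    private static ISessionFactory Build()
    {
        var cfg = new Configuration().Configure();
        var mapper = new ModelMapper();
        mapper.AddMapping<MovieMap>();
        cfg.AddMapping(mapper.CompileMappingForAllExplicitlyAddedEntities());
        return cfg.BuildSessionFactory();
    }
}

// A hand-rolled controller action instead of the EF scaffolding
public class MoviesController : Controller
{
    public ActionResult Index()
    {
        using (ISession session = NHibernateHelper.SessionFactory.OpenSession())
        {
            var movies = session.Query<Movie>().ToList();
            return View(movies);
        }
    }
}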
I'm trying to get the above configuration working, but with no luck.
Step 1)
I started a new solution with a WCF Service Application project.
Step 2)
In this project, I added an edmx file and created a very simple model:
Entity Parent with Id and DisplayName
Entity Child with Id and ChildDisplayName
An association from Parent to Child, 1-to-m, resulting in navigation properties on both entities.
I generated the database without any problems. After generation, I manually inserted one Parent object with two related Child objects into the database.
Step 3)
I added the code generation, using the ADO.NET Self-Tracking Entity Generator.
I know that this should be done in different assemblies, but to keep it straightforward and easy, I put it all in the same project (the WCF project).
Step 4)
I just changed the IService interface to expose a simple get:
[OperationContract]
Parent GetRootData(Int32 Id);
In the corresponding implementation, I take a Parent object from the context and return it:
using (PpjSteContainer _context = new PpjSteContainer())
{
    // Eager-load the related Child objects so the full graph is returned
    return _context.ParentSet.Include("Child").Single(x => x.Id == Id);
}
Problem:
If I now run this project (Service1.svc is the start page), VS2010 automatically generates the test client to invoke the service. But once I invoke the service, I get a StackOverflowException! Debugging on the server side looks OK until it returns the object graph.
If I remove the Include("Child") everything is ok, but of course the Child objects are missing now.
I have no idea what I'm missing. I read a lot of howto's and guides, but all do it the way I did it (at least that's what I think)...
I tried the School example here, but it did not work for me, as the database generation and the code in the example do not seem to match.
So, I would much appreciate it if someone could guide me on how to make this work.
P.S.
Yes, all entity classes are marked "[DataContract(IsReference = true)]"
Lazy loading is set to "false" in the edmx file
Edit:
I changed the WCF service to be hosted in a console app instead of IIS. Of course, I then had to write my own little test client.
Funnily enough, everything now works.
I of course have no idea why, but at least for my testing this is a solution...
Have a look here. Basically, you have to make the serializer aware of cycles in the navigation properties.
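In practice that usually means swapping in a DataContractSerializer created with preserveObjectReferences set to true. A sketch (the behavior class name is made up; the serializer constructor overload is the standard .NET one):

using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel.Description;

// Replaces WCF's default serializer with one that emits z:Id/z:Ref
// references instead of recursing endlessly through Parent <-> Child cycles
public class PreserveReferencesBehavior : DataContractSerializerOperationBehavior
{
    public PreserveReferencesBehavior(OperationDescription operation)
        : base(operation) { }

    public override XmlObjectSerializer CreateSerializer(
        Type type, string name, string ns, IList<Type> knownTypes)
    {
        return new DataContractSerializer(type, name, ns, knownTypes,
            int.MaxValue, // maxItemsInObjectGraph
            false,        // ignoreExtensionDataObject
            true,         // preserveObjectReferences
            null);        // dataContractSurrogate
    }
}

You would then attach this behavior to each OperationDescription in place of the default one, typically via a custom attribute implementing IOperationBehavior.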
I'm working with an application right now that uses a third-party API for handling some batch email-related tasks, and in order for that to work, we need to store some information in this service. Unfortunately, this information (first/last name, email address) is also something we want to use from our application. My normal inclination is to pick one canonical data source and stick with it, but round-tripping to a web service every time I want to look up these fields isn't really a viable option (we use some of them quite a bit), and the service's API requires the records to be stored there, so the duplication is sadly necessary.
But I have no interest in peppering every method throughout our business classes with code to synchronize data to the web service any time they might be updated, and I also don't think my entity should be aware of the service to update itself in a property setter (or whatever else is updating the "truth").
We use NHibernate for all of our DAL needs, and to my mind this data replication is really a persistence issue - so I've whipped up a PoC implementation using an EventListener (both PostInsert and PostUpdate) that checks whether the entity is of type X and whether any of the fields [Y..Z] have changed, and if so, updates the web service with the new state.
I feel like this strikes a good balance between keeping our data the canonical source, replicating it transparently, and minimizing the chances for changes to fall through the cracks and get us into a mismatch situation (not the end of the world if, e.g., the service is unreachable - we'd just do a manual batch update later - but for everybody's sanity, the goal in the general case is that we never have to think about it). Still, my colleagues and I have a degree of discomfort with this way forward.
Is this a horrid idea that will invite raptors into my database at inopportune times? Is it a totally reasonable thing to do with an EventListener? Is it a serviceable solution to a less-than-ideal situation that we can just make do with and move on forever tainted? If we soldier on down this road, are there any gotchas I should be wary of in the Events pipeline?
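For reference, the PoC listener shape might look something like this (a sketch; the Contact entity, the property names, and EmailServiceClient are hypothetical placeholders for your type X, fields [Y..Z], and service proxy):

using System;
using NHibernate.Event;

public class ContactSyncListener : IPostUpdateEventListener
{
    private static readonly string[] SyncedProperties =
        { "FirstName", "LastName", "Email" };

    public void OnPostUpdate(PostUpdateEvent @event)
    {
        var contact = @event.Entity as Contact;
        if (contact == null) return;

        string[] names = @event.Persister.PropertyNames;
        for (int i = 0; i < names.Length; i++)
        {
            // OldState can be null (e.g. for detached updates), so treat
            // that case as "assume changed"
            bool changed = @event.OldState == null
                || !Equals(@event.OldState[i], @event.State[i]);

            if (Array.IndexOf(SyncedProperties, names[i]) >= 0 && changed)
            {
                new EmailServiceClient().UpdateContact(contact); // replicate
                break;
            }
        }
    }
}

// Registered at configuration time, alongside a similar IPostInsertEventListener:
// cfg.AppendListeners(ListenerType.PostUpdate,
//     new IPostUpdateEventListener[] { new ContactSyncListener() });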
In the case of unreliable data stores (the web service, in your case), I would introduce a concept of operations (transactions), store them in the local database, and then periodically pull them from the DB and execute them against the web service (the other data store).
Something like this:
public class OperationContainer
{
    public Operation Operation; // whatever operations you need: CRUD, or something more specific
    public object Data;         // your entity, business object, or whatever
}

public class MyMailService
{
    public void SendMail(MailBusinessObject data)
    {
        // Persist the entity locally, then queue the operation for the web service
        DataAccessLair<MailBusinessObject>.Persist(data);
        OperationContainer operation = new OperationContainer { Operation = Operation.Insert, Data = data };
        DataAccessLair<OperationContainer>.Persist(operation);
    }
}

public class Updater
{
    Timer EverySec; // fires OnEverySec()

    public void OnEverySec()
    {
        var operation = DataAccessLair<OperationContainer>.GetFirstIn(); // FIFO
        var webServiceData = WebServiceData.Convert(operation); // prepare the data for the web service
        try
        {
            new WebService().DoSomething(webServiceData);
            // Only dequeue once the call succeeds
            DataAccessLair<OperationContainer>.Remove(operation);
        }
        catch
        {
            // Leave the operation queued; it will be retried on the next tick
        }
    }
}
This is actually pretty close to the smart client concept - technically, if not logically. Take a look at the book .NET Domain-Driven Design with C#: Problem-Design-Solution, chapter 10, or at the source code from the book, which is pretty close to your situation: http://dddpds.codeplex.com/