MVC SPA w/o EF: You must write an attribute 'type'='object' after writing the attribute with local name '__type' - asp.net-mvc-4

So I have a very normalized model, and I'm trying to create a single page application in MVC4, which expects to work with an Entity Framework object. My problem is that I can't create an entity in EF with the kind of complex mapping I need (I have checked multiple guides, but I can't seem to map one entity onto multiple tables that have different primary keys... I found a solution using updatable views, but that just pushes the abstraction down to the database layer).
So I thought I could create a POCO object populated by an EF query, then on insert/update/delete take the POCO data and update the three underlying tables myself.
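Roughly something like this (the table and key names below are just placeholders, not my real schema):
public IQueryable<Bar> GetBars(aspnetEntities context)
{
    // Flatten the three normalized tables into one Bar per logical record.
    return from a in context.TableA
           join b in context.TableB on a.SharedKey equals b.SharedKey
           join c in context.TableC on b.OtherKey equals c.OtherKey
           select new Bar
           {
               Id = a.Id,
               Property = b.Property
           };
}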
Well I hit a roadblock just trying to tweak an existing working controller to try and learn what's going on.
Let's imagine I have a working SPA controller that looks like this:
public partial class FooController : DbDataController<aspnetEntities>
{
public IQueryable<Foos> GetFoos() { ... }
}
I just change it a bit to return my new POCO data object Bar, which let's imagine has the exact same fields as Foo for the moment:
public partial class FooController : DbDataController<aspnetEntities>
{
public IQueryable<Bars> GetBars() { ... }
}
Over in FooViewModel.js I update the operation name to GetBars, and the type from
var entityType = "Foo:#Models";
to
var entityType = "Bar:#Models";
I hit my operation directly and I get:
OLD
<ArrayOfFoo><Foo><Property>true</Property></Foo></ArrayOfFoo>
NEW
<ArrayOfBar><Bar><Property>true</Property></Bar></ArrayOfBar>
So the controller looks like it's giving me what I expect, but when I try to put the whole thing together the SPA flashes up:
You must write an attribute 'type'='object' after writing the attribute with local name '__type'.
I'm guessing somehow I need to get type data into KO? I'm not sure where that might be however, I've been crawling through the JS for hours, but I'm not even clear on where it's failing. Any input would be greatly appreciated.

The problem you are experiencing is connected to the fact that you are using a POCO class instead of a standard EF entity. It is most likely related to the Web API serializer, which somehow doesn't recognize the class as serializable. In any case it is a bug that should be fixed in the RC. Have a look at this thread for workarounds:
http://forums.asp.net/t/1773173.aspx/1?You+must+write+an+attribute+type+object+after+writing+the+attribute+with+local+name+__type+

Related

What is the best practice when adding data in one-to-many relationship?

I am developing a website for a beauty salon. There is an admin part of the website, where the esthetician can add a new care. A care is linked to a care category (all cares related to hands, feet, massages, ...). To solve this I wrote this code in the CareRepository in the .NET API:
public async Task<Care?> AddAsync(Care care)
{
// var dbCareCategory = await this._careCategoryRepository.GetByNameAsync(care.CareCategoryName);
if (string.IsNullOrEmpty(care.CareCategoryName) || string.IsNullOrWhiteSpace(care.CareCategoryName))
return null;
var dbCareCategory = await this._instituteDbContext.CareCategories
.Where(careCategory => Equals(careCategory.Name, care.CareCategoryName))
.Include(careCategory => careCategory.Cares)
.FirstOrDefaultAsync();
if (dbCareCategory == null || dbCareCategory.Cares.Contains(care))
return null; // TODO : improve handling
dbCareCategory.Cares.Add(care);
await this._instituteDbContext.SaveChangesAsync();
return care;
}
My problem here is that I am a bit struggling with the best practice to follow, because in order to add a care to a category, I have to get the category first. At first, I called the CareCategoryRepository to get the category (commented line). But then EF was not tracking the change, so when I tried to add the care, it was not registered in the database. Once I query the category from the CareRepository instead, EF tracks the change and saves the Care in the database.
I assume this is because when calling from another repository, it is a different db context that tracks the changes of my entity. Please correct me if my assumption is wrong.
So, I am wondering, what is the best practice in this case?
1. Keep doing what I am doing here; there is no issue with querying the category entities from the care repository.
2. Change my solution and put the AddCare method into the CareCategoryRepository, because it makes more sense to query the category entities from the CareCategoryRepository.
3. Something else?
This solution works fine; however, I feel like it may not be the best way to solve this.
The issue with passing entities around in web applications is that when your controller passes an entity to serve as a model for the view, it goes to the view engine on the server, which consumes it to build the HTML for the view; but what comes back to the controller when a form is posted or an AJAX call is made is not an entity. It is a block of data cast to look like an entity.
A better way to think of it is that your Add method accepts a view model called AddCareViewModel which contains all of the fields and FKs needed to create a new Care entity. Think about the process you would use in that case. You would want to validate that the required fields are present, and that the FKs (CareCategory etc.) are valid, then construct a Care entity with that data. Accepting an "entity" from the client side of the request is trusting the client browser to construct a valid entity without any way to validate it. Never trust anything from the client.
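A minimal sketch of what such a view model might look like (class and property names here are illustrative, not taken from the question):
using System.ComponentModel.DataAnnotations;

// Only the fields the client is allowed to send; the Care entity itself never crosses the wire.
public class AddCareViewModel
{
    [Required]
    public string Name { get; set; }

    [Required]
    public int CareCategoryId { get; set; }

    // Optional data, copied across after the entity has been constructed.
    public string Description { get; set; }
}
The controller validates ModelState (and the CareCategoryId against the database) before it ever constructs a Care entity.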
Personally I use the repository pattern to serve as a factory for entities, though this could also be a separate class. I use the Repository since it already has access to everything needed. Factory methods offer a standard way of composing a new entity and ensuring that all required data is provided:
var care = _careRepository.Create(addCareViewModel.CareCategoryId,
/* all other required fields */);
care.OptionalField = addCareViewModel.OptionalField; // copy across optional data.
_context.SaveChanges(); // or ideally committed via a Unit of Work wrapper.
So for instance if a new Care requires a name, a category Id, and several other required fields, the Create method accepts those required fields and validates that they are provided. When it comes to FKs, the repository can load a reference to set the navigation property. (Also ensuring that a valid ID was given at the same time)
I don't recommend using a Generic Repository pattern with EF (i.e. a generic Repository<TEntity>). Arguably the only reason to use a Repository pattern at all with EF is to facilitate unit testing; your code will be a lot simpler to understand/maintain, and will almost certainly perform faster, without a Repository, since the DbContext already serves all of those needs and acts as a Unit of Work. When using a Repository as a point of abstraction to enable unit testing, instead of generic or per-entity repositories I would suggest defining repositories the way you would a Controller, with effectively a one-to-one responsibility.
If I have a CareController then I'd have a CareRepository that serves all needs of the CareController (not just Care entities). The alternative is that the CareController would need several repository references, and each repository would potentially serve several controllers, meaning it would have several reasons to change. Scoping a repository to serve the needs of a controller gives it only one reason to change. Sure, several repositories might have methods to retrieve the same entity, but only one repository/controller should typically be responsible for creating/updating a given entity. (Plus repositories can always reference one another if you really want something as simple as Read methods implemented only once.)
If using multiple repositories, the next thing to check is to ensure that the DbContext instance used is always scoped to the Web Request, and not something like Transient.
For instance if I have a CareRepository with a Create method that calls a CareCategoryRepository to get a CareCategory reference:
public Care Create(string name, int careCategoryId)
{
    if (string.IsNullOrEmpty(name)) throw new ArgumentNullException(nameof(name));
    var care = new Care { Name = name };
    // Loading the category validates the FK and supplies the navigation property.
    var careCategory = _careCategoryRepository.GetById(careCategoryId);
    care.CareCategory = careCategory;
    _context.Cares.Add(care);
    return care;
}
We would want to ensure that the DbContext reference (_context) in all of our repositories, and in our controller/Unit of Work if the controller is going to signal the commit with SaveChanges, points at the exact same instance. This applies whether repositories call each other or a controller fetches data from multiple repositories.
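As a rough sketch of that wiring (this assumes ASP.NET Core style dependency injection; InstituteDbContext and the repository interfaces are guesses based on the question's naming):
// In Program.cs / ConfigureServices.
// AddDbContext registers the context as Scoped by default, i.e. one instance per web request.
var connectionString = builder.Configuration.GetConnectionString("Institute"); // hypothetical connection string name
builder.Services.AddDbContext<InstituteDbContext>(options =>
    options.UseSqlServer(connectionString));

// Repositories are Scoped too, so they all receive that same request-scoped context.
builder.Services.AddScoped<ICareRepository, CareRepository>();
builder.Services.AddScoped<ICareCategoryRepository, CareCategoryRepository>();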

How is SaveChanges() called in BreezeController?

It appears that all the existing BreezeJS examples pass entity models to and from the BreezeController.
But almost all the pages we build use some form of view model. Before BreezeJS, we would retrieve the data (domain model) from a repository and populate (using AutoMapper or manually) a view model that contains only the data needed for that view. The Web API sends only the view model data over to the browser, where we populate a client-side view model (usually a knockout observable).
When saving data, we collect data from a <form> into an input view model and send only that data to the server, where the data in the input view model is mapped onto the domain model. The update is saved by calling SaveChanges() on the DbContext in the repository.
Now BreezeJS is supposed to take over all our repository code by creating an EFContextProvider. The examples I have seen usually retrieve the domain model data and pass it all to the client side:
[HttpGet]
public IQueryable<Item> Items() {
return _contextProvider.Context.Items;
}
It is the client-side javascript's job to build a view model.
Of course it is possible for us to build the view model at the server side:
[HttpGet]
public List<ItemViewModel> Items() {
var items = _contextProvider.Context.Items
.Include("RelatedEntity")
.ToList();
var model = new List<ItemViewModel>();
.... some code to build model from items ....
return model;
}
The benefit is that less data is transferred across the network, and we can do many manipulations on the server side. But I don't know if it is good practice to modify the BreezeController like that. At least it returns the data needed to list all the items.
The real trouble came when I tried to POST data back.
In the BreezeJS examples I found, they use a ko.observableArray() to store all the domain model data, let's say vm.items. The new record newItem is built by manager.createEntity into a domain model. After validating the data with item.entityAspect.validateEntity(), the newItem is pushed into vm.items and manager.saveChanges() is called, which somehow invokes SaveChanges() on the BreezeController.
[HttpPost]
public SaveResult SaveChanges(JObject saveBundle) {
return _contextProvider.SaveChanges(saveBundle);
}
I find too many things have been taken over! (Laugh at me if you disagree.) My questions are:
Can I just createEntity and then saveChanges?
I only have an empty form to fill in and submit. There is certainly no need to build a whole items array on the client-side.
Can I pass an input view model as a JObject and do some server-side processing before calling _contextProvider.SaveChanges()?
It turns out to be a super long post again. Thank you for reading it all through. Really appreciate it!
Good questions. Unfortunately, our demo code seems to have obscured the real capabilities of Breeze on both client and server. Breeze is not constrained in the ways that you fear.
I don't want to repeat everything that is in our documentation. We really do talk about these issues. We need more examples to be sure.
You are describing a CQRS design. I think it over-complicates most applications. But it's your prerogative.
If you want to send ItemViewModel instead of Item, you can. If you want that to be treated as an entity on the Breeze client - have the EntityManager turn it into a KO observable, manage it in cache, change-track it, validate it - you'll have to provide metadata for it, either on the server or on the client. That's true for Breeze ... and every other system you can name (Ember, Backbone, etc.). Soon we will make it easier to create metadata on the server for an arbitrary CLR model; that may help.
You have complete control over the query on the server, btw, whether Item or ItemViewModel. You don't have to expose an open-ended query for either. You seem to know that by virtue of your 2nd example query.
On to the Command side.
You wrote: "[the examples] use a ko.observableArray() to store all the domain model data, let's say vm.items"
That is not precisely true. The items array that you see in examples exists for presentation. The items array isn't storing anything from a Breeze perspective. In fact, after a query, the entities returned in the query response (if they are entities) are already in the manager's cache no matter what you do with the query result, whether you put them in an array or throw them away. An array plays no role whatsoever in the manager's tracking of the entities.
You wrote: "Can I just createEntity and then saveChanges?"
Of course! The EntityManager.createEntity method puts a new entity in cache. Again, the reason you see it being pushed into the items array is for presentation to the user. That array has no bearing on what the manager will save.
You wrote: "Can I pass an input view model ... and do some server-side processing before calling _contextProvider.SaveChanges()?"
I don't know what you mean by "an input viewmodel". The Breeze EntityManager tracks entities. If your "input viewmodel" is an entity, the EntityManager will track it. If it has changed and you call saveChanges, the manager will send it to the controller's SaveChanges method.
You own the implementation of the controller's SaveChanges method. You can do anything you want with that JObject which is simply a JSON.NET representation of the change-set data. I think you'll benefit from the work that the ContextProvider does to parse that object into a SaveMap. Read the topic on Customizing the EFContextProvider. Most people feel this provides what they need for validating and manipulating the client change-set data before passing those data onto the data access layer ... whether that is EF or something else.
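As a rough illustration of that interception point (a sketch only: MyDbContext, Item, and the rule are made up; BeforeSaveEntity is the ContextProvider hook the documentation describes):
// Namespace is Breeze.WebApi in older versions, Breeze.ContextProvider(.EF6) in later ones.
public class MyContextProvider : EFContextProvider<MyDbContext>
{
    // Called for each entity in the incoming change-set (the SaveMap) before anything reaches EF.
    protected override bool BeforeSaveEntity(EntityInfo entityInfo)
    {
        var item = entityInfo.Entity as Item;
        if (item != null && string.IsNullOrWhiteSpace(item.Name))
        {
            // Returning false excludes this entity from the save;
            // throw an exception instead if the whole save should fail.
            return false;
        }
        return true;
    }
}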
If instead you want to create your own, custom DTO to POST to your own custom controller method ... go right ahead. Don't call EntityManager.saveChanges though. Call EntityManager.getChanges() and manipulate that change array of entities into your DTO. You'll be doing everything by hand. But you can. Personally, I'd have better things to do.

Zend Framework 2, Entity Manager, and Doctrine 2

Had a question about what best practice might be for the implementation of "convenience" queries. In reference to this article:
http://www.jasongrimes.org/2012/01/using-doctrine-2-in-zend-framework-2/#toc-install-doctrine-modules
It's clear that the entity manager is available in the IndexController - he does a findAll to list the entire contents of the database. What if, however, we added a "band" column to the database, mapped it out, and wanted to query all albums by the Beatles? What if the Beatles albums were used rather often throughout the codebase (weak example, but you get it).
The EM only seems to be available in Controllers, and Classes don't really seem to be aware of the service locator.
Would you simply break out DQL right in the controller, and repeat the DQL in every controller that needs it? (not very DRY)
Do we instead finagle some access to the EM from the Entity, or Model?
It doesn't seem as cut-and-dried as straight Zend_Db usage, where you can fire queries anywhere you like, cheating to get things done.
Thanks for helping me cross over into a "real" ORM from the Table Gateway world.
Erm, Doctrine 2 is able to handle relationships (e.g. Bands to Albums and vice versa).
The EntityManager can be made available in every single class you wish, as long as you define the class as a service. For example, inside your Module.php you can define a factory like this:
// Implement \Zend\ModuleManager\Feature\ServiceProviderInterface
public function getServiceConfig() {
    return array(
        // default stuff
        'factories' => array(
            'my-album-service' => function ($sm) {
                $service = new \My\Service\Album();
                $service->setEntityManager($sm->get('doctrine.entitymanager.orm_default'));
                return $service;
            }
        )
    );
}
You can then call this class from every class that is aware of the ServiceManager, e.g. $this->getServiceLocator()->get('my-album-service').
This class would then automatically be injected with the Doctrine EntityManager.
To be clear: all the queries you run SHOULD live inside your Services. You'd have your Entities, which are basically the objects Doctrine 2 maps to the database, and then you have your Services, which run actions like add(), edit(), findAll(), findCustomQuery(), etc.
You would then populate your Services with data from the Controllers, the Service would give data back to the controller, and the controller would pass said data to the view. Does that make sense to you and answer your question?

OOP Value Objects and Entities in the same class

I am refactoring an old procedural PHP website into a tasty OOP application with a light sprinkling of Domain Driven Design for added flavour.
I keep stumbling upon cases where I have a need for classes that can have subclasses which are either entities or value objects.
A URL object, for example. There are a zillion URLs out there, so they cannot all really be entities. But some are very special URLs, like my home page. That is an entity.
Another example is, say, a 'configuration object'. I'd like some configurations to have identities so I can create 'presets' and administer them via an online control panel. For those, a finder/repository is needed to find them and an ORM is needed to manage their lifetimes. But for other 'not-presets' (of the same class hierarchy) I'd like to be able to load them up with data that has been customised on the fly and does not need to be persisted.
I am envisaging a lot of:
class Factory {
    public function reconstitute($rawData) {
        if (/* raw data has identity */) {
            // load up and return the entity version of the class
        } else {
            // load up and return the anonymous/value-object version of the class
        }
    }
}
It all seems a bit odd.
Is there any pattern out there that discusses the best way to handle this issue?
I'm not sure I totally understand your scenario but... does that really matter?
In my experience with EFs/ORMs, the best way (that I can think of) to do what you want is to let your entity class decide whether or not to load/persist itself from/to a database, based on business rules defined in the class.
$url = new URLClass('KEY_DATA'); // returns a loaded url object if the key is found in the database
$url = new URLClass(); // returns a new url object
$url = new URLClass('', '110011000110010001000011101010010100'); // returns a new url object loaded from raw data
Not sure if that really helps you out or if it even applies.

Tracking changes in Entity Framework 4.0 using POCO Dynamic Proxies across multiple data contexts

I started messing with EF 4.0 because I am curious about the POCO possibilities... I wanted to simulate disconnected web environment and wrote the following code to simulate this:
1. Save a test object in the database.
2. Retrieve the test object.
3. Dispose of the DataContext associated with the test object I used to retrieve it.
4. Update the test object.
5. Create a new data context and persist the changes on the test object that are automatically tracked within the DynamicProxy generated against my POCO object.
The problem is that when I call dataContext.SaveChanges in the test method below, the updates are not applied. The testStore entity shows a status of "Modified" when I check its EntityStateTracker, but it is no longer modified when I view it within the new dataContext's Stores property. I would have thought that calling the Attach method on the new dataContext would also bring over the object's "Modified" state, but that appears not to be the case. Is there something I am missing? I am definitely working with self-tracking POCOs using DynamicProxies.
private static void SaveTestStore(string storeName = "TestStore")
{
using (var context = new DataContext())
{
Store newStore = context.Stores.CreateObject();
newStore.Name = storeName;
context.Stores.AddObject(newStore);
context.SaveChanges();
}
}
private static Store GetStore(string storeName = "TestStore")
{
using (var context = new DataContext())
{
return (from store in context.Stores
where store.Name == storeName
select store).SingleOrDefault();
}
}
[Test]
public void Test_Store_Update_Using_Different_DataContext()
{
SaveTestStore();
Store testStore = GetStore();
testStore.Name = "Updated";
using (var dataContext = new DataContext())
{
dataContext.Stores.Attach(testStore);
dataContext.SaveChanges(SaveOptions.DetectChangesBeforeSave);
}
Store updatedStore = GetStore("Updated");
Assert.IsNotNull(updatedStore);
}
As you stated later, you were using the POCO generator, not the self-tracking entities generator.
I've tried it as well, and became quite perplexed. It seems that the proxy classes don't quite work as expected, and there might be a bug. Then again, none of the examples on MSDN try something like this, and when they reference updates in different tiers of an app (something like we're doing here) they use self-tracking entities, not POCO proxies.
I'm not sure how these proxies work, but they do seem to store some kind of state (I managed to find the "Modified" state inside the private properties). But it seems that this property is COMPLETELY ignored. When you attach an entity to a context, the context adds an entry to the ObjectStateManager and stores further state updates in there. At that point, if you make a change it will be registered and applied.
The problem is that when you .Attach an entity, the Modified state from the proxy is not transferred to the state manager inside the context. Furthermore, if you use context.Refresh() the updates are overridden and forgotten, even if you pass RefreshMode.ClientWins into it. I tried setting the object state's State property to Modified, but it was overridden anyway and the original settings were restored.
It seems that there's a bug in EF right now, and the only way to do this would be to use something like this:
using (var db = new Entities())
{
    // Load the current copy so the context has an attached entity with the same key...
    var currentUser = (from u in db.Users
                       where u.Id == user.Id
                       select u).SingleOrDefault();
    // ...then copy the detached object's scalar values onto it and save.
    db.Users.ApplyCurrentValues(user);
    db.SaveChanges();
}
One more thing here:
Entity Framework: Change tracking in SOA with POCO approach
It seems that POCO just doesn't support the approach you're looking for, and as I expected the self-tracking entities were created to tackle exactly the situation you are testing, while POCO proxies track changes only within the context that created them. Or so it seems...
Try
db.ObjectStateManager.ChangeObjectState(user, System.Data.EntityState.Modified);
before calling SaveChanges.
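Applied to the test in the question, that would look roughly like this (a sketch using the EF 4 ObjectContext API the question already uses):
using (var dataContext = new DataContext())
{
    // Attach brings the detached proxy in as Unchanged, so mark it Modified explicitly before saving.
    dataContext.Stores.Attach(testStore);
    dataContext.ObjectStateManager.ChangeObjectState(testStore, System.Data.EntityState.Modified);
    dataContext.SaveChanges();
}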
After playing around with the self-tracking entities, I realised what your mistake was.
Instead of trying to attach the entity to the data context, you should instruct the data context to apply the new changes you have made to the database.
In this case, change the "saving" code to this:
using (var dataContext = new DataContext())
{
dataContext.Stores.ApplyChanges(testStore);
dataContext.SaveChanges();
}
At least I have tested it on my local machine, and it worked after this update :)
Hope this helps!
I think the root of your problem is your management of the Context object.
With POCO, disposing the context does not notify the entities on that context that they are no longer associated with a context. Change tracking with POCO is all managed by the context, so you get into some fun problems where the POCO acts as if it is still attached to a context when in reality it is not, and re-attaching it to another context should throw an error about being attached to multiple contexts.
There is a small post about this that you may want to read here:
http://social.msdn.microsoft.com/forums/en-US/adodotnetentityframework/thread/5ee5db93-f8f3-44ef-8615-5002949bea71/
If you switch to self-tracking entities, I think you'll find they work the way you want.
Another option is to add a property to a partial class of your POCO and track changes manually after detaching the POCO from the context you used to load it.
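A minimal sketch of that last idea (the IsDirty flag is entirely hand-rolled, not an EF feature):
// The generated POCO classes are partial, so the flag can live in a separate file.
public partial class Store
{
    public bool IsDirty { get; set; }
}

// Later, when persisting with a fresh context:
using (var context = new DataContext())
{
    context.Stores.Attach(store);
    if (store.IsDirty)
    {
        context.ObjectStateManager.ChangeObjectState(store, System.Data.EntityState.Modified);
    }
    context.SaveChanges();
}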