I am relatively new to developing Cocoa applications on the Mac and come from a .NET C# background. I was wondering whether a Cocoa Model object should contain its own data access methods, such as Create, Update and Delete. Apple's documentation seems to lean towards the Model doing everything, but it doesn't seem right to have a Model (i.e. UserModel) with a method named GetUsers which returns a collection of UserModels!
In ASP.NET MVC all my Models are just a representation of a business object (i.e. a User) or a View. Using the example from above, it would be the controller's responsibility to call a service (a Business Layer or something of that nature) and get back a list of UserModel objects. The same controller would also populate a UserModel with data and pass it as a parameter to some other service which could then perform an Update or a Delete.
Any thoughts on this subject would be greatly appreciated, as the example code from Apple tends to be rather simple and doesn't really touch on CRUD-type operations.
Thanks in advance.
I also come from a .NET background and I agree that Apple sometimes confuses things a bit. I tend to keep my domain models clean and implement a data access service. The only time I do it differently is if I am using Core Data, in which case my domain-level objects are also Core Data objects (so they have underlying data persistence); HOWEVER, I still use a Storage Service / Data Access Service to retrieve and save through.
If you want an example of the Storage Service / DAL I use, one of my blog posts contains it: CoreData Example
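Since you come from .NET, the shape of that separation might be clearest as a C# sketch (all names hypothetical; in Cocoa the storage service is just a class wrapping your Core Data stack):

    using System;
    using System.Collections.Generic;

    // Plain domain model: it knows nothing about how it is stored.
    public class User
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
    }

    // A separate storage / data access service owns the CRUD operations,
    // so the model never has to answer "GetUsers" itself.
    public interface IUserStorageService
    {
        IList<User> GetUsers();
        void Save(User user);
        void Delete(User user);
    }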
I have three projects:
WCF Service project (Interface and Implementation)
aspx web project (client) that consumes the WCF Service
class library project that holds my business objects (shared by both WCF project and client)
I have a method in the WCF Service implementation class file that retrieves a generic list of data from SQL (referencing the project that holds the business objects), serializes the data using System.Web.Script.Serialization.JavaScriptSerializer, and returns the result as a string.
The web client takes this string and deserializes it back to the appropriate business object (referencing the project that holds the business objects).
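For reference, the round trip described above might look roughly like this (a sketch; User stands in for whatever business object the shared class library defines, and the wrapper class name is hypothetical):

    using System.Collections.Generic;
    using System.Web.Script.Serialization;   // System.Web.Extensions assembly

    public static class UserTransport
    {
        // Service side: serialize the shared business objects to a string.
        public static string SerializeUsers(List<User> users)
        {
            return new JavaScriptSerializer().Serialize(users);
        }

        // Client side: deserialize back to the same shared business objects.
        public static List<User> DeserializeUsers(string json)
        {
            return new JavaScriptSerializer().Deserialize<List<User>>(json);
        }
    }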
This is an intranet app and I want to make sure I am doing this correctly.
My questions are:
Should I be using DataContracts instead of business objects? Not sure when to use DataContracts and when to use the business objects.
If I am using DataContracts, should I not use System.Web.Script.Serialization.JavaScriptSerializer?
Any clarification would be appreciated.
Of course there is no one answer. I think the question is whether you want to use business objects in the first place, otherwise my fourth point pretty much covers it.
Do use the business objects if they look like the data contracts would, i.e. they are a bunch of public properties and do not contain collections of children/grandchildren etc.
Don't use the business objects if they contain a bunch of data you don't need. For example, populating a grid with hundreds of entities begs for a data contract specific to that grid (see the sketch after this list).
Do use the business objects if they contain validation logic etc. that you would otherwise have to duplicate in your web service.
Do use the business objects if you are just going to use the data contracts to fully inflate business objects anyway.
Don't use the business objects if you ever want to consume that service interface from non-.NET code.
Don't use the business objects if you have to massively configure their serialization.
Don't use the business objects if they need to "know" where they are (web server or app server)
Not your case but: Do use the business objects if you are building a rich client for data entry.
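To make the grid example concrete, here is a hedged sketch of a grid-specific data contract next to the business object it trims down (all names hypothetical):

    using System.Collections.Generic;
    using System.Runtime.Serialization;

    // Full business object: children, validation, data the grid never shows.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public List<Order> Orders { get; set; }   // children/grandchildren live here
        // ... validation logic, audit fields, etc.
    }

    // Grid-specific data contract: only the columns the grid binds to.
    [DataContract]
    public class CustomerGridRow
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
        [DataMember] public int OrderCount { get; set; }
    }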
That's all for now; I'll see if anything more occurs to me. :)
I am creating a brand new application, including the database, and I'm going to use Entity Framework Code First. It will also use WCF for services, which opens it up to multiple UIs for different devices, as well as making the services API usable from other, unknown apps.
I have seen this batted around in several posts here on SO, but I don't see direct questions or answers pertaining to Code First, although there are a few mentioning POCOs. I am going to ask the question again, so here goes: do I really need DTOs with Entity Framework Code First, or can I use the model as a set of common entities for all boundaries? I am really trying to follow the YAGNI train of thought, so while I have a clean sheet of paper I figured I would get this out of the way first.
Thanks,
Paul Speranza
There is no definite answer to this problem, which is also why you didn't find one.
Are you going to build services providing CRUD operations? That generally means your services will be able to return, insert, update and delete entities as they are: you will always expose the whole entity, or a single exactly defined serializable part of it, to all clients. If you go this route, it is probably worth checking out WCF Data Services.
Are you going to expose a business facade working with entities? The facade will provide real business methods instead of just CRUD operations. These business methods will receive some data object and decompose it into multiple entities inside the wrapped business logic. Here it makes sense to use a specific DTO for every operation. The DTO will carry only the data needed for the operation and return only the data the client is allowed to see.
A very simple example: suppose your entities keep information like LastModifiedBy. This is probably information you want to pass back to the client. In the first scenario you have a single serializable shape, so you pass it to the client and the client passes it back, modified, to the service. Now you must verify that the client didn't change the field, because he probably didn't have permission to do that, and you must do the same for every single field the client didn't have permission to change. In the second scenario your DTO with the updated data will simply not include this property (a DTO specialized for your operation), so the client will not be able to send you a new value at all.
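In code, the second scenario's update DTO simply has no slot for the protected field. A minimal sketch (Document and its properties are hypothetical names):

    // Entity as stored: includes server-owned audit data.
    public class Document
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string LastModifiedBy { get; set; }   // readable, never client-writable
    }

    // Operation-specific DTO for updates: LastModifiedBy is not even present,
    // so a forged value can never arrive from the client.
    public class UpdateDocumentDto
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }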
This also relates to how you want to work with the data and where your real logic will live. Will it be in the service or in the client? How will you ensure that the client does not post invalid data? Do you want to guard against invalid data with logic, or with purpose-specific transferred objects?
I strongly recommend a dedicated view model.
Doing this means:
You can design the UI (and iterate on it) without having to wait to design the data model first.
There is less friction when you want to change the UI.
You can avoid security problems with auto-mapping/model binding "accidentally" updating fields which shouldn't be editable by the user -- just don't put them in the view model.
However, with a WCF Data Service, it's hard to ignore the advantage of being able to write the service in essentially one line when you expose entities directly. So that might make the most sense for the WCF/server side.
But when it comes to UI, you're "gonna need it."
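To make the security point concrete, a minimal sketch (hypothetical names): the entity carries a field the user must never set, and the view model simply leaves it out, so model binding cannot touch it.

    // Entity: persisted by EF, includes a field the user must never set.
    public class Account
    {
        public int Id { get; set; }
        public string DisplayName { get; set; }
        public bool IsAdmin { get; set; }   // not user-editable
    }

    // View model: only what the edit screen needs. IsAdmin isn't here,
    // so auto-mapping/model binding can never "accidentally" update it.
    public class EditAccountViewModel
    {
        public int Id { get; set; }
        public string DisplayName { get; set; }
    }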
do I really need DTOs with Entity Framework Code First or can I use the model as a set of common entities for all boundaries?
Yes, the same set of POCOs / entities can be used for all boundaries.
But a set of mappers / converters / configurators will be needed to adapt entities to some generic structures of each layer.
For example, when entities are configured with DataContract and DataMember attributes, WCF is able to transfer domain objects' state without creating any special classes.
Similarly, when entities are mapped using Entity Framework fluent mapping api, EF is able to persist domain objects' state in database without creating any special classes.
The same way, entities can be configured to be used in any layer by means of the layer infrastructure without creating any special classes.
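A hedged sketch of one POCO adapted to two of those boundaries: DataContract/DataMember attributes configure it for WCF, and an Entity Framework fluent mapping class configures it for persistence, with neither boundary requiring a special parallel class (names hypothetical):

    using System.Data.Entity.ModelConfiguration;   // EntityFramework package
    using System.Runtime.Serialization;

    [DataContract]                                  // WCF boundary configuration
    public class Product
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    // EF boundary configuration, kept out of the entity itself.
    public class ProductMap : EntityTypeConfiguration<Product>
    {
        public ProductMap()
        {
            ToTable("Products");
            HasKey(p => p.Id);
            Property(p => p.Name).HasMaxLength(200);
        }
    }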
If you have a decent layered ASP.NET MVC 3 web application with a data service class pumping out view models pulled from a repository, sending JSON to an Ajax client,
[taking a breath]
what's a good way to add data filtering based on ASP.NET logins and roles without really messing up our data service class with these concerns?
We have a repository that kicks out Entity Framework 4.1 POCOs and accepts lambda expressions for where clauses (or specification objects).
The data service class creates query objects (like IQueryable) then returns them with .ToList() in the return statement.
I'm thinking maybe a specification that handles security roles passed to the data service class, or somehow essentially injecting a Lambda Expression in just the right place in the data service class?
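To sketch the kind of injection point I mean (all names hypothetical):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;

    public class Order { public int Id { get; set; } public string Owner { get; set; } }
    public class OrderViewModel { public int Id { get; set; } }

    public interface IOrderRepository   // hypothetical repository abstraction
    {
        IQueryable<Order> Query();
    }

    public class OrderDataService
    {
        private readonly IOrderRepository _repository;

        public OrderDataService(IOrderRepository repository)
        {
            _repository = repository;
        }

        // The security concern arrives as just another where clause,
        // composed into the query before it runs.
        public List<OrderViewModel> GetOrders(Expression<Func<Order, bool>> securityFilter)
        {
            return _repository.Query()
                              .Where(securityFilter)
                              .Select(o => new OrderViewModel { Id = o.Id })
                              .ToList();
        }
    }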
I am sure there is a fairly standardized pattern to implement something like this. Links to examples or books on the subject would be most appreciated.
If you've got a single-tiered application (as in, your web layer and service/data layer all run in the same process) then it's common to use a custom principal to achieve what you want.
You can use a custom principal to store extra data about a user (have a watch of this: http://www.asp.net/security/videos/use-custom-principal-objects), but the trick is to set this custom principal as the current thread's principal too, by doing Thread.CurrentPrincipal = myPrincipal.
This effectively means that you can get access to your user/role information from deep in your service layer without adding extra parameters to your methods (which is bad design). You do this by querying Thread.CurrentPrincipal and casting it to your own implementation.
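A minimal sketch of that idea (the custom principal type and DepartmentId are hypothetical):

    using System;
    using System.Security.Principal;
    using System.Threading;

    // Custom principal carrying extra data about the user.
    public class MyPrincipal : GenericPrincipal
    {
        public int DepartmentId { get; private set; }

        public MyPrincipal(IIdentity identity, string[] roles, int departmentId)
            : base(identity, roles)
        {
            DepartmentId = departmentId;
        }
    }

    public static class SecurityContext
    {
        // Called once at login, in the web layer.
        public static void SignIn(string userName, string[] roles, int departmentId)
        {
            var identity = new GenericIdentity(userName);
            Thread.CurrentPrincipal = new MyPrincipal(identity, roles, departmentId);
        }

        // Called from deep in the service layer; no extra method parameters needed.
        public static int CurrentDepartmentId()
        {
            var principal = Thread.CurrentPrincipal as MyPrincipal;
            if (principal == null)
                throw new InvalidOperationException("No custom principal set.");
            return principal.DepartmentId;   // use it to filter queries
        }
    }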
If your service/data layer exists in a different process (perhaps you're using web services), then you can still pass your user information separately from your method calls, by sending custom data headers along with the service request and leaving this kind of data out of your method calls.
Edit: to relate this back to your querying of data, any queries you write which are influenced by some aspect of the currently logged-in user or their role can pick that data up from your custom principal, again without passing special data through your method calls.
Hopefully this at least points you in the right direction.
It is not clear from your question whether you are using DI; since you mentioned your layers are split up properly, I am presuming so. Then again, this should be possible without DI too, I think...
Create an interface called IUserSession or something similar and implement it inside your ASP.NET MVC application. The interface can contain something like GetUser(); from this info I am sure you can filter data inside your middle tier. Otherwise you can simply use this IUserSession inside your web application and do the filtering in that tier...
See: https://gist.github.com/1042173
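Something along these lines (a sketch; names hypothetical):

    // Defined alongside the middle tier; the web tier supplies the implementation.
    public interface IUserSession
    {
        string GetUser();
    }

    // ASP.NET MVC implementation, registered with your DI container.
    public class AspNetUserSession : IUserSession
    {
        public string GetUser()
        {
            return System.Web.HttpContext.Current.User.Identity.Name;
        }
    }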
Let me first apologise for the length of this topic. It will be fairly long, but I wish to be sure the message comes across clearly and without errors.
Here at the company we have an existing ASP.NET web application, written in C# on the .NET Framework 3.5 SP1. Some time ago an initial API was developed for this web application using WCF and SOAP, to allow external parties to communicate with the application without relying on browsers.
This API survived for some time, but eventually the request came to create a new API that was RESTful and relied on newer technologies. I was given this assignment, and I created the initial API using the Microsoft MVC 2 Framework, running inside our ASP.NET web application. Initially it took quite some time to get it running properly, but at the moment we're able to make REST calls against the application and receive XML detailing our resources.
I've attended a Microsoft WebCamp, and I was immediately sold on the OData concept. It was very similar to what we were doing, but it is a protocol supported by many players instead of our own implementation. Currently I'm working on a PoC (proof of concept) to recreate the API I developed using the OData protocol and the WCF Data Services technology.
After searching the Internet for how to get NHibernate 2 to work with the Data Services, I succeeded in creating a read-only version of the API that allows us to read out the entities from the internal business layer by mapping the incoming query requests to that layer.
However, we wish to have a functional API that also allows the creation of entities using the OData protocol, so now I'm a bit stuck on how to proceed. I've been reading the following article: http://weblogs.asp.net/cibrax/default.aspx?PageIndex=3
The above article nicely explains how to map a custom DataService to the NHibernate layer. I've used this as a base to continue from, but I have the "problem" that I don't want to map my requests directly to the database using NHibernate; I wish to map them to our business layer (a separate DLL) that performs a large batch of checks, constraints and updates based upon access rights, privileges and triggers.
So what I want to ask is: if I, for example, create my own NhibernateContext class as in the above article, but rely on our business layer instead of NHibernate sessions, could it work? I'd probably have to rely on reflection a lot to figure out the type of object I'm working with at runtime and call the correct business classes to perform the updates and deletes.
To demonstrate with a small ASCII picture:
            *--------------------------*
            *         Database         *
            *--------------------------*
            *--------------------------*
            * DAL (Data Access Layer)  *
            *--------------------------*
            *--------------------------*
            *   BUL (Business Layer)   *
            *--------------------------*
    *------------------*  *----------------*
    *  My OData stuff  *  *  Internal API  *
    *------------------*  *----------------*
            *--------------------------*
            *     Web Application      *
            *--------------------------*
So, would this work, or would the performance make it useless?
Or am I just missing the ball here?
The idea is that I wish to reuse whatever logic is stored in the BUL & DAL layer from the OData WCF DataService.
I was thinking about creating new classes that inherit from the EntityModel classes in the Data.Services namespace, and creating a new DataService object that maps all calls to the BUL & DAL & API layers. I'm however not sure where/how to intercept the requests for creating and deleting resources.
I hope it's a bit clear what I'm trying to explain, and I hope someone can help me on this.
The devil is in the details, but it sounds like the design you're proposing should work.
The DataService class is where you get to define the access rights applicable to everyone, configuration settings, and custom operations. In this scenario, I think you will be focusing more on the data context instead (the 'T' in DataService<T>).
For the context, there are really two interesting paths: reads and writes. Reads happen through the IQueryable entry points. Writing a LINQ provider is a good chunk of work, but NHibernate already supports this, although it would return what I imagine we're calling DAL entities. You can use query interceptors to do access checks here, if you can express those in terms that the database would understand.
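For example, a query interceptor on the data service lets you append an access check to every read of a given entity set. A hedged sketch (MyContext stands in for your custom NHibernate-backed context; Order and Owner are hypothetical names):

    using System;
    using System.Data.Services;
    using System.Linq.Expressions;

    public class Order { public int Id { get; set; } public string Owner { get; set; } }

    public class MyDataService : DataService<MyContext>
    {
        // Runs on every read of the Orders set; rows the caller may
        // not see are filtered out before serialization.
        [QueryInterceptor("Orders")]
        public Expression<Func<Order, bool>> OnQueryOrders()
        {
            string user = System.Threading.Thread.CurrentPrincipal.Identity.Name;
            return o => o.Owner == user;
        }
    }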
The update path is, from what I understand, where you are trying to run more business logic (you mentioned validation, extra updates, etc.). To do this, you'll want to focus on the IUpdatable implementation (IDataServiceUpdateProvider if you're using the latest version). Here you can use whichever objects you want - they could be DAL objects or business objects. You can do everything in the DAL and then run validation on SaveChanges(), or do everything on business objects if they validate as they go.
There are two places where you might 'jump' from one kind of object to another. One is in the GetResource() API, where you get an IQueryable, presumably in terms of DAL entities. The other is in ResolveResource(), where the runtime is asking for an object to serialize, just like it would get from an IQueryable, so it's presumably also a DAL entity.
Hope this helps - doing uniform access over non-uniform APIs can be hard, but often well worth it!
We have a typical multi-tier/layer architecture. Application + WCF Service + Repository/EF4/Database.
We are using a customized version of the EF POCO T4 template to generate our entities, which we use across the tiers/layers. We have decided not to use DTOs because of the additional time/work involved.
An example object would be a forest which could have navigation properties of trees which could have navigation properties of leaves.
What is the best approach to add leaves and deal with the object graph? The data is being imported from the client side, so we don't necessarily know if the parent forest/tree already exists in the database.
1. Query the service and retrieve any existing related objects. Attach the graph for related objects, or create new objects and attach the graph, on the client side.
Example: public Forest GetForest(string forestid) then public void AddLeaf(Leaf leaf)
2. Create the forest, tree, and leaf objects on the client side and attach the graphs. Send the leaf object across and then, on the server side, perform logic to compare the objects to existing objects in the database. Strip graphs if required, and add items that do not exist and/or attach to existing objects.
Example: public void AddLeaf(Leaf leaf)
3. Create the forest, tree and leaf objects on the client side, but don't attach the graphs. Send the objects across and then, on the service side, perform the logic to compare the objects to existing objects in the database. Add items that do not exist and/or attach to existing objects.
Example: public void AddLeaf(Leaf leaf, Tree tree, Forest forest)
The question boils down to where should the logic take place to attach the graphs of these related objects.
On a side note, I am a little concerned about the "fixup" logic for the navigation properties when dealing with graphs being serialized and deserialized. It seems like that could become an expensive operation.
Note: The client application is a Windows service that is importing data, so it is not necessarily a lightweight client. (We are not necessarily afraid of adding logic to it.)
I had a similar question a few months ago. After playing with this problem a lot, my final decision was to use your third solution (my client is always a web application). This solution requires writing a lot of code, and it includes some additional database queries, because each time you want to update your objects you have to load the whole object graph first. The reason is that when working with detached objects you have to deal with change tracking manually.
When you use the third solution you can also involve DTOs and transfer only the data that is really needed between client and server.
In the case of a stateful client (a Windows app written in .NET, or maybe Silverlight) you can also use self-tracking entities and your first approach. Self-tracking entities are an implementation of the Changeset pattern. They can track all changes after being detached from the context, but you have to load your entities from the DB first. Self-tracking entities are not a good choice in the case of a web application client or a service consumed by non-.NET clients.
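For reference, a hedged sketch of what the server side of that third solution tends to look like with EF 4.1's DbContext (ForestContext and the Leaves collection are hypothetical names; Leaf, Tree and Forest come from the question):

    // Server side of option 3: the client sends unattached objects and
    // the service decides what already exists and wires up the graph.
    public void AddLeaf(Leaf leaf, Tree tree, Forest forest)
    {
        using (var db = new ForestContext())   // hypothetical DbContext
        {
            // Reuse existing rows where they exist; otherwise add the new ones.
            var targetForest = db.Forests.Find(forest.Id) ?? db.Forests.Add(forest);
            var targetTree = db.Trees.Find(tree.Id) ?? db.Trees.Add(tree);

            // Attach the graph on the service side, not the client side.
            targetTree.Forest = targetForest;
            targetTree.Leaves.Add(leaf);

            db.SaveChanges();
        }
    }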