Not sure how to isolate a proxy from a database connection - objective-c

I have the following:
An Abstract Person object
A Person object that inherits from the abstract
A Person proxy object that inherits from the abstract, and has one extra field since this is essentially a friend
A Person Repository and DAO that fetch everything I need
Here's the problem. At the time I retrieve one person's data, I can also retrieve a list of IDs that makes up their friends list. If I build those friend objects right after I fetch the person, that's too expensive. So I built a proxy to be a placeholder. However, I still need a database connection to retrieve the data when I actually want to use it. How exactly do I populate the proxy's internal Person object without the proxy doing its own fetching like the DAO does? Or do I have it wrong, and is it supposed to do that?

OK, I've figured it out. Proxy objects CAN have a database connection. The only caveat is that if the object gets serialized at some point, bad things can happen, although I won't be doing that. What I did is pass my DAO into the proxy when it is initialized, along with a factory for ease of object creation, and keep an internal instance of the real object. After that, the proxy's API must mirror that of the object it proxies, and all of its calls simply forward to the internal instance.
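Below is a minimal sketch of that arrangement. The question is tagged Objective-C, but the shape is the same in any language, so it is written here in C# for brevity; IPersonDao, FindById and the member names are illustrative assumptions, not the asker's actual API.

    public abstract class AbstractPerson
    {
        public abstract string Name { get; }
    }

    public class Person : AbstractPerson
    {
        private readonly string name;

        public Person(string name) { this.name = name; }

        public override string Name { get { return name; } }
    }

    public interface IPersonDao
    {
        Person FindById(int id);
    }

    public class PersonProxy : AbstractPerson
    {
        private readonly int personId;   // the friend's id we already have
        private readonly IPersonDao dao; // handed in at construction, used only on first access
        private Person person;           // the real instance, fetched lazily

        public PersonProxy(int personId, IPersonDao dao)
        {
            this.personId = personId;
            this.dao = dao;
        }

        private Person Real
        {
            get
            {
                if (person == null)
                    person = dao.FindById(personId); // hits the database only when actually needed
                return person;
            }
        }

        // Every member of the proxy simply forwards to the lazily loaded instance.
        public override string Name { get { return Real.Name; } }
    }

The friends list can then be filled with PersonProxy instances right after the person is fetched, and the extra query only runs for the friends that are actually touched.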

Related

Ninject: What happens to non-disposable InRequestScope and InTransientScope objects after the HTTP request is finished?

I have searched a lot about these questions, here and in a lot of other places, but I am still not getting everything I want to know!
From a WebApi project point of view, when are InTransientScope objects created? The Ninject docs state that such objects are created whenever they are requested, but in a Web API project that handles HTTP requests the instance is created at the start of the request, so in this regard is it the same as InRequestScope?
In a WebApi project, is it okay to use InTransientScope objects knowing that they will never be tracked by Ninject? If Ninject never keeps track of transient objects, then what is the purpose of this scope, and what happens to such objects after they have been used?
If I declare an object with InRequestScope and that object doesn't implement the IDisposable interface, what happens to such an object after the web request has completed? Will it be treated the same way as an InTransientScope object?
Should different scopes be used for WebApi controllers, repositories (that use an InRequestScope session created separately) and application services?
There are two purposes for scopes:
Only allow one object to be created per scope
(optionally) dispose of the object once the scope ends.
As said, the disposal is optional. If the object doesn't implement the IDisposable interface, it is not disposed. There are plenty of use cases for that.
InTransientScope is the default scope - the one used if you don't specify another. It means that every time a type A is requested from the kernel, one activation takes place and the result is returned. The activation logic is specified by the binding part that immediately follows the Bind part (To<...>, ToMethod(...), ...).
However, this is not necessarily at the time the web request starts and the controller is instantiated. For example, you can use factories or service location (e.g. ResolutionRoot.Get<Foo>()) to create more objects after the controller has been created. To answer your questions briefly:
When: whenever a request takes place, or whenever your code asks Ninject for a type, either directly (IResolutionRoot.Get(..)) or through a factory. As InTransientScope objects are not tracked, they will not be disposed; however, if they are not disposable and the entire request code resolves only one IFoo, then in practice there is no discernible difference (apart from the slight performance hit of tracking InRequestScope()-ed objects).
As long as you don't need to make sure that instances are shared and/or disposed, this is completely fine. Once they are no longer used, they will be garbage-collected like any object you would new up yourself.
When the scope ends, Ninject will remove the weak reference to the non-IDisposable object. The object itself will not be touched - just as when it is bound InTransientScope().
That depends on your specific requirements and implementation details. Generally, one needs to make sure that long-scoped objects don't depend on short-scoped objects. For example, a singleton service should not depend on a request-scoped object. As a base rule, everything should be InTransientScope() unless there's a specific reason why it should not be. The reason will dictate which scope to use...
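For reference, here is a rough sketch of how the three scopes discussed above are declared in a binding module. IFoo/Foo are placeholder types, InRequestScope() comes from the Ninject.Web.Common package in web projects, and the commented-out lines only illustrate where other bindings would go.

    using Ninject;
    using Ninject.Web.Common; // provides InRequestScope() in web projects

    public interface IFoo { }
    public class Foo : IFoo { }

    public static class Bindings
    {
        public static IKernel Configure()
        {
            var kernel = new StandardKernel();

            // Transient (the default when no scope is specified): a new instance is
            // activated every time one is resolved; Ninject never tracks or disposes it.
            kernel.Bind<IFoo>().To<Foo>().InTransientScope();

            // Request scope: one instance per HTTP request, disposed at the end of the
            // request only if the bound type implements IDisposable.
            // kernel.Bind<ISession>().To<NhSession>().InRequestScope();

            // Singleton: one instance for the lifetime of the kernel.
            // kernel.Bind<IAppService>().To<AppService>().InSingletonScope();

            return kernel;
        }
    }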

Better Approach for Creating Temp Object for Core Data with Restkit

In my app I have a scenario where I need to post an object to a remote server, get an object key back, and then store the object locally. I have Core Data and RestKit implemented in my app.
The object's values are collected from user input. I couldn't figure out a great way to prepare the object before posting it to the remote server. This object is an entity of type NSManagedObject, and I don't want to store it before I get the object id from the server.
I came across this, which suggested using a transient object to handle this situation. But as discussed in that thread, this causes issues with code maintenance.
Is there a better way to handle this scenario? Thanks.
Make your core data model class adhere to the RKRequestSerializable protocol.
Then, when the user input is validated, create an entity as normal and set it as the params value on the RKRequest; this will send your object as the HTTP body. Look inside RKParams.m for an example.
Also set the newly created entity as the targetObject for the RKObjectLoader. That way, when your web service returns the information (like the new unique ID), it will target the new object and save the new unique ID to this object without creating a duplicate.
Clear as mud?
PS: Oh, and be careful mixing autogenerated Core Data classes with custom code! I recommend mogen to help you avoid losing code each time you make a change.

WCF Serialization Information outside class definition

Suppose this simple scenario:
My client has an already working .NET application and he/she wants to expose some functionality through WCF. So he gives me an assembly containing a public class that exposes the following method:
OrderDetail GetOrderDetail(int orderId) // Suppose OrderDetail has {ProductId, Quantity, Amount}
Now, I want some members of OrderDetail (Amount) not to be serialized.
According to http://msdn.microsoft.com/en-us/library/aa738737.aspx, the way to do this is by means of the [DataContract] and [DataMember]/[IgnoreDataMember] attributes. However, that's not an option for me because I cannot modify the client's source code. So I'm looking for a way to specify which members I want serialized from outside the type's definition. Something that would look like this:
[OperationContract]
[IgnoreMember(typeof(OrderDetail), "Amount")]
OrderDetail QueryOrder(int orderId)
{
    return OrderDetail.GetOrderDetail(orderId);
}
Is there any way to do this?
Thanks,
Bernabé
Don't send the client's objects across the wire; create a DTO from the client's object containing only the information that you want to send, and send that instead.
This allows you to control exactly what information gets sent, and is in keeping with the WCF intention of passing messages, not objects.
So create an OrderDetailDto class and populate it with the data from the OrderDetail returned by the call to the method in the client's code. Decorate OrderDetailDto with the DataContract and DataMember attributes (you can rename the class there so that when it is returned by WCF it is returned with the name OrderDetail).
Repeat this for all objects in the client code, so that at the service boundary you basically convert DTO -> client objects and client objects -> DTO.
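A rough sketch of what that could look like (the ProductId and Quantity members and their int types are assumed from the question's description of OrderDetail, and IOrderService/OrderService are illustrative names):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // The DTO carries only the members that should go on the wire; Amount is
    // simply not part of the contract. Name = "OrderDetail" keeps the external
    // name that callers expect.
    [DataContract(Name = "OrderDetail")]
    public class OrderDetailDto
    {
        [DataMember]
        public int ProductId { get; set; }

        [DataMember]
        public int Quantity { get; set; }
    }

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        OrderDetailDto QueryOrder(int orderId);
    }

    public class OrderService : IOrderService
    {
        public OrderDetailDto QueryOrder(int orderId)
        {
            // Call the client's untouched assembly, then map at the service boundary.
            OrderDetail detail = OrderDetail.GetOrderDetail(orderId);
            return new OrderDetailDto
            {
                ProductId = detail.ProductId,
                Quantity = detail.Quantity
                // Amount is deliberately left out
            };
        }
    }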
EDIT
Whilst there might be an option that allows what you have asked for (I am not aware of one, but hopefully someone else might be), consider that when you use your client's objects as DTOs you are using them for two purposes (the client object and the message contract), which is against the Single Responsibility Principle. Also, when you receive them on the consuming side they will not be the same client-side objects, just DTOs with the same properties, so you will not get the behaviour of the original objects (at least not without sharing libraries between the server side and the client side).
By binding the data contract to the objects you also end up having to manage changes to client objects and data contracts as one thing. When they are separate, you can manage changes to client-side objects without necessarily changing the DTOs; you can just populate them differently.
Whilst it seems like it is a lot of work to create the DTOs, in the end I think it will be worth it.
You will have to write a wrapper class that only exposes the desired properties and simply calls the class your client provided to get its values.
The only other option would be to emit a new dynamic class using reflection and serialize that (see http://msdn.microsoft.com/en-us/library/system.reflection.emit.typebuilder.aspx), but it's probably not worth the effort unless you need to build a lot of wrapper classes.
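A sketch of the wrapper idea, again assuming OrderDetail exposes ProductId and Quantity as ints; the empty setters are there only because the DataContractSerializer expects settable [DataMember] properties:

    using System.Runtime.Serialization;

    // Wraps the client's OrderDetail and exposes only the members that should be
    // serialized; Amount is never surfaced.
    [DataContract(Name = "OrderDetail")]
    public class OrderDetailWrapper
    {
        private readonly OrderDetail inner;

        public OrderDetailWrapper(OrderDetail inner)
        {
            this.inner = inner;
        }

        [DataMember]
        public int ProductId
        {
            get { return inner.ProductId; }
            set { } // serializer plumbing; incoming values are ignored here
        }

        [DataMember]
        public int Quantity
        {
            get { return inner.Quantity; }
            set { }
        }
    }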

Beans, methods, access and change? What is the recommended practice for handling them (i.e. in ColdFusion)?

I am new to programming (6 weeks now). I am reading a lot of books, sites and blogs right now, and I learn something new every day.
Right now I am using ColdFusion (at my job). I have read many of the OOP- and CF-related articles on the web, and I am planning to get into MXUnit next and after that to look at some frameworks.
One thing bothers me, and I am not able to find a satisfactory answer: beans are sometimes described as data transfer objects; they hold data from one or many sources.
What is the recommended practice to handle this data?
Should I use a separate object that reads the data, mutates it and then writes it back to the bean, so that the bean is just storage for data (accessible through getters), or should I implement the methods that manipulate the data in the bean itself?
I see two options.
1. The bean is only storage, other objects have to do something with its data.
2. The bean is storage and logic, other objects tell it to do something with its data.
The second option seems to me to adhere more to encapsulation while the first seems to be the way that beans are used.
I am sure both options fit someone's needs and are recommended in a specific context, but what is recommended in general, especially when someone is a beginner and does not know enough about the greater application picture?
Example:
I have created a bean that holds an Item from a database, with the item id, a name, and a 1D array. Every array element is a struct that holds a user with its id, its name and its amount of the item. Through a getter I output the data in a table in which I can also change the amount for each user or mark a user for deletion from this item.
Where do I put the logic to handle the application user's input?
Do I tell the bean to change its array according to the user input?
Or do I create an object that changes the array and writes that new array into the bean?
(All database access (create/read/update/delete) is handled through a DataAccessObject that gets the bean as an argument. The DAO also contains a gateway method to read more than one record from the database. I use this method to get a table of items, which I can click to create the bean and its data.)
You're observing something known as "anemic domain model". Yes, it's very common, and no, it's not good OO design. Generally, logic should be with the data it operates on.
However, there's also the matter of separation of concerns - you don't want to stuff everything into the domain model. For example, database access is often considered a technically separate layer and not something the domain models themselves should be doing - it seems you already have that separated. What exactly should and should not be part of the domain model depends on the concrete case - good design can't really be expressed in absolute rules.
Another concern is models that get transferred over the network, e.g. between an app server and a web frontend. You want these to contain only the data itself to reduce bandwidth usage and latency. But that doesn't mean they can't contain logic, since methods are not part of the serialized objects. Derived fields and caches are - but they can usually be marked as transient in some way so that they are not transferred.
Your bean should contain both your data and logic.
Data Transfer Objects are used to transfer objects over the network, such as from ColdFusion to a Flex application in the browser. DTOs only contain relevant fields of an object's data.
Where possible you should try to minimise exposing the internal implementation of your bean (such as the array of user structs) to other objects. To change the array you should just call mutator functions directly on your bean, such as yourBean.addUser(user), which appends the user struct to the internal array.
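Sketched in C# for brevity (the same shape works as a ColdFusion component): the bean owns its array of users and exposes behaviour instead of the raw array. The Item/User names and the AddUser/ChangeAmount/RemoveUser methods are illustrative, loosely matching the example in the question.

    using System.Collections.Generic;

    public class User
    {
        public int Id;
        public string Name;
        public int Amount;
    }

    public class Item
    {
        private readonly int id;
        private readonly string name;
        private readonly List<User> users = new List<User>(); // internal; never handed out for others to rewrite

        public Item(int id, string name)
        {
            this.id = id;
            this.name = name;
        }

        // Behaviour lives with the data: callers say what they want done,
        // they do not replace the array themselves.
        public void AddUser(User user) { users.Add(user); }

        public void ChangeAmount(int userId, int newAmount)
        {
            foreach (var user in users)
                if (user.Id == userId)
                    user.Amount = newAmount;
        }

        public void RemoveUser(int userId)
        {
            users.RemoveAll(u => u.Id == userId);
        }

        // Read-only view for rendering the table.
        public IReadOnlyList<User> Users { get { return users; } }
    }

Whatever handles the form post then calls item.ChangeAmount(...) or item.RemoveUser(...) and hands the bean to the DAO to persist, rather than a second object reaching in and swapping the array.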
No need to create a separate DAO with a composed Gateway object for your data access. Just put all of your database access methods (CRUD plus table queries) into a single Gateway object.

WCF and Object

I am trying to pass an object into a WCF web service. The object I am passing in is a Server object, and I then want to be able to call TestConnection().
The issue I am having is that Server is the base class and there are several derived classes of Server, i.e. SqlServer2005Server, OracleServer and ODBCServer, that I want to use.
I want to be able to pass in a Server object, determine its type, cast it and then use the method:
public string TestServerConnection(Server server)
{
    if (server.ConnectionType == "SqlServer")
    {
        SqlServer2005Server x = (SqlServer2005Server)server;
        // Tests connection to server and returns result
        return x.TestConnection();
    }
    return "";
}
'Server', the base class, implements IServer.
I am unable to cast it; can you advise?
Much Appreciated
Phill
As Daniel Pratt said, in the end, you are only shuttling XML (not always the case, but most of the time you are) across the wire.
If you used a proxy generator to generate the definition of the Server type, then you aren't going to be able to make calls on the methods of Server, because only properties (semantically at least) are used in the proxy definition. Also, you can't cast to the derived types because your instance is really a separate type definition, not the actual base.
If the Server type is indeed the same type (and by same, I mean a reference to the same assembly, not just the same name and schema), then you can do what Steve said and use the KnownType attribute on the Server definition, adding one attribute for each derived class.
However, like he said, that convolutes your code, so be careful when doing this.
I thought that using inversion of control would work here, but you run into the same situation with generic references to specific providers.
You need to add the KnownType declaration to your service contract for each derived class. There are ways to automate this (since it obviously convolutes code and breaks inheritance), but they require a lot of work.
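A rough sketch of both placements, using the type names from the question (the empty class bodies and the string ConnectionType member stand in for the real implementations):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Option 1: declare the derived types on the base data contract.
    [DataContract]
    [KnownType(typeof(SqlServer2005Server))]
    [KnownType(typeof(OracleServer))]
    [KnownType(typeof(ODBCServer))]
    public class Server
    {
        [DataMember]
        public string ConnectionType { get; set; }
    }

    [DataContract] public class SqlServer2005Server : Server { /* TestConnection() etc. */ }
    [DataContract] public class OracleServer : Server { }
    [DataContract] public class ODBCServer : Server { }

    // Option 2: declare the derived types on the service contract instead.
    [ServiceContract]
    [ServiceKnownType(typeof(SqlServer2005Server))]
    [ServiceKnownType(typeof(OracleServer))]
    [ServiceKnownType(typeof(ODBCServer))]
    public interface IServerService
    {
        [OperationContract]
        string TestServerConnection(Server server);
    }

Note that, as pointed out above, this only helps if both sides really share the same Server assembly; a proxy-generated Server is a different type and still cannot be cast to the server-side derived classes.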
Does the object you're passing represent a "live" connection to a DBMS? If the answer is yes, there is no hope of this ever working. Keep in mind that despite the pretty wrapper, the only thing your web service is getting from the caller is a chunk of XML.