I have a requirement to support multiple connections (multiple baseURLs), so some GET and POST requests should go to baseURL1, some to baseURL2, and so forth.
I think one way to do this is by constantly switching the baseURL to the needed one right before each GET or POST. I'm not sure how well RestKit 0.20 would handle this and whether there are side effects. Generally I think it would be a bad idea with a lot of overhead, but I'm not a RestKit expert when it comes to the internal workings.
The other idea I have is to use multiple RKObjectManagers, one per baseURL and then somehow use the right one for the calls. But I have no idea if RestKit is designed for that and can be used in this scenario. How would you manage multiple RKObjectManagers? Or is this a bad idea to solve my problem?
What's the way to go?
Use multiple RKObjectManagers. I would have a singleton data controller which abstracts the rest of the application away from the usage of RestKit and presents an interface in terms of your model objects. Internally it has a number of RKObjectManager instances and each method uses the appropriate one (they could be properties or you could have a dictionary of them depending on how you know when each should be used).
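For illustration, here is a minimal sketch of such a data controller, assuming RestKit 0.20. The class, property, method, path, and host names (DataController, itemsManager, accountsManager, loadItemsWithCompletion:, /items, the example.com URLs) are hypothetical, not anything RestKit prescribes.

#import <RestKit/RestKit.h>

// DataController.h -- hypothetical singleton that hides RestKit from the rest of the app
@interface DataController : NSObject
+ (instancetype)sharedController;
- (void)loadItemsWithCompletion:(void (^)(NSArray *items, NSError *error))completion;
@end

// DataController.m
@interface DataController ()
@property (nonatomic, strong) RKObjectManager *itemsManager;    // talks to baseURL1
@property (nonatomic, strong) RKObjectManager *accountsManager; // talks to baseURL2
@end

@implementation DataController

+ (instancetype)sharedController {
    static DataController *shared;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        shared = [[self alloc] init];
        shared.itemsManager    = [RKObjectManager managerWithBaseURL:[NSURL URLWithString:@"https://api1.example.com"]];
        shared.accountsManager = [RKObjectManager managerWithBaseURL:[NSURL URLWithString:@"https://api2.example.com"]];
        // Add the response descriptors each manager needs here.
    });
    return shared;
}

- (void)loadItemsWithCompletion:(void (^)(NSArray *items, NSError *error))completion {
    // Each method simply picks the manager that owns the endpoint it needs.
    [self.itemsManager getObjectsAtPath:@"/items"
                             parameters:nil
                                success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) {
                                    if (completion) completion([mappingResult array], nil);
                                }
                                failure:^(RKObjectRequestOperation *operation, NSError *error) {
                                    if (completion) completion(nil, error);
                                }];
}

@end

View controllers would only ever call [[DataController sharedController] loadItemsWithCompletion:...], so they never see which RKObjectManager (or which baseURL) served the request.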
I'm not so much seeking a specific implementation but trying to figure out the proper terms for what I'm trying to do so I can properly research the topic.
I have a bunch of interfaces, and those interfaces are implemented by controllers, repositories, services and whatnot. Somewhere in the startup process of the application we use the Castle.MicroKernel.Registration.Component class to register the classes to use for a particular interface. For instance:
Component.For<IPaginationService>().ImplementedBy<PaginationService>().LifeStyle.Transient
Recently I became interested in creating an audit trail of every class and method call. There are a few hundred of these classes, so writing a proxy class for each one by hand isn't very practical. I could use a template to generate the code, but I'd rather not blow up our code base with all that.
So I'm curious whether there's some kind of on-the-fly solution. I know NHibernate creates proxy classes at some point which overlay all the entity classes. Can someone give me some guidance on how I might be able to do something similar here?
Something like:
Component.For<IPaginationService>().ImplementedBy<ProxyFor<PaginationService>>().LifeStyle.Transient
Obviously that won't work, because generics only let me generalize the types a method works with, not the methods themselves. Is there some tricky reflection approach I can use to do this?
You are looking for what Castle Windsor calls interceptors. It's an aspect-oriented way to tackle cross-cutting concerns -- auditing is certainly one of them. See documentation, or an article about the approach:
Aspect-oriented programming is an approach that effectively “injects” pieces of code before or after an existing operation. This works by defining an Interceptor wrapping the logic being invoked, then registering it to run whenever a particular set/sub-set of methods is called.
If you want to apply it to many registered services, read more about interceptor selection mechanisms: IModelInterceptorsSelector helps there.
Using PostSharp, things like this can even be done at compile time. This can speed up the resulting application, but when used correctly, interceptors are not slow.
First of all, I'm using Objective-C, but this doesn't matter at all.
My situation is:
I have two different scenarios. I distinguish them by a preprocessor macro like:
#ifdef USER
    // do some stuff for scenario 1
#else
    // do some stuff for scenario 2
#endif
Both scenarios work with a list of items used all across the application; the difference is the way those items are obtained.
In the first one I get the items by sending a request to a server.
In the second one, I get them from the local device storage.
What I have now is the second scenario implemented: a singleton class that returns the list of items by reading them from local storage (like a traditional database singleton).
I want to add the other scenario. Since the items can be fetched from any point across the app, I want this to be a singleton too.
Does it make sense to have a singleton superclass, and then two subclasses that implement the different ways of getting the items? Singleton hierarchies sound quite strange to me.
That's not exactly a hierarchy. The superclass you're mentioning is really an interface for your two concrete classes, which can be singletons if you want. The interface is an abstract entity, so any instance-related concern is irrelevant to it.
You're statically defining your program's behavior by using the preprocessor to make the scenario choice. If you stick to this approach and it fits your requirements, you don't need any design patterns: in your code, just use the interface I mentioned above as the gateway to your statically chosen data source. If you want more flexibility (which sounds likely), you can make the scenario choice at runtime. In that case you may find the Strategy pattern useful for applying scenarios and the Factory pattern for instantiation.
Factory combined with Strategy.
Factory is the pattern of using another class to create your instance rather than just calling a constructor. You are most likely already doing that with your Singleton.
Strategy gives you the ability to configure which kind of object the factory actually creates at runtime.
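A minimal sketch of that combination, assuming a hypothetical ItemSource protocol; all class, protocol, and method names here are illustrative, not from any SDK:

#import <Foundation/Foundation.h>

// Strategy: both concrete sources implement the same protocol,
// so callers never care where the items come from.
@protocol ItemSource <NSObject>
- (NSArray *)fetchItems;
@end

// Scenario 1: items come from a server request.
@interface RemoteItemSource : NSObject <ItemSource>
+ (instancetype)sharedSource;
@end

@implementation RemoteItemSource
+ (instancetype)sharedSource {
    static RemoteItemSource *shared;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ shared = [[self alloc] init]; });
    return shared;
}
- (NSArray *)fetchItems {
    // Issue the network request here (synchronously for brevity).
    return @[];
}
@end

// Scenario 2: items come from local device storage.
@interface LocalItemSource : NSObject <ItemSource>
+ (instancetype)sharedSource;
@end

@implementation LocalItemSource
+ (instancetype)sharedSource {
    static LocalItemSource *shared;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ shared = [[self alloc] init]; });
    return shared;
}
- (NSArray *)fetchItems {
    // Query the local database here.
    return @[];
}
@end

// Factory: one place decides which strategy the rest of the app gets.
@interface ItemSourceFactory : NSObject
+ (id<ItemSource>)itemSource;
@end

@implementation ItemSourceFactory
+ (id<ItemSource>)itemSource {
    // The choice could equally be made at runtime (e.g. from a settings flag)
    // instead of with the preprocessor.
#ifdef USER
    return [RemoteItemSource sharedSource];
#else
    return [LocalItemSource sharedSource];
#endif
}
@end

Callers would only write [[ItemSourceFactory itemSource] fetchItems] and never know which scenario is active.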
I'm currently developing an app using Parse, and I'd like to start abstracting their SDK, as I don't know if and when I'm going to replace their backend with one from another provider or with our own.
Another motivation is separation of concerns: all my apps' code will use the same framework, and I only have to update the framework for any backend specifics.
I've started by creating some generic classes to replace their main classes. These generic classes define a protocol that each adapter must implement. Then I'd have a Parse adapter that would forward the calls to the Parse SDK.
One problem I can foresee is that this will require a lot of different classes. In some cases, e.g. Parse, they also have classes for dealing with Facebook. Or the architecture in some parts may be so different that there'll be no common ground to allow something like this.
I've actually never gotten as far with StackMob as I have with Parse, so I guess the first versions will share Parse's own architecture.
What are the best practices for something like this?
Is there something like this out there? I've already searched without success, but maybe I'm looking in the wrong direction.
Should I stick with the Parse SDK, just making sure that the code using it is well identified and contained?
I'm the Developer Evangelist at Applicasa.
We've built a cool set of tools for mobile app developers, part of which includes a BaaS service that takes a somewhat different approach compared to Parse, StackMob, and others. I think it provides a helpful perspective for tackling the problem of abstracting away third-party SDK APIs in a way that would allow you to replace backends with other providers or your own.
/disclaimer
Is there something like this out there? I've already searched without success but maybe I'm looking in the wrong direction
While there are other BaaS providers out there that provide similar and differentiating features, I'm not aware of a product out there that completely abstracts away third-party providers in an agnostic manner.
What are the best practices for something like this?
I think you already appear to be on solid footing for getting started in the right direction.
First, you're correct in predicting that you'll end up with a number of different classes that encapsulate objects and required functionality in a backend-agnostic way. The number, of course, will depend on what kind of abstraction and encapsulation you're going after. The approach you outline also sounds like the way I'd begin such a project, as well—creating classes for all the objects my application would need to interact with, and implementing custom methods on those classes (or a base class they all extend) that would do the actual work of interacting with a backend provider.
So, if I was building an app that, for example, had a Foo, Bar, and Baz object, I'd create those classes as part of my internal API, with all necessary functionality required by my app. All app logic and functional operations would only interact with those classes, and all app logic and functionality would be data backend-agnostic (meaning no internal functionality could depend on a data backend, but the object classes would provide a consistent interface that allowed operations to be performed, while keeping data handling methods private).
Then, I'd likely make each class inherit from a BaseObject class, which would include the methods that actually talked to a data backend (provider-based or my own custom remote backend). The BaseObject class might have methods like saveObject, getById:, getObjects (with some appropriate parameters for performing object filtering/searching). Then, when I want to replace my backend data service in the future, I'd only have to focus on updating the BaseObject class methods that handle data interaction, while all my app logic & functionality is tied to the Foo, Bar, and Baz classes, and doesn't actually care how get/save/update/delete operations work behind the scenes.
Now, to keep things as easy on myself as possible, I'd build out my BaaS schema to match internal object class names (where, depending on the BaaS requirements, I could use either an isKindOfClass: or NSStringFromClass: call). This means that if I was using Parse, I'd want to make my save method get the NSStringFromClass: of the class name to perform data actions. If I was using a service like Applicasa, which generates a custom SDK of native objects for data interactions, I'd want to base custom data actions on isKindOfClass: results. If I wanted even more flexibility than that (perhaps to allow multiple backend providers to be used, or some other complex requirement), I'd make all the child classes tell BaseObject exactly what schema name to use for data operations through some kind of custom method, like getSchemaName. I'd probably define it as a BaseObject method that would return the class name as a string by default, but then implement on child classes to customize further. So, the inside of a BaseObject save method might look something like this:
- (BOOL)save {
    // call backend-specific method for saving an object
    BaasProviderObject *objectToSave =
        [BaasProviderObject objectWithClassName:[self getSchemaName]];

    // Transfer all object properties to BaasProviderObject properties
    // Implement however it makes the most sense for BaasProvider
    // After you've set all calling object properties to BaasProviderObject
    // key-value pairs or object properties, you call the BaasProvider's save
    [objectToSave save];

    // Return a BOOL value to indicate actual success/failure
    return YES; // you'll want this to come from BaaS
}
Then in, say, the Foo class, I might implement getSchemaName like so:
- (NSString *)getSchemaName {
    // Return a custom NSString for BaasProvider schema
    return @"dbFoo";
}
I hope that makes sense.
Should I stick with the Parse SDK just making sure that the code using it is well identified and contained?
Making an internal abstraction like this will be a fair amount of work up front, but it will inevitably offer a lot of flexibility to implement as you wish. You can implement CoreData, reject CoreData, and do whatever you'd like really. There are definite advantages to building internal app logic/functionality in a data-agnostic way, even if it's to allow yourself the ease of trying out another BaaS in, say, a custom branch of your app code to see how you like another provider (or to give you an easy route to working with developing your own data solution).
I hope that helps.
I'm the Platform Evangelist at StackMob and thought I'd chime in on this question. We built our iOS SDK with a Core Data interface: you use regular Core Data, and we've overridden NSIncrementalStore to persist to StackMob instead of SQLite.
You can check out an example of the Core Data code:
http://developer.stackmob.com/tutorials/ios/Create-an-Object
If you want to see which methods Core Data leverages to communicate with StackMob:
http://developer.stackmob.com/tutorials/ios/Lower-Level-CRUD-API
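To illustrate the "regular Core Data" part, a fetch stays completely standard once the incremental store is configured; the "Todo" entity name and the managedObjectContext variable below are assumptions for the sketch, not part of the StackMob SDK:

// Plain Core Data fetch; the incremental store decides where the data
// actually comes from (StackMob here, rather than a local SQLite file).
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Todo"];
request.predicate = [NSPredicate predicateWithFormat:@"done == NO"];

NSError *error = nil;
// managedObjectContext is assumed to be backed by the StackMob-provided persistent store.
NSArray *results = [managedObjectContext executeFetchRequest:request error:&error];
if (results == nil) {
    NSLog(@"Fetch failed: %@", error);
}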
Say I have a User object, which is generated by a Usermapper. The User object does not know anything about the database/repository in use (which I believe to be good design).
When creating a User, I only want the mapper to fill it with the most trivial things, e.g. name, address, etc. However, after object instantiation I might call a method userX.getTotalDebt(); getTotalDebt() would need to reconnect to the database, because I don't want this relatively expensive operation (multiple tables are needed) to be done for every User instantiation. If I simply inserted some SQL into getTotalDebt(), or added a dependency back to the mapper, the coupling would grow very tight very fast.
There must be an obvious good/best practice for this, because it's a situation that arises often; however, I can't find it, or I'm looking at this problem from entirely the wrong angle.
Say I have a User object, which is generated by a Usermapper. The User object does not know anything about the database/repository in use (which I believe to be good design).
They are often referred to as POCOs (Plain Old CLR Objects).
When creating a User I only want the mapper to fill it with the most trivial things, e.g. name, address, etc.
There are several OR/M layers which can achieve this. Use either NHibernate or Entity Framework 4.1 Code First.
I might have a method userX.getTotalDebt(), getTotalDebt() would need to reconnect to the database
Then it's not a POCO anymore, although it is possible using a transparent proxy. Both EF and NHibernate support this; it's called lazy loading.
There must be an obvious good/best practice for this, because it's a situation that arises often; however, I can't find it, or I'm looking at this problem from entirely the wrong angle
I usually keep my objects dumb and disconnected. I use the Repository pattern (even if I use NHibernate or another ORM) since it makes my classes testable.
I either use the repository classes directly or create a service class which contains all logic. It depends on how complex my application is.
This blog post by Joubert just opened my eyes. I have dealt with a lot of design patterns in Java and other languages. But Objective-C is a rather unique language.
Let's say that in a project we talk with a third-party API, like Dropbox or Facebook. What I've been doing so far is to combine everything that has to do with the third-party API into a singleton class, so I can access the class from anywhere in my view controllers. For example, I can just go: [[DropboxModel sharedInstance] uploadFile:aFile]
However as the blog post noted, this isn't efficient and leads to spaghetti code and bad unit testing. So what is the best way to design the system so that it's modular and easy to use?
I would dispute the idea that singletons lead to spaghetti code and are inefficient. However, the unit testing problem is legitimate and singletons do reduce modularity since they are really just fancy global variables.
I like Joubert's idea of injecting the singleton instance into the controller(s) from the app delegate (which is itself a singleton, ahem). I think the same approach would work for you.
What I normally do in these situations where I might want to use a different stub object in unit tests is define a protocol to represent the API and make my "real" API object conform to it and also my stub API object. I use the stub in the unit tests and the real object in the app.
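A small sketch of that protocol approach, assuming hypothetical names (FileStore, DropboxFileStore, StubFileStore, uploadFile:completion:) that stand in for whatever your real API surface is:

#import <UIKit/UIKit.h>

// The protocol is the only thing the view controllers (and tests) know about.
@protocol FileStore <NSObject>
- (void)uploadFile:(NSURL *)fileURL completion:(void (^)(BOOL success, NSError *error))completion;
@end

// "Real" implementation that wraps the Dropbox SDK; used in the app.
@interface DropboxFileStore : NSObject <FileStore>
@end

// Stub used in unit tests; it records calls instead of hitting the network.
@interface StubFileStore : NSObject <FileStore>
@property (nonatomic, readonly) NSArray *uploadedFiles;
@end

// The app delegate injects the real object; a test injects the stub.
@interface MyViewController : UIViewController
@property (nonatomic, strong) id<FileStore> fileStore;
@end

In -application:didFinishLaunchingWithOptions: you'd hand the controller a DropboxFileStore; in a test you'd hand it a StubFileStore and assert on uploadedFiles.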
Not that this really solves any architectural problems associated with singletons, but for the sake of readability and typability you can always define a macro in your DropboxModel header file, e.g.:
#define DBM [DropboxModel sharedInstance]
<...>
[DBM uploadFile:aFile];
I'll typically create an abstraction layer. This wraps a simple interface onto the library calls you use, while giving you a chance to introduce whatever state (e.g. variables) you'll need.
You can then expose only what you need and use, add your own state and checks, and conveniently deal with all issues of the library from one place. 'Issues' may arise for several reasons: threading, resources, state, or undesired behavioral changes across versions.
Most libraries are not meant to be used solely via a singleton. In such cases, it's best (subjectively) to create interfaces as you would normally, being mindful of the constraints behind the abstraction layer. In that sense, you simply create object-based interfaces divided by size/task/purpose/functionality, just as you'd usually do when writing your own classes.
If you don't need the library all over the place, then I think it's also good to wrap what you need, to minimize dependencies (increasingly important in large projects).
If you use the library all over the place, then you may also prefer to use the calls without the abstraction layer.
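As a rough sketch of such a layer (all names here, like FileSyncService and its methods, are made up for illustration and would wrap whichever third-party SDK you actually use):

#import <Foundation/Foundation.h>

// Thin wrapper exposing only the calls the app actually uses.
@interface FileSyncService : NSObject

- (instancetype)initWithAccessToken:(NSString *)token;

// Narrow surface area: one method per thing the app really needs.
- (void)uploadFileAtURL:(NSURL *)fileURL completion:(void (^)(NSError *error))completion;
- (void)listFolder:(NSString *)path completion:(void (^)(NSArray *entries, NSError *error))completion;

@end

@implementation FileSyncService {
    NSString *_token; // library-specific state stays hidden in here
}

- (instancetype)initWithAccessToken:(NSString *)token {
    if ((self = [super init])) {
        _token = [token copy];
        // Configure the third-party SDK client here.
    }
    return self;
}

- (void)uploadFileAtURL:(NSURL *)fileURL completion:(void (^)(NSError *error))completion {
    // Forward to the third-party SDK, adding any checks (reachability,
    // auth state, threading) in this one place.
}

- (void)listFolder:(NSString *)path completion:(void (^)(NSArray *entries, NSError *error))completion {
    // Same idea: translate the SDK's types into plain Foundation types
    // before handing them back to the caller.
}

@end

A library upgrade (or replacement) then only touches this one class.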