I implemented a typical REST API client library in Go, but because of the number of endpoints, and because almost none of the data structures are shared between them, the GoDoc for the project is very confusing:
The way it is structured right now makes it hard to see what is actually returned by each function, and it takes a lot of scrolling through the document to find the structures associated with the data.
The endpoints are all methods of the API struct so that they can share authentication state between calls, which causes them all to be listed under the GW2Api struct instead of next to their associated data structures.
Is there a good way to make the library API clearer with GoDoc, aside from
adding comments to function calls?
One example of an API package that I think does this pretty well is the GitHub wrapper: https://godoc.org/github.com/google/go-github/github.
If you have a large API, a somewhat large GoDoc is unavoidable. Note that rather than having a million methods defined directly off the client, the core object has multiple "service" objects defined, which partition the methods into logical groups. I could see several possible groups among the methods in your API.
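To make that concrete, here is a minimal sketch of the layout; GW2Api is your struct name, while ItemService, CharacterService, and Item are invented purely for illustration:

package gw2api

type Item struct {
    ID   int
    Name string
}

// GW2Api keeps the shared state (auth token, HTTP client) and exposes one
// field per logical group of endpoints.
type GW2Api struct {
    authToken string

    Items      *ItemService
    Characters *CharacterService
}

func New(authToken string) *GW2Api {
    api := &GW2Api{authToken: authToken}
    api.Items = &ItemService{api: api}
    api.Characters = &CharacterService{api: api}
    return api
}

// ItemService groups the item endpoints, so in GoDoc its methods are listed
// under ItemService next to the types they return, not in one long list
// under GW2Api.
type ItemService struct {
    api *GW2Api
}

func (s *ItemService) Get(id int) (*Item, error) {
    // ... perform the authenticated request via s.api ...
    return &Item{ID: id}, nil
}

// CharacterService would group the character endpoints the same way.
type CharacterService struct {
    api *GW2Api
}

Each service only holds a reference back to the client, so authentication state stays shared while the GoDoc index groups the methods by service.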
I don't think there is a particularly good way to group methods with the struct types they act on or return without significant changes to your API. Rather, expect people to look for the operations they want to perform, and from there link to the specific struct types for reference.
Related
I have the following case: a user can export several object types (transactions, invoices, etc.) to an external accounting system.
The export algorithm has these steps:
fetch objects by some filter
export the objects one by one to the accounting system (one web service method per object type)
register the fact that a given document was exported, so it won't be exported again
prepare a summary for the user (number of exported documents, error messages, etc.)
The algorithm is the same for all object types but there are some important differences which must be handled:
different types
different target web service methods, different object-to-DTO mappings
different filters per object type
I've considered a few solutions:
don't treat the export algorithm as code duplication and implement a separate algorithm per object type. Export of any data to any external system can be described by such an algorithm - does that mean we should always have one general class to export anything to anywhere? :)
move the differences into strategies (one strategy interface to create an abstraction for all the differences) - I even implemented it.
use generics - unfortunately I'm coding in PHP, so that's not possible
The question:
Is creating a separate export algorithm per object type code duplication?
Or should each of them be treated as a separate use case?
If it is duplication, what techniques should I consider to avoid it?
Description of my first implementation:
In my first approach I defined an Exportable abstraction, but I was not happy with it, because each object has a completely different payload.
The Exportable interface defined only one method, getId, which was used to register that an object had been exported (and thanks to that it won't be exported again).
For that purpose the abstraction was fine, but the problem just moved into the exportService, which had to inspect the concrete instance to choose the DTO mapper and endpoint. So the exportService broke SOLID.
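For what it's worth, here is a rough sketch of the strategy variant I mean. It is written in Go only to keep the example compact (the shape is the same in PHP with interfaces), and all names are invented for illustration:

package export

// One generic algorithm, with all per-type differences behind a single
// strategy interface, so the service never inspects concrete instances.

type Summary struct {
    Exported int
    Errors   []string
}

// ExportStrategy collects what differs per object type: the filter/fetch,
// the DTO mapping plus the target web service method, and the id used to
// register the object as already exported.
type ExportStrategy interface {
    Fetch() ([]interface{}, error) // different filter per object type
    Send(object interface{}) error // different DTO mapping and endpoint
    ID(object interface{}) string  // used to mark the object as exported
}

// Export is the shared algorithm: fetch, export one by one, register,
// and prepare a summary for the user.
func Export(s ExportStrategy, alreadyExported map[string]bool) Summary {
    var sum Summary
    objects, err := s.Fetch()
    if err != nil {
        sum.Errors = append(sum.Errors, err.Error())
        return sum
    }
    for _, obj := range objects {
        if alreadyExported[s.ID(obj)] {
            continue // registered earlier, skip
        }
        if err := s.Send(obj); err != nil {
            sum.Errors = append(sum.Errors, err.Error())
            continue
        }
        alreadyExported[s.ID(obj)] = true // won't be exported again
        sum.Exported++
    }
    return sum
}

Here the shared Export function never checks concrete types; each strategy (an invoice exporter, a transaction exporter, and so on) carries its own filter, DTO mapping, and endpoint.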
None of the things you have described above are domain-specific logic (and in fact you don't even mention the problem domain in your question), so I don't think it really falls under domain-driven design. Because it's not domain-specific logic I wouldn't worry too much about code duplication, especially considering that the solution doesn't seem obvious.
Keep it simple and just write out each use case separately. If you find there's common code that's easily refactored, do so after you get everything working smoothly. Don't overthink it, and don't add patterns before they are obviously necessary.
When using an external library or API, I have noticed that each function or data structure belonging to that library or API has something in its name that discloses which API or library is being used. For example, D3DXVECTOR3 and SDL_Surface, from Direct3D and SDL respectively, are named so that they disclose which API they belong to.
When building my own applications, I would rather not disclose which APIs I have used, so is it good practice to rename these API structures with #define directives to more general names? Is this an accepted and commonly used form of abstraction? Are there better ways to do such abstractions?
In the OO world, the best way to do such an abstraction is through the adapter pattern.
Since you mentioned #define, I assume you are using C or C++. In both cases, you can still use this pattern if you want. Simply use a class with an abstract base class as the interface. This will add a small overhead because of the virtual function calls though. You could also consider template inheritance to bypass this issue.
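As a minimal illustration of the adapter idea (sketched in Go purely for brevity; in C++ the adapter would implement an abstract base class the same way, and every name here is invented):

package gfx

import "math"

// vendorVector stands in for the third-party type you don't want to leak
// into the rest of the code base (think D3DXVECTOR3 or similar).
type vendorVector struct{ X, Y, Z float64 }

// Vector3 is the neutral interface the application programs against.
type Vector3 interface {
    Length() float64
}

// vendorVectorAdapter wraps the library type and exposes only the
// neutral interface.
type vendorVectorAdapter struct {
    v vendorVector
}

func (a vendorVectorAdapter) Length() float64 {
    return math.Sqrt(a.v.X*a.v.X + a.v.Y*a.v.Y + a.v.Z*a.v.Z)
}

Application code only ever mentions Vector3, so the third-party name never appears outside the adapter; swapping libraries later means writing a new adapter, not renaming types with the preprocessor.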
Either way, I would avoid using the preprocessor as much as possible, since it can quickly turn your code into a nice Italian dish.
I have a component that exposes an API with some 10 functions in all. I can think of two ways to achieve this:
Expose each piece of functionality as a separate function.
Expose only one function that takes an XML document as input. Based on the request_Type specified and the parameters passed in the XML, I internally call the respective function.
Q1. Will the second design be more loosely coupled than the first?
I always read that I should try to make my components loosely coupled; should I really go to this extent to achieve loose coupling?
Q2. Which one of these would be a better design in terms of OOP and why?
Edit:
If I am exposing this API over D-Bus for others to use, will type checking still be a consideration when comparing the two approaches? From what I understand, type checking is done at compile time, but when the function is exposed over some IPC mechanism, does the issue of type checking still come into the picture?
The two alternatives you propose do not differ in the (obviously quite large) number of "functions" you want to offer from your API. However, the second seems to have many disadvantages: you lose any strong type checking, and it becomes much harder to document the functionality, etc. (The only advantage I see is that you don't need to change your API when you add functionality - but with the disadvantage that users will not be able to notice API changes, such as deleted functions, until run time.)
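To make the type-checking point concrete, here is a deliberately tiny sketch (in Go, with invented names; your component's language will differ, but the contrast is the same):

package api

import "encoding/xml"

// Option 1: a dedicated function per operation. Passing the wrong request
// type fails at compile time, and the signature documents what comes back.
type OrderStatusRequest struct{ OrderID int }
type OrderStatusResponse struct{ Status string }

func GetOrderStatus(req OrderStatusRequest) (OrderStatusResponse, error) {
    return OrderStatusResponse{Status: "shipped"}, nil
}

// Option 2: a single entry point taking XML. Whether request_Type is valid
// and whether the right parameters are present is only found out at run time.
type genericRequest struct {
    RequestType string `xml:"request_Type"`
    Body        string `xml:",innerxml"`
}

func Call(rawXML []byte) ([]byte, error) {
    var req genericRequest
    if err := xml.Unmarshal(rawXML, &req); err != nil {
        return nil, err // malformed request: a run-time error, not a compile error
    }
    switch req.RequestType {
    case "GetOrderStatus":
        // decode req.Body, call GetOrderStatus, encode the response
    }
    return nil, nil // unknown request_Type silently yields nothing
}

With option 1 the compiler and the generated documentation carry the contract; with option 2 both sides have to agree on an XML convention that nothing enforces.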
What is more relevant to this question is the Single Responsibility Principle (http://en.wikipedia.org/wiki/Single_responsibility_principle). As you are talking about OOP, you should not expose your tens of functions within one class, but split them among different classes, each with a single responsibility. Defining good "responsibilities" and roles requires some practice, but following some basic guidelines will help you get started quickly. See Are there any rules for OOP? for a good starting point.
Reply to the question edit
I haven't used D-Bus, so this might be totally wrong. But from a quick look at the tutorial, I read:
Each object supports one or more interfaces. Think of an interface as a named group of methods and signals, just as it is in GLib or Qt or Java. Interfaces define the type of an object instance.
DBus identifies interfaces with a simple namespaced string, something like org.freedesktop.Introspectable. Most bindings will map these interface names directly to the appropriate programming language construct, for example to Java interfaces or C++ pure virtual classes.
As far as I understand, D-Bus has the concept of different objects which provide interfaces consisting of several methods. This means (to me) that my answer above still applies. The "D-Bus native" way of specifying your API would be to expose interfaces, and I don't see any reason why good OOP design guidelines shouldn't be valid here. As D-Bus even seems to map these to native language constructs, this is all the more likely.
Of course, nobody keeps you from just building your own API description language in XML. However, things like that are something of an abuse of the underlying technique, and you should have good reasons for doing so.
I'm currently developing an app using Parse, and I'd like to start abstracting their SDK, as I don't know if or when I'm going to replace their backend with one from another provider or with our own.
Another motivation is separation of concerns: all my apps' code will use the same framework, while I can just update the framework for any backend specifics.
I've started by creating some generic classes to replace their main classes. These generic classes define a protocol that each adapter must implement. Then I'd have a Parse adapter that forwards the calls to the Parse SDK.
One problem I can predict is that this will require a lot of different classes. In some cases, e.g. Parse, they also have classes for dealing with Facebook. It may also turn out that the architecture differs so much in some places that there is no common ground to allow something like this.
I've actually never gone as far with StackMob as I have with Parse, so I guess the first versions will share Parse's own architecture.
What are the best practices for something like this?
Is there something like this out there? I've already searched without success, but maybe I'm looking in the wrong direction;
Should I stick with the Parse SDK, just making sure that the code using it is well identified and contained?
I'm the Developer Evangelist at Applicasa.
We've built a cool set of tools for mobile app developers, part of which includes offering a BaaS service that takes a somewhat different approach compared to Parse, StackMob, and others. I think it provides a helpful perspective for tackling the problem of abstracting away third-party SDK APIs in a way that would allow you to replace the backend with another provider's or your own.
/disclaimer
Is there something like this out there? I've already searched without success but maybe I'm looking in the wrong direction
While there are other BaaS providers out there that provide similar and differentiating features, I'm not aware of a product out there that completely abstracts away third-party providers in an agnostic manner.
What are the best practices for something like this?
I think you're already on solid footing and headed in the right direction.
First, you're correct in predicting that you'll end up with a number of different classes that encapsulate objects and required functionality in a backend-agnostic way. The number, of course, will depend on the kind of abstraction and encapsulation you're going after. The approach you outline also sounds like the way I'd begin such a project: creating classes for all the objects my application would need to interact with, and implementing custom methods on those classes (or on a base class they all extend) that do the actual work of interacting with a backend provider.
So, if I was building an app that, for example, had a Foo, Bar, and Baz object, I'd create those classes as part of my internal API, with all necessary functionality required by my app. All app logic and functional operations would only interact with those classes, and all app logic and functionality would be data backend-agnostic (meaning no internal functionality could depend on a data backend, but the object classes would provide a consistent interface that allowed operations to be performed, while keeping data handling methods private).
Then, I'd likely make each class inherit from a BaseObject class, which would include the methods that actually talked to a data backend (provider-based or my own custom remote backend). The BaseObject class might have methods like saveObject, getById:, getObjects (with some appropriate parameters for performing object filtering/searching). Then, when I want to replace my backend data service in the future, I'd only have to focus on updating the BaseObject class methods that handle data interaction, while all my app logic & functionality is tied to the Foo, Bar, and Baz classes, and doesn't actually care how get/save/update/delete operations work behind the scenes.
Now, to keep things as easy on myself as possible, I'd build out my BaaS schema to match internal object class names (where, depending on the BaaS requirements, I could use either an isKindOfClass: or NSStringFromClass: call). This means that if I was using Parse, I'd want to make my save method get the NSStringFromClass: of the class name to perform data actions. If I was using a service like Applicasa, which generates a custom SDK of native objects for data interactions, I'd want to base custom data actions on isKindOfClass: results. If I wanted even more flexibility than that (perhaps to allow multiple backend providers to be used, or some other complex requirement), I'd make all the child classes tell BaseObject exactly what schema name to use for data operations through some kind of custom method, like getSchemaName. I'd probably define it as a BaseObject method that would return the class name as a string by default, but then implement on child classes to customize further. So, the inside of a BaseObject save method might look something like this:
- (BOOL)save {
    // Call the backend-specific method for saving an object
    BaasProviderObject *objectToSave =
        [BaasProviderObject objectWithClassName:[self getSchemaName]];

    // Transfer all object properties to BaasProviderObject properties.
    // Implement however it makes the most sense for the BaasProvider.
    // After you've set all calling object properties to BaasProviderObject
    // key-value pairs or object properties, call the BaasProvider's save.
    [objectToSave save];

    // Return a BOOL value to indicate actual success/failure
    return YES; // you'll want this to come from the BaaS
}
Then in, say, the Foo class, I might implement getSchemaName like so:
- (NSString *)getSchemaName {
    // Return a custom NSString for the BaasProvider schema
    return @"dbFoo";
}
I hope that makes sense.
Should I stick with the Parse SDK just making sure that the code using it is well identified and contained?
Making an internal abstraction like this will be a fair amount of work up front, but it will inevitably offer a lot of flexibility to implement as you wish. You can implement CoreData, reject CoreData, and do whatever you'd like really. There are definite advantages to building internal app logic/functionality in a data-agnostic way, even if it's to allow yourself the ease of trying out another BaaS in, say, a custom branch of your app code to see how you like another provider (or to give you an easy route to working with developing your own data solution).
I hope that helps.
I'm the Platform Evangelist at StackMob and thought I'd chime in on this question. We built our iOS SDK with a Core Data interface: you'll use regular Core Data, and we've overridden NSIncrementalStore to persist to StackMob instead of SQLite.
You can check out an example of the Core Data code:
http://developer.stackmob.com/tutorials/ios/Create-an-Object
If you want to see what methods are being leveraged by Core Data to communicate with StackMob:
http://developer.stackmob.com/tutorials/ios/Lower-Level-CRUD-API
I am in the process of migrating from web services to WCF, and rather than trying to make the old code work in WCF, I am just going to rebuild the services. As part of this process, I have not yet settled on the best design to provide easy-to-consume services that also support future changes.
My service follows the pattern below; I actually have many more methods than this, so duplication of code is an issue.
<ServiceContract()>
Public Interface IPublicApis

    <OperationContract(AsyncPattern:=False)>
    Function RetrieveQueryDataA(ByVal req As RequestA) As ResponseA

    <OperationContract(AsyncPattern:=False)>
    Function RetrieveQueryDataB(ByVal req As RequestB) As ResponseB

    <OperationContract(AsyncPattern:=False)>
    Function RetrieveQueryDataC(ByVal req As RequestC) As ResponseC

End Interface
Following this advice, I first created the schemas for the Request and Response objects. I then used SvcUtil to create the resulting classes so that I am assured the objects are consumable by other languages, and the clients will find the schemas easy to work with (no references to other schemas). However, because the Requests and Responses have similar data, I would like to use interfaces and inheritance so that I am not implementing multiple versions of the same code.
I have thought about writing my own version of the classes using interfaces and inheritance in a separate class library, and implementing all of the logging, security, and data retrieval logic there. Inside each operation I would just convert the RequestA to my InternalRequestA and call InternalRequestA's process function, which returns an InternalResponseA. I would then convert that back to a ResponseA and send it to the client.
Is this idea crazy?!? I am having problems finding another solution that takes advantage of inheritance internally, but still gives clean schemas to the client that support future updates.
The contracts created by using WCF data contracts generally produce relatively straightforward schemas that are highly interoperable. I believe this was one of the guiding principles for the design of WCF. However, this interoperability relates to the messages themselves, not to the objects that some other system might produce from them; how the messages are converted to/from objects at the other end depends entirely on the other system.
We have had no real issues using inheritance with data contract objects.
So, given that you clearly have control over the schemas (i.e. they are not being specified externally) and can make good use of WCF's built-in data contract capabilities, I struggle to see the benefit you would get from the additional complexity and effort implied by your proposed approach.
In my view the logic associated with processing the messages should be kept entirely separate from the messages themselves.