I'm in the middle of writing my first web app. Just wondering what the conventions are when it comes to REST API design. Is it better to have it reflect my server-side architecture, or whatever seems to be easier to reason about?
I'm thinking of either doing:
/serviceProvider/product
or
/product/serviceProvider
My server-side architecture is separated into modules organized by service provider; however, they all expose a product query API.
An API should ideally be designed to make the most sense for its consumers. There isn't really a good reason to reflect your "server architecture" at all. In fact, doing so is what's usually called a leaky abstraction or a leaky API and is considered bad practice, mainly because your application structure may change, and then you have these possible scenarios:
you need to change your API, which is a non-trivial task when it's already being used by someone;
your API stops being reflective of your application structure which leads to inconsistencies;
exposing your application structure or database schema to the world may have security implications.
With these things in mind, you might as well design the API with focus on ease of use in the first place. The consumer of your API doesn't need to know or care about your application architecture.
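To make that concrete, here is a minimal sketch of consumer-focused routes, assuming a Spring-style Java controller (the class, DTO and path names are made up for illustration; the provider becomes a filter rather than a path segment):

```java
// Hypothetical sketch: the resource is named for the consumer ("products");
// the internal service-provider modules stay hidden behind a query service.
import org.springframework.web.bind.annotation.*;
import java.util.List;

record ProductDto(long id, String name, String provider) {}   // illustrative DTO

interface ProductQueryService {                                // assumed internal port
    List<ProductDto> findProducts(String provider);
    ProductDto findProduct(long id);
}

@RestController
@RequestMapping("/products")
class ProductController {
    private final ProductQueryService queryService;

    ProductController(ProductQueryService queryService) {
        this.queryService = queryService;
    }

    // GET /products?provider=acme  -- the provider is an optional filter
    @GetMapping
    List<ProductDto> list(@RequestParam(required = false) String provider) {
        return queryService.findProducts(provider);
    }

    // GET /products/42
    @GetMapping("/{id}")
    ProductDto get(@PathVariable long id) {
        return queryService.findProduct(id);
    }
}
```

If you later reorganize the server-side modules, only the implementation behind ProductQueryService changes; the URLs your consumers depend on stay the same.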
I believe that keeping the API aligned with the architecture is important, because it forces you to offer a simple API and pushes you toward a simplified architecture on the server side.
That said, you obviously don't want to expose every server-side method, or even every server-side property of the returned objects.
At Kaltura we also believe in flat (not nested) paths to simplify the API.
For more guidelines, see my blog: http://restafar.com/create-new-rest-server/
Related
PROBLEM: The application uses Axon Framework and org.axonframework.eventsourcing.EventSourcingRepository, and responses need to include _links in HAL format.
RESEARCH: This can be done with Spring HATEOAS, but a lot has to be hand-coded in the REST controller. Spring Data REST offers auto-generation of links with a single annotation on a CRUD repository, but the project is not RDBMS- and JPA-based, so Spring Data REST is not an option.
QUESTION: Does Axon offer any RESTful solutions out of the box, or is there a better auto-configured alternative to Spring HATEOAS?
Gotcha, so you are essentially looking to expose a service's capabilities when it comes to which commands can be handled by a given Command Handling Component, disregarding whether that component is an Aggregate or an External Command Handler.
Note that the interaction between a component which dispatches commands and one which handles them goes through the CommandBus. When an Axon application starts up, it is the CommandBus which receives all the registrations for the known command handlers.
That way, the CommandBus provides location transparency for this part of the application. And it is this location transparency which gives you clear and cleanly segregated components; essentially what will help you take an evolutionary approach to microservices (as AxonIQ describes here).
I'd thus question the necessity of sharing all known command handlers on a given service/aggregate through REST.
Regardless, whether it makes sense is always a question of "it depends". I for one have created a means to share the known commands a service can handle as a JSON schema, as you can see here in a sample project I helped build between AxonIQ and Pivotal.
So, to come round to your question:
QUESTION: Does Axon offer any RESTful solutions out of the box, or is there a better auto-configured alternative to Spring HATEOAS?
No, Axon does not provide something like this out of the box, as it expects you to use the CommandBus for communication. I do understand you might need a starting point somewhere, for which REST makes sense, but even then, exposing all known commands can be regarded as exposing your internal domain to the outside world. In the majority of scenarios that would be undesirable, but as stated, this highly depends on your use case.
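If you do decide you need a REST entry point, a common pattern is a thin controller that does nothing but dispatch commands through the CommandGateway, so the handlers stay behind the CommandBus. A rough sketch, with a made-up command and endpoint:

```java
// Sketch only: the REST layer stays thin and dispatches commands; the CommandBus
// routes them to whichever component registered a handler for the command type.
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.modelling.command.TargetAggregateIdentifier;
import org.springframework.web.bind.annotation.*;
import java.util.concurrent.CompletableFuture;

record RenameProductCommand(@TargetAggregateIdentifier String productId,
                            String newName) {}                 // hypothetical command

@RestController
@RequestMapping("/products")
class ProductCommandController {
    private final CommandGateway commandGateway;

    ProductCommandController(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    // POST /products/{id}/rename  -- exposes one capability, not the whole command model
    @PostMapping("/{id}/rename")
    CompletableFuture<Void> rename(@PathVariable String id, @RequestBody String newName) {
        return commandGateway.send(new RenameProductCommand(id, newName));
    }
}
```

Note that this still means choosing deliberately which commands you expose, rather than generating links for every handler the CommandBus knows about.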
I've seen several questions revolving around that theme on SO, but no answer that really satisfies me.
I'm trying to put into words things I feel without always being able to express them clearly enough to convince people around me. Maybe I'm wrong. Maybe my understanding is not deep enough to find the proper arguments.
How would you contrast developing applications according to a "service oriented approach" instead of a "traditional" API approach?
Let's be totally clear here that, by services, I don't necessarily mean Web Services.
Here are some differences I see. Please correct me if I'm wrong:
a service is a "living thing" that you can talk to, according to a given and explicit protocol. A service has its own runtime while a library uses the runtime of your application. You can move that "living thing" wherever you want
a library allows code-based integration, while services traditionally use message-based integration (however, nothing really prevents you from writing a library based on exchanging messages)
services are discoverable
contracts are explicit and expressed "outside" the running code
services are autonomous (but here again, you could write autonomous APIs, couldn't you?)
boundaries are explicit
What am I missing here? What else really distinguishes services from a high-level API?
Service-oriented architecture implies that the exposed interface does not live on the same host where the client runs, and that the service is completely decoupled from the client code (loose coupling). An API, by contrast, you can call simply by loading the necessary library and executing your code on the same node. Rather than defining an API, service-oriented architecture focuses on the functionality; often you can access the same feature using different protocols.
If there is one thing that distinguishes a service-oriented approach from an API-oriented one, I would go for the loose code coupling.
You have covered the most important points. I would add one:
Usually, a Service is stateless. Each Service request is independent. This is in contrast to a library interface where you may make certain calls in a sequence to get the desired result.
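To make the contrast concrete, here is a rough sketch (both interfaces are hypothetical): a library-style API that relies on call ordering and accumulated state, versus a service-style API where every request is self-contained.

```java
// Library style: the object accumulates state, and calls must happen in order.
interface ReportBuilder {
    void open(String dataSource);      // must be called first
    void addSection(String name);      // only valid after open()
    byte[] render();                   // depends on everything before it
}

// Service style: each request carries everything needed; no call ordering, no session.
record ReportRequest(String dataSource, java.util.List<String> sections) {}

interface ReportService {
    byte[] renderReport(ReportRequest request);   // one self-contained request
}
```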
I was looking at how you can create a WCF Data Service around an Entity Framework context and consume it as an EF context as well.
Creating an OData API for StackOverflow including XML and JSON in 30 minutes
I've really just started looking at this, but I was wondering: where would the business logic go? As a service, I would expect that you couldn't just freely add/delete etc. without some validation.
If I wrote an MVC app to consume this service, how would I best implement business logic? Not simple property-level validation that you could do with attributes, but more complex things that need to check the data store first, etc.
It sounds like you need a custom data service provider (msdn link). They're quite powerful and give you full control over all of your reading/writing logic.
For example I wrote one that enforces our licensing logic in the update provider.
You can put some in the Data Service class, but you are limited as to what you can and can't do there. And then of course you can put some in the client above the service, but that's not ideal either.
I've only spent a few weeks with WCF Data Services but you highlight (one of) the big problems with it - lack of flexibility. It's fantastic for rapid development and banging out LOB applications, but anything with a deliberate design is very difficult to implement. I needed to include objects in my entity model simply to allow them to be exposed through the service, and I had huge headaches just trying to extend those classes with a simple property.
I'd only recommend using WCF Data Services for trivially simple applications that needed extremely fast development - a one or two day development cycle, for example. Anything else is worth doing thoroughly with regular WCF services, writing your own data layer and so on.
Depending on your specific needs, it sounds like Web API might be a good fit. Web API may never get the full range of OData support that WCF Data Services has, but it does make certain things easier (like adding business logic). I'm quite confident that Web API's initial support for OData will cover a significant number of use cases, and that support will grow over time.
While a custom data provider will most certainly do just about anything you want, and may well be a great solution if you have a complex architecture, I wasn't really thrilled when I attempted to save back through the client and found out I had to implement the IUpdatable interface as part of my context (I was attempting to build a repository pattern out of my context and DataService).
I'm sure it's very useful for many people, but I really only needed the functionality the Entity Provider already contained and didn't have the time in my project schedule to figure out the IUpdatable piece of the custom provider, so my team, specifically Geoff, stuck with the Entity Provider and used Change and Query Interceptors to route the DataService requests through our business logic classes on the server. This provides a central point of control. We used these to provide security checks, run calculations and other operations on insert/update, and so on. It turned out great. You can also use service methods as another way to provide specific business logic functions to your clients.
I am currently developing a very simple web service and thought I could write an API for it, so that when I decide to expand to new platforms I would only have to code the parser application. That said, the API isn't meant for other developers, just for me, but I won't restrict access to it, so anyone can build on it.
Then I thought I could even run the website itself through this API, for various reasons like lower bandwidth consumption (HTML generated in the browser) and client-side caching. Being AJAX-heavy seemed like an even bigger reason to.
The layout looks like this:
Server (database, programming logic)
|
API (handles user reads/writes)
|
Client application (the website, browser extensions, desktop app, mobile apps)
|
Client cache (further reduces server reads)
After the introduction here are my questions:
Is this a good use of an API?
Is it a good idea to run the whole website through the API?
What choices for safe authentication do I have when using the API (for reasons of my own, I prefer not to use HTTPS)?
EDIT
Additional questions:
Are there any alternative approaches I haven't considered?
What are some potential issues I haven't accounted for that may arise with this approach?
First things first.
Asking if a design (or in fact anything) is "good" depends on how you define "goodness". Typical criteria are performance, maintainability, scalability, testability, reusability etc. It would help if you could add some of that context.
Having said that...
Is this a good use of an API?
It's usually a good idea to separate out your business logic from your presentation logic and your data persistence logic. Your design does that, and therefore I'd be happy to call it "good". You might look at a formal design pattern to do this - Model View Controller is probably the current default, esp. for web applications.
Is it a good idea to run the whole website through the API?
Well, that depends on the application. It's totally possible to write an application entirely in JavaScript/AJAX, but there are browser compatibility issues (especially with older browsers), and you have to build support for things users commonly expect from web applications, like deep links and search-engine friendliness. If you have a well-factored API, you can do some of the page generation on the server, if that makes it easier.
What choices for safe authentication do I have when using the API (for reasons of my own, I prefer not to use HTTPS)?
Tricky one - with this kind of app, you have to distinguish between authenticating the user, and authenticating the application. For the former, OpenID or OAuth are probably the dominant solutions; for the latter, have a look at how Google requires you to sign up to use their Maps API.
In most web applications, HTTPS is not used for authentication (proving the current user is who they say they are), but for encryption. The two are related, but by no means equivalent...
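If you really cannot use HTTPS, one common approach is to sign each request with a shared secret (an HMAC), so the server can verify who sent the request without a password travelling in the clear; note the payload itself remains readable on the wire. A rough sketch, with hypothetical details:

```java
// Sketch of HMAC request signing (hypothetical scheme): client and server share
// a secret; the client signs each request, the server recomputes and compares.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public final class RequestSigner {
    private final byte[] secret;

    public RequestSigner(String sharedSecret) {
        this.secret = sharedSecret.getBytes(StandardCharsets.UTF_8);
    }

    // Sign the method, path and a timestamp; the server rejects stale timestamps
    // to limit replay attacks.
    public String sign(String method, String path, long timestampMillis) throws Exception {
        String message = method + "\n" + path + "\n" + timestampMillis;
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] signature = mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(signature);
    }
}
```

The client sends the timestamp and signature as request headers; the server recomputes the signature with the same secret and rejects requests whose signature doesn't match or whose timestamp is too old.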
Are there any alternative approaches I haven't considered?
Maybe this fits more under question 5 - but in my experience, API design is a rather esoteric skill - it's hard for an API designer to be able to predict exactly what the client of the API is going to need. I would seriously consider writing the application without an API for your first client platform, and factor out the API later - that way, you build only what you need in the first release.
What are some potential issues I haven't accounted for that may arise with this approach?
Versioning is a big deal with APIs - once you've created an interface, you can almost never change it, especially with multiple clients that you don't control. I'd build versioning in as a first class concept - with RESTful APIs, you can do this as part of the URL.
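As a rough illustration (paths and payload shapes are made up), URL-based versioning simply means old clients keep calling /v1 while new clients move to /v2:

```java
// Hypothetical illustration of URL-based versioning: both versions are served
// side by side, so existing clients are not broken by the new response shape.
import org.springframework.web.bind.annotation.*;

record OrderV1(long id, String total) {}                 // legacy response shape
record OrderV2(long id, int netTotal, int taxTotal) {}   // new response shape

@RestController
class VersionedOrderController {

    @GetMapping("/api/v1/orders/{id}")
    OrderV1 getOrderV1(@PathVariable long id) {
        return new OrderV1(id, "120.00");
    }

    @GetMapping("/api/v2/orders/{id}")
    OrderV2 getOrderV2(@PathVariable long id) {
        return new OrderV2(id, 100, 20);
    }
}
```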
Is this a good use of an API?
Depends on what you will do with that application.
Is it a good idea to run the whole website through the API?
No: your site would then be accessible only through your application, and this implementation prevents compatibility with other browsers.
What choices for safe authentication do I have when using the API (for reasons of my own, I prefer not to use HTTPS)?
You can use OmniAuth.
Are there any alternative approaches I haven't considered?
Create both front ends: one in your application and another for regular browsers.
What are some potential issues I haven't accounted for that may arise with this approach?
I don't know your whole idea, but I can't see any major danger.
We are developing a middleware SDK, in both C++ and Java, to be used as a library/DLL by, for example, game developers, animation software developers, and avatar developers to enhance their products.
Having created a typical API using specific calls for specific functions, I am considering simplifying the API to a REST-style (GET, PUT, POST, DELETE) or CRUD-style (CREATE, READ, UPDATE, DELETE) interface.
This would work in a similar way to a client-server type REST API where there are only 4 possible API calls but these can take flexible parameters.
This seems to have the benefit of making the API stable in that new calls are not being added and old calls are not being removed. So a consumer of this API need not worry about having to recompile and change their code to suit any updates to our middleware.
The overhead is that there is an extra layer of indirection in the middleware controller to route API calls, and the developer needs to know what parameters are available for each call (documentation supplied, of course).
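To make the idea concrete, here is a rough sketch of the kind of uniform interface I have in mind (all names are hypothetical):

```java
// Hypothetical sketch of a uniform CRUD-style SDK surface: four verbs addressed
// by a resource name plus a flexible parameter map, so adding a new resource
// type does not change the compiled interface.
import java.util.List;
import java.util.Map;

record Handle(String resourceType, String id) {}          // opaque reference to a created resource
record Result(List<Map<String, Object>> rows) {}          // generic query result

interface MiddlewareClient {
    Handle create(String resourceType, Map<String, Object> params);
    Result read(String resourceType, Map<String, Object> query);
    void   update(Handle handle, Map<String, Object> changes);
    void   delete(Handle handle);
}

// Usage example: no dedicated createAvatar() call is needed.
class Demo {
    static void demo(MiddlewareClient client) {
        Handle avatar = client.create("avatar", Map.of("name", "Hero", "height", 1.8));
        client.update(avatar, Map.of("height", 1.9));
        client.delete(avatar);
    }
}
```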
I have not so far seen this system used outside of web-style client-server applications, so my question is this: is it a feasible idea?
I am thinking in terms of its efficiency, as well as whether, for example, a game developer would find it easy to use.
Yes, this is a feasible idea. But I'm not sure the benefits would justify the costs. REST is best applied to a networked application scenario, oriented around requests and responses. While there are definite learning curve advantages to a uniform interface, those advantages can be present in almost any well-designed API which provides reasonably abstract procedures.
You also expressed concern for whether a game developer would find a RESTful API easy to use. I'd be dubious. I've implemented many RESTful web services, and helped many developers get up to speed both building them and using them, and the conceptual leap required to grasp REST can be substantial for someone who has been steeped in procedural APIs for years. I'd think that game developers in particular would be very strongly connected to procedural APIs, to the point that attempting to adopt a different paradigm, whatever its benefits, might prove extremely difficult.
Remember that REST is not specific to HTTP, and does not rely on just the 4 HTTP verbs. The verbs you have and can use depend on what protocol you're using.