API design and the relevance of SOLID

I am wondering if SOLID is applicable when handling the following use case.
There are 3 services:
- A consolidator service
- A voucher service
- A financial service (accountancy)
In a certain scenario the consolidator service needs to get all the details of a voucher so it can send a payload with all the data to the financial service.
In the voucher service there are 2 endpoints which can be used:
- endpoint A, providing only the needed properties
- endpoint B (REST), providing the full voucher with the needed properties inside it
The delta in data between the 2 endpoints is not big, and we are not handling large quantities of data here.
Now I have a colleague saying we need to use endpoint A because we should only ask for what we need.
I don't agree, because in the future they might need another data property, which will already be available if they use endpoint B.
I only have the above two options to choose from for now.
I believe the best way is to use endpoint B, but let the consumer define the fields it needs so it only gets what it needs, re-using one endpoint for N use cases by allowing the consumer to specify which fields of a resource it wants.
Thanks for the help

As you said, the delta in data between the 2 endpoints is not big, so for future work it will be easier to handle changes (a bit less code, at slightly lower performance) when you choose endpoint B.
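As a rough illustration of the field-selection idea from the question, here is a minimal sketch: the fields parameter name and the voucher properties are hypothetical, not something the voucher service necessarily exposes.

import java.util.*;
import java.util.stream.*;

public class FieldSelection {

    // Keep only the properties the consumer asked for, e.g. ?fields=amount,currency
    static Map<String, Object> select(Map<String, Object> fullVoucher, String fieldsParam) {
        Set<String> wanted = Arrays.stream(fieldsParam.split(","))
                .map(String::trim)
                .collect(Collectors.toSet());
        return fullVoucher.entrySet().stream()
                .filter(e -> wanted.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, Object> voucher = Map.of(
                "id", 25,
                "amount", 100.0,
                "currency", "EUR",
                "issuedBy", "voucher-service");
        // GET /vouchers/25?fields=amount,currency would then return only those two properties.
        System.out.println(select(voucher, "amount,currency"));
    }
}

The same endpoint then serves N use cases, and when a consumer later needs another property it only has to ask for it.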

Related

Are these URIs considered one endpoint or 3 endpoints?

I am sure this will be a simple question but I want to make sure I use the correct terminology for a meeting explaining my new API.
So if I have the following URIs for my API:
https://testapi.org/api/registration - POST defined
https://testapi.org/api/token - GET defined
https://testapi.org/api/pricedata - GET and POST defined
So these are 3 controllers in the same Web API. In conversation is this considered 1 endpoint or 3 endpoints?
In conversation is this considered 1 endpoint or 3 endpoints?
The REST answer is "0 endpoints".
There is no such thing as a REST endpoint -- Fielding, 2018
What you show here are three resource identifiers, and by implication three resources.
The fact that those three resources each have a separate "controller" is an accident of your implementation. REST really doesn't care.
In a context like OpenAPI 3, you will also see this described as four "operations" (path + method).
(You'll also see that the OpenAPI 3 documentation seems to consider "resource" and "endpoints" to be equivalent.)
Given Hohpe's description of the MessageEndpoint pattern, I'm inclined to think that resource isn't a particularly good match for "endpoint".
You might be able to argue that there is a message-endpoint in your web server process, which is handing off messages to your controllers? In which case, that would be one endpoint.
Are you saying I should have combined up some of the verbs into less controllers?
No, I'm not saying that. REST says that we need a client-stateless-server protocol with uniformly understood self descriptive messages. The HTTP specifications define self descriptive request messages and self descriptive response messages.
But the magic black box that decides what response to send to a request is yours. The number of controllers you decide to use is an implementation detail that is hidden from clients behind the HTTP facade.
There's nothing in REST, or HTTP, that requires/forbids that the server implementation use MVC.
Well, the responsibility of each one is different; it doesn't matter that some of them act on the same resource, so in practice all of them are different endpoints. You could put the first and the second in the same one, but the third needs to be a separate one.

Grouping data together in Microservices

Let's suppose I have two services in a microservices architecture:
OrderService
CustomerService
Each service has its own database. My frontend makes a request to /api/order/25; along with the order I want to return the CustomerName as well, but CustomerName is in a different database. So what's the best, most microservices-friendly approach for fetching the CustomerName along with the Order data?
In this case, I suggest creating an aggregator service or backend-for-frontend (BFF).
This service will fetch data from both services and aggregate it for your frontend (or for your client).
This way, your order service doesn't need to depend on the customer service just to get a name, and your client doesn't need to make another call to fetch the data and do the aggregation.
With BFF,
You can add an API tailored to the needs of each client, removing a lot of the bloat caused by keeping it all in one place.
Frontend requirements will be separated from the backend concerns. This is easier for maintenance.
The client application will know less about your APIs’ structure, which will make it more resilient to changes in those APIs.
However, all these microservices patterns come with tradeoffs, and BFFs are no exception.
So always keep in mind that:
The BFF is a translation/aggregation layer between the client and the services. When data is returned from a service API, the BFF's purpose is to aggregate and transform it into the data type specified by the client application.
Avoid over-dependence on the BFF and don't add application logic in this layer. As I said in the previous point, it's just a translator.
Implement a resilient design and timeouts, since this aggregator calls other services to get its data. If one or more service calls take too long, it should time out and return a partial set of data. Consider how your application will handle this scenario (a rough sketch follows below).
Monitor your aggregator and its child service calls. Implement distributed tracing using correlation IDs to track each call.
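A minimal sketch of such an aggregator, assuming hypothetical order-service and customer-service URLs and JSON responses; the timeout value is made up for illustration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

public class OrderBff {

    private final HttpClient http = HttpClient.newHttpClient();

    private CompletableFuture<Optional<String>> fetch(String url) {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(2))          // per-call timeout
                .build();
        return http.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(response -> Optional.of(response.body()))
                .exceptionally(ex -> Optional.empty());  // degrade to partial data instead of failing
    }

    // Aggregates the order and the customer name for the frontend in one call.
    public String orderForFrontend(int orderId) {
        CompletableFuture<Optional<String>> order =
                fetch("http://order-service/api/order/" + orderId);                  // hypothetical URL
        CompletableFuture<Optional<String>> customer =
                fetch("http://customer-service/api/customer/for-order/" + orderId);  // hypothetical URL

        String orderJson = order.join().orElse("{}");
        String customerJson = customer.join().orElse("{}");  // partial result if customer-service is slow or down
        return "{\"order\":" + orderJson + ",\"customer\":" + customerJson + "}";
    }
}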
You have multiple options here. For example:
Option 1: Duplicate some Customer data in the OrderService database. You could, for example, save a small subset of the entity as a separate table in the Order service db, e.g. CustomerId and CustomerName. This works very well for use cases where you have a lot of traffic on the /api/order/id endpoint but the number of Customers is small, so duplicating some of the Customer data in your OrderService db is not very expensive. This option works particularly well if you have a messaging queue for communication between microservices which publishes event messages when the Customer data changes in CustomerService; that way you can update the Customer data in the OrderService db and stay up to date with the source of truth (see the sketch after this list). If you do not have a messaging queue for communication, this option is not so great, as you would need to periodically check for changes in the CustomerService. That creates additional load on the CustomerService, and the Customer data in the OrderService db would be inconsistent for some period. The good thing about this option is that even if you have a lot of calls to /api/order/id, the CustomerService is not affected at all by that load, as the data is already in the OrderService db.
Option 2: Call the CustomerService from the OrderService when you need that data. In this option, every time you need the CustomerName or other CustomerService data, you call the CustomerService over some API; for example, when someone calls your /api/order/id endpoint, you call the CustomerService API from your OrderService. This option is very good if you do not have a lot of calls to this API and the load on the CustomerService is generally not big, so the additional calls are not very problematic. It becomes a problem if you have a lot of calls to /api/order/id and that load is delegated to the CustomerService as well.
Option 3: Create a 3rd microservice which aggregates data from both the CustomerService and the OrderService. In this option you would have another read-only microservice with its own database, which holds data from both services. This option can be useful if your system has to handle a lot of load, in particular on the /api/order/id endpoint or on other endpoints which use aggregations of multiple entities from multiple microservices. If you want to separate this and scale it independently, you can create another microservice like customer-order-microservice and duplicate data from the CustomerService and the OrderService. As in option 1, you would only duplicate the particular fields/properties of the entities which you need. Using this option you could even have a de-normalized data structure where you store the entities in one table/collection (if you need it). You can even pick a data storage technology which fits your queries better, for example Elasticsearch if you have full-text search requirements. This special service can be shaped to fit your needs; this also points in the direction of the query/read part of CQRS. There are a lot of options for what you can do here. This is an option for a specific use case, so you need to carefully review your business requirements and adjust it to your needs.
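A rough sketch of the event-handling side of option 1, assuming a hypothetical CustomerNameChanged event published by the CustomerService; the in-memory map stands in for the duplicated customer table in the OrderService database and for a real message-queue consumer.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CustomerNameReplica {

    // Hypothetical event published by CustomerService whenever a customer is renamed.
    record CustomerNameChanged(long customerId, String newName) {}

    // Stands in for the duplicated CustomerId/CustomerName table in the OrderService db.
    private final Map<Long, String> customerNames = new ConcurrentHashMap<>();

    // Would be wired to the message-queue consumer (RabbitMQ, Kafka, ...) in a real service.
    public void on(CustomerNameChanged event) {
        customerNames.put(event.customerId(), event.newName());
    }

    // Used by /api/order/{id} to enrich the order without calling the CustomerService.
    public String nameFor(long customerId) {
        return customerNames.getOrDefault(customerId, "unknown");
    }

    public static void main(String[] args) {
        CustomerNameReplica replica = new CustomerNameReplica();
        replica.on(new CustomerNameChanged(42L, "Jane Doe"));
        System.out.println(replica.nameFor(42L)); // Jane Doe
    }
}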
Conclusion
Usually, in most cases, you will be fine with option 1 or option 2. But it is good to know that you can also decide to go with option 3 and have a special read-only/query-only service that can serve your particular needs. Keep in mind that the way of communication between services also has an impact on your options. You can read more about microservice-to-microservice communication here.

How to establish relationships between Spring Data REST / Spring HATEOAS based (micro) services?

Trying to figure out a pattern for how to handle relationships between hypermedia-based microservices built on Spring Data REST or Spring HATEOAS.
Say you have service A (Instructor) and service B (Course), each existing as a stand-alone app.
What is the preferred method for establishing a relationship between the two services, in a manner that does not require columns for the IDs of the foreign service? Each service may have many other services that need to communicate with it in the same manner.
Possible solution (not sure this is the correct path):
Each service has a second table with a OneToMany relationship to the primary entity within the service. The table would have the following fields:
ID, entityID, rel, relatedID
Then in the opposite service, using Spring Data REST, set up a finder that queries the join table to find records that match.
The primary goal I want to accomplish is that any service can have relationships with any number of other services without having knowledge of the other service.
The basic steps are the following ones:
The service needs to discover the resources of the other service.
The service then adds a link to the resources it renders where necessary.
I have a very rudimentary example of these steps in this repository. The example consists of two services: a service providing geo-spatial searches for stores, and a second, rudimentary customer management service that optionally integrates with the store service if it is currently available.
Here's how the steps are implemented:
Resource discovery
In my example the consuming service (i.e. the customer one) uses Spring HATEOAS' Traverson API to traverse a set of link relations until it finds a link named by-location. This is done in StoreIntegration. So all the client service needs to know is the root URI (taken from the environment in my case) and a set of link relations. It periodically checks the link for existence using a HEAD request.
This can of course be done in a more sophisticated manner: hard-wiring the base URI into the client service might be considered suboptimal, but it actually works quite well if you're using DNS anyway (so that you can exchange the actual host behind the hard-coded URI). Nonetheless it's a decent, pragmatic approach: it still rediscovers the other service if it changes URIs, and no additional libraries are required.
For an even more sophisticated approach, have a look at Netflix's Eureka library, which is basically a service registry. Also, you might want to check out the Spring Cloud integration we have for that.
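A minimal sketch of that discovery step using Spring HATEOAS' Traverson; the base URI and the exact chain of link relations are assumptions based on the description above, not taken from the repository.

import java.net.URI;
import org.springframework.hateoas.Link;
import org.springframework.hateoas.MediaTypes;
import org.springframework.hateoas.client.Traverson;

public class StoreDiscovery {

    // Root URI of the store service, e.g. taken from the environment.
    private final URI storesRootUri = URI.create("http://localhost:8081");

    // Follows link relations from the root document until the by-location link is found.
    public Link discoverByLocationLink() {
        Traverson traverson = new Traverson(storesRootUri, MediaTypes.HAL_JSON);
        return traverson.follow("stores", "search", "by-location").asLink();
    }
}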
Augmenting resources with links
Spring HATEOAS provides a ResourceProcessor API that Spring Data REST leverages. It allows you to manipulate the Resource instance about to be rendered and e.g. add links to it. The implementation for the customers service can be found here.
It basically takes the link just discovered in the steps above and expands it using well-known parameters and thus allows clients to trigger a store geo-search by just following the link.
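For illustration, a minimal ResourceProcessor along those lines, using the older Spring HATEOAS type names from the era of this question; the Customer type, the link relation and the URI template are hypothetical placeholders, not the actual code from the repository.

import org.springframework.hateoas.Link;
import org.springframework.hateoas.Resource;
import org.springframework.hateoas.ResourceProcessor;
import org.springframework.stereotype.Component;

// Hypothetical customer entity with just the field needed for the link.
class Customer {
    private String addressLocation;
    public String getAddressLocation() { return addressLocation; }
}

@Component
class CustomerResourceProcessor implements ResourceProcessor<Resource<Customer>> {

    @Override
    public Resource<Customer> process(Resource<Customer> resource) {
        // Expand the previously discovered by-location link with the customer's address
        // and add it to the representation Spring Data REST is about to render.
        String href = "http://localhost:8081/stores/search/by-location?location="
                + resource.getContent().getAddressLocation();
        resource.add(new Link(href, "stores-nearby"));
        return resource;
    }
}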
Beyond that
You can find a more sophisticated variant of this example in the examples projects for Spring Cloud. It takes the very same example but switches to Spring Cloud components such as Eureka integration, gathering metrics, adding UI etc.
In my case I can only derive related items from the service itself. My goal is to abstract the related items to the point that any number of services can be related to a service and only need to look up IDs or links. One thought was an @ElementCollection named related, joined on the entity ID of the service. The @Embeddable would then have a relLink field and a relatedID field, and the repository would expose a findBy on relLink and relatedID.
The hope is to keep it abstracted enough to essentially mimic a many-to-many setup.
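A rough JPA sketch of that idea; the entity and field names are just the ones floated above, and whether this is the right modelling is exactly what is being discussed.

import javax.persistence.*;
import java.util.HashSet;
import java.util.Set;

@Embeddable
class RelatedRef {
    private String relLink;   // e.g. "courses"
    private Long relatedID;   // ID (or link) of the entity in the other service
}

@Entity
class Instructor {
    @Id @GeneratedValue
    private Long id;

    // Collection table holding (entityID, relLink, relatedID) rows, so the service
    // needs no dedicated foreign-key column per related service.
    @ElementCollection
    @CollectionTable(name = "related", joinColumns = @JoinColumn(name = "entityID"))
    private Set<RelatedRef> related = new HashSet<>();
}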

WCF Rest - what are the best practices?

Just started my first WCF REST project and would like some help on the best practices for using REST.
I have seen a number of tutorials and there seem to be a number of ways to do things... for example, when doing a POST, I have seen some tutorials which set HttpStatusCodes (OK/errors etc.), and other tutorials which just return strings containing the result of the operation.
At the end of the day there are 4 operations, and surely there must be a guide that says if you are doing a GET, do it this way, and with a POST, do this...
Any help would be appreciated.
JD
UPDATE
Use ASP.NET Web API.
OK, I left the comment "REST best practices: don't use WCF REST. Just avoid it like the plague", and I feel like I have to explain it.
One of the fundamental flaws of WCF is that it is concerned only with the payload. For example, Foo and Bar are the payloads here:
[OperationContract]
public Foo Do(Bar bar)
{
...
}
This is one of the tenets of WCF: no matter what the transport is, we get the payload over to you.
But what it ignores is the context/envelope of the call, which in many cases is transport-specific - so a lot of the context gets lost. In fact, HTTP's power lies in its context, not its payload. Back in the earlier versions of WCF there was no way to get the client's IP address in netTcpBinding, and the WCF team were adamant that they could not provide it. I cannot find the page now, but I remember reading the comments where the MS guys just said this is not supported.
Using WCF REST, you lose the flexibility of HTTP in expressing yourself clearly (and they had to budge on this later) in terms of:
HTTP Status code
HTTP media types
ETag, ...
The new Web API that Glenn Block is working on addresses this issue by encapsulating the payload in the context:
public HttpResponse<Foo> Do(HttpRequest<Bar> bar) // PSEUDOCODE
{
...
}
But in my testing this is not perfect, and I personally prefer to use frameworks such as Nancy, or even plain ASP.NET MVC, to expose a web API.
There are some basic rules for using the different HTTP verbs, which come from the HTTP specification:
GET: This is a pure read operation. Invocation must not cause state change in the service. The response to a GET may be delivered from cache (local, proxy, etc) depending on caching headers
DELETE: Used to delete a resource
There is sometimes some confusion around PUT and POST - which should be used when? To answer that you have to consider idempotency - whether the operation can be repeated without further effect on service state. For example, setting a customer's name to a value can be repeated multiple times without further state change; however, incrementing a customer's bank balance cannot safely be repeated without further state change on the service. The first is said to be idempotent, the second is not.
PUT: Non-delete state changes that are idempotent
POST: Non-delete state changes that are not idempotent
REST embraces HTTP, so failures should be communicated using HTTP status codes: 200 for success; 201 for creation (and the service should return a URI for the new resource in the HTTP Location header); 4xx for failures caused by the nature of the client request (which the client can fix by changing what it is doing); 5xx for server errors that can only be resolved server-side.
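To make the 201-plus-Location rule concrete, here is a small sketch in Spring MVC rather than WCF, purely to illustrate the HTTP mechanics; the /customers path and the Customer type are made up.

import java.net.URI;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/customers")
class CustomerController {

    record Customer(long id, String name) {}  // hypothetical resource

    // POST creates a new resource: respond 201 and point at it via the Location header.
    @PostMapping
    ResponseEntity<Customer> create(@RequestBody Customer customer) {
        long newId = 42L;                     // would come from the data store
        Customer created = new Customer(newId, customer.name());
        return ResponseEntity.created(URI.create("/customers/" + newId)).body(created);
    }

    // GET is a pure read: 200 with the representation, or 404 if the id is unknown.
    @GetMapping("/{id}")
    ResponseEntity<Customer> get(@PathVariable long id) {
        if (id != 42L) {
            return ResponseEntity.notFound().build();
        }
        return ResponseEntity.ok(new Customer(id, "Jane Doe"));
    }
}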
There's something missing here that needs to be said.
WCF REST may not be able to give you everything a full REST approach offers, but it can facilitate REST-style access to existing WCF services. So if you decide to provide some sort of REST support on top of a current SOAP/named-pipe service, it's the way to go when the ROI is low.
Hand-rolling a full-blown REST API may be ideal, but it is not always economical. In 90% of my projects, the REST API is an afterthought. WCF comes in quite handy in that regard.

WCF Business logic handling

I have a WCF service that supports about 10 contracts. We have been supporting one client, with all the business rules specific to that client. Now we have another client who will be using the exact same contracts (so we cannot change them) and will be calling the service exactly the same way the previous client did. The only way we can differentiate between the two clients is by one of the input parameters. Based on this input parameter we have to use slightly different business logic: the logic for the two clients will be the same about 50% of the time, and the remainder will have different logic (across the business/DAL layers). I don't want to use if-else statements in each contract implementation to differentiate and reroute the logic; and what if another client comes along? Is there a clean way of handling a situation like this? I am using framework 3.5. Like I said, I cannot change any of the contracts (service/data contracts) or the current service-calling infrastructure for the new client. Thanks
Can you possibly host the services twice and have the clients connect to the right one? Apart from that, you have to use some kind of if-else, I guess.
I can't say whether this is applicable to you, but we have solved a similar problem along this path:
We add header information to the message that states in which context some logic is called.
This information ends up in a RequestContext class.
We delegate responsibility of instantiating the implementation of the contract to a DI Container (in our case StructureMap)
We have defined a strategy for how certain components are to be provided by the container:
There is a default for a component of some kind.
Attributes can be placed on specializations that denote for which type of request context this specialization should be used.
This is registered into the container through available mechanisms
We make a call to the Container by stating ObjectFactory.With(requestcontext).getInstance<CONTRACT>()
Dependencies of the service implementations are now resolved in a way that applies the described process. That is, specializations are provided based ultimately on the request information placed in the header.
This is an example of how this might be solved.
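A rough, framework-neutral sketch of the same idea (in Java rather than C#/StructureMap, and with made-up contract and client names), resolving a contract implementation from the request context carried in the header:

import java.util.Map;

public class ContextualResolver {

    // Contract shared by all clients; implementations differ per client.
    interface PricingLogic {
        double price(double base);
    }

    static class DefaultPricing implements PricingLogic {
        public double price(double base) { return base; }
    }

    static class ClientBPricing implements PricingLogic {
        public double price(double base) { return base * 0.9; }  // client-specific rule
    }

    // Plays the role of the container registrations: a default plus
    // specializations keyed by the request context.
    private static final Map<String, PricingLogic> registry =
            Map.of("default", new DefaultPricing(), "clientB", new ClientBPricing());

    // requestContext would be derived from the message header
    // (or from the distinguishing input parameter).
    static PricingLogic resolve(String requestContext) {
        return registry.getOrDefault(requestContext, registry.get("default"));
    }

    public static void main(String[] args) {
        System.out.println(resolve("clientB").price(100.0));  // 90.0
        System.out.println(resolve("clientA").price(100.0));  // 100.0 (falls back to the default)
    }
}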