How to establish relationships between Spring Data REST / Spring HATEOAS based (micro) services? - spring-data-rest

Trying to figure out a pattern for how to handle relationships between hypermedia-based microservices built on Spring Data REST or Spring HATEOAS.
Say service A (Instructor) and service B (Course) each exist as a stand-alone app.
What is the preferred method for establishing a relationship between the two services, in a manner that does not require columns for the IDs of the foreign service? Each service could have many other services that need to communicate in the same manner.
Possible solution (not sure if this is the correct path):
Each service has a second table with a one-to-many relationship to the primary entity within the service. The table would have the following fields:
ID, entityID, rel, relatedID
Then, in the opposite service, use Spring Data REST to set up a finder that queries the join table for matching records.
The primary goal I want to accomplish is that any service can have relationships with any number of other services without having to have knowledge of the other services.

The basic steps are the following ones:
The service needs to discover the resources of the other service.
The service then adds a link to the resources it renders where necessary.
I have a very rudimentary example of these steps in this repository. The example consists of two services: a service that provides geo-spatial searches for stores, and a rudimentary customer management service that optionally integrates with the store service if it is currently available.
Here's how the steps are implemented:
Resource discovery
In my example the consuming service (i.e. the customer one) uses Spring HATEOAS' Traverson API to traverse a set of link relations until it finds a link named by-location. This is done in StoreIntegration. So all the client service needs to know is the root URI (taken from the environment in my case) and a set of link relations. It periodically checks the link for existence using a HEAD-request.
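For illustration, here is a minimal sketch of that discovery step using the Traverson API; the root URI and the exact relation chain ("stores" -> "search" -> "by-location") are assumptions and would need to match whatever the store service actually exposes.

    import java.net.URI;

    import org.springframework.hateoas.Link;
    import org.springframework.hateoas.MediaTypes;
    import org.springframework.hateoas.client.Traverson;

    public class StoreDiscovery {

        public Link discoverByLocationLink() {
            // Start at the (assumed) root URI of the store service and walk the HAL links
            // until we reach the relation we care about.
            Traverson traverson = new Traverson(URI.create("http://localhost:8081"), MediaTypes.HAL_JSON);
            return traverson.follow("stores", "search", "by-location").asLink();
        }
    }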
This can of course be done in a more sophisticated manner: hard-wiring the base URI into the client service might be considered suboptimal, but it actually works quite well if you're using DNS anyway (so that you can exchange the actual host behind the hard-coded URI). Nonetheless it's a decent pragmatic approach: it still rediscovers the other service if it changes URIs, and no additional libraries are required.
For an even more sophisticated approach, have a look at Netflix's Eureka library, which is basically a service registry. Also, you might want to check out the Spring Cloud integration we have for that.
Augmenting resources with links
Spring HATEOAS provides a ResourceProcessor API that Spring Data REST leverages. It allows you to manipulate the Resource instance about to be rendered and e.g. add links to it. The implementation for the customers service can be found here.
It basically takes the link just discovered in the steps above and expands it using well-known parameters and thus allows clients to trigger a store geo-search by just following the link.
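A rough sketch of what such a processor can look like is below; Customer, its getLocation() accessor and the StoreIntegration component holding the discovered link are placeholders, not the actual classes from the example repository.

    import org.springframework.hateoas.Link;
    import org.springframework.hateoas.Resource;
    import org.springframework.hateoas.ResourceProcessor;
    import org.springframework.stereotype.Component;

    @Component
    public class CustomerResourceProcessor implements ResourceProcessor<Resource<Customer>> {

        private final StoreIntegration storeIntegration; // placeholder component caching the discovered link

        public CustomerResourceProcessor(StoreIntegration storeIntegration) {
            this.storeIntegration = storeIntegration;
        }

        @Override
        public Resource<Customer> process(Resource<Customer> resource) {
            Link template = storeIntegration.getStoresByLocationLink();
            if (template != null) {
                // Expand the templated link with the customer's location so a client can
                // trigger the geo-search simply by following it.
                String location = resource.getContent().getLocation();
                resource.add(template.expand(location, "50km").withRel("stores-nearby"));
            }
            return resource;
        }
    }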
Beyond that
You can find a more sophisticated variant of this example in the example projects for Spring Cloud. It takes the very same example but switches to Spring Cloud components such as Eureka integration, gathering metrics, adding a UI, etc.

In my case I can only derive related items from the service itself. My goal is to abstract the related items to the point that any number of services can be related to a service, needing only to look up IDs or links. One thought was an @ElementCollection named related, joined on the entity ID of the service. The @Embeddable element would have a relLink field and a relatedID field. Then in the repository do a findBy to locate the relLink and relatedID.
The hope is to keep it abstracted enough to essentially mimic a many-to-many setup.
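A rough sketch of that idea in JPA terms might look like the following; the Relation embeddable, the field names and the JPQL finder are illustrative, not a proven recipe.

    import java.util.ArrayList;
    import java.util.List;

    import javax.persistence.ElementCollection;
    import javax.persistence.Embeddable;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    import org.springframework.data.jpa.repository.Query;
    import org.springframework.data.repository.CrudRepository;
    import org.springframework.data.repository.query.Param;

    @Entity
    public class Instructor {

        @Id
        @GeneratedValue
        private Long id;

        // One element per related resource held by a foreign service:
        // the link relation plus the remote ID (or canonical URI).
        @ElementCollection
        private List<Relation> related = new ArrayList<>();
    }

    @Embeddable
    class Relation {
        private String rel;        // e.g. "courses"
        private String relatedId;  // ID or URI of the resource in the other service
    }

    interface InstructorRepository extends CrudRepository<Instructor, Long> {

        // Exposed by Spring Data REST as a search resource, so the other service only
        // needs to know the link relation and the ID it is looking for.
        @Query("select i from Instructor i join i.related r where r.rel = :rel and r.relatedId = :relatedId")
        List<Instructor> findRelated(@Param("rel") String rel, @Param("relatedId") String relatedId);
    }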

Related

Moleculer: segmenting micro service communication

I just started playing with Moleculer and saw how easy it is to call a service's actions/events from another service. This is great. However, is there a way to limit which services can access particular services? For example, if I have products and orders in my app, I may not want the orders accessing all the product-related services, just the main one. I am just thinking that if I leave it as a free-for-all, maintainability may suffer, as I won't easily know which service is calling which service.
Or should I just create two projects (one for orders and one for products) and control it there?
You could use the namespace property in the broker options. From the docs:
Namespace of nodes to segment your nodes on the same network.
https://moleculer.services/docs/0.13/broker.html#Broker-options
Then you could use inter-namespace middleware:
https://gist.github.com/icebob/c0bce54436379d29c1bee8521ceb5348
Anyway, you might consider using the Discord chat (https://discord.gg/TSEcDRP). The Moleculer community is much more active there.

Resolving design dependency between microservices

I am designing an e-commerce application with a microservice approach, using an ORM (JPA) for data persistence in one of the microservices, named OrderService. OrderService owns functionality related to persisting and reporting orders, which essentially includes customer and product information. Customer and product functionality is managed by different microservices.
My question is: at the ORM layer, OrderService needs POJOs that belong to ProductService and CustomerService. What is the best way to deal with this dependency between services? Should the application be designed in a different way?
There are a few things that one should take into consideration when trying to find a solution:
1. You cannot access the database of other services; you have to make a call.
2. You should try not to keep data from other services in yours. Data duplication leads to an inconsistent state and should be avoided if you can.
3. You should have a means to query data from other services when asked for it.
With those points in mind, I would mostly restrict data from other services to some reference IDs (which should be immutable). At the ORM layer I would fetch just the reference IDs and enrich them by making an API call to the concerned services (in the business layer).
You may realize that you are making way too many calls, say to the customer service to get a customer name from a customer ID. If that is the case, you may consider saving some of this information in your system, but be cautious: the data you save should not be volatile, and make sure you have done due diligence before making that call.
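As a hedged sketch of that advice: the order keeps only immutable reference IDs, and the business layer resolves them through the owning service. The CustomerClient, the URL and the DTO below are assumptions for illustration.

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Table;

    import org.springframework.web.client.RestTemplate;

    @Entity
    @Table(name = "orders")
    public class Order {

        @Id
        @GeneratedValue
        private Long id;

        private String customerId;  // reference to CustomerService, never the full customer record
        private String productId;   // reference to ProductService
    }

    class CustomerDto {
        public String id;
        public String name;
    }

    class CustomerClient {

        private final RestTemplate rest = new RestTemplate();

        // Resolve the reference with an API call to the owning service instead of a DB join.
        CustomerDto findCustomer(String customerId) {
            return rest.getForObject("http://customer-service/customers/{id}", CustomerDto.class, customerId);
        }
    }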
Recently, I have gone through many microservice design principles and realized that CQRS/ES and data replication with eventual consistency is the best solution to this issue. We try to make communication asynchronous as much as possible and use point-to-point synchronous communication between microservices only when necessary.
This is a fairly common situation when designing microservices. Most microservices will require access to data available through other microservices or an external provider.
The best way to deal with this is to design each microservice as a "separate" application and think of all other microservices as external to it.
So, the developer of Microservice #1 (M1) would have to check the Microservice #2 (M2) spec and write simple POJO classes for the data he fetches from there, just as he would if he were using some external API like Facebook.
Do note that M1 will always talk to M2 (via REST, for example) and never go directly to M2's database for the data it needs.
Ideally, each microservice would have its own database (or a partial clone of a central database).

Interaction of services in the service layer

What is the best way to organize interaction between services in the service layer?
For example, I have a document service and a product service. In my case, products can have their own documents, and to manage a product's documents I call the appropriate methods of the document service from the product service. So I need to create an instance of the document service in the product service. I also need to call some methods of the product service from the document service. As a result, each of these services refers to the other and I get a StackOverflowException.
Which design solutions should I use to eliminate this problem?
Application Services are supposed to provide external clients an API for executing cohesive business operations. An application service method generally matches a use case of your application.
While an application service operation may require calling another service (e.g., the Create Product use case includes the Create Document use case, which can also be called separately), this is not the norm and you should look to make your application services as cohesive as possible. In particular, just because at some point in your business case you start to manipulate another kind of entity doesn't mean you should delegate that part to another application service - in other words, one application service per entity is not necessarily right.
In any case, from your domain it should appear clearly in which direction the dependency between 2 applications services points. In your example, Product Service seems to depend on Document Service - it's difficult to imagine why it would be the other way around.
If you really need a round-trip between service A and service B (which I wouldn't do unless I have no other option), you could try and have the instance of A inject itself into B instead of relying on a DI container to resolve the dependency with a new instance, solving the stack overflow problem - if that's why you get a stack overflow in the first place.
Obviously, circular dependencies are wrong.
You can use shared identifiers to decouple Products and Documents.
Moreover, you can orchestrate the service interaction from outside them, in the application: in the ProductService you can have a LoadProducts(ProductIdentifiers[] identifiers) method returning an immutable collection of products, and in the DocumentService you can have a LoadDocuments(DocumentIdentifiers[] identifiers) method returning an immutable collection of documents.
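A minimal Java sketch of that shape, with the identifier and service types named along the lines suggested above (all of them illustrative), could look like this:

    import java.util.Collection;
    import java.util.List;

    // Shared identifiers are the only thing the two services agree on.
    record ProductIdentifier(String value) {}
    record DocumentIdentifier(String value) {}

    record Product(ProductIdentifier id, String name, List<DocumentIdentifier> documentIds) {}
    record Document(DocumentIdentifier id, String title) {}

    interface ProductService {
        List<Product> loadProducts(Collection<ProductIdentifier> identifiers);
    }

    interface DocumentService {
        List<Document> loadDocuments(Collection<DocumentIdentifier> identifiers);
    }

    // The application layer orchestrates both services; neither service references the other,
    // so the circular dependency (and the stack overflow) disappears.
    class ProductDocumentsUseCase {

        private final ProductService products;
        private final DocumentService documents;

        ProductDocumentsUseCase(ProductService products, DocumentService documents) {
            this.products = products;
            this.documents = documents;
        }

        List<Document> documentsFor(ProductIdentifier id) {
            Product product = products.loadProducts(List.of(id)).get(0);
            return documents.loadDocuments(product.documentIds());
        }
    }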

OData with WCF Data Services / Entity Framework

Apologies in advance, this is a long question.
(TL;DR : Does anyone have any advice on using the EF with dynamic fields exposed using WCF Data Services/OData)
I am having some conceptual problems with WCF Data Services and EF, specifically pertaining to exposing some data as an OData service.
Basically my issue is this. The database I am exposing allows users to add fields dynamically (user-defined fields) and it uses a system whereby these fields are added directly to the underlying SQL tables. Furthermore, when you want to add data to the tables you cannot use direct SQL, you have to go via an API that they provide. (it's SAP Business One, fwiw).
I have already successfully built a system that exposes various objects via XML and allows a client to update or add new entities into SBO by sending in XML messages. Although it works well, it's not really suited to mobile apps, as it's very XML-heavy and the entry point is an old-school ASMX web service. I want to jazz it up for mobile development and use OData with WCF or Web API. (I know I could switch to a WCF service, allow handling of JSON-format requests, and start returning JSON data, but it just seems like there must be a more... native... way.)
Initially I had discounted the possibility of using the EF for this because a) of the dynamic fields and b) the EF could only be read-only; adding/updating entities would have to be intercepted and routed to the SBO DI Server. However, I am coming back to thinking about it and am looking for some advice (negative or otherwise!) on how to approach it.
What I basically want to do is this:
Expose the base tables from SBO (which don't change except when SAP themselves issue a patch) as EF entities, with all the usual relational goodness. In fact I will not directly expose the tables; I will use a set of filtered SQL views as the data sources, as this ties in with various other things we do to allow exposing only certain data to 3rd parties.
Expose any UDFs a particular user has added as some kind of EAV sub-collection per entity.
Intercept any requests to ADD or UPDATE an object, and route these through an existing engine I have for interfacing with the SAP Data import services.
I suppose my main question is this: suppose I implement an EF entity representing a Sales Order, which comprises a Header and a Details collection. To each of these classes I add an EAV-type collection of user-defined fields and values. How much work is involved in allowing the OData filtering system to work directly on the EAV collection (e.g. for a client to be able to ask for Service/Orders?$filter=SomeUdfField eq SomeValue, where this request has to be passed down into the EAV collection of the Order header entity)?
Or is it possible, for example, to generate an EF model from some kind of metadata on the fly (I don't mind how - code generation or a model-building library), which would mean I could just expose each entity, dynamic fields included, as a proper EF model? Many thanks in advance if you read this far :)
For basic CRUD against an existing EF context, WCF Data Services works out great. As soon as you want to add some custom functionality, as you described above, it takes a bit more work.
What you described is possible, but you would need to build out your own custom data provider to handle the dynamic generation of metadata as well as custom hooks into add/update/delete.
It may be worth looking into the WCF Data Services Toolkit; it's a custom provider which slaps a repository pattern over WCF Data Services for ease of use, but it does not provide custom metadata generation.

good practice: REST API as the interface between the interface layer and business layer?

I was thinking about the architecture of a web application that I am planning to build, and I found myself thinking a lot about a core part of the application. Since I will want to create, for example, an Android application to access it, I was already thinking about having an API.
Given the fact that I will want to have an external API to my application from day one, is it a good idea to use that API as the interface between the interface layer (web) and the business layer of my application? This means that even the main interface of my application would access the data through the API. What are the downsides of this approach? Performance?
In more general terms, if one is building a web application that is likely to need to be accessed in different ways, is it a good architectural design to have an API (web service) as the interface between the interface layer and business layer? Is REST a good "tool" for that?
Sounds like you've got two questions there, so my answer is in two parts.
Firstly, should you use an API between the interface layer and the business layer? This is certainly a valid approach, one that I'm using in my current project, but you'll have to decide on the benefits yourself, because only you know your project. Possibly the largest factor to consider is whether there will be enough different clients accessing the business layer to justify the extra development effort of building an API. Often that simply means more than one client, as the benefits of having an API become evident when you come to release changes or bug fixes. Also consider the added complexity, the extra code maintenance overhead, and any benefits that might come from separating the interface and business layers, such as increased testability.
Secondly, if you implement an API, should you use REST? REST is an architecture, which says as much about how the remainder of your application is developed as it does about the API. It's no good defining resources at the API level that don't translate to the business layer. REST tends to be a good approach when you want lots of people to be able to develop against your API (like Netflix, for example). In the case of my current project, we've gone for XML over HTTP, because we don't need the benefits that REST generally offers (or SOAP, for that matter).
In general, the rule of thumb is to implement the simplest solution that works, and without coding yourself into a corner, develop for today's requirements, not tomorrow's.
Chris
You will definitely need a web service layer if you're going to be accessing it from a native client over the Internet.
There are obviously many approaches and solutions to achieve this; however, I consider the correct architectural guideline to be a well-defined service interface on the server which is accessed by a gateway on the client. You would then use POCO DTOs (plain old DTOs) to communicate between the endpoints. The DTOs' main purpose is to provide an optimal representation of your web service over the wire; they also allow you to avoid having to deal with serialization, as it should be handled transparently by the client gateway and service interface libraries.
It really depends on how big your project/app is whether or not you want to go through the effort of mapping your DTOs to the client and server domain models. For large applications the general approach would be, on the client, to map your DTOs to your UI models and have your UI views bind to those. On the server you would map your DTOs to your domain models and, depending on the implementation of the service, persist those.
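For instance, a bare-bones sketch of keeping the wire DTO separate from the server-side domain model and mapping between them (all names are purely illustrative):

    // Wire type: shaped for the service interface, not for persistence or the UI.
    record CustomerSummaryDto(String id, String displayName) {}

    // Server-side domain model.
    class Customer {
        String id;
        String firstName;
        String lastName;
    }

    class CustomerMapper {
        // Server side: flatten the domain model into the DTO that crosses the wire;
        // the client gateway would do the reverse mapping into its UI models.
        static CustomerSummaryDto toDto(Customer c) {
            return new CustomerSummaryDto(c.id, c.firstName + " " + c.lastName);
        }
    }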
REST is an architectural pattern which, for small projects, I consider additional overhead/complexity, as it is not as good a programmatic fit compared to RPC / document-centric web services. In a few words, the general idea of REST is to develop your services around resources. These resources can have multiple representations, which your web service should provide depending on the preferred content type indicated by your HTTP client (i.e. in the HTTP Accept header). The canonical URLs for your web services should also be logically formed (e.g. /customers/reports/1 as opposed to /GetCustomerReports?Id=1), and your web services would ideally return the list of 'valid states your client can enter' with each response. Basically REST is a nice approach promoting a loosely-coupled architecture and re-use; however, it requires more effort to adhere to than standard RPC/document-based web services, whose benefits are unlikely to be visible in small projects.
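To make the resource-oriented style concrete, here is a small sketch using Spring MVC as a stand-in; the report type, the placeholder lookup, and the presence of an XML converter on the classpath are all assumptions.

    import org.springframework.http.MediaType;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    class CustomerReportController {

        // Resource-style URL: GET /customers/reports/1 rather than RPC-style /GetCustomerReports?Id=1.
        // The same resource is served as JSON or XML depending on the client's Accept header.
        @GetMapping(value = "/customers/reports/{id}",
                produces = { MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE })
        ResponseEntity<CustomerReport> getReport(@PathVariable long id) {
            CustomerReport report = new CustomerReport(id, "Q1 summary"); // placeholder lookup
            return ResponseEntity.ok(report);
        }
    }

    record CustomerReport(long id, String title) {}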
If you're still evaluating what web service technology you should use, you may want to consider using my open-source web framework, as it is optimized for this task. The DTOs that you use to define your web service interface can be re-used on the client (which is not normally the case) to provide a strongly-typed interface where all the serialization is taken care of for you. It also has the added benefit of enabling each web service you create to be called as SOAP 1.1/1.2, XML, and JSON web services automatically without any extra configuration, so you can choose the most optimal endpoint for every client scenario, i.e. native desktop or web app, etc.
My recent preference, which is based on J2EE6, is to implement the business logic in session beans and then add SOAP and RESTful web services as needed. It's very simple to add the glue to implement the web services around those session beans. That way I can provide the service that makes the most sense for a particular user application.
We've had good luck doing something like this on a project. Our web services mainly do standard content management, with a high proportion of reads (GET) to writes (PUT, POST, DELETE). So if your logic layer is similar, this is a very reasonable approach to consider.
In one case, we have a video player app on Android (Motorola Droid, Droid 2, Droid X, ...) which is supported by a set of REST web services off in the cloud. These expose a catalog of video on demand content, enable video session setup and tear-down, handle bookmarking, and so on. REST worked out very well for this.
For us one of the key advantages of REST is scalability: since RESTful GET responses may be cached in the HTTP infrastructure, many more clients can be served from the same web application.
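As an illustration of that point, marking a GET response as cacheable lets proxies and CDNs answer repeat requests without touching the application; the endpoint and TTL below are made up, using Spring MVC as a stand-in.

    import java.util.concurrent.TimeUnit;

    import org.springframework.http.CacheControl;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    class CatalogController {

        @GetMapping("/catalog/videos")
        ResponseEntity<String> listVideos() {
            // Cache-Control: public, max-age=600 allows HTTP intermediaries to serve
            // subsequent GETs without hitting the web application at all.
            return ResponseEntity.ok()
                    .cacheControl(CacheControl.maxAge(10, TimeUnit.MINUTES).cachePublic())
                    .body("[]"); // placeholder payload
        }
    }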
But REST doesn't seem to fit some kinds of business logic very well. For instance in one case I wrapped a daily maintenance operation behind a web service API. It wasn't obvious what verb to use, since this operation read data from a remote source, used it to do a lot of creates and updates to a local database, then did deletes of old data, then went off and told an external system to do stuff. So I settled on making this a POST, making this part of the API non-RESTful. Even so, by having a web services layer on top of this operation, we can run the daily script on a timer, run it in response to some external event, and/or have it run as part of a higher level workflow.
Since you're using Android, take a look at the Java Restlet Framework. There's a Restlet edition supporting Android. The director of engineering at Overstock.com raved about it to me a few years ago, and everything he told us was true, it's a phenomenally well-done framework that makes things easy.
Sure, REST could be used for that. But first ask yourself: does it make sense? REST is a tool like any other, and while a good one, it is not always the best hammer for every nail. The advantage of building this interface RESTfully is that, IMO, it will make it easier in the future to create other uses for this data - maybe something you haven't thought of yet. If you decide to go with a REST API, your next question is: what language will it speak? I've found AtomPub to be a great way for processes/applications to exchange info - and it's very extensible, so you can add a lot of custom metadata and still be easily parsed with any Atom library. Microsoft uses AtomPub in its cloud (Azure) platform to talk between the data producers and consumers. Just a thought.