MuleSoft best practices for API-led connectivity: is it okay to invoke a System API directly from the client application (be it web or mobile)?

The main reason for this question is to understand the reasoning behind the best practices for using System APIs. If the System API itself is good enough to serve the purpose of my client application, do we still need to write an Experience API to invoke the System API indirectly, or can we break the rule and invoke the System API directly from the client application? Sometimes the extra layer feels like overhead and adds more API calls over the network.

A System API unlocks or exposes a system asset (back-end data). Now, one could write the System API in such a way that it fetches data from the system database, does the required processing (for instance converting table rows to JSON), then does some enrichment and trimming of fields, and exposes the result to Customer A. This is a coarse-grained approach. Now another customer, B, requires similar data but needs some of the fields you trimmed to serve Customer A, who wanted only a few of the many fields you picked from the system (database). You'll have to write a separate coarse-grained API for Customer B.
Also, if the backend system is replaced with a new system in the future, you would have to rewrite or update both APIs, one for Customer A and one for Customer B.
The coarse-grained approach would solve your problem each time, but architecturally, a fine-grained approach of breaking a large service down into layers of Experience, Process and System APIs enables reuse, reduces work effort, shortens time to market, lowers total cost of ownership, and lets you apply separate policies (security, SLAs, etc.) for each client through the Experience API layer. You can then scale your integration landscape much better.
A fine-grained approach increases the usage of resources such as network and disk space (more logging), but that is the trade-off for the many advantages you get. Again, the decision to go with either approach should align with the current circumstances of your ecosystem, so it all depends.
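To make the layering concrete, here is a minimal sketch in C# (all type names are hypothetical illustrations, not MuleSoft artifacts) of one reusable System API payload being reshaped per consumer by thin Experience-layer projections:

```csharp
// Hypothetical sketch (not MuleSoft artifacts): one reusable System API payload,
// reshaped per consumer by thin Experience-layer projections.
public record SystemCustomer(                    // full record exposed by the System API
    string Id, string Name, string Email,
    string Segment, string BillingAddress, string ShippingAddress);

public record CustomerAView(string Id, string Name, string Email);             // what Customer A needs
public record CustomerBView(string Id, string Name, string Segment,
                            string ShippingAddress);                           // what Customer B needs

public static class ExperienceLayer
{
    // Each Experience API only reshapes the System API response for its consumer;
    // if the backend system is ever swapped out, only the System API changes.
    public static CustomerAView ForCustomerA(SystemCustomer c) => new(c.Id, c.Name, c.Email);

    public static CustomerBView ForCustomerB(SystemCustomer c) => new(c.Id, c.Name, c.Segment, c.ShippingAddress);
}
```

Adding a new consumer then means adding another small projection, not another coarse-grained System API.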

Related

Considerations for Creating Industrial Applications (Native/Web)

What considerations are needed when creating a web app that is intended to be used in an industrial plant setting for a company? My specific use case is an industrial facility with several different production plants that would each have its own device for the application interface.
How do companies enforce the usage of such apps on a monitor/tablet? For example, could I prevent them from using other stuff on the tablet?
Importantly, how would security work? They'd share a device. There may be multiple operators that use the app in a given shift. Would they all use the same authentication session (this is not preferable, as I'd like to uniquely identify the active user)? Obviously I could use standard username/passwords with token based sessions that expire, however, this leaves a lot of potential for account hijacking. Ideally, they'd be able to log on very quickly (PIN, perhaps?) and their session would end when they are done.
As long as there is an internet connection, I would presume that there isn't much pro/con regarding the use of native applications versus web-based or progressive web apps. Is this assumption correct?
What's the best way of identifying which device the application is being run on?
Is this a common thing to do in general? What other technologies are used to create software that obtains input from industrial operators?
--
Update - this is a good higher level consideration of the question at hand, however, it has become apparent why focused, specific questions are helpful. As such, I will follow up with questions that are specific.
Identifying the Area/Device a Web Application is Accessed On
Enforcing Specific Application Use on Tablets
Best Practices for Web App Authentication in Industrial Settings
I'm not able to answer everything in great detail, but here are a few pointers. In the kind of environment you describe we usually see two options: 1) you tell them what you need (internet access, security, whether they provide the device and how it will be configured), or 2) they tell you exactly what you need to deliver.
I do not think you can 100% prevent them. We did it by providing the tablet (well, laptops in our case) and the OS configuration took care of that; the downside was that we had a fleet of devices to support. You seem to hint that there is always an internet connection, so I guess you can collect all the info about the system and send it back to yourself daily?
We were allowed to "tap" into their attendance software, and when you entered the facility you were able to use your 4-digit PIN to log in; if you were off the premises you could not log in at all. I can imagine the following: you log in with your username and password, which does the full verification; after that, you can use a 4-digit PIN to log in for the next n hours (a sketch of this flow follows this answer).
Maybe, kind of; it depends on what you are doing. Does the browser have all the features you need? Our system needs multicast to perform really fast, so we have a native app.
Touched on this in 1. You could also use a device enrolment process. You can also state contractually that only your software will be installed and that anything else may invalidate the support contract. It really depends on your creativity. My favourite (and it works): just tell them that only my software will be installed, and if not, they will pay me double for support. I have only seen one customer who installed some junk on the device after being told not to.
It really depends on what industry you are talking about; every industry is different. We almost always build a custom solution.
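As a rough illustration of the login flow described above (full credential verification once, then a short PIN for the next n hours), here is a minimal C# sketch; the class, the in-memory store and the 8-hour window are all hypothetical choices, not a production design:

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical sketch of the two-step login: full credentials establish a device-bound
// grant, then a 4-digit PIN re-authenticates quickly for the next n hours.
public class ShiftAuthService
{
    private record Grant(string UserId, string Pin, DateTime ExpiresUtc);

    private readonly ConcurrentDictionary<string, Grant> _grantsByDevice = new();

    // Step 1: full verification with username/password (identity check stubbed out here).
    public bool FullLogin(string deviceId, string userId, string password, string pin)
    {
        if (!VerifyPassword(userId, password)) return false;
        // In a real system the PIN would be hashed and the window made configurable.
        _grantsByDevice[deviceId] = new Grant(userId, pin, DateTime.UtcNow.AddHours(8));
        return true;
    }

    // Step 2: quick PIN login, valid only while the grant has not expired.
    public string? PinLogin(string deviceId, string pin)
    {
        if (_grantsByDevice.TryGetValue(deviceId, out var grant)
            && grant.Pin == pin && grant.ExpiresUtc > DateTime.UtcNow)
            return grant.UserId;   // start a short per-operator session for this user
        return null;
    }

    private static bool VerifyPassword(string userId, string password) =>
        true; // replace with a call to your real identity provider
}
```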
The enforcement of device/app usage depends on the customer; if the customer asks for help with enforcement, then you can provide guides, training and workshops. If the customer is serious about enforcement, it becomes a policy adopted by the whole organization from the top down. Usually seniors will resist a workflow change more than juniors, so top management/executives should deal with that. Real-life story: an SAP team took 6 months to transform a major newspaper's workflow, and during that time a few seniors got fired because they refused to adopt the change.
Security shouldn't handicap the users. Usually in an industrial environment the network is isolated, or at least restricted through VPN to connect multiple sites (plants in your case). Regarding the active user: we usually provide guides/training/workshops for the users and inform them that using a colleague's account or device will prevent the system from tracking their accomplishments/tasks, so each user is responsible for making sure the active account/device is the one assigned to him/her.
It depends. With native you have more control than with web, but if the app is just doing monitoring, then most of today's apps use the web for monitoring, and the common way to receive input is REST APIs (even if the industrial devices don't support a REST API, a middleware could be written to transform the output). If you need more depth on native vs. web, you need to ask a new question with more details about the requirements.
Depends on the tech you are using (native or web) and the things I mentioned in point 2: you can use a whitelist of devices that are allowed to run the app. Overall there are many good ways to track down the device.
How common is it in general? I think such information can only be obtained by survey; the world is full of variations. And something being common doesn't mean it's safe or best; our industry keeps changing at all levels. So to stay in the loop, we must keep learning and self-updating (without a reboot).

Reuse microservices across different projects

We developed a monolithic API to be used as a SaaS.
In the company we also develop custom builds for some customers.
Some of our customers are asking for features that are already implemented in the monolithic application.
We are thinking about splitting our API into microservices, but our major questions are the following:
Can microservices be reused across different projects?
If we do split, do we create one microservice that everybody uses, or do we create an instance per custom build?
E.g.:
project A uses "User" and "Project", so we deploy 2 microservices
project B uses "User", "Project" and "Store", so we deploy 3 microservices
total number of microservices deployed: 5
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom build?
Or do we stick with one instance per microservice that everybody uses, and we specify the project source?
We are using C# with GraphQL.
We also thought about creating a NuGet package for each component, so each package would contain:
Exposed GraphQL queries / mutations
Its own DB
Its own logic
E.g.:
- Api A installs the "User" & "Project" packages
- 3 DBs are instantiated: "Api.A", "Api.A.User", "Api.A.Project"
- Api B installs the "User", "Project" & "Store" packages
- 4 DBs are instantiated: "Api.B", "Api.B.User", "Api.B.Project" & "Api.B.Store"
But does it make sense to do that?
In my mind it could be very similar to Hangfire https://www.hangfire.io/
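For illustration, here is a minimal, hypothetical C# sketch of what one such component package ("User") might expose: an extension method that wires up the package's own DbContext (its own database) and its query type, leaving the host API to decide which packages to plug in. The GraphQL server registration itself depends on the library you choose, so it is only indicated in a comment:

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical sketch of one reusable component package ("User"): it owns its
// entities, its DbContext (its own database) and its logic; the host API decides
// which packages to plug in and which connection string each one gets.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public class UserDbContext : DbContext
{
    public UserDbContext(DbContextOptions<UserDbContext> options) : base(options) { }
    public DbSet<User> Users => Set<User>();
}

public class UserQueries   // exposed as GraphQL queries by whatever GraphQL server the host uses
{
    public IQueryable<User> GetUsers(UserDbContext db) => db.Users;
}

public static class UserModule
{
    // Api A would call AddUserModule with its "Api.A.User" connection string, Api B with its own.
    public static IServiceCollection AddUserModule(this IServiceCollection services, string connectionString)
    {
        services.AddDbContext<UserDbContext>(o => o.UseSqlServer(connectionString));
        services.AddScoped<UserQueries>();
        // The host then registers UserQueries with its GraphQL library of choice
        // (for example as a query type extension).
        return services;
    }
}
```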
Note that we are currently using AWS Serverless to host our applications.
An important point is that we are a small team (2-4 people).
We are very open-minded, so any suggestion is welcome.
Thank you!
First of all, I would like to say that there is no right way here; I am providing my point of view based on the way we have done things, hoping it will guide you in finding the solution best suited to your requirements.
So, to understand your dilemma: you have a base vanilla product, which is an API SaaS, and there are customized deployments for some customers as well. But as you build custom deployments for each customer, you are noticing a common pattern, wherein a lot of the functionality is repeated across the SaaS for each customer.
Now, assuming I have the requirement correct, I would say microservices will provide definite benefits in your case in terms of scaling and customer-specific customization that can be managed by independent teams.
But a lot of this depends on how your business logic is structured and how big and varied your customization is. Some of the questions that should drive your solution are:
Can you store customer-specific data in a central data store, or must it live at the customer's end? How are your databases going to be structured, and how many of them will there be?
How big are the customizations? Are they cosmetic, or do they change the workflow?
How much cross-communication do you expect across the various services like User, Store and Project, and is there any communication across A.User - B.User or A.Project - B.Store, etc.?
Now, moving on to some of the important things you might want to consider once you have answered the above questions.
Consideration 1. If the data stores can be allowed to live in a single central place, you can go ahead with a single cluster where all your microservices are deployed. But looking at the data provided, I assume you have multiple databases per customer, and I would recommend keeping them separate and not introducing any coupling between them. Thus you may end up with one microservice instance per customer which talks only to that customer's database.
Consideration 2. As far as the norm goes, the customization should be separated from the service itself, and every service should have an input for configuration loading which drives the service's behavior (a short sketch of this follows the considerations). Again, depending on how big your customization is, there can be a limit to this configuration, and in those cases I would recommend creating a new service with the customizations built in.
Consideration 3. This is a major factor in deciding the number of microservices you may have: the boundary of each service should be defined by the domain, for example a User service, a Store service and a Project service. These are the vanilla services that interact with each other to produce a valid business scenario, and each customer's deployment is just a specialized instance of these services.
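To make Consideration 2 concrete, here is a minimal, hypothetical C# sketch of configuration-driven behavior: the same vanilla service binary, with per-customer behavior supplied entirely through injected options rather than forked code:

```csharp
// Hypothetical sketch of Consideration 2: one vanilla service binary whose
// per-customer behaviour is driven entirely by injected configuration.
public class ProjectOptions
{
    public int MaxProjectsPerUser { get; set; } = 10;   // Customer A keeps the default
    public bool RequireApprovalWorkflow { get; set; }   // Customer B turns this on
}

public class ProjectService
{
    private readonly ProjectOptions _options;

    // The options instance is bound from appsettings/environment per deployment.
    public ProjectService(ProjectOptions options) => _options = options;

    public bool CanCreateProject(int existingProjects) => existingProjects < _options.MaxProjectsPerUser;

    public bool NeedsApproval() => _options.RequireApprovalWorkflow;
}
```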
OK, now that this is done, let's go over your primary questions.
Can microservices be reused across different projects?
-- Yes, you can, but again it depends on how you have designed the business workflow and configuration injection.
If we do split, do we create a microservice that everybody uses or do we create an instance per custom build?
-- An instance per custom build would be the ideal scenario, enabling separation of concerns across projects, since we do not want to mix data boundaries and client-specific sensitive configurations. That said, there might be cases where a single shared microservice is what is demanded, but that should be done with caution.
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom build?
-- Communication across microservices is an important factor that is more often than not unavoidable. So, given that you will need some form of cross-microservice communication, you can look at an enterprise bus or direct API communication based on your requirements. But it is a well-understood problem, in my opinion.
Or do we stick with one instance per microservice that everybody uses and we specify the project source?
-- I would not recommend this, as the example in your question of a module for database injection doesn't sound like a good idea to me. It would lead to a highly coupled system design, and it might also mean that if one service fails, all your customer sites go down. You surely wouldn't want that!

Mule API Led Connectivity Design Approaches for Experience API

As part of our journey towards API-led Connectivity, we have to group our resources (i.e. API endpoints) into multiple Mule applications for the experience APIs.
In order to have meaningful names for the Mule applications while maintaining maximum reusability, rather than associating consumer names with application names (which makes the experience API tightly coupled with the current application landscape), we propose that the Mule application names reflect the essence of the business.
The options are listed below. Which one do you think is more ideal? What approach have you used in your organization?
based on Channel/Consumer
A dedicated experience API for a consumer such as WEB, CRM, Mobile etc.
uri examples:
www.example.com/example-**web**-application/v1/
www.example.com/example-**crm**-application/v1/
www.example.com/example-**mobile**-application/v1/
Pros: applying channel-specific policies is easier, management becomes easier, smaller outage window
Cons: reusability is reduced and the chance of duplicating objects across APIs increases
based on Business Domain
The company data model is used, e.g. Customer, Product, Payment, etc.
uri examples:
www.example.com/example-**customers**-application/v1/
www.example.com/example-**products**-application/v1/
www.example.com/example-**payments**-application/v1/
Pros: promotes reusability, channel agnostic, the same API can be used across different consumers
Cons: management might get complex, larger outage window, multiple consumers might be impacted
based on Customer Journey
This approach is tied to the customer's lifecycle with the organization, e.g. Prospective Customer --> Lead --> Engage --> Payments --> Customer Retention.
uri examples:
www.example.com/example-**prospect**-application/v1/
www.example.com/example-**lead**-application/v1/
www.example.com/example-**engage**-application/v1/
Pros: channel agnostic, the same API can be used across different consumers
Cons: can get increasingly big, and further breakdown might still be required
Thanks.
As far as I understand your question, you would like to know what URIs to use for the endpoints of the experience APIs, right?
Based on a recent blog entry from MuleSoft (July 12, 2017), Experience APIs are:
Experience APIs are the means by which data can be reconfigured so that it is most easily consumed by its intended audience, all from a common data source, rather than setting up separate point-to-point integrations for each channel. An Experience API is usually created with API-first design principles where the API is designed for the specific user experience in mind.
Based on the examples from MuleSoft and my understanding, the experience APIs are created for one given "experience"; web, virtual reality, mobile, etc...
You are trying to create an API for a given special experience to make the consumption of the API easy for this specific client.
According to my understanding the main goal on this level is not the re-usability. You focus on re-usability on the System API and Process API level, but the Experience APIs are supposed to make the life of the developers of the different clients easier by providing exactly the interface and data they need so they don't have to communicate directly with the system and process APIs, but they get a tailor-made API, suiting exactly their special needs.
Since the experience API is tailor-made for the specific experience / channel / client application, I think representing this in the URI is a good idea.

Which layer is responsible for the business logic?

I am working on a project designed based on Domain-Driven Design.
In this project, we have 5 layers:
Infrastructure
Domain
Application Service
Distributed Service
Presentation
I am confused about where to put my business logic among the Infrastructure, Domain and Service layers. Sometimes I put a business-logic condition into an IQueryable LINQ expression in a repository; sometimes I load all the objects into memory and put the logic into services; and sometimes I put it in a method of a domain object. I don't know which way is right. Which layer should be responsible for this business logic?
I need some concrete reasons to convince a team of developers that business logic in code is better, because it's more maintainable. I used to have a lot of business logic in the DB, because I believed it was the single point of access.
Stored procedures are useful to speed up certain DB operations.
Stored procedures are evil because:
they're hard to version (not the hardest thing, but harder than versioning your project)
they're harder to deploy (e.g. at my job we have thousands of DBs with thousands of stored procedures on a couple of servers; when we change the logic of an SP we have to update every DB: a pain in the neck)
they're difficult to debug
they're difficult to unit test
That said... implementations of repositories are infrastructure, and infrastructure doesn't know about the domain or business logic.
After all, I can't really see DDD in this question; maybe you should dig deeper into concepts like entity, value object and aggregate root, together with repository and domain model.
The only thing we can confirm right now is: business logic intended as domain logic belongs to the domain model/domain layer. Domain logic consists of rules that always act the same way regardless of the use case (e.g. if the order costs more than $100, shipping is free).
If you have a rule that depends on a use case (e.g. if a user browses my e-commerce site with the mobile app, then ...), that is application logic.
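As a rough illustration (hypothetical class names, C#), the domain rule sits on the entity and stays unit-testable without any database, while the use-case-dependent rule sits in an application service:

```csharp
// Illustrative sketch with hypothetical names: the domain rule lives on the entity,
// the use-case-dependent rule lives in an application service.
public class Order
{
    public decimal Total { get; set; }

    // Domain logic: always behaves the same regardless of the use case,
    // and is trivially unit-testable without a database.
    public decimal ShippingCost() => Total > 100m ? 0m : 7.50m;
}

public class CheckoutService   // application layer
{
    // Application logic: depends on how the domain is being used (which channel).
    public decimal QuoteShipping(Order order, bool isMobileApp)
    {
        var cost = order.ShippingCost();
        return isMobileApp ? cost * 0.9m : cost;   // e.g. a mobile-only promotion
    }
}
```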
DDD also follows the "separation of concerns" rule, so business logic around the domain stays in the domain layer, and anything that depends on things outside the domain goes into higher layers, like view models in the presentation layer.
I know this is old, but I've had some experience working on older projects where the database held all the logic and various systems used that logic. Updating any of those systems became a nightmare: making a change to any of it would break something somewhere else.
DDD was built to get around these exact scenarios.
Think of it as having one focused application that controls its domain. Defining the domain is often hard, but let's say you could define a traditional system with 3 domains.
Commerce Domain controls how to take orders.
Logistics Domain controls how to ship orders.
Billing Domain controls how orders are paid for.
Each of these domains would ideally be represented by a layered application, but the whole end-to-end story of an "order" involves all 3 applications. Each domain controls its business and is responsible for doing its job the best way it can.
The Billing domain could be as simple as a web API that appends order data to a CSV file that someone in accounting opens once a month to hand-type an invoice. Or it could grow into a massive, complex beast of QuickBooks integrations automatically pulling money from saved accounts. The Commerce and Logistics domains shouldn't have to care about where the Billing domain saves its data or how they're getting paid. They just have the responsibility to inform the Billing domain when something is sold and when something is shipped.
The Commerce domain likewise shouldn't have to care about how shipping costs are calculated; it just needs to ask the Logistics domain. Commerce shouldn't be rooting around in a database that Logistics owns, because then if Logistics wants to pivot and use Google Maps to determine shipping costs, we'll need to update Commerce as well.
Once you understand the concept of "every domain controls its data; if you need that domain's data, you ask that domain", the next bits kind of fall into line.
Each domain will have a presentation layer or two; this can be a website, API, mobile/desktop app or a combination of the above. Each domain will have business logic in a domain/application layer. Each domain will be supported by infrastructure like databases and APIs.
In the above example we could have a Commerce domain. Its presentation layer renders a website to the user; its domain layer is composed of OrderPage and interfaces for commands/queries. Its infrastructure layer has the logic to handle those commands and queries; most of them probably go to a private database, but we also have some API calls out to the Logistics and Billing domains.
Our Billing domain has 2 projects in its presentation layer. One is an API that fields requests from the Commerce and Logistics domains; the other is a desktop app that we wrote for accounting. They both talk to the same domain objects/interfaces, so if accounting needs to log in and manually modify an order, they can do so just as easily as if it were happening on the website. The interfaces in the domain are implemented by the infrastructure, which could be a QuickBooks API that also forwards data into FreshBooks until that big migration is finished. No code in Commerce or Logistics has to care about FreshBooks/QuickBooks, and we can use both at the same time if we want to.
Our Logistics domain similarly has two projects in its presentation layer: a console app that runs as a scheduled task once a morning to batch up orders, and an API. Same deal with its data.
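A minimal, hypothetical C# sketch of that last pattern, the domain-owned interface implemented by infrastructure, so the accounting-system choice never leaks into Commerce or Logistics:

```csharp
// Hypothetical sketch of that pattern inside the Billing domain: the domain layer
// owns the interface, infrastructure implements it, so swapping or doubling up
// QuickBooks/FreshBooks never touches Commerce or Logistics code.
public interface IInvoiceStore                        // Billing domain layer
{
    void RecordSale(string orderId, decimal amount);
}

public class QuickBooksInvoiceStore : IInvoiceStore   // Billing infrastructure layer
{
    public void RecordSale(string orderId, decimal amount)
    {
        // Call the QuickBooks API here; a second adapter could forward to FreshBooks.
    }
}

public class BillingService                           // Billing domain/application layer
{
    private readonly IInvoiceStore _invoices;
    public BillingService(IInvoiceStore invoices) => _invoices = invoices;

    // Commerce reaches this through the Billing API (presentation layer), never via a shared database.
    public void OnOrderSold(string orderId, decimal total) => _invoices.RecordSale(orderId, total);
}
```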
Ok that got a bit too long and I'm going to wrap that up. No one will probably read this answer on a 4 year old post anyways lol.

"reasonable" use of web APIs to sync data

My goal is to synchronize a web-application with an internal database. The web-application has a public API, but in order to fully synchronize the two sources I would need to make around 2000 separate API calls every time. My instinct tells me that this is excessive and possibly irresponsible, but I lack the experience to know for sure.
In this particular case the web-application is Asana, but I've encountered similar situations before with other services. Is there any way to know if you're abusing a service through excessive API calls? I know I'm not going to DOS a company like Asana, but I can't shake the feeling that there must be a better way than making ~150k requests per day.
The only other option I can think of is to update the web-service only when I know there's been a change in the database, but I'll lose a lot of capability that way.
I apologize for the subjectivity of this question, but I'm really hoping that someone can explain if there's any kind of etiquette that's expected when using public APIs.
(I work at Asana)
This is an excellent question, or rather set of questions.
You are designing a system that will repeatedly make requests for every object. What will happen as the number of objects grows? Even if your initial request rate were reasonable, this would suffer problems with scalability. A more scalable solution is one that scales with the number of changes in the system. This will also grow over time, but much more slowly - the number of changes a single user can make per day is relatively constant, but the total number of objects they've created over time grows and grows. So my first piece of advice would be to avoid doing things this way, and instead find a way to detect changes and just act on those. It would be interesting to know why you feel you'll lose capability by taking this approach.
Now, I happen to know that the Asana API does not currently provide you with any friendly mechanism to just detect changes in the system. This is a commonly requested feature and we are looking into it, though I unfortunately cannot promise a delivery date. So you might be left with no choice but to poll our system for now.
As for being polite to the API, many service providers set limits on their API usage to prevent accidental or malicious use of the API from impacting the service to their other customers -- Asana is no exception. Sometimes these limits are published, other times not, and there is no standard limit: it all depends on the service. But it is very thoughtful of you to be curious about service limitations.
That said, 150k requests per day is, for the Asana API, kind of a lot. If all of our API users gave us that much traffic, we might be serving more requests per day than Google Web Search, and we're not quite that scalable yet. :) Technically, we do sometimes handle requests at that volume from a single user.
If you must poll, try to poll on intervals like 15 minutes. But please do not poll your entire workspace on this time period; it's likely to be too much traffic/data. We're working on trying to provide you with a better solution.
If you do happen to make too many requests of the Asana API, you will get back HTTP status code 429 instead of your desired response; you can read more about that here (https://asana.com/developers/documentation/getting-started/errors).
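As a hedged illustration of being polite to a rate-limited API, here is a small C# sketch that polls, backs off on HTTP 429, and honours a Retry-After header when the server provides one; the URL, retry counts and backoff policy are placeholder choices, not Asana-specific guidance:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

// Hedged sketch of a "polite" poller: back off when the API answers 429,
// honouring Retry-After when the server sends it. Nothing here is Asana-specific.
public class PolitePoller
{
    private readonly HttpClient _http = new();

    public async Task<string?> GetWithBackoffAsync(string url, int maxAttempts = 5)
    {
        var delay = TimeSpan.FromSeconds(2);
        for (var attempt = 1; attempt <= maxAttempts; attempt++)
        {
            var response = await _http.GetAsync(url);
            if (response.StatusCode == HttpStatusCode.TooManyRequests)   // HTTP 429
            {
                var wait = response.Headers.RetryAfter?.Delta ?? delay;
                await Task.Delay(wait);
                delay = TimeSpan.FromTicks(delay.Ticks * 2);              // exponential backoff
                continue;
            }
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
        return null;   // still rate limited; wait for the next polling interval
    }
}
```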