How to update multiple entities in Cuba Platform Frontend UI - cuba-platform

I have a function that returns data from many entities and puts them in a table. When the table is changed and the save button is clicked I need to send all the data to their entities. Is there a way to do this in one action?

If you mean that you need a REST API method which can update several entities at once, then there is no such built-in method in the CUBA REST add-on.
However, backend developers can create and expose a "service method" for you that accepts a list of entities or POJOs as a parameter. Then you will be able to call it from the web page.
See https://doc.cuba-platform.com/restapi-7.2/#rest_api_v2_ex_service_post
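For illustration, calling such a service method from the web page could look roughly like the sketch below. The service name, method name and base path are hypothetical; the real values depend on what the backend team registers in rest-services.xml and on how the app is deployed.

async function saveAll(entities: unknown[], accessToken: string): Promise<void> {
  // POST /rest/v2/services/{serviceName}/{methodName} with named parameters in the JSON body
  const response = await fetch('/app/rest/v2/services/myapp_EntityBatchService/updateAll', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${accessToken}`,   // OAuth token obtained from the REST add-on
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ entities })            // "entities" must match the method's parameter name
  });
  if (!response.ok) {
    throw new Error(`Service call failed with status ${response.status}`);
  }
}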

Related

Is it okay to return all data related to the resource in one endpoint

We have a recipe in our mobile application, this recipe has a details screen and the details screen has a lot of information, for example:
Rating
Related recipes
Ingredients
Recipe info
Nutritional info
So is that right to return all this info in the same endpoint or create multiple endpoints for each section?
One endpoint example:
GET: https://www.example.com/api/v1/{recipeId}
Multiple endpoints example:
GET: https://www.example.com/api/v1/{recipeId}/info // this API will return all info including the ingredients
GET: https://www.example.com/api/v1/{recipeId}/rating
GET: https://www.example.com/api/v1/{recipeId}/related
GET: https://www.example.com/api/v1/{recipeId}/nutritional
It depends.
If a consumer of your API is, say, a web page where you want to display all the information at once, you can bring all the information together and return it in one go rather than calling APIs one by one and aggregating. However, if the individual sections may also need to be requested separately, then you can expose multiple endpoints.
Also, your resource URI should look like this:
/api/v1/recipes/{recipeId}
In this case, you can create a single API endpoint and expose a query parameter that indicates which fields the consumer (REST client, web) is interested in.
/api/v1/recipes/{recipeId}/?fields=all
/api/v1/recipes/{recipeId}/?fields=info,rating,related
/api/v1/recipes/{recipeId}/?fields=info
This way you are saved the headache of writing multiple endpoints for what is really a single resource.
Also, the schema of the output message or JSON stays the same.
Maybe in the future you want to add another field to your response. You just add another field name; your client (web/REST) doesn't need a new API for that. It just passes the new field and is done.
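A rough Express-style sketch of that ?fields= idea (the loader functions are hypothetical stand-ins for whatever data layer you use):

import express from 'express';

// Hypothetical data-layer stubs
const loadInfo = async (_id: string) => ({});
const loadRating = async (_id: string) => ({});
const loadRelated = async (_id: string) => [];
const loadNutritional = async (_id: string) => ({});

const app = express();

app.get('/api/v1/recipes/:recipeId', async (req, res) => {
  const requested = String(req.query.fields ?? 'all');
  const fields = requested === 'all'
    ? ['info', 'rating', 'related', 'nutritional']
    : requested.split(',');

  // Build the response from only the requested sections
  const recipe: Record<string, unknown> = { id: req.params.recipeId };
  if (fields.includes('info')) recipe.info = await loadInfo(req.params.recipeId);
  if (fields.includes('rating')) recipe.rating = await loadRating(req.params.recipeId);
  if (fields.includes('related')) recipe.related = await loadRelated(req.params.recipeId);
  if (fields.includes('nutritional')) recipe.nutritional = await loadNutritional(req.params.recipeId);

  res.json(recipe);
});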

WorkFront / AtTask API querying secondary objects

I'm using the WorkFront / AtTask API and when looking up Tasks, I'd like to filter them down to the Projects that contain specific Roles.
Using /TASK/search/?fields=project:roles will show me the Roles, but then I'm not sure how to filter on those.
Accessing a tertiary object directly (fails)
I tried /TASK/search/?project:roles:ID=aaaaaaa but the API returns (422) Unprocessable Entity.
Access from the parent object (works)
task -> project -> /PROJ/search/?roles:ID=aaaaaaa works, but involves sub-queries to the API that are costly and slow.
Access from secondary object's ID fields (works)
/TASK/search/?project:ownerID=bbbbbbb works, since it references a field of a secondary object rather than yet another object. But I've only been able to make this work with single-instance references and I don't know how to access the ID fields of collections without referencing them as objects.
So how could I filter or access down to a secondary object's collection? I can view them in a single API query, but can't seem to filter.
Task > its Project > filter by Role
This functionality is not available in Workfront, neither through the API nor through built-in tools like Reports. This is due to a constraint on the database side of things. After seeing this question I spoke with my enterprise support team at Workfront and received confirmation of this from the DBAs.
The solution that you provided is the best you can do - split this query into the front and back half of your parameters and filter results within your code.
The best solution I can think of thus far is:
1. Pull the list of acceptable projects based on role:
/PROJ/search/?roles:ID=aaaaaa&...
2. Save the list of projects in memory.
3. Pull the list of Tasks in question:
/TASK/search/?...
4. Remove the tasks whose project ID is not in the list from step 2.
This way it's only 2 queries and the project query should have a minimal impact in terms of size and number of entries.
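A sketch of those steps with a fetch-based client follows. The base URL, the apiKey parameter and the { data: [...] } response envelope are assumptions to check against your Workfront instance and API version.

const BASE = 'https://yourdomain.attask-ondemand.com/attask/api';

async function tasksForRole(roleId: string, apiKey: string) {
  // Steps 1-2: projects that contain the role, kept in memory as a set of IDs
  const projRes = await fetch(`${BASE}/PROJ/search?roles:ID=${roleId}&fields=ID&apiKey=${apiKey}`);
  const projectIds = new Set(
    ((await projRes.json()).data as { ID: string }[]).map(p => p.ID)
  );

  // Step 3: the tasks in question, including their project IDs
  const taskRes = await fetch(`${BASE}/TASK/search?fields=projectID&apiKey=${apiKey}`);
  const tasks = (await taskRes.json()).data as { ID: string; projectID: string }[];

  // Step 4: drop tasks whose project was not in the list from step 2
  return tasks.filter(task => projectIds.has(task.projectID));
}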

What's the feathersjs way to handle non-data related actions and child objects?

I just discovered feathersjs and really like the idea behind it, even though I'm still unsure how the service-based philosophy fits applications that are more complex than a simple CRUD UI.
In order to better understand it I made up an example: Consider an application where you can create and share surveys. You could easily manage to create a survey service to create, update and get the properties of a survey (i.e. questions and answers). But how should one handle the following aspects:
1) There are actions, i.e. service invocations, which do not affect the data at all. One action could be to send a reminder email to all invited users who have not participated in a survey yet. If I weren't using feathers I would create a dedicated Express endpoint for this, but how do such actions fit into the feathers philosophy? Should one create a service (implementing only one HTTP verb) per action? This would get confusing soon. Use hooks that detect updates on virtual fields and trigger the action? Hard to document and confusing as well.
2) Imagine users could add comments to a survey. The comments would be part of the survey model (I'd use MongoDB for that, so consider each survey object to have a comments array). The web client would invoke GET /survey/123 on the survey service, which would return the comments among the other properties (questions, answers, ...). But what about adding comments? Should I use a dedicated service for that, or how would this fit into the survey service? What would such a request look like?
From the Feathers slack channel: https://feathersjs.slack.com/messages/C0HJE6A65/
Sending an email is fine in a hook. For actions you could do a patch with a certain action attribute and then use hooks to determine which action should be performed, etc. The other way would be a simple small service that only has create implemented. For comments I would probably have a comments or survey-comments service and then your survey/123 could populate the comments. Or the web could make 2 calls, one to fetch the survey, the other to fetch the comments.
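A minimal sketch of those two ideas, i.e. a create-only action service plus a separate comments service. The service names and data shapes are illustrative, not a prescribed API, and the import differs between Feathers versions:

// Feathers v5 uses a named export; older versions use a default export or require('feathers')
import { feathers } from '@feathersjs/feathers';

const app = feathers();

// "Send reminders" as a create-only service: POST /survey-reminders triggers the
// action without pretending to be a CRUD resource.
app.use('survey-reminders', {
  async create(data: { surveyId: string }) {
    // look up non-respondents and send the emails here (omitted)
    return { surveyId: data.surveyId, status: 'queued' };
  }
});

// Comments as their own service instead of a nested array on the survey
app.use('survey-comments', {
  async create(data: { surveyId: string; text: string }) {
    // persist the comment and return it (omitted)
    return data;
  },
  async find(params: { query?: { surveyId?: string } }) {
    // return the comments for params.query.surveyId (omitted)
    return [];
  }
});

// GET /survey/123 can then populate its comments in an after hook, or the client
// simply makes two calls: one for the survey, one for /survey-comments?surveyId=123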

API design pattern to be integrated both by own web app and other systems

This backend will be consumed by an ad-hoc front-end application, but it will also be integrated with other systems, and we will expose an API for them.
When designing the REST API I see that there is ONE database table (let's call it table A) that can join many other tables, let's say about 10 to 20 of them.
Now, my strategy would be to build routes in my backend that will "reason" according to the ad-hoc frontend we have.
So if there is a page in the frontend (let's call this page page1) that needs rows from table A but also fields from, say, 3 other joined tables, then I want to create a route in the backend called maybe "page1" which will return rows from table A and also from the other 3 tables.
This is of course an ordinary way to build a backend. But as it will also be used by other systems, somebody could argue that those systems may have no need for the route "page1"; their frontends may never build a "page1".
So according to people here, it would be better to build the API more agnostically, and instead of creating the route "page1" I should build it according to HATEOAS. If I understand that principle, instead of letting my ad-hoc frontend request the resource "page1", it would request "pageForTableA", and the resource "pageForTableA" would then return which tables can be joined.
In this case, for my frontend's page1, I would need to make 4 subsequent requests to the server, instead of the one request I could make if there were a "page1" resource in the backend.
What do you think?
I also see a third strategy. I don't know if there is a name for this pattern, but it would work this way:
A resource in the backend that returns only rows from table A. BUT the route also takes an argument: an array with the names of all the other tables the caller wants to include.
So if the frontend calls:
getTableA(array('tableB', 'tableD', 'tableF'))
then the resource would include/join tables B, D and F. In short: the API resource lets the frontend decide what it wants delivered.
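Over HTTP, that call could look roughly like the following; the endpoint name and the "include" parameter are made up for illustration:

async function getTableA(include: string[]): Promise<unknown[]> {
  const params = new URLSearchParams();
  if (include.length > 0) {
    params.set('include', include.join(','));   // e.g. include=tableB,tableD,tableF
  }
  const res = await fetch(`/api/v1/table-a?${params.toString()}`);
  return res.json();
}

// The backend joins only the tables that were asked for:
getTableA(['tableB', 'tableD', 'tableF']);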
Which of these 3 strategies do you think is best? Or are there others that should be taken into consideration?
You need to architect your API in a way that consumers don't have to know how the data is stored in the underlying data store.
Furthermore, if you want to allow consumers to decide which fields to project into the response, you could let them specify those fields using some query-string format.
BTW, maybe you should avoid re-inventing the wheel. There's a standard called Open Data Protocol (OData) which already defines a lot of what you require in your API, and since it was created by Microsoft, it has deep support in .NET.
Check out this tutorial (Create an OData v4 Endpoint Using ASP.NET Web API 2.2) to get started with OData.
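For comparison, the "client decides what to join" idea from strategy 3 maps directly onto standard OData query options; the service root and entity set names below are made up:

async function getTableAViaOData(): Promise<unknown> {
  // $select projects fields of table A, $expand joins the related entity sets
  const res = await fetch('/odata/TableA?$select=Id,Name&$expand=TableB,TableD,TableF');
  return res.json();
}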

Zend Framework 2 - Importer for multiple REST or SOAP APIs

I want my ZF2 Application to import data from many different REST or SOAP Services, which may use different authentication types and so on.
Now I'm basically looking for a structure / architecture how to implement this, maybe some design patterns or ready to use modules if they exist.
Every information could help. I'm also thankful for API docs or tutorials that you provide.
But my main question is: what should the structure of this kind of "importer" look like?
My Application:
Based on Zend Skeleton Application
Using Doctrine 2
Trying to use all ZF2 Best Practices I can find
Consists of many modules, entities and complex associations in some cases
Entities that I want to import are already working (CRUD operations, validation, ...)
APIs that I want to use:
Usually e-commerce stuff, like products, orders, stock keeping
Magento API (thinking of REST)
Shopware and other important web shops
Ebay Stores
Amazon (I think is going to be the hardest one)
Must have functionality:
I want the API URLs and authentication data to be configurable in my app with Doctrine entities
The "Api" entity should be associated with my "Shop" entity. Orders or products that I import or create directly in my app are also associated with my Shop entities, so every shop / eBay store / Amazon store is a "Shop" in my application. This is the part I've already done.
Product import, for example, should be triggered directly from my app frontend; I'm thinking of retrieving the API data first and then importing it incrementally / step by step
I don't want fat controllers that transform the data into Doctrine entities and save them one by one; that way complex associations would become very hard to maintain
I need a good approach for data transformation and hydration into Doctrine entities, because the data I retrieve from an API will usually not have the same structure as my entities. Maybe an attribute that is a property of the "Product" entity in the foreign app is moved out into an associated entity in my own application
Many modules in my application will have entities that should be importable from these APIs, so I need a central component that does the job
What would be the best approach for this? I'm not asking for a complete solution, but for ideas that fit these requirements.
The Zend HTTP client and its relatives (like Zend OAuth) provide most of the functionality you need to fetch the data from the services.
You can then persist the responses in any number of ways, but a schema-less database like MongoDB makes saving dynamic data much easier. If you are stuck with a relational DB like MySQL, you can either set up an EAV schema or use dynamically generated tables.
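To sketch the central importer structure the question asks about, here are the moving parts as interfaces (shown in TypeScript only for brevity; in ZF2 these would become services wired through the service manager, with Zend\Http\Client behind the client and Doctrine hydrators behind the persister). All names are illustrative:

// Fetches raw records (products, orders, ...) from one remote shop API,
// configured from the "Api" entity (URL, auth type, credentials)
interface ApiClient {
  fetchBatch(resource: string, page: number): Promise<unknown[]>;
}

// Maps the remote structure onto the shape your own entities expect,
// e.g. splitting a flat product attribute out into an associated entity
interface Transformer {
  transform(raw: unknown): Record<string, unknown>;
}

// Hydrates the normalized data into Doctrine entities and persists them
interface EntityPersister {
  hydrateAndPersist(data: Record<string, unknown>): Promise<void>;
}

// One importer per shop, composed from the three parts above
class Importer {
  constructor(
    private client: ApiClient,
    private transformer: Transformer,
    private persister: EntityPersister
  ) {}

  // Incremental, step-by-step import as described in the question
  async importResource(resource: string): Promise<void> {
    let page = 0;
    let batch = await this.client.fetchBatch(resource, page);
    while (batch.length > 0) {
      for (const raw of batch) {
        await this.persister.hydrateAndPersist(this.transformer.transform(raw));
      }
      batch = await this.client.fetchBatch(resource, ++page);
    }
  }
}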