How to structure a big application in React Redux that consumes a REST API?

I've looked into consuming APIs in Redux using the redux-thunk middleware. I understand the flow of calling an API from Redux, and I understand how to structure a slice of my state to reflect the status of an API call, i.e. fetching, success, or fail. What I'd like to know now is: how do I structure a very large application to avoid boilerplate code? From what I gathered reading the docs, a single API call requires:
Action creators to dispatch an API_CALL_FETCHING action and an API_CALL_SUCCESS action or an API_CALL_FAIL action
Reducer code to handle each of the actions
A slice of your state dedicated towards reflecting the status of your API calls
Assuming I have a resource that allows basic CRUD operations, naively that means I should write 12 different actions (4 CRUD operations × 3 status actions per call). Now imagine I have many resources that allow CRUD operations, and this starts to get huge.
Is there any elegant way to condense the code necessary to make many API calls? Or does a large application simply demand lots of repetition in this area?
Thanks!

Yes. There are numerous ways to abstract the process of making API calls, and dozens of existing libraries to help with that.
Generically speaking, you can write "factory functions" that take some set of parameters (API endpoints, data descriptions, etc), and return a set of actions, reducers, and other logic for actually making the API calls and handling the data.
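As a minimal sketch of such a factory function (all names here, like `makeFetchDuck`, are illustrative and not from any particular library), one could generate the three lifecycle actions, the status-tracking reducer, and a thunk from a single resource name:

```javascript
// Sketch of an action/reducer factory for one resource's fetch lifecycle.
// Assumes redux-thunk-style dispatch; names are illustrative.
function makeFetchDuck(resource) {
  const FETCHING = `${resource}/FETCHING`;
  const SUCCESS = `${resource}/SUCCESS`;
  const FAIL = `${resource}/FAIL`;

  const actions = {
    fetching: () => ({ type: FETCHING }),
    success: (data) => ({ type: SUCCESS, payload: data }),
    fail: (error) => ({ type: FAIL, payload: error }),
  };

  const initialState = { status: 'idle', data: null, error: null };

  // Tracks the status of the API call in this slice of state.
  function reducer(state = initialState, action = {}) {
    switch (action.type) {
      case FETCHING:
        return { ...state, status: 'fetching', error: null };
      case SUCCESS:
        return { ...state, status: 'success', data: action.payload };
      case FAIL:
        return { ...state, status: 'fail', error: action.payload };
      default:
        return state;
    }
  }

  // A thunk that drives the lifecycle for any promise-returning API call.
  const fetchThunk = (apiCall) => async (dispatch) => {
    dispatch(actions.fetching());
    try {
      dispatch(actions.success(await apiCall()));
    } catch (err) {
      dispatch(actions.fail(err.message));
    }
  };

  return { actions, reducer, fetchThunk };
}
```

With this, `makeFetchDuck('users')`, `makeFetchDuck('posts')`, etc. each cost one line instead of three hand-written actions plus reducer cases; extending the factory to all four CRUD operations follows the same pattern.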
For existing examples, see the Action/Reducer Generators#Network Requests and Entity/Collection Management sections of my Redux addons catalog. There's also some more intentional abstraction layers on top of Redux, like redux-tiles and Kea.

Related

[Question + Discussion]: what are the tradeoffs of using apollo-client in redux application?

I have a Redux application that fetches data from a GraphQL server. I am currently using a lightweight GraphQL client called graphql-request, and all it does is help you send GraphQL queries/mutations, but I would like to get the best out of my APIs. Even though I am using Redux for state management, is it OK to use apollo-client without its built-in cache and use it only for network requests/API calls?
Benefits I know I would get from using apollo-client include:
Better error handling
Better implementation of auto-refreshing tokens
Better integration with my server, since my server is written with apollo-server
Thanks
Apollo-client's built-in cache does pretty much the same job that Redux state management would do for your application. Obviously, if you are not comfortable with it, you can use Redux to implement the functionality that you need, but the best-case scenario in my opinion would be to drop Redux, since its configuration is pretty heavy, and rely purely on the cache provided by apollo-client.

Query database directly or fetch from API in GraphQL resolvers?

I have a microservice application with a few services. I'm planning to implement GraphQL for the application.
An approach I have in mind is to implement a layer of APIs in each of the services first. Then, the GraphQL resolvers would make requests to the services' API endpoints and return the results. This method seems neat to me because I will only have one GraphQL endpoint for my frontend to work with.
At the same time, however, I'm not sure if this is a good idea at all. Instead of querying against the database directly in my resolvers, I'm actually making extra HTTP requests in my resolvers and creating overheads through network transfers. I'm guessing this would impact the overall performance with the extra layer of API calls.
One of the benefits of GraphQL is to prevent over fetching. With that extra layer of API calls in the resolvers, I'm effectively already fetching all the fields in the response of the API. Does this sound like another problem with the approach I have described?
When implementing GraphQL in a microservice application, should I have a layer of API for all the services and then have GraphQL resolvers fetching from them, or should I aim to query against the services' database directly in the GraphQL resolvers?
This sounds like a pretty normal way of doing things. Over-fetching (e.g. all the fields on an entity) at the GraphQL <-> Platform boundary is arguably beneficial because you can relatively easily add entity-level caching that's close enough to the source of truth that you can also handle cache invalidation.
Whilst those additional requests do add overhead, you can take advantage of various techniques to reduce it (keep-alive, connection pooling, HTTP/2 multiplexing, etc). Ultimately, what you have is a pattern that'll be forced on you anyway once you hit a certain scale.
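To make the pattern concrete, here is a sketch of a resolver that fetches a whole entity from a hypothetical user service's REST endpoint (the URL, field names, and `makeUserResolver`/`fetchJson` are all illustrative). Injecting the fetch function keeps the resolver easy to stub in tests or wrap with an entity-level cache:

```javascript
// Sketch: a GraphQL resolver map that calls a service's REST API instead
// of the database. `fetchJson` is injected so it can be stubbed, cached,
// or backed by a keep-alive agent; all names are hypothetical.
function makeUserResolver(fetchJson, baseUrl) {
  return {
    Query: {
      // GraphQL resolvers receive (parent, args, context, info);
      // only args is needed here.
      user: async (_parent, args) => {
        // Over-fetches the whole entity at the service boundary;
        // GraphQL then trims the response to the fields the client asked for.
        return fetchJson(`${baseUrl}/users/${args.id}`);
      },
    },
  };
}
```

Swapping `fetchJson` for a cache-aware wrapper later is how the entity-level caching mentioned above can be slotted in without touching the resolver itself.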

What are some good ways of converting business logic errors to rest API errors?

I have a business logic layer that acts as a service and is agnostic to any application-facing or user-facing interface. For example, I've got a UserService which takes care of operations related to users (e.g. creating users), and at the moment it returns a custom error object that includes a message explaining what went wrong. Now my RESTful API would use services to handle API requests, but how do I handle business errors? How do I know what status code to use? I obviously don't want to put a lot of if statements in every single API call. I also thought about having a global error handler that would map every single business logic error to a status code and return that, but that's also very verbose code. I'd love to hear some good ideas to handle this elegantly.
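One common shape of the "global error handler" idea mentioned in the question is to give domain errors a stable code and keep a single code-to-status table in the HTTP layer. A minimal sketch (class, codes, and function names are all illustrative):

```javascript
// Sketch: business errors carry a stable code; the HTTP layer owns a
// single mapping table from codes to status codes. Names are illustrative.
class BusinessError extends Error {
  constructor(code, message) {
    super(message);
    this.code = code;
  }
}

// One place to maintain the mapping, instead of if-chains in every handler.
const STATUS_BY_CODE = {
  NOT_FOUND: 404,
  VALIDATION: 400,
  DUPLICATE: 409,
  FORBIDDEN: 403,
};

// A global error handler (e.g. framework middleware) would call this to
// translate any thrown error into an HTTP response.
function toHttpError(err) {
  if (err instanceof BusinessError) {
    return {
      status: STATUS_BY_CODE[err.code] ?? 500,
      body: { error: err.code, message: err.message },
    };
  }
  // Unknown errors stay opaque to the client.
  return { status: 500, body: { error: 'INTERNAL', message: 'Unexpected error' } };
}
```

The service layer stays HTTP-agnostic (it only knows codes like `NOT_FOUND`), and adding a new business error means one class-or-code addition plus one table entry rather than edits across every endpoint.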

Where to put calls to 3rd party APIs in Apigility/ZF2?

I have just completed my first API in Apigility. Right now it is basically a gateway to a database, storing and retrieving multi-page documents uploaded through an app.
Now I want to run some processing on the documents, e.g. process them through a 3rd party API or modify the image quality etc., and return them to the app users.
Where (in which class) do I generally put such logic? My first reflex would be to implement it in the Resource classes. However, I feel that they will become quite messy, obstructing a clear view of the API's interface in the code and creating a dependency on a foreign API. Also, I feel limited because each method corresponds to an API call.
What if there is a certain processing/computing time, meaning I cannot respond with the result directly through a GET request? I thought about running an asynchronous process and sending a push notification to the app once the processing is complete. But again, where in the code would I ideally implement such processing logic?
I would be very happy to receive some architectural advice from someone who is more seasoned in developing APIs. Thank you.
You are able to use the zf-rest resource events to connect listeners with your additional custom logic without polluting your resources.
These events are fired in the RestController class (for example a post.create event here on line 382).
When you use Apigility-Doctrine module you can also use the events triggered in the DoctrineResource class (for example the DoctrineResourceEvent::EVENT_CREATE_POST event here on line 361) to connect your listeners.
You can use a queueing service like ZendQueue, or a third-party module built on top of ZendQueue, for managing that. You can find different ZF2 queueing systems/modules using Google.
By injecting the queueing service into your listener you can simply push your jobs directly into your queue.

Can Web API be used in an application which is not accessed by any external application?

I'd read somewhere that Web API could be used whenever one needs to do data-intensive work, e.g. an autocomplete textbox that fetches data via AJAX on each key press.
Now someone told me that Web API shouldn't be used within applications which are not accessed externally. Rather, controller actions should be used for the same work, as they are capable of returning data in a similar fashion to Web API.
I'd like to hear your suggestions on this.
Depends on how you look at it. If all you need is ajax-ification of your controller actions, then you really don't need Web-API. Your actions can return a JsonResult and it is very easy to consume that from your client side through an AJAX call.
Web-API makes it easy for you to expose your actions to external clients. It supports the HTTP protocol and JSON and XML payloads automatically, out of the box, without you writing the code for it. Now, there is nothing preventing you from consuming the same Web-API actions from your own internal clients in an AJAX manner.
So the answer to your question depends on your design. If you don't have external clients, then there is no strong need for you to have Web-API. Your standard controller actions can do the job.