[Question + Discussion]: what are the tradeoffs of using apollo-client in a Redux application? - react-native

I have a Redux application that fetches data from a GraphQL server. I am currently using a lightweight GraphQL client called graphql-request, which only helps you send GraphQL queries/mutations, but I would like to get the best out of my APIs. Even though I am using Redux for state management, is it OK to use apollo-client without its built-in cache and use it only for network requests / API calls?
Benefits I know I would get from using apollo-client include:
Better error handling
Better implementation of auto-refreshing tokens
Better integration with my server, since my server is written with apollo-server
Thanks

Apollo Client's built-in cache does pretty much the same job that Redux state management would do for your application. Obviously, if you are not comfortable with it, you can use Redux to implement the functionality that you need, but the best-case scenario in my opinion would be to drop Redux, since its configuration is pretty heavy, and rely purely on the cache provided by Apollo Client.
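That said, if you do want to keep Redux as the single source of truth, Apollo Client can be used purely as a transport. A minimal sketch, assuming Apollo Client 3 (the endpoint URL is a placeholder):

import { ApolloClient, InMemoryCache, HttpLink } from '@apollo/client';

const client = new ApolloClient({
  link: new HttpLink({ uri: 'https://example.com/graphql' }),
  // The constructor requires a cache instance, but the 'no-cache'
  // defaults below keep results from ever being written to or read
  // from it, so Redux stays the only store.
  cache: new InMemoryCache(),
  defaultOptions: {
    query: { fetchPolicy: 'no-cache' },
    watchQuery: { fetchPolicy: 'no-cache' },
  },
});

With this setup you still get Apollo's link layer for error handling and token-refresh middleware, while your own code dispatches every query result into Redux.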


What is the difference between GraphQL-Mesh and Apollo-Federation?
I see that Mesh supports federation, which is kind of confusing.
Is it just two different ways to achieve a unified schema?
Why would you pick one solution rather than the other?
GraphQL Mesh is a set of tools to build either a gateway or an SDK for a set of data sources. They can be of various types: documented REST APIs, Postgres databases (through PostGraphile), GraphQL servers of course, and many more.
Mesh then allows you to perform transformations to alter your end schema.
Apollo Federation is a library designed to build relationships between separate GraphQL schemas, also called subgraphs. It is also one of the transformation strategies offered by GraphQL Mesh.
In a sense, GraphQL Mesh is more comparable to Apollo Gateway, but with many more features to customize your schema and the ability to be used as an SDK.
GraphQL-Mesh connects to your database, detects the database schema, and transforms it into a GraphQL server. It is a sort of "zero code" solution: you have the database and, bam, you can query it as a GraphQL server.
Apollo is more difficult to summarize in one sentence because it covers both frontend and backend. It is not a zero-code solution: it is a fullstack framework that helps you write the code that serves proper GraphQL communication.
On the frontend side, it helps you write the GraphQL queries in your JavaScript code (it integrates well with React.js but can be used with other frontend libraries as well), and it also does caching (so that the frontend does not ask the server again if the data is already in its cache).
On the backend side, you can declare a GraphQL schema and write the code for your resolvers (the resolvers are the functions that are called when your backend receives a GraphQL query: they must return the expected data). Apollo takes care of listening for the queries, parsing them, and calling the proper resolver for every sub-part of the query.
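To make the backend side concrete, here is a minimal sketch with apollo-server; the Book type and its hard-coded data are placeholders for a real schema and data source:

const { ApolloServer, gql } = require('apollo-server');

// Declare the schema...
const typeDefs = gql`
  type Book { title: String }
  type Query { books: [Book] }
`;

// ...and write a resolver for each query field; Apollo listens for
// incoming queries, parses them, and calls the matching resolver.
const resolvers = {
  Query: {
    books: () => [{ title: 'The Awakening' }],
  },
};

new ApolloServer({ typeDefs, resolvers })
  .listen()
  .then(({ url }) => console.log(`Server ready at ${url}`));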

Query database directly or fetch from API in GraphQL resolvers?

I have a microservice application with a few services. I'm planning to implement GraphQL for the application.
An approach I have in mind is to implement a layer of APIs in each of the services first. Then, the GraphQL resolvers would make requests to the services' API endpoints and return the results. This method seems neat to me because I will only have one GraphQL endpoint for my frontend to work with.
At the same time, however, I'm not sure if this is a good idea at all. Instead of querying against the database directly in my resolvers, I'm actually making extra HTTP requests in my resolvers and creating overheads through network transfers. I'm guessing this would impact the overall performance with the extra layer of API calls.
One of the benefits of GraphQL is to prevent over-fetching. With that extra layer of API calls in the resolvers, I'm effectively already fetching all the fields in the response of the API. Does this sound like another problem with the approach I have described?
When implementing GraphQL in a microservice application, should I have a layer of API for all the services and then have GraphQL resolvers fetching from them, or should I aim to query against the services' database directly in the GraphQL resolvers?
This sounds like a pretty normal way of doing things. Over-fetching (e.g., all the fields on an entity) at the GraphQL <-> platform boundary is arguably beneficial, because you can relatively easily add entity-level caching that is close enough to the source of truth that you can also handle cache invalidation.
Whilst those additional requests do add overhead, you can take advantage of various techniques to reduce it (keep-alive, connection pooling, http2 multiplexing, etc). Ultimately, what you have is a pattern that'll be forced on you anyway once you hit a certain scale.
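As a hedged sketch of that pattern with apollo-datasource-rest (the service URL, the UsersAPI class, and the field names are placeholders), the resolver fetches from the service's HTTP API rather than its database, and the data source layer softens the overhead with per-request caching and deduplication:

const { RESTDataSource } = require('apollo-datasource-rest');

class UsersAPI extends RESTDataSource {
  constructor() {
    super();
    this.baseURL = 'http://users-service.internal/'; // placeholder
  }

  // GET responses are memoized per request and honor HTTP cache
  // headers, which also softens the over-fetching cost.
  getUser(id) {
    return this.get(`users/${id}`);
  }
}

// Wired into the server via its dataSources option, e.g.:
// new ApolloServer({ typeDefs, resolvers, dataSources: () => ({ usersAPI: new UsersAPI() }) })
const resolvers = {
  Query: {
    user: (_parent, { id }, { dataSources }) => dataSources.usersAPI.getUser(id),
  },
};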

How to structure a big application in React Redux that consumes a REST API?

I've looked into consuming APIs in Redux using the redux-thunk middleware. I understand the flow of calling an API from Redux, and I understand how to structure a slice of my state to reflect the status of an API call, i.e. fetching, success, or fail. What I'd like to know now is: how do I structure a very large application to avoid boilerplate code? From what I gathered reading the docs, a single API call requires:
Action creators to dispatch an API_CALL_FETCHING action and an API_CALL_SUCCESS action or an API_CALL_FAIL action
Reducer code to handle each of the actions
A slice of your state dedicated towards reflecting the status of your API calls
Assuming I have a resource that allows basic CRUD operations on it, naively that means I should write 12 different actions (4 CRUD operations * 3 API status actions per call). Now imagine I have many resources that allow CRUD operations, and this starts to get huge.
Is there any elegant way to condense the code necessary to make many API calls? Or does having a large application simply demand lots of repetition in this area?
Thanks!
Yes. There are numerous ways to abstract the process of making API calls, and dozens of existing libraries to help with that.
Generically speaking, you can write "factory functions" that take some set of parameters (API endpoints, data descriptions, etc), and return a set of actions, reducers, and other logic for actually making the API calls and handling the data.
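A minimal sketch of such a factory, assuming redux-thunk (all names are illustrative):

function makeFetchResource(resource, endpoint) {
  const FETCHING = `${resource}/FETCHING`;
  const SUCCESS = `${resource}/SUCCESS`;
  const FAIL = `${resource}/FAIL`;

  // One thunk covers the fetching/success/fail lifecycle.
  const fetchAll = () => async (dispatch) => {
    dispatch({ type: FETCHING });
    try {
      const response = await fetch(endpoint);
      dispatch({ type: SUCCESS, payload: await response.json() });
    } catch (error) {
      dispatch({ type: FAIL, error: error.message });
    }
  };

  // One reducer tracks the API status slice for this resource.
  const initialState = { status: 'idle', data: null, error: null };
  const reducer = (state = initialState, action) => {
    switch (action.type) {
      case FETCHING: return { ...state, status: 'fetching' };
      case SUCCESS: return { status: 'success', data: action.payload, error: null };
      case FAIL: return { ...state, status: 'fail', error: action.error };
      default: return state;
    }
  };

  return { fetchAll, reducer };
}

// Usage: one call per resource instead of a dozen hand-written actions.
const products = makeFetchResource('products', '/api/products');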
For existing examples, see the Action/Reducer Generators#Network Requests and Entity/Collection Management sections of my Redux addons catalog. There's also some more intentional abstraction layers on top of Redux, like redux-tiles and Kea.

Ember adapter and serializer

I'm building an Ember application with ember-cli and, as a persistence layer, an HTTP API using rails-api + Grape + ActiveModelSerializer. I am at a very basic stage, but I want to set up my front-end and back-end in as standard and clean a way as possible before going on with developing further API and Ember models.
I could not find a comprehensive guide about serialization and deserialization made by the store, but I read the documentation about DS.ActiveModelSerializer and DS.ActiveModelAdapter (which says the same things!) along with their parent classes.
What are the exact roles of adapter and serializer and how are they related?
Considering the tools I am using do I need to implement both of them?
Both Grape/ActiveModelSerializer and Ember Data offer customization. As my back-end and front-end are made for each other and not for anything else, which side is it better to customize?
Hmmm...which side is better is subjective, so this is sort of my thought process:
Generally speaking, one would want an API that is able to "talk to anything" in case a device client is required or the API gets consumed by other parties in the future, so that would suggest you configure your Ember app to talk to your backend. But again, I think this is a subjective question/answer, because no one but you and your team can tell what's good for a given scenario you are or might be experiencing while the app gets created.
I think the guides explain the Adapter and Serializer role/usage and customization pretty decently these days.
As for implementing them, it may be necessary to create an adapter for your application to define a global namespace if you have one (if your controllers are behind a prefix like localhost:3000/api/products, then set namespace: 'api'; otherwise this is not necessary), and similarly the host if you're using CORS. If you're using ember-cli, you might also want to set the security policy in the environment to allow connections to other domains. All of this can be done per model as well. But again, it is subjective, as it depends on what you want/need to achieve.
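As a sketch of that (assuming the Ember Data of that era, with placeholder values), the global adapter would live in app/adapters/application.js:

import DS from 'ember-data';

// Applies to every model unless a per-model adapter overrides it.
export default DS.ActiveModelAdapter.extend({
  namespace: 'api',              // controllers live under /api/...
  host: 'http://localhost:3000'  // only needed for CORS setups
});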

Existing SOAP service and new Angular Web App

We have an established WCF SOAP service. Its interface is defined in WSDL, from which C# classes are generated for our server (customers generate client-side bindings in various languages, from the same WSDL). The WSDL has a current version, which we can change a bit, and old versions, which we can't change or drop without a deprecation period, consultation etc. The SOAP requests tend to be complicated, having multiple XML namespaces within the same request.
The WCF SOAP service has a lot of "smarts" in it, and provides exactly the kinds of fetching and reporting facilities that we need for a new Web application that we need to build. We hope to use AngularJS for the client side of that. But these complex SOAP requests aren't easy to make in the JavaScript world. If only we had a REST service, we could use Angular's $resource service. If not that, then a server that spoke JSON, albeit in an RPC style like SOAP, would run a fairly close second.
I've had various ideas for how the impedance mismatch between our server and client might be mitigated. But nothing sounds quick or easy.
I've thought of:
Write a new REST service. Exactly what the client-side wants, but a serious piece of new development.
WebHttpBinding looks to offer something, but it seems to me like it requires custom attribute markup in C# (hard to achieve when our C# is generated from WSDL) and possibly wouldn't support our complex types.
Obtain or write loads of client-side JS to abstract away calling SOAP services. But, unless this can be auto-generated from the WSDL, it's a huge amount of client-side code to write.
Write an IDispatchMessageFormatter for the server, to accept some JSON format of messages that I invent. Sounds hard, especially as good examples of people implementing and integrating IDispatchMessageFormatter seem hard to come by.
Write a MessageEncoder to swap between JSON and XML. But this isn't really an encoding operation, as became very clear when I tried to write it!
I'm searching for suggestions.
Generally, I recommend a REST service for any AngularJS development, and I have wrapped a number of legacy systems with Node.js API servers. Of course there is a massive amount of "it depends", but most projects will be happier and more productive following that route.
Some Things To Think About
How well does your current SOAP API fit the user interface requirements?
Are you experienced with Express, Sinatra, Flask or another micro-framework that allows rapid development of REST APIs? I find I can build a solid Node.js Express API server in a couple of hours and then extend it as I build out the AngularJS application (see the wrapping sketch after this list).
How experienced are you with AngularJS? It's a more advanced project to build a complex data layer client-side.
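As a hedged sketch of that wrapping approach, assuming the node-soap package (the WSDL URL and the GetReport operation are placeholders standing in for your WCF service):

const express = require('express');
const soap = require('soap');

const app = express();

app.get('/api/reports/:id', async (req, res) => {
  try {
    // For production you would create the client once and reuse it.
    const client = await soap.createClientAsync('https://example.com/service?wsdl');
    // node-soap generates a <Operation>Async method per WSDL operation;
    // it resolves to [result, rawResponse, soapHeaders, rawRequest].
    const [result] = await client.GetReportAsync({ reportId: req.params.id });
    res.json(result);
  } catch (err) {
    res.status(502).json({ error: err.message });
  }
});

app.listen(3000);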
Six Reasons Why REST is Important for AngularJS
It's much faster to write Angular code using $resource and $http. "Get the API right" is a good recommendation for effective AngularJS development. Indeed, you could argue that AngularJS is designed for REST, and that's why plain JavaScript works for the model (see 2). A minimal $resource sketch follows after this list.
Angular's plain-old JavaScript object data model works well with a REST API that speaks JSON matching the user interface. However, issues arise when there isn't a good fit: Angular doesn't have a formal data model, so you end up writing a lot of code trying to rationalize your API to work well with Angular. Third-party libraries like breeze.js may offer a solution, but it's still awkward.
You can scale easily with caching. It's easy to add Redis or memcache or Varnish or other common HTTP caching solutions into the mix. Resource-based abstractions are perfect for caching strategies due to the transparency and idempotency of a REST API.
Loose coupling of front-end and server: it will be easier to support changes to the backend if you migrate off SOAP or need to integrate with other services.
It's generally easier to test JSON APIs separately from AngularJS logic, so your test suites will be simpler and more effective.
Your new REST API will be easier to leverage for future AngularJS and JSON-oriented projects.
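As a minimal sketch of reason 1 (the /api/orders endpoint and the names are placeholders), a $resource wrapper gives you the whole CRUD surface in a few lines:

angular.module('app', ['ngResource'])
  .factory('Order', ['$resource', function ($resource) {
    // One URL template maps onto the REST verbs; :id is read from the object.
    return $resource('/api/orders/:id', { id: '@id' });
  }])
  .controller('OrdersCtrl', ['Order', function (Order) {
    this.orders = Order.query();   // GET /api/orders
    this.save = function (order) {
      order.$save();               // POST /api/orders for a new order
    };
  }]);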
I hope that helps.
Cheers,
Nick