How can I represent a model with multiple different serializations in Ember?

I have a Client model. When viewing /clients, I want to return a simple serialization of my clients, with just a few aggregate values (let's say total_unbilled and total_owing). When viewing /clients/1, I want to return a full serialization of the client, including all its nested tasks and expenses. The back-end has already been configured to do this.
I don't want to return the full serialization of all clients when the user views /clients, as there can be a lot of data under potentially hundreds of clients. I'd like to load that extra information only when needed, when the user views a particular client.
What's the best way to handle this use case, where models can be serialized in multiple ways, using Ember Data? I know Ember Data will cache the initial representation of the client, so if the user visits /clients first, it won't ever try to fetch the full serialization if the user then visits /clients/1. Is there a sensible way to override this? Or would I have to have two different Ember Data models client-side (e.g. Client and MiniClient)?

Honestly, the easiest approach is to use two different models, or simply not to use Ember Data for the mini clients. It sounds like they won't be used for much more than informational display.
I'd probably just use POJOs for the mini clients and Ember Data for the full client (since caching would be most useful at that point); see Ember without Ember Data.
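A minimal sketch of that split, assuming an Octane-style app; the route names and endpoint URLs are illustrative, and findRecord's reload option is the documented way to force past the cached representation:

// app/routes/clients.js -- list route: plain fetch, no Ember Data store,
// so the mini clients never collide with cached full Client records.
import Route from '@ember/routing/route';

export default class ClientsRoute extends Route {
  async model() {
    const response = await fetch('/clients');
    return response.json(); // POJOs: { id, name, total_unbilled, total_owing }
  }
}

// app/routes/client.js -- detail route: Ember Data as usual, with
// { reload: true } forcing a fresh request even if the record is cached.
import Route from '@ember/routing/route';
import { inject as service } from '@ember/service';

export default class ClientRoute extends Route {
  @service store;

  model(params) {
    return this.store.findRecord('client', params.client_id, { reload: true });
  }
}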

Related

Optimizing GraphQL resolvers for SQL databases and in service-oriented architectures

My company has a service-oriented architecture. My app's GraphQL server therefore has to call out to other services to fulfill the data requests from the frontend.
Let's imagine my GraphQL schema defines the type User. The data for this type comes from two sources:
A user account service that exposes a REST endpoint for fetching a user's username, age, and friends.
A SQL database used just by my app to store User-related data that is only relevant to my app: favoriteFood, favoriteSport.
Let's assume that the user account service's endpoint automatically returns the username and age, but you have to pass the query parameter friends=true in order to retrieve the friends data because that is an expensive operation.
Given that background, the following query presents a couple optimization challenges in the getUser resolver:
query GetUser {
  getUser {
    username
    favoriteFood
  }
}
Challenge #1
When the getUser resolver makes the request to the user account service, how does it know whether or not it needs to ask for the friends data as well?
Challenge #2
When the resolver queries my app's database for additional user data, how does it know which fields to retrieve from the database?
The only solution I can find to both challenges is to inspect the query in the resolver via the fourth info argument that the resolver receives. This will allow it to find out whether friends should be requested in the REST call to the user account service, and it will be able to build the correct SELECT query to retrieve the needed data from my app's database.
Is this the correct approach? It seems like a use-case that GraphQL implementations must be running into all the time and therefore I'd expect to encounter a widely accepted solution. However, I haven't found many articles that address this, nor does a widely used NPM module appear to exist (graphql-parse-resolve-info is part of PostGraphile but only has ~12k weekly downloads, while graphql-fields has ~18.5k weekly downloads).
I'm therefore concerned that I'm missing something fundamental about how this should be done. Am I? Or is inspecting the info argument the correct way to solve these optimization challenges? In case it matters, I am using Apollo Server.
If you want to modify your resolver based on the requested selection set, there's really only one way to do that and that's to parse the AST of the requested query. In my experience, graphql-parse-resolve-info is the most complete solution for making that parsing less painful.
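For illustration, a minimal sketch of a getUser resolver built that way; the account-service URL, the user_preferences table, and the context.db helper are assumptions, not part of the question:

const { parseResolveInfo } = require('graphql-parse-resolve-info');

// Fields that live in the app's own SQL database rather than the account service.
const DB_FIELDS = new Set(['favoriteFood', 'favoriteSport']);

async function getUser(parent, args, context, info) {
  // Parse the query AST into a tree of requested fields, keyed by type name.
  const parsed = parseResolveInfo(info);
  const requested = Object.keys(parsed.fieldsByTypeName.User);

  // Challenge #1: only pay for the expensive friends lookup when it's selected.
  const needFriends = requested.includes('friends');
  const account = await fetch(
    `https://accounts.internal/users/${args.id}?friends=${needFriends}`
  ).then((res) => res.json());

  // Challenge #2: SELECT only the columns the query actually asked for.
  // Interpolation is safe here because columns is filtered by the whitelist.
  const columns = requested.filter((f) => DB_FIELDS.has(f));
  const rows = columns.length
    ? await context.db.query(
        `SELECT ${columns.join(', ')} FROM user_preferences WHERE user_id = ?`,
        [args.id]
      )
    : [{}];

  return { ...account, ...rows[0] };
}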
I imagine this isn't as common an issue as you'd think, because most folks fall into one of two groups:
Users of frameworks or libraries like PostGraphile, Hasura, Prisma, Join Monster, etc. which take care of optimizations like these for you (at least on the database side).
Users who are not concerned about overfetching on the server-side and just request all columns regardless of the selection set.
In the latter case, fields that represent associations are given their own resolvers, so those subsequent calls to the database won't be fired unless they are actually requested. DataLoader is then used to help batch all these extra calls to the database. Ditto for fields that end up calling some other data source, like a REST API.
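A minimal sketch of that association-resolver pattern; the friendships table and db helper are invented for illustration, and in a real app the loader would be created once per request rather than at module scope:

const DataLoader = require('dataloader');

// Batches every friends lookup made while resolving one query into a single
// SQL statement, instead of one query per user.
const friendsLoader = new DataLoader(async (userIds) => {
  const rows = await db.query(
    'SELECT * FROM friendships WHERE user_id IN (?)',
    [userIds]
  );
  // DataLoader requires one result per key, in the same order as the keys.
  return userIds.map((id) => rows.filter((r) => r.user_id === id));
});

const resolvers = {
  User: {
    // This resolver only fires when the query actually selects `friends`.
    friends: (user) => friendsLoader.load(user.id),
  },
};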
In this particular case, DataLoader would not be much help to you. The best approach is to have a single resolver for getUser that fetches the user details from the database and the REST endpoint. You can then, as you're already planning, adjust those calls (or skip them altogether) based on the requested fields. This can be cumbersome, but will work as expected.
The alternative to this approach is to simply fetch everything, but use caching to reduce the number of calls to your database and REST API. This way, you'll fetch the complete user each time, but you'll do so from memory unless the cache is invalidated or expires. This is more memory-intensive, and cache invalidation is always tricky, but it does simplify your resolver logic significantly.
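A minimal sketch of that caching alternative, assuming a single-process server; the TTL and helper names are arbitrary, and a shared store such as Redis would be needed across multiple processes:

// Naive in-memory cache of fully assembled users.
const cache = new Map();
const TTL_MS = 60 * 1000;

async function getFullUser(id) {
  const hit = cache.get(id);
  if (hit && Date.now() - hit.at < TTL_MS) return hit.user;

  // Always fetch everything: account service (friends included) plus all columns.
  const [account, rows] = await Promise.all([
    fetch(`https://accounts.internal/users/${id}?friends=true`).then((r) => r.json()),
    db.query('SELECT * FROM user_preferences WHERE user_id = ?', [id]),
  ]);

  const user = { ...account, ...rows[0] };
  cache.set(id, { user, at: Date.now() });
  return user;
}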

API design pattern to be integrated with both our own web app and other systems

This backend will be consumed by an ad-hoc front-end application, but it will also be integrated with other systems, for which we will expose an API.
When designing the REST API I see that there is ONE database table (let's call it table A) that can join many other tables, let's say about 10 to 20 of them.
Now, my strategy would be to build routes in my backend that "reason" according to the ad-hoc frontend we have.
So if there is a page in the frontend (let's call it page1) that needs rows from table A but also fields from, say, 3 other joined tables, then I would create a route in the backend called maybe "page1" which returns rows from table A along with those 3 tables.
This is of course an ordinary way to build a backend. But since it will also be used by other systems, somebody could argue that those systems may have no need for the route "page1"; their frontends may never build a "page1".
So according to people here, it would be better to build the API more agnostically. Instead of creating the route "page1", I should build it according to HATEOAS. If I understand that principle correctly, instead of my ad-hoc frontend requesting the resource "page1", it would request "pageForTableA", and that resource would return which tables can be joined.
In this case, for my frontend's page1, I would need to make 4 subsequent requests to the server, instead of the single request I could make if there were a "page1" resource in the backend.
What do you think?
I also see a third strategy. I don't know if there is a name for this pattern, but it would work this way:
A backend resource that returns only rows from table A, BUT the route also takes an argument: an array with the names of all the other tables the caller wants to include.
So if frontend calls:
getTableA(array('tableB', 'tableD', 'tableF'))
Then the resource would include/join the tables B, D and F. In short: the API resource lets the frontend decide what it wants delivered.
Which of these 3 strategies do you think is best? Or are there others that should be taken into consideration?
You need to architect your API so that consumers don't need to know how the data is stored in the underlying data store.
Furthermore, if you want to allow consumers to decide which fields to project in the response, you could let them specify those fields using some query-string format.
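For example, a minimal sketch of the question's third strategy in Express; the route, the whitelist, and the fetchTableA helper are hypothetical:

const express = require('express');
const app = express();

// Whitelist of joinable tables, so callers can't name arbitrary tables.
const JOINABLE = new Set(['tableB', 'tableD', 'tableF']);

// e.g. GET /tableA?include=tableB,tableD
app.get('/tableA', async (req, res) => {
  const include = (req.query.include || '')
    .split(',')
    .filter((t) => JOINABLE.has(t));

  // fetchTableA is a hypothetical data-access helper that joins
  // only the requested tables before returning rows.
  const rows = await fetchTableA({ include });
  res.json(rows);
});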
By the way, maybe you should avoid re-inventing the wheel. There's a standard called the Open Data Protocol (OData) which already defines a lot of what you require in your API, and since it was created by Microsoft, it has deep support in .NET.
Check this tutorial (Create an OData v4 Endpoint Using ASP.NET Web API 2.2) to get started with OData.
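For instance, OData covers both field projection and joins with standard query options ($select and $expand); the entity names here are placeholders:

GET /odata/TableA?$select=Id,Name&$expand=TableB,TableD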

Zend Framework 2 - Importer for multiple Rest or Soap Apis

I want my ZF2 Application to import data from many different REST or SOAP Services, which may use different authentication types and so on.
Now I'm basically looking for a structure / architecture how to implement this, maybe some design patterns or ready to use modules if they exist.
Any information could help. I'm also thankful for any API docs or tutorials that you can provide.
But my main question is: how should this kind of "importer" be structured?
My Application:
Based on Zend Skeleton Application
Using Doctrine 2
Trying to use all ZF2 Best Practices I can find
Consists of many modules, entities and complex associations in some cases
Entities that I want to import are already working (crud operations, validation, ...)
APIs that I want to use:
Usually e-commerce stuff, like products, orders, stock keeping
Magento API (thinking of REST)
Shopware and other important webshops
Ebay Stores
Amazon (I think this is going to be the hardest one)
Must-have functionality:
I want the API URLs and authentication data to be configurable in my app with Doctrine entities
The "Api" entity should be associated with my "Shop" entity. Orders or products that I import or create directly in my app are also associated with my Shop entities. So every shop/Ebay store/Amazon store is a "Shop" in my application. This is the part I've already done.
Product import, for example, should be done directly from my app's frontend; I'm thinking of retrieving the API data first and then importing it incrementally, step by step
I don't want fat controllers that transform the data into Doctrine entities and save them one by one. That way, complex associations would become very hard to maintain.
I need a good approach for data transformation and hydration into Doctrine entities, because the data I retrieve from an API will usually not have the same structure as my entities. Maybe an attribute that's a property of the "Product" entity in the foreign app is split out into an associated entity in my own application.
Many modules in my application will have entities that should be importable from these APIs, so I need a central component that does the job
What would be the best approach here? I'm not asking for a complete solution, just for ideas that fit these requirements.
The Zend HTTP client and its relatives (like Zend OAuth) provide most of the functionality you need to fetch the data from the services.
You can then persist the response in any number of ways, but a schema-less database like MongoDB makes saving dynamic data much easier. If you are stuck using a relational DB like MySQL, then you can either set up an EAV schema or use dynamically generated tables.

RESTful resources and relational databases are incompatible

Say I have a relational database with 100+ tables. Each table models some sort of entity (person, address, vehicle, dog, etc.). Say I also have a RESTful API and a bunch of people who want to POST data into this database. Many times this data comes in as an XML package, or as POST data from a web form, or something of that nature. Sometimes we need to post to all the tables of the database, sometimes most, sometimes some, sometimes one.
Now, requiring our clients to post clumps of multi-resource data into a 100+ table persistence layer the RESTful way of
POST /person
POST /email
POST /vehicle
POST /insurance
is insane! So we could have a resource instead that is
POST /auto-record
{ post body of key values for all the tables needed to make an 'auto-record' }
and it would be connected to some sort of business logic that knows how to make inserts into the many database tables needed. Okay, great. But now that I'm thinking about it, does this design abide by the open/closed principle? If we ever need to update/add/remove what an 'auto-record' is, then we break our clients.
How can RESTful APIs deal with resource groupings? Or do they simply not? Are there alternatives?
You can implement more versions of your RESTful API resource /auto-record. For now, modify your resource URI to /v1/auto-record. When a feature change request comes in, you simply provide your customers with a new resource, /v2/auto-record. The old functionality is preserved at /v1/auto-record, and new users get the functionality they need at /v2/auto-record.
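A minimal sketch of that versioning scheme in Express; the handler bodies and response shapes are placeholders:

const express = require('express');
const app = express();
app.use(express.json());

// v1 keeps the original contract stable for existing clients.
app.post('/v1/auto-record', (req, res) => {
  // ...insert into the original set of tables...
  res.status(201).json({ id: 123 });
});

// v2 is free to redefine what an 'auto-record' is without breaking v1 callers.
app.post('/v2/auto-record', (req, res) => {
  // ...insert into the revised set of tables...
  res.status(201).json({ id: 123, insurance: null });
});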

RESTfully creating object graphs

I'm trying to wrap my head around how to design a RESTful API for creating object graphs. For example, think of an eCommerce API, where resources have the following relationships:
Order (the main object)
Has-many Addresses
Has-many Order Line items (what does the order consist of)
Has-many Payments
Has-many Contact Info
The Order resource usually makes sense along with its associations. In isolation, it's just a dumb container with no business significance. However, each of the associated objects has a life of its own and may need to be manipulated independently, e.g. editing the shipping address of an order, changing the contact info against an order, removing a line-item from an order after it has been placed, etc.
There are two options for designing the API:
The Order API endpoint intelligently creates itself AND its associated resources by processing "nested resources" in the content sent to POST /orders
The Order resource only creates itself and the client has to make follow-up POST requests to newly created endpoints, like POST /orders/123/addresses, PUT /orders/123/line-items/987, etc.
While the second option is simpler to implement on the server side, it makes the client do extra work for 80% of the use cases.
The first option has the following open questions:
How does one communicate the URLs for the newly created resources? The Location header can communicate only one URL, but the server would potentially have created multiple resources.
How does one deal with errors? What if one of the associations has an error? Do we reject the entire object graph? How is that error communicated to the client?
What's the RESTful + pragmatic way of dealing with this?
How I handle this is the first way. You should not assume that a client will make all the requests it needs to; create all the entities in the one request.
Depending on your use case, you may also want to enforce an 'all-or-nothing' approach when creating the entities; i.e., if something fails, everything rolls back. You can do this by using a transaction on your database (which you can't do if everything is done through separate requests). Determining whether this is the behavior you want is very specific to your situation. For instance, if you are creating an order you may wish to employ this (you don't want to create an order that's missing items); if you are uploading photos, however, partial success may be fine.
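A minimal sketch of that all-or-nothing creation, assuming Knex for database access; the table and column names are invented for illustration:

// Connection config is environment-specific; Postgres is assumed here.
const knex = require('knex')({ client: 'pg', connection: process.env.DATABASE_URL });

async function createOrder(payload) {
  // If any insert throws, Knex rolls the whole transaction back.
  return knex.transaction(async (trx) => {
    const [order] = await trx('orders').insert({ total: payload.total }, ['id']);

    await trx('addresses').insert(
      payload.addresses.map((a) => ({ ...a, order_id: order.id }))
    );
    await trx('line_items').insert(
      payload.lineItems.map((li) => ({ ...li, order_id: order.id }))
    );

    return order.id;
  });
}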
For returning the links to the client, I always return a JSON object. You could easily populate this object with links to each of the resources created. This way the client can determine how to behave after a successful POST.
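For example, a 201 Created body along these lines (the URL shapes are just one possible convention):

{
  "order": "/orders/123",
  "addresses": ["/orders/123/addresses/1"],
  "lineItems": ["/orders/123/line-items/987", "/orders/123/line-items/988"],
  "payments": ["/orders/123/payments/55"]
}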
Both options can be implemented RESTfully. You ask:
How does one communicate the URLs for the newly created resources? The Location header can communicate only one URL, but the server would potentially have created multiple resources.
This would be done the same way you communicate links to other resources in the GET case: use link elements, or whatever your method is for embedding the URL of a resource into a representation.
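For instance, with HAL-style link elements (one common convention; the URLs are placeholders):

{
  "_links": {
    "self": { "href": "/orders/123" },
    "addresses": { "href": "/orders/123/addresses" },
    "line-items": { "href": "/orders/123/line-items" }
  }
}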