Why am I seeing the _entities request in one service when the entity is native to another? - apollo-federation

I'm working on implementing services compatible with Apollo GraphQL federation; my providing services are written in Lacinia (a GraphQL library for Clojure).
I have one service that defines Users:
type User @key(fields: "id") {
  id: String!
  name: String!
}

type Query {
  user_by_id(id: String!): User
}

schema { query: Query }
and a second that defines Products and extends Users:
type User @extends @key(fields: "id") {
  id: String! @external
  favorite_products: [Product]
}

type Product @key(fields: "upc") {
  upc: String!
  name: String!
  price: Int!
}

type Query {
  product_by_upc(upc: String!): Product
}

schema { query: Query }
When I execute a query that spans services:
{
  user_by_id(id: "me") {
    id
    name
    favorite_products {
      upc
      name
      price
    }
  }
}
I get a failure; the following request is sent to the products service:
INFO products.server - {:query "query($representations:[_Any!]!){_entities(representations:$representations){...on User{favorite_products{upc name price}}}}", :vars {:representations [{:__typename "User", :id "me"}]}, :line 52}
and that fails because, as far as I know, the products service shouldn't have to provide the equivalent of __resolveReference for type User (which it merely extends); just for type Product.
This is very unclear in the documentation; I'll experiment with providing a kind of stub reference resolver in the products service that returns stubs of User.

Yes, indeed, you must provide the __resolveReference (or equivalent) for each type the service schema extends. In retrospect, it makes sense, as it provides the "kernel" of a raw value to be passed down the resolver tree.
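For illustration, here is roughly what such a stub reference resolver looks like in Apollo Server terms (the question's services use Lacinia, which has an equivalent hook; lookupFavoriteProducts and lookupProductByUpc are hypothetical data-access helpers):

const resolvers = {
  User: {
    // Stub resolver for the extended User type: echo the key fields back so
    // child resolvers get a "kernel" value to work from.
    __resolveReference(reference: { id: string }) {
      return { id: reference.id };
    },
    favorite_products(user: { id: string }) {
      return lookupFavoriteProducts(user.id); // hypothetical helper
    },
  },
  Product: {
    // The real reference resolver for the type this service owns.
    __resolveReference(reference: { upc: string }) {
      return lookupProductByUpc(reference.upc); // hypothetical helper
    },
  },
};

Notably, when no __resolveReference is supplied, Apollo Server falls back to passing the representation through as-is, which is why JavaScript subgraphs rarely have to spell this out; Lacinia makes you provide it explicitly.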

Related

Redis Stack JSON: Nested JSON vs namespace usage recommendation

The RedisInsight workbench uses namespaces to store JSON objects, such as:
school_json:1 -> {...}
school_json:2 -> {...}
...
But I am asking myself if that is the way to go when dealing with JSON documents. The JSON examples at https://redis.io/docs/stack/json/path/ showcase how to store items in a nested JSON object called store.
In my case I would like to store users. At first I had a structure with a single top-level key users, such as
users -> {
  1: { // actually I'm using a UUID here
    username: "Peter",
    email: ... // etc.
  },
  2: {
    username: "Marie",
    email: ...
  }
}
Or should I use namespaces here as well, which would look somewhat like:
users:1 -> {
  username: "Peter",
  email: ...
},
users:2 -> {
  username: "Marie",
  email: ...
}
I assume that using namespaces would have performance benefits over nested JSON, but the example in the Redis documentation, which uses a nested JSON object to store several items, made me wonder whether that is actually true.
I found this answer, but that is discussing plain Redis, not Redis Stack with JSON (which may come with other optimizations).
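For concreteness, here is what the two layouts look like through a client; a minimal sketch using node-redis v4 and the RedisJSON commands (the email values are placeholders):

import { createClient } from 'redis';

async function main() {
  const client = createClient();
  await client.connect();

  // Layout A: everything nested under one top-level key; reads and writes
  // address JSONPaths inside a single document.
  await client.json.set('users', '$', {
    '1': { username: 'Peter', email: 'peter@example.com' },
    '2': { username: 'Marie', email: 'marie@example.com' },
  });
  const peterA = await client.json.get('users', { path: '$.1' });

  // Layout B: one namespaced key per user; each user is an independent
  // Redis key holding its own document.
  await client.json.set('users:1', '$', { username: 'Peter', email: 'peter@example.com' });
  await client.json.set('users:2', '$', { username: 'Marie', email: 'marie@example.com' });
  const peterB = await client.json.get('users:1');

  console.log(peterA, peterB);
  await client.quit();
}

main();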
Thanks in advance!

NestJS serialization from snake_case to camelCase

I want to achieve automatic serialization/deserialization of JSON request/response bodies for NestJS controllers; to be precise, automatically converting snake_case request body JSON keys to camelCase as received at my controller handler, and vice versa.
What I found is to use class-transformer's @Expose({ name: 'selling_price' }), as in the example below (I'm using MikroORM):
// recipe.entity.ts
@Entity()
export class Recipe extends BaseEntity {
  @Property()
  name: string;

  @Expose({ name: 'selling_price' })
  @Property()
  sellingPrice: number;
}

// recipe.controller.ts
@Controller('recipes')
export class RecipeController {
  constructor(private readonly service: RecipeService) {}

  @Post()
  async createOne(@Body() data: Recipe): Promise<Recipe> {
    console.log(data);
    return this.service.createOne(data);
  }
}
// example request body
{
  "name": "Recipe 1",
  "selling_price": 50000
}

// log on the RecipeController.createOne handler method
{ name: 'Recipe 1',
  selling_price: 50000 }

// what I wanted on the log
{ name: 'Recipe 1',
  sellingPrice: 50000 }
As can be seen, the @Expose annotation works, but going further I want it to convert to the attribute's name on the entity, sellingPrice, so I can directly pass the parsed request body to my service and to my repository method this.recipeRepository.create(data). Currently the sellingPrice field would be null, because the selling_price field exists instead. If I don't use @Expose, the request JSON would need to be written in camelCase, and that's not what I prefer.
I could write DTOs with constructors and assign the fields manually, but I think that's rather repetitive, and I'd have a lot of fields to convert due to my naming preference: snake_case for JSON and database columns, camelCase for all of the JS/TS parts.
Is there a way I can do the trick cleanly? Maybe there's a solution already. Perhaps a global interceptor to convert all snake_case keys to camelCase, but I'm not really sure how to implement one either.
Thanks!
You could use the mapResult() method from the ORM, which is responsible for mapping raw DB results (so snake_case for you) to entity property names (so camelCase for you):
const meta = em.getMetadata().get('Recipe');
const data = {
  name: 'Recipe 1',
  selling_price: 50000,
};
const res = em.getDriver().mapResult(data, meta);
console.log(res); // dumps `{ name: 'Recipe 1', sellingPrice: 50000 }`
This method operates based on the entity metadata, changing keys from fieldName (which defaults to a value derived from the selected naming strategy).
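If you want this applied automatically rather than per call, one option is to wrap mapResult() in a body-transforming pipe. A rough sketch, assuming MetadataStorage exposes a has() check alongside the get() used above, and that the controller parameter's metatype name matches the entity name (MapEntityBodyPipe is a hypothetical name):

import { ArgumentMetadata, Injectable, PipeTransform } from '@nestjs/common';
import { EntityManager } from '@mikro-orm/core';

@Injectable()
export class MapEntityBodyPipe implements PipeTransform {
  constructor(private readonly em: EntityManager) {}

  transform(value: any, metadata: ArgumentMetadata) {
    // Only touch request bodies whose metatype is an entity known to the ORM.
    if (metadata.type === 'body' && metadata.metatype) {
      const storage = this.em.getMetadata(); // assumption: has()/get() as used above
      if (storage.has(metadata.metatype.name)) {
        return this.em.getDriver().mapResult(value, storage.get(metadata.metatype.name));
      }
    }
    return value;
  }
}

Registered globally (e.g. via the APP_PIPE provider token), this would convert snake_case bodies for every entity-typed @Body() parameter.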

Proper error handling when performing multiple mutations in graphql

Given the following GraphQL mutations:
type Mutation {
  updateUser(id: ID!, newEmail: String!): User
  updatePost(id: ID!, newTitle: String!): Post
}
The Apollo docs state that it's totally possible to perform multiple mutations in one request, say
mutation($userId: ID!, $newEmail: String!, $postId: ID!, $newTitle: String!) {
  updateUser(id: $userId, newEmail: $newEmail) {
    id
    email
  }
  updatePost(id: $postId, newTitle: $newTitle) {
    id
    title
  }
}
1. Does anyone actually do this? And if you don't do this explicitly, will batching cause this kind of mutation merging?
2. If you run multiple things within one mutation, how would you handle errors properly?
I've seen a bunch of people recommend throwing errors on the server so that the server responds with something that looks like this:
{
  errors: [
    {
      statusCode: 422,
      error: 'Unprocessable Entity',
      path: [
        'updateUser'
      ],
      message: {
        message: 'Validation failed',
        fields: {
          newEmail: 'The new email is not a valid email address.'
        }
      },
    },
    {
      statusCode: 422,
      error: 'Unprocessable Entity',
      path: [
        'updatePost'
      ],
      message: {
        message: 'Validation failed',
        fields: {
          newTitle: 'The given title is too short.'
        }
      },
    }
  ],
  data: {
    updateUser: null,
    updatePost: null,
  }
}
But how do I know which error belongs to which mutation? We can't assume that the first error in the errors array belongs to the first mutation, because if updateUser succeeds, the array would simply contain one entry. Would I then have to iterate over all errors and check if the path matches my mutation name? :D
Another approach is to include the error in a dedicated response type, say UpdateUserResponse and UpdatePostResponse. This approach enables me to correctly address errors.
type UpdateUserResponse {
  error: Error
  user: User
}

type UpdatePostResponse {
  error: Error
  post: Post
}
But I have a feeling that this will bloat my schema quite a lot.
In short, yes, if you include multiple top-level mutation fields, utilize the path property on the errors to determine which mutation failed. Just be aware that if an error occurs deeper in your graph (on some child field instead of the root-level field), the path will reflect that field. That is, an execution error that occurs while resolving the title field would result in a path of updatePost.title.
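So on the client, rather than relying on positions in the errors array, filter by path. A small helper as a sketch:

type GQLError = { message: string; path?: Array<string | number> };

// Collect the errors whose path begins at the given top-level field.
// Note: if the client aliases the field, the alias appears in path instead.
function errorsForField(errors: GQLError[], field: string): GQLError[] {
  return errors.filter((e) => e.path?.[0] === field);
}

// Usage against the response shape above:
// errorsForField(response.errors ?? [], 'updateUser'); // validation error for updateUser
// errorsForField(response.errors ?? [], 'updatePost'); // validation error for updatePost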
Returning errors as part of the data is an equally valid option. There are other benefits to this approach too:
Errors sent like this can include additional metadata (a "code" property, information about specific input fields that may have generated the error, etc.). While this same information can be sent through the errors array, making it part of your schema means that clients will be aware of the structure of these error objects. This is particularly important for clients written in typed languages where client code is often generated from the schema.
Returning client errors this way lets you draw a clear distinction between user errors that should be made visible to the user (wrong credentials, user already exists, etc.) and something actually going wrong with either the client or server code (in which case, at best, we show some generic messaging).
Creating a "payload" object like this lets you append additional fields in the future without breaking your schema.
A third alternative is to utilize unions in a similar fashion:
type Mutation {
  updateUser(id: ID!, newEmail: String!): UpdateUserPayload!
}

union UpdateUserPayload = User | Error
This enables clients to use fragments and the __typename field to distinguish between successful and failed mutations:
mutation($userId: ID!, $newEmail: String!) {
  updateUser(id: $userId, newEmail: $newEmail) {
    __typename
    ... on User {
      id
      email
    }
    ... on Error {
      message
      code
    }
  }
}
You can even create specific types for each kind of error, allowing you to omit any sort of "code" field:
union UpdateUserPayload = User | EmailExistsError | EmailInvalidError
There's no right or wrong answer here. While there are advantages to each approach, which one you take ultimately comes down to preference.

Designing a GraphQL schema for an analytics platform

I'm just starting to explore GraphQL as an option for my analytics platform's API layer.
My UI is mainly built from tables and charts. Most of the time the data represents some DB columns grouped by a dimension.
I've found the following article, https://www.microsoft.com/developerblog/2017/09/28/data-independent-graphql-using-view-model-based-schemas/, from Microsoft, describing their take on how such GraphQL schemas should be designed (see below).
type Query {
  channels(source: String!, query: String!, appId: String!, apiKey: String!): [Channel]
  lineCharts(source: String!, query: String!, appId: String!, apiKey: String!, filterKey: String, filterValues: [String]): [LineChart]
  pieCharts(source: String!, query: String!, appId: String!, apiKey: String!): [PieChart]
  barCharts(source: String!, query: String!, appId: String!, apiKey: String!, filterKey: String, filterValues: [String]): [BarChart]
}

type Channel {
  name: String
  id: Int
}

type LineChart {
  id: String
  seriesData: [Series]
}

type PieChart {
  id: String
  labels: [String]
  values: [Int]
}

type BarChart {
  id: String
  seriesData: [Series]
}

type Series {
  label: String
  x_values: [String]
  y_values: [Int]
}
It seems to me that this design is strict, forcing any new chart type to be added to the root Query. How can the schema be made more generic, without losing GraphQL's benefits?
You could do something with union types and inline fragments:
union Chart = LineChart | PieChart | BarChart

type Query {
  charts(
    source: String!
    query: String!
    appId: String!
    apiKey: String!
    filterKey: String
    filterValues: [String]
  ): [Chart]
}
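One thing to add on the server side: a union also needs a __resolveType so GraphQL can tell which member type each result is. A minimal sketch in Apollo Server style, where the kind discriminator and the fetchAllCharts helper are assumptions:

const resolvers = {
  Query: {
    charts: (_parent: unknown, args: Record<string, unknown>) =>
      fetchAllCharts(args), // hypothetical data-access helper
  },
  Chart: {
    // Map each raw chart object to one of the union's member type names.
    __resolveType(chart: { kind: string }) {
      switch (chart.kind) {
        case 'line': return 'LineChart';
        case 'pie': return 'PieChart';
        default: return 'BarChart';
      }
    },
  },
};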
Then you can have your charts resolver bring ALL the charts and write your queries like
fragment Identifiers on Chart {
  __typename
  id
}

query {
  charts(...) {
    ... on LineChart {
      ...Identifiers
      seriesData
    }
    ... on PieChart {
      ...Identifiers
      labels
      values
    }
    ... on BarChart {
      ...Identifiers
      seriesData
    }
  }
}
The Identifiers fragment will provide you with some information about what type you're dealing with and its id, but you can extend it to whatever you like, as long as those fields are common to all types in that union (or you can spread it only on some of the types).
There are two ways you can go about it if you don't want to bring in all the charts:
1. Add inline fragments for only the types you want; the rest will still be there, in the form of empty objects.
2. Pass another argument to the resolver representing the type(s) you want.
P.S. You can get as granular as you like; there are also interfaces and input types.

Sproutcore datasources and model relationships

I currently have a Sproutcore app set up with the following relationships on my models:
App.Client = SC.Record.extend({
  name: SC.Record.attr(String),
  brands: SC.Record.toMany('App.Brand', { isMaster: YES, inverse: 'client' })
});

App.Brand = SC.Record.extend({
  name: SC.Record.attr(String),
  client: SC.Record.toOne('App.Client', { isMaster: NO, inverse: 'brands' })
});
When I was working with fixtures, my fixture for a client looked like this:
{
  guid: 1,
  name: 'My client',
  brands: [1, 2]
}
And my fixture for a brand looked like this:
{
  guid: 1,
  name: 'My brand',
  client: 1
}
This all worked fine for me: getting a client's brands and getting a brand's client.
My question is in regard to how datasources then fit into this and how the server response should be formatted.
Should the data returned from the server mirror exactly the format of the fixtures file? So clients should always contain a brands property containing an array of brand IDs, and vice versa?
If I have a source list view which displays Clients with their Brands grouped below them, how would I go about loading that data for the source view with my datasource? Should I make a call to the server to get all the Clients and then follow that up with a call to fetch all the Brands?
Thanks
Mark
The JSON you return will mostly mirror the fixtures. I recently had pretty much the same question as you, so I built a backend in Grails and a front end in SC, just to explore the store and datasources. My models are:
Scds.Project = SC.Record.extend(
  /** @scope Scds.Project.prototype */ {
  primaryKey: 'id',
  name: SC.Record.attr(String),
  tasks: SC.Record.toMany("Scds.Task", {
    isMaster: YES,
    inverse: 'project'
  })
});

Scds.Task = SC.Record.extend(
  /** @scope Scds.Task.prototype */ {
  name: SC.Record.attr(String),
  project: SC.Record.toOne("Scds.Project", {
    isMaster: NO
  })
});
The JSON returned for Projects is
[{"id":1,"name":"Project 1","tasks":[1,2,3,4,5]},{"id":2,"name":"Project 2","tasks":[6,7,8]}]
and the JSON returned for tasks, when I select a Project, is
{"id":1,"name":"task 1"}
Obviously, this is the JSON for one task only. If you look at the projects JSON, you see that I put a "tasks" array with IDs in it; that's how the internals know which tasks to get. So to answer your first question: you don't need the ID from child to parent, you need the parent to load with all the children, so the JSON does not match the fixtures exactly.
Now, it gets a bit tricky. When I load the app, I do a query to get all the Projects. The store calls the fetch method on the datasource. Here is my implementation.
Scds.PROJECTS_QUERY = SC.Query.local(Scds.Project);
var projects = Scds.store.find(Scds.PROJECTS_QUERY);
...
fetch: function(store, query) {
  console.log('fetch called');
  if (query === Scds.PROJECTS_QUERY) {
    console.log('fetch projects');
    SC.Request.getUrl('scds/project/list').json().
      notify(this, '_projectsLoaded', store, query).
      send();
  } else if (query === Scds.TASKS_QUERY) {
    console.log('tasks query');
  }
  return YES; // return YES if you handled the query
},

_projectsLoaded: function(response, store, query) {
  console.log('projects loaded....');
  if (SC.ok(response)) {
    var recordType = query.get('recordType'),
        records = response.get('body');
    store.loadRecords(recordType, records);
    store.dataSourceDidFetchQuery(query);
    Scds.Statechart.sendEvent('projectsLoaded');
  } else {
    console.log('oops...error loading projects');
    // Tell the store that your server returned an error
    store.dataSourceDidErrorQuery(query, response);
  }
}
This will get the Projects, but not the tasks. Sproutcore knows that as soon as I access the tasks array on a Project, it needs to get them. What it does is call retrieveRecords in the datasource. That method in turn calls retrieveRecord for every id in the tasks array. My retrieveRecord method looks like
retrieveRecord: function(store, storeKey) {
  var id = Scds.store.idFor(storeKey);
  console.log('retrieveRecord called with [storeKey, id] [%@, %@]'.fmt(storeKey, id));
  SC.Request.getUrl('scds/task/get/%@'.fmt(id)).json().
    notify(this, "_didRetrieveRecord", store, storeKey).
    send();
  return YES;
},

_didRetrieveRecord: function(response, store, storeKey) {
  if (SC.ok(response)) {
    console.log('successfully loaded task %@'.fmt(response.get('body')));
    var dataHash = response.get('body');
    store.dataSourceDidComplete(storeKey, dataHash);
  } ...
},
Note that you should use sc-gen to generate your datasource, because it provides a fairly well fleshed-out stub that guides you towards the methods you need to implement. It does not provide a retrieveRecords implementation, but you can provide your own if you don't want to do a single request for each child record you are loading.
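A batched override might look roughly like this (a sketch only: the bulk endpoint scds/task/getMany is hypothetical, the response body is assumed to be an array of task hashes in the same order as storeKeys, and error handling is omitted):

retrieveRecords: function(store, storeKeys, ids) {
  // One request for all the requested tasks instead of one per record.
  SC.Request.postUrl('scds/task/getMany').json().
    notify(this, '_didRetrieveTasks', store, storeKeys).
    send({ ids: ids });
  return YES;
},

_didRetrieveTasks: function(response, store, storeKeys) {
  if (SC.ok(response)) {
    var hashes = response.get('body'); // assumed: one hash per storeKey, in order
    storeKeys.forEach(function(storeKey, i) {
      store.dataSourceDidComplete(storeKey, hashes[i]);
    });
  }
}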
Note that you always have options. If I wanted to, I could have created a Tasks query and loaded all the tasks data up front; that way I wouldn't need to go to my server when I clicked a project. So in answer to your second question: it depends. You can either load the brands when you click on the client, or you can load all the data up front, which is probably a good idea if there isn't that much data.