GraphQL stitch and union - express

I need to aggregate multiple GraphQL services (with the same schema) into a single read-only (query-only) service exposing data from all of them. For example:
---- domain 1 ----
"posts": [
  {
    "title": "Domain 1 - First post",
    "description": "Content of the first post"
  },
  {
    "title": "Domain 1 - Second post",
    "description": "Content of the second post"
  }
]
---- domain 2 ----
"posts": [
  {
    "title": "Domain 2 - First post",
    "description": "Content of the first post"
  },
  {
    "title": "Domain 2 - Second post",
    "description": "Content of the second post"
  }
]
I understand that 'stitching' is not meant for use cases like this, but rather for combining different microservices into the same API. In order to have the same type names in a single API, I implemented 'poor man's namespaces' by appending the domain name to all data types on the fly. However, I'm only able to make a query with two different types, like this:
query {
  domain_1_posts {
    title
    description
  }
  domain_2_posts {
    title
    description
  }
}
but it results in a data set consisting of two arrays:
{
  "data": {
    "domain_1_posts": [
      { ... }
    ],
    "domain_2_posts": [
      { ... }
    ]
  }
}
I would like to hear your ideas on what I can do to combine these into a single dataset containing only posts.
One idea is to add my own resolver that calls the actual resolvers and combines the results into a single array (if that is supported at all).
Also, as a plan B, I could live with sending a 'domain' param to the query and then constructing a query toward the first or second domain (while keeping the initial query domain-agnostic, i.e. without using domain names in the query itself).
Thanks in advance for all suggestions...

I managed to find a solution for my use case, so I'll leave it here in case anyone bumps into this thread...
As already mentioned, stitching should be used to compose a single endpoint from multiple API segments (microservices). If you try to stitch schemas containing the same types or queries, your request will be routed to a pre-selected instance (so, only one).
As @xadm suggested, the key to merging data from multiple schemas into a single data set is using custom fetch logic for the Link used by the remote schema, as explained below:
1) Define a custom fetch function matching your business needs (simplified example):
const customFetch = async (uri, options) => {
  // do not merge introspection query results!
  // for the introspection query always use a predefined (e.g. the first) instance;
  // apollo-link-http puts the operation name in the POST body
  const { operationName } = JSON.parse(options.body);
  if (operationName === 'IntrospectionQuery') {
    return fetch(services[0].uri, options);
  }
  // fan the same fetch call out to every endpoint
  const calls = services.map(service => fetch(service.uri, options));
  // execute the calls in parallel and parse the JSON payloads
  const responses = await Promise.all(calls);
  const results = await Promise.all(responses.map(res => res.json()));
  // do whatever you need to merge the data according to your needs
  const retData = customBusinessLogic(results);
  // return a new response containing the merged data
  return new fetch.Response(JSON.stringify(retData), { "status": 200 });
}
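For the posts example above, the natural merge is to concatenate the arrays field by field, since all services share an identical schema. A minimal sketch of such a customBusinessLogic (assuming every queried top-level field returns an array):

const customBusinessLogic = (results) => {
  // for each top-level field in the first response, concatenate the
  // arrays returned by every service (identical schemas, identical shape)
  const merged = {};
  for (const key of Object.keys(results[0].data)) {
    merged[key] = results.flatMap(result => result.data[key]);
  }
  return { data: merged };
};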
2) Define a link using the custom fetch function. If you are using identical schemas, you don't need to create links to each instance; just one should be enough.
const httpLink = new HttpLink({ uri: services[0].uri, fetch: customFetch });
3) Use the link to create a remote executable schema:
const schema = await introspectSchema(httpLink);
return makeRemoteExecutableSchema({
  schema,
  link: httpLink,
});
Note that the GraphQL context is defined on the server, not on the remote schema; inject the http request headers there if you need them:
const server = new ApolloServer({
  schema: executableSchema, // the remote schema returned above
  context: ({ req }) => ({
    // inject http request headers into the context if you need them
    headers: {
      ...req.headers,
    },
  }),
});
4) If you want to forward http headers all the way to the fetch function, use apollo-link-context's setContext:
// link for forwarding headers through the context
const contextLink = setContext((request, previousContext) => {
  if (previousContext.graphqlContext) {
    return {
      headers: {
        ...previousContext.graphqlContext.headers,
      },
    };
  }
  return {};
}).concat(httpLink);
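The combined link then takes the place of the plain httpLink when creating the remote schema (same call as in step 3):

return makeRemoteExecutableSchema({
  schema,
  // the context-forwarding link, so the headers reach customFetch's options
  link: contextLink,
});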
Just to mention, these are the dependencies used for this:
const { introspectSchema, makeRemoteExecutableSchema, ApolloServer } = require('apollo-server');
const fetch = require('node-fetch');
const { setContext } = require('apollo-link-context');
const { HttpLink } = require('apollo-link-http');
I hope it will be helpful to someone...

Related

How to use a part of intercepted endpoint as a variable in my stub with Cypress

I am testing a frontend and I want to make my tests more efficient.
I have the following custom command:
cy.intercept('**/api/classification/dd86ac0a-ca23-413b-986c-535b6aad659c/items/**',
{ fixture: 'ItemsInEditor.json' }).as('ItemsInEditorStub')
This works correctly and intercepts 25 times :). But the ID in the stub file has to be the same as in the requested endpoint, otherwise the frontend will not process it.
At this point I do not want to make 25 stub files in the fixtures folder.
In the screenshot you can see the different calls I need to intercept. I would like to save the last ID as a variable and use it in the stub file.
The Stub is like this:
{
  "item": {
    "version": 3,
    "title": "Cars",
    "rows": [],
    "id": "dynamicIdBasedOnEndPoint" <- can we make this dynamic, based on the ID in the endpoint?
  },
  "itemState": "Submitted"
}
UPDATE:
What I have for now is just the basic I guess:
cy.intercept('**/api/classification/*/items/**', {
  body: {
    item: {
      version: 3,
      title: 'Cars',
      rows: [],
      id: '55eb5a28-24d8-4705-b465-8e1454f73ac8' // still needs to be dynamic, always the same as the intercepted '**' (wildcard)
    },
    itemState: "Submitted"
  }
})
.as('ItemsInEditorStub')
cy.fixture('ItemsInEditor.json').then(ModFixture => {
  cy.intercept('GET', '**/api/classification/**/items/id/**', (req) => {
    const id = req.url.split('/').pop(); // last part of the url path
    ModFixture.item.id = id;             // add the id dynamically
    req.reply(ModFixture);               // send the altered fixture
  }).as('ItemsInEditorStub')
})
Thanks to @Fody
You can make the fixture dynamic using JavaScript.
Ref: Providing a stub response with req.reply()
cy.fixture('ItemsInEditor.json').then(fixture => {
  cy.intercept('**/api/classification/dd86ac0a-ca23-413b-986c-535b6aad659c/items/**',
    (req) => {
      const id = req.url.split('/').pop(); // last part of url path
      fixture.item.id = id;                // add the id dynamically
      req.reply(fixture);                  // send altered fixture
    }
  ).as('ItemsInEditorStub')
})
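A quick way to verify the stub in the test itself (a sketch; the assertion is illustrative):

// wait for the stubbed call and check the id was mirrored from the url
cy.wait('@ItemsInEditorStub').then(({ request, response }) => {
  const id = request.url.split('/').pop();
  expect(response.body.item.id).to.eq(id);
});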

Syncfusion TreeGrid and Grid with WebAPI doesn't work on delete

I've set up a treeGrid (the grid is the same) to get data through the ASP.NET WebAPI using their DataManager:
var categoryID = 15;
var dataManager = ej.DataManager({
    url: "/API/myrecords?categoryID=" + categoryID,
    adaptor: new ej.WebApiAdaptor()
});
$("#treeGridContainer").ejTreeGrid({
    dataSource: dataManager,
    childMapping: "Children",
    treeColumnIndex: 1,
    isResponsive: true,
    contextMenuSettings: {
        showContextMenu: true,
        contextMenuItems: ["add", "edit", "delete"]
    },
    contextMenuOpen: contextMenuOpen,
    editSettings: { allowEditing: true, allowAdding: true, allowDeleting: true, mode: 'Normal', editMode: "rowEditing" },
    columns: [
        { field: "RecordID", headerText: "ID", allowEditing: false, width: 20, isPrimaryKey: true },
        { field: "RecordName", headerText: "Name", editType: "stringedit" },
    ],
    actionBegin: function (args) {
        console.log('ActionBegin: ', args);
        if (args.requestType === "add") {
            // add a new record, managed manually...
            var parentID = 0;
            if (args.level != 0) {
                parentID = args.parentItem.TaxonomyID;
            }
            args.data.TaxonomyID = 0;
            addNewRecord(domainID, parentID, args.data, args.model.selectedRowIndex);
        }
    }
});
The GET works perfectly.
The PUT works fine as I'm managing it manually because it's not called at all from the DataManager, and in any case I want to manage the update of the records in the TreeGrid.
The problem is with DELETE, which is called by the DataManager when I click Delete in the context menu over an item in the TreeGrid.
It makes a call to the following URL:
http://localhost:50604/API/myrecords?categoryID=15/undefined
and obviously, I get a 405 (Method Not Allowed).
The problem is caused by the categoryID parameter, which breaks the RESTful URL scheme; the DataManager is not able to understand that there is a parameter.
A possible solution could be to send this parameter as a POST variable, but the DataManager is not able to do that.
Does anyone have a clue how to solve this? It's a common scenario in real-world applications.
While populating Tree Grid data using the ejDataManager, CRUD actions are handled using the built-in POST (insert), PUT (update), and DELETE request types irrespective of the CRUD URLs, so there is no need to bind 'removeUrl' for deleting records.
In the provided code example the parameter is passed in the URL used to fetch data, hence the reported issue occurs. Using ejQuery's addParams method we can pass the parameter separately. Below is a code example that passes the parameter using the Tree Grid load event; the parameter is then retrieved on the server side from the query string.
[html]
var dataManager = ej.DataManager({
    url: "api/Values",
    adaptor: new ej.WebApiAdaptor()
});
$("#treeGridContainer").ejTreeGrid({
    load: function (args) {
        // to pass a parameter at load time
        args.model.query.addParams("keyId", 48);
    },
});
[controller]
public object Get()
{
    var queryString = HttpContext.Current.Request.QueryString;
    // here we can get the parameter during load time
    int num = Convert.ToInt32(queryString["keyId"]);
    //..
    return new { Items = DataList, Count = DataList.Count() };
}
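Applied to the original code, the same load handler can carry the categoryID from the question, so the DataManager url stays parameter-free (a sketch, reusing the question's variable names):

var dataManager = ej.DataManager({
    url: "/API/myrecords",
    adaptor: new ej.WebApiAdaptor()
});
$("#treeGridContainer").ejTreeGrid({
    dataSource: dataManager,
    load: function (args) {
        // categoryID travels as a query-string parameter added at load time
        args.model.query.addParams("categoryID", categoryID);
    },
    // ...remaining settings as in the question
});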
You can find the sample here for your reference.
Regards,
Syncfusion Team

Read query from apollo cache with a query that doesn't exist yet, but has all info stored in the cache already

I have a GraphQL endpoint where this query can be entered:
fragment ChildParts on ChildNode {
  id
  __typename
}

fragment ParentParts on ParentNode {
  __typename
  id
  children {
    edges {
      node {
        ...ChildParts
      }
    }
  }
}

query {
  parents {
    edges {
      node {
        ...ParentParts
      }
    }
  }
}
When executed, it returns something like this:
"data": {
"edges": [
"node": {
"id": "<some id for parent>",
"__typename": "ParentNode",
"children": {
"edges": [
node: {
"id": "<some id for child>",
"__typename": "ChildNode"
},
...
]
}
},
...
]
}
Now, with apollo client, after a mutation I can read this query from the cache and update/add/delete any ParentNode, and also any ChildNode, but I have to walk the structure returned by this query.
Now I'm looking for a way to get a list of ChildNodes out of the cache (which already has them, since the cache is stored as a flat list), to make updating nested data a bit easier. Is there a way to read a query from the cache without having read the same query from the server before?
You can use the client's readFragment method to retrieve any one individual item from the cache. This just requires the id and a fragment string.
const todo = client.readFragment({
  id,
  fragment: gql`
    fragment fooFragment on Foo {
      id
      bar
      qax
    }
  `,
})
Note that id here is the cache key returned by the dataIdFromObject function -- if you haven't specified a custom function, then (provided the __typename and id or _id fields are present) the default implementation is just:
${result.__typename}:${result.id || result._id}
If you provided your own dataIdFromObject function, you'll need to provide whatever id is returned by that function.
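For the schema in the question, a ChildNode can then be read directly with the default key format (a sketch; the type name and fields come from the example above, and childId is whatever id you hold):

// read one ChildNode straight from the flat cache, no parent query involved
const child = client.readFragment({
  id: `ChildNode:${childId}`, // default dataIdFromObject key format
  fragment: gql`
    fragment ChildParts on ChildNode {
      id
      __typename
    }
  `,
});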
As @Herku pointed out, depending on the use case, it's also possible to use cache redirects to utilize data cached for one query when resolving another one. This is configured as part of setting up your InMemoryCache:
const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      book: (_, args, { getCacheKey }) =>
        getCacheKey({ __typename: 'Book', id: args.id })
    },
  },
})
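With that redirect in place, a single-book query can be answered from data cached by an earlier query (a sketch; the Book fields are illustrative):

// resolves from the cache if a Book with this id was already cached
client.query({
  query: gql`
    query GetBook($id: ID!) {
      book(id: $id) {
        id
        title
      }
    }
  `,
  variables: { id: '5' },
});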
Unfortunately, as of writing this answer, I don't believe there's any method to delete a cached item by id. There's ongoing discussion around that point here (original issue here).

GraphQL, Dataloader, [ORM or not], hasMany relationship understanding

I'm using for the first time Facebook's dataloader (https://github.com/facebook/dataloader).
What I don't understand is how to use it when I have one-to-many relationships.
Here is a reproduction of my problem: https://enshrined-hydrant.glitch.me.
If you use this query in the Playground:
query {
  persons {
    name
    bestFriend {
      name
    }
    opponents {
      name
    }
  }
}
you get values.
But if you open the console log here: https://glitch.com/edit/#!/enshrined-hydrant you can see the repeated database calls I want to avoid.
My Person type is:
type Person {
  id: ID!
  name: String!
  bestFriend: Person
  opponents: [Person]
}
I can use dataloader fine for bestFriend: Person, but I don't understand how to use it with opponents: [Person].
As you can see, that resolver has to return an array of values.
Do you have any hints about this?
You need to create batched endpoints to work with dataloader - it can't do batching by itself.
For example, you probably want the following endpoints:
GET /persons - returns all people
POST /bestFriends with an Array<personId> body - returns an array of best friends matching the corresponding array of personIds
Then, your dataloaders can look like:
function batchedBestFriends(personIds) {
  return fetch('/bestFriends', {
    method: 'POST',
    body: JSON.stringify(personIds)
  }).then(response => response.json());
  // We assume above that the API returns a straight array of the data.
  // If the data was keyed, you could add another accessor such as
  // .then(data => data.bestFriends)
}
// The `keys` here will just be the accumulated list of `personId`s from the `load` call in the resolver
const bestFriendLoader = new DataLoader(keys => batchedBestFriends(keys));
Now, your resolver will look something like:
const PersonType = new GraphQLObjectType({
  ...
  bestFriend: {
    type: BestFriendType,
    resolve: (person, args, context) => {
      return bestFriendLoader.load(person.id);
    }
  }
});
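The question's opponents: [Person] field works the same way, except each key must resolve to an array, so the batch function returns an array of arrays (one entry per personId, in order). A hedged sketch, where fetchOpponentsFor stands in for whatever batched endpoint or query you expose:

// hypothetical batched loader for a hasMany relationship:
// DataLoader wants exactly one result per key, so each result is itself an array
const opponentsLoader = new DataLoader(async personIds => {
  // one round trip for all requested persons, e.g. WHERE person_id IN (...)
  const rows = await fetchOpponentsFor(personIds); // assumed batched helper
  // group the flat rows by personId, preserving the order of the keys
  return personIds.map(id => rows.filter(row => row.personId === id));
});

// and in the Person type:
// opponents: {
//   type: new GraphQLList(PersonType),
//   resolve: (person) => opponentsLoader.load(person.id),
// }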

Auto-suggest entities using the Wikipedia API

I want to provide an auto-suggest feature to my users, where they can choose from a list of known "things" from a semantic entity database.
I'm looking at using the MediaWiki API instead of setting up my own:
https://www.mediawiki.org/wiki/API:Main_page
There is an API tool for testing requests:
https://www.mediawiki.org/wiki/Special:ApiSandbox
For example if a user likes cats:
https://www.wikidata.org/wiki/Q146
The requests would be:
https://en.wikipedia.org/w/api.php?action=query&format=jsonfm&prop=pageterms&list=&meta=&titles=C
https://en.wikipedia.org/w/api.php?action=query&format=jsonfm&prop=pageterms&list=&meta=&titles=Ca
https://en.wikipedia.org/w/api.php?action=query&format=jsonfm&prop=pageterms&list=&meta=&titles=Cat
The user would select Cat from the dropdown, and I would save the ID.
Is this a good approach? How could I improve it?
I managed to load autosuggestions using the Wikipedia search API:
https://en.wikipedia.org/w/api.php?action=query&format=json&gsrlimit=15&generator=search&origin=*&gsrsearch=
The Angular code:
this.resultsCtrl = new FormControl();
this.resultsCtrl.valueChanges
  .debounceTime(400)
  .subscribe(name => {
    if (name) {
      this.filteredResults = this.filterResults(name);
    }
  });

filterResults(name: string) {
  return Observable.create(obs => {
    const displaySuggestions = function (response) {
      if (!response) {
        obs.error(new Error('empty response')); // nothing came back
      } else {
        obs.next(Object.keys(response.query.pages).map(key => response.query.pages[key]));
        obs.complete();
      }
    };
    this.http.get('https://en.wikipedia.org/w/api.php?action=query&format=json&gsrlimit=15&generator=search&origin=*&gsrsearch='
      + encodeURI(name))
      .subscribe(displaySuggestions);
  });
}
}