Many-to-many with pivot data in Dgraph using a GraphQL schema (from SQL)

I have the below many-to-many relation in a relational DB and I want to migrate it to Dgraph.
This relation also has extra columns in the pivot table products_stores, such as price and disc_price.
I have the below Dgraph schema using GraphQL:
type Product {
  id: ID!
  name: String! @id
  slug: String! @id
  image: String
  created_at: DateTime!
  updated_at: DateTime!
  stores: [Store] @hasInverse(field: products)
}
type Store {
  id: ID!
  name: String! @id
  logo: String
  products: [Product] @hasInverse(field: stores)
  created_at: DateTime!
  updated_at: DateTime!
}
I am a newbie to graph databases and I don't know how to define these extra pivot columns.
Any help would be greatly appreciated.

If your pivot table is purely a linking table holding no additional information, then you model it exactly as you did above. However, if the pivot table contains additional information about the relationship, you will need to model it with an intermediate linking type, which is almost the same idea as above. I prefer these linking types to have a name that describes the link; in this case I named it Stock, but that name could be anything you want. I also prefer camelCase for field names, so my example reflects this preference as well. (I added some search directives too.)
type Product {
  id: ID!
  name: String! @id
  slug: String! @id
  image: String
  createdAt: DateTime! @search
  updatedAt: DateTime! @search
  stock: [Stock] @hasInverse(field: product)
}
type Store {
  id: ID!
  name: String! @id
  logo: String
  stock: [Stock] @hasInverse(field: store)
  createdAt: DateTime! @search
  updatedAt: DateTime! @search
}
type Stock {
  id: ID!
  store: Store!
  product: Product!
  name: String! @id
  price: Float! @search
  originLink: String
  discPrice: Float @search
}
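With the linking type in place, Dgraph auto-generates an addStock mutation that can attach the new node to existing Product and Store nodes by reference. A sketch (the ids and field values below are placeholders, not from the question):

```graphql
mutation {
  addStock(input: [{
    name: "iPhone 13 at Main St"
    price: 999.99
    discPrice: 899.99
    product: { id: "0x123" }  # existing Product node (placeholder id)
    store: { id: "0x456" }    # existing Store node (placeholder id)
  }]) {
    stock {
      id
      name
      price
    }
  }
}
```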
The @hasInverse directive is only required on one edge of an inverse relationship; if you want, you can define it on both ends for readability without any side effects.
This model allows you to query many common use cases very simply, without the additional join statements you are probably used to in SQL. And the best part about Dgraph is that all of these queries and mutations are generated for you, so you don't have to write any resolvers! Here is one example that finds all the items in a store within a certain price range:
query ($storeName: String, $minPrice: Float!, $maxPrice: Float!) {
  getStore(name: $storeName) {
    id
    name
    stock(filter: { price: { between: { min: $minPrice, max: $maxPrice } } }) {
      id
      name
      price
      product {
        id
        name
        slug
        image
      }
    }
  }
}
For a query that finds only specific product names in a specific store, use the @cascade directive to remove the undesired Stock nodes (until Dgraph finishes the nested-filters RFC that is in progress):
query ($storeName: String, $productIDs: [ID!]!) {
  getStore(name: $storeName) {
    id
    name
    stock @cascade(fields: ["product"]) {
      id
      name
      price
      product(filter: { id: $productIDs }) @cascade(fields: ["id"]) {
        id
        name
        slug
        image
      }
    }
  }
}

Related

Get Empty Rows in TypeORM

I'm trying to write a TypeORM query which includes multiple where clauses. I am able to achieve this via the where option as follows:
const categories = [
  { name: 'master', categoryTypeId: 2, parentId: 1, locationId: null },
  { name: 'food', categoryTypeId: 3, parentId: null, locationId: null }
];
const rows = await Repo.find({
  where: categories.map((category) => ({
    name: category.name,
    categoryTypeId: category.categoryTypeId,
    locationId: category.locationId
  })),
});
I want to maintain the mapping between the input array and the rows returned. For example, I know that the second category doesn't exist in the DB; I would want an empty object in the rows variable so that I know which categories didn't yield any result.
Upon research I have found that we can do something with SQL as mentioned here, but I'm not sure how to translate that into TypeORM, if I can at all.
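One way to get those empty placeholders is to re-align the result set with the input after the query rather than in SQL. A sketch outside of TypeORM itself, with the repository result mocked as a plain array (the `id: 7` row is an illustrative value, not from the question):

```javascript
const categories = [
  { name: 'master', categoryTypeId: 2, parentId: 1, locationId: null },
  { name: 'food', categoryTypeId: 3, parentId: null, locationId: null }
];

// Pretend Repo.find() only matched the first category.
const rows = [{ id: 7, name: 'master', categoryTypeId: 2, locationId: null }];

// Re-align results with the input: one entry per input category,
// with an empty object wherever nothing matched.
const aligned = categories.map((category) =>
  rows.find(
    (row) =>
      row.name === category.name &&
      row.categoryTypeId === category.categoryTypeId &&
      row.locationId === category.locationId
  ) || {}
);
// aligned[0] is the matched row; aligned[1] is {} for the missing 'food' category.
```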

What is the best way to implement nested cursor pagination with a SQL data source in a GraphQL server?

How do you manage efficient data fetching with nested cursors in a Relay-esque schema (with a SQL data source)?
Do you try to make a single complicated SQL query that resolves the N+1 problem with a "LIMIT :args_first", "ORDER BY :args_orderby" and "WHERE cursor > :args_after"?
Or do you run two queries against the DB and make use of Facebook's DataLoader?
For example, I've got a schema structured as below:
enum BookSortKeys {
  ID
  TITLE
  PRICE
  UPDATED_AT
  CREATED_AT
}
enum ReviewSortKeys {
  ID
  REVIEW
  UPDATED_AT
  CREATED_AT
}
type Book {
  id: ID!
  title: String!
  description: String
  price: Float!
  updatedAt: String!
  createdAt: String!
  reviews(
    """
    Returns the elements that come after the specified cursor.
    """
    after: String
    """
    Returns the elements that come before the specified cursor.
    """
    before: String
    """
    Returns up to the first `n` elements from the list.
    """
    first: Int
    """
    Returns up to the last `n` elements from the list.
    """
    last: Int
    """
    Reverse the order of the underlying list.
    """
    reverse: Boolean = false
    """
    Sort the underlying list by the given key.
    """
    sortKey: ReviewSortKeys = ID
  ): ReviewConnection!
}
type Query {
  books(
    """
    Returns the elements that come after the specified cursor.
    """
    after: String
    """
    Returns the elements that come before the specified cursor.
    """
    before: String
    """
    Returns up to the first `n` elements from the list.
    """
    first: Int
    """
    Returns up to the last `n` elements from the list.
    """
    last: Int
    """
    Supported filter parameters:
    - `title`
    - `id`
    - `price`
    - `description`
    - `created_at`
    - `updated_at`
    """
    query: String
    """
    Reverse the order of the underlying list.
    """
    reverse: Boolean = false
    """
    Sort the underlying list by the given key.
    """
    sortKey: BookSortKeys = ID
  ): BookConnection!
}
type ReviewConnection {
  pageInfo: PageInfo!
  edges: [ReviewEdge!]!
}
type ReviewEdge {
  cursor: String!
  node: Review!
}
type BookConnection {
  pageInfo: PageInfo!
  edges: [BookEdge!]!
}
type BookEdge {
  cursor: String!
  node: Book!
}
type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
}
type Review {
  review: String!
  id: ID!
  updatedAt: String!
  createdAt: String!
}
type Mutation {
}
schema {
  query: Query
  mutation: Mutation
}
And I'd like to execute a query like the one below, retrieving the data in the most efficient manner.
query GET_BOOKS {
  books(first: 10, sortKey: PRICE, reverse: true) {
    pageInfo {
      hasNextPage
      hasPreviousPage
    }
    edges {
      cursor
      node {
        id
        title
        description
        reviews(after: "base64-cursor", first: 5, sortKey: CREATED_AT) {
          edges {
            node {
              review
            }
          }
        }
      }
    }
  }
}
I can very easily convert all of the pagination parameters for the top-level query (books) into a SQL statement, but with the nested cursor I can only see the two options mentioned above. The current issues I'm facing before implementing either of them are:
If I go the pure SQL approach: is there even a clean way to run a single query and apply the LIMIT and WHERE createdAt > :after_cursor_val at the nested (JOIN) level?
If the above is possible, is it more performant than DataLoader at scale? The query seems like it will be pretty verbose and complex if implemented.
What happens if the nested pagination tree grows (i.e. requests with 4 nested paginations)? Would a single Query-level SQL command suffice here, or is it more scalable to add resolvers on each relationship (i.e. book -> reviews has a SQL query to pull this book's specific reviews, reviews -> publications has a query to pull the specific publications each review has been in, and so on) and batch them with DataLoader?
If you go the DataLoader route, the batching seems to use a WHERE IN clause (i.e. SELECT * FROM reviews WHERE "reviews"."bookId" IN (...list of batched book ids)). Would adding LIMIT, ORDER BY, and WHERE createdAt > :cursor produce unexpected results, since my result set is a mix of entries across multiple book ids?
Long term, my personal feeling is that the pure SQL approach is going to be messy from a code perspective; thoughts on this?
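On the single-query question, one common approach (a sketch, not from this thread) is a window function, so that the LIMIT and cursor predicate apply per parent row rather than across the whole batched result set. Assuming a reviews(id, book_id, review, created_at) table (names are assumptions; adjust to your schema):

```sql
-- Top 5 reviews per book after a cursor, in one batched query.
SELECT *
FROM (
  SELECT r.*,
         ROW_NUMBER() OVER (
           PARTITION BY r.book_id
           ORDER BY r.created_at
         ) AS rn
  FROM reviews r
  WHERE r.book_id IN (:batched_book_ids)
    AND r.created_at > :after_cursor_val
) ranked
WHERE ranked.rn <= 5;
```

This also speaks to the DataLoader concern in the last point: a plain LIMIT/ORDER BY over a WHERE IN batch would indeed mix rows across book ids, which is exactly what the PARTITION BY avoids.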

Querying (and filtering) in a many-to-many relationship in Backand

I'm trying to use the pet-owner example to create some sort of playlist app where a playlist can be shared among different users.
I have read both links to understand how a many-to-many relationship is created in Backand:
Link 1 -
Link 2
According to the pet example, to get all owners of one pet I should fetch the pet object (using its id field) and then filter its user_pets list, matching on the user id. That may work for a small number of users/pets, but I'd rather query the user_pets table directly, filtering by user_id and pet_id.
My approach has been this code, without success:
$http({
  method: 'GET',
  url: getUrl(), // this maps to the pets_owner "table"
  params: {
    deep: true,
    exclude: 'metadata',
    filter: [
      { fieldName: 'pet', operator: 'equals', value: pet_id },
      { fieldName: 'owner', operator: 'equals', value: user_id }
    ]
  }
})
Any idea how to query/filter to get only related results?
Thanks in advance
Because user_id and pet_id are both object fields, the operator should be "in".
From the Backand docs:
The following are the possible operators, depending on the field type:
- numeric or date fields: equals, ...
- object fields: in
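Applying that to the question's request, the filter would look something like this (a sketch; the ids are placeholders, and whether the value for "in" must be wrapped in an array may depend on the Backand version):

```javascript
// Placeholder ids standing in for the real pet/user ids.
const pet_id = 12;
const user_id = 34;

// Same request parameters as in the question, but using the "in"
// operator for the two object fields.
const params = {
  deep: true,
  exclude: 'metadata',
  filter: [
    { fieldName: 'pet', operator: 'in', value: [pet_id] },
    { fieldName: 'owner', operator: 'in', value: [user_id] }
  ]
};
// params would then be passed to $http({ method: 'GET', url: getUrl(), params })
```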

Set two different memory stores for one Dojo widget (dijit/form/FilteringSelect) at the same time

I have two different JSON structures. One represents the individual users of the system and the other represents groups made of those users. So I created two memory stores from them (each with a different idProperty: userId and groupId, respectively).
I have a FilteringSelect dropdown, and my requirement is to use both of these as the data store for the list, so that either a valid user or a valid group can be selected from the dropdown.
Two possible ways I could think of doing this:
1) creating one common memory store from the two JSONs, but the idProperty differs, so I'm not sure how this is possible;
2) adding both memory stores to the widget, but again the idProperty differs, so I'm not sure.
I am very new to Dojo, so any help would be really appreciated. Thanks in advance!
I think that if you use a store to represent something (model data), it should be shaped so that it can be used properly within a widget.
So in your case I would add both of them to a single store. If they have different IDs (for example when they come from a back-end service), you can map both types of models onto a single object structure. For example:
var groups = [{
  groupId: 1,
  groupName: "Group 1",
  users: 10
}, {
  groupId: 2,
  groupName: "Group 2",
  users: 13
}, {
  groupId: 3,
  groupName: "Group 3",
  users: 2
}];
var users = [{
  userId: 1,
  firstName: "John",
  lastName: "Doe"
}, {
  userId: 2,
  firstName: "Jane",
  lastName: "Doe"
}, {
  userId: 3,
  firstName: "John",
  lastName: "Smith"
}];
require(["dojo/store/Memory", "dijit/form/FilteringSelect", "dojo/_base/array", "dojo/domReady!"], function(Memory, FilteringSelect, array) {
  var filterData = array.map(groups, function(group) {
    return {
      id: "GROUP" + group.groupId,
      groupId: group.groupId,
      name: group.groupName,
      type: "group"
    };
  });
  Array.prototype.push.apply(filterData, array.map(users, function(user) {
    return {
      id: "USER" + user.userId,
      userId: user.userId,
      name: user.firstName + " " + user.lastName,
      type: "user"
    };
  }));
  // Feed the merged array to the widget as a single store
  // ("filterSelect" is an assumed placeholder DOM node id).
  new FilteringSelect({
    store: new Memory({ data: filterData }),
    searchAttr: "name"
  }, "filterSelect").startup();
});
In this example we have two arrays, groups and users; to merge them I used the map() function of dojo/_base/array and then concatenated the two results.
They still contain their original ID and a type, so you will still be able to reference the original objects.
From my previous experience I have learned that your model data should not represent pure business data, but data that is easily used in the view/user interface.
By giving both arrays a similar object structure, you can easily use them in a dijit/form/FilteringSelect, which you can see here: http://jsfiddle.net/ut5hjbyb/
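The merge step can also be written in plain JavaScript (no Dojo required), which makes it easy to see that the prefixed ids keep users and groups from colliding in the combined store:

```javascript
const groups = [
  { groupId: 1, groupName: "Group 1", users: 10 },
  { groupId: 2, groupName: "Group 2", users: 13 }
];
const users = [
  { userId: 1, firstName: "John", lastName: "Doe" },
  { userId: 2, firstName: "Jane", lastName: "Doe" }
];

// Map both shapes onto one structure with a unique, prefixed id.
const filterData = groups.map((g) => ({
  id: "GROUP" + g.groupId,
  groupId: g.groupId,
  name: g.groupName,
  type: "group"
})).concat(users.map((u) => ({
  id: "USER" + u.userId,
  userId: u.userId,
  name: u.firstName + " " + u.lastName,
  type: "user"
})));
// filterData: 4 items with unique ids GROUP1, GROUP2, USER1, USER2
```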

Sencha Touch SQL proxy

I am using the SQL proxy in my Sencha Touch 2 app and I am able to store and retrieve data offline.
What I cannot do, and what the Sencha documentation does not seem to cover, is how to customize the Sencha SQL store.
For example, to create a SQL-based store, I did the following:
Ext.define("MyApp.model.Customer", {
  extend: "Ext.data.Model",
  config: {
    fields: [
      { name: 'id', type: 'int' },
      { name: 'name', type: 'string' },
      { name: 'age', type: 'string' }
    ],
    proxy: {
      type: "sql",
      database: "MyDb"
    }
  }
});
1. Now, how do I specify the size of the database?
2. How do I specify constraints on the fields, like unique, primary key, etc.?
Say I have 4 columns in my database:
pid, name, age, phone
I want a composite primary key over two fields: (pid, name).
If I were creating the table via a SQL query, I would do something like:
CREATE TABLE Persons
(
  pid int,
  name varchar(255),
  age int,
  phone int,
  PRIMARY KEY (pid, name)
);
Now how do I achieve the same via the model?
3. If I want to interact with the database via a raw SQL query, I do the following:
var query = "SELECT * FROM CUSTOMER";
var db = openDatabase('MyDb', '1.0', 'MyDb', 2 * 1024 * 1024);
db.transaction(function (tx) {
  tx.executeSql(query, [], function (tx, results) {
    // do something here
  }, null);
});
Is this the best way to do it?