I tried following the official Shopify documentation for retrieving ProductMedia.
My query looks like this:
query getProductMediaById($id: ID!) {
  product(id: $id) {
    id
    media(first: 10) {
      edges {
        node {
          mediaContentType
          alt
          ...mediaFieldsByType
        }
      }
    }
  }
}
fragment mediaFieldsByType on Media {
  ... on ExternalVideo {
    id
    embeddedUrl
  }
  ... on MediaImage {
    image {
      ...imageAttributes
    }
  }
  ... on Model3d {
    sources {
      url
      mimeType
      format
      filesize
    }
  }
  ... on Video {
    sources {
      url
      mimeType
      format
      height
      width
    }
  }
}

fragment imageAttributes on Image {
  altText
  url
}
The only place where I diverged from the official documentation is that I moved the image attributes into a separate fragment for reuse.
But when I try to execute the query, I get the following response:
{
  "data": {
    "product": {
      "__typename": "Product",
      "id": "Z2lkOi8vc2hvcGlmeS9Qcm9kdWN0LzY3NjcyOTczMzEzMDU=",
      "media": {
        "__typename": "MediaConnection",
        "edges": [
          {
            "__typename": "MediaEdge",
            "node": {
              "__typename": "MediaImage",
              "mediaContentType": "IMAGE",
              "alt": ""
            }
          }
        ]
      }
    }
  },
  "loading": false,
  "networkStatus": 7
}
In other words, my response doesn't contain any of the fields from the mediaFieldsByType fragment.
Any idea what I'm doing wrong?
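One note in case the issue is the client rather than the query: the loading and networkStatus fields in the response suggest Apollo Client, which by default cannot match fragments spread on an interface such as Media unless it is told the possible concrete types. A minimal sketch, assuming Apollo Client 3 (the endpoint is a placeholder):

// Sketch only: assumes Apollo Client 3; the type names mirror the fragment above.
import { ApolloClient, InMemoryCache } from '@apollo/client';

const client = new ApolloClient({
  uri: '/graphql', // placeholder endpoint
  cache: new InMemoryCache({
    possibleTypes: {
      // Without this, fragments spread on the Media interface are silently dropped.
      Media: ['ExternalVideo', 'MediaImage', 'Model3d', 'Video'],
    },
  }),
});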
I am using the following query:
query myOrgRepos {
  organization(login: "COMPANY_NAME") {
    repositories(first: 100) {
      edges {
        node {
          name
          defaultBranchRef {
            target {
              ... on Commit {
                history(after: "2021-01-01T23:59:00Z", before: "2023-02-06T23:59:00Z", author: { emails: "USER_EMAIL" }) {
                  edges {
                    node {
                      oid
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
But with accurate names for the organization and emails, I am persistently getting the following error for every repo.
{
  "type": "INVALID_CURSOR_ARGUMENTS",
  "path": [
    "organization",
    "repositories",
    "edges",
    20,
    "node",
    "defaultBranchRef",
    "target",
    "history"
  ],
  "locations": [
    {
      "line": 10,
      "column": 29
    }
  ],
  "message": "`2021-01-01T23:59:00Z` does not appear to be a valid cursor."
},
If I remove the after field, it works just fine. However, I kind of need it. According to all the docs that I have read, both after and before take the same timestamp. I can't tell where I am going wrong here.
I have tried:
narrowing the gap between before and after
returning only a single repository
removing after (it works fine without it)
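For what it's worth, on the GitHub GraphQL API the after and before arguments of history are pagination cursors (opaque strings returned in pageInfo), while since and until accept GitTimestamp values. A sketch of the time-bounded form, reusing the placeholders from the query above:

query myOrgRepos {
  organization(login: "COMPANY_NAME") {
    repositories(first: 100) {
      edges {
        node {
          name
          defaultBranchRef {
            target {
              ... on Commit {
                # since/until take GitTimestamp values; after/before expect cursors
                history(since: "2021-01-01T23:59:00Z", until: "2023-02-06T23:59:00Z", author: { emails: "USER_EMAIL" }) {
                  edges {
                    node {
                      oid
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}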
I have a query like the one below:
query {
  heroes {
    node {
      name
    }
    endCursor
  }
}
I am trying to understand how GraphQL can handle errors and return a partial response. I looked at https://github.com/graphql/dataloader/issues/169 and tried to create a resolver like the one below:
{
  Query: {
    heroes: async (_) => {
      const heroesData = await loadHeroesFromDataWarehouse();
      return {
        endCursor: heroesData.endCursor,
        node: heroesData.map(h => h.name === 'hulk' ? new ApolloError('Hulk is too powerful') : h)
      };
    }
  }
}
I was hoping it would resolve to something like the response below:
{
  "errors": [
    {
      "message": "Hulk is too powerful",
      "path": [
        "heroes", "1"
      ]
    }
  ],
  "data": {
    "heroes": [
      {
        "name": "spiderman"
      },
      null,
      {
        "name": "ironman"
      }
    ]
  }
}
but it is completely failing, making heroes itself null, like below:
{
  "errors": [
    {
      "message": "Hulk is too powerful",
      "path": [
        "heroes"
      ]
    }
  ],
  "data": {
    "heroes": null
  }
}
How can I make the resolver return the desired partial response?
Found the solution: basically, we need a resolver that resolves the edge model itself:
{
  Query: {
    heroes: (_) => loadHeroesFromDataWarehouse()
  },
  HeroesEdge: {
    node: async (hero) => hero.name === 'hulk' ? new ApolloError('Hulk is too powerful') : hero
  }
}
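For completeness, a sketch of the schema shape this relies on (the type and field names are inferred from the query above, not taken from the original post). Because node is nullable, an error returned from the HeroesEdge.node resolver is recorded under errors and only that entry becomes null, leaving the rest of the list intact:

// Sketch only: schema inferred from the query; adjust to your real type names.
const { gql } = require('apollo-server');

const typeDefs = gql`
  type Hero {
    name: String
  }
  type HeroesEdge {
    node: Hero         # nullable, so a per-node error does not null the whole list
    endCursor: String
  }
  type Query {
    heroes: [HeroesEdge]
  }
`;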
I have a collection "Owners" and I want to return a list of "Owner" documents matching a filter (any filter), plus the count of "Pet" documents from the "Pets" collection for each owner, except I don't want to count the dead pets (made-up example).
I need the returned documents to look exactly like an "Owner" document with the addition of the "petCount" field, because I'm using Java POJOs with the MongoDB Java driver.
I'm using AWS DocumentDB, which does not support $lookup with filters yet. If it did, I would use this and be done:
db.Owners.aggregate([
  { $match: { _id: UUID("b13e733d-2686-4266-a686-d3dae6501887") } },
  { $lookup: {
      from: 'Pets',
      as: 'pets',
      let: { ownerId: '$_id' },
      // join on ownerId and drop DEAD pets in one pass
      pipeline: [ { $match: { $expr: { $and: [ { $eq: ['$ownerId', '$$ownerId'] }, { $ne: ['$state', 'DEAD'] } ] } } } ]
  } },
  { $addFields: { petCount: { $size: '$pets' } } },
  { $project: { pets: 0 } }
]).pretty()
But since it doesn't, this is what I've got so far:
db.Owners.aggregate([
  { $match: { _id: { $in: [ UUID("cbb921f6-50f8-4b0c-833f-934998e5fbff") ] } } },
  { $lookup: { from: 'Pets', localField: '_id', foreignField: 'ownerId', as: 'pets' } },
  { $unwind: { path: '$pets', preserveNullAndEmptyArrays: true } },
  { $match: { 'pets.state': { $ne: 'DEAD' } } },
  { $group: {
      _id: '$_id',
      doc: { $first: '$$ROOT' },
      pets: { $push: '$pets' }
  } },
  { $addFields: { 'doc.petCount': { $size: '$pets' } } },
  { $replaceRoot: { newRoot: '$doc' } },
  { $project: { pets: 0 } }
]).pretty()
This works perfectly, except that if an Owner only has "DEAD" pets, the owner doesn't get returned, because all of the "document copies" get filtered out by the $match. I'd need the parent document to be returned with petCount = 0 when ALL of its pets are "DEAD". I cannot figure out how to do this.
Any ideas?
These are the supported operations for DocDB 4.0 https://docs.amazonaws.cn/en_us/documentdb/latest/developerguide/mongo-apis.html
EDIT: updated to use $filter, as $reduce is not supported by AWS DocumentDB.
You can use $filter to keep only the pets that are not DEAD in the looked-up array, then take the size of the remaining array.
Here is the Mongo playground for your reference.
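For reference, a sketch of what that $filter approach could look like (collection and field names are taken from the question, the UUID is the question's placeholder, and it assumes your DocumentDB version supports $addFields, $size and $filter):

db.Owners.aggregate([
  { $match: { _id: UUID("b13e733d-2686-4266-a686-d3dae6501887") } },
  // plain lookup (no pipeline), which DocumentDB supports
  { $lookup: { from: 'Pets', localField: '_id', foreignField: 'ownerId', as: 'pets' } },
  // count only the pets that are not DEAD; owners with only DEAD pets keep petCount: 0
  { $addFields: { petCount: { $size: { $filter: {
      input: '$pets',
      as: 'pet',
      cond: { $ne: ['$$pet.state', 'DEAD'] }
  } } } } },
  { $project: { pets: 0 } }
]).pretty()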
$reduce version
You can use $reduce in your aggregation pipeline to do a conditional sum for the state.
Here is the Mongo playground for your reference.
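A sketch of the $reduce variant, under the same assumptions as the $filter version above:

db.Owners.aggregate([
  { $match: { _id: UUID("b13e733d-2686-4266-a686-d3dae6501887") } },
  { $lookup: { from: 'Pets', localField: '_id', foreignField: 'ownerId', as: 'pets' } },
  // add 1 to the running total for every pet whose state is not DEAD
  { $addFields: { petCount: { $reduce: {
      input: '$pets',
      initialValue: 0,
      in: { $cond: [{ $ne: ['$$this.state', 'DEAD'] }, { $add: ['$$value', 1] }, '$$value'] }
  } } } },
  { $project: { pets: 0 } }
]).pretty()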
As of January 2022, Amazon DocumentDB added support for $reduce, so the solution posted above should work for you.
Reference.
Using BigQueryIO.write().withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED) along with DynamicDestinations, we can write to a dynamic table, and if the table does not exist it will be created from the TableSchema provided by DynamicDestinations.
I am not able to add clustering fields as part of the TableSchema model, because it does not have such a field.
How can we add clustering fields when using DynamicDestinations with a TableSchema?
The BigQuery API is one way to add clustering fields to a table.
Using this link you can test the API before writing your code:
function execute() {
  return gapi.client.bigquery.jobs.insert({
    "resource": {
      "configuration": {
        "query": {
          "clustering": {
            "fields": [
              "Field1",
              "Field2"
            ]
          },
          "query": "select 5",
          "destinationTable": {
            "datasetId": "Id1",
            "projectId": "Project1",
            "tableId": "T1"
          }
        }
      }
    }
  })
  .then(function(response) {
    // Handle the results here (response.result has the parsed body).
    console.log("Response", response);
  },
  function(err) { console.error("Execute error", err); });
}
And this is a JS example of how to manipulate the parameters:
static setConfiguration(params, configuration) {
  // To have a destination table we MUST have a tableId
  if (params.destinationTable && params.destinationTable.tableId) {
    configuration.query.destinationTable = params.destinationTable
  }
  if (params.clusteringFields) {
    configuration.query.clustering = {fields: params.clusteringFields}
  }
  if (params.timePartitioning) {
    configuration.query.timePartitioning = {
      type: 'DAY',
      field: params.timePartitioning
    }
  }
  if (params.writeDisposition) {
    configuration.query.writeDisposition = params.writeDisposition
  }
  if (params.queryPriority && params.queryPriority.toUpperCase() === "BATCH") {
    configuration.query.priority = "BATCH"
  }
  if (params.useCache === false) {
    configuration.query.useQueryCache = params.useCache
  }
  if (params.maxBillBytes) {
    configuration.query.maximumBytesBilled = params.maxBillBytes
  }
  if (params.maxBillTier) {
    configuration.query.maximumBillingTier = params.maxBillTier
  }
}
Now, after version 2.16.0, BigQueryIO provides an option to add clustering fields in dynamic destinations.
@Override
public TableDestination getTable(String eventName) {
  return new TableDestination(tableSpec,
      tableDescription, timePartitioning, clustering);
}
Notice that the 4th parameter is clustering, which you can use.
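As a rough sketch of how that fourth argument might be built (Beam 2.16+ assumed; the table spec, description and field names below are placeholders, not from the original answer):

// Sketch only: shows the body of a DynamicDestinations getTable override with clustering.
import com.google.api.services.bigquery.model.Clustering;
import com.google.api.services.bigquery.model.TimePartitioning;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;

import java.util.Arrays;

class ClusteredDestinationSketch {
  TableDestination getTable(String eventName) {
    TimePartitioning timePartitioning = new TimePartitioning().setType("DAY").setField("event_ts");
    Clustering clustering = new Clustering().setFields(Arrays.asList("Field1", "Field2"));
    // the 4th constructor argument is the clustering spec mentioned above
    return new TableDestination(
        "my-project:my_dataset.events_" + eventName, // placeholder table spec
        "events table",                              // placeholder description
        timePartitioning,
        clustering);
  }
}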
My data structure is like this:
firebase-endpoint/updates/<location_id>/<update_id>
Each location has many updates that Firebase adds as "array" elements.
How can I index on the "validFrom" property of each update if the location_id is unknown before insertion into the database?
{
  "rules": {
    "updates": {
      "<location_id>": { // WHAT IS THIS NODE SUPPOSED TO BE?
        ".indexOn": ["validFrom"]
      }
    }
  }
}
Data structure sample:
{
  "71a57e17cbfd0f524680221b9896d88c5ab400b3": {
    "-KBHwULMDZ4EL_B48-if": {
      "place_id": "71a57e17cbfd0f524680221b9896d88c5ab400b3",
      "name": "Gymbox Bank",
      "statusValueId": 2,
      "update_id": "NOT_SET",
      "user_id": "7017a0f5-04a7-498c-9ccd-c547728deffb",
      "validFrom": 1456311760554,
      "votes": 1
    }
  },
  "d9a02ab407543155d86b84901c69797cb534ac17": {
    "-KBHgPkz_buv7DzOFHbD": {
      "place_id": "d9a02ab407543155d86b84901c69797cb534ac17",
      "name": "The Ivy Chelsea Garden",
      "update_id": "NOT_SET",
      "user_id": "7017a0f5-04a7-498c-9ccd-c547728deffb",
      "validFrom": 1456307547374,
      "votes": 0
    }
  }
}
Update: I don't think this is a dupe of the said question, because that question doesn't also have a parent object with an unknown id; i.e. both <location_id> and <update_id> are free-form keys and cannot be set by hand.
I did a bit more digging in the docs and I think this should work:
{
  "rules": {
    "updates": {
      "$location_id": { // $location_id should act like a wildcard
        ".indexOn": ["validFrom"]
      }
    }
  }
}
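If it helps, a sketch of a client query that would use this index (namespaced Firebase JS SDK assumed; the location id is taken from the sample data above):

// Sketch only: ordering the updates of one location by validFrom, which the index above speeds up.
var locationId = '71a57e17cbfd0f524680221b9896d88c5ab400b3'; // placeholder from the sample data
firebase.database()
  .ref('updates/' + locationId)
  .orderByChild('validFrom')
  .startAt(1456307547374) // earliest validFrom of interest
  .once('value', function (snapshot) {
    snapshot.forEach(function (update) {
      console.log(update.key, update.val().validFrom);
    });
  });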