My data structure is like this:
firebase-endpoint/updates/<location_id>/<update_id>
each location has many updates that Firebase adds as "array" elements.
How can I index on the "validFrom" property of each update if the location_id is unknown before insertion into the database?
{
"rules": {
"updates": {
"<location_id>": { // WHAT IS THIS NODE SUPPOSED TO BE?
".indexOn": ["validFrom"]
}
}
}
}
Data structure sample:
{
"71a57e17cbfd0f524680221b9896d88c5ab400b3": {
"-KBHwULMDZ4EL_B48-if": {
"place_id": "71a57e17cbfd0f524680221b9896d88c5ab400b3",
"name": "Gymbox Bank",
"statusValueId": 2,
"update_id": "NOT_SET",
"user_id": "7017a0f5-04a7-498c-9ccd-c547728deffb",
"validFrom": 1456311760554,
"votes": 1
}
},
"d9a02ab407543155d86b84901c69797cb534ac17": {
"-KBHgPkz_buv7DzOFHbD": {
"place_id": "d9a02ab407543155d86b84901c69797cb534ac17",
"name": "The Ivy Chelsea Garden",
"update_id": "NOT_SET",
"user_id": "7017a0f5-04a7-498c-9ccd-c547728deffb",
"validFrom": 1456307547374,
"votes": 0
}
}
}
Update: I don't think this is a duplicate of the linked question, because that question doesn't also have a parent object with an unknown id, i.e. both <location_id> and <update_id> are free-form keys and cannot be set by hand.
I did a bit more digging in the docs and I think this should work:
{
"rules": {
"updates": {
"$location_id": { // $location_id should act like a wild card
".indexOn": ["validFrom"]
}
}
}
}
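For context, the query this index is meant to back would look something like the sketch below. It assumes the Firebase JavaScript Web SDK; the locationId variable and the startAt timestamp are illustrative and not part of the original question.

// Query a single location's updates ordered by validFrom (uses the ".indexOn": ["validFrom"] rule above)
var ref = firebase.database().ref('updates/' + locationId);
ref.orderByChild('validFrom')
  .startAt(1456300000000) // illustrative cutoff timestamp
  .once('value', function (snapshot) {
    snapshot.forEach(function (child) {
      console.log(child.key, child.val().validFrom);
    });
  });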
I am using the following query:
query myOrgRepos {
organization(login: "COMPANY_NAME") {
repositories(first: 100) {
edges {
node {
name
defaultBranchRef {
target {
... on Commit {
history(after: "2021-01-01T23:59:00Z", before: "2023-02-06T23:59:00Z", author: { emails: "USER_EMAIL" }) {
edges {
node {
oid
}
}
}
}
}
}
}
}
}
}
}
But even with accurate names for the organization and email, I am persistently getting the following error for every repo.
{
"type": "INVALID_CURSOR_ARGUMENTS",
"path": [
"organization",
"repositories",
"edges",
20,
"node",
"defaultBranchRef",
"target",
"history"
],
"locations": [
{
"line": 10,
"column": 29
}
],
"message": "`2021-01-01T23:59:00Z` does not appear to be a valid cursor."
},
If I remove the after field, it works just fine. However, I kind of need it. According to all the docs that I have read, both after and before take the same timestamp. I can't tell where I am going wrong here.
I have tried:
to narrow the gap between before and after
return only a single repository
remove after (works fine without it)
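For comparison, after and before on a GraphQL connection expect opaque pagination cursors, while GitHub's commit history field also accepts since and until GitTimestamp arguments for date filtering. Below is a hedged sketch of the same query rewritten under that assumption, with the COMPANY_NAME and USER_EMAIL placeholders left untouched.

query myOrgRepos {
  organization(login: "COMPANY_NAME") {
    repositories(first: 100) {
      edges {
        node {
          name
          defaultBranchRef {
            target {
              ... on Commit {
                # since/until take GitTimestamp values; after/before stay reserved for cursors
                history(since: "2021-01-01T23:59:00Z", until: "2023-02-06T23:59:00Z", author: { emails: "USER_EMAIL" }) {
                  edges {
                    node {
                      oid
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}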
I tried following the official Shopify documentation for retrieving ProductMedia.
My Query looks like this:
query getProductMediaById($id: ID!) {
product(id: $id) {
id
media(first: 10) {
edges {
node {
mediaContentType
alt
...mediaFieldsByType
}
}
}
}
}
fragment mediaFieldsByType on Media {
... on ExternalVideo {
id
embeddedUrl
}
... on MediaImage {
image {
...imageAttributes
}
}
... on Model3d {
sources {
url
mimeType
format
filesize
}
}
... on Video {
sources {
url
mimeType
format
height
width
}
}
}
fragment imageAttributes on Image {
altText
url
}
The only place where I diverged from the official documentation is that I moved the image attributes into a separate fragment for code reuse.
But when I try to execute the query I get the following response:
{
"data": {
"product": {
"__typename": "Product",
"id": "Z2lkOi8vc2hvcGlmeS9Qcm9kdWN0LzY3NjcyOTczMzEzMDU=",
"media": {
"__typename": "MediaConnection",
"edges": [
{
"__typename": "MediaEdge",
"node": {
"__typename": "MediaImage",
"mediaContentType": "IMAGE",
"alt": ""
}
}
]
}
}
},
"loading": false,
"networkStatus": 7
}
Or, to put it in words: my response doesn't contain any information from the mediaFieldsByType fragment.
Any idea what I'm doing wrong?
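One hedged guess, judging only from the loading and networkStatus fields in the response: this looks like Apollo Client, and Apollo's cache drops fields selected through fragments on an interface such as Media unless it knows which concrete types implement that interface. The sketch below shows that configuration for Apollo Client 3; the shop URL is a made-up placeholder, and the type list simply mirrors the fragment above.

import { ApolloClient, InMemoryCache } from "@apollo/client";

// Tell the cache which concrete types implement the Media interface,
// so spreads like "... on MediaImage" are matched instead of discarded.
const cache = new InMemoryCache({
  possibleTypes: {
    Media: ["ExternalVideo", "MediaImage", "Model3d", "Video"],
  },
});

const client = new ApolloClient({
  uri: "https://YOUR_SHOP.myshopify.com/admin/api/2023-01/graphql.json", // placeholder endpoint
  cache,
});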
I have a collection "Owners" and I want to return a list of "Owner" documents matching a filter (any filter), plus the count of "Pet" documents from the "Pets" collection for each owner, except that I don't want to count the dead pets (made-up example).
I need the returned documents to look exactly like an "Owner" document with the addition of the "petCount" field, because I'm using Java POJOs with the MongoDB Java driver.
I'm using AWS DocumentDB, which does not support $lookup with a pipeline (filters) yet. If it did, I would use this and be done:
db.Owners.aggregate( [
{ $match: {_id: UUID("b13e733d-2686-4266-a686-d3dae6501887")} },
{ $lookup: { from: 'Pets', as: 'pets', 'let': { ownerId: '$_id' }, pipeline: [ { $match: { $expr: { $and: [ { $eq: ['$ownerId', '$$ownerId'] }, { $ne: ['$state', 'DEAD'] } ] } } } ] } },
{ $addFields: { petCount: { $size: '$pets' } } },
{ $project: { pets: 0 } }
]).pretty()
But since it doesn't, this is what I've got so far:
db.Owners.aggregate( [
{ $match: {_id: { $in: [ UUID("cbb921f6-50f8-4b0c-833f-934998e5fbff") ] } } },
{ $lookup: { from: 'Pets', localField: '_id', foreignField: 'ownerId', as: 'pets' } },
{ $unwind: { path: '$pets', preserveNullAndEmptyArrays: true } },
{ $match: { 'pets.state': { $ne: 'DEAD' } } },
{ "$group": {
"_id": "$_id",
"doc": { "$first": "$$ROOT" },
"pets": { "$push": "$pets" }
}
},
{ $addFields: { "doc.petCount": { $size: '$pets' } } },
{ $replaceRoot: { "newRoot": "$doc" } },
{ $project: { pets: 0 } }
]).pretty()
This works perfectly, except that if an Owner only has "DEAD" pets, the owner doesn't get returned at all, because all of the unwound "document copies" get filtered out by the $match. I need the parent document to be returned with petCount = 0 when ALL of its pets are "DEAD". I cannot figure out how to do this.
Any ideas?
These are the supported operations for DocDB 4.0 https://docs.amazonaws.cn/en_us/documentdb/latest/developerguide/mongo-apis.html
EDIT: updated to use $filter, since $reduce is not supported by AWS DocumentDB.
You can use $filter to keep only the pets that are not DEAD in the lookup array, then take the size of the remaining array.
Here is the Mongo playground for your reference.
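A minimal sketch of that $filter approach, reusing the collection, UUID, and field names from the pipelines above (written for the mongo shell and not tested against DocumentDB, so treat it as a starting point):

db.Owners.aggregate( [
    { $match: { _id: { $in: [ UUID("cbb921f6-50f8-4b0c-833f-934998e5fbff") ] } } },
    { $lookup: { from: 'Pets', localField: '_id', foreignField: 'ownerId', as: 'pets' } },
    // Count only the pets whose state is not DEAD; an owner whose pets are all DEAD keeps petCount 0
    { $addFields: {
        petCount: {
            $size: {
                $filter: {
                    input: '$pets',
                    as: 'pet',
                    cond: { $ne: ['$$pet.state', 'DEAD'] }
                }
            }
        }
    } },
    { $project: { pets: 0 } }
] ).pretty()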
$reduce version
You can use $reduce in your aggregation pipeline to do a conditional sum based on the state.
Here is Mongo playground for your reference.
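And a sketch of the $reduce variant under the same assumptions; it adds 1 to the running total for every pet whose state is not DEAD:

db.Owners.aggregate( [
    { $match: { _id: { $in: [ UUID("cbb921f6-50f8-4b0c-833f-934998e5fbff") ] } } },
    { $lookup: { from: 'Pets', localField: '_id', foreignField: 'ownerId', as: 'pets' } },
    { $addFields: {
        petCount: {
            $reduce: {
                input: '$pets',
                initialValue: 0,
                in: { $add: [ '$$value', { $cond: [ { $ne: ['$$this.state', 'DEAD'] }, 1, 0 ] } ] }
            }
        }
    } },
    { $project: { pets: 0 } }
] ).pretty()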
As of January 2022, Amazon DocumentDB has added support for $reduce, so the $reduce solution above should work for you.
Reference.
I have a JSON document with the following structure, and I am looking to remove certain parts of it using either a JSON-LD context or JSON-LD framing. In this situation I need only a specific type (type2) in the output, along with the parent.
{
"#context":{
"#vocab":"http://xyz.abc.com/01#",
"sdf":"http://xyz.abc.com/01#",
"#base":"http://xyz.abc.com/01#",
"rowid":"#id",
"values":"#nest",
"type":"#type"
},
"#id":"sdf:parent",
"type":"type1",
"values":[
{
"value":984657
}
],
"rows":[
{
"rowid":"5637220",
"type":"type2",
"values":[
{
"value":"i am type 2"
}
]
},
{
"rowid":"9847589",
"type":"type3",
"values":[
{
"value":"I am type 3"
}
]
}
]
}
So the output should look somewhat like this, with the child either embedded in the parent or kept separate, as below, and linked with a hasChild predicate:
{
"#graph": [
{
"#id": "http://xyz.abc.com/01#parent",
"#type": "http://xyz.abc.com/01#type1",
"http://xyz.abc.com/01#value": 984657 ,
"hasChild" : "http://xyz.abc.com/5637220"
},
{
"#id": "http://xyz.abc.com/5637220",
"#type": "http://xyz.abc.com/01#type2",
"http://xyz.abc.com/01#value": "i am type 2"
}
]
}
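I have not run this through a framing processor, so take it as a rough sketch only: a frame along the lines below is the usual way to express "match the type1 parent and embed only its type2 rows". The framing flags (@explicit, @omitDefault) and the parent/child linking may need adjusting to get exactly the output shown above.

{
  "@context": {
    "@vocab": "http://xyz.abc.com/01#",
    "@base": "http://xyz.abc.com/01#"
  },
  "@type": "type1",
  "rows": {
    "@type": "type2"
  }
}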
I'd like to specify an analyzer, name it, and use that name in a mapping while creating an index. I'm lost; my Elasticsearch instance always returns an error message.
This is, roughly, what I'd like to do:
"settings": {
"mappings": {
"alfedoc": {
"properties": {
"id": { "type": "string" },
"alfefield": { "type": "string", "analyzer": "alfeanalyzer" }
}
}
},
"analysis": {
"analyzer": {
"alfeanalyzer": {
"type": "pattern",
"pattern":"\\s+"
}
}
}
}
But this does not seem to work; the Elasticsearch instance always returns an error like:
MapperParsingException[mapping [alfedoc]]; nested: MapperParsingException[Analyzer [alfeanalyzer] not found for field [alfefield]];
I tried putting the "analysis" branch of the dictionary in several places (inside the mapping, etc.), but to no avail. I guess a complete working example (which I couldn't find up to now) would help me along as well. Probably I'm missing something rather basic.
"analysis" goes in the "settings" block, which goes either before or after the "mappings" block when creating an index.
"settings": {
"analysis": {
"analyzer": {
"alfeanalyzer": {
"type": "pattern",
"pattern": "\\s+"
}
}
}
},
"mappings": {
"alfedoc": { ... }
}
Here's a good, complete example: Example 1
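Putting the two blocks together, a complete create-index request would look roughly like the following. The index name alfeindex is made up for the example, and the string field type assumes a pre-5.x Elasticsearch, matching the mapping in the question.

PUT /alfeindex
{
  "settings": {
    "analysis": {
      "analyzer": {
        "alfeanalyzer": {
          "type": "pattern",
          "pattern": "\\s+"
        }
      }
    }
  },
  "mappings": {
    "alfedoc": {
      "properties": {
        "id": { "type": "string" },
        "alfefield": { "type": "string", "analyzer": "alfeanalyzer" }
      }
    }
  }
}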