How do I query referenced document fields within an array of objects in Sanity with GROQ?

I have a document with an array of objects, and one of the fields on each object is a reference to another document. The following query returns only the reference itself (_ref and _type), but I need other fields from the referenced documents.
// GROQ query
*[slug.current == $slug]{
  title,
  slug,
  objectArray
}
This results in:
"result": [
0: {
"title": "Test Document"
"slug": {
"_type": "slug"
"current": "test-document"
}
"objectArray": [
0: {...}
1: {
"_key": "583ec1dee738"
"_type": "documentObject"
"objectTitle" : "Test Object"
"documentReference": {
"_ref": "2f9b93b4-4924-45f2-af72-a38f7d9ebeb4"
"_type": "reference"
}
"booleanField": true
}
]
}
]
documentReference has its own set of fields (e.g. title) in the schema, which I need returned in my query.
How do I do this?
I have looked at the Sanity documentation on joins and object projections, but I cannot get the syntax right when the reference is inside an array of objects.

You need to do a join on the reference:
*[slug.current == $slug] {
  title,
  slug,
  objectArray[] {
    documentReference->
  }
}
The syntax objectArray[] can be thought of as "for each element", and -> does a join that looks up the referenced document.
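If you also want to keep the other fields on each array item, or to pull only specific fields from the referenced document, you can spell the projection out. A sketch, assuming the referenced document has a title field as mentioned in the question:

*[slug.current == $slug] {
  title,
  slug,
  objectArray[] {
    ...,
    documentReference->{
      _id,
      title
    }
  }
}

Here ... keeps the object's own fields (objectTitle, booleanField, etc.), and the projection after -> limits what is returned from the referenced document.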

Related

How to select a single item and get its relations in FaunaDB?

I have two collections with data in the following format:
{
  "ref": Ref(Collection("Leads"), "267824207030650373"),
  "ts": 1591675917565000,
  "data": {
    "notes": "voicemail ",
    "source": "key-name",
    "name": "Glenn"
  }
}
{
  "ref": Ref(Collection("Sources"), "266777079541924357"),
  "ts": 1590677298970000,
  "data": {
    "key": "key-name",
    "value": "Google Ads"
  }
}
I want to query the Leads collection and retrieve the corresponding Sources document in a single query.
I came up with the following query to try to use an index, but I couldn't get it to run:
Let(
  {
    data: Get(Ref(Collection('Leads'), '267824207030650373'))
  },
  {
    data: Select(['data'], Var('data')),
    source: q.Lambda('data',
      Match(Index('LeadSourceByKey'), Get(Select(['source'], Var('data'))))
    )
  }
)
Is there an easy way to retrieve the Sources document?
What you are looking for is the following query, which I have broken down into multiple steps:
Let(
  {
    // Get the Lead document
    lead: Get(Ref(Collection("Leads"), "269038063157510661")),
    // Get the source key out of the lead document
    sourceKey: Select(["data", "source"], Var("lead")),
    // Use the index to get the values via Match
    sourceValues: Paginate(Match(Index("LeadSourceValuesByKey"), Var("sourceKey")))
  },
  {
    lead: Var("lead"),
    sourceValues: Var("sourceValues")
  }
)
The result is:
{
  lead: {
    ref: Ref(Collection("Leads"), "269038063157510661"),
    ts: 1592833540970000,
    data: {
      notes: "voicemail ",
      source: "key-name",
      name: "Glenn"
    }
  },
  sourceValues: {
    data: [["key-name", "Google Ads"]]
  }
}
sourceValues is an array of arrays: the index is defined to return two items per entry (the key and the value), and an index always returns its values as an array. Since the Match could have returned multiple entries if the relation were not one-to-one, the page of results wraps those value arrays in another array.
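For reference, a sketch of what such an index definition could look like; the field paths are assumptions based on the documents shown above:

CreateIndex({
  name: "LeadSourceValuesByKey",
  source: Collection("Sources"),
  terms: [{ field: ["data", "key"] }],
  values: [{ field: ["data", "key"] }, { field: ["data", "value"] }]
})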
This is only one approach; you could also make the index return a reference and use Map/Get to fetch the actual documents, as explained on the forum.
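A minimal sketch of that alternative, run inside the same Let as above and assuming a hypothetical index LeadSourceRefsByKey whose values are the Sources refs:

Map(
  Paginate(Match(Index("LeadSourceRefsByKey"), Var("sourceKey"))),
  Lambda("ref", Get(Var("ref")))
)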
However, I assume you asked the same question here. Although I applaud asking questions on Stack Overflow vs Slack or even our own forum, please do not post the same question everywhere without linking to the others. It makes many people spend a lot of time on a question that has already been answered elsewhere.
You could also change the Leads document and store a Ref to the Sources document in source:
{
  "ref": Ref(Collection("Leads"), "267824207030650373"),
  "ts": 1591675917565000,
  "data": {
    "notes": "voicemail ",
    "source": Ref(Collection("Sources"), "266777079541924357"),
    "name": "Glenn"
  }
}
{
  "ref": Ref(Collection("Sources"), "266777079541924357"),
  "ts": 1590677298970000,
  "data": {
    "key": "key-name",
    "value": "Google Ads"
  }
}
And then query this way:
Let(
  {
    lead: Select(['data'], Get(Ref(Collection('Leads'), '267824207030650373'))),
    source: Select(['source'], Var('lead'))
  },
  {
    data: Var('lead'),
    source: Select(['data'], Get(Var('source')))
  }
)
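With the Ref stored directly, the join is a single Get and no index is needed for the lookup.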

MarkLogic - XPath on a JSON document

MarkLogic Version: 9.0-6.2
I am trying to apply an XPath in extract-document-data (using query options) to the JSON document shown below. I need to keep only the "Channel" properties whose nested "OptIn" property has a value of "True".
{
  "Category": {
    "Name": "Severe Weather",
    "Channels": [
      {
        "Channel": {
          "Name": "Email",
          "OptIn": "True"
        }
      },
      {
        "Channel": {
          "Name": "Text",
          "OptIn": "False"
        }
      }
    ]
  }
}
I tried the code below,
'<extract-document-data selected="include">' +
'<extract-path>//*[OptIn="True"]/../..</extract-path>' +
'</extract-document-data>' +
which only pulls the "Channel" property, as shown below:
[
  {
    "Channel": {
      "Name": "Email",
      "OptIn": "True"
    }
  }
]
But what I need is to pull the parent "Category" property while filtering out the Channels that have an OptIn value of "False".
Any pointers?
If I understand correctly, you'd like to extract 'Category', but only with those 'Channel's whose 'OptIn' equals 'True', right?
Extract-document-data is not advanced enough for that. Your best bet is to extract the entire Categories that have at least one OptIn equalling 'True' (//Category[.//OptIn = 'True']), and use a REST transform on the search response to trim down the unwanted Channels.
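Plugged into the question's options string, that extract-path would look roughly like this (keeping the "True" capitalization used in the documents):

'<extract-document-data selected="include">' +
'<extract-path>//Category[.//OptIn = "True"]</extract-path>' +
'</extract-document-data>' +

Note that this still extracts each matching Category with all of its Channels; trimming away the individual Channels whose OptIn is "False" would happen in the REST transform.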
HTH!

Can Elasticsearch make suggestions for mapping?

Playing around with Elasticsearch, I added a document to my index called "pets" that looks like this:
{
  "name": "Piper",
  "type": "dog"
}
Then I added a second document:
{
  "name": "Max",
  "type": "dog",
  "breed": "Scottish Terrier"
}
Now, I understand that the mapping of my "pets" index is initially created based on my first document (unless I define a mapping at some point). However, I am curious to know whether ES can suggest a mapping based on the existing data (like MySQL's "Propose table structure") or maybe update the mapping automatically.
Yes, Elasticsearch will automatically update the mapping.
Sometimes the language in the Elasticsearch documentation makes it sound like once the mapping is set, it cannot be changed. This is only true for existing fields. Any additional fields will be automatically assigned a type and added to the mapping.
Remember you can always check the mapping of an index with the get mapping API:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-get-mapping.html
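A request against the my_index name used in the mappings below would be:

GET /my_index/_mapping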
For example, with the documents you have above, after your first "pet" document the mapping is:
{
  "my_index": {
    "mappings": {
      "pet": {
        "properties": {
          "name": {
            "type": "string"
          },
          "type": {
            "type": "string"
          }
        }
      }
    }
  }
}
And after the second "pet" document, your mapping is:
{
  "my_index": {
    "mappings": {
      "pet": {
        "properties": {
          "breed": {
            "type": "string"
          },
          "name": {
            "type": "string"
          },
          "type": {
            "type": "string"
          }
        }
      }
    }
  }
}
I'm not familiar with MySQL's propose table structure, so I can't comment on that...

How do I sort ElasticSearch when it's empty?

Sometimes I have nothing in the index, and sometimes I have some documents; that's just the nature of my application. When the index does contain documents, I sort by "final_score" descending. My query looks like this:
GET /_search
{
  "query": {
    "match_all": {}
  },
  "sort": [
    { "final_score": "desc" }
  ]
}
However, this query breaks when there are 0 documents in the index; I would have to remove the sort to make it work.
How can I make this query work with any number of documents (0 or more)?
If the field is not in the mapping and you ask Elasticsearch to sort by it, the query fails.
So define a mapping for final_score up front; then the sort will not throw an error even when nothing has been indexed yet.
Example:
POST http://localhost:9200/index/type/_mapping
{
  "type": {
    "properties": {
      "final_score": {
        "type": "integer"
      }
    }
  }
}
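With that mapping in place, the sort query from the question should no longer error on an empty index; it simply returns an empty list of hits until documents are added.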

Query Mongo collection for all documents with empty array nested in an array

I have documents that look like this in a collection called movies:
{
  "_id": ObjectId("51c272623021490007000001"),
  "movies": [
    {
      "name": "Booty Call",
      "play_times": []
    },
    {
      "name": "Bulletproof",
      "play_times": [{...}, {...}]
    }
  ]
}
I would like to query for documents where "play_times" is not empty or null. Basically I only care about movies with play times.
If you want to filter the individual array elements within a document (i.e. return only the movies that have play times), that is not possible with a plain find, AFAIK. If you want the documents in which at least one movie has a non-empty play_times, note that $size only matches an exact array length, so it cannot be combined with $gt; instead you can check that the array's first element exists:
"movies.play_times.0": { $exists: true }
To check whether a field exists at all, there is the $exists operator.
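A minimal mongo shell sketch of that check, using the movies collection name from the question:

db.movies.find({ "movies.play_times.0": { $exists: true } })

Because play_times.0 exists only when play_times is a non-empty array, this also excludes documents where play_times is null or missing. It returns whole documents, though; it does not trim the movies array down to only the entries that have play times.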