Unexpected behavior of ARRAY_SLICE in Cosmos DB SQL API

I have a Cosmos DB collection (called sample) containing the following documents:
[
{
"id": "id1",
"messages": [
{
"messageId": "message1",
"Text": "Value1"
},
{
"messageId": "message2",
"Text": "Value2"
}
]
},
{
"id": "id2",
"messages": [
{
"messageId": "message3",
"Text": "Value3"
},
{
"messageId": "message4",
"Text": "Value1"
}
]
},
{
"id": "id3",
"messages": [
{
"messageId": "message5",
"Text": "Value1"
},
{
"messageId": "message6",
"Text": "Value2"
}
]
},
{
"id": "id4",
"messages": [
{
"messageId": "message7",
"Text": "Value5"
},
{
"messageId": "message8",
"Text": "Value2"
}
]
}
]
I am trying to retrieve all documents that have messages and whose first message has the field "Text" = 'Value1'.
In this sample the documents with the ids 'id1' and 'id3' would be retrieved. Please notice that the document with id = 'id2' would not be retrieved,
since the Text of its first message is 'Value3'.
The collection, as mentioned, is called sample, and I am running the following query:
"select sample.id, sample.messages, ARRAY_SLICE(sample.messages, 0, 1)[0].Text as valueOfText from sample"
As you can see in the first two images, I retrieve all documents, and every one of them has the field "valueOfText" set to the Text of its first message, as expected.
Now, when I filter the collection (the third image), I retrieve no results at all.
Is this expected behavior?
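(The filtered query itself is only shown in the third screenshot; presumably it was something along these lines, with the same ARRAY_SLICE expression moved into the WHERE clause:)
SELECT sample.id, sample.messages, ARRAY_SLICE(sample.messages, 0, 1)[0].Text as valueOfText
FROM sample
WHERE ARRAY_SLICE(sample.messages, 0, 1)[0].Text = 'Value1'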

Following your SQL, I got the same results.
But why do you have to use ARRAY_SLICE? It is used to return a truncated array. Since your requirement is specific:
trying to retrieve all the Documents, having messages and the first
message has the field "Text"= 'Value1'
Just use this SQL:
SELECT c.id,c.messages,c.messages[0].Text as valueOfText FROM c
where c.messages[0].Text = 'Value1'
Output:
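(The output screenshot is not reproduced here; on the sample documents above, this query should return the documents with ids 'id1' and 'id3', roughly:)
[
{
"id": "id1",
"messages": [
{ "messageId": "message1", "Text": "Value1" },
{ "messageId": "message2", "Text": "Value2" }
],
"valueOfText": "Value1"
},
{
"id": "id3",
"messages": [
{ "messageId": "message5", "Text": "Value1" },
{ "messageId": "message6", "Text": "Value2" }
],
"valueOfText": "Value1"
}
]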

Related

Adobe Analytics - how to get multi-level breakdown data using the report API 2.0 in Java

I need help getting Adobe Analytics data when multiple IDs are passed for a multi-level breakdown using API 2.0.
I am getting first-level data for v124 with this request:
"metricContainer": {
"metrics": [
{
"columnId": "0",
"id": "metrics/event113",
"sort": "desc"
}
]
},
"dimension": "variables/evar124",
but I want to use the IDs returned in the above call's response in a second-level breakdown of v124 to get v125, like this:
"metricContainer": {
"metrics": [
{
"columnId": "0",
"id": "metrics/event113",
"sort": "desc",
"filters": [
"0"
]
}
],
"metricFilters": [
{
"id": "0",
"type": "breakdown",
"dimension": "variables/evar124",
"itemId": "2629267831"
},
{
"id": "1",
"type": "breakdown",
"dimension": "variables/evar124",
"itemId": "2629267832"
}
]
},
"dimension": "variables/evar125",
This always returns data only for the ID 2629267831 and not 2629267832.
I want to get data for thousands of IDs (returned from the first API call) in a single API call. What am I doing wrong?

Querying using query parameters in Postman

As a beginner with Postman, I am trying to use query parameters to search by filtering on keys. Consider the following content of a certain endpoint.
[
{
"id": 1,
"name": "The Russian",
"type": "fiction",
"available": true
},
{
"id": 2,
"name": "Just as I Am",
"type": "non-fiction",
"available": false
}
]
1st scenario:
Doing a GET on the endpoint with the syntax {{baseURL}}/books?type=fiction, I get
[
{
"id": 1,
"name": "The Russian",
"type": "fiction",
"available": true
}
]
which is correct.
2nd scenario:
Doing a GET on the endpoint with the syntax {{baseURL}}/books?id=1, I get
[
{
"id": 1,
"name": "The Russian",
"type": "fiction",
"available": true
},
{
"id": 2,
"name": "Just as I Am",
"type": "non-fiction",
"available": false
}
]
which is not filtered by id = 1. It displays the id = 2 item as well. I was expecting it to show only the item with id = 1.
Am I missing anything in my understanding of how to use query parameters?

How to match a string and ignore the case in Karate?

There is a case where one value is sometimes lower case and sometimes upper case. This is the response coming from an API, and we have to check that every field in the response is correct while ignoring some values. The error text in the response has one keyword in lower case in some scenarios and in upper case in others. How can we ignore just one keyword in a string so it does not have to match? I don't want to ignore the whole text (that works fine if I ignore the whole string), but is it possible to ignore one keyword only?
Scenario: string matching
* def test =
"""
{
"sourceType": "Error",
"id": "123456",
"type": "searchuser",
"total": 0,
"value": [
{
"details": "this is the user search case",
"source": {
"sourceType": "Error",
"id": "77200203043",
"issue": [
{
"severity": "high",
"code": "678",
"message": {
"text": "No matching User details found"
},
"errorCode": "ERROR401"
}
]
},
"user": {
"status": "active"
}
}
]
}
"""
* match test ==
"""
{
"sourceType": "Error",
"id": "#present",
"type": "searchuser",
"total": 0,
"value": [
{
"details": "#present",
"source": {
"sourceType": "Error",
"id": "#ignore",
"issue": [
{
"severity": "high",
"code": "678",
"message": {
"text": "No matching User details found"
},
"errorCode": "ERROR401"
}
]
},
"user": {
"status": "active"
}
}
]
}
"""
How can I ignore the case only for the word 'User' here? I tried the following, but it treats #ignore as a literal value.
"text": "No matching #ignore details found"
I'm not looking at your payload dump, but here is a simple example. Use karate.lowerCase():
* def response = { foo: 'Bar' }
* match karate.lowerCase(response) == { foo: 'bar' }
EDIT: you can also extract one value at a time and do a check only for that:
* def response = { foo: 'Bar' }
* def foo = response.foo
* match karate.lowerCase(foo) == 'bar'
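Applied to the payload in the question, that second approach might look like this (a sketch; the path assumes the response structure exactly as posted):
* def text = test.value[0].source.issue[0].message.text
* match karate.lowerCase(text) == 'no matching user details found'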

Use Athena SQL to get a value from JSON key

I need to get the email address from this 'facets' table I created from my Firehose logs (JSON).
I am using Athena to get particular pieces of information.
This is my output from 'facets' when I run:
SELECT * FROM "sampledb"."facets" limit 10
{email_channel={mail_event={mail={message_id=oadfosadu6237864237615, message_send_timestamp=1622696691764, from_address=abcd#jk.com, destination=[abcd#jk.com], headers_truncated=false, headers=[{name=From, value=abcd#jk.com}, {name=To, value=abcd#jk.com}, {name=MIME-Version, value=1.0}], common_headers={from=ghjk#li.com, to=[abcd#jk.com]}}, send={}, rendering_failure=null}}}
Assuming you have one column which stores JSON in the provided format, you can use json_extract with the needed paths (and maybe some casts):
with dataset1 as (
select * from (values(JSON
'{
"email_channel": {
"mail_event": {
"mail": {
"message_id": "oadfosadu6237864237615",
"message_send_timestamp": 1622696691764,
"from_address": "abcd#jk.com",
"destination": [
"abcd#jk.com"
],
"headers_truncated": false,
"headers": [
{
"name": "From",
"value": "abcd#jk.com"
},
{
"name": "To",
"value": "abcd#jk.com"
},
{
"name": "MIME-Version",
"value": "1.0"
}
],
"common_headers": {
"from": "ghjk#li.com",
"to": [
"abcd#jk.com"
]
}
},
"send": {},
"rendering_failure": null
}
}
}')) as facets(facet))
select
json_extract(facet, '$.email_channel.mail_event.mail.from_address') mail_from,
CAST(json_extract(facet, '$.email_channel.mail_event.mail.destination') AS ARRAY(VARCHAR)) destination
from dataset1
And the output:
mail_from      destination
"abcd#jk.com"  {abcd#jk.com}
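A related option: json_extract returns a JSON value (hence the quotes around mail_from in the output above); if a plain varchar is wanted, json_extract_scalar can be used for the scalar field:
select
json_extract_scalar(facet, '$.email_channel.mail_event.mail.from_address') mail_from,
CAST(json_extract(facet, '$.email_channel.mail_event.mail.destination') AS ARRAY(VARCHAR)) destination
from dataset1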

Apache Nifi: UpdateRecord replace child values

I'm trying to use the UpdateRecord 1.9.0 processor to modify a JSON document, but it does not replace the values as I want.
This is the source message:
{
"type": "A",
"ids": [{
"id": "1",
"value": "abc"
}, {
"id": "2",
"value": "def"
}, {
"id": "3",
"value": "ghi"
}
]
}
and this is the wanted output:
{
"ids": [{
"userId": "1"
}, {
"userId": "2"
}, {
"userId": "3"
}
]
}
I have configured the processor as follows (screenshots of the processor config, reader, schema registry, and writer not reproduced here).
And it works: the output is a JSON without the field 'type', and the ids have the field 'userId' instead of 'id' and 'value'.
To fill the value of userId, I defined the replacement strategy and the property to replace (screenshot not reproduced here).
But the output is wrong. The userId is always filled with the id of the last element in the array:
{
"ids": [{
"userId": "3"
}, {
"userId": "3"
}, {
"userId": "3"
}
]
}
I think the value of the expression is OK, because if I try to replace only one record it works fine (/ids[0]/userId with ..id).
The NiFi docs have a really similar example (example 3):
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.7.1/org.apache.nifi.processors.standard.UpdateRecord/additionalDetails.html
But it does not work for me.
What am I doing wrong?
thanks
Finally, I have used the JoltJSONTransform processor instead of UpdateRecord. Template:
[
{
"operation": "shift",
"spec": {
"ids":{
"*":{
"id": "ids[&1].userId"
}
}
}
}
]
Easier than UpdateRecord. In the shift spec, the "*" matches each index of the ids array and "&1" refers back to that index, so the id of each element is written to userId at the same position, producing the wanted output above.