I have the following sample JSON data.
{
"data": {
"type": "ABC",
"id": "17495500314",
"attributes": {
[!["event": "update",
"gps_vali][1]][1]d": true,
"gps": {
"distance_diff": 6.48,
"total_distance": 848.6
},
"hdop": 79,
"fuel_level": 46.8,
"total_fuel_used": 60443.9,
"location": {
"latitude": 411.372618,
"longitude": -1.254931,
"relative_position": {
"distance": "37",
}
},
"idle_periods": []
},
"relationships": {
"assets": {
"data": [
{
"type": "ABCDFTTG",
"id": "1589799143500003",
"attributes": {
"external_id": "ABCDFTTG",
"hardware_id": "ABCDFTTG"
}
}
]
},
"devices": {
"data": [
{
"type": "ABCDFTTG",
"id": "1585231172900341",
"attributes": {
"serial": "5572016191"
}
},
{
"type": "tablet",
"id": "1587893062600175",
"attributes": {
"serial": "ABCDFTTG"
}
}
]
},
"users": {
"data": [
{
"type": "user",
"id": "ABCDFTTG",
"attributes": {
"external_id": "ABCDFTTG"
}
}
]
}
}
},
"meta": {
"message_id": "11eb-8c75-0b3f87aedbb5",
"consumer_version": "1.2.0",
"origin_version": null,
"timestamp": "2021-06-14T17:42:29Z"
}
}
I want only one row instead of these two. Here is my Kusto query, which parses the JSON data into table columns.
Test
|where messageId =="123"
//|mv-expand message=message.data.attributes
|mv-expand message
|mv-expand Value=message.data.relationships.assets.['data']
|mv-expand value_devices=message.data.relationships.devices.['data']
|mv-expand value_user=message.data.relationships.users.['data']
| project type=message.data.type,id=message.data.id,
event=tostring(message.data.attributes.event),
logged_at=tostring(message.data.attributes.logged_at),
distance=toint(message.data.attributes.location.relative_position.distance),
// Value=message.data.relationships.assets.['data'],//.['data']
type_asset=Value.type,asset_id=Value.id,
device_type=value_devices.type,device_id=value_devices.id,
device_attr_serial=value_devices.attributes.serial,
user_type=value_user.type,user_id=value_user.id,
user_external_id=value_user.attributes.external_id
The duplicate row appeared after I added the users tag. This tag is an array, so how do I handle an array that contains only a single id?
I have parsed my JSON data and got the following output.
The expected output should look like the following (check the device_type and device_id columns).
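For what it's worth, one common way to avoid the cross-product rows that mv-expand produces is to skip expanding single-element arrays and index into them instead. A minimal sketch, assuming the users array always holds exactly one element:
Test
| where messageId == "123"
| mv-expand message
| mv-expand Value = message.data.relationships.assets.['data']
| mv-expand value_devices = message.data.relationships.devices.['data']
| project type = message.data.type, id = message.data.id,
    event = tostring(message.data.attributes.event),
    distance = toint(message.data.attributes.location.relative_position.distance),
    type_asset = Value.type, asset_id = Value.id,
    device_type = value_devices.type, device_id = value_devices.id,
    device_attr_serial = value_devices.attributes.serial,
    // index the single-element array instead of mv-expanding it: no extra row
    user_type = message.data.relationships.users.['data'][0].type,
    user_id = message.data.relationships.users.['data'][0].id,
    user_external_id = message.data.relationships.users.['data'][0].attributes.external_id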
This is my API response. I want to extract the value of the Id based on the displayNumber. The display number is given in a list of values in an Examples table or CSV file.
{
"Acc": [
{
"Id": "2b765368696b3441673633325",
"code": "SGD",
"val": 406030.83,
"displayNumber": "8957",
"curval": 406030.83
},
{
"Id": "4e676269685a73787472355776764b50717a4",
"code": "GBP",
"val": 22.68,
"displayNumber": "1881",
"curval": 22.68
},
{
"Id": "526e666d65366e67626244626e6266467",
"code": "SGD",
"val": 38404.44,
"displayNumber": "1004",
"curval": 38404.44
}
],
"combinations": [
{
"displayNumber": "3444",
"Code": "SGD",
"Ids": [
{
"Id": "2b765368696b34416736333254462"
},
{
"Id": "4e676269685a7378747235577"
},
{
"Id": "526e666d65366e6762624d"
}
],
"destId": "3678434b643530456962435272d",
"curval": 3.85
},
{
"displayNumber": "8957",
"code": "SGD",
"Ids": [
{
"Id": "3678434b6435304569624357"
},
{
"Id": "4e676269685a73787472355776764b50717a4"
},
{
"Id": "526e666d65366e67626244626e62664679"
}
],
"destId": "2b765368696b344167363332544",
"curval": 406030.83
},
{
"displayNumber": "1881",
"code": "GBP",
"Ids": [
{
"Id": "3678434b643530456962435275"
},
{
"Id": "2b765368696b3441673"
},
{
"Id": "526e666d65366e67626244626e626"
}
],
"destId": "4e676269685a7378747d",
"curval": 22.68
}
]
}
Examples
|displayNumber|
|8957|
|3498|
|4943|
The expression below works if I hard-code the value:
* def tempid = response
* def fromAccount = get[0] tempid.Acc[?(#.displayNumber==8957)].Id
I'm not sure how to make this comparison value (e.g. 1881) a variable that can be read from Examples (scenario outline) or a CSV file. I went through the documentation, which recommends Karate filters or maps, but I was not able to follow how to implement them.
You almost got it :-). This is the way you want to solve it:
Scenario Outline: Testing SO question for Navneeth
* def tempid = response
* def fromAccount = get[0] tempid.Acc[?(#.displayNumber == <displayNumber>)]
* print fromAccount
Examples:
|displayNumber|
|8957|
|1881|
|3444|
You need to pass the placeholder from the Examples table as:
'<displayNumber>'
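Since the question also asks about CSV files: Karate's dynamic scenario outlines can read the Examples rows straight from a CSV file. A sketch under that assumption, using a hypothetical displayNumbers.csv whose header row contains a displayNumber column:
Scenario Outline: lookup account by display number from csv
* def tempid = response
* def fromAccount = get[0] tempid.Acc[?(#.displayNumber == <displayNumber>)]
* print fromAccount

Examples:
| read('displayNumbers.csv') |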
Goal: verify that the check value is correct for the 123S and 123O entries in the API response.
First check the value at x.details[0].user.school.name[0].codeable.text; if it is 123S, then check that x.details[0].data.check is abc.
Then check whether the value at x.details[1].user.school.name[0].codeable.text is 123O; if so, check that x.details[1].data.check is xyz.
The order of the array elements changes between responses: the first element is not guaranteed to be 123S; sometimes the API returns 123O first.
Sample JSON.
{
"type": "1",
"array": 2,
"details": [
{
"path": "path",
"user": {
"school": {
"name": [
{
"value": "this is school",
"codeable": {
"details": [
{
"hello": "yty",
"condition": "check1"
}
],
"text": "123S"
}
}
]
},
"sample": "test1",
"id": "22222"
},
"data": {
"check": "abc"
}
},
{
"path": "path",
"user": {
"school": {
"name": [
{
"value": "this is school",
"codeable": {
"details": [
{
"hello": "def",
"condition": "check2"
}
],
"text": "123O"
}
}
]
},
"sample": "test",
"id": "11111"
},
"data": {
"check": "xyz"
}
}
]
}
This is how I did it in Postman, but how do I replicate the same in Karate?
var jsonData = pm.response.json();
pm.test("Body matches string", function () {
for(var i=0;i<jsonData.details.length;i++){
if(jsonData.details[i].user.school.name[0].codeable.text == '123S')
{
pm.expect(jsonData.details[i].data.check).to.equal('abc');
}
if(jsonData.details[i].user.school.name[0].codeable.text == '123O')
{
pm.expect(jsonData.details[i].data.check).to.equal('xyz');
}
}
});
2 lines. And this takes care of any number of combinations of lookup values :)
* def lookup = { '123S': 'abc', '123O': 'xyz' }
* match each response.details contains { data: { check: '#(lookup[_$.user.school.name[0].codeable.text])' } }
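To unpack those two lines: for each element of response.details, Karate evaluates _$.user.school.name[0].codeable.text (_$ refers to the current array element), uses the result as a key into the lookup map, and asserts that data.check equals the mapped value. Supporting another combination is just one more map entry; the 123X/qrs pair below is a made-up example:
* def lookup = { '123S': 'abc', '123O': 'xyz', '123X': 'qrs' }
* match each response.details contains { data: { check: '#(lookup[_$.user.school.name[0].codeable.text])' } }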
I have two collections Bill and Employee. Bill contains the information about the monthly student bill and Employee contains all types of people working in the school (Accountant, Teachers, Maintenance etc).
Bill has billVerifiedBy and classteacher fields, which point to records in Employee.
Bill collection
{
"_id": ObjectId("ab12dns..."), //mongoid
"studentname": "demoUser",
"class": { "section": "A"},
"billVerifiedBy": "121212",
"classteacher": "134239",
}
Employee collection
{
"_id": ObjectId("121212"), // random number
"name": "Darn Morphy",
"department": "Accounts",
"email": "dantest#test.com",
}
{
"_id": ObjectId("134239"),
"name": "Derreck",
"department": "Faculty",
"email": "derrect145#test.com",
}
I need to retrieve the accountant and teacher information related to a particular bill. I am using MongoDB's $lookup to get the information. However, I have to look up the same collection twice, since billVerifiedBy and classteacher both reference the Employee collection, as shown below.
db.bill.aggregate([
  {
    $lookup: { "from": "employee", "localField": "billVerifiedBy", "foreignField": "_id", "as": "accounts" }
  },
  {
    $lookup: { "from": "employee", "localField": "classteacher", "foreignField": "_id", "as": "faculty" }
  },
  {
    $project: {
      "studentname": 1,
      "class": 1,
      "verifiedUser": "$accounts.name",
      "verifiedByEmail": "$accounts.email",
      "facultyName": "$faculty.name",
      "facultyEmail": "$faculty.email"
    }
  }
])
I don't know whether this is a good way of arranging the Accounts and Faculty information in a single Employee collection, and whether it is right to look up the same collection twice. Or should I create separate Accounts and Faculty collections and look those up instead? Please suggest the best approach in terms of performance.
In MongoDB, when you want to join multiple documents from the same collection, you can use $lookup with its pipeline and let options. The let variables let the pipeline filter exactly the documents you want to take.
db.getCollection('Bill').aggregate([{
"$lookup": {
"as": "lookupUsers",
"from": "Employee",
// define the variables needed in the pipeline to filter documents
"let": {
"verifier": "$billVerifiedBy",
"teacher": "$classteacher"
},
"pipeline": [{ // filter employees who you need to filter.
"$match": {
"$expr": {
"$or": [{
"$eq": ["$_id", "$$verifier"]
},
{
"$eq": ["$_id", "$$teacher"]
}
]
}
}
},
{ // combine the 2 filtered documents into an employee array
"$group": {
"_id": "",
"employee": {
"$addToSet": {
"_id": "$_id",
"name": "$name",
"department": "$department",
"email": "$email"
}
}
}
},
{ // pick the matching employee for each predefined variable
"$project": {
"_id": 0,
"billVerifiedBy": {
"$slice": [{
"$filter": {
"input": "$employee",
"cond": {
"$eq": ["$$this._id", "$$verifier"]
}
}
},
1
]
},
"classteacher": {
"$slice": [{
"$filter": {
"input": "$employee",
"cond": {
"$eq": ["$$this._id", "$$teacher"]
}
}
},
1
]
}
}
},
{
"$unwind": "$billVerifiedBy"
},
{
"$unwind": "$classteacher"
},
]
}
},
{
"$unwind": "$lookupUsers"
},
]);
The output looks like this:
{
"_id": ObjectId("602916dcf4450742cdebe38d"),
"studentname": "demoUser",
"class": {
"section": "A"
},
"billVerifiedBy": ObjectId("6029172e9ea6c9d4776517ce"),
"classteacher": ObjectId("6029172e9ea6c9d4776517cf"),
"lookupUsers": {
"billVerifiedBy": {
"_id": ObjectId("6029172e9ea6c9d4776517ce"),
"name": "Darn Morphy",
"department": "Accounts",
"email": "dantest#test.com"
},
"classteacher": {
"_id": ObjectId("6029172e9ea6c9d4776517cf"),
"name": "Derreck",
"department": "Faculty",
"email": "derrect145#test.com"
}
}
}
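One caveat worth adding: $expr comparisons such as $eq are type-sensitive. In the question's sample, billVerifiedBy and classteacher are stored as strings while Employee._id is an ObjectId, and in that case neither approach will match anything as written. Assuming MongoDB 4.0+ (for $toObjectId), the let variables can do the conversion up front; a sketch of just that fragment:
"let": {
    "verifier": { "$toObjectId": "$billVerifiedBy" },
    "teacher": { "$toObjectId": "$classteacher" }
},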
How do I write an INNER JOIN query between two data sources when one of them has a dash in its name?
Executing the following query against the Druid SQL binary results in a query error:
SELECT *
FROM first
INNER JOIN "second-schema" on first.device_id = "second-schema".device_id;
org.apache.druid.java.util.common.ISE: Cannot build plan for query
Is this the correct syntax when trying to reference a data source that has a dash in its name?
Schema
[
{
"dataSchema": {
"dataSource": "second-schema",
"parser": {
"type": "string",
"parseSpec": {
"format": "json",
"timestampSpec": {
"column": "ts_start"
},
"dimensionsSpec": {
"dimensions": [
"etid",
"device_id",
"device_name",
"x_1",
"x_2",
"x_3",
"vlan",
"s_x",
"d_x",
"d_p",
"msg_type"
],
"dimensionExclusions": [],
"spatialDimensions": []
}
}
},
"metricsSpec": [
{ "type": "hyperUnique", "name": "conn_id_hll", "fieldName": "conn_id"},
{
"type": "count",
"name": "event_count"
}
],
"granularitySpec": {
"type": "uniform",
"segmentGranularity": "HOUR",
"queryGranularity": "minute"
}
},
"ioConfig": {
"type": "realtime",
"firehose": {
"type": "kafka-0.8",
"consumerProps": {
"zookeeper.connect": "localhost:2181",
"zookeeper.connectiontimeout.ms": "15000",
"zookeeper.sessiontimeout.ms": "15000",
"zookeeper.synctime.ms": "5000",
"group.id": "flow-info",
"fetch.size": "1048586",
"autooffset.reset": "largest",
"autocommit.enable": "false"
},
"feed": "flow-info"
},
"plumber": {
"type": "realtime"
}
},
"tuningConfig": {
"type": "realtime",
"maxRowsInMemory": 50000,
"basePersistDirectory": "\/opt\/druid-data\/realtime\/basePersist",
"intermediatePersistPeriod": "PT10m",
"windowPeriod": "PT15m",
"rejectionPolicy": {
"type": "serverTime"
}
}
},
{
"dataSchema": {
"dataSource": "first",
"parser": {
"type": "string",
"parseSpec": {
"format": "json",
"timestampSpec": {
"column": "ts_start"
},
"dimensionsSpec": {
"dimensions": [
"etid",
"category",
"device_id",
"device_name",
"severity",
"x_2",
"x_3",
"x_4",
"x_5",
"vlan",
"s_x",
"d_x",
"s_i",
"d_i",
"d_p",
"id"
],
"dimensionExclusions": [],
"spatialDimensions": []
}
}
},
"metricsSpec": [
{ "type": "doubleSum", "name": "val_num", "fieldName": "val_num" },
{ "type": "doubleMin", "name": "val_num_min", "fieldName": "val_num" },
{ "type": "doubleMax", "name": "val_num_max", "fieldName": "val_num" },
{ "type": "doubleSum", "name": "size", "fieldName": "size" },
{ "type": "doubleMin", "name": "size_min", "fieldName": "size" },
{ "type": "doubleMax", "name": "size_max", "fieldName": "size" },
{ "type": "count", "name": "first_count" }
],
"granularitySpec": {
"type": "uniform",
"segmentGranularity": "HOUR",
"queryGranularity": "minute"
}
},
"ioConfig": {
"type": "realtime",
"firehose": {
"type": "kafka-0.8",
"consumerProps": {
"zookeeper.connect": "localhost:2181",
"zookeeper.connectiontimeout.ms": "15000",
"zookeeper.sessiontimeout.ms": "15000",
"zookeeper.synctime.ms": "5000",
"group.id": "first",
"fetch.size": "1048586",
"autooffset.reset": "largest",
"autocommit.enable": "false"
},
"feed": "first"
},
"plumber": {
"type": "realtime"
}
},
"tuningConfig": {
"type": "realtime",
"maxRowsInMemory": 50000,
"basePersistDirectory": "\/opt\/druid-data\/realtime\/basePersist",
"intermediatePersistPeriod": "PT10m",
"windowPeriod": "PT15m",
"rejectionPolicy": {
"type": "serverTime"
}
}
}
]
Based on your schema definitions, there are a few observations I'll make.
When doing a join you usually have to list the columns explicitly (not use *), otherwise you get collisions from duplicate column names. In your join, for example, you have a device_id in both "first" and "second-schema", not to mention all the other columns that are the same across both.
When quoting identifiers, I don't mix quoted and unquoted names: I either quote all of them or none of them.
So I think your query will work better in a form more like this:
SELECT
"first"."etid",
"first"."category",
"first"."device_id",
"first"."device_name",
"first"."severity",
"first"."x_2",
"first"."x_3",
"first"."x_4",
"first"."x_5",
"first"."vlan",
"first"."s_x",
"first"."d_x",
"first"."s_i",
"first"."d_i",
"first"."d_p",
"first"."id",
"second-schema"."etid" as "ss_etid",
"second-schema"."device_id" as "ss_device_id",
"second-schema"."device_name" as "ss_device_name",
"second-schema"."x_1" as "ss_x_1",
"second-schema"."x_2" as "ss_x_2",
"second-schema"."x_3" as "ss_x_3",
"second-schema"."vlan" as "ss_vlan",
"second-schema"."s_x" as "ss_s_x",
"second-schema"."d_x" as "ss_d_x",
"second-schema"."d_p" as "ss_d_p",
"second-schema"."msg_type"
FROM "first"
INNER JOIN "second-schema" ON "first"."device_id" = "second-schema"."device_id";
Obviously, feel free to name columns as you see fit, or include/exclude columns as needed. SELECT * will only work when all column names across both tables are unique.
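As a follow-up, a table alias can cut down on repeating the dashed name. The alias ss below is my own naming, and the column list is abbreviated for the sketch:
SELECT
  f."device_id",
  f."device_name",
  ss."device_id" AS "ss_device_id",
  ss."msg_type"
FROM "first" f
INNER JOIN "second-schema" ss ON f."device_id" = ss."device_id";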
I am using Cosmos DB and have a document with the following simplified structure:
{
"id1":"123",
"stuff": [
{
"id2": "stuff",
"a": {
"b": {
"c": {
"d": [
{
"e": [
{
"id3": "things",
"name": "animals",
"classes": [
{
"name": "ostrich",
"meta": 1
},
{
"name": "big ostrich",
"meta": 1
}
]
},
{
"id3": "default",
"name": "other",
"classes": [
{
"name": "green trees",
"meta": 1
},
{
"name": "trees",
"score": 1
}
]
}
]
}
]
}
}
}
}
]
}
My issue is that I have an array of these documents and need to search name to see if it matches a search word. For example, I want both "big trees" and "trees" to be returned if a user types in trees.
So currently I push every document into an array and do the following:
for (const doc of documents) {
  for (const s of doc.stuff) {
    for (const e of s.a.b.c.d[0].e) {
      for (const cls of e.classes) {
        // split the class name into words and look for the search word
        if (cls.name.split(' ').includes(searchWord)) {
          return { id1: doc.id1, id2: s.id2, id3: e.id3 };
        }
      }
    }
  }
}
With Cosmos DB, I am using SQL with the following code:
client.queryDocuments(
collection,
`SELECT * FROM root r`
).toArray((err, results) => {stuff});
This effectively brings every document in my collection into an array so that the manual search above can be performed.
This is going to cause issues when I have thousands or millions of documents, and I believe I should be leveraging the search mechanics available within Cosmos itself. Is anyone able to help me work out what SQL query would perform this type of search?
Having searched everything, is it also possible to search only the 5 latest documents?
Thanks for any insight in advance!
1. Is anyone able to help me work out what SQL query would perform this type of search?
Based on your sample and description, I suggest using ARRAY_CONTAINS in Cosmos DB SQL. Please refer to my sample.
Sample documents:
[
{
"id1": "123",
"stuff": [
{
"id2": "stuff",
"a": {
"b": {
"c": {
"d": [
{
"e": [
{
"id3": "things",
"name": "animals",
"classes": [
{
"name": "ostrich",
"meta": 1
},
{
"name": "big ostrich",
"meta": 1
}
]
},
{
"id3": "default",
"name": "other",
"classes": [
{
"name": "green trees",
"meta": 1
},
{
"name": "trees",
"score": 1
}
]
}
]
}
]
}
}
}
}
]
},
{
"id1": "456",
"stuff": [
{
"id2": "stuff2",
"a": {
"b": {
"c": {
"d": [
{
"e": [
{
"id3": "things2",
"name": "animals",
"classes": [
{
"name": "ostrich",
"meta": 1
},
{
"name": "trees",
"meta": 1
}
]
},
{
"id3": "default2",
"name": "other",
"classes": [
{
"name": "green trees",
"meta": 1
},
{
"name": "trees",
"score": 1
}
]
}
]
}
]
}
}
}
}
]
},
{
"id1": "789",
"stuff": [
{
"id2": "stuff3",
"a": {
"b": {
"c": {
"d": [
{
"e": [
{
"id3": "things3",
"name": "animals",
"classes": [
{
"name": "ostrich",
"meta": 1
},
{
"name": "big",
"meta": 1
}
]
},
{
"id3": "default3",
"name": "other",
"classes": [
{
"name": "big trees",
"meta": 1
}
]
}
]
}
]
}
}
}
}
]
}
]
Query:
SELECT distinct c.id1,stuff.id2,e.id3 FROM c
join stuff in c.stuff
join d in stuff.a.b.c.d
join e in d.e
where ARRAY_CONTAINS(e.classes,{name:"trees"},true)
or ARRAY_CONTAINS(e.classes,{name:"big trees"},true)
Output:
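Worked through by hand against the three sample documents above (ARRAY_CONTAINS with true as the third argument does a partial match: only the properties listed in the filter object are compared, using exact value equality, which is why "green trees" does not match {name:"trees"}), the result should be along these lines:
[
  { "id1": "123", "id2": "stuff", "id3": "default" },
  { "id1": "456", "id2": "stuff2", "id3": "things2" },
  { "id1": "456", "id2": "stuff2", "id3": "default2" },
  { "id1": "789", "id2": "stuff3", "id3": "default3" }
]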
2. Having searched everything, is it also possible to search only the 5 latest documents?
Per my research, features like LIMIT are not supported in Cosmos DB so far. However, TOP is supported. So if you add a sort field (such as a date or an id), you can use SQL like:
select top 5 * from c order by c.sort desc
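As an aside, every Cosmos DB document carries a system timestamp property _ts (last-modified time in epoch seconds), so if there is no application-level sort field, the built-in one may serve, assuming the path is indexed for ORDER BY:
SELECT TOP 5 * FROM c ORDER BY c._ts DESC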