PostgreSQL query to extract attributes in JSON - sql

I have the JSON below in a particular DB column. I need a query that extracts the fields stored within savings_rate (to and from).
{
"data": [
{
"data": {
"intro_visited": {
"portfolio_detail_investment_journey": true,
"dashboard_investments": true,
"portfolio_list_updates": true,
"portfolio_detail_invested": true,
"portfolio_list_offering": true,
"dashboard_more_bottom_bar": true
}
},
"type": "user_properties",
"schema_version": "1"
},
{
"data": {
"savings_info": {
"remind_at": 1583475493291,
"age": 100,
"savings_rate": {
"to": "20",
"from": "4"
},
"recommendation": {
"offering_name": "Emergency Fund",
"amount": "1,11,111",
"offering_status": "not_invested",
"ideal_amount": "1,11,111",
"offering_code": "liquid"
}
}
},
"type": "savings_info",
"schema_version": "1"
}
]
}

To get the "To"
$..data.savings_info.savings_rate.to
To get the "From"
$..data.savings_info.savings_rate.from

This script works (the array index after 'data' is zero-based and must point at the savings_info element, which in the sample above is index 1):
SELECT
    <column> -> 'data' -> 1 -> 'data' -> 'savings_info' -> 'savings_rate' ->> 'to' AS to_rate
FROM <table>;
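Hard-coding the array index is brittle if the order of the elements ever changes. As a minimal sketch of an order-independent alternative (assuming the column is of type jsonb; use json_array_elements for a plain json column, and note that <table> and <column> are placeholders as above), you can expand the data array and keep only the element whose type is savings_info:
SELECT
    elem -> 'data' -> 'savings_info' -> 'savings_rate' ->> 'to'   AS to_rate,
    elem -> 'data' -> 'savings_info' -> 'savings_rate' ->> 'from' AS from_rate
FROM <table>,
     jsonb_array_elements(<column> -> 'data') AS elem  -- one row per element of the "data" array
WHERE elem ->> 'type' = 'savings_info';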

Related

How to check a particular value based on a condition in Karate

Goal: verify that the check value is correct for the 123S and 123O responses from the API.
First check the value at x.details[0].user.school.name[0].codeable.text; if it is 123S, then check that x.details[0].data.check is abc.
Then check whether the value at x.details[1].user.school.name[0].codeable.text is 123O; if so, check that x.details[1].data.check is xyz.
The order of the array elements in the response changes; the first element is not necessarily 123S, sometimes the API returns 123O as the first element.
Sample JSON.
{
"type": "1",
"array": 2,
"details": [
{
"path": "path",
"user": {
"school": {
"name": [
{
"value": "this is school",
"codeable": {
"details": [
{
"hello": "yty",
"condition": "check1"
}
],
"text": "123S"
}
}
]
},
"sample": "test1",
"id": "22222"
},
"data": {
"check": "abc"
}
},
{
"path": "path",
"user": {
"school": {
"name": [
{
"value": "this is school",
"codeable": {
"details": [
{
"hello": "def",
"condition": "check2"
}
],
"text": "123O"
}
}
]
},
"sample": "test",
"id": "11111"
},
"data": {
"check": "xyz"
}
}
]
}
This is how I did it in Postman, but how do I replicate the same thing in Karate?
var jsonData = pm.response.json();
pm.test("Body matches string", function () {
    for (var i = 0; i < jsonData.details.length; i++) {
        if (jsonData.details[i].user.school.name[0].codeable.text == '123S') {
            pm.expect(jsonData.details[i].data.check).to.equal('abc');
        }
        if (jsonData.details[i].user.school.name[0].codeable.text == '123O') {
            pm.expect(jsonData.details[i].data.check).to.equal('xyz');
        }
    }
});
Two lines, and this takes care of any number of combinations of lookup values :)
* def lookup = { '123S': 'abc', '123O': 'xyz' }
* match each response.details contains { data: { check: '#(lookup[_$.user.school.name[0].codeable.text])' } }
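For context, here is a minimal sketch of how those two lines could sit inside a full scenario; the url and path are placeholders, not taken from the original question:
Scenario: verify check values by school code
    # the url and path below are placeholders
    Given url 'https://example.com'
    And path 'details'
    When method get
    Then status 200
    # lookup maps each school code to its expected check value
    * def lookup = { '123S': 'abc', '123O': 'xyz' }
    # _$ refers to the current element of response.details during match each
    * match each response.details contains { data: { check: '#(lookup[_$.user.school.name[0].codeable.text])' } }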

How to avoid duplicated data entries after parsing JSON in Kusto?

I have the following sample JSON data.
{
"data": {
"type": "ABC",
"id": "17495500314",
"attributes": {
[!["event": "update",
"gps_vali][1]][1]d": true,
"gps": {
"distance_diff": 6.48,
"total_distance": 848.6
},
"hdop": 79,
"fuel_level": 46.8,
"total_fuel_used": 60443.9,
"location": {
"latitude": 411.372618,
"longitude": -1.254931,
"relative_position": {
"distance": "37",
}
},
"idle_periods": []
},
"relationships": {
"assets": {
"data": [
{
"type": "ABCDFTTG",
"id": "1589799143500003",
"attributes": {
"external_id": "ABCDFTTG",
"hardware_id": "ABCDFTTG"
}
}
]
},
"devices": {
"data": [
{
"type": "ABCDFTTG",
"id": "1585231172900341",
"attributes": {
"serial": "5572016191"
}
},
{
"type": "tablet",
"id": "1587893062600175",
"attributes": {
"serial": "ABCDFTTG"
}
}
]
},
"users": {
"data": [
{
"type": "user",
"id": "ABCDFTTG",
"attributes": {
"external_id": "ABCDFTTG"
}
}
]
}
}
},
"meta": {
"message_id": "11eb-8c75-0b3f87aedbb5",
"consumer_version": "1.2.0",
"origin_version": null,
"timestamp": "2021-06-14T17:42:29Z"
}
}
I want only one row instead of these two. Here is my Kusto query, which is used to parse the JSON data into table columns.
Test
|where messageId =="123"
//|mv-expand message=message.data.attributes
|mv-expand message
|mv-expand Value=message.data.relationships.assets.['data']
|mv-expand value_devices=message.data.relationships.devices.['data']
|mv-expand value_user=message.data.relationships.users.['data']
| project type=message.data.type,id=message.data.id,
event=tostring(message.data.attributes.event),
logged_at=tostring(message.data.attributes.logged_at),
distance=toint(message.data.attributes.location.relative_position.distance),
// Value=message.data.relationships.assets.['data'],//.['data']
type_asset=Value.type,asset_id=Value.id,
device_type=value_devices.type,device_id=value_devices.id,
device_attr_serial=value_devices.attributes.serial,
user_type=value_user.type,user_id=value_user.id,
user_external_id=value_user.attributes.external_id
The duplicate row appeared after adding the users tag; this tag is an array, so how do I handle this array when it contains a single id?
I have parsed my JSON data and got the following output.
The expected output should be like this:
check the device_type and device_id columns
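One way to avoid the extra row, sketched under the assumption that the assets and users 'data' arrays always contain exactly one element: index them directly with [0] instead of mv-expanding them, and keep mv-expand only for devices. Column names follow the original query; logged_at is omitted here for brevity.
Test
| where messageId == "123"
| mv-expand message
// devices really is a multi-element array, so keep mv-expand only for it
| mv-expand value_devices = message.data.relationships.devices.['data']
| project type = message.data.type, id = message.data.id,
    event = tostring(message.data.attributes.event),
    distance = toint(message.data.attributes.location.relative_position.distance),
    // assets and users are read from index 0, assuming each holds a single entry
    type_asset = message.data.relationships.assets.['data'][0].type,
    asset_id = message.data.relationships.assets.['data'][0].id,
    device_type = value_devices.type, device_id = value_devices.id,
    device_attr_serial = value_devices.attributes.serial,
    user_type = message.data.relationships.users.['data'][0].type,
    user_id = message.data.relationships.users.['data'][0].id,
    user_external_id = message.data.relationships.users.['data'][0].attributes.external_id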

Query Druid SQL inner join with a dataSource name that has a dash

How do I write an INNER JOIN query between two data sources when one of them has a dash in its name?
Executing the following query on the Druid SQL binary results in a query error:
SELECT *
FROM first
INNER JOIN "second-schema" on first.device_id = "second-schema".device_id;
org.apache.druid.java.util.common.ISE: Cannot build plan for query
Is this the correct syntax when trying to reference a data source that has a dash in its name?
Schema
[
{
"dataSchema": {
"dataSource": "second-schema",
"parser": {
"type": "string",
"parseSpec": {
"format": "json",
"timestampSpec": {
"column": "ts_start"
},
"dimensionsSpec": {
"dimensions": [
"etid",
"device_id",
"device_name",
"x_1",
"x_2",
"x_3",
"vlan",
"s_x",
"d_x",
"d_p",
"msg_type"
],
"dimensionExclusions": [],
"spatialDimensions": []
}
}
},
"metricsSpec": [
{ "type": "hyperUnique", "name": "conn_id_hll", "fieldName": "conn_id"},
{
"type": "count",
"name": "event_count"
}
],
"granularitySpec": {
"type": "uniform",
"segmentGranularity": "HOUR",
"queryGranularity": "minute"
}
},
"ioConfig": {
"type": "realtime",
"firehose": {
"type": "kafka-0.8",
"consumerProps": {
"zookeeper.connect": "localhost:2181",
"zookeeper.connectiontimeout.ms": "15000",
"zookeeper.sessiontimeout.ms": "15000",
"zookeeper.synctime.ms": "5000",
"group.id": "flow-info",
"fetch.size": "1048586",
"autooffset.reset": "largest",
"autocommit.enable": "false"
},
"feed": "flow-info"
},
"plumber": {
"type": "realtime"
}
},
"tuningConfig": {
"type": "realtime",
"maxRowsInMemory": 50000,
"basePersistDirectory": "\/opt\/druid-data\/realtime\/basePersist",
"intermediatePersistPeriod": "PT10m",
"windowPeriod": "PT15m",
"rejectionPolicy": {
"type": "serverTime"
}
}
},
{
"dataSchema": {
"dataSource": "first",
"parser": {
"type": "string",
"parseSpec": {
"format": "json",
"timestampSpec": {
"column": "ts_start"
},
"dimensionsSpec": {
"dimensions": [
"etid",
"category",
"device_id",
"device_name",
"severity",
"x_2",
"x_3",
"x_4",
"x_5",
"vlan",
"s_x",
"d_x",
"s_i",
"d_i",
"d_p",
"id"
],
"dimensionExclusions": [],
"spatialDimensions": []
}
}
},
"metricsSpec": [
{ "type": "doubleSum", "name": "val_num", "fieldName": "val_num" },
{ "type": "doubleMin", "name": "val_num_min", "fieldName": "val_num" },
{ "type": "doubleMax", "name": "val_num_max", "fieldName": "val_num" },
{ "type": "doubleSum", "name": "size", "fieldName": "size" },
{ "type": "doubleMin", "name": "size_min", "fieldName": "size" },
{ "type": "doubleMax", "name": "size_max", "fieldName": "size" },
{ "type": "count", "name": "first_count" }
],
"granularitySpec": {
"type": "uniform",
"segmentGranularity": "HOUR",
"queryGranularity": "minute"
}
},
"ioConfig": {
"type": "realtime",
"firehose": {
"type": "kafka-0.8",
"consumerProps": {
"zookeeper.connect": "localhost:2181",
"zookeeper.connectiontimeout.ms": "15000",
"zookeeper.sessiontimeout.ms": "15000",
"zookeeper.synctime.ms": "5000",
"group.id": "first",
"fetch.size": "1048586",
"autooffset.reset": "largest",
"autocommit.enable": "false"
},
"feed": "first"
},
"plumber": {
"type": "realtime"
}
},
"tuningConfig": {
"type": "realtime",
"maxRowsInMemory": 50000,
"basePersistDirectory": "\/opt\/druid-data\/realtime\/basePersist",
"intermediatePersistPeriod": "PT10m",
"windowPeriod": "PT15m",
"rejectionPolicy": {
"type": "serverTime"
}
}
}
]
Based on your schema definitions, there are a few observations I'll make.
When doing a join you usually have to list the columns out explicitly (rather than use *), otherwise you get collisions from duplicate column names. In your join, for example, you have a device_id in both "first" and "second-schema", not to mention all the other columns that are the same across both.
When using quoted identifiers I don't mix styles: either every identifier is quoted or none are.
So I think your query will work better in a form more like this:
SELECT
"first"."etid",
"first"."category",
"first"."device_id",
"first"."device_name",
"first"."severity",
"first"."x_2",
"first"."x_3",
"first"."x_4",
"first"."x_5",
"first"."vlan",
"first"."s_x",
"first"."d_x",
"first"."s_i",
"first"."d_i",
"first"."d_p",
"first"."id",
"second-schema"."etid" as "ss_etid",
"second-schema"."device_id" as "ss_device_id",
"second-schema"."device_name" as "ss_device_name",
"second-schema"."x_1" as "ss_x_1",
"second-schema"."x_2" as "ss_x_2",
"second-schema"."x_3" as "ss_x_3",
"second-schema"."vlan" as "ss_vlan",
"second-schema"."s_x" as "ss_s_x",
"second-schema"."d_x" as "ss_d_x",
"second-schema"."d_p" as "ss_d_p",
"second-schema"."msg_type"
FROM "first"
INNER JOIN "second-schema" ON "first"."device_id" = "second-schema"."device_id";
Obviously, feel free to name the columns as you see fit, or include/exclude columns as needed. SELECT * will only work when all column names across both tables are unique.

Issue when sending Query with Arabic characters through API

I can't send a query with Arabic characters through the API. I am trying to send the query from CS-Cart to QuickBooks Online.
I tried to send the query using the Arabic letters as follows:
select * from Customer Where DisplayName = 'احمد عبدالعزيز'
it returns:
{
"responseHeader": {
"status": 400,
"message": "Bad Request",
"intuitTid": "2dbec1fd-5dc1-3a14-4a12-7c338db0ee2a",
"realmID": "123146420719144"
},
"response": {
"Fault": {
"Error": [
{
"Message": "Error parsing query",
"Detail": "QueryParserError: Invalid content. Lexical error at line 1, column 45. Encountered: \"\\u0627\" (1575), after : \"\\'\"",
"code": "4000"
}
],
"type": "ValidationFault"
},
"time": "2019-07-04T07:09:03.026-07:00"
}
}
And if I try it after encoding the name and sending the query as follows:
select * from Customer Where DisplayName = '%D8%A7%D8%AD%D9%85%D8%AF+%D8%B9%D8%A8%D8%AF%D8%A7%D9%84%D8%B9%D8%B2%D9%8A%D8%B2'
it returns nothing:
{
"QueryResponse": {},
"time": "2019-07-04T07:09:42.698-07:00"
}
I am expecting to get something like:
{
"QueryResponse": {
"Customer": [
{
"Taxable": false,
"BillAddr": {
"Id": "924",
"Country": "Saudi Arabia"
},
"ShipAddr": {
"Id": "925",
"Country": "Saudi Arabia"
},
"Job": false,
"BillWithParent": false,
"Balance": 157.5,
"BalanceWithJobs": 157.5,
"CurrencyRef": {
"value": "SAR",
"name": "Saudi Riyal"
},
"PreferredDeliveryMethod": "None",
"IsProject": false,
"domain": "QBO",
"sparse": false,
"Id": "577",
"SyncToken": "0",
"MetaData": {
"CreateTime": "2019-07-01T06:37:32-07:00",
"LastUpdatedTime": "2019-07-01T06:37:33-07:00"
},
"GivenName": "Ramil",
"FamilyName": "Gilaev",
"FullyQualifiedName": "Ramil Gilaev",
"DisplayName": "Ramil Gilaev",
"PrintOnCheckName": "Ramil Gilaev",
"Active": true,
"PrimaryPhone": {
"FreeFormNumber": "123456789"
}
}
],
"startPosition": 1,
"maxResults": 1
},
"time": "2019-07-05T02:12:35.562-07:00"
}
I also noticed that even if the query uses an English name, the result is the same.
select * from Customer Where DisplayName = 'Ahmed Al-Khuraisir'
it returns:
{
"QueryResponse": {},
"time": "2019-07-05T03:31:11.149-07:00"
}
Please check attached images.
Screenshot 1
Screenshot 2
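For reference, one common way to call the QuickBooks Online query endpoint is to pass the whole statement as a single query parameter and let the HTTP client do the URL encoding, keeping the Arabic literal as plain UTF-8 inside the statement. A sketch with curl, using the realm ID from the error response above and a placeholder access token (use the sandbox base URL instead if this is a sandbox company); this is an illustration of the request shape, not a verified fix:
curl -G "https://quickbooks.api.intuit.com/v3/company/123146420719144/query" \
  -H "Authorization: Bearer <access_token>" \
  -H "Accept: application/json" \
  --data-urlencode "query=select * from Customer Where DisplayName = 'احمد عبدالعزيز'"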

Max Response Limitation in OTA_AirLowFareSearchRQ

I'm working with the Sabre REST API. I have an issue with OTA_AirLowFareSearchRQ: I try to limit the number of results using MaxResponses in the JSON structure, but it seems I am doing something wrong, because the response gives me 95 results in the cert environment (https://api.cert.sabre.com/).
The JSON request that I use is:
{
"OTA_AirLowFareSearchRQ": {
"Target": "Production",
"PrimaryLangID": "ES",
"MaxResponses": "15",
"POS": {
"Source": [{
"RequestorID": {
"Type": "1",
"ID": "1",
"CompanyName": {}
}
}]
},
"OriginDestinationInformation": [{
"RPH": "1",
"DepartureDateTime": "2016-04-01T11:00:00",
"OriginLocation": {
"LocationCode": "BOG"
},
"DestinationLocation": {
"LocationCode": "CTG"
},
"TPA_Extensions": {
"SegmentType": {
"Code": "O"
}
}
}],
"TravelPreferences": {
"ValidInterlineTicket": true,
"CabinPref": [{
"Cabin": "Y",
"PreferLevel": "Preferred"
}],
"TPA_Extensions": {
"TripType": {
"Value": "Return"
},
"LongConnectTime": {
"Min": 780,
"Max": 1200,
"Enable": true
},
"ExcludeCallDirectCarriers": {
"Enabled": true
}
}
},
"TravelerInfoSummary": {
"SeatsRequested": [1],
"AirTravelerAvail": [{
"PassengerTypeQuantity": [{
"Code": "ADT",
"Quantity": 1
}]
}]
},
"TPA_Extensions": {
"IntelliSellTransaction": {
"RequestType": {
"Name": "10ITINS"
}
}
}
}
}
MaxResponses could be something for internal development; it is part of the schema but does not affect the response.
What you can modify is the IntelliSellTransaction. You used 10ITINS, but the values that will work should be 50ITINS, 100ITINS, and 200ITINS.
EDIT 2 (as Panagiotis Kanavos said):
RequestType values depend on the business agreement between your company and Sabre. You can't use 100 or 200 without modifying the agreement.
"TPA_Extensions": {
"IntelliSellTransaction": {
"RequestType": {
"Name": "50ITINS"
}
}
}
EDIT 1:
I searched a bit more and found:
OTA_AirLowFareSearchRQ.TravelPreferences.TPA_Extensions.NumTrips
Required: false
Type: object
Description: This element allows a user to specify the number of itineraries returned.
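A sketch of where that element would go in the request body; the inner attribute name (Number) is an assumption and should be verified against the OTA_AirLowFareSearchRQ schema:
"TravelPreferences": {
    "TPA_Extensions": {
        "NumTrips": {
            "Number": 10
        }
    }
}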