So I started to log all the queries of my Spring Boot application via a proxy DataSource and came across some queries I couldn't explain.
This is the JSON log of said queries:
[
  {
    "name": "TOXI",
    "connection": 3,
    "isolation": "READ_COMMITTED",
    "time": 2,
    "success": true,
    "type": "Prepared",
    "batch": false,
    "querySize": 1,
    "batchSize": 0,
    "query": [
      "select * from information_schema.sequences"
    ],
    "params": [
      []
    ]
  },
  {
    "name": "TOXI",
    "connection": 3,
    "isolation": "READ_COMMITTED",
    "time": 2,
    "success": true,
    "type": "Prepared",
    "batch": false,
    "querySize": 1,
    "batchSize": 0,
    "query": [
      "select * from \"public\".\"toxi_image\" where 1=0"
    ],
    "params": [
      []
    ]
  },
  {
    "name": "TOXI",
    "connection": 3,
    "isolation": "READ_COMMITTED",
    "time": 0,
    "success": true,
    "type": "Prepared",
    "batch": false,
    "querySize": 1,
    "batchSize": 0,
    "query": [
      "select * from \"public\".\"toxi_tag\" where 1=0"
    ],
    "params": [
      []
    ]
  }
]
The first one still makes sense to me, but the second and third are where my questions start.
Why are they needed? Shouldn't the information schema hold all the table information that's needed? And why is this statement issued only for these 2 tables and not for the rest of the application?
One last thing worth mentioning: the two tables/entities have a many-to-many relationship, in case that has something to do with it.
Thank you in advance.
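From what I understand, "select * from <table> where 1=0" is a classic trick for fetching a result set's column metadata without transferring any rows, and persistence layers commonly use it to inspect a table at startup. Whether that is exactly what is happening here I can't say for certain, but the pattern itself is easy to demonstrate. A minimal sketch in Python with the stdlib sqlite3 module; only the table name is taken from the log above, the columns are hypothetical:

import sqlite3

# In-memory stand-in; the real column names are unknown, these are made up.
conn = sqlite3.connect(":memory:")
conn.execute("create table toxi_image (id integer primary key, path text)")

# "where 1=0" matches no rows, yet the cursor still learns the column layout.
cur = conn.execute('select * from "toxi_image" where 1=0')
print(cur.fetchall())                       # [] - zero rows transferred
print([col[0] for col in cur.description])  # ['id', 'path'] - column metadata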
Related
I need help getting Adobe Analytics data when multiple IDs are passed for a multi-level breakdown using API 2.0.
I am getting first-level data for v124 with this request fragment:
"metricContainer": {
"metrics": [
{
"columnId": "0",
"id": "metrics/event113",
"sort": "desc"
}
]
},
"dimension": "variables/evar124",
but I want to use the IDs returned in the above call's response in a second-level breakdown of v124 to get v125, like this:
"metricContainer": {
"metrics": [
{
"columnId": "0",
"id": "metrics/event113",
"sort": "desc",
"filters": [
"0"
]
}
],
"metricFilters": [
{
"id": "0",
"type": "breakdown",
"dimension": "variables/evar124",
"itemId": "2629267831"
},
{
"id": "1",
"type": "breakdown",
"dimension": "variables/evar124",
"itemId": "2629267832"
}
]
},
"dimension": "variables/evar125",
This always returns data only for ID 2629267831 and never for 2629267832.
I want to get data for thousands of IDs (returned from the first API call) in a single API call. What am I doing wrong?
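One observation, purely from the JSON above: the metric only references filter "0" in its "filters" list, while filter "1" is defined in "metricFilters" but never applied to any metric, which may be why only 2629267831 comes back. Lacking a confirmed way to batch thousands of itemIds into one request, a fallback is to issue one second-level request per itemId. A hedged sketch in Python; the endpoint URL, headers, and rsid are assumptions, not taken from the question:

import requests

# Hypothetical endpoint and credentials - adjust to your company and report suite.
URL = "https://analytics.adobe.io/api/<companyId>/reports"
HEADERS = {"Authorization": "Bearer <token>", "x-api-key": "<client-id>"}

def breakdown_request(item_id):
    # Mirrors the payload in the question: the metric references filter "0",
    # and filter "0" carries the evar124 itemId being broken down.
    return {
        "rsid": "<report-suite-id>",
        "metricContainer": {
            "metrics": [
                {"columnId": "0", "id": "metrics/event113",
                 "sort": "desc", "filters": ["0"]}
            ],
            "metricFilters": [
                {"id": "0", "type": "breakdown",
                 "dimension": "variables/evar124", "itemId": item_id}
            ]
        },
        "dimension": "variables/evar125"
    }

item_ids = ["2629267831", "2629267832"]  # itemIds from the first call's response
for item_id in item_ids:
    resp = requests.post(URL, headers=HEADERS, json=breakdown_request(item_id))
    resp.raise_for_status()
    rows = resp.json()  # collect the v125 rows for this itemId here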
I am indexing all the file names into the index, but when I search with an exact file name in the search query it returns all the other file names as well. Below is my index definition:
{
  "fields": [
    {
      "name": "id",
      "type": "Edm.String",
      "facetable": true,
      "filterable": true,
      "key": true,
      "retrievable": true,
      "searchable": false,
      "sortable": false,
      "analyzer": null,
      "indexAnalyzer": null,
      "searchAnalyzer": null,
      "synonymMaps": [],
      "fields": []
    },
    {
      "name": "FileName",
      "type": "Edm.String",
      "facetable": false,
      "filterable": false,
      "key": false,
      "retrievable": true,
      "searchable": true,
      "sortable": false,
      "analyzer": "keyword-analyzer",
      "indexAnalyzer": null,
      "searchAnalyzer": null,
      "synonymMaps": [],
      "fields": []
    }
  ],
  "scoringProfiles": [],
  "defaultScoringProfile": null,
  "corsOptions": null,
  "analyzers": [
    {
      "name": "keyword-analyzer",
      "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
      "charFilters": [],
      "tokenizer": "keyword_v2",
      "tokenFilters": ["lowercase", "my_asciifolding", "my_word_delimiter"]
    }
  ],
  "tokenFilters": [
    {
      "@odata.type": "#Microsoft.Azure.Search.AsciiFoldingTokenFilter",
      "name": "my_asciifolding",
      "preserveOriginal": true
    },
    {
      "@odata.type": "#Microsoft.Azure.Search.WordDelimiterTokenFilter",
      "name": "my_word_delimiter",
      "generateWordParts": true,
      "generateNumberParts": false,
      "catenateWords": false,
      "catenateNumbers": false,
      "catenateAll": false,
      "splitOnCaseChange": true,
      "preserveOriginal": true,
      "splitOnNumerics": true,
      "stemEnglishPossessive": false,
      "protectedWords": []
    }
  ],
  "@odata.etag": "\"0x8D6FB2F498F9AD2\""
}
Below is my sample data:
{
  "value": [
    {
      "id": "1",
      "FileName": "SamplePSDFile_1psd2680.psd"
    },
    {
      "id": "2",
      "FileName": "SamplePSDFile-1psd260.psd"
    },
    {
      "id": "3",
      "FileName": "SamplePSDFile_1psd2689.psd"
    },
    {
      "id": "4",
      "FileName": "SamplePSDFile-1psdxx2680.psd"
    }
  ]
}
Below are the Analyze API results:
{
  "tokens": [
    {
      "token": "samplepsdfile_1psd2689.psd",
      "startOffset": 0,
      "endOffset": 26,
      "position": 0
    },
    {
      "token": "samplepsdfile",
      "startOffset": 0,
      "endOffset": 13,
      "position": 0
    },
    {
      "token": "psd",
      "startOffset": 15,
      "endOffset": 18,
      "position": 1
    },
    {
      "token": "psd",
      "startOffset": 23,
      "endOffset": 26,
      "position": 2
    }
  ]
}
When I search with the keyword "SamplePSDFile_1psd2689.psd", Azure Search returns three records in the results instead of only document 3, presumably because, per the Analyze output above, the file names share tokens such as "samplepsdfile" and "psd". Below are my search query and the results.
?search="SamplePSDFile_1psd2689.psd"&api-version=2019-05-06&$count=true&queryType=full&searchMode=All
{
  "@odata.count": 3,
  "value": [
    {
      "@search.score": 2.3387241,
      "id": "2",
      "FileName": "SamplePSDFile-1psd260.psd"
    },
    {
      "@search.score": 2.2493405,
      "id": "3",
      "FileName": "SamplePSDFile_1psd2689.psd"
    },
    {
      "@search.score": 2.2493405,
      "id": "1",
      "FileName": "SamplePSDFile_1psd2680.psd"
    }
  ]
}
How can I achieve my expected results? I tried with and without double quotes around the keyword, and all the other options, but no luck. What am I doing wrong here?
Somebody suggested using $filter, but that field isn't filterable in our case.
Please help me with this.
If you are looking for an exact match then you probably don't want any analyzer involved. Give it a try with this line
"analyzer": "keyword-analyzer"
changed to
"analyzer": null
If you need to be able to do an exact match on the field and also support partial keyword searches, then you need to index the field twice under different names. Maybe append "Exact" to the exact-match field name and don't use an analyzer for that one; the name without "Exact" can keep the analyzer. Then search on the right field name depending on the type of search, as sketched below.
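A minimal sketch of what the two fields could look like in the index definition; "FileNameExact" is a hypothetical name, both fields would be filled with the same value at indexing time, and the null analyzer follows the suggestion above:

"fields": [
  {
    "name": "FileName",
    "type": "Edm.String",
    "searchable": true,
    "retrievable": true,
    "analyzer": "keyword-analyzer"
  },
  {
    "name": "FileNameExact",
    "type": "Edm.String",
    "searchable": true,
    "retrievable": true,
    "analyzer": null
  }
]

An exact-match query could then target that field explicitly with the full Lucene syntax, e.g. ?search=FileNameExact:"SamplePSDFile_1psd2689.psd"&queryType=full.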
I am trying to log my service requests.
First I try to get the service from my partner; upon failure I
try the same from my vendor, hence I need to add the same metrics under two different dimensions.
Following is my log structure. Apparently this is wrong, as JSON does not support duplicate keys,
and AWS picks only the latest value when keys are duplicated.
Kindly suggest the right way of doing this.
{
  "_aws": {
    "Timestamp": 1574109732004,
    "CloudWatchMetrics": [
      {
        "Namespace": "NameSpace1",
        "Dimensions": [["Partner"]],
        "Metrics": [
          {
            "Name": "requestCount",
            "Unit": "Count"
          },
          {
            "Name": "requestFailure",
            "Unit": "Count"
          },
          {
            "Name": "responseTime",
            "Unit": "Milliseconds"
          }
        ]
      },
      {
        "Namespace": "NameSpace1",
        "Dimensions": [["vendor"]],
        "Metrics": [
          {
            "Name": "requestCount",
            "Unit": "Count"
          },
          {
            "Name": "requestSuccess",
            "Unit": "Count"
          },
          {
            "Name": "responseTime",
            "Unit": "Milliseconds"
          }
        ]
      }
    ]
  },
  "Partner": "partnerName",
  "requestCount": 1,
  "requestFailure": 1,
  "responseTime": 1,
  "vendor": "vendorName",
  "requestCount": 2,
  "requestSuccess": 2,
  "responseTime": 2
}
This will give you metrics separated by partner and vendor:
{
  "Partner": "partnerName",
  "vendor": "vendorName",
  "_aws": {
    "Timestamp": 1577179437354,
    "CloudWatchMetrics": [
      {
        "Dimensions": [
          ["Partner"],
          ["vendor"]
        ],
        "Metrics": [
          {
            "Name": "requestCount",
            "Unit": "Count"
          },
          {
            "Name": "requestFailure",
            "Unit": "Count"
          },
          {
            "Name": "requestSuccess",
            "Unit": "Count"
          },
          {
            "Name": "responseTime",
            "Unit": "Milliseconds"
          }
        ],
        "Namespace": "NameSpace1"
      }
    ]
  },
  "requestCount": 1,
  "requestFailure": 1,
  "requestSuccess": 1,
  "responseTime": 2
}
Note that this will duplicate the metrics between the two dimensions (if the partner registers a failure, it will be registered on the vendor failure metric as well). If you need to avoid this, you can either:
have metric names specific to each type (like partnerRequestFailure and vendorRequestFailure), as sketched below,
or publish separate JSON documents, one for the partner and one for the vendor.
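A hedged sketch of the first option, reusing the EMF structure from the question with per-type metric names (the names themselves are hypothetical):

{
  "Partner": "partnerName",
  "vendor": "vendorName",
  "_aws": {
    "Timestamp": 1577179437354,
    "CloudWatchMetrics": [
      {
        "Namespace": "NameSpace1",
        "Dimensions": [["Partner"]],
        "Metrics": [
          { "Name": "partnerRequestCount", "Unit": "Count" },
          { "Name": "partnerRequestFailure", "Unit": "Count" },
          { "Name": "partnerResponseTime", "Unit": "Milliseconds" }
        ]
      },
      {
        "Namespace": "NameSpace1",
        "Dimensions": [["vendor"]],
        "Metrics": [
          { "Name": "vendorRequestCount", "Unit": "Count" },
          { "Name": "vendorRequestSuccess", "Unit": "Count" },
          { "Name": "vendorResponseTime", "Unit": "Milliseconds" }
        ]
      }
    ]
  },
  "partnerRequestCount": 1,
  "partnerRequestFailure": 1,
  "partnerResponseTime": 1,
  "vendorRequestCount": 2,
  "vendorRequestSuccess": 2,
  "vendorResponseTime": 2
}

Every top-level key is now unique, so the document stays valid JSON and each metric series is attributed to exactly one dimension.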
In a Zapier Zap I am using an API GET call to TSheets to grab a list of timesheets. I would like to split each timesheet out into line items, like the line items in a Xero invoice, because I would like to save the item data from each timesheet to its own row in a Google Sheet. (Ideally I would like to save the line data directly to a MySQL database, but I see that Zapier currently only supports saving multiple lines at a time to Google Sheets.) However, I am having no joy.
I suspect one of two issues:
1. Zapier expects the word "lineitems" in the response, or
2. the format of the response is not correct: I seem to have two "results" categories.
In my step to set up the Google Sheets spreadsheet row I don't get a selection of comma-separated items as shown in the example from this page: https://zapier.com/blog/formatter-line-item-automation/ (image captioned "Add an action app that supports line items, and each item will be saved individually"). For what I get, see this screenshot: https://cdn.zapier.com/storage/photos/f055dcf11a4b11b86f912f9032780429.png
In the step that returns the data from the API, the text response is shown in https://cdn.zapier.com/storage/photos/33129fb7425cfae44be4a81533d6e892.png
and if I return JSON data it looks like this: https://cdn.zapier.com/storage/photos/34da1b98f8941324c35befef8efe350d.png
Can anyone confirm whether my suspicions are correct, and whether 1 or 2 is the likely culprit?
Is it possible this link, Zapier - Catch Hook - JSON Array - Loop over each item in array, will lead me to the solution? It looks like it may, but I don't see exactly how the writer incorporated it into his Zap.
Edit: My data returned from the API looks like this:
{
  "results": {
    "timesheets": {
      "11515534": {
        "id": 11515534,
        "user_id": 1260679,
        "jobcode_id": 11974818,
        "start": "2018-07-13T14:58:00+10:00",
        "end": "2018-07-13T14:58:00+10:00",
        "duration": 0,
        "date": "2018-07-13",
        "tz": 10,
        "tz_str": "Australia\/Brisbane",
        "type": "regular",
        "location": "(Brisbane, Queensland, AU?)",
        "on_the_clock": false,
        "locked": 0,
        "notes": "",
        "customfields": {
          "118516": "",
          "121680": "",
          "118530": "",
          "118518": "Field supplies, materials"
        },
        "last_modified": "2018-07-13T04:59:27+00:00",
        "attached_files": []
      },
      "11515652": {
        "id": 11515652,
        "user_id": 1260679,
        "jobcode_id": 11974830,
        "start": "2018-07-13T14:59:00+10:00",
        "end": "2018-07-13T14:59:00+10:00",
        "duration": 0,
        "date": "2018-07-13",
        "tz": 10,
        "tz_str": "Australia\/Brisbane",
        "type": "regular",
        "location": "(Brisbane, Queensland, AU?)",
        "on_the_clock": false,
        "locked": 0,
        "notes": "",
        "customfields": {
          "118516": "",
          "121680": "",
          "118530": ""
        },
        "last_modified": "2018-07-13T05:00:30+00:00",
        "attached_files": []
      },
      "39799840": {
        "id": 39799840,
        "user_id": 1260679,
        "jobcode_id": 19280104,
        "start": "2018-10-24T11:45:00+11:00",
        "end": "2018-10-24T12:00:00+11:00",
        "duration": 900,
        "date": "2018-10-24",
        "tz": 11,
        "tz_str": "Australia\/Brisbane",
        "type": "regular",
        "location": "(Sydney, New South Wales, AU?)",
        "on_the_clock": false,
        "locked": 0,
        "notes": "",
        "customfields": {
          "118516": "",
          "121680": "FP - Field plant Installation",
          "118530": "Site cleanup"
        },
        "last_modified": "2018-10-24T05:56:27+00:00",
        "attached_files": []
      },
      "39801850": {
        "id": 39801850,
        "user_id": 1260679,
        "jobcode_id": 19280204,
        "start": "2018-10-24T12:00:00+11:00",
        "end": "2018-10-24T13:45:00+11:00",
        "duration": 6300,
        "date": "2018-10-24",
        "tz": 11,
        "tz_str": "Australia\/Brisbane",
        "type": "regular",
        "location": "(Sydney, New South Wales, AU?)",
        "on_the_clock": false,
        "locked": 0,
        "notes": "",
        "customfields": {
          "118516": "",
          "121680": "OP - Plant, Vehicles",
          "118530": "Load\/Unload"
        },
        "last_modified": "2018-10-24T05:57:04+00:00",
        "attached_files": []
      },
      "40192757": {
        "id": 40192757,
        "user_id": 1260679,
        "jobcode_id": 19280110,
        "start": "2018-10-25T08:00:00+11:00",
        "end": "2018-10-25T10:00:00+11:00",
        "duration": 7200,
        "date": "2018-10-25",
        "tz": 11,
        "tz_str": "Australia\/Brisbane",
        "type": "regular",
        "location": "TSheets Android App",
        "on_the_clock": false,
        "locked": 0,
        "notes": "From my mobile",
        "customfields": {
          "118516": "",
          "121680": "FW - Plant Assembly",
          "118530": "Panels"
        },
        "last_modified": "2018-10-24T23:02:56+00:00",
        "attached_files": []
      },
      "40193033": {
        "id": 40193033,
        "user_id": 1260679,
        "jobcode_id": 19280108,
        "start": "2018-10-25T10:00:00+11:00",
        "end": "2018-10-25T10:00:00+11:00",
        "duration": 0,
        "date": "2018-10-25",
        "tz": 11,
        "tz_str": "Australia\/Brisbane",
        "type": "regular",
        "location": "TSheets Android App",
        "on_the_clock": false,
        "locked": 0,
        "notes": "",
        "customfields": {
          "118516": "",
          "121680": "FW - Plant Assembly",
          "118530": "Panels"
        },
        "last_modified": "2018-10-24T23:06:05+00:00",
        "attached_files": []
      }
    }
  },
  "more": false
}
And this is my Python code: https://imgur.com/a/8W1X1em
Alright, so I think I've worked something out for you. The example you provided (Zapier - Catch Hook - JSON Array - Loop over each item in array) is definitely on the right track but, because it relies on webhooks, it probably won't work for you unless you can POST the data from your invoicing application.
Note: I code in Python, so my examples will be in Python; that said, these examples are pretty much language agnostic and can be replicated in JavaScript as well.
I set up a dummy Zap to replicate what is happening with your Zap currently:
# results = requests.get(url, headers=header)
# results = results.json()
# Dummy result data converted to JSON object after API GET request:
results = {
    "results": {
        "timesheets": {
            "timesheet_id_1": {
                "data_1": "data",
                "data_2": "data",
                "data_3": "data"
            },
            "timesheet_id_2": {
                "data_1": "data",
                "data_2": "data",
                "data_3": "data"
            },
            "timesheet_id_3": {
                "data_1": "data",
                "data_2": "data",
                "data_3": "data"
            }
        }
    }
}
return results
Reading a bit further here: in order for Zapier to map line items, it needs to receive the data in an array. The above output is a dictionary object. Zapier does map the values in this dictionary to data that can be accessed later, however it maps the entire dictionary, which is why you are seeing the output as multiple fields (replicated in my output). What you are looking to do is map a subset of the dictionary AND provide each subset as a separate output.
What you will want to do is loop through the inner fields of the results dictionary object and execute the Zap on each nested "timesheet_id_n". To do so we have to return a list of line items; as stated above, line items must be placed into an array. My code to achieve this looks like:
# results = requests.get(url, headers=header)
# results = results.json()
# Dummy result data converted to JSON object after API GET request:
results = {
    "results": {
        "timesheets": {
            "timesheet_id_1": {
                "data_1": "data",
                "data_2": "data",
                "data_3": "data"
            },
            "timesheet_id_2": {
                "data_1": "data",
                "data_2": "data",
                "data_3": "data"
            },
            "timesheet_id_3": {
                "data_1": "data",
                "data_2": "data",
                "data_3": "data"
            }
        }
    }
}
# Container for my line items. Each element in this list will be executed on separately.
return_results = []
results = results.get("results")
results = results.get("timesheets")
for item in results:
    return_results.append({"sheet_id": item, "sheet_data": results.get(item)})
return return_results
The output of return_results will be an array of dictionary objects. As these dictionary objects are in an array, Zapier will treat them as line items; additionally, because each line item is a dictionary object, Zapier will automatically map each value so that it can be used independently in later action steps. You can see this demonstrated in the output of my trigger Zap in the following screenshots:
output 1
output 2
output 3
Hope this helped!
Using the Twitter API, I can get tweets like this:
{
  "coordinates": null,
  "created_at": "Mon Sep 24 03:35:21 +0000 2012",
  "id_str": "250075927172759552",
  "entities": {
    "urls": [],
    "hashtags": [
      {
        "text": "freebandnames",
        "indices": [
          20,
          34
        ]
      }
    ],
    "user_mentions": []
  },
  "in_reply_to_user_id_str": null,
  "contributors": null,
  "text": "Aggressive Ponytail #freebandnames",
  "metadata": {
    "iso_language_code": "en",
    "result_type": "recent"
  },
  "retweet_count": 0,
  "user": {
    "profile_background_color": "C0DEED",
    "verified": false,
    "geo_enabled": true,
    "time_zone": "Pacific Time (US & Canada)",
    "description": "Born 330 Live 310",
    "default_profile_image": false,
    "profile_background_image_url": "http://a0.twimg.com/images/themes/theme1/bg.png",
    "statuses_count": 579,
    "friends_count": 110,
    "following": null,
    "show_all_inline_media": false,
    "screen_name": "sean_cummings"
  },
  "in_reply_to_screen_name": null,
  "source": "Twitter for Mac",
  "in_reply_to_status_id": null
}
You can see that this data is a natural fit for MongoDB; you can easily write it there as-is. I want to store this data in an SQL database like Oracle, and I don't know how to store the nested parts, like:
"entities": {
"urls": [
],
"hashtags": [
{
"text": "freebandnames",
"indices": [
20,
34
]
}
],
"user_mentions": [
]
Can you tell me how I should store such properties in Oracle? Should I create a new table for each nested property (which I am unwilling to do), or is there another way? Is there a magical way to store all the tweet data in one place, like it's done in NoSQL? Thanks.
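For contrast, writing this to MongoDB really is a one-liner. A minimal sketch in Python, assuming a local MongoDB instance and pymongo, with the tweet trimmed down from the JSON above:

from pymongo import MongoClient

# Assumes a MongoDB server on localhost; `tweet` is a dict parsed from the API JSON.
client = MongoClient("mongodb://localhost:27017")
db = client["twitter"]

tweet = {
    "id_str": "250075927172759552",
    "text": "Aggressive Ponytail #freebandnames",
    "entities": {"hashtags": [{"text": "freebandnames", "indices": [20, 34]}]},
}
db.tweets.insert_one(tweet)  # nested fields stored as-is, no flattening needed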