JSONPath: How to filter the array by the length of a string field in its objects (size)

JSON
{
  "code": 0,
  "success": true,
  "errorMsg": null,
  "data": {
    "list": [
      {
        "contentId": 71687197,
        "name": "817店铺浏览",
        "createTime": "2021-08-17 11:41:27",
        "audiTime": "2021-08-17 12:04:04",
        "statusName": "使用中",
        "status": 1,
        "audiStatusName": "审核通过",
        "audiStatus": 0,
        "contentUrl": "http://market.wapa.taobao.com/app/retail-shop/rax-pi/pages/weex?wh_weex=true&sellerId=2211880772357&shopId=0&bizCode=mcloud",
        "pageId": "/wireless/decorate?pageId=186699",
        "templateType": "店铺浏览6秒模板",
        "reason": null,
        "tasks": "1629975754101010,1632711553519030,1630668279525055,1631758389579075,1631833251234003,1632367930442084,1631502134666091,1632640663561086,1631270789008004,1630556520622027,1632649453157023",
        "pages": [
          {
            "id": "42b76a87",
            "type": "close",
            "label": "关闭页面",
            "pageId": null,
            "content": null,
            "x": 537,
            "y": 222,
            "ftemplate": null,
            "pageName": null,
            "title": null,
            "component": null
          }
        ],
        "openScene": null,
        "appContentDetail": null,
        "accountType": null
      }
    ],
    "count": 86,
    "type": null,
    "bizTypes": null,
    "taskRules": null
  }
}
I want to filter based on the length of the tasks string.
These are my attempts:
$..data.list[?(@.tasks.size < 5000)]
$..data.list[?(@.tasks.size() < 5000)]
$..data.list[?(@.tasks.length < 5000)]
The following, however, does succeed in the JSONPath tool:
$..data.list[?(@.tasks.length() < 5000)]
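If the JSONPath implementation in use does not accept length() inside a filter, the same check can be done after parsing the response. Below is a minimal Kotlin sketch, assuming kotlinx-serialization-json and that the response shown above is held in a string named raw (both are my assumptions, not part of the question):

import kotlinx.serialization.json.*

// Sketch only: `raw` is assumed to hold the JSON response from the question.
val raw: String = TODO("the JSON response shown above")

val shortTaskEntries = Json.parseToJsonElement(raw)
    .jsonObject["data"]!!.jsonObject["list"]!!.jsonArray
    .filter { item ->
        // Keep entries whose tasks string is shorter than 5000 characters.
        val tasks = item.jsonObject["tasks"]?.jsonPrimitive?.contentOrNull
        (tasks?.length ?: 0) < 5000
    }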


How do I post a bulleted list using the Slack API?

Background
I am trying to use the Slack Bolt SDK along with the following dependencies:
// Slack bolt SDK
implementation("com.slack.api:bolt:1.8.1")
implementation("com.slack.api:bolt-servlet:1.8.1")
implementation("com.slack.api:bolt-jetty:1.8.1")
implementation("com.slack.api:slack-api-model-kotlin-extension:1.8.1")
implementation("com.slack.api:slack-api-client-kotlin-extension:1.8.1")
What I want to achieve (in Slack)
What I am currently getting (in Slack)
What I've tried so far
fun SlashCommandContext.sendSectionAndAck(
    message: String,
): Response {
    slack.methods(botToken).chatPostMessage { req ->
        req
            .channel(channelId)
            .blocks {
                section {
                    markdownText(message)
                }
            }
    }
    return ack()
}
It seems like the markdown is being formatted almost properly. The header and footer are both bold as intended, but for some reason, the bulleted list is not being formatted correctly. I have also tried replacing the * with - without any luck.
In my case, I can call the function with the following input:
val input = """
*Some header text in bold*
- item
- another item
*Some footer text also in bold*
"""
sendSectionAndAck(input)
What am I doing wrong?
The easiest workaround for this would be to use the '•' character itself in the text.
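For example, here is a minimal Kotlin sketch of that workaround, reusing the sendSectionAndAck extension from the question (the helper name toSlackBullets is mine, not part of any Slack SDK):

// Replace leading "- " markers with "•" so Slack renders them as visual bullets
// inside a plain mrkdwn section block.
fun toSlackBullets(markdown: String): String =
    markdown.lineSequence()
        .map { line ->
            val trimmed = line.trimStart()
            if (trimmed.startsWith("- ")) "• " + trimmed.removePrefix("- ") else line
        }
        .joinToString("\n")

// Usage with the input from the question:
// sendSectionAndAck(toSlackBullets(input))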
Slack also uses the following as part of the Block Kit message to render bullet points:
"text": "• test",
"blocks": [
{
"type": "rich_text",
"block_id": "erY",
"elements": [
{
"type": "rich_text_list",
"elements": [
{
"type": "rich_text_section",
"elements": [
{
"type": "text",
"text": "test"
}
]
}
],
"style": "bullet",
"indent": 0
}
]
}
Another reference:
https://superuser.com/questions/1282510/how-do-i-make-a-bullet-point-in-a-slack-message
A simple jq script to prefix a stream of lines read from stdin with bullets, for pasting into a Slack message:
jq -rR '"\u2022 \(.)"'

Use sprintf syntax inside logstash's sprintf syntax

For the below data structure:
{
  "sprints": [
    {
      "id": 17193,
      "name": "Sprint 12"
    },
    {
      "id": 16510,
      "name": "Sprint 11"
    }
  ],
  "velocityStatEntries": {
    "16510": {
      "estimated": {
        "value": 49
      },
      "completed": {
        "value": 36
      }
    },
    "17193": {
      "estimated": {
        "value": 52
      },
      "completed": {
        "value": 70
      }
    }
  }
}
Given this, I want to produce an Elasticsearch object that's easier to handle, by adding the values of the estimated and completed fields to the sprints with their matching IDs.
Ideally, I would like to handle this without writing Ruby, but I am not finding a Logstash-native solution that handles this scenario.
First, I split the data on the sprints field using the split filter, so I only have a single sprints object and can use [sprints][id] to know which sprint I'm processing.
Then, I have attempted to work with the mutate filter, in one of two ways:
- using merge to add the matching [velocityStatEntries][] object to the current sprint
- using add_field to add the two fields I need
Syntactically, is this possible? Ideally, I would want to be able to do a 'double substitution' of sorts, obtaining the estimated time for the current sprint with something like:
add_field => {
"estimatedTime" => "%{[velocityStatEntries][%{[sprints][id]}][estimated][value]}"
}
but this only seems to work with a hardcoded format such as "estimatedTime" => "%{[velocityStatEntries][1234][estimated][value]}"
Do I have to use the Ruby format for this?
For what it's worth, the Ruby solution is very simple:
ruby {
  code => "
    sprintId = event.get('[sprints][id]');
    estimated = event.get('[velocityStatEntries][' + sprintId.to_s + '][estimated][value]');
    completed = event.get('[velocityStatEntries][' + sprintId.to_s + '][completed][value]');
    event.set('[sprints][estimatedUnits]', estimated);
    event.set('[sprints][completedUnits]', completed);
  "
}

Keen IO: I can't delete a specific event using an extraction query filter

This extraction query (shown URL-decoded for readability):
https://api.keen.io/3.0/projects/xxx/queries/extraction?api_key=xxxx&event_collection=dispatched-orders&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800
returns:
{
  result: [
    {
      mobile: "13185716746",
      keen: {
        timestamp: "2015-02-10T07:10:07.816Z",
        created_at: "2015-02-10T07:10:08.725Z",
        id: "54d9aed03bc6964a7d311f9e"
      },
      data: {
        itemId: 2130,
        num: 1
      },
      features: {
        communityId: 2000,
        dispatcherId: 39,
        tradeId: 8581
      }
    }
  ]
}
But if I use the same filters in my delete query URL (shown URL-decoded for readability):
https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders?api_key=xxxxxx&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800
it returns:
{
  properties: {
    data.num: "num",
    keen.created_at: "datetime",
    mobile: "string",
    keen.id: "string",
    features.communityId: "num",
    features.dispatcherId: "num",
    keen.timestamp: "datetime",
    features.tradeId: "num",
    data.itemId: "num"
  }
}
Please help.
It looks like you are issuing a GET request for the delete command. If you perform a GET on a collection, you get back the schema that Keen has inferred for that collection.
You'll want to issue the above as a DELETE request. Here's the cURL command to do that:
curl -X DELETE 'https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders?api_key=xxxxxx&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800'
Note that you'll probably need to URL encode that JSON as you mentioned in your above post!
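For example, here is a minimal Kotlin sketch of that DELETE call with the filters JSON URL-encoded, using the JDK's built-in HttpClient (the project id and api_key placeholders are the ones from the question, not real values):

import java.net.URI
import java.net.URLEncoder
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.nio.charset.StandardCharsets

fun main() {
    val filters = """[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]"""
    // Encode the filters JSON so the query string stays valid.
    val url = "https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders" +
        "?api_key=xxxxxx" +
        "&filters=" + URLEncoder.encode(filters, StandardCharsets.UTF_8) +
        "&timezone=28800"

    val request = HttpRequest.newBuilder(URI.create(url)).DELETE().build()
    val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    println("${response.statusCode()} ${response.body()}")
}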

NodeJS JSON Array filtering

I have used Node to retrieve a set of results from SQL, and they're returned like this:
[
  {
    "event_id": 111111,
    "date_time": "2012-11-16T01:59:07.000Z",
    "agent_addr": "127.0.0.1",
    "priority": 6,
    "message": "aaaaaaaaa",
    "up_time": 9015040,
    "hostname": "bbbbbbb",
    "context": "ccccccc"
  },
  {
    "event_id": 111112,
    "date_time": "2012-11-16T01:59:07.000Z",
    "agent_addr": "127.0.0.1",
    "priority": 6,
    "message": "aaaaaaaaa",
    "up_time": 9015040,
    "hostname": "bbbbbbb",
    "context": "ddddddd"
  }
]
There are usually a lot of entries in the array and I need to efficiently filter the array to show only the entries that have a context of "ccccccc". I've tried a for loop, but it's incredibly slow.
Any suggestions?
There is a very simple way of doing that. If you want to do it in Node and don't want to use SQL for it, you can use JavaScript's built-in Array.filter function.
var output = arr.filter(function(x){ return x.context == "ccccccc"; }); // arr here is your results array
The output array will contain only the objects whose context is "ccccccc".
Another way of doing what Khurrum said is with an arrow function. It has the same result, but some people prefer that notation.
var output = arr.filter(x => x.context == "ccccccc" );
As suggested by Matt, why not include WHERE context = "ccccccc" in your SQL query?
Otherwise, if you must keep everything, maybe use one of the following to filter the results:
// Place all rows with a "ccccccc" context in an array
var ccccccc = [];
for (var i = results.length - 1; i >= 0; i--) {
  if (results[i].context == 'ccccccc')
    ccccccc.push(results[i]);
}
// Group the rows by context into named arrays within an object
var contexts = {};
for (var i = results.length - 1; i >= 0; i--) {
  if (typeof contexts[results[i].context] == 'undefined')
    contexts[results[i].context] = [];
  contexts[results[i].context].push(results[i]);
}
or use the underscore (or similar) filter function.

Unable to filter out n-shingle (n-gram) facets using the "exclude" words option provided in the "facets" query

I am trying to make a tag cloud of words and phrases using the facets feature of Elasticsearch.
My mapping:
curl -XPOST http://localhost:9200/myIndex/ -d '{
  ...
  "analysis": {
    "filter": {
      "myCustomShingle": {
        "type": "shingle",
        "max_shingle_size": 3,
        "output_unigrams": true
      }
    },
    "analyzer": { // making a custom analyzer
      "myAnalyzer": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": [
          "lowercase",
          "myCustomShingle",
          "stop"
        ]
      }
    }
  }
  ...
},
"mappings": {
  ...
  "description": { // the field to be analyzed for making the tag cloud
    "type": "string",
    "analyzer": "myAnalyzer",
    "null_value": "null"
  },
  ...
}
Query for generating facets:
curl -X POST "http://localhost:9200/myIndex/myType/_search?&pretty=true" -d '
{
  "size": "0",
  "query": {
    "match_all": {}
  },
  "facets": {
    "blah": {
      "terms": {
        "fields": ["description"],
        "exclude": ["evil"], // remove facets that contain these words
        "size": "50"
      }
    }
  }
}'
My problem is, when I insert a word, say 'evil', in the "exclude" option of "facets", it successfully removes the facets containing the words (or single shingles) that match 'evil'. But it doesn't remove the two- or three-word shingles "resident evil", "evil computer", "my evil cat". How do I remove the facets for phrases containing the "exclude" words?
It isn't completely clear what you want to achieve. You usually wouldn't make facets on analyzed fields. Maybe you could explain why you're making shingles, so that we can help you achieve what you want in a better way.
With the exclude facet parameter you can exclude some specific entry, but evil is not the same as resident evil. If you want to exclude it you need to specify it. Facets are made based on indexed terms, and resident evil is in fact a single term in the index, which is not the same as the term evil.
Given the choice that you already made for indexing and faceting, there is a way to achieve what you want. Elasticsearch has a really powerful scripting module. You can use a script to decide whether each entry should be included in the facet or not like this:
{
  "query": {
    "match_all": {}
  },
  "facets": {
    "tags": {
      "terms": {
        "field": "tags",
        "script": "term.contains('evil') ? false : true"
      }
    }
  }
}