ExtJS 4: How to merge 2 similar records into one record - extjs4.1

I have a grid where I want to merge similar records into one record.
I don't want to group the records, I want to merge them.
I mean:
[
{
firstField: "record1",
commonField: "ABC",
fourthField: "4"
},
{
firstField: "record2",
commonField: "ABC",
fourthField: "5"
},
{
firstField: "record3",
commonField: "ABC",
fourthField: "6"
}
]
In the above JSON you can see there is a "commonField" that has the same text in every record.
So I have to show 1 record instead of 3 records, with the fourthField values added [4+5+6], so my final JSON should become:
[
{
firstField: "record1",
commonField: "ABC",
fourthField: "15"
}
]
Is there any way to achieve this? Like the addition we do in a summary row, can we perform a similar operation on records?
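As far as I know there is no built-in merge operation on a store, but you can collapse the duplicates yourself before the grid shows them. A minimal sketch, assuming an ExtJS 4 store whose model has the fields shown above; mergeByCommonField is an illustrative name, not framework API:

// Collapse records that share commonField, summing fourthField.
function mergeByCommonField(store) {
    var first = {},      // first record seen for each commonField value
        toRemove = [];
    store.each(function (record) {
        var key = record.get('commonField');
        if (first[key]) {
            // add this duplicate's fourthField onto the record we keep
            var sum = parseInt(first[key].get('fourthField'), 10) +
                      parseInt(record.get('fourthField'), 10);
            first[key].set('fourthField', String(sum));
            toRemove.push(record);
        } else {
            first[key] = record;
        }
    });
    store.remove(toRemove);  // one merged row per commonField remains
}

Calling it from the store's load listener, e.g. store.on('load', function (s) { mergeByCommonField(s); }), makes the grid render the merged rows.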

Related

BigQuery: Get field names of a STRUCT

I have some data in a STRUCT in BigQuery. Below I have visualised an example of the data as JSON:
{
...
siblings: {
david: { a: 1 },
sarah: { b: 1, c: 1 }
}
...
}
I want to produce a field from a query that resembles ["david", "sarah"]. Essentially I just want to get the keys from the STRUCT (object). Note that every user will have different key names in the siblings STRUCT.
Is this possible in BigQuery?
Thanks,
A
Your struct's schema must be consistent throughout the table. The keys can't change because they're part of the table schema. To get the keys, you simply take a look at the table schema.
If values change, they're probably values in an array - I guess you might have something like this:
WITH t AS (
SELECT 1 AS id, [STRUCT('david' AS name, 33 as age), ('sarah', 42)] AS siblings
UNION ALL
SELECT 2, [('ken', 19), ('ryu',21), ('chun li',23)]
)
SELECT * FROM t
If you tried to introduce new keys in the second row or within the array, you'd get an error Array elements of types {...} do not have a common supertype at ....
The first element of the above example, in JSON representation, looks like this:
{
"id": "1",
"siblings": [
{
"name": "david",
"age": "33"
},
{
"name": "sarah",
"age": "42"
}
]
}
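With that array-of-structs shape, the ["david", "sarah"] output the question asks for is just a subquery over the array; a sketch against the t defined above:

SELECT
  id,
  ARRAY(SELECT s.name FROM UNNEST(siblings) AS s) AS sibling_names
FROM t

For id 1 this returns sibling_names = ["david", "sarah"].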

Query for entire JSON document in nested JSON schema

Background:
I wish to locate the entire JSON document that has a condition where "state" = "new" and where length(Features.id) > 4
{
"id": "123"
"feedback": {
"Features": [
{
"state": "new"
"id": "12345"
}
]
}
}
This is what I have tried to do:
Since this is a nested document, my query looks like this:
A Stack Overflow member has helped me to access the nested contents within the query, but is there a way to obtain the full document?
I have used:
SELECT VALUE t.id FROM t IN f.feedback.Features WHERE t.state = 'new' AND length(t.id) > 4
This will give me the ids.
My desire is to have access to the full document, shown above, when this condition matches.
Any help is appreciated
Try this:
SELECT *
FROM f
WHERE
f.feedback.Features[0].state = 'new'
AND length(f.feedback.Features[0].id)>4
Here is the SELECT spec for CosmosDB for more details
https://learn.microsoft.com/en-us/azure/cosmos-db/sql-query-select
Also, check out "working with JSON" in CosmosDB notes
https://learn.microsoft.com/en-us/azure/cosmos-db/sql-query-working-with-json
If the Features array has more than one element, you can use the EXISTS expression to search within them. See the EXISTS spec here, with examples:
https://learn.microsoft.com/en-us/azure/cosmos-db/sql-query-subquery#exists-expression
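A sketch of that EXISTS variant, using the same container alias f as above; it returns the whole document whenever any element of Features matches:

SELECT VALUE f
FROM f
WHERE EXISTS (
    SELECT VALUE t
    FROM t IN f.feedback.Features
    WHERE t.state = 'new' AND length(t.id) > 4
)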

Search string and return matching substring MongoDB

I am working on a Golang project with MongoDB as the database. I have a collection with the following records:
[
{
"_id": 1,
"vals": [
"110",
"2211"
]
},
{
"_id": 1,
"vals": [
"Abcd",
"102"
]
}
]
I want to perform a search where, if I pass "11001", the first record is returned. But I have not found any solution for this. I have tried the following query:
db.getCollection('ColName').find({"vals":{"$regex": "^11001", "$options": "i"}})
The strings saved in the DB are shorter than the one I pass in the search. If I pass "110" or "11" it gives the result, but my requirement is different: I have the full string and need to match it against stored values of 2, 3, or 4 characters.
It is about regex.
db.getCollection('ColName').find({"vals":{"$regex": "^110(01)?", "$options": "i"}})
will work for you.
"?" in regex means match 0 or 1.

Postgres - query JSON column value of nested object

I'm using the following schema for the JSONB column (named fields) of my table. There are several of these field entries.
{
"FIELD_NAME": {
"value" : "FIELD_VALUE",
"meta": {
"indexable": true
}
}
}
I need to find all the fields that contain this object
"meta": {
"indexable": true
}
Here is a naive attempt at using jsonb_object_keys in the WHERE clause; it doesn't work, but it illustrates what I'm trying to do.
with entry(fields) as (values('{
"login": {
"value": "fred",
"meta": {
"indexable": true
}
},
"password_hash": {
"value": "88a3d1c7463d428f0c44fb22e2d9dc06732d1a4517abb57e2b8f734ce4ef2010",
"meta": {
"indexable": false
}
}
}'::jsonb))
select * from entry where fields->jsonb_object_keys(fields) #> '{"meta": {"indexable": "true"}}'::jsonb;
How can I query on the value of a nested object? Can I somehow join the result of jsonb_object_keys with the table itself?
demo:db<>fiddle
First way: using jsonb_each()
SELECT
jsonb_build_object(elem.key, elem.value) -- 3
FROM
entry,
jsonb_each(fields) as elem -- 1
WHERE
elem.value #> '{"meta": {"indexable": true}}' -- 2
1. Expand all subobjects into one row per field. This creates two columns: the key and the value (in your case login and {"meta": {"indexable": true}, "value": "fred"}).
2. Filter the records by checking the value column for containing the meta object, using the #> operator as you already mentioned.
3. Recreate the JSON object (combining the key/value columns).
Second way: Using jsonb_object_keys()
SELECT
jsonb_build_object(keys, fields -> keys) -- 3
FROM
entry,
jsonb_object_keys(fields) as keys -- 1
WHERE
fields -> keys #> '{"meta": {"indexable": true}}' -- 2
1. Finding all keys as you did.
2. and 3. are very similar to the first way.
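For the sample row, either query returns a single jsonb value, {"login": {"meta": {"indexable": true}, "value": "fred"}}: the password_hash field is filtered out because its indexable flag is false.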

Use sprintf syntax inside Logstash's sprintf syntax

For the below data structure:
{
"sprints": [
{
"id": 17193,
"name": "Sprint 12"
},
{
"id": 16510,
"name": "Sprint 11"
}
],
"velocityStatEntries": {
"16510": {
"estimated": {
"value": 49
},
"completed": {
"value": 36
}
},
"17193": {
"estimated": {
"value": 52
},
"completed": {
"value": 70
}
}
}
}
Given this, I want to be able to produce an Elasticsearch object that's easier to handle, by adding the values of the Estimated and Completed fields to the sprints with their matching IDs.
Ideally, I would like to handle this without writing Ruby, but I am not finding a Logstash-native solution that handles this scenario.
First, I split the data on the sprints field using the split filter, so I only have a single sprints object and can use [sprints][id] to know which sprint I'm processing.
Then, I have attempted to work with the mutate filter, in one of two ways:
- using merge to add the [velocityStatEntries][] object to the current sprint
- using add_field to add the two fields I need
Syntactically, is this possible? Ideally, I would want to be able to do a 'double substitution' of sorts, obtaining the estimated time for the current sprint with something like:
add_field => {
"estimatedTime" => "%{[velocityStatEntries][%{[sprints][id]}][estimated][value]}"
}
but this only seems to work with a hardcoded format such as "estimatedTime" => "%{[velocityStatEntries][1234][estimated][value]}"
Do I have to use a Ruby filter for this?
For what it's worth, the Ruby solution is very simple:
ruby {
code => "
# look up this sprint's estimated and completed values by its id
sprintId = event.get('[sprints][id]')
estimated = event.get('[velocityStatEntries][' + sprintId.to_s + '][estimated][value]')
completed = event.get('[velocityStatEntries][' + sprintId.to_s + '][completed][value]')
# attach them to the sprint object itself
event.set('[sprints][estimatedUnits]', estimated)
event.set('[sprints][completedUnits]', completed)
"
}
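As the question observed, the %{...} sprintf references don't seem to nest, so the lookup key has to be assembled in code. Since the earlier split filter leaves one sprints object per event, the two event.set calls can attach estimatedUnits and completedUnits directly to that sprint.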