How to Customize GROUP_CONCAT Separator in Sequelize? - sql

Assuming I have this Sequelize code below:
groupAccess.findAll({
  attributes: [
    'id',
    'group_name',
    'description',
    [sequelize.fn('GROUP_CONCAT', sequelize.col('module_name')), 'module_name']
  ],
  group: ['group_name']
})
The result of this query will be like:
id | group_name | description | module_name
----+--------------+---------------+---------------------------
1 | A | desc A | mod_1,mod_2,mod_3
4 | B | desc B | mod_4,mod_5,mod_6,mod_7
8 | C | desc C | mod_8,mod_9,mod_10
11 | D | desc D | mod_11,mod_12
and the Sequelize result will be like:
[
  {
    id: "1",
    group_name: "A",
    description: "desc A",
    module_name: "mod_1,mod_2,mod_3"
  },
  {
    id: "4",
    group_name: "B",
    description: "desc B",
    module_name: "mod_4,mod_5,mod_6,mod_7"
  },
  {
    id: "8",
    group_name: "C",
    description: "desc C",
    module_name: "mod_8,mod_9,mod_10"
  },
  {
    id: "11",
    group_name: "D",
    description: "desc D",
    module_name: "mod_11,mod_12"
  },
]
Problem: how do I change the commas to another separator (e.g. the dash-arrow -> symbol) in Sequelize?
Expected query result:
id | group_name | description | module_name
----+--------------+---------------+---------------------------
1 | A | desc A | mod_1->mod_2->mod_3
4 | B | desc B | mod_4->mod_5->mod_6->mod_7
8 | C | desc C | mod_8->mod_9->mod_10
11 | D | desc D | mod_11->mod_12
Expected Sequelize result:
[
  {
    id: "1",
    group_name: "A",
    description: "desc A",
    module_name: "mod_1->mod_2->mod_3"
  },
  {
    id: "4",
    group_name: "B",
    description: "desc B",
    module_name: "mod_4->mod_5->mod_6->mod_7"
  },
  {
    id: "8",
    group_name: "C",
    description: "desc C",
    module_name: "mod_8->mod_9->mod_10"
  },
  {
    id: "11",
    group_name: "D",
    description: "desc D",
    module_name: "mod_11->mod_12"
  },
]

Found this on GitHub (https://github.com/sequelize/sequelize/issues/3390#issuecomment-84649641); I'm not 100% sure it works (I'm not familiar with Sequelize), but you can try:
groupAccess.findAll({
  attributes: [
    'id',
    'group_name',
    'description',
    [sequelize.fn('GROUP_CONCAT', sequelize.literal(`module_name SEPARATOR '->'`)), 'module_name']
  ],
  group: ['group_name']
})
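For reference, the SQL that the literal above should generate looks roughly like this (GROUP_CONCAT ... SEPARATOR is MySQL/MariaDB syntax; the table name group_access is an assumption based on the model name):

```sql
-- MySQL/MariaDB: GROUP_CONCAT with a custom separator
-- (table name group_access is assumed from the model name)
SELECT
  id,
  group_name,
  description,
  GROUP_CONCAT(module_name SEPARATOR '->') AS module_name
FROM group_access
GROUP BY group_name;
```

Passing the expression through sequelize.literal is what lets the SEPARATOR clause reach the generated SQL unescaped.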

Related

MongoDB Aggregating counts with different conditions at once

I'm totally new to MongoDB.
I wonder if it is possible to aggregate counts with different conditions at once.
Say there is a collection like below:
_id | no | val
----+----+----
  1 |  1 | a
  2 |  2 | a
  3 |  3 | b
  4 |  4 | c
And I want result like below.
Value a : 2
Value b : 1
Value c : 1
How can I get this result all at once?
Thank you:)
db.collection.aggregate([
  { "$match": {} },
  {
    "$group": {
      "_id": "$val",
      "count": { "$sum": 1 }
    }
  },
  {
    "$project": {
      "field": {
        "$arrayToObject": [
          [ { k: { "$concat": [ "Value ", "$_id" ] }, v: "$count" } ]
        ]
      }
    }
  },
  { "$replaceWith": "$field" }
])
mongoplayground

BigQuery : best use of UNNEST Arrays

I really need some help. I have a big JSON file that I ingested into BigQuery, and I want to write a query that uses UNNEST twice. The data looks like this:
{
  "categories": [
    {
      "id": 1,
      "name": "C0",
      "properties": [
        {
          "name": "Property_1",
          "value": { "type": "String", "value": "11111" }
        },
        {
          "name": "Property_2",
          "value": { "type": "String", "value": "22222" }
        }
      ]
    }
  ]
}
And I want a query that gives me a result like this:
Category_ID | Name_ID | Property_1 | Property_2
------------+---------+------------+-----------
1           | C0      | 11111      | 22222
I already tried something like this, but it's not working:
SELECT
  c.id AS Category_ID,
  c.name AS Name_ID,
  p.value.value AS p.name
FROM `DataBase-xxxxxx` CROSS JOIN
  UNNEST(categories) AS c,
  UNNEST(c.properties) AS p;
Thank you 🙏
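For what it's worth, a sketch of a query that could produce that pivoted shape: a column alias cannot contain a dot (which is why p.value.value AS p.name fails), so turning Property_1 and Property_2 into columns needs conditional aggregation. This assumes the table and field names from the question and is untested:

```sql
-- Sketch: pivot the two known property names into columns
-- via conditional aggregation (an alias like p.name is invalid)
SELECT
  c.id AS Category_ID,
  c.name AS Name_ID,
  MAX(IF(p.name = 'Property_1', p.value.value, NULL)) AS Property_1,
  MAX(IF(p.name = 'Property_2', p.value.value, NULL)) AS Property_2
FROM `DataBase-xxxxxx`,
  UNNEST(categories) AS c,
  UNNEST(c.properties) AS p
GROUP BY c.id, c.name;
```

The comma between the table and each UNNEST is an implicit CROSS JOIN, so a separate CROSS JOIN keyword is not needed.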

Postgres combine 3 CTEs causes duplicate rows

I'm trying to combine 2 select queries on 2 different tables that share a foreign key, project_id, apply a condition, and return a single result set containing the project_id, a JSON array called sprints, and a JSON array called backlog. The output should look something like this:
{
  "id": "1920c79d-69d7-4b63-9662-ed5333e9b735",
  "name": "Test backend v1",
  "backlog_items": [
    {
      "id": "961b2438-a16b-4f30-83f1-723a05592d68",
      "name": "Another User Story 1",
      "type": "User Story",
      "backlog": true,
      "s3_link": null,
      "sprint_id": null
    },
    {
      "id": "a2d93017-ab87-4ec2-9589-71f6cebba936",
      "name": "New Comment",
      "type": "Comment",
      "backlog": true,
      "s3_link": null,
      "sprint_id": null
    }
  ],
  "sprints": [
    {
      "id": "1cd165c7-68f7-4a1d-b018-609989d62ed4",
      "name": "Test name 2",
      "sprint_items": [
        {
          "id": "1285825b-1669-40f2-96b8-de02ec80d8bd",
          "name": "As an admin I should be able to delete an organization",
          "type": "User Story",
          "backlog": false,
          "s3_link": null,
          "sprint_id": "1cd165c7-68f7-4a1d-b018-609989d62ed4"
        }
      ]
    },
    {
      "id": "1cd165c7-68f7-4a1d-b018-609989d62f44",
      "name": "Test name 1",
      "sprint_items": []
    }
  ]
}
In case there are no backlog items or sprints associated with the project_id, I want to return an empty list. I figured Postgres's COALESCE function might help here, but I'm not sure how to use it to achieve what I want.
Sprint table
id | end_date | start_date | project_id | name
--------------------------------------+----------+------------+--------------------------------------+-------------
1cd165c7-68f7-4a1d-b018-609989d62ed4 | | | 1920c79d-69d7-4b63-9662-ed5333e9b735 | Test name 2
Sprint item table
id | sprint_id | name | type | s3_link | backlog | project_id
--------------------------------------+--------------------------------------+--------------------------------------------------------+------------+---------+---------+--------------------------------------
961b2438-a16b-4f30-83f1-723a05592d68 | | Another User Story 1 | User Story | | t | 1920c79d-69d7-4b63-9662-ed5333e9b735
a2d93017-ab87-4ec2-9589-71f6cebba936 | | New Comment | Comment | | t | 1920c79d-69d7-4b63-9662-ed5333e9b735
1285825b-1669-40f2-96b8-de02ec80d8bd | 1cd165c7-68f7-4a1d-b018-609989d62ed4 | As an admin I should be able to delete an organization | User Story | | f | 1920c79d-69d7-4b63-9662-ed5333e9b735
This is the query I'm using right now, which returns multiple duplicates in the result set:
with si as (
  select si.id, si.name, si.backlog, si.project_id
  from sprint_items si
), s as (
  select s.id, s.name, s.project_id, jsonb_agg(to_jsonb(si) - 'project_id') as sprint_items
  from sprints s
  left join sprint_items si on si.sprint_id = s.id
  group by s.id, s.name, s.project_id
), p as (
  select p.id, p.name, jsonb_agg(to_jsonb(s) - 'project_id') as sprints,
         jsonb_agg(to_jsonb(case when si.backlog = true then si end) - 'project_id') as backlog_items
  from projects p
  left join s on s.project_id = p.id
  left join si on si.project_id = p.id
  group by p.id, p.name
)
select to_jsonb(p) from p
where p.id = '1920c79d-69d7-4b63-9662-ed5333e9b735'
Update:
This is what the above query produces; note how the sprints and sprint items are duplicated:
{
  "id": "1920c79d-69d7-4b63-9662-ed5333e9b735",
  "name": "Test backend v1",
  "sprints": [
    {
      "id": "1cd165c7-68f7-4a1d-b018-609989d62ed4",
      "name": "Test name 2",
      "sprint_items": [
        {
          "id": "1285825b-1669-40f2-96b8-de02ec80d8bd",
          "name": "As an admin I should be able to delete an organization",
          "type": "User Story",
          "backlog": false,
          "s3_link": null,
          "sprint_id": "1cd165c7-68f7-4a1d-b018-609989d62ed4"
        }
      ]
    },
    {
      "id": "1cd165c7-68f7-4a1d-b018-609989d62ed4",
      "name": "Test name 2",
      "sprint_items": [
        {
          "id": "1285825b-1669-40f2-96b8-de02ec80d8bd",
          "name": "As an admin I should be able to delete an organization",
          "type": "User Story",
          "backlog": false,
          "s3_link": null,
          "sprint_id": "1cd165c7-68f7-4a1d-b018-609989d62ed4"
        }
      ]
    },
    {
      "id": "1cd165c7-68f7-4a1d-b018-609989d62ed4",
      "name": "Test name 2",
      "sprint_items": [
        {
          "id": "1285825b-1669-40f2-96b8-de02ec80d8bd",
          "name": "As an admin I should be able to delete an organization",
          "type": "User Story",
          "backlog": false,
          "s3_link": null,
          "sprint_id": "1cd165c7-68f7-4a1d-b018-609989d62ed4"
        }
      ]
    },
    {
      "id": "1cd165c7-68f7-4a1d-b018-609989d62f44",
      "name": "Test name 1",
      "sprint_items": [
        null
      ]
    },
    {
      "id": "1cd165c7-68f7-4a1d-b018-609989d62f44",
      "name": "Test name 1",
      "sprint_items": [
        null
      ]
    },
    {
      "id": "1cd165c7-68f7-4a1d-b018-609989d62f44",
      "name": "Test name 1",
      "sprint_items": [
        null
      ]
    }
  ],
  "backlog_items": [
    null,
    {
      "id": "961b2438-a16b-4f30-83f1-723a05592d68",
      "name": "Another User Story 1",
      "backlog": true
    },
    {
      "id": "a2d93017-ab87-4ec2-9589-71f6cebba936",
      "name": "New Comment",
      "backlog": true
    },
    null,
    {
      "id": "961b2438-a16b-4f30-83f1-723a05592d68",
      "name": "Another User Story 1",
      "backlog": true
    },
    {
      "id": "a2d93017-ab87-4ec2-9589-71f6cebba936",
      "name": "New Comment",
      "backlog": true
    }
  ]
}
Any pointers to what functions I should read up on would be greatly appreciated.
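The duplicates come from joining two one-to-many relations (s and si) onto projects at once, which multiplies the rows before aggregation. One way to avoid that is to collapse each CTE to a single row per project_id first, so the final joins are one-to-one; COALESCE then supplies the empty arrays. A sketch against the tables above (untested):

```sql
-- Sketch: aggregate to one row per project before joining,
-- so projects x sprints x backlog items no longer multiplies rows
with s as (
  select x.project_id, jsonb_agg(to_jsonb(x) - 'project_id') as sprints
  from (
    select s.id, s.name, s.project_id,
           coalesce(jsonb_agg(to_jsonb(si) - 'project_id')
                      filter (where si.id is not null), '[]'::jsonb) as sprint_items
    from sprints s
    left join sprint_items si on si.sprint_id = s.id
    group by s.id, s.name, s.project_id
  ) x
  group by x.project_id
), b as (
  select si.project_id, jsonb_agg(to_jsonb(si) - 'project_id') as backlog_items
  from sprint_items si
  where si.backlog
  group by si.project_id
)
select to_jsonb(p) || jsonb_build_object(
         'sprints',       coalesce(s.sprints, '[]'::jsonb),
         'backlog_items', coalesce(b.backlog_items, '[]'::jsonb))
from projects p
left join s on s.project_id = p.id
left join b on b.project_id = p.id
where p.id = '1920c79d-69d7-4b63-9662-ed5333e9b735';
```

The FILTER clause keeps sprints without items as [] instead of [null], and the COALESCE calls in the final select cover projects with no sprints or no backlog items at all.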

How to sum of jsonb inner field in postgres?

I want to sum the pc field from table tbl_test, grouped by metal_id.
I tried this: Postgres GROUP BY on jsonb inner field
but it didn't work for me, as I have a different JSON format.
tbl_test
id | json
---+--------------------------------------------------------------
 1 | [{"pc": "4", "metal_id": "1"}, {"pc": "4", "metal_id": "5"}]
 2 | [{"pc": "1", "metal_id": "1"}, {"pc": "1", "metal_id": "2"}]
The output I want is (group by metal_id and sum of pc):
[
  { "metal_id": "1", "pc": "5" }
]
Thanks in advance!
You can use jsonb_array_elements() to expand the jsonb array, and then aggregate by metal id:
select obj ->> 'metal_id' metal_id, sum((obj ->> 'pc')::int) cnt
from tbl_test t
cross join lateral jsonb_array_elements(t.json) j(obj)
group by obj ->> 'metal_id'
order by metal_id
Demo on DB Fiddle:
metal_id | cnt
:------- | --:
1 | 5
2 | 1
5 | 4

How to setup properties on CSV import in OrientDB?

I have a CSV file like:
FN       | MI | LN     | ADDR          | CITY      | ZIP        | GENDER
---------+----+--------+---------------+-----------+------------+-------
Patricia |    | Faddar | 7063 Carr xxx | Carolina  | 00979-7033 | F
Lui      | E  | Baves  | PO Box xxx    | Boqueron  | 00622-1240 | F
Janine   | S  | Perez  | 25 Calle xxx  | Salinas   | 00751-3332 | F
Rose     |    | Mary   | 229 Calle xxx | Aguadilla | 00603-5536 | F
And I am importing it into OrientDB like:
{
  "source": { "file": { "path": "/sample.csv" } },
  "extractor": { "csv": {} },
  "transformers": [
    { "vertex": { "class": "Users" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:/orientdb/databases/test",
      "dbType": "graph",
      "classes": [
        { "name": "Users", "extends": "V" }
      ]
    }
  }
}
I would like to set up the import so that it creates properties: FN becomes first_name, MI becomes middle_name, and so on, and also sets some values to lowercase, e.g. Carolina becomes carolina.
I could probably make these changes from the schema once the data is added. My reason for doing it here is that I have multiple CSV files and I want to keep the same schema for all of them.
Any ideas?
To rename a field, take a look at the Field transformer:
http://orientdb.com/docs/last/Transformer.html#field-transformer
For example, to rename the field salary to remuneration:
{ "field": {
    "fieldName": "remuneration",
    "expression": "salary"
  }
},
{ "field": {
    "fieldName": "salary",
    "operation": "remove"
  }
}
In the same way, you can apply toLowerCase() to a property:
{field: {fieldName:'name', expression: '$input.name.toLowerCase()'}}
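Putting both pieces together for the CSV above, the transformers section might look something like this (a sketch, untested; the target names first_name and middle_name come from your question):

```json
"transformers": [
  { "vertex": { "class": "Users" } },
  { "field": { "fieldName": "first_name", "expression": "FN" } },
  { "field": { "fieldName": "FN", "operation": "remove" } },
  { "field": { "fieldName": "middle_name", "expression": "MI" } },
  { "field": { "fieldName": "MI", "operation": "remove" } },
  { "field": { "fieldName": "city", "expression": "$input.CITY.toLowerCase()" } },
  { "field": { "fieldName": "CITY", "operation": "remove" } }
]
```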
Try it and let me know if it works.