UPDATE
I have an unusual case where my SQL database gives me data like this:
+------+-------+-------+-------+-------+
| LVL  | LVL_1 | LVL_2 | LVL_3 | LVL_4 |
+------+-------+-------+-------+-------+
| PHIL | NULL  | NULL  | NULL  | NULL  |
| PHIL | BOB   | NULL  | NULL  | NULL  |
| PHIL | BOB   | BILL  | NULL  | NULL  |
| PHIL | BOB   | BILL  | JEN   | NULL  |
| PHIL | BOB   | BILL  | JEN   | JOE   |
+------+-------+-------+-------+-------+
The last LVL column that contains a name represents the person.
For example, this represents PHIL
| PHIL | NULL  | NULL  | NULL  | NULL  |
And this represents JEN
| PHIL | BOB   | BILL  | JEN   | NULL  |
And this represents JOE (since he is the last level)
| PHIL | BOB   | BILL  | JEN   | JOE   |
My ultimate goal is to return this data into a JSON tree structure from ColdFusion like this when I query for 'PHIL':
{
  name: 'PHIL',
  parent: NULL,
  level: 0,
  groups: [
    {
      name: 'BOB',
      parent: 'PHIL',
      level: 1,
      groups: [
        {
          name: 'BILL',
          parent: 'BOB',
          level: 2,
          groups: [
            {
              name: 'JEN',
              parent: 'BILL',
              level: 3,
              groups: [
                {
                  name: 'JOE',
                  parent: 'JEN',
                  level: 4,
                  groups: []
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
If I query for 'BILL', I can only see the tree data below him, like this:
{
  name: 'BILL',
  parent: 'BOB',
  level: 2,
  groups: [
    {
      name: 'JEN',
      parent: 'BILL',
      level: 3,
      groups: [
        {
          name: 'JOE',
          parent: 'JEN',
          level: 4,
          groups: []
        }
      ]
    }
  ]
}
I'd like to write some SQL that can produce a tree structure from this data. If that's not possible, I'd like to at least reformat (with SQL) the original data into:
+------+--------+
| NAME | PARENT |
+------+--------+
| PHIL | NULL   |
| BOB  | PHIL   |
| BILL | BOB    |
| JEN  | BILL   |
| JOE  | JEN    |
+------+--------+
So I can then restructure it into tree data using ColdFusion, following this tutorial: http://www.bennadel.com/blog/1069-ask-ben-simple-recursion-example.htm
Is this possible? Can somebody help me with this?
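The NAME/PARENT reshaping described above is mechanical: in each row, the person is the last non-NULL level column, and the parent is the column just before it. As a fallback if the SQL proves awkward, here is a minimal Python sketch of that rule (the row layout and None-for-NULL convention are assumptions for illustration, not part of the question):

```python
def to_name_parent(rows):
    """Reduce each row of level columns to a (name, parent) pair.

    The person is the last non-None value in the row; the parent is
    the value just before it (None for the root).
    """
    pairs = []
    for row in rows:
        levels = [v for v in row if v is not None]
        name = levels[-1]
        parent = levels[-2] if len(levels) > 1 else None
        pairs.append((name, parent))
    return pairs

rows = [
    ["PHIL", None, None, None, None],
    ["PHIL", "BOB", None, None, None],
    ["PHIL", "BOB", "BILL", None, None],
    ["PHIL", "BOB", "BILL", "JEN", None],
    ["PHIL", "BOB", "BILL", "JEN", "JOE"],
]
print(to_name_parent(rows))
# [('PHIL', None), ('BOB', 'PHIL'), ('BILL', 'BOB'), ('JEN', 'BILL'), ('JOE', 'JEN')]
```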
<cfscript>
q = queryNew("LTM,LTM_1,LTM_2,LTM_3,LTM_4");
queryAddRow(q);
QuerySetCell(q, "LTM", "OSTAPOWER");
QuerySetCell(q, "LTM_1", "VENKAT");
QuerySetCell(q, "LTM_2", "LYNN");
QuerySetCell(q, "LTM_3", "SMITH");
QuerySetCell(q, "LTM_4", "HARTLEY");
queryAddRow(q);
QuerySetCell(q, "LTM", "OSTAPOWER");
QuerySetCell(q, "LTM_1", "VENKAT");
QuerySetCell(q, "LTM_2", "LYNN");
QuerySetCell(q, "LTM_3", "SMITH");
QuerySetCell(q, "LTM_4", "SHREVE");
function collect(q) {
    var data = {};
    for (var row in q)
    {
        var varName = "data";
        for (var i = 0; i <= 4; i++)
        {
            var col = i == 0 ? "LTM" : "LTM_#i#";
            var name = row[col];
            if (len(name))
                varName = listAppend(varName, name, ".");
            else
                break;
        }
        setVariable(varName, {});
    }
    return data;
}

function transform(tree, nodeName, level=0, parent="")
{
    if (structIsEmpty(tree))
        return "";
    var node = {
        'name': nodeName,
        'parent': len(parent) ? parent : javacast("null",""),
        'level': javacast("int", level),
        'groups': []
    };
    var branch = tree[nodeName];
    for (var child in branch)
        arrayAppend(node.groups, transform(branch, child, level+1, nodeName));
    return node;
}
c=collect(q);
writeDump(transform(c,'OSTAPOWER'));
</cfscript>
Run it: http://www.trycf.com/scratch-pad/pastebin?id=c8YMvGXG
Then just serializeJSON() the result returned from transform().
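For readers more comfortable outside CFML, the same two-phase idea (first build a nested struct keyed by name, then walk it into name/parent/level/groups nodes) can be sketched in Python. This is a loose translation under assumed inputs, not the author's code:

```python
def collect(rows):
    """Build a nested dict: each key is a name, each value its subtree."""
    data = {}
    for row in rows:
        node = data
        for name in row:
            if name is None:
                break
            node = node.setdefault(name, {})
    return data

def transform(tree, name, level=0, parent=None):
    """Walk the nested dict into {name, parent, level, groups} nodes."""
    return {
        "name": name,
        "parent": parent,
        "level": level,
        "groups": [transform(tree[name], child, level + 1, name)
                   for child in tree[name]],
    }

rows = [
    ["PHIL", None, None, None, None],
    ["PHIL", "BOB", None, None, None],
    ["PHIL", "BOB", "BILL", None, None],
]
print(transform(collect(rows), "PHIL"))
```

As in the CFML version, the result is ready to serialize straight to JSON.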
I'm trying to convert a few columns in a Snowflake table into nested JSON. I have tried OBJECT_CONSTRUCT and ARRAY_CONSTRUCT, but I am unable to create the nested JSON I need.
Input:
+-----+-----------+-------------------------+-----------+-------------------------+
| id  | product_1 | product_1_purchase_date | product_2 | product_2_purchase_date |
+-----+-----------+-------------------------+-----------+-------------------------+
| 100 | XCTMR     | 01/02/2003              | IOPWER    | 01/02/2005              |
| 200 | AQWYU     | 11/20/2016              | XCTMR     | 09/09/2021              |
+-----+-----------+-------------------------+-----------+-------------------------+
Output:
+-----+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id  | json_combined                                                                                                                                                    |
+-----+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 100 | [ { "product_1": { "name": "XCTMR", "product_1_purchase_date": "01/02/2003" } }, { "product_2": { "name": "IOPWER", "product_2_purchase_date": "01/02/2005" } } ] |
| 200 | [ { "product_1": { "name": "AQWYU", "product_1_purchase_date": "11/20/2016" } }, { "product_2": { "name": "XCTMR", "product_2_purchase_date": "09/09/2021" } } ] |
+-----+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Here is a solution with hard-coded field names; you just need to nest the ARRAY_CONSTRUCT and OBJECT_CONSTRUCT calls in the proper order.
with cte(id,product_1,product_1_purchase_date,product_2,product_2_purchase_date) as
(select * from values
  (100,'XCTMR','01/02/2003','IOPWER','01/02/2005'),
  (200,'AQWYU','11/20/2016','XCTMR','09/09/2021')
)
select
  id,
  array_construct(
    object_construct('product_1',
      object_construct('name', product_1,
                       'product_1_purchase_date', product_1_purchase_date)),
    object_construct('product_2',
      object_construct('name', product_2,
                       'product_2_purchase_date', product_2_purchase_date))
  ) as JSON_COMBINED
from cte;
+-----+-----------------------------------------------+
| ID  | JSON_COMBINED                                 |
|-----+-----------------------------------------------|
| 100 | [                                             |
|     |   {                                           |
|     |     "product_1": {                            |
|     |       "name": "XCTMR",                        |
|     |       "product_1_purchase_date": "01/02/2003" |
|     |     }                                         |
|     |   },                                          |
|     |   {                                           |
|     |     "product_2": {                            |
|     |       "name": "IOPWER",                       |
|     |       "product_2_purchase_date": "01/02/2005" |
|     |     }                                         |
|     |   }                                           |
|     | ]                                             |
| 200 | [                                             |
|     |   {                                           |
|     |     "product_1": {                            |
|     |       "name": "AQWYU",                        |
|     |       "product_1_purchase_date": "11/20/2016" |
|     |     }                                         |
|     |   },                                          |
|     |   {                                           |
|     |     "product_2": {                            |
|     |       "name": "XCTMR",                        |
|     |       "product_2_purchase_date": "09/09/2021" |
|     |     }                                         |
|     |   }                                           |
|     | ]                                             |
+-----+-----------------------------------------------+
Below is a version with more dynamism.

CTE is pure data.

CTE_1 creates two pseudo-columns: one for the field name product_x, and another (rn) to be used later as the grouping criterion in OBJECT_AGG. We cannot use an aggregate function inside another aggregate function, so we need to aggregate multiple times (twice here). Change the divide-by-2 in this CTE to match the number of product_x columns; a sub-query could compute that number, but for the purpose of this solution I've left it hard-coded. CTE_1 also does the main job of UNPIVOTing the product-related columns into rows for dynamic grouping.

CTE_2 is aggregation number 1, with output like below (truncated). The final query does the main ARRAY_AGG.
+-----+-----------+----------------------------------------------+
| ID  | RN        | JSON_COMBINED_1                              |
|-----+-----------+----------------------------------------------|
| 100 | product_1 | {                                            |
|     |           |   "product_1": {                             |
|     |           |     "PRODUCT_1_PURCHASE_DATE": "01/02/2003", |
|     |           |     "name": "XCTMR"                          |
|     |           |   }                                          |
|     |           | }                                            |
| 200 | product_2 | {                                            |
|     |           |   "product_2": {                             |
|     |           |     "PRODUCT_2_PURCHASE_DATE": "09/09/2021", |
|     |           |     "name": "XCTMR"                          |
|     |           |   }                                          |
|     |           | }                                            |
+-----+-----------+----------------------------------------------+
Main query -
with cte(id,product_1,product_1_purchase_date,product_2,product_2_purchase_date) as
(select * from values
  (100,'XCTMR','01/02/2003','IOPWER','01/02/2005'),
  (200,'AQWYU','11/20/2016','XCTMR','09/09/2021')
), cte_1 as (
  select id,
    case when regexp_like(product_field,'product_[[:digit:]]+','i')
         then 'name' else product_field end p_field,
    product_val,
    concat('product_',
           to_char(ceil(row_number() over (partition by id order by null)/2))) rn
  from cte
  unpivot (product_val for product_field in (product_1, product_1_purchase_date,
                                             product_2, product_2_purchase_date))
), cte_2 as (
  select id, rn,
    object_construct(rn, object_agg(p_field, to_variant(product_val))) JSON_COMBINED_1
  from cte_1
  group by id, rn
)
select id, array_agg(JSON_COMBINED_1) JSON_COMBINED
from cte_2
group by id
order by id;
Final output from the above (dynamic) query:
+-----+------------------------------------------------+
| ID  | JSON_COMBINED                                  |
|-----+------------------------------------------------|
| 100 | [                                              |
|     |   {                                            |
|     |     "product_1": {                             |
|     |       "PRODUCT_1_PURCHASE_DATE": "01/02/2003", |
|     |       "name": "XCTMR"                          |
|     |     }                                          |
|     |   },                                           |
|     |   {                                            |
|     |     "product_2": {                             |
|     |       "PRODUCT_2_PURCHASE_DATE": "01/02/2005", |
|     |       "name": "IOPWER"                         |
|     |     }                                          |
|     |   }                                            |
|     | ]                                              |
| 200 | [                                              |
|     |   {                                            |
|     |     "product_2": {                             |
|     |       "PRODUCT_2_PURCHASE_DATE": "09/09/2021", |
|     |       "name": "XCTMR"                          |
|     |     }                                          |
|     |   },                                           |
|     |   {                                            |
|     |     "product_1": {                             |
|     |       "PRODUCT_1_PURCHASE_DATE": "11/20/2016", |
|     |       "name": "AQWYU"                          |
|     |     }                                          |
|     |   }                                            |
|     | ]                                              |
+-----+------------------------------------------------+
2 Row(s) produced.
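The unpivot-then-group trick above is easier to see outside SQL. Here is a rough Python equivalent of one row's journey through the pipeline (hypothetical helper, just to illustrate the pairing logic): each (column, value) pair is assigned to group product_<ceil(position/2)>, the date column keeps its own name while the product column becomes "name", and the groups are then collected into an array.

```python
import math

def combine(row, product_cols):
    """Mimic the UNPIVOT + OBJECT_AGG + ARRAY_AGG pipeline for one row."""
    groups = {}
    for i, col in enumerate(product_cols, start=1):
        group = f"product_{math.ceil(i / 2)}"  # the rn pseudo-column
        # date columns keep their name; the bare product column becomes "name"
        field = col if col.endswith("_purchase_date") else "name"
        groups.setdefault(group, {})[field] = row[col]
    return [{g: fields} for g, fields in groups.items()]

row = {"product_1": "XCTMR", "product_1_purchase_date": "01/02/2003",
       "product_2": "IOPWER", "product_2_purchase_date": "01/02/2005"}
cols = ["product_1", "product_1_purchase_date",
        "product_2", "product_2_purchase_date"]
print(combine(row, cols))
```

Note the sketch keeps the date keys lowercase, whereas the SQL output uppercases them (unquoted identifiers in Snowflake fold to uppercase).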
I'm totally new to MongoDB.
I wonder if it is possible to aggregate counts with different conditions at once.
For example, given a collection like below:
_id | no | val
----|----|----
 1  | 1  | a
 2  | 2  | a
 3  | 3  | b
 4  | 4  | c
And I want result like below.
Value a : 2
Value b : 1
Value c : 1
How can I get this result all at once?
Thank you:)
db.collection.aggregate([
  { "$match": {} },
  {
    "$group": {
      "_id": "$val",
      "count": { "$sum": 1 }
    }
  },
  {
    "$project": {
      "field": {
        "$arrayToObject": [
          [ { k: { "$concat": [ "Value ", "$_id" ] }, v: "$count" } ]
        ]
      }
    }
  },
  {
    "$replaceWith": "$field"
  }
])
mongoplayground
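If you ever need the same counts application-side instead, the $group stage boils down to a simple counter. A minimal Python sketch (the docs list is a stand-in for the collection, not pymongo code):

```python
from collections import Counter

docs = [
    {"_id": 1, "no": 1, "val": "a"},
    {"_id": 2, "no": 2, "val": "a"},
    {"_id": 3, "no": 3, "val": "b"},
    {"_id": 4, "no": 4, "val": "c"},
]

# Count occurrences of each val, then build the "Value x" labels
counts = Counter(d["val"] for d in docs)
result = {f"Value {val}": n for val, n in counts.items()}
print(result)
# {'Value a': 2, 'Value b': 1, 'Value c': 1}
```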
I'm currently trying to dynamically obtain information about the structure of my database for a REST API endpoint.
At first I already had a query that provides me with such data:
SELECT
table_catalog as catalog,
table_schema as schema,
table_name as name
FROM
information_schema.tables as tables
WHERE
table_type != 'VIEW'
AND
table_schema NOT IN ('information_schema', 'pg_catalog')
ORDER by
table_catalog ASC,
table_schema ASC,
table_name ASC
Output:
catalog   |schema          |name                       |
----------|----------------|---------------------------|
foo       |account         |hubspot_owners             |
foo       |account         |invitation                 |
foo       |account         |organizations              |
foo       |account         |password_recovery          |
foo       |account         |users                      |
foo       |categorytree    |category                   |
foo       |changelog       |change                     |
foo       |common          |cities                     |
foo       |common          |countries                  |
foo       |common          |industries                 |
foo       |common          |language                   |
foo       |common          |places                     |
foo       |filesystem      |files                      |
foo       |public          |casbin_rule                |
foo       |public          |schema_migrations          |
foo       |scorecard       |candidates                 |
foo       |scorecard       |candidates_skills          |
foo       |scorecard       |dimensions                 |
foo       |scorecard       |interviewers               |
foo       |scorecard       |interviewers_verticals     |
foo       |scorecard       |interviews                 |
foo       |scorecard       |interviews_questions       |
foo       |scorecard       |interviews_skills          |
foo       |scorecard       |questions                  |
foo       |scorecard       |questions_skills           |
foo       |scorecard       |questions_verticals        |
foo       |scorecard       |skills                     |
foo       |scorecard       |sub_dimensions             |
foo       |scorecard       |sub_dimensions_verticals   |
foo       |scorecard       |verticals                  |
foo       |supplybooster   |contributions_contributors |
foo       |supplybooster   |contributions_files        |
foo       |supplybooster   |contributions_repositories |
foo       |supplybooster   |contributions_summary      |
foo       |supplybooster   |scroll                     |
foo       |supplyboosterapi|assignees                  |
foo       |supplyboosterapi|contributor_notes          |
foo       |supplyboosterapi|contributor_references     |
foo       |supplyconnect   |profile                    |
foo       |supplyconnect   |profile_active_availability|
foo       |supplyconnect   |profile_badge              |
foo       |supplyconnect   |profile_communication      |
foo       |supplyconnect   |profile_education          |
foo       |supplyconnect   |profile_experience         |
foo       |supplyconnect   |profile_focus_role         |
foo       |supplyconnect   |profile_portfolio          |
foo       |supplyconnect   |profile_skill              |
foo       |supplyconnect   |profile_spoken_language    |
Now I would like to know whether it's possible to obtain these same results nested according to their own hierarchy (catalog -> schema -> table).
Desired output:
[
  {
    label: "foo",
    schemas: [
      {
        label: 'account',
        tables: [
          { label: 'hubspot_owners' },
          { label: 'invitation' },
          { label: 'organizations' },
          { label: 'password_recovery' },
          { label: 'users' }
        ]
      },
      {
        label: 'categorytree',
        tables: [
          { label: 'category' }
        ]
      },
      {
        label: 'changelog',
        tables: [
          { label: 'change' }
        ]
      },
      {
        label: 'common',
        tables: [
          { label: 'cities' },
          { label: 'countries' },
          { label: 'industries' },
          { label: 'language' },
          { label: 'places' }
        ]
      }
    ]
  }
]
I know I could do this transformation after retrieving the data from the DB, but I was wondering if I could take advantage of functions like array_to_json, array_agg, row_to_json, etc. to do it entirely in SQL.
The furthest I've managed to get is the query below, and I already had to add a limit clause that shouldn't be needed.
select
  array_to_json(
    array_agg(
      row_to_json(catalogs)
    )
  ) as data
from
(
  select distinct
    tables.table_catalog as label,
    (
      select *
      from (
        select (tables.table_schema)
        from information_schema.tables as tables
        limit 1
      ) as lol
    ) as schemas
  from information_schema.tables as tables
  where
    tables.table_type != 'VIEW'
    and tables.table_schema not in ('information_schema', 'pg_catalog')
) as catalogs
Output:
data |
-------------------------------------------|
[{"label":"foo","schemas":"public"}] |
Is there any succinct way to achieve this? And even if there is, would it be advisable from a performance point of view?
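For comparison, doing the nesting after retrieving the flat rows is only a few lines of application code. A hypothetical Python sketch using itertools.groupby (the rows must already be sorted by catalog, schema, and name, which the query's ORDER BY guarantees):

```python
from itertools import groupby
from operator import itemgetter

def nest(rows):
    """rows: sorted list of (catalog, schema, table) tuples."""
    out = []
    for catalog, by_catalog in groupby(rows, key=itemgetter(0)):
        schemas = []
        for schema, by_schema in groupby(by_catalog, key=itemgetter(1)):
            schemas.append({"label": schema,
                            "tables": [{"label": t} for _, _, t in by_schema]})
        out.append({"label": catalog, "schemas": schemas})
    return out

rows = [("foo", "account", "users"),
        ("foo", "categorytree", "category"),
        ("foo", "changelog", "change")]
print(nest(rows))
```

This is often the pragmatic answer: the SQL-only versions with nested json_agg subqueries are harder to read, and the row counts from information_schema are small enough that doing it in the application costs essentially nothing.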
I have Collection of Category.
With App\Category::all() I get:

ID | PARENT_ID | NAME    | DEPTH
---|-----------|---------|------
1  | 0         | parent1 | 0
2  | 0         | parent2 | 0
3  | 1         | child1  | 1
4  | 2         | child2  | 1
How can I add a custom attribute (a column or method result) to my collection?
The result I want when I write e.g. $categories = Category::with('childs'); is:

ID | PARENT_ID | NAME    | DEPTH | CHILDS
---|-----------|---------|-------|----------------------------
1  | 0         | parent1 | 0     | {3 | 1 | child1 | 1 | NULL}
2  | 0         | parent2 | 0     | {4 | 2 | child2 | 1 | NULL}
3  | 1         | child1  | 1     | NULL
4  | 2         | child2  | 1     | NULL
I think you get the idea. I tried using accessors & mutators, and I successfully added an attribute with data, e.g.:

$category->childs; // value should be {12 | 10 | name1 | 1 | NULL}

but I'm stuck because I can't pass the queried data into a method and return it back. I want to use one table; later I will add left and right columns to turn it into a nested-set tree. For now I'm trying something a little simpler: have a parent and add its children to its collection.
You should use a model relationship to itself:
Category.php
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Category extends Model
{
    protected $with = ['childs'];

    public function childs()
    {
        return $this->hasMany(Category::class, 'parent_id');
    }
}
CategoryController.php
public function index()
{
    $categories = Category::all();

    return $categories;
}
$categories will return the result you need:
[
  {
    "id": 1,
    "parent_id": 0,
    "name": "parent1",
    "depth": 0,
    "childs": [
      {
        "id": 3,
        "parent_id": 1,
        "name": "child1",
        "depth": 0,
        "childs": []
      }
    ]
  },
  {
    "id": 2,
    "parent_id": 0,
    "name": "parent2",
    "depth": 0,
    "childs": [
      {
        "id": 4,
        "parent_id": 2,
        "name": "child2",
        "depth": 0,
        "childs": []
      }
    ]
  },
  {
    "id": 3,
    "parent_id": 1,
    "name": "child1",
    "depth": 0,
    "childs": []
  },
  {
    "id": 4,
    "parent_id": 2,
    "name": "child2",
    "depth": 0,
    "childs": []
  }
]
I have a CSV file like:
FN       | MI | LN     | ADDR          | CITY      | ZIP        | GENDER
---------|----|--------|---------------|-----------|------------|-------
Patricia |    | Faddar | 7063 Carr xxx | Carolina  | 00979-7033 | F
Lui      | E  | Baves  | PO Box xxx    | Boqueron  | 00622-1240 | F
Janine   | S  | Perez  | 25 Calle xxx  | Salinas   | 00751-3332 | F
Rose     |    | Mary   | 229 Calle xxx | Aguadilla | 00603-5536 | F
And I am importing it into OrientDB with this ETL configuration:
{
  "source": { "file": { "path": "/sample.csv" } },
  "extractor": { "csv": {} },
  "transformers": [
    { "vertex": { "class": "Users" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:/orientdb/databases/test",
      "dbType": "graph",
      "classes": [
        { "name": "Users", "extends": "V" }
      ]
    }
  }
}
I would like to set up the import so that it creates renamed properties: FN becomes first_name, MI becomes middle_name, and so on. I'd also like to lowercase some values, e.g. Carolina becomes carolina.
I could probably make these changes from the schema once the data is added, but my reason for doing it here is that I have multiple CSV files and I want to keep the same schema for all of them.
Any ideas?
To rename a field, take a look at the Field transformer:
http://orientdb.com/docs/last/Transformer.html#field-transformer
For example, to rename the field salary to remuneration:
{ "field":
{ "fieldName": "remuneration",
"expression": "salary"
}
},
{ "field":
{ "fieldName": "salary",
"operation": "remove"
}
}
In the same way, you can apply the toLowerCase() function to a property:
{field: {fieldName:'name', expression: '$input.name.toLowerCase()'}}
Try it and let me know if it works.
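If maintaining the rename/lowercase transformers across many ETL configs gets unwieldy, another option is to normalize the CSVs in a small preprocessing step before import. A hypothetical Python sketch (the column mapping is assumed from the question; adjust RENAME and LOWERCASE to taste):

```python
import csv
import io

# Assumed mapping from the question's abbreviated headers
RENAME = {"FN": "first_name", "MI": "middle_name", "LN": "last_name",
          "ADDR": "address", "CITY": "city", "ZIP": "zip", "GENDER": "gender"}
LOWERCASE = {"city"}  # columns whose values should be lowercased

def normalize(src, dst):
    """Rewrite a CSV stream with renamed headers and lowercased columns."""
    reader = csv.DictReader(src)
    fields = [RENAME.get(f, f) for f in reader.fieldnames]
    writer = csv.DictWriter(dst, fieldnames=fields, lineterminator="\n")
    writer.writeheader()
    for row in reader:
        out = {RENAME.get(k, k): v for k, v in row.items()}
        for col in LOWERCASE:
            if col in out:
                out[col] = out[col].lower()
        writer.writerow(out)

src = io.StringIO("FN,CITY\nPatricia,Carolina\n")
dst = io.StringIO()
normalize(src, dst)
print(dst.getvalue())
# first_name,city
# Patricia,carolina
```

Every file then arrives at the ETL step with the final schema already in place, and the OrientDB config stays identical across imports.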