SQL - Select column and add records based on enum

I'm having some trouble figuring out how to set up my query.
I have a simple two-column table matching an object ID (int) to a tag (string). There's also a legacy data type, object type (int), that I would like to convert into a tag in the query. For example:
TAG TABLE := { ID, TAG } : (1, FOO), (1, MINT), (2, BAR), (3, FOOBAR), (5, SAUCY)
OBJECT TABLE := { ID, ..., TYPE } : (1, ..., 0), (2, ..., 0), (3, ..., 1),(4, ..., SAUCY)
And the types map to tags in the following way (again, an example):
[ 0 -> AWESOME ], [ 1 -> SUPER]
So my goal is to make a query that, using this data, returns:
RETURN TABLE := { ID, TAG_NAME } : (1, AWESOME), (1, FOO), (1, MINT), (2, AWESOME), (2, BAR), (3, FOOBAR), (3, SUPER), (4, SAUCY), (5, SAUCY)
How would I go about setting this up? I tried using CASE statements for the object type but couldn't get the query to compile. I'm hoping this isn't too tough to create.

Looks to me like a simple UNION ALL:
SELECT ID, TAG FROM TagTable
UNION ALL
SELECT ID, CASE
WHEN TYPE=0 THEN 'AWESOME'
WHEN TYPE=1 THEN 'SUPER'
{etc}
END AS TAG
FROM ObjectTable
Although maybe you need to do some extra join to get your TypeName using the Type in the Object Table. You don't mention where "Awesome" and "Super" come from in your database.
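As a quick check, the UNION ALL plus CASE approach can be run against the question's sample data with Python's built-in sqlite3 (a sketch; table and column names follow the question, and row 4, whose TYPE is given as a string in the question's sample, is omitted):

```python
import sqlite3

# Hypothetical schema and data taken from the question's examples
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TagTable (ID INTEGER, TAG TEXT);
CREATE TABLE ObjectTable (ID INTEGER, TYPE INTEGER);
INSERT INTO TagTable VALUES (1,'FOO'),(1,'MINT'),(2,'BAR'),(3,'FOOBAR'),(5,'SAUCY');
INSERT INTO ObjectTable VALUES (1,0),(2,0),(3,1);
""")

# UNION ALL the real tags with tags derived from the legacy TYPE column
rows = conn.execute("""
SELECT ID, TAG FROM TagTable
UNION ALL
SELECT ID, CASE WHEN TYPE = 0 THEN 'AWESOME'
                WHEN TYPE = 1 THEN 'SUPER' END AS TAG
FROM ObjectTable
ORDER BY ID, TAG
""").fetchall()
```

This reproduces the desired RETURN TABLE (minus the ambiguous row 4), with the enum-to-tag mapping hardcoded in the CASE expression.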

Assuming that
TRANSFER_TABLE := {ID, Name} : (0, AWESOME), (1, SUPER)
you can write this:
select ID, TAG
from TAG_TABLE
UNION ALL
select o.ID, t.Name
from
OBJECT_TABLE o
join TRANSFER_TABLE t on o.TYPE = t.ID
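The lookup-table variant can be verified the same way; a minimal sqlite3 sketch, with the mapping table spelled TRANSFER_TABLE and the question's sample data (row 4 omitted as above):

```python
import sqlite3

# Hypothetical schema: TAG_TABLE and OBJECT_TABLE from the question,
# plus the assumed TRANSFER_TABLE that maps legacy TYPE values to names
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TAG_TABLE (ID INTEGER, TAG TEXT);
CREATE TABLE OBJECT_TABLE (ID INTEGER, TYPE INTEGER);
CREATE TABLE TRANSFER_TABLE (ID INTEGER, Name TEXT);
INSERT INTO TAG_TABLE VALUES (1,'FOO'),(1,'MINT'),(2,'BAR'),(3,'FOOBAR'),(5,'SAUCY');
INSERT INTO OBJECT_TABLE VALUES (1,0),(2,0),(3,1);
INSERT INTO TRANSFER_TABLE VALUES (0,'AWESOME'),(1,'SUPER');
""")

# Same UNION ALL, but the type-to-tag mapping lives in a table, not a CASE
rows = conn.execute("""
SELECT ID, TAG FROM TAG_TABLE
UNION ALL
SELECT o.ID, t.Name
FROM OBJECT_TABLE o
JOIN TRANSFER_TABLE t ON o.TYPE = t.ID
ORDER BY ID, TAG
""").fetchall()
```

The join version is usually preferable to a hardcoded CASE: adding a new type is a row insert rather than a query change.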

Related

SQL query to insert many values into a table, taking a value from another table

I want to insert many values into a table, taking the ID reference from another table. I have tried different ways, and finally I found this, which works.
INSERT INTO tblUserFreeProperty (id, identname, val, pos)
VALUES ((SELECT id FROM tblpart where tblPart.ordernr=N'3CFSU05'),N'DSR_Mag.G', N'??_??#False', 1),
((SELECT id FROM tblpart where tblPart.ordernr=N'3CFSU05'),N'DSR_Mag.Qta_C', N'??_??#0', 2),
((SELECT id FROM tblpart where tblPart.ordernr=N'3CFSU05'),N'DSR_Mag.Qta_M', N'??_??#0', 3),
((SELECT id FROM tblpart where tblPart.ordernr=N'3CFSU05'),N'DSR_Mag.UbicM', N'??_??#No', 4),
((SELECT id FROM tblpart where tblPart.ordernr=N'3CFSU05'),N'DSR_Mag.UbicS', N'??_??#', 5),
((SELECT id FROM tblpart where tblPart.ordernr=N'3CFSU05'),N'DSR_Mag.UbicP', N'??_??#', 6),
((SELECT id FROM tblpart where tblPart.ordernr=N'3CFSU05'),N'DSR_Mag.UbicC', N'??_??#', 7);
This works, but I'm looking for an easier query because I need to write the command from Visual Studio.
The link I noted earlier should have sufficed to explain the correct syntax.
Insert into ... values ( SELECT ... FROM ... )
But seeing as there has been much misinformation on this post, I will show how you should do it.
INSERT INTO tblUserFreeProperty (id, identname, val, pos)
SELECT p.id, v.identname, v.val, v.pos
FROM (VALUES
(N'DSR_Mag.G', N'??_??#False', 1),
(N'DSR_Mag.Qta_C', N'??_??#0', 2),
(N'DSR_Mag.Qta_M', N'??_??#0', 3),
(N'DSR_Mag.UbicM', N'??_??#No', 4),
(N'DSR_Mag.UbicS', N'??_??#', 5),
(N'DSR_Mag.UbicP', N'??_??#', 6),
(N'DSR_Mag.UbicC', N'??_??#', 7)
) AS v(identname, val, pos)
JOIN tblpart p ON p.ordernr = N'3CFSU05';
Note the use of a standard JOIN clause: there are no subqueries. Note also the use of short, meaningful table aliases.
As far as the VALUES table constructor goes, it can also be replaced with a temp table, or table variable, or Table Valued parameter. Or indeed another table.
Side note: I don't know what you are storing in those columns, but it appears you have multiple pieces of info in each. Do not do this. Store each atomic value in its own column.
INSERT tblUserFreeProperty (id, identname, val, pos)
SELECT tblPart.id, X.A, X.B, X.C
FROM (
VALUES
(N'DSR_Mag.G0', N'??_??#True', 1),
(N'DSR_Mag.G1', N'??_??#True', 2),
(N'DSR_Mag.G2', N'??_??#False', 3)
) X(A,B,C)
CROSS JOIN tblPart
WHERE tblPart.ordernr=N'555';
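The answer targets SQL Server; the same INSERT…SELECT-from-constructed-rows pattern can be sketched in SQLite via sqlite3, where a VALUES derived table cannot take column aliases, so a CTE is used instead (table names and sample part IDs are assumptions):

```python
import sqlite3

# Hypothetical schema modeled on the question's tables
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblPart (id INTEGER, ordernr TEXT);
CREATE TABLE tblUserFreeProperty (id INTEGER, identname TEXT, val TEXT, pos INTEGER);
INSERT INTO tblPart VALUES (42, '3CFSU05'), (43, 'OTHER');
""")

# The constructed rows live in a CTE; one JOIN supplies the part id
conn.execute("""
WITH v(identname, val, pos) AS (
    VALUES ('DSR_Mag.G',     '??_??#False', 1),
           ('DSR_Mag.Qta_C', '??_??#0',     2)
)
INSERT INTO tblUserFreeProperty (id, identname, val, pos)
SELECT p.id, v.identname, v.val, v.pos
FROM v JOIN tblPart p ON p.ordernr = '3CFSU05'
""")

rows = conn.execute("SELECT * FROM tblUserFreeProperty ORDER BY pos").fetchall()
```

Each constructed row picks up the matching part's id through the join, exactly as in the T-SQL version, without repeating the scalar subquery per row.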

Is it possible to set the initial-select value of a recursive CTE query with a parameter?

Using this self-referencing table:
CREATE TABLE ENTRY (
ID integer NOT NULL,
PARENT_ID integer,
... other columns ...
)
There are many top-level rows (with PARENT_ID = NULL) that can have 0 to several levels of child rows, forming a graph like this:
(1, NULL, 'A'),
(2, 1, 'B'),
(3, 2, 'C'),
(4, 3, 'D'),
(5, 4, 'E'),
(6, NULL, 'one'),
(7, 6, 'two'),
(8, 7, 'three'),
(9, 6, 'four'),
(10, 9, 'five'),
(11, 10, 'six');
I want to write a query that would give me the subgraph (all related rows in both directions) for a given row, for instance (just showing the ID values):
ID = 3: (1, 2, 3, 4, 5)
ID = 6: (6, 7, 8, 9, 10, 11)
ID = 7: (6, 7, 8)
ID = 10: (6, 9, 10, 11)
It's similar to the query in §3.3 Queries against a Graph of the SQLite documentation, for returning a graph from any of its nodes:
WITH RECURSIVE subtree(x) AS (
SELECT 3
UNION
SELECT e1.ID x FROM ENTRY e1 JOIN subtree ON e1.PARENT_ID = subtree.x
UNION
SELECT e2.PARENT_ID x FROM ENTRY e2 JOIN subtree ON e2.ID = subtree.x
)
SELECT x FROM subtree
LIMIT 100;
... with 3 as the anchor / initial-select value.
This particular query works fine in DBeaver. The sqlite version available in db-fiddle gives a circular reference error, but this nested CTE gives the same result in db-fiddle.
However, I can only get this to work when the initial value is hard-coded in the query. I can't find any mention of how to supply that initial-select value as a parameter.
I'd think it should be straightforward. Maybe the case of having more than one top-level row is very unusual, or I'm overlooking something blindingly obvious?
Any suggestions?
As forpas points out above, SQLite doesn't support passing parameters to stored/user-defined functions.
Using a placeholder in the prepared statement from the calling code is a good alternative.
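A sketch of that alternative with Python's sqlite3, using the question's sample rows: the `?` placeholder in the initial select is bound at execution time, so the same prepared statement serves any anchor row (the NULL that the upward branch produces for a top-level row's parent is filtered out of the result):

```python
import sqlite3

# Schema and data from the question
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ENTRY (ID INTEGER NOT NULL, PARENT_ID INTEGER);
INSERT INTO ENTRY VALUES (1,NULL),(2,1),(3,2),(4,3),(5,4),
                         (6,NULL),(7,6),(8,7),(9,6),(10,9),(11,10);
""")

def subgraph(anchor_id):
    # The '?' supplies the initial-select value of the recursive CTE
    sql = """
    WITH RECURSIVE subtree(x) AS (
        SELECT ?
        UNION
        SELECT e1.ID FROM ENTRY e1 JOIN subtree ON e1.PARENT_ID = subtree.x
        UNION
        SELECT e2.PARENT_ID FROM ENTRY e2 JOIN subtree ON e2.ID = subtree.x
    )
    SELECT x FROM subtree WHERE x IS NOT NULL ORDER BY x
    """
    return [r[0] for r in conn.execute(sql, (anchor_id,))]
```

Calling `subgraph(3)` returns the chain 1 through 5, matching the question's first expected result; no value needs to be hardcoded in the SQL text.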

Query for value matching in multiple arrays

I have a table containing user experiences; the table contains multiple records for the same user.
A JSON example of the data:
{
user_id : 1,
location: 'india',
company_id: 5,
...other fields
}
{
user_id : 1,
location: 'united kingdom',
company_id: 6
...other fields
}
I want to run a query that returns users who have worked at companies satisfying the IN condition of multiple arrays.
E.g
Array1 of company Id: 1,2,4,5,6,7,8,10
Array2 of company Id: 2,6,50,100,12,4
The query should return users who have worked at one of the companies from each array, so the IN condition of both arrays should be satisfied.
I tried the following query with no luck:
select * from <table> where company_id IN(5,7,8) and company_id IN(1,4,3)
even though two records for one user, with company_id 5 and 4, exist in the table.
create table my_table (user_id int, company_id int);
insert into my_table (user_id, company_id)
values (1, 5), (1, 6), (2, 4), (2, 5), (2, 6), (3, 5);
select user_id from my_table where company_id in (5, 7, 8)
intersect
select user_id from my_table where company_id in (1, 4, 3);
As you described, you need the intersection of users who have worked in the two sets of companies.
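The INTERSECT approach can be run end to end with sqlite3, using the answer's sample data (note that a single `AND` on `company_id` can never match, since one row has only one company, which is why the original attempt returned nothing):

```python
import sqlite3

# Sample table from the answer: one row per (user, company) experience
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (user_id INTEGER, company_id INTEGER);
INSERT INTO my_table VALUES (1,5),(1,6),(2,4),(2,5),(2,6),(3,5);
""")

# Each SELECT finds users matching one array; INTERSECT keeps users in both
users = conn.execute("""
SELECT user_id FROM my_table WHERE company_id IN (5, 7, 8)
INTERSECT
SELECT user_id FROM my_table WHERE company_id IN (1, 4, 3)
""").fetchall()
```

Only user 2 has rows matching both arrays (company 5 and company 4), so the intersection contains just that user.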

How to apply LIMIT only to parent rows

In my Postgres database I have a table that holds a simple hierarchy, something like this:
id | parent_id
---------------
When an item in the table is a "top-level" item, its parent_id is set to NULL
However, when I query my table I retrieve the top-level items and the child items that belong to those items. E.g. if there is a single top-level item with two children my query returns three rows. My query is super simple, it looks something like this:
SELECT
*
FROM
my_table
LIMIT
_limit
OFFSET
_offset
;
When the above returns the three rows, in my business logic I then transform that result into a JSON structure that is then serialized to the client. It looks something like this:
items: [
{
id: 1,
parent_id: null,
items: [
{
id: 2,
parent_id: 1
},
{
id: 3,
parent_id: 1
}
]
}
]
However, as you can see my query has OFFSET and LIMIT for, you guessed it, pagination. The table is quite large and I want to restrict the amount of items that can be requested in a single request.
The problem is that, and continuing to use my single top-level item as an example, if the LIMIT is set to 1 then the children of the top-level item will never be returned.
What I am basically looking for is a way to exclude child rows from counting towards the LIMIT, or, to expand the LIMIT with the total number of child rows found.
You're going to have to do two things:
Get the top-level entries to include (paginated)
Fetch the descendants of those top-level entries
This is a fully recursive example that does both in one statement:
create table t (id int primary key, parent_id int);
insert into t (id, parent_id) values
(1, null), (2, null), (3, null), (4, 1),
(5, 1), (6, 4), (7, 2), (8, 2),
(9, 8), (10, 3), (11, null), (12, null);
with recursive entries (id, parent_id) as (
(
select
id, parent_id
from t
where parent_id is null
order by id limit 2 -- add offset N here
)
union all
(
select
t.id, t.parent_id
from entries inner join t on (t.parent_id = entries.id)
)
)
select * from entries;
https://www.db-fiddle.com/f/g3G2t3mVo7fBhQa9QCA71P/0
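The same idea ports to SQLite (and can be tested with Python's sqlite3); here the LIMITed anchor lives in its own CTE, since SQLite is stricter about ORDER BY/LIMIT inside compound-select members. Data is copied from the fiddle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER PRIMARY KEY, parent_id INTEGER);
INSERT INTO t VALUES (1,NULL),(2,NULL),(3,NULL),(4,1),(5,1),(6,4),
                     (7,2),(8,2),(9,8),(10,3),(11,NULL),(12,NULL);
""")

rows = conn.execute("""
WITH RECURSIVE tops AS (
    -- LIMIT/OFFSET applies only to top-level (parent) rows
    SELECT id, parent_id FROM t
    WHERE parent_id IS NULL
    ORDER BY id LIMIT 2 OFFSET 0
),
entries AS (
    SELECT id, parent_id FROM tops
    UNION ALL
    SELECT t.id, t.parent_id
    FROM entries JOIN t ON t.parent_id = entries.id
)
SELECT id FROM entries ORDER BY id
""").fetchall()
```

With a limit of 2, parents 1 and 2 are selected, and the recursion pulls in all of their descendants (4, 5, 6 under 1; 7, 8, 9 under 2), so children never count against the page size.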

Query to get count of distinct items in groupings

I have a table (table2) that stores created groupings for items from another table (table1), like this:
table1
id  | group  | qty
111 | cups   | 1
222 | plates | 2
333 | spoons | 5
444 | null   | 2
555 | knives | 2
table2
group_id | categories              | count_inventory
A1       | {"x":["cups","plates"]} | 3
B1       | {"x":["cups"]}          | 1
C1       | {"x":["cups","spoons"]} | 6
C4       | {"x":["spoons"]}        | 5
So given the above, I want to write a query that returns the count of items from table1 for which a grouping has been created.
It may sound like the query below, but that is actually not what I'm looking for, because the groups have to be manually created for them to appear in table2, so you may have an item from table1 that doesn't exist in table2 because the grouping hasn't been created (e.g. id 555).
SELECT count(id)
FROM table1
WHERE group IS NOT NULL
The above will return 4, but I need something that looks at table2 and returns 3, which is the count of items from table1 whose group exists in the categories column of table2.
My real tables can be pretty large (100k+ rows), so I don't think it is efficient to check one by one whether each group string from table1 exists in table2, as that would probably take forever to run. Or is that the only viable solution?
PPS: the categories column is not of JSON type, it's just a string.
Not sure that this will be faster, but you can prepare an aggregate of the existing categories. Something like this (you can also try set_union instead of array_agg with flatten and array_distinct):
SELECT array_distinct(flatten(array_agg(CAST(JSON_EXTRACT(categories, '$.x') as ARRAY(VARCHAR)))))
FROM table2
And check that group is in the result.
Assuming that table2 would not contain any groups in the array that are not there in table1, you can try the following:
WITH table1(id, "group", qty) AS (
SELECT *
FROM (VALUES (111, 'cups', 1),
(222, 'plates', 2),
(333, 'spoons', 5),
(444, null, 2),
(555, 'knives', 2))
),
table2(group_id, categories, count_inventory) as (
SELECT *
FROM (VALUES ('A1', CAST(MAP(ARRAY['x'], ARRAY[ARRAY['cups', 'plates']]) AS JSON), 3),
('B1', CAST(MAP(ARRAY['x'], ARRAY[ARRAY['cups']]) AS JSON), 1),
('C1', CAST(MAP(ARRAY['x'], ARRAY[ARRAY['cups', 'spoons']]) AS JSON), 6),
('C4', CAST(MAP(ARRAY['x'], ARRAY[ARRAY['spoons']]) AS JSON), 5)
))
SELECT reduce(
array_agg(CAST(json_extract(categories, '$.x') AS ARRAY(VARCHAR))),
array[],
(s, x) -> array_union(s, x),
x -> cardinality(x)
)
FROM table2 WHERE categories is not null;
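The answers above target Presto/Trino. Purely as an illustration of the membership check itself, here is an equivalent count written against SQLite's JSON functions via Python's sqlite3 (assumes a SQLite build with JSON support; the `group` column is renamed `grp` to sidestep the reserved word, and data matches the sample tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER, grp TEXT, qty INTEGER);
CREATE TABLE table2 (group_id TEXT, categories TEXT, count_inventory INTEGER);
INSERT INTO table1 VALUES (111,'cups',1),(222,'plates',2),(333,'spoons',5),
                          (444,NULL,2),(555,'knives',2);
INSERT INTO table2 VALUES
  ('A1','{"x":["cups","plates"]}',3),
  ('B1','{"x":["cups"]}',1),
  ('C1','{"x":["cups","spoons"]}',6),
  ('C4','{"x":["spoons"]}',5);
""")

# json_each unnests each categories array; the IN subquery is the set of
# every group name that appears anywhere in table2
(count,) = conn.execute("""
SELECT COUNT(DISTINCT t1.id)
FROM table1 t1
WHERE t1.grp IN (
    SELECT j.value
    FROM table2 t2, json_each(t2.categories, '$.x') j
)
""").fetchone()
```

Items 111 (cups), 222 (plates), and 333 (spoons) appear in some table2 array, while 444 (null) and 555 (knives) do not, giving the expected count of 3.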