Query for value matching in multiple arrays - sql

I have a table containing user experiences; it holds multiple records for the same user.
A JSON example of the data:
{
user_id : 1,
location: 'india',
company_id: 5,
...other fields
}
{
user_id : 1,
location: 'united kingdom',
company_id: 6,
...other fields
}
I want to run a query that returns users who have worked at companies satisfying an IN condition against multiple arrays.
E.g.
Array 1 of company IDs: 1, 2, 4, 5, 6, 7, 8, 10
Array 2 of company IDs: 2, 6, 50, 100, 12, 4
The query should return users who have worked at one of the companies from each array, so the IN condition of both arrays must be satisfied.
I tried the following query with no luck:
select * from <table> where company_id IN (5, 7, 8) and company_id IN (1, 4, 3)
even though two records of one user, with company_id 5 and company_id 4, exist in the table.

create table my_table (user_id int, company_id int);
insert into my_table (user_id, company_id)
values (1, 5), (1, 6), (2, 4), (2, 5), (2, 6), (3, 5);
select user_id from my_table where company_id in (5, 7, 8)
intersect
select user_id from my_table where company_id in (1, 4, 3);
As you described, you need the intersection of the two sets of users: those who worked at a company from the first list and those who worked at a company from the second list. Your original query returns nothing because both IN conditions are applied to the same row, and a single row holds only one company_id, so it can never satisfy both at once.
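Another way to express the same requirement in a single pass over the table is conditional aggregation (just a sketch, using the same my_table and company lists as above):
select user_id
from my_table
group by user_id
having sum(case when company_id in (5, 7, 8) then 1 else 0 end) > 0
   and sum(case when company_id in (1, 4, 3) then 1 else 0 end) > 0;
Each HAVING condition checks that the user has at least one row whose company_id falls in the corresponding list.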


Can you sort the result in GROUP BY?

I have two tables: one is objects with the attributes id and is_green; the other is object_closure with the attributes ancestor_id, descendant_id, and created_at, i.e.
Objects: id, is_green
Object_closure: ancestor_id, descendant_id, created_at
There are more attributes in the objects table, but they are not relevant to this question.
I have a query like this:
-- create a table
CREATE TABLE objects (
id INTEGER PRIMARY KEY,
is_green boolean
);
CREATE TABLE object_Closure (
ancestor_id INTEGER ,
descendant_id INTEGER,
created_at date
);
-- insert some values
INSERT INTO objects VALUES (1, 1 );
INSERT INTO objects VALUES (2, 1 );
INSERT INTO objects VALUES (3, 1 );
INSERT INTO objects VALUES (4, 0 );
INSERT INTO objects VALUES (5, 1 );
INSERT INTO objects VALUES (6, 1 );
INSERT INTO object_Closure VALUES (1, 2, '2020-12-12');
INSERT INTO object_Closure VALUES (1, 3, '2020-12-13');
INSERT INTO object_Closure VALUES (2, 3, '2020-12-14');
INSERT INTO object_Closure VALUES (4, 5, '2020-12-15');
INSERT INTO object_Closure VALUES (4, 6, '2020-12-16');
INSERT INTO object_Closure VALUES (5, 6, '2020-12-17');
-- fetch some values
SELECT
O.id,
P.id,
group_concat(DISTINCT P.id ) as p_ids
FROM objects O
LEFT JOIN object_Closure OC on O.id=OC.descendant_id
LEFT JOIN objects P on OC.ancestor_id=P.id AND P.is_green=1
GROUP BY O.id
The result is shown in the "query result" screenshot from the original question.
I would like to see P.id for O.id = 6 come out as 5 instead of NULL. After all, 5 is still a parent id (P.id). More importantly, when there is more than one candidate, I want the id shown in P.id to be the first-created one (see P.created_at).
I understand why it happens: the first row the system picks is NULL, and that NULL was created by the join condition on is_green; however, I need the is_green filter to apply only to the P.id and p_ids columns, not to the whole row.
I cannot do an inner join (I need the other attributes of the table, and sometimes both P.id and p_ids are NULL but the row still needs to appear in the result). I cannot restructure the database; it already exists and cannot be changed. I also cannot just use MIN() or MAX() aggregation, because I want the id that is picked to be the first-created one.
So is there a way to skip the NULL in the join?
Or is there a way to filter the selection in the SELECT clause?
Or to do an ORDER BY before the grouping?
P.S. My original code concatenates the P.id values in the order of P.created_at. For some reason, I cannot replicate that in the online SQL simulator.
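One possible direction (only a sketch, assuming SQLite/MySQL-style group_concat and that you want the earliest-created green ancestor) is to move the is_green filter and the created_at ordering into correlated subqueries instead of the LEFT JOIN, so the NULL produced by a non-green ancestor never becomes a candidate:
SELECT
O.id,
(SELECT P.id
FROM object_Closure OC
JOIN objects P ON P.id = OC.ancestor_id
WHERE OC.descendant_id = O.id AND P.is_green = 1
ORDER BY OC.created_at
LIMIT 1) AS first_green_parent,
(SELECT group_concat(P.id)
FROM object_Closure OC
JOIN objects P ON P.id = OC.ancestor_id
WHERE OC.descendant_id = O.id AND P.is_green = 1) AS p_ids
FROM objects O;
Objects with no green ancestors (e.g. ids 1, 4 and 5) still appear with NULLs, and for id 6 the first subquery returns 5, because the non-green ancestor 4 is filtered out before LIMIT 1 is applied.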

SQL: number of distinct values for each value in another column

Let's say I have a table with user logins:
create table usr_logins
(
id int primary key,
login_date date,
user_name text,
os_ver int
);
insert into usr_logins
values (1, '2018-12-23', 'Jack', 10)
,(2, '2018-12-24', 'Sam', 11)
,(3, '2018-12-24', 'Jack', 10)
,(4, '2018-12-24', 'Ann', 10)
,(5, '2018-12-25', 'Sam', 10)
,(6, '2019-12-26', 'Sam', 10)
I need to get a list of user names with a number of different OS versions used by them.
Note that only Sam has logins from os_ver 10 and 11.
This is what I need (shown as a screenshot in the original question, with Sam's row highlighted in red):
user_name | os_num
----------+-------
Jack      | 1
Sam       | 2
Ann       | 1
Since you need to get all usernames and corresponding distinct os_version count displayed (as os_num), you can try the following:
select user_name, count(DISTINCT os_ver) as os_num from usr_logins group by user_name
It's not clear whether you are expecting only user Sam, since you've highlighted that row in red, but if so you can do:
select user_name, Count(distinct os_ver) os_num
from usr_logins
group by User_Name
having Count(distinct os_ver)>1
select user_name, count(distinct os_ver) osCount from usr_logins group by user_name having osCount > 1
Hope I understood your question correctly.

How to apply LIMIT only to parent rows

In my Postgres database I have a table that holds a simply hierarchy, something like this:
id | parent_id
---------------
When an item in the table is a "top-level" item, its parent_id is set to NULL.
However, when I query my table I retrieve the top-level items and the child items that belong to those items. E.g. if there is a single top-level item with two children, my query returns three rows. My query is super simple; it looks something like this:
SELECT
*
FROM
my_table
LIMIT
_limit
OFFSET
_offset
;
When the above returns the three rows, in my business logic I then transform that result into a JSON structure that is then serialized to the client. It looks something like this:
items: [
{
id: 1,
parent_id: null,
items: [
{
id: 2,
parent_id: 1
},
{
id: 3,
parent_id: 1
}
]
}
]
However, as you can see my query has OFFSET and LIMIT for, you guessed it, pagination. The table is quite large and I want to restrict the amount of items that can be requested in a single request.
The problem is that, and continuing to use my single top-level item as an example, if the LIMIT is set to 1 then the children of the top-level item will never be returned.
What I am basically looking for is a way to exclude child rows from counting towards the LIMIT, or, to expand the LIMIT with the total number of child rows found.
You're going to have to do two things:
1. Get the top-level entries to include (paginated)
2. Run another query for the descendants of those top-level entries (see the sketch just below)
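For instance (a sketch only, with hypothetical _limit/_offset placeholders and covering just one level of children; the recursive version below handles arbitrary depth):
-- step 1: paginate over the top-level rows only
select id, parent_id
from my_table
where parent_id is null
order by id
limit _limit offset _offset;
-- step 2: fetch the children of exactly those parents
select c.id, c.parent_id
from my_table c
join (
  select id
  from my_table
  where parent_id is null
  order by id
  limit _limit offset _offset
) p on c.parent_id = p.id;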
Alternatively, here is a fully recursive example that handles both steps, and arbitrary depth, in a single statement:
create table t (id int primary key, parent_id int);
insert into t (id, parent_id) values
(1, null), (2, null), (3, null), (4, 1),
(5, 1), (6, 4), (7, 2), (8, 2),
(9, 8), (10, 3), (11, null), (12, null);
with recursive entries (id, parent_id) as (
(
select
id, parent_id
from t
where parent_id is null
order by id limit 2 -- add offset N here
)
union all
(
select
t.id, t.parent_id
from entries inner join t on (t.parent_id = entries.id)
)
)
select * from entries;
https://www.db-fiddle.com/f/g3G2t3mVo7fBhQa9QCA71P/0

Querying a SQL Table using conditions from another table

Stuck on SQL college question!
I want to search the table Em_Sum and find any Em_num that went from an Em_before value of 4, 5 or 6 to an Em_after value of 6, but I only want employees whose type_id is 1, 2 or 3, which can be looked up in the table Em_Type.
This is what I have so far
SELECT Em_Sum.Em_num
FROM Em_Sum
FULL JOIN Em_Type ON Em_Type.Em_num = Em_Sum.Em_num
WHERE Em_Type.Type_id IN (1, 2, 3)
AND Em_Sum.Em_before IN (4, 5, 6)
AND Em_Sum.Em_after IN (6) ;
I'm just confused as to how to query the Em_Type table using Type_id
Good thing you've tried to find the answer yourself. I think what you did is correct.
First you indeed join the two tables; then you can filter on either table you want. Note that with WHERE conditions on both tables, a plain (inner) JOIN gives the same result as your FULL JOIN, because rows without a match would be filtered out anyway.
e.g.:
SELECT sum.Em_num
FROM Em_Sum sum --we are giving this table an alias 'sum'
JOIN Em_Type type --this table gets the alias 'type'
--now join both tables on the primary/foreign key 'employee number' (= Em_num):
ON type.Em_num = sum.Em_num
WHERE type.Type_id IN (1, 2, 3)
AND sum.Em_before IN (1, 2, 3, 4, 5, 6)
AND sum.Em_after IN (1, 2, 3) ;

SQL - Select column and add records based on enum

I'm having some trouble figuring out how to set up my query.
I have a simple 2-column table matching an object id (int) to a tag (string). There's also a legacy data type, an object type (int), that I would like to convert into a tag in the query. For example:
TAG TABLE := { ID, TAG } : (1, FOO), (1, MINT), (2, BAR), (3, FOOBAR), (5, SAUCY)
OBJECT TABLE := { ID, ..., TYPE } : (1, ..., 0), (2, ..., 0), (3, ..., 1), (4, ..., SAUCY)
And the types transfer to tags in the following way (again, an example)
[ 0 -> AWESOME ], [ 1 -> SUPER]
So my goal is to make a query that, using this data, returns:
RETURN TABLE := { ID, TAG_NAME } : (1, AWESOME), (1, FOO), (1, MINT), (2, AWESOME), (2, BAR), (3, FOOBAR), (3, SUPER), (4, SAUCY), (5, SAUCY)
How would I go about setting this up? I tried using case statements for the object type but couldn't get the query to compile... I'm hoping this isn't too tough to create.
Looks to me like a simple UNION ALL:
SELECT ID, TAG FROM TagTable
UNION ALL
SELECT ID, CASE
WHEN TYPE=0 THEN 'AWESOME'
WHEN TYPE=1 THEN 'SUPER'
{etc}
END AS TAG
FROM ObjectTable
Although maybe you need to do some extra join to get your TypeName using the Type in the Object Table. You don't mention where "Awesome" and "Super" come from in your database.
Assuming that
TRANSFER_TABLE := {ID, Name} : (0, AWESOME), (1, SUPER)
you can write this:
select ID, TAG
from TAG_TABLE
UNION ALL
select o.ID, t.Name
from
OBJECT_TABLE o
join TRANSFER_TABLE t on o.TYPE = t.ID