SQL select in from a field containing comma separated keys

Is there a way I can construct a SQL statement that retrieves active records based on status and then follows the references to old ids stored in another field?
Let's say I want to join the data below to another table. For id = 4, the intent is that ids 1, 3 and 4 have been combined and the surviving record is 4.
So when I join it with another table, how can I have scvid 104 linked to the transactions of ids 1, 3, and 4?
select *
from tbl
where scvid in (id, oldids)?
Sample data:
scvid | id | oldid | status
------+----+-------+-------
  101 |  1 | NULL  |      0
  102 |  2 | NULL  |      1
  103 |  3 | NULL  |      0
  104 |  4 | [1,3] |      1

You didn't mention your DB system. Here is a solution for SQL Server (T-SQL); you can use it in other RDBMSs with minor changes.
SELECT
    t1.*, t2.scvid AS NEWID
FROM
    tbl t1
JOIN
    tbl t2 ON
    -- first case: if the record is the main one (the one with [1,3]) we link it to itself
    (t1.scvid = t2.scvid) AND (t2.oldid IS NOT NULL)
    OR
    -- second case: we build ",1,3," from "[1,3]",
    -- so the condition ",1,3," LIKE '%,id,%'
    -- is TRUE for id = 1 and id = 3
    (REPLACE(REPLACE(t2.oldid, '[', ','), ']', ',')
        LIKE '%,' + CAST(t1.id AS VARCHAR(100)) + ',%')
    AND (t1.oldid IS NULL)
RESULT:
scvid | id | oldid | status | NEWID
------+----+-------+--------+------
  101 |  1 | NULL  |      0 |  104
  103 |  3 | NULL  |      0 |  104
  104 |  4 | [1,3] |      1 |  104
This outputs a new column NEWID holding the surviving id for each old record, so you can JOIN on it or use it in another way.
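A quick way to sanity-check the approach is to port it to SQLite (swapping T-SQL's + concatenation for || and dropping the CAST, which SQLite does implicitly). This is a sketch using Python's built-in sqlite3, with table and column names taken from the sample data:

```python
import sqlite3

# Sketch of the answer's join, ported to SQLite: || replaces T-SQL's
# + concatenation, and the CAST is unnecessary (id coerces to text).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl (scvid INT, id INT, oldid TEXT, status INT);
INSERT INTO tbl VALUES
  (101, 1, NULL,    0),
  (102, 2, NULL,    1),
  (103, 3, NULL,    0),
  (104, 4, '[1,3]', 1);
""")

rows = conn.execute("""
SELECT t1.scvid, t1.id, t2.scvid AS newid
FROM tbl t1
JOIN tbl t2
  ON (t1.scvid = t2.scvid AND t2.oldid IS NOT NULL)
  OR (REPLACE(REPLACE(t2.oldid, '[', ','), ']', ',')
        LIKE '%,' || t1.id || ',%'
      AND t1.oldid IS NULL)
ORDER BY t1.scvid
""").fetchall()
print(rows)  # [(101, 1, 104), (103, 3, 104), (104, 4, 104)]
```

The NULL oldid rows drop out of the second branch automatically, because REPLACE(NULL, ...) stays NULL and NULL LIKE anything is not true.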

For Postgres you can do this by converting the comma separated list into an array.
Something like this:
Sample setup:
create table some_table (id integer);
insert into some_table values (4), (6), (8);
create table service (svcid integer, id integer, oldid text, status integer);
insert into service
values
(101, 1, NULL , 0),
(102, 2, NULL , 1),
(103, 3, NULL , 0),
(104, 4, '1,3', 1);
To get all rows from some_table where the id is either the id column of the service table or any of those in the oldid column you can use:
select *
from some_table st
join (
    select svcid, id, oldid, status,
           string_to_array(s.oldid, ',')::int[] || id as all_ids
    from service s
) s on st.id = any(s.all_ids)
This returns:
 id | svcid | id | oldid | status | all_ids
----+-------+----+-------+--------+---------
  4 |   104 |  4 | 1,3   |      1 | {1,3,4}
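The array trick is Postgres-specific, but the idea (split oldid on commas, append the record's own id, then test membership) can be mirrored in plain Python to see what all_ids contains for each row. The data below follows the sample setup:

```python
# Mirror of the Postgres expression
#   string_to_array(s.oldid, ',')::int[] || id AS all_ids
# as a split-and-append in plain Python, over the sample service rows.
service = [
    (101, 1, None, 0),
    (102, 2, None, 1),
    (103, 3, None, 0),
    (104, 4, "1,3", 1),
]
some_table_ids = [4, 6, 8]

def all_ids(svcid, id_, oldid, status):
    ids = [int(x) for x in oldid.split(",")] if oldid else []
    return ids + [id_]  # old ids plus the record's own id

# st.id = any(s.all_ids) becomes a plain membership test:
matches = [(st, row[0], all_ids(*row))
           for st in some_table_ids
           for row in service
           if st in all_ids(*row)]
print(matches)  # [(4, 104, [1, 3, 4])]
```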

This works on SQL Server, since its LIKE syntax supports a negated character class such as [^0-9].
select
old.scvid as old_scvid, old.id as old_id,
new.scvid as new_scvid, new.id as new_id, new.oldid as new_oldids
from tbl new
left join tbl old
on (old.status = 0 and new.oldid like concat('%[^0-9]',old.id,'[^0-9]%'))
where new.status = 1
and new.oldid is not null
Too bad the table doesn't have a "newid" field instead of that "oldid" list; that would make the join a lot easier.
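Standard SQLite LIKE has no character classes, but its GLOB operator does (including the negated [^0-9]), so the same boundary trick can be sketched with Python's sqlite3. The tbl layout below is assumed from the sample data:

```python
import sqlite3

# SQL Server's LIKE supports [^0-9]; SQLite's GLOB offers the same
# negated character class, so the boundary match ports over directly.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl (scvid INT, id INT, oldid TEXT, status INT);
INSERT INTO tbl VALUES
  (101, 1, NULL,    0),
  (102, 2, NULL,    1),
  (103, 3, NULL,    0),
  (104, 4, '[1,3]', 1);
""")

rows = conn.execute("""
SELECT old.scvid AS old_scvid, old.id AS old_id,
       new.scvid AS new_scvid, new.id AS new_id
FROM tbl new
LEFT JOIN tbl old
  ON old.status = 0
 AND new.oldid GLOB '*[^0-9]' || old.id || '[^0-9]*'
WHERE new.status = 1
  AND new.oldid IS NOT NULL
ORDER BY old.scvid
""").fetchall()
print(rows)  # [(101, 1, 104, 4), (103, 3, 104, 4)]
```

The surrounding [ , and ] in the stored "[1,3]" conveniently serve as the non-digit boundaries, so no REPLACE is needed here.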

Related

How to update multiple SQL records with same condition

I have this table
id | attributeId | value
--------------------------
1 | 1 | abc
2 | 1 | def
I want to update this table where "attributeId = 1" with these values {"123", "456", "789"} so the table will look like this:
id | attributeId | value
--------------------------
1 | 1 | 123
2 | 1 | 456
3 | 1 | 789
My idea is to delete all the old records and then add the new ones, but I think there may be a better method. Is there a better way?
Consider the following:
ALTER TABLE Your_Table DROP COLUMN value;
CREATE TABLE TMP (id INT, value VARCHAR(3));
INSERT INTO TMP VALUES (1, '123'), (1, '456'), (1, '789');
SELECT a.*, b.value INTO New_Table FROM Your_Table a JOIN TMP b ON a.id = b.id;
The new table will have your requested structure.
If your goal is to replace the table, then just delete all the rows and insert new values:
truncate table t;
insert into t (id, attributeId, value)
values (1, 1, 123),
(2, 1, 456),
(3, 1, 789);
If you don't want to keep the original rows that are not in the new data, I would not bother trying to figure out the differences between the tables. The truncate should be pretty fast, and a bulk insert is usually faster than updating some records and inserting others.
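A minimal sketch of the replace-the-contents approach, using Python's sqlite3 (SQLite has no TRUNCATE, so a plain DELETE stands in for it; the WHERE clause also keeps rows for other attributeIds intact):

```python
import sqlite3

# Replace-the-contents approach: delete the old rows for the
# attribute, then bulk-insert the new values in one transaction.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INT, attributeId INT, value TEXT);
INSERT INTO t VALUES (1, 1, 'abc'), (2, 1, 'def');
""")

with conn:  # one transaction: delete old rows, insert new ones
    conn.execute("DELETE FROM t WHERE attributeId = 1")
    conn.executemany(
        "INSERT INTO t (id, attributeId, value) VALUES (?, 1, ?)",
        [(1, "123"), (2, "456"), (3, "789")],
    )

final = conn.execute("SELECT * FROM t ORDER BY id").fetchall()
print(final)  # [(1, 1, '123'), (2, 1, '456'), (3, 1, '789')]
```

Wrapping both statements in a single transaction means readers never see the table half-emptied.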

SQLite query - filter name where each associated id is contained within a set of ids

I'm trying to work out a query that will find me all of the distinct Names whose LocationIDs are in a given set of ids. The catch is if any of the LocationIDs associated with a distinct Name are not in the set, then the Name should not be in the results.
Say I have the following table:
ID | LocationID | ... | Name
-----------------------------
1 | 1 | ... | A
2 | 1 | ... | B
3 | 2 | ... | B
I'm needing a query similar to
SELECT DISTINCT Name FROM table WHERE LocationID IN (1, 2);
The problem with the above is it's just checking if the LocationID is 1 OR 2, this would return the following:
A
B
But what I need it to return is
B
Since B is the only Name where both of its LocationIDs are in the set (1, 2)
You can write two subqueries:
get the count of distinct LocationIDs in your condition,
get the same count per Name,
then join them on that count, which means a Name must match every LocationID in your condition.
Schema (SQLite v3.17)
CREATE TABLE T(
ID int,
LocationID int,
Name varchar(5)
);
INSERT INTO T VALUES (1, 1,'A');
INSERT INTO T VALUES (2, 1,'B');
INSERT INTO T VALUES (3, 2,'B');
Query #1
SELECT t2.Name
FROM
(
SELECT COUNT(DISTINCT LocationID) cnt
FROM T
WHERE LocationID IN (1, 2)
) t1
JOIN
(
SELECT COUNT(DISTINCT LocationID) cnt,Name
FROM T
WHERE LocationID IN (1, 2)
GROUP BY Name
) t2 on t1.cnt = t2.cnt;
| Name |
| ---- |
| B |
You can just use aggregation. Assuming no duplicates in your table:
SELECT Name
FROM table
WHERE LocationID IN (1, 2)
GROUP BY Name
HAVING COUNT(*) = 2;
If Name/LocationID pairs can be duplicated, use HAVING COUNT(DISTINCT LocationID) = 2.
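Both answers hinge on the same count comparison; here is the aggregation version run end-to-end with Python's sqlite3, using the schema from the question:

```python
import sqlite3

# Relational division via aggregation: a Name survives only if it
# matches as many distinct LocationIDs as the set contains (here, 2).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T (ID INT, LocationID INT, Name TEXT);
INSERT INTO T VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'B');
""")

rows = conn.execute("""
SELECT Name
FROM T
WHERE LocationID IN (1, 2)
GROUP BY Name
HAVING COUNT(DISTINCT LocationID) = 2
""").fetchall()
print(rows)  # [('B',)] -- A matches only LocationID 1, so it is filtered out
```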

How to do selection in PostgreSQL with join when more than one row satisfies requirements?

How can I get a JSON array in one cell when doing an INNER JOIN where more than one row matches the join?
Example tables:
T1:
id | name
---+-----
 1 | Tom
 2 | Dom
T2:
user_id | product
--------+--------
      1 | Milk
      2 | Cookies
      2 | Banana
Naturally I do SELECT * FROM T1 INNER JOIN T2 ON T1.id = T2.user_id.
But then I get:
id | Name | product
---+------+--------
 1 | Tom  | Milk
 2 | Dom  | Cookies
 2 | Dom  | Banana
But I want to get:
id | Name | product
---+------+-----------------------------------------------
 1 | Tom  | [{"product":"Milk"}]
 2 | Dom  | [{"product":"Cookies"}, {"product":"Banana"}]
If I use aggregate functions, I have to put every other column in the GROUP BY (I have at least 10 of them), and the whole query takes more than 5 minutes.
My T1 is around 4,000 rows and T2 around 300,000 rows, each associated with some row in T1.
Is there a better way?
Using LATERAL, you can solve it as in the example below:
-- The query
SELECT *
FROM table1 t1,
     LATERAL ( SELECT jsonb_agg(
                          jsonb_build_object('product', product)
                      )
               FROM table2
               WHERE user_id = t1.id
             ) t2(product);
-- Result
id | name | product
----+------+-------------------------------------------------
1 | Tom | [{"product": "Milk"}]
2 | Dom | [{"product": "Cookies"}, {"product": "Banana"}]
(2 rows)
-- Test data
CREATE TABLE IF NOT EXISTS table1 (
    id int,
    "name" text
);
INSERT INTO table1
VALUES ( 1, 'Tom' ),
       ( 2, 'Dom' );
CREATE TABLE IF NOT EXISTS table2 (
    user_id int,
    product text
);
INSERT INTO table2
VALUES ( 1, 'Milk' ),
       ( 2, 'Cookies' ),
       ( 2, 'Banana' );
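If you want to see the shape of the result without a Postgres instance, the jsonb_agg step can be mirrored in plain Python; this is just an illustration of the grouping, not a substitute for doing it in the database:

```python
import json
from collections import defaultdict

# Group T2 rows by user_id, then attach the JSON array to each T1 row --
# the same shape jsonb_agg(jsonb_build_object(...)) produces in Postgres.
t1 = [(1, "Tom"), (2, "Dom")]
t2 = [(1, "Milk"), (2, "Cookies"), (2, "Banana")]

products = defaultdict(list)
for user_id, product in t2:
    products[user_id].append({"product": product})

result = [(id_, name, json.dumps(products[id_])) for id_, name in t1]
for row in result:
    print(row)
# (1, 'Tom', '[{"product": "Milk"}]')
# (2, 'Dom', '[{"product": "Cookies"}, {"product": "Banana"}]')
```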

SQL Ignore duplicate primary keys

Imagine you have a set of results from a SELECT statement:
ID (pk) | Name | Address
--------+------+--------
      1 | a    | b
      1 | c    | d
      1 | e    | f
      2 | a    | b
      3 | a    | d
      2 | a    | d
Is it possible to alter the SQL statement to get one record ONLY for the record with ID 1?
I have a SELECT statement that displays multiple values which can have the same primary key. I want to only take one of those records, if say, I have 5 records with the same primary key.
SQL: http://pastebin.com/cFCBA2Uy
Screenshot: http://i.imgur.com/UlMBZhC.png
What I want is to show only one file which is for e.g. File Number: 925, 890
You stated that it doesn't matter which row is chosen when there is more than one row for the same id; you just want one row per id.
The following query does what you asked for:
DECLARE @T table
(
    id int,
    name varchar(50),
    address varchar(50)
);
INSERT INTO @T VALUES
(1, 'a', 'b'),
(1, 'c', 'd'),
(1, 'e', 'f'),
(2, 'a', 'b'),
(3, 'a', 'd'),
(2, 'a', 'd');
WITH A AS
(
    SELECT
        t.id, t.name, t.address,
        ROW_NUMBER() OVER (PARTITION BY id ORDER BY (SELECT NULL)) AS RowNumber
    FROM
        @T t
)
SELECT
    A.id, A.name, A.address
FROM
    A
WHERE
    A.RowNumber = 1
But I think there should be a criterion; if you have one, express it in the ORDER BY inside the OVER clause.
EDIT:
Here you have the result:
+----+------+---------+
| id | name | address |
+----+------+---------+
| 1 | a | b |
| 2 | a | b |
| 3 | a | d |
+----+------+---------+
Disclaimer: the query I wrote is non-deterministic, different conditions (indexes, statistics, etc) might lead to different results.
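SQLite has supported window functions since 3.25, so the ROW_NUMBER pattern can be tried with Python's sqlite3. Here the ORDER BY inside OVER is made deterministic (name, address), as the answer recommends, which is why the output is stable:

```python
import sqlite3

# ROW_NUMBER "pick one row per id", with a deterministic ORDER BY
# (name, address) instead of ORDER BY (SELECT NULL).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T (id INT, name TEXT, address TEXT);
INSERT INTO T VALUES
  (1,'a','b'), (1,'c','d'), (1,'e','f'),
  (2,'a','b'), (3,'a','d'), (2,'a','d');
""")

rows = conn.execute("""
WITH A AS (
    SELECT id, name, address,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY name, address) AS rn
    FROM T
)
SELECT id, name, address FROM A WHERE rn = 1 ORDER BY id
""").fetchall()
print(rows)  # [(1, 'a', 'b'), (2, 'a', 'b'), (3, 'a', 'd')]
```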

Insert -> select and add my column

How can I add my own value for a column?
For example, t1:
id | name | surname | mycolumn
---+------+---------+---------
 1 | f    | g       |
and t2:
u_id | u_name | u_surname
-----+--------+----------
   1 | 2f     | 2g
So, the query:
INSERT INTO t1 SELECT (u_name,u_surname) FROM t2 WHERE u_id = 1
How do I set the value of mycolumn from my variable?
If I understood your question: you are trying to insert values into a table from another table, but they have different column names and a different column count. In that case you can simply alias the columns of the second table in the SELECT, and for the extra column use NULL if you don't have a value yet:
INSERT INTO t1
SELECT u_id id,u_name name,u_surname surname, null mycolumn
FROM t2 WHERE u_id = 1
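On the "my variable" part: the usual way is to bind it as a query parameter rather than splice it into the SQL string. A sketch with Python's sqlite3 (the value 'my value' is just a placeholder):

```python
import sqlite3

# Supplying "my variable" for the extra column as a bound parameter,
# so the INSERT ... SELECT fills mycolumn from application code.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INT, name TEXT, surname TEXT, mycolumn TEXT);
CREATE TABLE t2 (u_id INT, u_name TEXT, u_surname TEXT);
INSERT INTO t2 VALUES (1, '2f', '2g');
""")

my_value = "my value"  # placeholder for your variable
conn.execute(
    "INSERT INTO t1 SELECT u_id, u_name, u_surname, ? FROM t2 WHERE u_id = 1",
    (my_value,),
)

rows = conn.execute("SELECT * FROM t1").fetchall()
print(rows)  # [(1, '2f', '2g', 'my value')]
```

Parameter binding also avoids SQL injection and quoting problems that come with string concatenation.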