PostgreSQL: How to insert multiple values without multiple selects?

I need to insert rows into a table with two columns, where one value is constant but fetched from another table, and the other is the actual content that changes.
Currently I have something like
INSERT INTO table (id, content) VALUES
((SELECT id FROM customers WHERE name = 'Smith'), 1),
((SELECT id FROM customers WHERE name = 'Smith'), 2),
((SELECT id FROM customers WHERE name = 'Smith'), 5),
...
As this is super ugly, how can I do the above in Postgres without the constant SELECT repetition?

Yet another solution:
insert into table (id, content)
select id, unnest(array[1, 2, 5]) from customers where name = 'Smith';

You can cross join the result of the select with your values:
INSERT INTO table (id, content)
select c.id, d.nr
from (
select id
from customers
where name = 'Smith'
) as c
cross join (values (1), (2), (5) ) as d (nr);
This assumes that the name is unique (but so does your original solution).
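If the name could match more than one customer, you could pin the lookup down to a single row; here is a minimal sketch of the same query with the subquery restricted (picking the lowest id is an assumption about which row you want):
INSERT INTO table (id, content)
select c.id, d.nr
from (
  select id
  from customers
  where name = 'Smith'
  order by id   -- assumption: prefer the lowest id if several customers share the name
  limit 1
) as c
cross join (values (1), (2), (5)) as d (nr);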

Well, I believe you can do something like this (wrapped in an anonymous DO block, with the variable renamed so it does not clash with the id column):
DO $$
DECLARE
  v_id customers.id%TYPE;
BEGIN
  select c.id into v_id from customers c where c.name = 'Smith';
  INSERT INTO table (id, content) VALUES
  (v_id, 1),
  (v_id, 2),
  ....
END $$;

Related

How do I insert values into a table with a column-wise uniqueness check?

Create table
CREATE TABLE `my_table`
(
id Uint64,
name String,
PRIMARY KEY (id)
);
Insert values
INSERT INTO `my_table`
( id, name )
VALUES (1, 'name1'),
(2, 'name2'),
(3, 'name3');
#   id   name
0   1    "name1"
1   2    "name2"
2   3    "name3"
How can I add VALUES (4, 'name1') but skip adding VALUES (3, 'name1'), since id 3 already exists?
The available syntax is described here: https://cloud.yandex.com/docs/ydb/yql/reference/syntax/insert_into
From the documentation link that you provided in the comments, I see that the database you use does not support a statement equivalent to INSERT OR IGNORE... to suppress errors when a unique constraint is violated.
As an alternative you can use INSERT ... SELECT.
If your database supports EXISTS:
INSERT INTO my_table
SELECT 3, 'name1'
WHERE NOT EXISTS (SELECT * FROM my_table WHERE id = 3);
Or you can use a LEFT JOIN:
INSERT INTO my_table
SELECT t.id, t.name
FROM (SELECT 3 AS id, 'name1' AS name) AS t
LEFT JOIN my_table AS m
ON m.id = t.id
WHERE m.id IS NULL;
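Applying the same pattern to the row you do want, (4, 'name1') goes through because no row with id = 4 exists yet; a sketch using the EXISTS form above (assuming your database accepts it):
INSERT INTO my_table
SELECT 4, 'name1'
WHERE NOT EXISTS (SELECT * FROM my_table WHERE id = 4);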

How to get unique records from 3 tables

I have 3 tables and I am trying to get unique results from all 3 tables (including other columns from each table).
I have tried the UNION approach, but that approach only works when I have a single column selected from each table.
As soon as I want another corresponding column value from each table, I don't get unique values for the field I am trying to get.
Sample Database and query available here as well: http://www.sqlfiddle.com/#!18/1b9a6/10
Here are the example tables I have created.
CREATE TABLE TABLEA
(
id int,
city varchar(6)
);
INSERT INTO TABLEA ([id], [city])
VALUES
(1, 'A'),
(2, 'B'),
(3, 'C');
CREATE TABLE TABLEB
(
id int,
city varchar(6)
);
INSERT INTO TABLEB ([id], [city])
VALUES
(1, 'B'),
(2, 'C'),
(3, 'D');
CREATE TABLE TABLEC
(
id int,
city varchar(6)
);
INSERT INTO TABLEC ([id], [city])
VALUES
(1, 'C'),
(2, 'D'),
(2, 'E');
Desired result:
A,B,C,D,E
Unique city values from all 3 tables combined. By unique, I mean DISTINCT city across the combination of all 3 tables. Yes, the id differs for common values between tables, but in my use case it doesn't matter whether the id comes from table A, B, or C, as long as I get DISTINCT (aka unique) city values across all 3 tables.
I tried this query but no luck (city B is missing in the output):
SELECT city, id
FROM
(SELECT city, id
FROM TABLEA
WHERE city NOT IN (SELECT city FROM TABLEB
UNION
SELECT city FROM TABLEC)
UNION
SELECT city, id
FROM TABLEB
WHERE city NOT IN (SELECT city FROM TABLEA
UNION
SELECT city FROM TABLEC)
UNION
SELECT city, id
FROM TABLEC) AS mytable
Try this. It should give you each distinct city together with the id of its first appearance:
select distinct min(id) over (partition by city) as id, city
from (
  select * from TABLEA
  union all
  select * from TABLEB
  union all
  select * from TABLEC
) uni
You got the right idea; just wrap the UNION results in a subquery/CTE and then apply the DISTINCT:
WITH TABLEE AS (
SELECT city, id FROM TABLEA
UNION
SELECT city, id FROM TABLEB
UNION
SELECT city, id FROM TABLEC
)
SELECT DISTINCT city
FROM TABLEE
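Since the desired result is only the list A, B, C, D, E, a plain UNION of the single city column also works, because UNION already removes duplicates; a minimal sketch without the ids:
SELECT city FROM TABLEA
UNION
SELECT city FROM TABLEB
UNION
SELECT city FROM TABLEC;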

How to Select Specific Rows into a New Table?

I am using SQLite 3. I have a table MyTable, as follows:
Create table mytable (ID INTEGER, OrderID INTEGER);
Insert into mytable (ID, OrderID) values (1, 1);
Insert into mytable (ID, OrderID) values (1, 2);
Insert into mytable (ID, OrderID) values (2, 1);
Insert into mytable (ID, OrderID) values (2, 3);
Insert into mytable (ID, OrderID) values (3, 1);
For two rows with the same ID but different OrderID, like (1, 1) and (1, 2), we will call them duplicate rows.
Now I need to pick out all duplicate rows and put them into a new table called MyDupTable. For the above sample, MyDupTable should contain:
(1, 1);
(1, 2);
(2, 1);
(2, 3);
So I use the following statement:
Select * into MyDupTable from MyTable group by ID having Count(ID) > 1;
But SQLite gives the error message 'near "into": syntax error'. Why?
Thanks
You can do it by using a sub-query: the sub-query picks all the duplicate IDs, and then IN is used to pick the rest of the columns from the table and insert them into the new table:
insert into MyDupTable
select * from mytable where ID in(
select ID from mytable
group by ID
having Count(*) > 1
)
You can also create the table directly from the existing table:
CREATE TABLE MyDupTable AS
select * from mytable where ID in(
select ID from mytable
group by ID
having Count(*) > 1
)
Analysis of your query:
Select * into MyDupTable from MyTable group by ID having Count(ID) > 1;
1. You use GROUP BY ID but select every column from MyTable; each column in the select list must either appear in the GROUP BY clause or be aggregated.
2. SQLite does not support SELECT ... INTO; see the SQLite INSERT documentation for the supported syntax.
I would use an INSERT INTO ... statement with EXISTS:
INSERT INTO MyDupTable (ID, orderid)
SELECT ID, orderid
FROM mytable mt
WHERE EXISTS (SELECT 1 FROM mytable mt1 WHERE mt1.ID = mt.ID AND mt.orderid <> mt1.orderid);
Notes:
Always qualify all column names when you use an INSERT INTO statement.
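If MyDupTable does not exist yet, create it before the INSERT; a minimal end-to-end sketch (the column definitions are assumed to mirror mytable):
CREATE TABLE MyDupTable (ID INTEGER, OrderID INTEGER);
INSERT INTO MyDupTable (ID, OrderID)
SELECT mt.ID, mt.OrderID
FROM mytable mt
WHERE EXISTS (SELECT 1 FROM mytable mt1 WHERE mt1.ID = mt.ID AND mt.OrderID <> mt1.OrderID);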

How to optimize an SQL query (search by criteria in a history/actual versioning pattern)

I have actual and history tables in PostgreSQL database.
create table actual (id int, name text, version int);
create table history (id int, name text, version int, actual_id int);
When a record changes it is copied to the history table and the actual version increments. Rows cannot be deleted.
E.g. if we have 3 records A1, B1, C1 (1 is the version number) and change B's name, then the actual table will contain A1, B2, C1 and the history table B1. If we then change C's name, the actual data becomes A1, B2, C3 and the history B1, C1.
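For illustration, a minimal sketch of what changing B means under this pattern, using the schema above (the surrogate history.id, the new name 'B_changed', and the new version number are placeholders chosen for the example):
begin;
-- copy the current row of B (actual.id = 2) into history before changing it
insert into history (id, name, version, actual_id)
select 6, name, version, id from actual where id = 2;  -- 6 = next free history id (assumed)
-- apply the change and stamp the new version
update actual set name = 'B_changed', version = 2 where id = 2;  -- 2 = new version number (assumed)
commit;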
Unfortunately, this requires using UNION ALL in order to search records by criteria as of a specific version:
select * from (
  select row_number() over (partition by id order by version desc) as rn,
         id, name, version
  from (
    select h.actual_id as id, h.name, h.version from history h
    union all
    select * from actual
  ) x
  where version <= 2
) y
where rn = 1 and name like '%something%';
This is obviously a full scan by name over the y record set (although both the actual and history tables have indexes on their name columns). And I cannot move the name like '%something%' predicate next to where version <= 2, as it could then match the name in a previous version but not in the latest one.
How can I optimize this query? Is it possible to make Postgres use the indexes on the actual and history tables?
Here below is the whole test case:
create table actual (id int, name text, version int);
create table history (id int, name text, version int, actual_id int);
insert into actual values (1, 'A', 3);
insert into actual values (2, 'B', 2);
insert into actual values (3, 'C', 2);
insert into actual values (4, 'D_changed', 5);
insert into history values (1, 'A', 1, 1);
insert into history values (2, 'B', 1, 2);
insert into history values (3, 'C', 1, 3);
insert into history values (4, 'D_old', 4, 4);
insert into history values (5, 'D_very_old', 2, 4);
select * from (
  select row_number() over (partition by id order by version desc) as rn,
         id, name, version
  from (
    select h.actual_id as id, h.name, h.version from history h
    union all
    select * from actual
  ) x
  where version <= 5 -- and name like '%old%' - this finds wrong record ver=4
) y
where rn = 1 and name like '%old%';
I think the whole approach of using the version number is not a good idea. I managed to replace it with a more traditional one by adding start_date and end_date columns to the tables, and the query became as simple as this:
select * from
(
select h.actual_id as id, h.name, h.start_date, h.end_date from history h
union all
select * from actual
) x
where ? between start_date and end_date
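For reference, a minimal sketch of the restructured tables this query assumes (the column types are illustrative; the column order of actual must match the history select for the UNION ALL to line up):
create table actual (id int, name text, start_date date, end_date date);
create table history (id int, name text, start_date date, end_date date, actual_id int);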

Select DISTINCT from two columns in t-SQL

Say, if I have the following table:
CREATE TABLE tbl (ID INT, Type UNIQUEIDENTIFIER)
INSERT tbl VALUES
(1, N'D9D09D5B-AF63-484C-8229-9762B52972D0'),
(2, N'D9D09D5B-AF63-484C-8229-9762B52972D6'),
(3, N'D9D09D5B-AF63-484C-8229-9762B52972D9'),
(3, N'D9D09D5B-AF63-484C-8229-9762B52972D2'),
(4, N'D9D09D5B-AF63-484C-8229-9762B52972D0')
and I need to select the distinct ID values, along with whichever Type value is associated with each of them. If I do the following:
select distinct id, type from tbl
It returns the whole table when I need only this:
1, N'D9D09D5B-AF63-484C-8229-9762B52972D0'
2, N'D9D09D5B-AF63-484C-8229-9762B52972D6'
3, N'D9D09D5B-AF63-484C-8229-9762B52972D9'
4, N'D9D09D5B-AF63-484C-8229-9762B52972D0'
I know it must be something simple, but what am I missing here?
As per your comment, you want to select the first Type in the list for each ID. You can achieve this by using a subquery like this:
SELECT id, (SELECT TOP 1 type FROM tbl a WHERE id = b.id)
FROM tbl b GROUP BY id
See this SQLFiddle
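Note that TOP 1 without an ORDER BY returns an arbitrary Type per id. If you need a deterministic pick, one option is ROW_NUMBER(); a sketch (ordering by Type is an assumption about what "first" means):
SELECT id, type
FROM (
    SELECT id, type,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY type) AS rn
    FROM tbl
) t
WHERE rn = 1;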
select id, min(type) from tbl group by id