How to Select Specific Rows into a New Table? - sql

I am using SQLite 3. I have a table MyTable, as follows:
Create table MyTable (ID INTEGER, OrderID INTEGER);
Insert into mytable (ID, OrderID) values (1, 1);
Insert into mytable (ID, OrderID) values (1, 2);
Insert into mytable (ID, OrderID) values (2, 1);
Insert into mytable (ID, OrderID) values (2, 3);
Insert into mytable (ID, OrderID) values (3, 1);
For two rows with the same ID but different OrderID, like (1, 1) and (1, 2), we will call them duplicate rows.
Now I need to pick out all duplicate rows and put them into a new table called MyDupTable. For the above sample, MyDupTable should contain:
(1, 1);
(1, 2);
(2, 1);
(2, 3);
So I use the following statement:
Select * into MyDupTable from MyTable group by ID having Count(ID) > 1;
But SQLite gives the error message 'near "into": syntax error'. Why?
Thanks

You can do it by using a sub-query: the sub-query picks all the duplicate IDs, and then, using IN, you select the rest of the columns from the table and insert them into the new table.
insert into MyDupTable
select * from mytable
where ID in (
    select ID from mytable
    group by ID
    having count(*) > 1
);
Alternatively, you can create the new table directly from the existing table:
CREATE TABLE MyDupTable AS
select * from mytable
where ID in (
    select ID from mytable
    group by ID
    having count(*) > 1
);
Analysis of your query:
Select * into MyDupTable from MyTable group by ID having Count(ID) > 1;
1. SQLite does not support SELECT ... INTO; use INSERT INTO ... SELECT or CREATE TABLE ... AS SELECT instead (see the SQLite INSERT documentation).
2. You used GROUP BY ID while selecting every column from MyTable; every non-aggregated column in the select list must appear in the GROUP BY clause.

I would use an INSERT INTO ... SELECT statement with EXISTS:
INSERT INTO MyDupTable (ID, orderid)
SELECT ID, orderid
FROM mytable mt
WHERE EXISTS (SELECT 1 FROM mytable mt1 WHERE mt1.ID = mt.ID AND mt.orderid <> mt1.orderid);
Note:
Always qualify column names when you use an INSERT INTO statement.
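Both fixes above can be checked end to end in SQLite, for instance through Python's built-in sqlite3 module (a minimal sketch using the question's table and data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE MyTable (ID INTEGER, OrderID INTEGER)")
cur.executemany("INSERT INTO MyTable (ID, OrderID) VALUES (?, ?)",
                [(1, 1), (1, 2), (2, 1), (2, 3), (3, 1)])

# CREATE TABLE ... AS SELECT builds MyDupTable from the duplicate IDs
cur.execute("""
    CREATE TABLE MyDupTable AS
    SELECT * FROM MyTable
    WHERE ID IN (SELECT ID FROM MyTable GROUP BY ID HAVING COUNT(*) > 1)
""")

rows = cur.execute(
    "SELECT ID, OrderID FROM MyDupTable ORDER BY ID, OrderID").fetchall()
print(rows)  # → [(1, 1), (1, 2), (2, 1), (2, 3)]
```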

Related

How to filter a table based on queried ids from another table in Snowflake

I'm trying to filter a table based on the queried result from another table.
create temporary table test_table (id number, col_a varchar);
insert into test_table values
(1, 'a'),
(2, 'b'),
(3, 'aa'),
(4, 'a'),
(6, 'bb'),
(7, 'a'),
(8, 'c');
create temporary table test_table_2 (id number, col varchar);
insert into test_table_2 values
(1, 'aa'),
(2, 'bb'),
(3, 'cc'),
(4, 'dd'),
(6, 'ee'),
(7, 'ff'),
(8, 'gg');
Here I want to find all the ids in test_table with value "a" in col_a, and then filter test_table_2 to the rows with one of those ids. I tried the way below, but got an error: SQL compilation error: syntax error line 6 at position 39 unexpected 'cte'.
with cte as
(
select id from test_table
where col_a = 'a'
)
select * from test_table_2 where id in cte;
The approach below does work, but with large tables it tends to be very slow. Is there a better, more efficient way that scales to very large tables?
with cte as
(
select id from test_table
where col_a = 'a'
)
select t2.* from test_table_2 t2 join cte on t2.id=cte.id;
I would express this using exists logic:
SELECT id
FROM test_table_2 t2
WHERE EXISTS (
SELECT 1
FROM test_table t1
WHERE t2.id = t1.id AND
t1.col_a = 'a'
);
This has one advantage over a join in that Snowflake can stop scanning the test_table_2 table as soon as it finds a match.
Your first error can be fixed as below. Joins are usually better suited for lookups than EXISTS or IN clauses when you have a large table.
with cte as
(
select id from test_table
where col_a = 'a'
)
select * from test_table_2 where id in (select distinct id from cte);
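Setting Snowflake-specific performance aside, the EXISTS rewrite can be sanity-checked in any engine; a sketch in SQLite via Python, with the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE test_table (id INTEGER, col_a TEXT)")
cur.execute("CREATE TABLE test_table_2 (id INTEGER, col TEXT)")
cur.executemany("INSERT INTO test_table VALUES (?, ?)",
                [(1, 'a'), (2, 'b'), (3, 'aa'), (4, 'a'),
                 (6, 'bb'), (7, 'a'), (8, 'c')])
cur.executemany("INSERT INTO test_table_2 VALUES (?, ?)",
                [(1, 'aa'), (2, 'bb'), (3, 'cc'), (4, 'dd'),
                 (6, 'ee'), (7, 'ff'), (8, 'gg')])

# keep test_table_2 rows whose id matches a test_table row with col_a = 'a'
rows = cur.execute("""
    SELECT t2.id, t2.col
    FROM test_table_2 t2
    WHERE EXISTS (SELECT 1 FROM test_table t1
                  WHERE t2.id = t1.id AND t1.col_a = 'a')
    ORDER BY t2.id
""").fetchall()
print(rows)  # → [(1, 'aa'), (4, 'dd'), (7, 'ff')]
```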

How do I insert values into a table with a column-wise uniqueness check?

Create table
CREATE TABLE `my_table`
(
id Uint64,
name String,
PRIMARY KEY (id)
);
Insert values
INSERT INTO `my_table`
( id, name )
VALUES (1, 'name1'),
(2, 'name2'),
(3, 'name3');
#    id    name
0    1     "name1"
1    2     "name2"
2    3     "name3"
How do I add VALUES (4, 'name1') but skip adding VALUES (3, 'name1')?
The available syntax is described here: https://cloud.yandex.com/docs/ydb/yql/reference/syntax/insert_into
From the documentation link that you provided in the comments, I see that the database you use does not support a statement equivalent to INSERT OR IGNORE ... to suppress errors when a unique constraint is violated.
As an alternative you can use INSERT ... SELECT.
If your database supports EXISTS:
INSERT INTO my_table
SELECT 3, 'name1'
WHERE NOT EXISTS (SELECT * FROM my_table WHERE id = 3);
Or you can use a LEFT JOIN:
INSERT INTO my_table
SELECT t.id, t.name
FROM (SELECT 3 AS id, 'name1' AS name) AS t
LEFT JOIN my_table AS m
ON m.id = t.id
WHERE m.id IS NULL;
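The NOT EXISTS pattern is portable; a sketch in SQLite via Python showing that the duplicate id is skipped while the new one goes in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO my_table VALUES (?, ?)",
                [(1, 'name1'), (2, 'name2'), (3, 'name3')])

# conditional insert: the row goes in only when the id is not already present
stmt = """
    INSERT INTO my_table
    SELECT ?, ?
    WHERE NOT EXISTS (SELECT * FROM my_table WHERE id = ?)
"""
cur.execute(stmt, (3, 'name1', 3))  # skipped: id 3 already exists
cur.execute(stmt, (4, 'name1', 4))  # inserted: id 4 is new

final = cur.execute("SELECT id, name FROM my_table ORDER BY id").fetchall()
print(final)  # → [(1, 'name1'), (2, 'name2'), (3, 'name3'), (4, 'name1')]
```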

A sql query to create multiple rows in different tables using inserted id

I need to insert a row into one table and use this row's id to insert two more rows into a different table within one transaction. I've tried this
begin;
insert into table default values returning table.id as C;
insert into table1(table1_id, column1) values (C, 1);
insert into table1(table1_id, column1) values (C, 2);
commit;
But it doesn't work. How can I fix it?
You need a CTE, and you don't need a begin/commit to do it in one transaction:
WITH inserted AS (
INSERT INTO ... RETURNING id
)
INSERT INTO other_table (id)
SELECT id
FROM inserted;
Edit:
To insert two rows into a single table using that id, you can do it in one of two ways:
two separate INSERT statements, one in the CTE and one in the "main" part
a single INSERT which joins on a list of values; a row will be inserted for each of those values.
With these tables as the setup:
CREATE TEMP TABLE t1 (id INTEGER);
CREATE TEMP TABLE t2 (id INTEGER, t TEXT);
Method 1:
WITH inserted1 AS (
INSERT INTO t1
SELECT 9
RETURNING id
), inserted2 AS (
INSERT INTO t2
SELECT id, 'some val'
FROM inserted1
RETURNING id
)
INSERT INTO t2
SELECT id, 'other val'
FROM inserted1
Method 2:
WITH inserted AS (
INSERT INTO t1
SELECT 4
RETURNING id
)
INSERT INTO t2
SELECT id, v
FROM inserted
CROSS JOIN (
VALUES
('val1'),
('val2')
) vals(v)
If you run either, then check t2, you'll see it will contain the expected values.
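Data-modifying CTEs are a PostgreSQL feature. In engines without them (SQLite, for example), the same one-parent-two-children effect can be sketched through the driver's last-insert-id, using the t1/t2 tables from above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE t2 (id INTEGER, t TEXT)")

# insert the parent row, then reuse its generated id for two child rows
cur.execute("INSERT INTO t1 DEFAULT VALUES")
new_id = cur.lastrowid
cur.executemany("INSERT INTO t2 (id, t) VALUES (?, ?)",
                [(new_id, 'val1'), (new_id, 'val2')])

children = cur.execute("SELECT id, t FROM t2 ORDER BY t").fetchall()
print(children)  # → [(1, 'val1'), (1, 'val2')]
```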
Please find the below query (note that SCOPE_IDENTITY() is SQL Server syntax; the PostgreSQL equivalents are lastval() or INSERT ... RETURNING):
insert into table1 (columnName) values ('stack2');
insert into table_2 values (SCOPE_IDENTITY(), 'val1', 'val2');

PostgreSQL: How to insert multiple values without multiple selects?

I need to insert rows into a table with two columns, where one value is constant (but fetched from another table) and the other is the actual content, which changes.
Currently I have something like
INSERT INTO table (id, content) VALUES
((SELECT id FROM customers WHERE name = 'Smith'), 1),
((SELECT id FROM customers WHERE name = 'Smith'), 2),
((SELECT id FROM customers WHERE name = 'Smith'), 5),
...
As this is super ugly, how can I do the above in Postgres without the constant SELECT repetition?
Yet another solution:
insert into table (id, content)
select id, unnest(array[1, 2, 5]) from customers where name = 'Smith';
You can cross join the result of the select with your values:
INSERT INTO table (id, content)
select c.id, d.nr
from (
select id
from customers
where name = 'Smith'
) as c
cross join (values (1), (2), (5) ) as d (nr);
This assumes that the name is unique (but so does your original solution).
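The cross-join version translates to other engines with minor syntax changes; a sketch in SQLite via Python (SQLite names the VALUES columns column1, column2, ..., and a hypothetical orders table stands in for the question's table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER, content INTEGER)")
cur.execute("INSERT INTO customers VALUES (7, 'Smith')")

# cross join the single looked-up id with the literal list of content values
cur.execute("""
    INSERT INTO orders (id, content)
    SELECT c.id, d.column1
    FROM (SELECT id FROM customers WHERE name = 'Smith') AS c
    CROSS JOIN (VALUES (1), (2), (5)) AS d
""")

inserted = cur.execute(
    "SELECT id, content FROM orders ORDER BY content").fetchall()
print(inserted)  # → [(7, 1), (7, 2), (7, 5)]
```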
Well, I believe you can do something like this in a PL/pgSQL block (the variable is renamed v_id so it does not conflict with the column name):
DO $$
DECLARE
    v_id customers.id%TYPE;
BEGIN
    SELECT c.id INTO v_id FROM customers c WHERE c.name = 'Smith';
    INSERT INTO table (id, content) VALUES
        (v_id, 1),
        (v_id, 2),
        ....
END $$;

Counting repeated data

I'm trying to get the most-repeated integer in a table. I have tried many ways but could not make it work. The result I'm looking for is:
"james";"108"
The value 108, which I get when I concatenate the two fields loca + locb, is repeated twice, while the others are not. I set up an sqlfiddle with the sample table structure and the query I tried.
Query I tried is :
select * from (
select name,CONCAT(loca,locb),loca,locb
, row_number() over (partition by CONCAT(loca,locb) order by CONCAT(loca,locb) ) as att
from Table1
) tt
where att=1
Edit: adding the complete table structure and data:
CREATE TABLE Table1
(name varchar(50),loca int,locb int)
;
insert into Table1 values ('james',100,2);
insert into Table1 values ('james',100,3);
insert into Table1 values ('james',10,8);
insert into Table1 values ('james',10,8);
insert into Table1 values ('james',10,7);
insert into Table1 values ('james',10,6);
insert into Table1 values ('james',0,7);
insert into Table1 values ('james',10,0);
insert into Table1 values ('james',10,null);
insert into Table1 values ('james',10,null);
What I'm looking for is to get (james, 108), as that value is repeated twice in the entire data set. There is a repetition of (james, 10), but those rows have a NULL locb. Zero and NULL values are to be ignored; only rows with a value in both loca and locb should be considered.
SQL Fiddle
select distinct on (name) *
from (
select name, loca, locb, count(*) as total
from Table1
where loca is not null and locb is not null
group by 1,2,3
) s
order by name, total desc
WITH concat AS (
-- get concat values
SELECT name,concat(loca,locb) as merged
FROM table1 t1
WHERE t1.locb NOTNULL
AND t1.loca NOTNULL
), concat_count AS (
-- calculate count for concat values
SELECT name,merged,count(*) OVER (PARTITION BY name,merged) as merged_count
FROM concat
)
SELECT cc.name,cc.merged
FROM concat_count cc
WHERE cc.merged_count = (SELECT max(merged_count) FROM concat_count)
GROUP BY cc.name,cc.merged;
select name,
newvalue
from (
select name,
CONCAT(loca,locb) newvalue,
COUNT(CONCAT(loca,locb)) as total,
row_number() over (order by COUNT(CONCAT(loca,locb)) desc) as att
from Table1
where loca is not null
and locb is not null
GROUP BY name, CONCAT(loca,locb)
) tt
where att=1
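For a quick check outside the fiddle, the logic of the last query runs in SQLite via Python, using || in place of CONCAT (an assumption: plain string concatenation matches the intended CONCAT semantics here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Table1 (name TEXT, loca INTEGER, locb INTEGER)")
cur.executemany("INSERT INTO Table1 VALUES (?, ?, ?)", [
    ('james', 100, 2), ('james', 100, 3), ('james', 10, 8), ('james', 10, 8),
    ('james', 10, 7), ('james', 10, 6), ('james', 0, 7), ('james', 10, 0),
    ('james', 10, None), ('james', 10, None),
])

# rank the concatenated pairs by frequency and keep only the most frequent one
result = cur.execute("""
    SELECT name, newvalue FROM (
        SELECT name, loca || locb AS newvalue,
               ROW_NUMBER() OVER (ORDER BY COUNT(*) DESC) AS att
        FROM Table1
        WHERE loca IS NOT NULL AND locb IS NOT NULL
        GROUP BY name, loca || locb
    )
    WHERE att = 1
""").fetchall()
print(result)  # → [('james', '108')]
```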