INSERT SELECT FROM VALUES casting - sql

It's often desirable to INSERT from a SELECT expression (e.g. to qualify with a WHERE clause), but this can get postgresql confused about the column types.
Example:
CREATE TABLE example (a uuid primary key, b numeric);
INSERT INTO example
SELECT a, b
FROM (VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL)) as data(a,b);
=> ERROR: column "a" is of type uuid but expression is of type text
This can be fixed by explicitly casting in the values:
INSERT INTO example
SELECT a, b
FROM (VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a'::uuid, NULL::numeric)) as data(a,b);
But that's messy and a maintenance burden. Is there some way to make postgres understand that the VALUES expression has the same type as a table row, i.e. something like
VALUES('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL)::example%ROWTYPE
Edit:
The suggestion of using (data::example).* is neat, but unfortunately it seems to completely screw up the postgres query planner when combined with a WHERE clause like so:
INSERT INTO example
SELECT (data::example).*
FROM (VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL)) as data
WHERE NOT EXISTS (SELECT * FROM example
WHERE (data::example)
IS NOT DISTINCT FROM example);
This takes minutes with a large table.

You can cast a record to a row type of your table:
INSERT INTO example
SELECT (data::example).*
FROM (
VALUES
('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL),
('54514c89-f188-490a-abbb-268f9154ab2c', 42)
) as data;
data::example casts the complete row to a record of type example. The (...).* then turns that into the columns defined in the table type example.
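Regarding the slow WHERE NOT EXISTS from the question's edit: a minimal sketch of a possible workaround, assuming the slowdown comes from the whole-row IS NOT DISTINCT FROM comparison preventing use of the primary key index. Restricting the anti-join to the key column a keeps the cast for the INSERT, but note it only checks the key, not the full row:
INSERT INTO example
SELECT (data::example).*
FROM (VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL)) as data
WHERE NOT EXISTS (SELECT 1 FROM example e
                  WHERE e.a = (data::example).a);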

You could use VALUES directly:
INSERT INTO example(a, b)
VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL);
DBFiddle Demo
Or just cast once:
INSERT INTO example(a, b)
SELECT a::uuid, b::numeric
FROM (VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL),
('bb53b5a8-d453-11e7-9296-cec278b6b50a',1) ) as data(a,b);
DBFiddle Demo2
Note: please always explicitly define the column list.
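A small illustration of why the explicit column list matters (hypothetical scenario: the table is later recreated with its columns in a different order):
-- Relies on column order, so it breaks if example is ever redefined differently:
INSERT INTO example
VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL);
-- Stays correct regardless of column order:
INSERT INTO example(a, b)
VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL);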


How to return ids of rows with conflicting values?

I am looking to insert or update values in an SQLite database (version > 3.35) while avoiding multiple queries. UPSERT along with RETURNING seems promising:
CREATE TABLE phonebook2(
name TEXT PRIMARY KEY,
phonenumber TEXT,
validDate DATE
);
INSERT INTO phonebook2(name,phonenumber,validDate)
VALUES('Alice','704-555-1212','2018-05-08')
ON CONFLICT(name) DO UPDATE SET
phonenumber=excluded.phonenumber,
validDate=excluded.validDate
WHERE excluded.validDate>phonebook2.validDate RETURNING name;
This helps me track names corresponding to inserted/modified rows. How can I find the rows where phonebook2 values conflict with the values upserted in the above statement, but no insert or update happened due to the WHERE clause?
The RETURNING clause can't be used to get non-affected rows.
What you can do is execute a SELECT statement before the UPSERT:
WITH cte(name, phonenumber, validDate) AS (VALUES
('Alice', '704-555-1212', '2018-05-08'),
('Bob','804-555-1212', '2018-05-09')
)
SELECT *
FROM phonebook2 p
WHERE EXISTS (
SELECT *
FROM cte c
WHERE c.name = p.name AND c.validDate <= p.validDate
);
In the CTE you may include as many tuples as you want.
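The follow-up upsert can then reuse the same tuples as a multi-row VALUES list; a minimal sketch, assuming the phonebook2 table from the question:
INSERT INTO phonebook2(name, phonenumber, validDate)
VALUES ('Alice', '704-555-1212', '2018-05-08'),
       ('Bob', '804-555-1212', '2018-05-09')
ON CONFLICT(name) DO UPDATE SET
phonenumber=excluded.phonenumber,
validDate=excluded.validDate
WHERE excluded.validDate>phonebook2.validDate RETURNING name;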

SQL Snowflake - Put an SQL list / array into a column

Based on a specific project architecture, I have a LIST ('Bob', 'Alice') that I want to SELECT as a column (and do a specific JOIN afterwards).
Right now, I did:
SELECT *
FROM TABLE(flatten(input => ('Bob', 'Alice'))) as v1
But this resulted in one row / two columns, and I need one column / two rows (to do the JOIN).
Same if I use :
select * from (values ('Bob', 'Alice'))
The basic idea would be to PIVOT; however, the list may be of arbitrary length, so I can't manually list all column names in the PIVOT query...
Also, I can't use the following (which would work):
select * from (values ('Bob'), ('Alice'))
because I inherit the list as a string and can't modify it on the fly.
If you have a fixed set of values that you want to JOIN against, then looking at some of the SQL you have tried, the correct form of VALUES is:
select * from (values ('Bob'), ('Alice'));
or
select * from values ('Bob'), ('Alice');
If you have an existing array, you can FLATTEN it, as in your first example:
SELECT v1.value::text
FROM TABLE(flatten(input => array_construct('Bob', 'Alice'))) as v1;
V1.VALUE::TEXT
Bob
Alice
Or, if you have a string 'Bob, Alice', then use SPLIT_TO_TABLE:
SELECT trim(v1.value)
FROM TABLE(split_to_table('Bob, Alice', ',')) as v1;
If the input is provided as ('Bob','Alice') then STRTOK_SPLIT_TO_TABLE could be used:
SELECT table1.value
FROM table(strtok_split_to_table($$('Bob','Alice')$$, '(),''')) AS table1;
Output:
VALUE
Bob
Alice
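To then do the JOIN mentioned in the question, the split result can be used like any other derived table; a sketch, assuming a hypothetical table my_table with a name column:
SELECT t.*
FROM TABLE(split_to_table('Bob, Alice', ',')) as v1
JOIN my_table t ON t.name = trim(v1.value);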

Use inserted value as a parameter for other inserts

There is a DB2 database with two tables. The first one, table1, has an autoincrement column ID, which is the foreign key for table2.
I am writing an HTML generator for SQL queries. With some input parameters it generates a query or multiple queries; it is not connected to the database.
What I need is to get that autoincrement field and use it in next queries.
So basically, the scenario is:
insert into table1;
select autogenerated field ID;
insert into table2 using that ID;
insert into table2 using that ID;
...some more similar inserts...
insert into table2 using that ID;
And all that SQL query should be generated and then used as a single SQL script.
I was thinking about something like this:
SELECT ID FROM FINAL TABLE (INSERT INTO Table1 (t1column1, t1column2, etc.)
VALUES (t1value1, t1value2, etc.))
But I don't know how I can write the result into a variable so I could use it in the next queries, like this:
INSERT INTO Table2 (foreignKeyCol, t2column1, t2column2, etc.)
VALUES ($ID, t2value1, t2value2, etc.)
I could just paste that select instead of $ID, but the second query can be used several times with the same $ID and different values.
EDIT: DB2 10.5 on Linux.
You can chain several inserts together using CTEs, like so:
WITH idcte (id) as (
SELECT ID FROM FINAL TABLE (
INSERT INTO Table1 (t1column1, t1column2, etc.)
VALUES (t1value1, t1value2, etc.)
)
),
ins1 (id) as (
SELECT foreignKeyCol FROM FINAL TABLE (
INSERT INTO Table2 (foreignKeyCol, t2column1, t2column2, etc.)
SELECT id, t2value1, t2value2, etc.
FROM idcte
)
),
-- more CTEs
SELECT foreignKeyCol FROM FINAL TABLE (
-- your last INSERT ... SELECT FROM
)
Essentially you will have to wrap each INSERT into a SELECT FROM FINAL TABLE for this to work.
Alternatively, you can use a global variable to keep the ID value:
CREATE VARIABLE myNewId INT;
SET myNewId = (SELECT ID FROM FINAL TABLE (
INSERT INTO Table1 (t1column1, t1column2, etc.)
VALUES (t1value1, t1value2, etc.)
));
INSERT INTO Table2 (foreignKeyCol, t2column1, t2column2, etc.)
VALUES (myNewId, t2value1, t2value2, etc.);
DROP VARIABLE myNewId;
This assumes a recent version of Db2 for LUW.
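A concrete sketch of the variable approach as a single script, using hypothetical tables person (with a generated identity column id) and phone:
CREATE VARIABLE myNewId INT;
SET myNewId = (SELECT id FROM FINAL TABLE (
INSERT INTO person (name) VALUES ('Alice')
));
INSERT INTO phone (person_id, number) VALUES (myNewId, '555-0100');
INSERT INTO phone (person_id, number) VALUES (myNewId, '555-0101');
DROP VARIABLE myNewId;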

RETURNING rows using unnest()?

I'm trying to return a set of rows after doing UPDATE.
Something like this.
UPDATE Notis new_noti SET notis = '{}'::noti_record_type[]
FROM (SELECT * FROM Notis WHERE user_id = 2 FOR UPDATE) old_noti
WHERE old_noti.user_id = new_noti.user_id RETURNING unnest(old_noti.notis);
but postgres complains, rightly so:
set-valued function called in context that cannot accept a set
How am I supposed to go about implementing this?
That is, RETURNING a set of rows from SELECTed array after UPDATE?
I'm aware that a function can achieve this using RETURNS SETOF but rather prefer not to if possible.
Use a WITH statement:
WITH upd AS (
UPDATE Notis new_noti SET notis = '{}'::noti_record_type[]
FROM (SELECT * FROM Notis WHERE user_id = 2 FOR UPDATE) old_noti
WHERE old_noti.user_id = new_noti.user_id RETURNING old_noti.notis
)
SELECT unnest(notis) FROM upd;
Use a data-modifying CTE.
You can use a set-returning function in the SELECT list, but it is cleaner to move it to the FROM list with a LATERAL subquery since Postgres 9.3. Especially if you need to extract multiple columns (from a row type like you commented). It would also be inefficient to call unnest() multiple times.
WITH upd AS (
UPDATE notis n
SET notis = '{}'::noti_record_type[] -- explicit cast optional
FROM (
SELECT user_id, notis
FROM notis
WHERE user_id = 2
FOR UPDATE
) old_n
WHERE old_n.user_id = n.user_id
RETURNING old_n.notis
)
SELECT n.*
FROM upd u, unnest(u.notis) n; -- implicit CROSS JOIN LATERAL
If the array can be empty and you want to preserve empty / NULL results use LEFT JOIN LATERAL ... ON true. See:
What is the difference between LATERAL JOIN and a subquery in PostgreSQL?
Call a set-returning function with an array argument multiple times
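A sketch of that LEFT JOIN LATERAL variant, using the same upd CTE as above; users whose notis array is empty or NULL still produce a row (filled with NULLs):
WITH upd AS (
UPDATE notis n
SET notis = '{}'::noti_record_type[]
FROM (
SELECT user_id, notis
FROM notis
WHERE user_id = 2
FOR UPDATE
) old_n
WHERE old_n.user_id = n.user_id
RETURNING old_n.notis
)
SELECT n.*
FROM upd u
LEFT JOIN LATERAL unnest(u.notis) n ON true;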
Also, multiple set-returning functions in the same SELECT can exhibit surprising behavior. Avoid that.
This has been sanitized with Postgres 10. See:
What is the expected behaviour for multiple set-returning functions in SELECT clause?
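A small illustration of the difference, as I understand it: with set-returning functions of different lengths in the SELECT list, Postgres 9.x repeats them until both cycles finish together, while Postgres 10+ runs them in lockstep and pads the shorter one with NULLs:
SELECT unnest(ARRAY[1,2]) AS a, unnest(ARRAY[10,20,30]) AS b;
-- Postgres 9.x: 6 rows (least common multiple of 2 and 3)
-- Postgres 10+: 3 rows, with a NULL in column a for the last row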
Alternative to unnest multiple arrays in parallel before and after Postgres 10:
Unnest multiple arrays in parallel
Related:
Return pre-UPDATE column values using SQL only
Behavior of composite / row values
Postgres has an oddity when assigning a row type (or composite or record type) from a set-returning function to a column list. One might expect that the row-type field is treated as one column and assigned to the respective column, but that is not so. It is decomposed automatically (one row-layer only!) and assigned element-by-element.
So this does not work as expected:
SELECT (my_row).*
FROM upd u, unnest(u.notis) n(my_row);
But this does (like #klin commented):
SELECT (my_row).*
FROM upd u, unnest(u.notis) my_row;
Or the simpler version I ended up using:
SELECT n.*
FROM upd u, unnest(u.notis) n;
Another oddity: A composite (or row) type with a single field is decomposed automatically. Thus, table alias and column alias end up doing the same in the outer SELECT list:
SELECT n FROM unnest(ARRAY[1,2,3]) n;
SELECT n FROM unnest(ARRAY[1,2,3]) n(n);
SELECT n FROM unnest(ARRAY[1,2,3]) t(n);
SELECT t FROM unnest(ARRAY[1,2,3]) t(n); -- except output column name is "t"
For more than one field, the row-wrapper is preserved:
SELECT t FROM unnest(ARRAY[1,2,3]) WITH ORDINALITY t(n); -- requires 9.4+
Confused? There is more. For composite types (the case at hand) like:
CREATE TYPE my_type AS (id int, txt text);
While this works as expected:
SELECT n FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]) n;
You are in for a surprise here:
SELECT n FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]) n(n);
And that's the error I had: When providing a column list, Postgres decomposes the row and assigns provided names one-by-one. Referring to n in the SELECT list does not return the composite type, but only the (renamed) first element. I had mistakenly expected the row type and tried to decompose with (my_row).* - which only returns the first element nonetheless.
Then again:
SELECT t FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]) t(n);
(Be aware that the first element has been renamed to "n"!)
With the new form of unnest() taking multiple array arguments (Postgres 9.4+):
SELECT *
FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]
, ARRAY[(3, 'baz')::my_type, (4, 'bak')::my_type]) n;
Column aliases only for the first two output columns:
SELECT *
FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]
, ARRAY[(3, 'baz')::my_type, (4, 'bak')::my_type]) n(a, b);
Column aliases for all output columns:
SELECT *
FROM unnest(ARRAY[(1,'foo')::my_type, (2,'bar')::my_type]
, ARRAY[(3,'baz')::my_type, (4,'bak')::my_type]) n(a,b,c,d);
db<>fiddle here
Old sqlfiddle
Probably this is what you want. For:
SELECT *
FROM unnest (ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]
, ARRAY[(3, 'baz')::my_type, (4, 'bak')::my_type]) n(a, b);
Use:
SELECT *
FROM unnest (ARRAY[(1, 'foo')::text, (2, 'bar')::text]
, ARRAY[(3, 'baz')::text, (4, 'bak')::text]) WITH ORDINALITY AS t(first_col, second_col);

SQL Server - Contains Multiple Values

I need to retrieve a value of a column with SELECT. But I have multiple values...
I don't know what the user will select in the checkboxes...
Ex:
Insert Into MyTable (dados) Values ('a1') I want the result = Angulo 1
Insert Into MyTable (dados) Values ('a2';'a3') I want the result = Angulo 2
Insert into MyTable (dados) Values ('a3'; a1) I want the result = Angulo 3; Angulo 1
Insert into MyTable (dados) Values ('a6'; 'a7'; 'a4') I want the result = Angulo 6; Angulo 7;Angulo4
I am trying with SELECT CASE WHEN, but it still fails...
I suspect you are asking how to use the IN keyword in your SELECT statements? It is a little unclear what you are trying to do.
Try this:
SELECT *
FROM MyTable
WHERE dados IN ('a6','a7','a4')
Assuming you have a table named MyTable and a column named dados with 3 rows in that table for a6, a7 and a4, this will return all the matches (in this case, all three rows).
Good luck.
When you say:
insert into MyTable(dados)
Values ('a6', 'a7', 'a4')
You are saying "I have one column to put data into called dados." Then, you are providing three values. This will fail in any database (even apart from the fact that the semicolons should be commas).
Perhaps you want:
insert into MyTable(dados)
Values ('a6;a7;a4')
That is only one value, a string.
This suggests a denormalized database. You might want three different rows in a table, one for each value, connected together by some key.
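A sketch of what that normalized design could look like (hypothetical table names; one row per selected value, linked by a key):
CREATE TABLE MyForm
(
FormId int PRIMARY KEY
)
CREATE TABLE MyFormSelection
(
FormId int REFERENCES MyForm(FormId),
dados varchar(255)
)
INSERT INTO MyForm (FormId) VALUES (1)
INSERT INTO MyFormSelection (FormId, dados)
VALUES (1, 'a6'), (1, 'a7'), (1, 'a4')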
Here are some examples if you're using SQL Server 2008 and above:
if(OBJECT_ID('tempdb..#dados') is not null)
DROP TABLE #dados
select top 100 * INTO #dados FROM
(
values(1,2,3),
(4,5,6),
(7,8,9)
) t(a,b,c)
select * FROM #dados
INSERT INTO #dados (a,b,c)
values(11,22,33),
(44,55,66),
(77,88,99)
SELECT * FROM #dados
INSERT INTO #dados (a,b,c)
SELECT * FROM
(
values(111,222,333),
(444,555,666),
(777,888,999)
) t(a,b,c)
SELECT * FROM #dados
If you want to insert multiple rows (not columns) the syntax is
Insert Into
MyTable (dados)
Values
('a1'),
('a2')
Looks like you're trying to ask for two things.
Inserting multiple values would be done in the following way:
Insert Into MyTable (dados) Values ('a6'),('a7'),('a4')
If you want to return the actual values 'Angulo' + the number, you can use the following:
CREATE TABLE MyTable
(
Dados varchar(255)
)
Insert Into MyTable (dados) Values ('a12')
Insert Into MyTable (dados) Values ('a2'),('a3')
Insert Into MyTable (dados) Values ('a3'),('a1')
Insert Into MyTable (dados) Values ('a6'),('a7'),('a4')
SELECT 'Angulo'+ SUBSTRING(dados,PATINDEX('%[0-9]%',dados),LEN(dados))
FROM MyTable
It will find the first number (assuming it's always the first number you're after) and take the rest of the string from there. It will then prefix it with 'Angulo' (e.g. Angulo1, Angulo7, etc.)
If these aren't what you're after, please explain further what you need.