INSERT INTO ... RETURNING multiple columns (PostgreSQL) - sql

I've searched around for an answer and it seems definitive but I figured I would double check with the Stack Overflow community:
Here's what I'm trying to do:
INSERT INTO my_table VALUES (a, b, c)
RETURNING (SELECT x, y, z FROM x_table, y_table, z_table
WHERE xid = a AND yid = b AND zid = c)
I get an error telling me I can't return more than one column.
It works if I tell it SELECT x FROM x_table WHERE xid = a.
Is this at all possible in a single query as opposed to creating a separate SELECT query?
I'm using PostgreSQL 8.3.

Try this.
with aaa as (
INSERT INTO my_table VALUES(a, b, c)
RETURNING a, b, c)
SELECT x, y, z FROM x_table, y_table, z_table
WHERE xid = (select a from aaa)
AND yid = (select b from aaa)
AND zid = (select c from aaa);
A similar query works in 9.3.
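If you are on 9.1 or later (data-modifying CTEs were added in 9.1), you can also join the returned row directly instead of using three scalar subqueries; a rough sketch with the names from the question:
with ins as (
INSERT INTO my_table VALUES (a, b, c)
RETURNING a, b, c)
SELECT x, y, z
FROM ins
JOIN x_table ON xid = ins.a
JOIN y_table ON yid = ins.b
JOIN z_table ON zid = ins.c;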

corvinusz's answer didn't work on 8.3, but it gave me a great idea that did, so thanks!
INSERT INTO my_table VALUES (a, b, c)
RETURNING (SELECT x FROM x_table WHERE xid = a),
(SELECT y FROM y_table WHERE yid = b),
(SELECT z FROM z_table WHERE zid = c)
The form in the question fails because a subquery used as an expression in RETURNING is a scalar subquery, which may only return a single column; giving each column its own single-column subquery, as above, works.

I found this approach works inside a plpgsql DO block (or a function), where RETURNING ... INTO stores the inserted values in the declared variables:
DO $$
DECLARE
returner_ID int;
returner_Name text;
returner_Age int;
BEGIN
INSERT INTO schema.table
("ID", "Name", "Age")
VALUES
('1', 'Steven Grant', '30')
RETURNING
"ID",
"Name",
"Age"
INTO
returner_ID,
returner_Name,
returner_Age;
END $$;
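If you need this in a reusable function rather than a one-off DO block, the same pattern can hand the inserted row back to the caller; a minimal sketch (insert_person, the p_* parameters and the out_* output columns are made-up names, and schema.table is the placeholder from above):
CREATE OR REPLACE FUNCTION insert_person(p_id int, p_name text, p_age int)
RETURNS TABLE (out_id int, out_name text, out_age int)
LANGUAGE plpgsql AS $$
BEGIN
-- return the freshly inserted row straight to the caller
RETURN QUERY
INSERT INTO schema.table ("ID", "Name", "Age")
VALUES (p_id, p_name, p_age)
RETURNING "ID", "Name", "Age";
END;
$$;
Called as SELECT * FROM insert_person(1, 'Steven Grant', 30);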

Related

Snowflake: Decimal or null input to function results in "Unsupported subquery type"

Given the following function:
CREATE
OR REPLACE FUNCTION myfunction(a float, b float, c float)
RETURNS float AS
$$
select sum(1/(1+exp(-(series - c)/4)))
from (
select (a + ((row_number()) over(order by 0))*1) series
from table(generator(rowcount => 10000)) x
qualify series <= b
)
$$;
I get all the expected results when executing the following queries:
select
myfunction(1, 10, 1);
select
myfunction(1, 100, 1);
select
myfunction(1, 10, 1.1);
select
myfunction(0, 1, 89.87);
select
myfunction(0, 1, null);
However when I run the following query:
select
myfunction(a, b, c)
from
(
select
1 as a,
10 as b,
1.1 as c
union
select
0 as a,
1 as b,
null as c
);
I get an error:
"Unsupported subquery type cannot be evaluated".
While this query does work:
select
a, b, myfunction(a, b, c)
from
(
select
1 as a,
10 as b,
1 as c
union
select
1 as a,
100 as b,
1 as c
);
Why can't Snowflake handle null or decimal numbers in the 'c' column when I input multiple rows while individual rows weren't a problem?
And how can this function be rewritten to be able to handle these cases?
SQL UDFs are converted to subqueries (for now), and if Snowflake cannot determine the data type returned from these subqueries, you get the "Unsupported subquery" error. The issue is not about decimals or NULLs; it is that the A and C arguments (which are used inside the SUM()) contain different values across the rows. For example, the following ones work:
select
myfunction(a, b, c )
from
(
select
1 as a,
1 as b,
1.1 as c
union
select
1 as a,
100 as b,
1.1 as c
);
select
myfunction(a, b, c )
from
(
select
1 as a,
1 as b,
null as c
union
select
1 as a,
100 as b,
null as c
);
You may hit these kinds of errors when you try to write complex functions as SQL UDFs. Sometimes rewriting them can help, but I don't see a way for this one. As a workaround, you may rewrite it in JavaScript, because JS UDFs are not converted to subqueries:
CREATE
OR REPLACE FUNCTION myfunction(a float, b float, c float)
RETURNS float
language javascript AS
$$
var res = 0.0;
for (let series = A + 1; series <= B; series++) {
res += (1/(1+Math.exp(-(series - C)/4)));
}
return res;
$$;
According to my tests, the above UDF returns the same result as the SQL version, and it doesn't hit "Unsupported subquery" error.
Weird one. Can you try selecting from the subquery and running it through a cast?
Like this:
select a, b, c
from
(select cast(a as float) as a, cast(b as float) as b, cast(c as float) as c from
(
select
1 as a,
10 as b,
1 as c
union
select
1 as a,
100 as b,
null as c
) as t) as x
In the end, implementing it as a Python UDF also allowed for handling all the edge cases:
CREATE
OR REPLACE FUNCTION myfunction(a float, b float, c float)
returns float
language python
runtime_version = '3.8'
handler = 'compute'
as
$$
def compute(a, b, c):
    import math
    if b < a:
        return None
    if c is None:
        return None
    res = []
    step_size = 1
    it = a
    while it < b:
        res.append(it)
        it += step_size
    res = sum([1/(1+math.exp(-1*(i-c)/4)) for i in res])
    return res
$$;
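A quick sanity check against the rows from the question, assuming the Python UDF above has been created (the row with the NULL c should simply come back as NULL):
select a, b, myfunction(a, b, c)
from (
select 1 as a, 10 as b, 1.1 as c
union
select 0 as a, 1 as b, null as c
);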

Record type comparison with different numbers of columns isn't failing

Why does the following query not trigger a "cannot compare record types with different numbers of columns" error in PostgreSQL 11.6?
with
s AS (SELECT 1)
, main AS (
SELECT (a) = (b) , (a) = (a), (b) = (b), a, b -- I expect (a) = (b) fails
FROM s
, LATERAL (select 1 as x, 2 as y) AS a
, LATERAL (select 5 as x) AS b
)
select * from main;
While this one does:
with
x AS (SELECT 1)
, y AS (select 1, 2)
select (x) = (y) from x, y;
See the note in the docs on row comparison
Errors related to the number or types of elements might not occur if the comparison is resolved using earlier columns.
In this case, because a.x = 1 and b.x = 5, the comparison returns false on the first column without ever noticing that the number of columns doesn't match. Change the values so the first columns are equal, and you will get the exception (which is also why the second query does):
testdb=# with
s AS (SELECT 1)
, main AS (
SELECT a = b , (a) = (a), (b) = (b), a, b -- I expect (a) = (b) fails
FROM s
, LATERAL (select 5 as x, 2 as y) AS a
, LATERAL (select 5 as x) AS b
)
select * from main;
ERROR: cannot compare record types with different numbers of columns

Insert Values into table of specific row

I am looking to insert some values into a column based on the selection of a specific row. I have a table with columns a, b, c, and d. I want to insert the values 1, 2, and 3 into columns b, c, and d when column a = X. I cannot find how to do this.
Toad for Oracle is my platform, and I am looking for SQL code.
You can either update them one at a time:
update mytable set b = 1 where a = X;
update mytable set c = 2 where a = X;
update mytable set d = 3 where a = X;
Or update them all in one go:
update mytable set b = 1,c = 2,d = 3 where a = X;
Alternatively, assuming 'a' is a primary key column or unique index and there is only 1 row where a = X, if you only have 4 columns and you want to update 3 of them you could delete your row and re-insert the whole lot:
delete from mytable where a = X;
insert into mytable values(X, 1, 2, 3);
You can use INSERT INTO ... SELECT and put the condition on the SELECT, for example (source_table here is just a stand-in for wherever the values come from):
INSERT
INTO table_name (b, c, d)
SELECT bValue, cValue, dValue
FROM source_table
/* Select Condition */
WHERE a = 1

Multiple Columns in an "in" statement

I am using DB2 and I am trying to write a query which checks multiple columns against a given set of values, e.g. fields a, b and c against the values x, y, z, f. The only way I can think of is writing the same condition three times with OR, i.e. field a in ('x','y','z','f') or field b in ... and so on. Please let me know if there is a more efficient and easier way to accomplish this. I am looking for a query that returns yes if any of the conditions is true, else no. Please suggest!
This may or may not work on as400:
create table a (a int not null, b int not null);
insert into a (a,b) values (1,1),(1,3),(2,3),(0,23);
select a.*
from a
where a in (1,2) or b in (1,2);
A B
----------- -----------
1 1
1 3
2 3
Rewriting as a join:
select a.*
from a
join ( values (1),(2) ) b (x)
on b.x in (a.a, a.b);
A B
----------- -----------
1 1
1 3
2 3
Assuming the column data types are the same, create a subquery that combines all the columns you want to search into one column with a UNION, then apply your IN to that column:
SELECT *
FROM (
SELECT
YOUR_TABLE_PRIMARY_KEY
,A AS Col
FROM YOUR_TABLE
UNION ALL
SELECT
YOUR_TABLE_PRIMARY_KEY
,B AS Col
FROM YOUR_TABLE
UNION ALL
SELECT
YOUR_TABLE_PRIMARY_KEY
,C AS Col
FROM YOUR_TABLE
) AS SQ
WHERE
SQ.Col IN ('x','y','z','f')
Make sure to include the table key so you know which row the data refers to
You can create a regular expression that describes the set of characters and use it with XQuery.
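No code was given for that suggestion; a rough sketch of the same idea using REGEXP_LIKE rather than XQuery, assuming a Db2 release that has it and borrowing the names (mytbl, flda, fldb, fldc) from the answer below:
select t.*,
case when regexp_like(flda, '^[xyzf]$')
or regexp_like(fldb, '^[xyzf]$')
or regexp_like(fldc, '^[xyzf]$')
then 'yes' else 'no' end as matched
from mytbl t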
Assuming you're on a supported version of the OS (tested on 7.1 TR6), this should work...
with sel (val) as (values ('x'),('y'),('f'))
select * from mytbl
where flda in (select val from sel)
or fldb in (select val from sel)
or fldc in (select val from sel)
Expanding on the above, since the OP asked for "if any condition is true return yes, else no":
Assuming you've got the key of a row to check, would 'yes' or the empty set be good enough? Here somekey is the key of the row you want to check.
with sel (val) as (values ('x'),('y'),('f'))
select 'yes' from mytbl
where thekey = somekey
and ( flda in (select val from sel)
or fldb in (select val from sel)
or fldc in (select val from sel)
)
It's actually rather difficult to return a value when you don't have a matching row. Here's one way. Note I've switched to 1 = yes, 0 = no.
with sel (val) as (values ('x'),('y'),('f'))
select 1 from mytbl
where thekey = somekey
and ( flda in (select val from sel)
or fldb in (select val from sel)
or fldc in (select val from sel)
)
UNION ALL
select 0
from sysibm.sysdummy1
order by 1 desc
fetch first row only
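An alternative sketch for the same yes/no requirement: let an aggregate produce the single row, so the UNION ALL / fetch first trick isn't needed (same assumed names as above; untested):
with sel (val) as (values ('x'),('y'),('f'))
select coalesce(max('yes'), 'no') from mytbl
where thekey = somekey
and ( flda in (select val from sel)
or fldb in (select val from sel)
or fldc in (select val from sel)
)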

Easiest way to eliminate NULLs in SELECT DISTINCT?

I am working on a query that is fairly similar to the following:
CREATE TABLE #test (a char(1), b char(1))
INSERT INTO #test(a,b) VALUES
('A',NULL),
('A','B'),
('B',NULL),
('B',NULL)
SELECT DISTINCT a,b FROM #test
DROP TABLE #test
The result is, unsurprisingly,
a b
-------
A NULL
A B
B NULL
The output I would like to see in actuality is:
a b
-------
A B
B NULL
That is, if a column has a value in some records but not in others, I want to throw out the row with NULL for that column. However, if a column has a NULL value for all records, I want to preserve that NULL.
What's the simplest/most elegant way to do this in a single query?
I have a feeling that this would be simple if I weren't exhausted on a Friday afternoon.
Try this:
select distinct * from test
where b is not null or a in (
select a from test
group by a
having max(b) is null)
You can get the fiddle here.
Note if you can only have one non-null value in b, this can be simplified to:
select a, max(b) from test
group by a
Try this:
create table test(
x char(1),
y char(1)
);
insert into test(x,y) values
('a',null),
('a','b'),
('b', null),
('b', null)
Query:
with has_all_y_null as
(
select x
from test
group by x
having sum(case when y is null then 1 end) = count(x)
)
select distinct x,y from test
where
(
-- if a column has a value in some records but not in others,
x not in (select x from has_all_y_null)
-- I want to throw out the row with NULL
and y is not null
)
or
-- However, if a column has a NULL value for all records,
-- I want to preserve that NULL
(x in (select x from has_all_y_null))
order by x,y
Output:
X Y
A B
B NULL
Live test: http://sqlfiddle.com/#!3/259d6/16
EDIT
Seeing Mosty's answer, I simplified my code:
with has_all_y_null as
(
select x
from test
group by x
-- having sum(case when y is null then 1 end) = count(x)
-- should have thought of this instead of the code above. Mosty's logic is good:
having max(y) is null
)
select distinct x,y from test
where
y is not null
or
(x in (select x from has_all_y_null))
order by x,y
I just prefer the CTE approach; it has more self-documenting logic :-)
You can also put comments on the non-CTE approach, if you make a point of doing so:
select distinct * from test
where b is not null or a in
( -- has all b null
select a from test
group by a
having max(b) is null)
;WITH CTE
AS
(
SELECT DISTINCT * FROM #test
)
SELECT a,b
FROM CTE
ORDER BY CASE WHEN b IS NULL THEN 1 ELSE 0 END, b;
SELECT DISTINCT t.a, t.b
FROM #test t
WHERE b IS NOT NULL
OR NOT EXISTS (SELECT 1 FROM #test u WHERE t.a = u.a AND u.b IS NOT NULL)
ORDER BY t.a, t.b
This is a really weird requirement. I wonder why you need it.
SELECT DISTINCT a, b
FROM test t
WHERE NOT ( b IS NULL
AND EXISTS
( SELECT *
FROM test ta
WHERE ta.a = t.a
AND ta.b IS NOT NULL
)
)
AND NOT ( a IS NULL
AND EXISTS
( SELECT *
FROM test tb
WHERE tb.b = t.b
AND tb.a IS NOT NULL
)
)
Well, I don't particularly like this solution, but it seems the most appropriate to me. Note that your description of what you want sounds exactly like what you get with a LEFT JOIN, so:
SELECT DISTINCT a.a, b.b
FROM #test a
LEFT JOIN #test b ON a.a = b.a
AND b.b IS NOT NULL
SELECT a,b FROM #test t where b is not null
union
SELECT a,b FROM #test t where b is null
and not exists(select 1 from #test where a=t.a and b is not null)
Result:
a b
---- ----
A B
B NULL
I'll just put here a mix of two answers that solved my issue, because my View was more complex
--IdCompe int,
--Nome varchar(30),
--IdVanBanco int,
--IdVan int
--FlagAtivo bit,
--FlagPrincipal bit
select IdCompe
, Nome
, max(IdVanBanco)
, max(IdVan)
, CAST(MAX(CAST(FlagAtivo as INT)) AS BIT) FlagAtivo
, CAST(MAX(CAST(FlagPrincipal as INT)) AS BIT) FlagPrincipal
from VwVanBanco
where IdVan = {IdVan} or IdVan is null
group by IdCompe, Nome order by IdCompe asc
Thanks to mosty mostacho and
kenwarner