How to cast SELECT result to a typed value - sql

Is there any common practice to use a SELECT result as a typed value, e.g. for function arguments?
Could it be something like this?
func((SELECT number FROM numbers WHERE user_id = 1 LIMIT 1).number::numeric)
I thought about CURSOR for such a task but I'm not really sure. Thank you for any advice!
I'm using PostgreSQL so if there is any specific solution feel free to share.

Use the FROM clause or a common table expression:
SELECT func(a.x, b.y)
FROM (SELECT ... LIMIT 1) AS a
CROSS JOIN (SELECT ... LIMIT 1) AS b;
or
WITH a AS (SELECT ... LIMIT 1),
b AS (SELECT ... LIMIT 1)
SELECT func(a.x, b.y)
FROM a CROSS JOIN b;
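For example, applied to the numbers table from the question (a sketch; func stands for whatever function you are calling, and the numeric cast mirrors the question):
WITH n AS (
    SELECT number::numeric AS number
    FROM numbers
    WHERE user_id = 1
    LIMIT 1
)
SELECT func(n.number)
FROM n;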

Doesn't this do what you want?
func( (SELECT number FROM numbers WHERE user_id = 1 LIMIT 1)::numeric )
The subquery is returning (at most) one value so it is a scalar subquery and is equivalent to a scalar reference in the query.
That said, there are many other ways to express this. For instance:
func( (SELECT number::numeric FROM numbers WHERE user_id = 1 LIMIT 1) )
or:
(SELECT func(number::numeric) FROM numbers WHERE user_id = 1 LIMIT 1)
Or by moving to the FROM clause and using a LATERAL join, or by calculating the result in a CTE or subquery.
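A minimal sketch of the LATERAL variant, again using the numbers table from the question (func is assumed to be your own function):
SELECT f.val
FROM numbers n
CROSS JOIN LATERAL func(n.number::numeric) AS f(val)
WHERE n.user_id = 1
LIMIT 1;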

Related

PostgreSQL: How to return a subarray dynamically using array slices in postgresql

I need to sum a subarray from an array using postgresql.
I need to create a postgresql query that will dynamically do this as the upper and lower indexes will be different for each array.
These indexes will come from two other columns within the same table.
I had the below query that will get the subarray:
SELECT
SUM(t) AS summed_index_values
FROM
(SELECT UNNEST(int_array_column[34:100]) AS t
FROM array_table
WHERE id = 1) AS t;
...but I then realised I couldn't use variables or SELECT statements when using array slices to make the query dynamic:
int_array_column[SELECT array_index_lower FROM array_table WHERE id = 1; : SELECT array_index_upper FROM array_table WHERE id = 1;]
...does anyone know how I can achieve this query dynamically?
No need for sub-selects, just use the column names:
SELECT SUM(t) AS summed_index_values
FROM (
SELECT UNNEST(int_array_column[tb.array_index_lower:tb.array_index_upper]) AS t
FROM array_table tb
WHERE id = 1
) AS t;
Note that it's not recommended to use set-returning functions (unnest) in the SELECT list. It's better to put that into the FROM clause:
SELECT sum(t.val)
FROM (
  SELECT t.val
  FROM array_table tb
  CROSS JOIN UNNEST(tb.int_array_column[tb.array_index_lower:tb.array_index_upper]) AS t(val)
  WHERE tb.id = 1
) AS t;

Calculate MAX for every row in SQL

I have these tables:
Docenza(id, id_facolta, ..., orelez)
Facolta(id, ...)
and I want to obtain, for every facolta, only the id of the Docenza that has the maximum number of orelez, together with that number of orelez:
id_docenzaP facolta1 max(orelez)
id_docenzaQ facolta2 max(orelez)
...
id_docenzaZ facoltaN max(orelez)
How can I do this? This is what I did:
SELECT DISTINCT ... F.nome, SUM(orelez) AS oreTotali
FROM Docenza D
JOIN Facolta F ON F.id = D.id_facolta
GROUP BY F.nome
I obtain something like:
docenzaP facolta1 maxValueForidP
docenzaQ facolta1 maxValueForidQ
...
docenzaR facolta2 maxValueForidR
docenzaS facolta2 maxValueForidS
...
docenzaZ facoltaN maxValueForFacoltaN
How can I take only the max value for every facolta?
Presumably, you just want:
SELECT F.nome, sum(orelez) AS oreTotali
FROM Docenza D JOIN
Facolta F
ON F.id = D.id_facolta
GROUP BY F.nome;
I'm not sure what the SELECT DISTINCT is supposed to be doing; it is almost never needed with GROUP BY. The ... suggests that you are selecting additional columns, which are not needed for the results you want.
This is untested, and since you didn't provide sample data with expected results I can't be sure it's really what you need.
It's a bit ugly and I'm sure there is some clever correlated subquery approach, but I've never been good with those.
SELECT DISTINCT st.focolta,
       TMP3.s_orelez,
       TMP3.id_docenza
FROM some_table AS st
INNER JOIN (SELECT *
            FROM (SELECT focolta,
                         s_orelez,
                         id_docenza,
                         ROW_NUMBER() OVER -- Get the ranking of the orelez sum by focolta.
                             ( PARTITION BY focolta
                               ORDER BY s_orelez DESC
                             ) rn_orelez
                  FROM (SELECT focolta,
                               id_docenza,
                               SUM(orelez) OVER -- Sum the orelez of each docenza within its focolta.
                                   ( PARTITION BY focolta, id_docenza
                                   ) AS s_orelez
                        FROM some_table
                       ) TMP
                 ) TMP2
            WHERE TMP2.rn_orelez = 1 -- Limit to the highest-ranked value.
           ) TMP3
    ON st.focolta = TMP3.focolta; -- Join back on focolta to get the id associated with the highest value.
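For reference, the same ranking idea written directly against the tables named in the question (a sketch: it assumes each row of Docenza carries the orelez of one docenza within a facolta; add an aggregation first if that is not the case):
SELECT id_facolta, id AS id_docenza, orelez
FROM (
    SELECT id, id_facolta, orelez,
           ROW_NUMBER() OVER (PARTITION BY id_facolta
                              ORDER BY orelez DESC) AS rn
    FROM Docenza
) ranked
WHERE rn = 1;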

Returning the lowest integer not in a list in SQL

Suppose you have a table T(A) where only positive integers are allowed, like:
1,1,2,3,4,5,6,7,8,9,11,12,13,14,15,16,17,18
In the above example, the result is 10. We can always use ORDER BY and DISTINCT to sort and remove duplicates. However, to find the lowest integer not in the list, I came up with the following SQL query:
select list.x + 1
from (select x from (select distinct a as x from T order by a)) as list, T
where list.x + 1 not in T limit 1;
My idea is to start a counter at 1 and check whether that counter is in the list: if it is not, return it; otherwise increment and look again. That query works in most cases, but there are corner cases, such as when 1 itself is missing. How can I accomplish that in SQL, or should I go in a completely different direction to solve this problem?
Because SQL works on sets, the intermediate SELECT DISTINCT a AS x FROM t ORDER BY a is redundant.
The basic technique of looking for a gap in a column of integers is to find where the current entry plus 1 does not exist. This requires a self-join of some sort.
Your query is not far off, but I think it can be simplified to:
SELECT MIN(a) + 1
FROM t
WHERE a + 1 NOT IN (SELECT a FROM t)
The NOT IN acts as a sort of self-join. This won't produce anything from an empty table, but should be OK otherwise.
select min(y.a) as a
from
t x
right join
(
select a + 1 as a from t
union
select 1
) y on y.a = x.a
where x.a is null
It will work even with an empty table.
SELECT min(t.a) - 1
FROM t
LEFT JOIN t t1 ON t1.a = t.a - 1
WHERE t1.a IS NULL
AND t.a > 1; -- exclude 0
This finds the smallest number greater than 1, where the next-smaller number is not in the same table. That missing number is returned.
This works even for a missing 1. There are multiple answers checking in the opposite direction. All of them would fail with a missing 1.
You can do the following, although you may also want to define a range - in which case you might need a couple of UNIONs
SELECT x.id+1
FROM my_table x
LEFT
JOIN my_table y
ON x.id+1 = y.id
WHERE y.id IS NULL
ORDER
BY x.id LIMIT 1;
You can always create a table with all of the numbers from 1 to X and then join it with the table you are comparing. Then just take the TOP value in your SELECT statement that isn't present in the table you are comparing against:
SELECT TOP 1 table_with_all_numbers.number, table_with_missing_numbers.number
FROM table_with_all_numbers
LEFT JOIN table_with_missing_numbers
ON table_with_missing_numbers.number = table_with_all_numbers.number
WHERE table_with_missing_numbers.number IS NULL
ORDER BY table_with_all_numbers.number ASC;
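In PostgreSQL the all-numbers table can be generated on the fly with generate_series; a sketch of the same idea (the upper bound 1000 is an arbitrary assumption):
SELECT s.n
FROM generate_series(1, 1000) AS s(n)  -- 1000 is an assumed upper bound
LEFT JOIN T ON T.a = s.n
WHERE T.a IS NULL
ORDER BY s.n
LIMIT 1;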
In SQLite 3.8.3 or later, you can use a recursive common table expression to create a counter.
Here, we stop counting when we find a value not in the table:
WITH RECURSIVE counter(c) AS (
SELECT 1
UNION ALL
SELECT c + 1 FROM counter WHERE c IN t)
SELECT max(c) FROM counter;
(This works for an empty table or a missing 1.)
This query ranks (starting from rank 1) each distinct number in ascending order and selects the lowest rank that's less than its number. If no rank is lower than its number (i.e. there are no gaps in the table) the query returns the max number + 1.
select coalesce(min(number), 1)
from (
    select min(cnt) number
    from (
        select
            number,
            (select count(*)
             from (select distinct number from numbers) b
             where b.number <= a.number) as cnt
        from (select distinct number from numbers) a
    ) t1
    where number > cnt
    union
    select max(number) + 1 number from numbers
) t1
http://sqlfiddle.com/#!7/720cc/3
Just another method, using EXCEPT this time:
SELECT a + 1 AS missing FROM T
EXCEPT
SELECT a FROM T
ORDER BY missing
LIMIT 1;

SQL Return Random Numbers Not In Table

I have a table with user_ids that we've gathered from a streaming datasource of active accounts. Now I'm looking to go through and fill in the information about the user_ids that don't do much of anything.
Is there a SQL (postgres if it matters) way to have a query return random numbers not present in the table?
Eg something like this:
SELECT RANDOM(count, lower_bound, upper_bound) as new_id
WHERE new_id NOT IN (SELECT user_id FROM user_table) AS user_id_table
Possible, or would it be best to generate a bunch of random numbers with a scripted wrapper and pass those into the DB to figure out the non-existent ones?
It is possible. If you want the IDs to be integers, try:
SELECT trunc((random() * (upper_bound - lower_bound)) + lower_bound) AS new_id
FROM generate_series(1,upper_bound)
WHERE new_id NOT IN (
SELECT user_id
FROM user_table)
Since the alias new_id can't be referenced in the WHERE clause at the same query level, you can wrap the query above in a subselect, i.e.
SELECT * FROM (SELECT trunc(random() * (upper - lower) + lower) AS new_id
FROM generate_series(1, count)) AS x
WHERE x.new_id NOT IN (SELECT user_id FROM user_table)
I suspect you want a random sampling. I would do something like:
SELECT s.s
FROM generate_series(1, (SELECT max(user_id) FROM users)) s
LEFT JOIN users ON s.s = users.user_id
WHERE users.user_id IS NULL
ORDER BY random() LIMIT 5;
I haven't tested this but the idea should work. If you have a lot of users and not a lot of missing id's it will perform better than the other options, but performance no matter what you do may be a problem.
My pragmatic approach would be: generate 500 random numbers and then pick one which is not in the table:
WITH fivehundredrandoms AS (
    SELECT RANDOM(count, lower_bound, upper_bound) AS onerandom
    FROM (SELECT generate_series(1,500)) AS fivehundred
)
SELECT onerandom FROM fivehundredrandoms
WHERE onerandom NOT IN (SELECT user_id FROM user_table WHERE user_id > 0) LIMIT 1;
There is a way to do what you want with recursive queries, alas it is not nice.
Suppose that you have the following table:
CREATE TABLE test (a int)
To simplify, you want to insert random numbers from 0 to 4 (random() * 5)::int that are not in the table.
WITH RECURSIVE rand (i, r, is_new) AS (
    SELECT
        0,
        null::int,
        false
    UNION ALL
    SELECT
        i + 1,
        next_number.v,
        NOT EXISTS (SELECT 1 FROM test WHERE test.a = next_number.v)
    FROM
        rand r,
        (VALUES ((random() * 5)::int)) next_number(v)
    -- safety check to make sure we do not go into an infinite loop
    WHERE i < 500
)
SELECT * FROM rand WHERE rand.is_new LIMIT 1
I'm not super sure, but PostgreSQL should be able to stop iterating once it has one result, since it knows that the query has limit 1.
The nice thing about this query is that you can replace (random() * 5)::int with any id-generating function that you want.

ORDER BY the IN value list

I have a simple SQL query in PostgreSQL 8.3 that grabs a bunch of comments. I provide a sorted list of values to the IN construct in the WHERE clause:
SELECT * FROM comments WHERE (comments.id IN (1,3,2,4));
This returns comments in an arbitrary order, which in my case happens to be ids like 1,2,3,4.
I want the resulting rows sorted like the list in the IN construct: (1,3,2,4).
How to achieve that?
You can do it quite easily with VALUES lists (introduced in PostgreSQL 8.2).
The syntax will be like this:
select c.*
from comments c
join (
values
(1,1),
(3,2),
(2,3),
(4,4)
) as x (id, ordering) on c.id = x.id
order by x.ordering
In Postgres 9.4 or later, this is simplest and fastest:
SELECT c.*
FROM comments c
JOIN unnest('{1,3,2,4}'::int[]) WITH ORDINALITY t(id, ord) USING (id)
ORDER BY t.ord;
WITH ORDINALITY was introduced in Postgres 9.4.
There is no need for a subquery; we can use the set-returning function like a table directly (a.k.a. a "table function").
A string literal to hand in the array instead of an ARRAY constructor may be easier to implement with some clients.
For convenience (optionally), copy the column name we are joining to ("id" in the example), so we can join with a short USING clause to only get a single instance of the join column in the result.
Works with any input type. If your key column is of type text, provide something like '{foo,bar,baz}'::text[].
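For instance, with a text key the same pattern might look like this (the tags table and its name column are made up for illustration):
SELECT t.*
FROM tags t
JOIN unnest('{foo,bar,baz}'::text[]) WITH ORDINALITY u(name, ord) USING (name)
ORDER BY u.ord;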
Detailed explanation:
PostgreSQL unnest() with element number
Just because it is so difficult to find and it has to be spread: in MySQL this can be done much more simply, but I don't know if it works in other SQL dialects.
SELECT * FROM `comments`
WHERE `comments`.`id` IN ('12','5','3','17')
ORDER BY FIELD(`comments`.`id`,'12','5','3','17')
With Postgres 9.4 this can be done a bit shorter:
select c.*
from comments c
join (
select *
from unnest(array[43,47,42]) with ordinality
) as x (id, ordering) on c.id = x.id
order by x.ordering;
Or a bit more compact without a derived table:
select c.*
from comments c
join unnest(array[43,47,42]) with ordinality as x (id, ordering)
on c.id = x.id
order by x.ordering
Removing the need to manually assign/maintain a position to each value.
With Postgres 9.6 this can be done using array_position():
with x (id_list) as (
values (array[42,48,43])
)
select c.*
from comments c, x
where id = any (x.id_list)
order by array_position(x.id_list, c.id);
The CTE is used so that the list of values only needs to be specified once. If that is not important this can also be written as:
select c.*
from comments c
where id in (42,48,43)
order by array_position(array[42,48,43], c.id);
I think this way is better:
SELECT * FROM "comments" WHERE ("comments"."id" IN (1,3,2,4))
ORDER BY id=1 DESC, id=3 DESC, id=2 DESC, id=4 DESC
Another way to do it in Postgres would be to use the idx function.
SELECT *
FROM comments
ORDER BY idx(array[1,3,2,4], comments.id)
Don't forget to create the idx function first, as described here: http://wiki.postgresql.org/wiki/Array_Index
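If you would rather not follow the link, a definition along these lines should work (a sketch, not necessarily identical to the wiki version):
CREATE OR REPLACE FUNCTION idx(anyarray, anyelement)
RETURNS int
LANGUAGE sql IMMUTABLE
AS $$
    -- return the subscript of the first element equal to $2, or NULL if absent
    SELECT i
    FROM generate_series(array_lower($1,1), array_upper($1,1)) i
    WHERE $1[i] = $2
    LIMIT 1;
$$;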
In Postgresql:
select *
from comments
where id in (1,3,2,4)
order by position(id::text in '1,3,2,4')
On researching this some more I found this solution:
SELECT * FROM "comments" WHERE ("comments"."id" IN (1,3,2,4))
ORDER BY CASE "comments"."id"
WHEN 1 THEN 1
WHEN 3 THEN 2
WHEN 2 THEN 3
WHEN 4 THEN 4
END
However this seems rather verbose and might have performance issues with large datasets.
Can anyone comment on these issues?
To do this, I think you should probably have an additional "ORDER" table which defines the mapping of IDs to order (effectively doing what your response to your own question said). You can then join it in as an extra column in your select and sort on that value; see the sketch below.
In that way, you explicitly describe the ordering you desire in the database, where it should be.
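A sketch of that suggestion (table and column names here are made up, not from the question):
CREATE TABLE comment_order (
    comment_id int PRIMARY KEY,
    sort_order int NOT NULL
);

INSERT INTO comment_order (comment_id, sort_order)
VALUES (1, 1), (3, 2), (2, 3), (4, 4);

SELECT c.*
FROM comments c
JOIN comment_order o ON o.comment_id = c.id
ORDER BY o.sort_order;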
sans SEQUENCE, works only on 8.4:
select * from comments c
join
(
select id, row_number() over() as id_sorter
from (select unnest(ARRAY[1,3,2,4]) as id) as y
) x on x.id = c.id
order by x.id_sorter
SELECT * FROM "comments" JOIN (
SELECT 1 as "id",1 as "order" UNION ALL
SELECT 3,2 UNION ALL SELECT 2,3 UNION ALL SELECT 4,4
) j ON "comments"."id" = j."id" ORDER BY j.ORDER
or if you prefer evil over good:
SELECT * FROM "comments" WHERE ("comments"."id" IN (1,3,2,4))
ORDER BY POSITION(',' || "comments"."id" || ',' IN ',1,3,2,4,')
And here's another solution that works and uses a constant table (http://www.postgresql.org/docs/8.3/interactive/sql-values.html):
SELECT * FROM comments AS c,
(VALUES (1,1),(3,2),(2,3),(4,4) ) AS t (ord_id,ord)
WHERE (c.id IN (1,3,2,4)) AND (c.id = t.ord_id)
ORDER BY ord
But again I'm not sure that this is performant.
I've got a bunch of answers now. Can I get some voting and comments so I know which is the winner!
Thanks All :-)
create sequence serial start 1;
select * from comments c
join (select unnest(ARRAY[1,3,2,4]) as id, nextval('serial') as id_sorter) x
on x.id = c.id
order by x.id_sorter;
drop sequence serial;
[EDIT]
unnest is not yet built-in in 8.3, but you can create one yourself (the beauty of any*):
create function unnest(anyarray) returns setof anyelement
language sql as
$$
select $1[i] from generate_series(array_lower($1,1),array_upper($1,1)) i;
$$;
That function works with any type:
select unnest(array['John','Paul','George','Ringo']) as beatle
select unnest(array[1,3,2,4]) as id
A slight improvement over the version that uses a sequence, I think:
CREATE OR REPLACE FUNCTION in_sort(anyarray, out id anyelement, out ordinal int)
LANGUAGE SQL AS
$$
SELECT $1[i], i FROM generate_series(array_lower($1,1),array_upper($1,1)) i;
$$;
SELECT
*
FROM
comments c
INNER JOIN (SELECT * FROM in_sort(ARRAY[1,3,2,4])) AS in_sort
USING (id)
ORDER BY in_sort.ordinal;
select * from comments where comments.id in
(select unnest(ids) from bbs where id=19795)
order by array_position((select ids from bbs where id=19795),comments.id)
Here, bbs is the main table, which has a field called ids; ids is the array that stores the comments.id values.
Tested on PostgreSQL 9.6.
Let's get a visual impression of what was already said. For example, you have a table with some tasks:
SELECT a.id,a.status,a.description FROM minicloud_tasks as a ORDER BY random();
id | status | description
----+------------+------------------
4 | processing | work on postgres
6 | deleted | need some rest
3 | pending | garden party
5 | completed | work on html
And you want to order the list of tasks by its status.
The status is a list of string values:
(processing, pending, completed, deleted)
The trick is to give each status value an integer and order the list numerically:
SELECT a.id,a.status,a.description FROM minicloud_tasks AS a
JOIN (
VALUES ('processing', 1), ('pending', 2), ('completed', 3), ('deleted', 4)
) AS b (status, id) ON (a.status = b.status)
ORDER BY b.id ASC;
Which leads to:
id | status | description
----+------------+------------------
4 | processing | work on postgres
3 | pending | garden party
5 | completed | work on html
6 | deleted | need some rest
Credit #user80168
I agree with all the other posters that say "don't do that" or "SQL isn't good at that". If you want to sort by some facet of comments, then add another integer column to one of your tables to hold your sort criteria and sort by that value, e.g. ORDER BY comments.sort DESC. If you want to sort these in a different order every time, then... SQL won't be for you in this case.
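A sketch of that suggestion (the sort column is this answer's own example; how it gets populated is up to the application):
ALTER TABLE comments ADD COLUMN sort integer;
-- populate comments.sort according to the desired ordering, then:
SELECT * FROM comments ORDER BY comments.sort DESC;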