Multiply elements in an array according to their position - sql

What is an efficient function I could use to repeat each element of an array a number of times equal to its position?
For example, if I have an array of:
ARRAY[10,20,30,40,50]::INT[]
I would like that to turn into:
ARRAY[10,20,20,30,30,30,40,40,40,40,50,50,50,50,50]::INT[]
So that I have 1 "10", 2 "20", 3 "30", 4 "40", and 5 "50".
For the array:
ARRAY[10,20,30,40,10]::INT[]
I would like that to turn into:
ARRAY[10,20,20,30,30,30,40,40,40,40,10,10,10,10,10]::INT[]
So that I have 6 "10", 2 "20", 3 "30" and 4 "40".

WITH t AS (SELECT ARRAY[10,20,30,40,50]::INT[] AS arr)  -- variable for demo
SELECT ARRAY(
   SELECT unnest(array_fill(arr[idx], ARRAY[idx])) AS mult
   FROM  (SELECT arr, generate_subscripts(arr, 1) AS idx FROM t) sub
   );
I would wrap the logic into a simple IMMUTABLE SQL function:
CREATE OR REPLACE FUNCTION f_expand_arr(_arr anyarray)
RETURNS anyarray AS
$func$
SELECT ARRAY(
   SELECT unnest(array_fill(_arr[idx], ARRAY[idx]))
   FROM  (SELECT generate_subscripts(_arr, 1) AS idx) sub
   )
$func$ LANGUAGE sql IMMUTABLE;
Works for arrays of any base type due to the polymorphic parameter type anyarray:
How to write a function that returns text or integer values?
The manual on generate_subscripts() and array_fill().
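For instance, the same function should handle a text array without modification (result shown as a comment):
SELECT f_expand_arr(ARRAY['a','b','c']::text[]);  -- {a,b,b,c,c,c}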
Note: This works with the actual array subscripts, which can differ from the ordinal array position in Postgres. You may be interested in @Daniel's method to "normalize" the array index:
Normalize array subscripts for 1-dimensional array so they start with 1
The upcoming Postgres 9.4 (currently beta) provides WITH ORDINALITY:
PostgreSQL unnest() with element number
Allowing for this even more elegant and reliable solution:
CREATE OR REPLACE FUNCTION f_expand_arr(_arr anyarray)
RETURNS anyarray AS
$func$
SELECT ARRAY(
   SELECT unnest(array_fill(a, ARRAY[idx]))
   FROM   unnest(_arr) WITH ORDINALITY AS x (a, idx)
   )
$func$ LANGUAGE sql IMMUTABLE;
One might still argue that proper order is not actually guaranteed. I claim it is ...
Parallel unnest() and sort order in PostgreSQL
Call:
SELECT f_expand_arr(ARRAY[10,20,30,40,10]::INT[]) AS a2;
Or for values from a table:
SELECT f_expand_arr(a) AS a2 FROM t;
SQL Fiddle.

Select odd or even values from text array

How to select odd or even values from a text array in Postgres?
You can select elements by index (array indexes start at 1):
select
  text_array[1] as first
from (
  select '{a,1,b,2,c,3}'::text[] as text_array
) as x
There is no native function for this: https://www.postgresql.org/docs/13/functions-array.html. I see Postgres supports modulo math (https://www.postgresql.org/docs/13/functions-math.html), but I'm not sure how to apply it here, as the following is invalid:
select
  text_array[%2] as odd
from (
  select '{a,1,b,2,c,3}'::text[] as text_array
) as x
The goal is to get {a,1,b,2,c,3} -> {a,b,c}. Likewise for even, {a,1,b,2,c,3} -> {1,2,3}.
Any guidance would be greatly appreciated!
Generate a list of subscripts (a generate_series expression for the odd ones), then extract the array values by subscript and aggregate them back into arrays. Null values produced by even subscripts that fall past the end of the array need to be filtered out when the array length is odd. Here is an illustration; the t CTE is a "table" of sample data.
with t(arr) as
(
  values
    ('{a,1,b,2,c,3}'::text[]),
    ('{11,12,13,14,15,16,17,18,19,20}'), -- even number of elements
    ('{21,22,23,24,25,26,27,28,29}')     -- odd number of elements
)
select arr,
       array_agg(arr[odd]) arr_odd,
       array_agg(arr[odd + 1]) filter (where arr[odd + 1] is not null) arr_even
from t
cross join lateral generate_series(1, array_length(arr, 1), 2) odd
group by arr;
               arr               |     arr_odd      |    arr_even
---------------------------------+------------------+-----------------
 {21,22,23,24,25,26,27,28,29}    | {21,23,25,27,29} | {22,24,26,28}
 {a,1,b,2,c,3}                   | {a,b,c}          | {1,2,3}
 {11,12,13,14,15,16,17,18,19,20} | {11,13,15,17,19} | {12,14,16,18,20}
Or use these functions:
create function textarray_odd(arr text[]) returns text[] language sql as
$$
select array_agg(arr[i]) from generate_series(1, array_length(arr,1), 2) i;
$$;
create function textarray_even(arr text[]) returns text[] language sql as
$$
select array_agg(arr[i]) from generate_series(2, array_length(arr,1), 2) i;
$$;
select textarray_odd('{a,1,b,2,c,3}'); -- {a,b,c}
select textarray_even('{a,1,b,2,c,3}'); -- {1,2,3}
A more generic alternative:
create function array_odd(arr anyarray) returns anyarray language sql as
$$
select array_agg(v order by i)
from unnest(arr) with ordinality t(v, i)
where i % 2 = 1;
$$;
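A matching generic function for the even positions could look the same with only the modulo test changed (a sketch mirroring array_odd above):
create function array_even(arr anyarray) returns anyarray language sql as
$$
select array_agg(v order by i)
from unnest(arr) with ordinality t(v, i)
where i % 2 = 0;  -- even positions instead of odd
$$;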

How to remove elements of array in PostgreSQL?

Is it possible to remove multiple elements from an array?
Before removing elements, Array1 is:
{1,2,3,4}
Array2, which contains the elements I wish to remove:
{1,4}
And I want to get:
{2,3}
How can I do this?
Use unnest() with array_agg(), e.g.:
with cte(array1, array2) as (
  values (array[1,2,3,4], array[1,4])
)
select array_agg(elem)
from cte, unnest(array1) elem
where elem <> all(array2);
array_agg
-----------
{2,3}
(1 row)
If you often need this functionality, define the simple function:
create or replace function array_diff(array1 anyarray, array2 anyarray)
returns anyarray language sql immutable as $$
select coalesce(array_agg(elem), '{}')
from unnest(array1) elem
where elem <> all(array2)
$$;
You can use the function for any array, not only int[]:
select array_diff(array['a','b','c','d'], array['a','d']);
array_diff
------------
{b,c}
(1 row)
With some help from this post:
select array_agg(elements)
from (select unnest('{1,2,3,4}'::int[])
      except
      select unnest('{1,4}'::int[])) t (elements)
Result:
{2,3}
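One caveat worth noting with the EXCEPT approach: it is set-based, so duplicates in the source array are collapsed and no particular order is guaranteed. A quick sketch of that behaviour:
select array_agg(elements)
from (select unnest('{1,2,2,3,4}'::int[])
      except
      select unnest('{1,4}'::int[])) t (elements);
-- may return {2,3} or {3,2}, but never {2,2,3}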
With the intarray extension, you can simply use -:
select '{1,2,3,4}'::int[] - '{1,4}'::int[]
Result:
{2,3}
Online demonstration
You'll need to install the intarray extension if you didn't already. It adds many convenient functions and operators if you're dealing with arrays of integers.
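Installing it is a one-liner per database (the extension ships with the standard contrib package):
CREATE EXTENSION IF NOT EXISTS intarray;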
This answer is the simplest I think:
https://stackoverflow.com/a/6535089/673187
SELECT array(SELECT unnest(:array1) EXCEPT SELECT unnest(:array2));
so you can easily use it in an UPDATE command, when you need to remove some elements from an array column:
UPDATE table1 SET array1_column=(SELECT array(SELECT unnest(array1_column) EXCEPT SELECT unnest('{2, 3}'::int[])));
You can use this function when you are dealing with bigint/int8 numbers and want to maintain order:
CREATE OR REPLACE FUNCTION arr_subtract(int8[], int8[])
RETURNS int8[] AS
$func$
SELECT ARRAY(
   SELECT a
   FROM   unnest($1) WITH ORDINALITY x(a, ord)
   WHERE  a <> ALL ($2)
   ORDER  BY ord
   );
$func$ LANGUAGE sql IMMUTABLE;
I got this solution from the following answer to a similar question: https://stackoverflow.com/a/8584080/1544473
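A quick sanity check of arr_subtract (hypothetical call; the original element order is preserved):
SELECT arr_subtract('{5,1,4,2,3}'::int8[], '{1,4}'::int8[]);  -- {5,2,3}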
Use the array slice notation:
array[<start index>:<end index>]
WITH t(stack, dim) AS (
  VALUES (ARRAY[1,2,3,4], ARRAY[1,4])
)
SELECT stack[dim[1]+1:dim[2]-1] FROM t

How to count setof / number of keys of JSON in postgresql?

I have a column in jsonb storing a map, like {'a':1,'b':2,'c':3}, where the number of keys differs from row to row.
I want to count the keys -- jsonb_object_keys can retrieve them, but it returns a setof.
Is there something like this?
(select count(jsonb_object_keys(obj)) from XXX )
(this won't work: ERROR: set-valued function called in context that cannot accept a set)
From the Postgres JSON Functions and Operators documentation:
json_object_keys(json), jsonb_object_keys(jsonb) -> setof text
Returns the set of keys in the outermost JSON object.
Example:
json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}')
 json_object_keys
------------------
 f1
 f2
Crosstab isn't feasible, as the number of keys could be large.
Shortest:
SELECT count(*) FROM jsonb_object_keys('{"a": 1, "b": 2, "c": 3}'::jsonb);
Returns 3
If you want the number of keys for every json value in a table:
SELECT (SELECT COUNT(*) FROM jsonb_object_keys(myJsonField)) nbr_keys FROM myTable;
Edit: there was a typo in the second example.
You could convert keys to array and use array_length to get this:
select array_length(array_agg(A.key), 1) from (
select json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}') as key
) A;
If you need to get this for the whole table, you can just group by primary key.
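A sketch of that per-row variant, assuming a hypothetical table my_table with primary key id and a jsonb column obj:
select id, array_length(array_agg(k), 1) as nbr_keys
from my_table, jsonb_object_keys(obj) as k  -- implicit LATERAL; rows with an empty object are omitted
group by id;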
While a sub-select must be used to convert the set of JSON keys to rows, the following tweaked query might run faster by skipping the construction of a temporary array:
SELECT count(*) FROM
(SELECT jsonb_object_keys('{"a": 1, "b": 2, "c": 3}'::jsonb)) v;
and it's a bit shorter ;)
To make it a function:
CREATE OR REPLACE FUNCTION public.count_jsonb_keys(j jsonb)
RETURNS bigint
LANGUAGE sql
AS $function$
SELECT count(*) from (SELECT jsonb_object_keys(j)) v;
$function$
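Call it like any scalar function (result shown as a comment):
SELECT count_jsonb_keys('{"a": 1, "b": 2, "c": 3}'::jsonb);  -- 3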
Alternatively, you could simply return the upper bound of the keys collected into an array:
SELECT
  ARRAY_UPPER(            -- Grab the upper bound of the array
    ARRAY(                -- Convert rows into an array
      SELECT JSONB_OBJECT_KEYS(obj)
    ),
    1                     -- The array dimension we want the count for
  ) AS count
FROM
  xxx
Using '{"a": 1, "b": 2, "c": 3}'::jsonb as obj, count would result in a value of three (3).
Pasteable example:
SELECT
  ARRAY_UPPER(            -- Grab the upper bound of the array
    ARRAY(                -- Convert rows into an array
      SELECT JSONB_OBJECT_KEYS('{"a": 1, "b": 2, "c": 3}'::jsonb)
    ),
    1                     -- The array dimension we want the count for
  ) AS count

Vector (array) addition in Postgres

I have a column with numeric[] values which all have the same size. I'd like to take their element-wise average. By this I mean that the average of
{1, 2, 3}, {-1, -2, -3}, and {3, 3, 3}
should be {1, 1, 1}. Also of interest is how to sum these element-wise, although I expect that any solution for one will be a solution for the other.
(NB: The length of the arrays is fixed within a single table, but may vary between tables. So I need a solution which doesn't assume a certain length.)
My initial guess is that I should be using unnest somehow, since unnest applied to a numeric[] column flattens out all the arrays. So I'd like to think that there's a nice way to use this with some sort of windowing function + group by to pick out the individual components of each array and sum them.
-- EXAMPLE DATA
CREATE TABLE A
(vector numeric[])
;
INSERT INTO A
VALUES
('{1, 2, 3}'::numeric[])
,('{-1, -2, -3}'::numeric[])
,('{3, 3, 3}'::numeric[])
;
I've written an extension to do vector addition (and subtraction, multiplication, division, and powers) with fast C functions. You can find it on Github or PGXN.
Given two arrays a and b you can say vec_add(a, b). You can also add either side to a scalar, e.g. vec_add(a, 5).
If you want a SUM aggregate function instead you can find that in aggs_for_vecs, also on PGXN.
Finally if you want to sum up all the elements of a single array, you can use aggs_for_arrays (PGXN).
I discovered a solution on my own which is probably the one I will use.
First, we can define a function for adding two vectors:
CREATE OR REPLACE FUNCTION vec_add(arr1 numeric[], arr2 numeric[])
RETURNS numeric[] AS
$$
SELECT array_agg(result)
FROM  (SELECT tuple.val1 + tuple.val2 AS result
       FROM  (SELECT UNNEST($1) AS val1
                    ,UNNEST($2) AS val2
                    ,generate_subscripts($1, 1) AS ix) tuple
       ORDER BY ix) inn;
$$ LANGUAGE SQL IMMUTABLE STRICT;
and a function for multiplying by a constant:
CREATE OR REPLACE FUNCTION vec_mult(arr numeric[], mul numeric)
RETURNS numeric[] AS
$$
SELECT array_agg(result)
FROM  (SELECT val * $2 AS result
       FROM  (SELECT UNNEST($1) AS val
                    ,generate_subscripts($1, 1) AS ix) t
       ORDER BY ix) inn;
$$ LANGUAGE SQL IMMUTABLE STRICT;
Then we can use the PostgreSQL statement CREATE AGGREGATE to create the vec_sum function directly:
CREATE AGGREGATE vec_sum(numeric[]) (
SFUNC = vec_add
,STYPE = numeric[]
);
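With the sample data above, the aggregate alone should give the element-wise sum:
SELECT vec_sum(vector) FROM A;  -- {3,3,3}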
And finally, we can find the average (note the numeric literal 1.0, so the division by count() is not truncated to zero by integer division):
SELECT vec_mult(vec_sum(vector), 1.0 / count(vector)) FROM A;
from http://www.postgresql.org/message-id/4C2504A3.4090502#wp.pl
select avg(unnested) from (select unnest(vector) as unnested from A) temp;
Edit: I think I now understand the question better.
Here is a possible solution drawing heavily upon https://stackoverflow.com/a/8767450/3430807. I don't consider it elegant, nor am I sure it will perform well:
Test data:
CREATE TABLE A
(vector numeric[], id serial)
;
INSERT INTO A
VALUES
('{1, 2, 3}'::numeric[])
,('{4, 5, 6}'::numeric[])
,('{7, 8, 9}'::numeric[])
;
Query:
select avg(vector[temp.index])
from A as a
join
  (select generate_subscripts(vector, 1) as index
        , id
   from A) as temp on temp.id = a.id
group by temp.index
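To get the element-wise averages back as a single array, one could wrap the query above (a sketch; the subscript supplies the element order):
select array_agg(avg_val order by index) as avg_vector
from (
  select temp.index, avg(vector[temp.index]) as avg_val
  from A as a
  join (select generate_subscripts(vector, 1) as index, id
        from A) as temp on temp.id = a.id
  group by temp.index
) s;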

Selecting data into a Postgres array

I have the following data:
name id url
John 1 someurl.com
Matt 2 cool.com
Sam 3 stackoverflow.com
How can I write an SQL statement in Postgres to select this data into a multi-dimensional array, i.e.:
{{John, 1, someurl.com}, {Matt, 2, cool.com}, {Sam, 3, stackoverflow.com}}
I've seen this kind of array usage before in Postgres but have no idea how to select data from a table into this array format.
Assuming here that all the columns are of type text.
You cannot use array_agg() to produce multi-dimensional arrays, at least not up to PostgreSQL 9.4.
(But the upcoming Postgres 9.5 ships a new variant of array_agg() that can!)
What you get out of @Matt Ball's query is an array of records (the_table[]).
An array can only hold elements of the same base type. You obviously have number and string types. Convert all columns (that aren't already) to text to make it work.
You can create an aggregate function for this like I demonstrated to you here before.
CREATE AGGREGATE array_agg_mult (anyarray) (
SFUNC = array_cat
,STYPE = anyarray
,INITCOND = '{}'
);
Call:
SELECT array_agg_mult(ARRAY[ARRAY[name, id::text, url]]) AS tbl_mult_arr
FROM tbl;
Note the additional ARRAY[] layer to make it a multidimensional array (2-dimensional, to be precise).
Instant demo:
WITH tbl(id, txt) AS (
   VALUES
     (1::int, 'foo'::text)
    ,(2, 'bar')
    ,(3, '}b",')  -- txt has meta-characters
   )
, x AS (
   SELECT array_agg_mult(ARRAY[ARRAY[id::text, txt]]) AS t
   FROM   tbl
   )
SELECT *, t[1][1] AS arr_element_1_1, t[3][2] AS arr_element_3_2
FROM x;
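For completeness: on Postgres 9.5 or later, the array_agg() variant mentioned above should make the custom aggregate unnecessary, assuming all columns are cast to text:
SELECT array_agg(ARRAY[name, id::text, url]) AS tbl_mult_arr FROM tbl;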
You need to use an aggregate function; array_agg should do what you need.
SELECT array_agg(s) FROM (SELECT name, id, url FROM the_table ORDER BY id) AS s;
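For the sample data this yields a single array of row values rather than a true two-dimensional array (the limitation discussed above), roughly:
{"(John,1,someurl.com)","(Matt,2,cool.com)","(Sam,3,stackoverflow.com)"}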