I have two arrays. Both arrays are calculated by functions, so they are dynamic, but their lengths will always be the same.
a1= ARRAY[1,2,3];
a2= ARRAY[10,20,30];
Now I want to update my table something like this
UPDATE TABLE SET data = CASE
    WHEN data = a1[1] THEN a2[1]
    WHEN data = a1[2] THEN a2[2]
    WHEN data = a1[3] THEN a2[3]
END
WHERE id = 1;
I tried adding a loop inside the CASE expression, but it does not work.
You can make use of array_position to find the matching index in array 1, and query array 2 using this index:
UPDATE TABLE
SET data = a2[array_position(a1, data)]
WHERE id = 1;
http://rextester.com/CBJ37276
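For illustration, a worked form with the example arrays inlined (my_table is a placeholder name, like TABLE above):
UPDATE my_table
SET data = (ARRAY[10,20,30])[array_position(ARRAY[1,2,3], data)]
WHERE id = 1;
Note that if data does not match any element of the first array, array_position() returns NULL and data would be set to NULL, so you may want to add AND data = ANY(ARRAY[1,2,3]) to the WHERE clause.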
In Snowflake, how can I filter for null or empty array fields in a column?
The column contains either an empty [] or a string of values inside the brackets. I tried using array_size(column_name, 1) > 0, but array_size does not work.
Thanks
Are you trying to filter them in or out?
Either way, array_size should work, although it doesn't take a second argument.
where column_name is not null and array_size(column_name) != 0 worked for me
If you're specifically looking to filter to the records that have an empty array, this approach works too, although it's a little odd.
where column_name = array_construct()
Edit: It seems like your issue is that your column is a string. There are a few ways to work around this:
Change your column's datatype to a variant or array
Parse your column before using array functions: array_size(TRY_PARSE_JSON(column_name)) != 0
Compare to a string instead: column_name is not null and column_name != '[]'
If the column is a string, it has to be parsed first:
SELECT *
FROM tab
WHERE ARRAY_SIZE(TRY_PARSE_JSON(column_name)) > 0;
-- excluding NULLs/empty arrays
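For the opposite case (keeping only the rows whose string column holds an empty array), the same parsed expression can be compared to zero:
SELECT *
FROM tab
WHERE ARRAY_SIZE(TRY_PARSE_JSON(column_name)) = 0;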
I need to anonymize a variable in SQL data (VAR NAME = "ArId").
The variable contains 10 digits + 1 letter + 2 digits. I need to randomize the first 10 digits and keep the letter + the last two digits.
I have tried the rand() function, but it randomizes the whole value.
SELECT TOP 1000 *
FROM [XXXXXXXXXXX].[XXXXXXXXXX].[XXXXX.TEST]
I have only loaded the data.
EDIT (from "answer"):
I have tried: UPDATE someTable
SET someColumn = CONCAT(CAST(RAND() * 10000000000 as BIGINT), RIGHT(someColumn, 3))
However, as I am totally new to SQL, I don't know how to make this work. I put someColumn = the new column name for the variable I am creating, and RIGHT(someColumn) = the column I am changing. When I do that, I get the message that the RIGHT function requires 2 arguments.
Example for Zohar: I have a variable containing, for example, 1724981628R01. For all the values in this variable I would like to randomize the first 10 digits and keep the last three characters (R01). How can I do that?
A couple of things. First, your conversion to a bigint does not guarantee that the result has the right number of characters.
Second, rand() is constant for all rows of the query. Try this version:
UPDATE someTable
SET someColumn = CONCAT(
        FORMAT(CAST(RAND(CHECKSUM(NEWID())) * 10000000000 AS BIGINT), '0000000000'),
        RIGHT(someColumn, 3)
    );
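To sanity-check the result before running the UPDATE, a preview along the same lines (someTable and ArId taken from the question) could look like this:
SELECT ArId,
       CONCAT(FORMAT(CAST(RAND(CHECKSUM(NEWID())) * 10000000000 AS BIGINT), '0000000000'),
              RIGHT(ArId, 3)) AS anonymized
FROM someTable;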
I'm using PostgreSQL 9.4 and I'm currently trying to transfer a column's values into an array. For "normal" (not user-defined) data types I get it to work.
To explain my problem in detail, I made up a minimal example.
Let's assume we define a composite type "compo", create a table "test_rel", and insert some values. It looks like this and works for me:
CREATE TYPE compo AS(a int, b int);
CREATE TABLE test_rel(t1 compo[],t2 int);
INSERT INTO test_rel VALUES('{"(1,2)"}',3);
INSERT INTO test_rel VALUES('{"(4,5)","(6,7)"}',3);
Next, we try to get an array with column t2's values. The following also works:
SELECT array(SELECT t2 FROM test_rel WHERE t2='3');
Now, we try to do the same with column t1 (the column with the composite type). My problem is that the following doesn't work:
SELECT array(SELECT t1 FROM test_rel WHERE t2='3');
ERROR: could not find array type for data type compo[]
Could someone please give me a hint why the same statement doesn't work with the composite type? I'm not only new to Stack Overflow, but also to PostgreSQL and plpgsql. So please tell me if I'm doing something the wrong way.
There was some discussion about this on the PostgreSQL mailing list.
Long story short, both
select array(select array_type from ...)
select array_agg(array_type) from ...
represent the concept of an array of arrays, which PostgreSQL doesn't support. PostgreSQL supports multidimensional arrays, but they have to be rectangular. For example, ARRAY[[0,1],[2,3]] is valid, but ARRAY[[0],[1,2]] is not.
There were some improvements to both the array constructor and the array_agg() function in 9.5.
The documentation now explicitly states that they accumulate array arguments into a multidimensional array, but only if all of the parts have equal dimensions.
array() constructor: If the subquery's output column is of an array type, the result will be an array of the same type but one higher dimension; in this case all the subquery rows must yield arrays of identical dimensionality, else the result would not be rectangular.
array_agg(any array type): input arrays concatenated into array of one higher dimension (inputs must all have same dimensionality, and cannot be empty or NULL)
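A minimal illustration of the 9.5+ behaviour, assuming all input arrays have the same length:
SELECT array_agg(a)   -- yields {{1,2},{3,4}}, one dimension higher
FROM (VALUES (ARRAY[1,2]), (ARRAY[3,4])) AS v(a);
With the example data from the question (inner arrays of length 1 and 2) this would still fail, because the inputs are not of equal dimensions.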
For 9.4, you could wrap the array into a row: this way, you can create something which is almost an array of arrays:
SELECT array(SELECT ROW(t1) FROM test_rel WHERE t2='3');
SELECT array_agg(ROW(t1)) FROM test_rel WHERE t2='3';
Or, you could use a recursive CTE (and an array concatenation) to workaround the problem, like:
with recursive inp(arr) as (
values (array[0,1]), (array[1,2]), (array[2,3])
),
idx(arr, idx) as (
select arr, row_number() over ()
from inp
),
agg(arr, idx) as (
select array[[0, 0]] || arr, idx
from idx
where idx = 1
union all
select agg.arr || idx.arr, idx.idx
from agg
join idx on idx.idx = agg.idx + 1
)
select arr[array_lower(arr, 1) + 1 : array_upper(arr, 1)]
from agg
order by idx desc
limit 1;
But of course this solution is highly dependent on your data (and its dimensions).
This is the data that I'm trying to query:
Table name: "test", column "data"
7;"{{Hello,50},{Wazaa,90}}"
8;"{{Hello,50},{"Dobar Den",15}}"
To query this data I'm using this SQL query:
SELECT *, pg_column_size(data) FROM test WHERE data[1][1] = 'Hello'
How can I search across all elements but only in the first sub-element and not in the second, for example:
SELECT *, pg_column_size(data) FROM test WHERE data[][1] = 'Hello'
because if I search like this:
SELECT *, pg_column_size(data) FROM test WHERE data[1][1] = 'Wazaa'
it won't return anything, because I'm hardcoding it to look at the first sub-element of the first element, and I have to modify it like this:
SELECT *, pg_column_size(data) FROM test WHERE data[2][1] = 'Wazaa'
How can I make it check all parent elements but only the first sub-element?
There is a solution using ANY to query all elements, but I don't want to touch the second element in the WHERE clause, because if the first sub-elements were numbers it would also match the second element, which is a number as well.
SELECT * FROM test WHERE '90' = ANY (data);
PostgreSQL's support for arrays is not particularly good. You can unnest a 1-dimensional array easily enough, but an n-dimensional array is completely flattened rather than unnested along the first dimension only. Still, you can use this approach to find the desired set of records, but it is rather ugly:
SELECT test.*, pg_column_size(test.data) AS column_size
FROM test
JOIN (SELECT id, unnest(data) AS strings FROM test) AS id_strings USING (id)
WHERE id_strings.strings = 'Wazaa';
Alternatively, write a function to reduce a 2-dimensional array into records of 1-dimensional arrays; then you can basically use all of the SQL queries in your question.
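A sketch of such a helper, here called reduce_dim (the name is arbitrary), using plpgsql's FOREACH ... SLICE to step through the first dimension:
CREATE OR REPLACE FUNCTION reduce_dim(anyarray)
RETURNS SETOF anyarray AS
$$
DECLARE
    s $1%TYPE;
BEGIN
    -- yield one 1-dimensional array per row of the outer dimension
    FOREACH s SLICE 1 IN ARRAY $1 LOOP
        RETURN NEXT s;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
With it, checking only the first sub-element of each row becomes straightforward:
SELECT *
FROM test
WHERE EXISTS (
    SELECT 1
    FROM reduce_dim(test.data) AS d(elem)
    WHERE elem[1] = 'Wazaa'
);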
I have some columns in a PostgreSQL database that are arrays. In an UPDATE, I want to add a new value to the array if the value doesn't already exist, and otherwise not add anything. I don't want to overwrite the current value of the array, only add the element to it.
Is it possible to do this in a single query, or do I need to do it inside a function? I'm using PostgreSQL.
This should be as simple as this example for an integer array (integer[]):
UPDATE tbl SET col = col || 5
WHERE (5 = ANY(col)) IS NOT TRUE;
A WHERE clause like:
WHERE 5 <> ALL(col)
would also catch the case of an empty array '{}'::int[], but fails if a NULL value appears as an element of the array.
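A quick illustration of the difference with a NULL element (values made up):
SELECT 5 <> ALL(ARRAY[1, NULL]);               -- NULL, so the WHERE clause would skip the row
SELECT (5 = ANY(ARRAY[1, NULL])) IS NOT TRUE;  -- true, so the row would still be updated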
If your arrays never contain NULL as an element, consider actual array operators, possibly supported by a GIN index:
UPDATE tbl SET col = col || 5
WHERE NOT col #> '{5}';
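The supporting index for that form could look like this (index name is arbitrary):
CREATE INDEX tbl_col_gin_idx ON tbl USING gin (col);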
See:
Check if value exists in Postgres array
Can PostgreSQL index array columns?