PostgreSQL - remove the brackets in the table - SQL

I sorted the values of a column; now I want to remove the parentheses inside the {} braces.
The following code does the sorting and writes the result back:
FOR _reference, _val IN select reference, categories from responses
LOOP
_array = (select (array_agg(p order by p.a )) from (select unnest(_val::text[]) as a) as p);
update responses SET categories = _array where reference = _reference;
END LOOP;
The output of the categories column in the table looks like:
{(DSM),(Post)}
I need the output to look like:
{DSM,Post}

You are mixing table aliases and column aliases, which is the root of your problem.
If you simplify your expression by removing unnecessary levels of nesting and parentheses, things work just fine:
(select array_agg(p.a order by p.a) from unnest(_val::text[]) as p(a))
However, you don't need an inefficient PL/pgSQL loop for this. You can do this in a single UPDATE statement:
update responses
set categories = (select array_agg(p.a order by p.a) from unnest(categories) as p(a))
Or, slightly more efficiently, without array_agg():
update responses
set categories = array(select p.a from unnest(categories) as p(a) order by p.a)
categories is apparently already an array column, so the cast ::text[] in your loop seems unnecessary.
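To see the whole thing end to end, here is a minimal, self-contained sketch (assuming categories is a text[] column; the table and sample values are illustrative only):
-- create a tiny sample table and sort its array column in place
CREATE TABLE responses (reference int PRIMARY KEY, categories text[]);
INSERT INTO responses VALUES (1, '{Post,DSM}');
UPDATE responses
SET categories = array(SELECT p.a FROM unnest(categories) AS p(a) ORDER BY p.a);
SELECT categories FROM responses;  -- {DSM,Post}, no inner parentheses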


How to use the LISTAGG operator so that the query fetches comma-separated values

SELECT (SELECT STRING_VALUE
FROM EMP_NODE_PROPERTIES
WHERE NODE_ID=AN.ID ) containedWithin
FROM EMP_NODE AN
WHERE AN.STORE_ID = ALS.ID
AND an.TYPE_QNAME_ID=(SELECT ID
FROM EMP_QNAME
where LOCAL_NAME = 'document')
AND
AND AN.UUID='13456677';
From the above query I am getting the error below:
ORA-01427: single-row subquery returns more than one row
How should I change the above query so that it fetches comma-separated values?
This query won't return the error you mentioned, because
there are two ANDs, and
there's no ALS table (or its alias).
I suggest you post something that is correctly written; then we can discuss other errors.
Basically, it is either the select string_value ... or the select id ... subquery (or even both of them) that returns more than a single row.
The most obvious "solution" is to use SELECT DISTINCT;
another one is to include WHERE ROWNUM = 1;
or, use an aggregate function, e.g. SELECT MAX(string_value) ... (sketched just below);
while the most appropriate option would be to join all the tables involved, decide which row (value) is correct, and adjust the query (i.e. its WHERE clause) to make sure that the desired value is returned.
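For example, the aggregate-function option could look like this; it keeps your table and column names but drops the nonexistent ALS reference and the duplicated AND:
SELECT (SELECT MAX(STRING_VALUE)
        FROM EMP_NODE_PROPERTIES
        WHERE NODE_ID = AN.ID) containedWithin
FROM EMP_NODE AN
WHERE AN.TYPE_QNAME_ID = (SELECT ID
                          FROM EMP_QNAME
                          WHERE LOCAL_NAME = 'document')
AND AN.UUID = '13456677';
If the select id subquery can also return more than one row, the same treatment applies there. And if you actually need all of the values as a comma-separated list rather than just one of them, see the LISTAGG query below.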
You would seem to want something like this:
SELECT LISTAGG(NP.STRING_VALUE, ',') WITHIN GROUP(ORDER BY NP.STRING_VALUE)
as containedWithin
FROM EMP_NODE N
JOIN EMP_QNAME Q
ON N.TYPE_QNAME_ID = Q.ID
LEFT JOIN EMP_NODE_PROPERTIES NP
ON NP.NODE_ID = N.ID
WHERE Q.LOCAL_NAME = 'document'
AND N.UUID = '13456677';
This is a bit speculative because your original query would not run for the reason explained by Littlefoot.

JPA/SQL Server - Use list of Integers as Temp Table

I have a list of Integer objects in my Spring Boot program that I want to use as a reference to update a table, only setting a column to a certain value for records with an id found in this list. Because of the potential length of the list, I want to avoid using the IN clause, as this will likely result in a SQL Server error for too many parameters.
The solution that I am thinking of involves a query similar to the following:
WITH ids as (select * from <list of Integers> as pool(num))
update t set t.column = :value from <table> t, ids where t.id = ids.num
The problem that I see with this is wrapping each Integer in the list in VALUE(), i.e.:
WITH ids as (select * from value(1),value(2),...etc)
While it seems that I could build this query string programmatically by iterating over the list in Java, I would really like to avoid doing so if possible. I did try searching for a solution, but could not find quite what I was looking for. Is there a solution for this?
You are constructing the query anyway, so I personally don't see an issue with constructing a VALUES clause. However, you can also pass the ids as a single comma-separated string and parse it:
update t
set t.column = :value
from <table> t
where t.id in (select * from string_split(:ids, ','));
Note: You may need to be careful about type conversion, so if id is an integer:
update t
set t.column = :value
from <table> t
where t.id in (select try_convert(int, value) from string_split(:ids, ','));
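If you do decide to build a VALUES list instead (the approach sketched in the question), the shape would be roughly the following; the table and column names here are placeholders:
-- placeholder names: some_table(id, some_column); the literals stand in for the generated list
WITH ids(num) AS (
    SELECT v.num FROM (VALUES (1), (2), (3)) AS v(num)
)
UPDATE t
SET t.some_column = :value
FROM some_table t
JOIN ids ON t.id = ids.num;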

Update from regexp matches in same table without using subquery

I want to fill two columns from the results of a regular expression matching on a column of the same table.
Extracting the matches in an array is easy enough:
select regexp_matches(description, '(?i)^(https?://\S{4,220}\.(?:jpe?g|png))\s(.*)$') matches from room;
(note that only some of the rows match, not all of them)
But in order to do the update I didn't find anything simpler than
1) repeating the regex which would be ridiculous:
update room r set
link=(regexp_matches(description, '(?i)^(https?://\S{4,220}\.(?:jpe?g|png))\s(.*)$'))[1],
description=(regexp_matches(description, '(?i)^(https?://\S{4,220}\.(?:jpe?g|png))\s(.*)$'))[2]
where description ~ '(?i)^(https?://\S{4,220}\.(?:jpe?g|png))\s(.*)$';
2) a query with a subquery and an id join, which looks overcomplicated and probably not the most efficient:
update room r set link=matches[1], description=matches[2] from (
select id, regexp_matches(description, '(?i)^(https?://\S{4,220}\.(?:jpe?g|png))\s(.*)$') matches from room
) s where matches is not null and r.id=s.id;
What's the proper solution here? I suspect one of the magical array functions of PostgreSQL would do the trick, or another regexp-related function, or maybe something even simpler.
From PostgreSQL 9.5, you can use the following syntax:
with p(pattern) as (
select '(?in)^(https?://\S{4,220}\.(?:jpe?g|png))\s(.*)$'
)
update room
set (link, description) = (select m[1], m[2]
from regexp_matches(description, pattern) m)
from p
where description ~ pattern;
This way the pattern is only written once, but the regex is still evaluated twice per row (once in the WHERE clause and once in regexp_matches()). If you want to avoid that you'll need to use a join anyway. Or, you could do:
update room
set (link, description) = (
    select coalesce(m[1], l), coalesce(m[2], d)
    from (select link l, description d) s
    left join regexp_matches(d, '(?in)^(https?://\S{4,220}\.(?:jpe?g|png))\s(.*)$') m on true
);
But this will "touch" every row no matter what. It just won't modify the values of link and description when there is no match.
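For completeness, a join-based variant that evaluates the regex only once per row and touches only the matching rows could look like this (a sketch, assuming room has an id primary key):
-- self-join on id; non-matching rows produce no lateral row and are left untouched
UPDATE room r
SET link = m.match[1],
    description = m.match[2]
FROM room r2
CROSS JOIN LATERAL regexp_matches(
    r2.description,
    '(?i)^(https?://\S{4,220}\.(?:jpe?g|png))\s(.*)$'
) AS m(match)
WHERE r.id = r2.id;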

MDX equivalent to LIKE

In SQL I like to search a column for matches of a particular string using something like this:
SELECT t.attributeNAME
FROM myTable t
WHERE t.attributeNAME LIKE '%searchString%'
I might like to use that in a temp table and then use the result in subsequent sections of a longer script like so:
--find the keys
SELECT t.attributeKEY
INTO #Temp
FROM myTable t
WHERE t.attributeNAME LIKE '%searchString%'
--use the keys
SELECT SUM(f.measure)
FROM myFacts f
INNER JOIN #Temp t ON
f.attributeKEY = t.attributeKEY
--use the keys again
SELECT SUM(F.measure)
FROM myOtherFacts F
INNER JOIN #Temp t ON
F.attributeKEY = t.attributeKEY
Is there an equivalent to this in MDX? If I have an idea what items from a hierarchy I'm after, can I somehow use a searchString to filter down to a specific set of items?
EDIT
As pointed out by Marc Polizzi's answer, it seems like InStr is very useful in this situation, and I can do the following:
CREATE SET [xCube].[Set_Names] AS
{FILTER(
[xDimension].[xHierarchy].[xLevel].Members,
(InStr(1, [xDimension].[xHierarchy].CurrentMember.NAME, "WIL") <> 0)
)
}
GO
SELECT
NON EMPTY
[Set_Names]
ON ROWS,
NON EMPTY
[Measures].[x]
ON COLUMNS
FROM [xCube]
GO
SELECT
NON EMPTY
[Set_Names]
ON ROWS,
NON EMPTY
[Measures].[y]
ON COLUMNS
FROM [xCube]
GO
SELECT
NON EMPTY
[Set_Names]
ON ROWS,
NON EMPTY
[Measures].[z]
ON COLUMNS
FROM [xCube]
You might be able to use the InStr function, even though it does not support wildcards.
There is no such thing as LIKE in plain MDX, but there is an implementation in the ASSP project: http://asstoredprocedures.codeplex.com/wikipage?title=StringFilters&referringTitle=Home

PostgreSQL case insensitive SELECT on array

I'm having problems finding the answer here, on Google, or in the docs...
I need to do a case insensitive select against an array type.
So if:
value = {"Foo","bar","bAz"}
I need
SELECT value FROM table WHERE 'foo' = ANY(value)
to match.
I've tried lots of combinations of lower() with no success.
ILIKE instead of = seems to work but I've always been nervous about LIKE - is that the best way?
One alternative not mentioned is to install the citext extension that comes with PostgreSQL 8.4+ and use an array of citext:
regress=# CREATE EXTENSION citext;
regress=# SELECT 'foo' = ANY( '{"Foo","bar","bAz"}'::citext[] );
?column?
----------
t
(1 row)
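If the column itself is declared as citext[], the original query then works unchanged; a quick sketch with a throwaway table:
-- throwaway table just to show a citext[] column in action
CREATE TABLE t (value citext[]);
INSERT INTO t VALUES ('{"Foo","bar","bAz"}');
SELECT value FROM t WHERE 'foo' = ANY(value);  -- matches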
If you want to be strictly correct about this and avoid extensions you have to do some pretty ugly subqueries because Pg doesn't have many rich array operations, in particular no functional mapping operations. Something like:
SELECT array_agg(lower(($1)[n])) FROM generate_subscripts($1,1) n;
... where $1 is the array parameter. In your case I think you can cheat a bit because you don't care about preserving the array's order, so you can do something like:
SELECT 'foo' IN (SELECT lower(x) FROM unnest('{"Foo","bar","bAz"}'::text[]) x);
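Applied to the original query shape, that cheat becomes something like this (a sketch, assuming your table is tbl with a text[] column named value):
SELECT value
FROM tbl
WHERE 'foo' IN (SELECT lower(x) FROM unnest(value) AS x);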
This seems hackish to me but I think it should work
SELECT value FROM table WHERE 'foo' = ANY(lower(value::text)::text[])
ILIKE could have issues if your arrays can contain _ or %.
Note that what you are doing is converting the text array to a single text string, converting it to lower case, and then back to an array. This should be safe. If this is not sufficient you could use various combinations of string_to_array and array_to_string, but I think the standard textual representations should be safer.
Update: building on the subquery solution below, one option would be a simple function:
CREATE OR REPLACE FUNCTION lower(text[]) RETURNS text[] LANGUAGE SQL IMMUTABLE AS
$$
SELECT array_agg(lower(value)) FROM unnest($1) value;
$$;
Then you could do:
SELECT value FROM table WHERE 'foo' = ANY(lower(value));
This might actually be the best approach. You could also create GIN indexes on the output of the function if you want.
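The index part could look roughly like this (a sketch, assuming a table tbl with a text[] column value and the lower(text[]) function above); note that the array containment operator @> is the form of the predicate that can actually use a GIN index:
-- expression GIN index over the lower-cased array
CREATE INDEX tbl_value_lower_idx ON tbl USING gin (lower(value));
-- this predicate can use the index
SELECT value FROM tbl WHERE lower(value) @> ARRAY['foo'];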
Another alternative would be with unnest()
WITH tbl AS (SELECT 1 AS id, '{"Foo","bar","bAz"}'::text[] AS value)
SELECT value
FROM (SELECT id, value, unnest(value) AS val FROM tbl) x
WHERE lower(val) = 'foo'
GROUP BY id, value;
I added an id column to get exactly identical results - i.e. duplicate values if there are duplicates in the base table. Depending on your circumstances, you may be able to omit the id from the query, either to collapse duplicates in the results or because there are no dupes to begin with. The following also demonstrates a syntax alternative:
SELECT value
FROM (SELECT value, lower(unnest(value)) AS val FROM tbl) x
WHERE val = 'foo'
GROUP BY value;
If array elements are unique within arrays in lower case, you don't even need the GROUP BY, since every value can only match once.
SELECT value
FROM (SELECT value, lower(unnest(value)) AS val FROM tbl) x
WHERE val = 'foo';
'foo' must be lower case, obviously.
Should be fast.
If you want that to be fast with a big table, though, I would create a functional GIN index.
My solution to exclude values uses a sub-select:
and groupname not ilike all (
select unnest(array[exceptionname||'%'])
from public.group_exceptions
where ...
and ...
)
A regular expression may do the job in most cases:
SELECT array_to_string('{"a","b","c"}'::text[],'|') ~* ANY('{"A","B","C"}');
I find creating a custom PostgreSQL function works best for me:
CREATE OR REPLACE FUNCTION lower(text_array text[]) RETURNS text[] AS
$BODY$
SELECT (lower(text_array::text))::text[]
$BODY$
LANGUAGE SQL IMMUTABLE;
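A quick sanity check of the function, using the sample array from the question:
SELECT 'foo' = ANY(lower('{"Foo","bar","bAz"}'::text[]));  -- returns t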