I have a table in PostgreSQL with a text column that has values like this:
column
-----------
CA;TB;BA;CB
XA;VA
GA;BA;LA
I want to sort the elements that are in each value, so that the query results like this:
column
-----------
BA;CA;CB;TB
VA;XA
BA;GA;LA
I have tried string_to_array, regexp_split_to_array, and array_agg, but I can't seem to get close to it.
Thanks.
I hope this is easy to understand:
WITH tab AS (
SELECT
*
FROM
unnest(ARRAY[
'CA;TB;BA;CB',
'XA;VA',
'GA;BA;LA']) AS txt
)
SELECT
string_agg(val, ';')
FROM (
SELECT
txt,
regexp_split_to_table(txt, ';') AS val
FROM
tab
ORDER BY
val
) AS sub
GROUP BY
txt;
First I split the values into rows (regexp_split_to_table) and sort them. Then I group by the original value and join the pieces back together with string_agg.
Output:
BA;CA;CB;TB
BA;GA;LA
VA;XA
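Outside the database, the same split-sort-rejoin logic is easy to sanity-check. A minimal Python sketch (the function name is my own):

```python
def sort_semicolon_list(value):
    """Split on ';', sort the elements, and join them back together."""
    return ";".join(sorted(value.split(";")))

# The sample values from the question:
for row in ["CA;TB;BA;CB", "XA;VA", "GA;BA;LA"]:
    print(sort_semicolon_list(row))
# BA;CA;CB;TB
# VA;XA
# BA;GA;LA
```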
I'm probably overcomplicating it:
t=# with a(c)as (values('CA;TB;BA;CB')
,('XA;VA')
,('GA;BA;LA'))
, w as (select string_agg(v,';') over (partition by c order by v), row_number() over (partition by c),count(1) over(partition by c) from a,unnest(string_to_array(a.c,';')) v)
select * from w where row_number = count;
string_agg | row_number | count
-------------+------------+-------
BA;CA;CB;TB | 4 | 4
BA;GA;LA | 3 | 3
VA;XA | 2 | 2
(3 rows)
and here's a slightly ugly hack:
with a(c)as (values
('CA;TB;BA;CB')
,('XA;VA')
,('GA;BA;LA'))
select translate(array_agg(v order by v)::text,',{}',';') from a, unnest(string_to_array(a.c,';')) v group by c;
translate
-------------
BA;CA;CB;TB
BA;GA;LA
VA;XA
(3 rows)
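The translate() call maps ',' to ';' and deletes the braces from the array's text form. The same character mapping can be checked in Python:

```python
# "{BA,CA,CB,TB}" is what array_agg(...)::text looks like in Postgres.
text = "{BA,CA,CB,TB}"
# Map ',' to ';' and delete '{' and '}', like translate(text, ',{}', ';'):
cleaned = text.translate(str.maketrans({",": ";", "{": None, "}": None}))
print(cleaned)  # BA;CA;CB;TB
```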
First, let me describe my problem. I need to ignore all repeated values in my select query. So, for example, if I have something like this:
| Other columns| THE COLUMN I'm working with |
| ............ | Value 1 |
| ............ | Value 2 |
| ............ | Value 2 |
I'd like to get a result containing only the row with "Value 1".
Now, because of the specifics of my task, I need to validate it with a subquery.
So I've figured out something like this:
NOT EXISTS (SELECT 1 FROM TABLE fpd WHERE fpd.value = fp.value HAVING count(*) > 2)
It works the way I want, but I'm aware that it's slow. Also, I tried putting 1 instead of 2 in the HAVING comparison, but that returns zero results. Could you explain where the value 2 comes from?
I would suggest window functions:
select t.*
from (select t.*, count(*) over (partition by value) as cnt
from fpd t
) t
where cnt = 1;
Alternatively, you can use not exists with a primary key:
where not exists (select 1
from fpd fpd2
where fpd2.value = fp.value and
fpd2.primarykey <> fp.primarykey
)
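For intuition, the `count(*) over (partition by value) = 1` filter keeps only the rows whose value occurs exactly once. A rough Python equivalent (names are mine):

```python
from collections import Counter

def rows_with_unique_value(rows, key):
    """Keep only the rows whose `key` value occurs exactly once."""
    counts = Counter(row[key] for row in rows)
    return [row for row in rows if counts[row[key]] == 1]

rows = [
    {"id": 1, "value": "Value 1"},
    {"id": 2, "value": "Value 2"},
    {"id": 3, "value": "Value 2"},
]
print(rows_with_unique_value(rows, "value"))  # only the "Value 1" row
```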
SELECT DISTINCT myColumn FROM myTable
I have a table that looks like this:
val | fkey | num
------------------
1 | 1 | 1
1 | 2 | 1
1 | 3 | 1
2 | 3 | 1
What I would like to do is return a set of rows in which values are grouped by 'val', with an array of fkeys, but only where the array of fkeys contains more than one element. So, in the above example, the return would look something like:
1 | [1,2,3]
I have the following query, which aggregates the arrays:
SELECT val, array_agg(fkey)
FROM mytable
GROUP BY val;
But this returns something like:
1 | [1,2,3]
2 | [3]
What would be the best way of doing this? I guess one possibility would be to use my existing query as a subquery, and do a sum / count on that, but that seems inefficient. Any feedback would really help!
Use a HAVING clause to filter out the groups that have more than one fkey:
SELECT val, array_agg(fkey)
FROM mytable
GROUP BY val
Having Count(fkey) > 1
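In procedural terms, this groups the fkeys by val and then drops any group with a single element. A small Python sketch of the same idea (the function name is mine):

```python
from collections import defaultdict

def arrays_with_duplicates(rows):
    """Group fkeys by val, keeping only groups with more than one
    fkey (the HAVING Count(fkey) > 1 step)."""
    groups = defaultdict(list)
    for val, fkey in rows:
        groups[val].append(fkey)
    return {val: fkeys for val, fkeys in groups.items() if len(fkeys) > 1}

rows = [(1, 1), (1, 2), (1, 3), (2, 3)]
print(arrays_with_duplicates(rows))  # {1: [1, 2, 3]}
```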
Using the HAVING clause as @Fireblade pointed out is probably more efficient, but you can also leverage subqueries:
SQLFiddle: Subquery
SELECT * FROM (
select val, array_agg(fkey) fkeys
from mytable
group by val
) array_creation
WHERE array_length(fkeys,1) > 1
You could also use the array_length function in the HAVING clause, but again, @Fireblade has used count(), which should be more efficient. Still:
SQLFiddle: Having Clause
SELECT val, array_agg(fkey) fkeys
FROM mytable
GROUP BY val
HAVING array_length(array_agg(fkey),1) > 1
This isn't a total loss, though. Using the array_length in the having can be useful if you want a distinct list of fkeys:
SELECT val, array_agg(DISTINCT fkey) fkeys
There may still be other ways, but this method is more descriptive, which may allow your SQL to be easier to understand when you come back to it, years from now.
I'm Nguyen Van Dung.
I'm having trouble checking for gaps in serial numbers, where each number is a combination of characters and digits.
I have a table with the following data:
SERIAL NUMBER
3LBCF007787
3LBCF007788
3LBCF007789
3LBCF007790
3LBCF007792
3LBCF007793
3LBCF007794
3LBCF007795
Now I really want to display an output table as below structure
START_SERIAL END_SERIAL
3LBCF007787 3LBCF007790
3LBCF007792 3LBCF007795
Please help me write this query for SQL Server.
Thank you very much.
Assuming that the format of serial_number is fixed, here is one way of doing it:
WITH split AS
(
SELECT serial_number,
LEFT(serial_number, 5) prefix,
CONVERT(INTEGER, RIGHT(serial_number, 6)) num
FROM table1
), ordered AS
(
SELECT *,
ROW_NUMBER() OVER (ORDER BY prefix, num) rn,
MIN(num) OVER (PARTITION BY prefix) mnum
FROM split
)
SELECT MIN(serial_number) start_serial,
MAX(serial_number) end_serial
FROM ordered
GROUP BY num - mnum - rn
Output:
| START_SERIAL | END_SERIAL |
|--------------|-------------|
| 3LBCF007787 | 3LBCF007790 |
| 3LBCF007792 | 3LBCF007795 |
Here is SQLFiddle demo
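The key step is the gaps-and-islands trick: within a run of consecutive numbers, number minus row number is constant, so that difference can serve as a group key. A Python sketch of the same grouping, assuming the fixed 5-character prefix / 6-digit suffix format:

```python
def serial_ranges(serials):
    """Collapse consecutive serial numbers into (start, end) ranges.
    Consecutive numbers share the same value of number - row_index."""
    parsed = sorted((s[:5], int(s[5:]), s) for s in serials)
    groups = {}
    for i, (prefix, num, serial) in enumerate(parsed):
        # Rows in one unbroken run map to the same (prefix, num - i) key.
        groups.setdefault((prefix, num - i), []).append(serial)
    return [(g[0], g[-1]) for g in groups.values()]

serials = ["3LBCF007787", "3LBCF007788", "3LBCF007789", "3LBCF007790",
           "3LBCF007792", "3LBCF007793", "3LBCF007794", "3LBCF007795"]
print(serial_ranges(serials))
# [('3LBCF007787', '3LBCF007790'), ('3LBCF007792', '3LBCF007795')]
```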
We have a problem grouping arrays into a single array.
We want to join the values from two columns into one single array and aggregate these arrays of multiple rows.
Given the following input:
| id | name | col_1 | col_2 |
| 1 | a | 1 | 2 |
| 2 | a | 3 | 4 |
| 4 | b | 7 | 8 |
| 3 | b | 5 | 6 |
We want the following output:
| a | { 1, 2, 3, 4 } |
| b | { 5, 6, 7, 8 } |
The order of the elements is important and should correlate with the id of the aggregated rows.
We tried the array_agg() function:
SELECT array_agg(ARRAY[col_1, col_2]) FROM mytable GROUP BY name;
Unfortunately, this statement raises an error:
ERROR: could not find array type for data type character varying[]
It seems to be impossible to merge arrays in a group by clause using array_agg().
Any ideas?
UNION ALL
You could "unpivot" with UNION ALL first:
SELECT name, array_agg(c) AS c_arr
FROM (
SELECT name, id, 1 AS rnk, col1 AS c FROM tbl
UNION ALL
SELECT name, id, 2, col2 FROM tbl
ORDER BY name, id, rnk
) sub
GROUP BY 1;
Adapted to produce the order of values you later requested. The manual:
The aggregate functions array_agg, json_agg, string_agg, and xmlagg,
as well as similar user-defined aggregate functions, produce
meaningfully different result values depending on the order of the
input values. This ordering is unspecified by default, but can be
controlled by writing an ORDER BY clause within the aggregate call, as
shown in Section 4.2.7. Alternatively, supplying the input values from
a sorted subquery will usually work.
Bold emphasis mine.
LATERAL subquery with VALUES expression
LATERAL requires Postgres 9.3 or later.
SELECT t.name, array_agg(c) AS c_arr
FROM (SELECT * FROM tbl ORDER BY name, id) t
CROSS JOIN LATERAL (VALUES (t.col1), (t.col2)) v(c)
GROUP BY 1;
Same result. Only needs a single pass over the table.
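The unpivot-then-aggregate idea is easy to mirror procedurally: sort the rows by (name, id), emit col1 then col2 for each row, and collect per name. A Python sketch using the question's column names:

```python
def aggregate_pairs(rows):
    """rows are (name, id, col1, col2) tuples; returns one flat list
    per name, ordered by id, with col1 before col2 for each row."""
    result = {}
    for name, _id, col1, col2 in sorted(rows, key=lambda r: (r[0], r[1])):
        result.setdefault(name, []).extend([col1, col2])
    return result

rows = [("a", 1, 1, 2), ("a", 2, 3, 4), ("b", 4, 7, 8), ("b", 3, 5, 6)]
print(aggregate_pairs(rows))  # {'a': [1, 2, 3, 4], 'b': [5, 6, 7, 8]}
```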
Custom aggregate function
Or you could create a custom aggregate function like discussed in these related answers:
Selecting data into a Postgres array
Is there something like a zip() function in PostgreSQL that combines two arrays?
CREATE AGGREGATE array_agg_mult (anyarray) (
SFUNC = array_cat
, STYPE = anyarray
, INITCOND = '{}'
);
Then you can:
SELECT name, array_agg_mult(ARRAY[col1, col2] ORDER BY id) AS c_arr
FROM tbl
GROUP BY 1
ORDER BY 1;
Or, typically faster, while not standard SQL:
SELECT name, array_agg_mult(ARRAY[col1, col2]) AS c_arr
FROM (SELECT * FROM tbl ORDER BY name, id) t
GROUP BY 1;
The added ORDER BY id (which can be appended to such aggregate functions) guarantees your desired result:
a | {1,2,3,4}
b | {5,6,7,8}
Or you might be interested in this alternative:
SELECT name, array_agg_mult(ARRAY[ARRAY[col1, col2]] ORDER BY id) AS c_arr
FROM tbl
GROUP BY 1
ORDER BY 1;
Which produces 2-dimensional arrays:
a | {{1,2},{3,4}}
b | {{5,6},{7,8}}
The last one can be replaced (and should be, as it's faster!) with the built-in array_agg() in Postgres 9.5 or later - with its added capability of aggregating arrays:
SELECT name, array_agg(ARRAY[col1, col2] ORDER BY id) AS c_arr
FROM tbl
GROUP BY 1
ORDER BY 1;
Same result. The manual:
input arrays concatenated into array of one higher dimension (inputs
must all have same dimensionality, and cannot be empty or null)
So it's not exactly the same as our custom aggregate function array_agg_mult().
select n, array_agg(c) as c
from (
select n, unnest(array[c1, c2]) as c
from t
) s
group by n
Or simpler:
select
n,
array_agg(c1) || array_agg(c2) as c
from t
group by n
To address the new ordering requirement:
select n, array_agg(c order by id, o) as c
from (
select
id, n,
unnest(array[c1, c2]) as c,
unnest(array[1, 2]) as o
from t
) s
group by n
I'm looking to find a way to search a table for duplicate values and return those duplicates (or even just one of the set of duplicates) as the result set.
For instance, let's say I have these data:
uid | semi-unique id
1 | 12345
2 | 21345
3 | 54321
4 | 41235
5 | 12345
6 | 21345
I need to return either:
12345
12345
21345
21345
Or:
12345
21345
I've tried googling around and keep coming up short. Any help please?
To get each row, you can use window functions:
select t.*
from (select t.*, count(*) over (partition by [semi-unique id]) as totcnt
from t
) t
where totcnt > 1
To get just one instance, try this:
select t.*
from (select t.*, count(*) over (partition by [semi-unique id]) as totcnt,
row_number() over (partition by [semi-unique id] order by (select NULL)
) as seqnum
from t
) t
where totcnt > 1 and seqnum = 1
The advantage of this approach is that you get all the columns, instead of just the id (if that helps).
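The difference between the two variants is just whether every duplicate row is returned or only one representative per duplicated id. In Python terms (illustrative only):

```python
from collections import Counter

def duplicates(ids, one_per_group=False):
    """Return the ids that occur more than once; with one_per_group=True,
    return each duplicated id only once (the seqnum = 1 variant)."""
    counts = Counter(ids)
    if one_per_group:
        return [i for i, n in counts.items() if n > 1]
    return [i for i in ids if counts[i] > 1]

ids = [12345, 21345, 54321, 41235, 12345, 21345]
print(duplicates(ids))                      # [12345, 21345, 12345, 21345]
print(duplicates(ids, one_per_group=True))  # [12345, 21345]
```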
Sorry, I was short on time earlier so I couldn't explain my answer. The first query groups the semi_unique_ids that are the same and only returns the ones that have a duplicate.
SELECT semi_unique_id
FROM your_table
GROUP BY semi_unique_id
HAVING COUNT(semi_unique_id) > 1
If you wanted to get the uid in the query too you can easily add it like so.
SELECT uid,
       semi_unique_id
FROM your_table
GROUP BY
       semi_unique_id,
       uid
HAVING COUNT(semi_unique_id) > 1
Lastly if you would like to get an idea of how many duplicates per row returned you would do the following.
SELECT uid,
       semi_unique_id,
       COUNT(semi_unique_id) AS unique_id_count
FROM your_table
GROUP BY
       semi_unique_id,
       uid
HAVING COUNT(semi_unique_id) > 1
Try this for SQL Server:
SELECT t.semi_unique_id AS i
FROM your_table t
GROUP BY
       t.semi_unique_id
HAVING COUNT(t.semi_unique_id) > 1