select only tuples where second column always has same value - sql

I have a similar table to this one
ID | CountryID
1 | 22
1 | 22
2 | 19
3 | 0
3 | 14
3 | 18
3 | 21
3 | 22
3 | 23
4 | 19
5 | 9
5 | 9
6 | 14
and I want to group by the first column (ID), but keep only the IDs whose CountryID has the same value in every row. The resulting table should look like:
ID | CountryID
1 | 22
2 | 19
4 | 19
5 | 9
6 | 14
Any ideas?

I think the following query should work:
SELECT ID, MAX(CountryID)
FROM Table1
GROUP BY ID
HAVING MIN(CountryID) = MAX(CountryID)

Alternatively, count the distinct values per ID and keep the value itself with MIN (or MAX):
SELECT ID, MIN(CountryID) AS CountryID
FROM Table1
GROUP BY ID
HAVING COUNT(DISTINCT CountryID) = 1
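For reference, here is a minimal sketch to try both queries against the sample data; recreating the question's Table1 like this is my own assumption:
-- recreate the sample data from the question
CREATE TABLE Table1 (ID int, CountryID int);
INSERT INTO Table1 (ID, CountryID) VALUES
    (1, 22), (1, 22), (2, 19), (3, 0), (3, 14), (3, 18), (3, 21),
    (3, 22), (3, 23), (4, 19), (5, 9), (5, 9), (6, 14);
-- either HAVING variant keeps only the IDs whose CountryID never varies,
-- returning (1, 22), (2, 19), (4, 19), (5, 9), (6, 14)
SELECT ID, MAX(CountryID) AS CountryID
FROM Table1
GROUP BY ID
HAVING MIN(CountryID) = MAX(CountryID);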

Related

How to get columns when using buckets (width_bucket)

I would like to know which rows were put into each bucket.
SELECT
width_bucket(s.score, sl.mins, sl.maxs, 9) as buckets,
COUNT(*)
FROM scores s
CROSS JOIN scores_limits sl
GROUP BY 1
ORDER BY 1;
What my query actually returns:
buckets | count
---------+-------
1 | 182
2 | 37
3 | 46
4 | 15
5 | 29
7 | 18
8 | 22
10 | 11
| 20
What I would like to be able to write is something like:
SELECT buckets FROM buckets_table [...] WHERE scores.id = 1;
How can I get, for example, the column 'id' of table scores?
I believe you can include the id in an array with array_agg. If I recreate your case with
create table test (id serial, score int);
insert into test(score) values (10),(9),(5),(4),(10),(2),(5),(7),(8),(10);
The data is
id | score
----+-------
1 | 10
2 | 9
3 | 5
4 | 4
5 | 10
6 | 2
7 | 5
8 | 7
9 | 8
10 | 10
(10 rows)
Using the following and aggregating the id with array_agg
SELECT
width_bucket(score, 0, 10, 11) as buckets,
COUNT(*) nr_ids,
array_agg(id) agg_ids
FROM test s
GROUP BY 1
ORDER BY 1;
You get
buckets | nr_ids | agg_ids
---------+--------+----------
3 | 1 | {6}
5 | 1 | {4}
6 | 2 | {3,7}
8 | 1 | {8}
9 | 1 | {9}
10 | 1 | {2}
12 | 3 | {1,5,10}
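If the goal is to look up the bucket for one particular row (the scores.id = 1 example from the question), a per-row variant of the original query, without the GROUP BY, should also work; this is only a sketch assuming the question's scores and scores_limits tables:
SELECT
    s.id,
    s.score,
    width_bucket(s.score, sl.mins, sl.maxs, 9) AS bucket
FROM scores s
CROSS JOIN scores_limits sl
WHERE s.id = 1;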

Select all the records in the first table that match each of the records in the second

I'm working with an Access database and have two tables:
ID_1 | Number | Some other data
1 | 1 | Data
2 | 2 | Data
3 | 3 | Data
4 | 4 | Data
5 | 3 | Data
6 | 1 | Data
7 | 2 | Data
8 | 3 | Data
9 | 1 | Data
10 | 1 | Data
11 | 2 | Data
12 | 3 | Data
13 | 4 | Data
14 | 1 | Data
15 | 2 | Data
16 | 3 | Data
17 | 4 | Data
18 | 3 | Data
19 | 3 | Data
ID_2 | Number | Some other data
1 | 3 | Data
2 | 1 | Data
3 | 2 | Data
4 | 3 | Data
5 | 2 | Data
As you can see, both tables contain duplicate Number values. I need a query that selects all the records in the first table that match the records in the second; they are related by the Number field. It's also necessary that these records aren't repeated (that is, the query must not return duplicate rows). For the given example I should get this result:
ID | ID_1 | Number | Some other data
1 | 3 | 3 | Data
2 | 5 | 3 | Data
3 | 8 | 3 | Data
4 | 12 | 3 | Data
5 | 16 | 3 | Data
6 | 18 | 3 | Data
7 | 19 | 3 | Data
8 | 1 | 1 | Data
9 | 6 | 1 | Data
10 | 9 | 1 | Data
11 | 10 | 1 | Data
12 | 14 | 1 | Data
13 | 2 | 2 | Data
14 | 7 | 2 | Data
15 | 11 | 2 | Data
16 | 15 | 2 | Data
I was thinking I could use a JOIN, but I don't know how; I also tried WHERE, but couldn't find a use for it. Could you please help me with that?
I don't see where you're generating your output ID field from, or where you're picking your Data field from, so here's my best guess.
SELECT Table1.ID_1, Table1.Number, Table1.[Some other data]
FROM Table1
WHERE (Table1.Number In (SELECT Number From Table2))
ORDER BY Table1.Number, Table1.ID_1;
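Since you mention JOIN: an equivalent sketch using INNER JOIN plus DISTINCT (DISTINCT guards against repeats when the same Number appears more than once in Table2) would be:
SELECT DISTINCT Table1.ID_1, Table1.Number, Table1.[Some other data]
FROM Table1
INNER JOIN Table2 ON Table1.Number = Table2.Number
ORDER BY Table1.Number, Table1.ID_1;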
Here is a demonstration with a MySQL setup.
MySQL DB data structure:
create table tbl1(ID_1 serial, Number int);
create table tbl2(ID_2 serial, Number int);
insert into tbl1(Number) values (1),(2),(3),(4),(3),(1),(2),(3),(1),(1),(2),(3),(4),(1),(2),(3),(4),(3),(3);
insert into tbl2(Number) values (3),(1),(2),(3),(2);
The CTE s is needed to remove the duplicate Numbers from tbl2.
The window function count(tbl1.Number) OVER (PARTITION BY tbl1.Number) sorts the result for us by the count of matched Numbers, descending.
The @rownum variable numbers the output rows.
with s as (select distinct Number from tbl2),
f as (select ID_1, tbl1.Number from tbl1
      left join s on tbl1.Number = s.Number
      where s.Number is not null
      order by count(tbl1.Number) over (partition by tbl1.Number) desc, ID_1)
select @rownum := @rownum + 1 AS ID, ID_1, Number
from f, (SELECT @rownum := 0) r;
results
+------+------+--------+
| ID | ID_1 | Number |
+------+------+--------+
| 1 | 3 | 3 |
| 2 | 5 | 3 |
| 3 | 8 | 3 |
| 4 | 12 | 3 |
| 5 | 16 | 3 |
| 6 | 18 | 3 |
| 7 | 19 | 3 |
| 8 | 1 | 1 |
| 9 | 6 | 1 |
| 10 | 9 | 1 |
| 11 | 10 | 1 |
| 12 | 14 | 1 |
| 13 | 2 | 2 |
| 14 | 7 | 2 |
| 15 | 11 | 2 |
| 16 | 15 | 2 |
+------+------+--------+
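On MySQL 8.0+ the @rownum trick can be replaced by ROW_NUMBER(); a sketch over the same tbl1/tbl2:
SELECT ROW_NUMBER() OVER (ORDER BY cnt DESC, ID_1) AS ID,
       ID_1,
       Number
FROM (
    SELECT t1.ID_1,
           t1.Number,
           COUNT(*) OVER (PARTITION BY t1.Number) AS cnt
    FROM tbl1 t1
    JOIN (SELECT DISTINCT Number FROM tbl2) s ON t1.Number = s.Number
) f;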

full outer join in redshift

I have 2 tables, A and B, containing some details of students (all columns are integers):
A:
st_id,
st_subject_id,
B:
st_id,
st_subject_id,
st_count1,
st_count2
st_id means student id, st_subject_id is subject id.
For student id 15, there are the following entries:
A:
15 | 1
15 | 2
15 | 3
B:
15 | 1 | 31 | 11
15 | 2 | 30 | 14
15 | 4 | 21 | 6
15 | 5 | 26 | 9
3 subjects in table A and 4 subjects in table B (2 matching table A and 2 extra).
I want to display the final result as:
15 | 1 | 31 | 11
15 | 2 | 30 | 14
15 | 3 | null | null
15 | 4 | 21 | 6
15 | 5 | 26 | 9
Can this be done using full outer join in SQL, or by another method?
I think something like this would suffice, but I can't test right now.
COALESCE returns the first non-null value among its arguments, so the key columns are taken from whichever table actually has the row.
select
coalesce(A.st_id, B.st_id) st_id,
coalesce(A.st_subject_id, B.st_subject_id) st_subject_id,
B.st_count1,
B.st_count2
from A
full outer join B
on A.st_id = B.st_id and A.st_subject_id = B.st_subject_id
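A quick way to sanity-check it, assuming throwaway versions of A and B filled with the sample rows from the question:
create table A (st_id int, st_subject_id int);
create table B (st_id int, st_subject_id int, st_count1 int, st_count2 int);
insert into A values (15, 1), (15, 2), (15, 3);
insert into B values (15, 1, 31, 11), (15, 2, 30, 14), (15, 4, 21, 6), (15, 5, 26, 9);
-- the full outer join above should then return the five rows shown in the question,
-- including 15 | 3 | null | null for the subject that exists only in A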

Grouping the data by aggregational condition

I need to compose a query (one transaction) that groups elements so that each group's sum of values reaches a minimum, 10 for example. If an element's value is equal to or greater than 10, it should form a separate group on its own.
ElementId | GroupId | Value
1 | NULL | 12
2 | NULL | 10
3 | NULL | 9
4 | NULL | 13
5 | NULL | 25
GroupId | Sum
empty
The element with id 3 can be grouped with any other element, but ideally with another element whose value is less than 10. I also need to update the relationship between elements and groups.
After the query execution:
ElementId | GroupId | Value
1 | 1 | 12
2 | 2 | 10
3 | 3 | 9
4 | 3 | 13
5 | 4 | 25
GroupId | Sum
1 | 12
2 | 10
3 | 22 (13 + 9)
4 | 25
Any ideas?
You can use an analytic function (for the given sample data) as follows:
select groupid_new as groupid,
       sum(value) as value
from (select t.*,
             sum(case when value >= 10 then value end)
                 over (order by elementid) as groupid_new
      from your_table t) t
group by groupid_new
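If you also need to write the group ids back (the question mentions updating the element-to-group relationship), one possible follow-up is sketched below; it assumes PostgreSQL-style UPDATE ... FROM, that the table is called your_table with columns elementid, groupid, value, and it uses dense_rank() only to renumber the running sums into consecutive group ids:
with running as (
    select elementid,
           sum(case when value >= 10 then value end)
               over (order by elementid) as grp_key
    from your_table
),
g as (
    select elementid,
           dense_rank() over (order by grp_key) as new_group
    from running
)
update your_table t
set groupid = g.new_group
from g
where g.elementid = t.elementid;
Like the answer above, this attaches each sub-10 element to the group of the closest preceding element whose value reached 10, so for the sample data element 3 ends up with element 2 (sum 19) rather than with element 4 as in the question's example; both groupings satisfy the minimum-sum rule.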

PostgreSQL - finding and updating multiple records

I have a table:
ID | rows | dimensions
---+------+-----------
1 | 1 | 15 x 20
2 | 3 | 2 x 10
3 | 5 | 23 x 33
3 | 7 | 15 x 23
4 | 2 | 12 x 32
And I want to have something like that:
ID | rows | dimensions
---+------+-----------
1 | 1 | 15 x 20
2 | 3 | 2 x 10
3a | 5 | 23 x 33
3b | 7 | 15 x 23
4 | 2 | 12 x 32
How can I find the duplicated ID values and make them unique?
How can I then update the parent table?
Thanks for your help!
with stats as (
    SELECT "ID",
           "rows",
           row_number() over (partition by "ID" order by "rows") as rn,
           count(*) over (partition by "ID") as cnt
    FROM Table1
)
UPDATE Table1
SET "ID" = CASE WHEN s.cnt > 1 THEN s."ID" || '-' || s.rn
                ELSE s."ID"
           END
FROM stats s
WHERE s."ID" = Table1."ID"
  AND s."rows" = Table1."rows"
I'm assuming you can't have two rows with the same ID and the same rows value; otherwise you need to include "dimensions" in the WHERE clause too.
In this case the output is:
ID | rows | dimensions
----+------+-----------
1 | 1 | 15 x 20
2 | 3 | 2 x 10
3-1 | 5 | 23 x 33
3-2 | 7 | 15 x 23
4 | 2 | 12 x 32
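If you want the letter suffixes from the example (3a, 3b) instead of 3-1 and 3-2, one possible variant converts the row number with chr(); note this, like the query above, assumes "ID" is a text column, since a numeric column can't hold '3a':
with stats as (
    SELECT "ID",
           "rows",
           row_number() over (partition by "ID" order by "rows") as rn,
           count(*) over (partition by "ID") as cnt
    FROM Table1
)
UPDATE Table1
SET "ID" = CASE WHEN s.cnt > 1 THEN s."ID" || chr((96 + s.rn)::int)  -- 97 = 'a', 98 = 'b', ...
                ELSE s."ID"
           END
FROM stats s
WHERE s."ID" = Table1."ID"
  AND s."rows" = Table1."rows";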