See if 2 columns are a 1-to-1 match in SQL

I have a table in SQL.
I need to check whether columns A and B are a 1-to-1 match (meaning that for each value in A there is only one value in B),
but as you can see below, they are not.
Only Cat A and Cat C have a single value in column B.
So is there a way to identify this between the 2 columns?
A      B       C
Cat A  asd     34
Cat A  asd     56
Cat B  dfg     67
Cat B  ghj     7
Cat C  ggh     78
Cat D  ertrty  9
Cat D  tyutyu  6
Cat D  tuuiy   45

SELECT A, COUNT(DISTINCT B)
FROM your_table
GROUP BY A
HAVING COUNT(DISTINCT B) = 1;
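This returns the values of A that map to exactly one distinct B (Cat A and Cat C in the sample). If you instead want to see the offending categories (Cat B and Cat D here), one possible variant is to invert the HAVING condition:

-- Sketch: list the values of A that map to more than one distinct B,
-- i.e. the categories that break the 1-to-1 rule.
SELECT A, COUNT(DISTINCT B) AS distinct_b_values
FROM your_table
GROUP BY A
HAVING COUNT(DISTINCT B) > 1;

An empty result from this query means every value of A maps to exactly one value of B.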

Related

How to make one column match duplicates in another column

This problem is out of my ability range and I can’t get anywhere with it beyond knowing I can probably use LEAD, LAG or maybe a cursor?
Here is a breakdown of the table and question:
row_id is always an IDENTITY(1, 1) column.
The set_id column always starts out in groups of 3s (two 0s for the first set_id, don't worry about why).
The letter column is alphabetic. There are varying counts of duplicates.
Here's the original table:
row_id  set_id  letter
1       0       A
2       0       A
3       1       A
4       1       B
5       1       B
6       2       B
7       2       B
8       2       C
9       3       C
10      3       C
11      3       D
12      4       D
13      4       D
14      4       D
What I need is code that does this: if there is a duplicate letter in the next row, then the set_id in the next row should stay the same as in the previous row (this is the alt_set_id column).
If that doesn't make sense, here is the result I want:
row_id  set_id  letter  alt_set_id
1       0       A       0
2       0       A       0
3       1       A       0
4       1       B       1
5       1       B       1
6       2       B       1
7       2       B       1
8       2       C       2
9       3       C       2
10      3       C       2
11      3       D       3
12      4       D       3
13      4       D       3
14      4       D       3
Here's where I am with the code so far; I'm not really close, but I think I'm on the right path:
SELECT
    *,
    CASE
        WHEN letter = [letter in next row] THEN 'yes'
        ELSE 'no'
    END AS 'next row a duplicate?',
    'tbd' AS alt_set_id
FROM
    (SELECT
        *,
        LEAD(letter) OVER (ORDER BY row_id) AS 'letter in next row'
    FROM
        sort_test) AS dt
WHERE
    row_id = row_id
That query has the below result set, which is something I think I can work with, but it doesn't feel very efficient and I'm not yet getting the result needed in the alt_set_id column:
row_id  set_id  letter  letter in next row  next row a duplicate?  alt_set_id
1       0       A       A                   yes                    tbd
2       0       A       A                   yes                    tbd
3       1       A       B                   no                     tbd
4       1       B       B                   yes                    tbd
5       1       B       B                   yes                    tbd
6       2       B       B                   yes                    tbd
7       2       B       C                   no                     tbd
8       2       C       C                   yes                    tbd
9       3       C       C                   yes                    tbd
10      3       C       D                   no                     tbd
11      3       D       D                   yes                    tbd
12      4       D       D                   yes                    tbd
13      4       D       D                   yes                    tbd
14      4       D       NULL                no                     tbd
Thanks for any help!
Based on your example data, you want the minimum set_id for each letter. If so, use window functions:
select t.*, min(set_id) over (partition by letter) as alt_set_id
from sort_test t;
It would appear, if I understand correctly, that a simple correlated subquery will give you the desired result:
select *, (select Min(set_Id) from t t2 where t2.letter=t.letter) as alt_set_id
from t
See working DB Fiddle
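Both answers above assume each letter only ever appears in one contiguous block of rows, which is true of the sample data. If the same letter could reappear later after other letters, MIN(set_id) per letter would copy the first run's set_id into every later run. In that case a gaps-and-islands variant that only groups consecutive duplicates is a possible alternative; this is only a sketch against the sort_test table, assuming SQL Server 2012+ for LAG and windowed SUM:

-- Sketch: number each run of consecutive identical letters, then take the
-- first set_id of that run as alt_set_id.
WITH runs AS (
    SELECT *,
           CASE WHEN letter = LAG(letter) OVER (ORDER BY row_id)
                THEN 0 ELSE 1
           END AS is_new_run
    FROM sort_test
),
grouped AS (
    SELECT *,
           SUM(is_new_run) OVER (ORDER BY row_id) AS run_id
    FROM runs
)
SELECT row_id, set_id, letter,
       MIN(set_id) OVER (PARTITION BY run_id) AS alt_set_id
FROM grouped
ORDER BY row_id;

On the sample data this produces the same alt_set_id values as the desired output above.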

SQL JOIN for Duplicate Values

I have the following two tables:
A.
A_ID Amount GL_ID
------------------
1 100 10
2 200 11
3 150 10
4 20 10
5 369 12
6 369 11
7 254 12
B.
B_ID Name GL_ID
-----------------
1 A 10
2 B 10
3 C 11
4 D 11
5 E 12
6 F 12
I want to join these tables. They have the GL_ID column in common (the ID of another table). Table A stores transactions along with GL_ID, while table B defines the document type (A, B, C, D, etc.) with reference to GL_ID.
A and B don't have any column in common except GL_ID. I want the following result: the relevant document type (A, B, C, D, etc.) for each transaction in table A.
A.A_ID A.Amount B.Name
-----------------------
1 100 A
2 200 B
3 150 B
4 20 B
5 369 A
6 369 D
7 254 D
But when I apply a join (LEFT, RIGHT, FULL JOIN), the query shows repeated values, and I only want the relevant document type for each line in table A.
Try this:
select distinct A.A_ID, A.Amount, B.Name
from A inner join B on A.GL_ID=B.GL_ID
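Note that DISTINCT only removes rows that are identical across all selected columns; since A_ID differs per transaction, this join can still return two rows per transaction (one per matching name in B). The expected output in the question doesn't make clear which of the two names per GL_ID should win, so here is a sketch that simply keeps the name with the lowest B_ID; the tie-break rule is an assumption:

-- Sketch: keep exactly one B row per GL_ID (lowest B_ID), then join it to A.
SELECT a.A_ID, a.Amount, b.Name
FROM A a
LEFT JOIN (
    SELECT GL_ID, Name,
           ROW_NUMBER() OVER (PARTITION BY GL_ID ORDER BY B_ID) AS rn
    FROM B
) b
    ON b.GL_ID = a.GL_ID AND b.rn = 1;

Any other deterministic ORDER BY in the ROW_NUMBER call would work equally well if a different name should be preferred.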

pasting files/multiple columns with different number of rows

Hi, I was trying to paste multiple files (each with a single column but a different number of rows) together, but it didn't produce what I was expecting. How do I solve that?
paste file1.txt file2.txt file3.txt ... file100.txt > out.txt
input file 1:
A
B
C
input file 2:
D
E
input file 3:
F
G
H
I
J
.......
......
Desired output:
A D F
B E G
C H
I
J
Would this be the same if the files have multiple columns with different numbers of rows?
for example:
file1
A 1
B 2
C 3
file2
D 4
E 5
file3
F 6 %
G 7 &
H 8 #
I 9 #
J 10 ?
output:
A 1 D 4 F 6 %
B 2 E 5 G 7 &
C 3 H 8 #
I 9 #
J 10 ?
Isn't the default behaviour of paste exactly what you're asking for?
% paste <(echo "a
b
c
d") <(echo "1
2
3") <(echo "10
> 20
> 30
> 40
> 50
> 60")
a 1 10
b 2 20
c 3 30
d 40
50
60
%

How to create a flag of exclusion for duplicate rows in Oracle

Given below is a snapshot of my data:
Name  Age  Income Group
Asd   20   A
Asd   20   A
b     19   E
c     21   B
c     21   B
c     21   B
df    21   C
rd    24   D
I want to include a flag variable that is 1 for one of the duplicate rows and 0 for the others, and also 0 for the rest of the rows which are not duplicated. Given below is a snapshot of the final desired output:
Name  Age  Income Group  Flag
Asd   20   A             1
Asd   20   A             0
b     19   E             0
c     21   B             1
c     21   B             1
c     21   B             0
df    21   C             0
rd    24   D             0
Can anyone help me create this Flag variable in an Oracle database?
You can do this using analytic functions and case:
select t.*,
       (case when row_number() over (partition by name, age, income order by name) = 1
             then 0
             else 1
        end) as GroupFlag
from your_table t;
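Note that this flags 0 for the first row of each group and 1 for the rest, which is inverted relative to the desired output above (and the c group there is flagged 1, 1, 0, i.e. every duplicate except one gets the flag). If that output really is what's wanted, a variant that compares ROW_NUMBER against the group size should reproduce it; the column and table names (income_group, your_table) are assumptions here:

-- Sketch: flag = 1 for all but one row of each duplicate group, 0 otherwise.
SELECT t.*,
       CASE WHEN ROW_NUMBER() OVER (PARTITION BY name, age, income_group ORDER BY name)
               < COUNT(*)     OVER (PARTITION BY name, age, income_group)
            THEN 1
            ELSE 0
       END AS flag
FROM your_table t;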

Combine 2 files with AWK based on last columns [closed]

I have two files:
file1
-------------------------------
1 a t p b
2 b c f a
3 d y u b
2 b c f a
2 u g t c
2 b j h c
file2
--------------------------------
1 a b
2 p c
3 n a
4 4 a
I want to combine these 2 files based on their last columns (column 5 of file1 and column 3 of file2) using awk.
result
----------------------------------------------
1 a t p 1 a b
2 b c f 3 n a
2 b c f 4 4 a
3 d y u 1 a b
2 b c f 3 n a
2 b c f 4 4 a
2 u g t 2 p c
2 b j h 2 p c
At the very beginning I didn't see the duplicated "a" in file2; I thought it could be solved with normal array matching... now it works.
An awk one-liner:
awk 'NR==FNR{a[$3"_"NR]=$0;next;}{for(x in a){if(x~"^"$5) print $1,$2,$3,$4,a[x];}}' f2.txt f1.txt
test
kent$ head *.txt
==> f1.txt <==
1 a t p b
2 b c f a
3 d y u b
2 b c f a
2 u g t c
2 b j h c
==> f2.txt <==
1 a b
2 p c
3 n a
4 4 a
kent$ awk 'NR==FNR{a[$3"_"NR]=$0;next;}{for(x in a){if(x~"^"$5) print $1,$2,$3,$4,a[x];}}' f2.txt f1.txt
1 a t p 1 a b
2 b c f 3 n a
2 b c f 4 4 a
3 d y u 1 a b
2 b c f 3 n a
2 b c f 4 4 a
2 u g t 2 p c
2 b j h 2 p c
Note: the output format is not pretty, but it is acceptable if you pipe it to column -t.
Another way, assuming the files have no headers:
awk '
FNR == NR {
    # file2: index each line by its last field; duplicate keys are joined with SUBSEP
    f2[ $NF ] = f2[ $NF ] ? f2[ $NF ] SUBSEP $0 : $0;
    next;
}
FNR < NR {
    if ( $NF in f2 ) {
        n = split( f2[ $NF ], a, SUBSEP );
        line = $0;
        for ( i = 1; i <= n; i++ ) {
            $0 = line;          # restore the original file1 record
            $NF = a[ i ];       # replace its last field with the matching file2 line
            printf "%s\n", $0;  # one combined line per match
        }
    } else {
        printf "%s\n", $0;      # no match in file2: print the line as-is
    }
}
' file2 file1 | column -t
It yields:
1 a t p 1 a b
2 b c f 3 n a
2 b c f 4 4 a
3 d y u 1 a b
2 b c f 3 n a
2 b c f 4 4 a
2 u g t 2 p c
2 b j h 2 p c
A bit easier in a language that supports arbitrary data structures (lists of lists). Here's Ruby:
# read "file2" and group by the last field
file2 = File .foreach('file2') .map(&:split) .group_by {|fields| fields[-1]}
# process file1
File .foreach('file1') .map(&:split) .each do |fields|
file2[fields[-1]] .each do |fields2|
puts (fields[0..-2] + fields2).join(" ")
end
end
outputs
1 a t p 1 a b
2 b c f 3 n a
2 b c f 4 4 a
3 d y u 1 a b
2 b c f 3 n a
2 b c f 4 4 a
2 u g t 2 p c
2 b j h 2 p c