Distinct Values Ignoring Column Order - sql

I have a table similar to:
+----+---+---+
| Id | A | B |
+----+---+---+
|  1 | 1 | 2 |
+----+---+---+
|  2 | 2 | 1 |
+----+---+---+
|  3 | 3 | 4 |
+----+---+---+
|  4 | 0 | 5 |
+----+---+---+
|  5 | 5 | 0 |
+----+---+---+
I want to remove all duplicate pairs of values, regardless of which column contains which value. E.g. after running whatever the query might be, I want to see:
+----+---+---+
| Id | A | B |
+----+---+---+
|  1 | 1 | 2 |
+----+---+---+
|  3 | 3 | 4 |
+----+---+---+
|  4 | 0 | 5 |
+----+---+---+
I'd like to find a solution in Microsoft SQL Server (has to work in <= 2005, though I'd be interested in any solutions which rely upon >= 2008 features regardless).
In addition, note that A and B are going to be in the range 1-100 (but that's not guaranteed forever; they are surrogate seeded integer foreign keys, and the foreign table might grow to a couple hundred rows at most).
I'm wondering whether I'm missing some obvious solution here. The ones which have occurred to me all seem rather overwrought, though I do think they'd probably work, e.g.:
Have a subquery return a bitfield with each bit corresponding to one of the ids and use this value to remove duplicates.
Somehow, pivot, remove duplicates, then unpivot. Likely to be tricky.
Thanks in advance!

Test data and sample below.
Basically, we do a self join with an OR criteria so either a=a and b=b OR a=b and b=a.
The WHERE in the subquery picks the higher id of each duplicate pair, which is the one to eliminate.
I think this should work for triplicates as well (note I added a 6th row).
DECLARE @t table (id int, a int, b int)

-- Note: multi-row VALUES requires SQL Server 2008+;
-- on 2005, use one INSERT statement per row.
INSERT INTO @t
VALUES
    (1,1,2),
    (2,2,1),
    (3,3,4),
    (4,0,5),
    (5,5,0),
    (6,5,0)

SELECT *
FROM @t
WHERE id NOT IN (
    SELECT a.id
    FROM @t a
    INNER JOIN @t b
        ON (a.a = b.a AND a.b = b.b)
        OR (a.b = b.a AND a.a = b.b)
    WHERE a.id > b.id)
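For the sample data above (including the extra sixth row), rows 2, 5, and 6 each repeat an earlier pair in some order, so the result should be:

id | a | b
---+---+---
 1 | 1 | 2
 3 | 3 | 4
 4 | 0 | 5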

Try:
select min(Id) Id, A, B
from (select Id, A, B from DuplicatesTable where A <= B
      union all
      select Id, B A, A B from DuplicatesTable where A > B) v
group by A, B
order by 1
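To see why this works: the inner query v rewrites every row so the smaller value is always in A, turning swapped duplicates into exact duplicates. For the sample data, v should contain:

Id | A | B
---+---+---
 1 | 1 | 2
 2 | 1 | 2   (row 2, swapped)
 3 | 3 | 4
 4 | 0 | 5
 5 | 0 | 5   (row 5, swapped)

GROUP BY A, B with MIN(Id) then keeps ids 1, 3, and 4.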

Not 100% tested, and I'm sure it can be tidied up, but it produces your required result:
DECLARE @T TABLE (id INT IDENTITY(1,1), A INT, B INT)

INSERT INTO @T
VALUES (1,2), (2,1), (3,4), (0,5), (5,0);

SELECT *
FROM @T
WHERE id IN (SELECT DISTINCT MIN(id)
             FROM (SELECT id, a, b
                   FROM @T
                   UNION ALL
                   SELECT id, b, a
                   FROM @T) z
             GROUP BY a, b)

Comparing different columns in SQL for each row

After some transformation, I have a result from a cross join (of tables a and b) that I want to do some analysis on. The table for this looks like this:
+-----+------+------+------+------+-----+------+------+------+------+
| id  | 10_1 | 10_2 | 11_1 | 11_2 | id  | 10_1 | 10_2 | 11_1 | 11_2 |
+-----+------+------+------+------+-----+------+------+------+------+
| 111 |    1 |    0 |    1 |    0 | 222 |    1 |    0 |    1 |    0 |
| 111 |    1 |    0 |    1 |    0 | 333 |    0 |    0 |    0 |    0 |
| 111 |    1 |    0 |    1 |    0 | 444 |    1 |    0 |    1 |    1 |
| 112 |    0 |    1 |    1 |    0 | 222 |    1 |    0 |    1 |    0 |
+-----+------+------+------+------+-----+------+------+------+------+
The ids in the first column are different from the ids in the sixth column.
Each row pairs two different IDs with each other. The other columns always have either 0 or 1 as a value.
I am now trying to find out how many values (meaning both have "1" in 10_1, 10_2, etc.) two IDs have in common on average, but I don't really know how to do so.
I was trying something like this as a start:
SELECT SUM(CASE WHEN a.10_1 = 1 AND b.10_1 = 1 then 1 end)
But this would obviously only count how often two IDs have 10_1 in common. I could write something like this for the different columns:
SELECT SUM(CASE WHEN (a.10_1 = 1 AND b.10_1 = 1)
OR (a.10_2 = 1 AND b.10_1 = 1) OR [...] then 1 end)
To count in general how often two IDs have at least one thing in common, but this would of course also count pairs that have two or more things in common. Plus, I would also like to know how often two IDs have two things, three things, etc. in common.
One "problem" in my case is also that I have ~30 columns I want to look at, so I can hardly write down every possible combination for each case.
Does anyone know how I can approach my problem in a better way?
Thanks in advance.
Edit:
A possible result could look like this:
+-----------+-------+
| in_common | count |
+-----------+-------+
|         0 |   100 |
|         1 |   500 |
|         2 |  1500 |
|         3 |  5000 |
|         4 |  3000 |
+-----------+-------+
With the codes as column names, you're going to have to write some code that explicitly references each column name. To keep that to a minimum, you could write those references in a single union statement that normalizes the data, such as:
-- "src" here stands for your original wide table
select id, '10_1' as code from src where "10_1" = 1
union
select id, '10_2' as code from src where "10_2" = 1
union
select id, '11_1' as code from src where "11_1" = 1
union
select id, '11_2' as code from src where "11_2" = 1;
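For the first sample row (id 111, which has 10_1 = 1 and 11_1 = 1 set), this normalization should emit one row per set flag, along the lines of:

 id  | code
-----+------
 111 | 10_1
 111 | 11_1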
This needs to be modified to include whatever additional columns you need to link up different IDs. For the purpose of this illustration, I assume the following data model
create table p (
    id integer not null primary key,
    sex character(1) not null,
    age integer not null
);

create table t1 (
    id integer not null,
    code character varying(4) not null,
    constraint pk_t1 primary key (id, code)
);
Though your data evidently does not currently resemble this structure, normalizing your data into a form like this would allow you to apply the following solution to summarize your data in the desired form.
select
    in_common,
    count(*) as count
from (
    select
        count(*) as in_common
    from (
        select
            a.id as a_id, a.code,
            b.id as b_id, b.code
        from
            (select p.*, t1.code
             from p left join t1 on p.id = t1.id) as a
            inner join
            (select p.*, t1.code
             from p left join t1 on p.id = t1.id) as b
                on b.sex <> a.sex and b.age between a.age - 10 and a.age + 10
        where
            a.id < b.id
            and a.code = b.code
    ) as c
    group by
        a_id, b_id
) as summ
group by
    in_common;
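The question asks for an average rather than a distribution. As a sketch (dist is a hypothetical name for the distribution computed above, e.g. materialized in a CTE with columns in_common and cnt), a weighted mean gives it:

-- dist(in_common, cnt) is assumed to hold the distribution from above
select sum(in_common * cnt) * 1.0 / sum(cnt) as avg_in_common
from dist;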
This solution first takes one step back from the cross-join table, as the identical column names are super annoying. Instead, we take the ids from the two tables and put them in a temporary table. The following query gets the result wanted in the question. It assumes table_a and table_b from the question are the same table, called tbl, but this assumption is not needed: tbl can be replaced by table_a and table_b in the two sub-SELECT queries. It looks complicated and uses a JSON trick to flatten the columns, but it works:
WITH idtable AS (
    SELECT a.id AS id_1, b.id AS id_2
    FROM -- put the cross join of table a and table b here
)
SELECT in_common,
       count(*)
FROM (SELECT idtable.*,
             sum(CASE
                     WHEN meltedR.value::text = meltedL.value::text THEN 1
                     ELSE 0
                 END) AS in_common
      FROM idtable
      JOIN (SELECT tbl.id, b.*
            FROM tbl,                          -- change here to table_a
                 json_each(row_to_json(tbl)) b -- and here too
            WHERE key <> 'id') meltedL
          ON (idtable.id_1 = meltedL.id)
      JOIN (SELECT tbl.id, b.*
            FROM tbl,                          -- change here to table_b
                 json_each(row_to_json(tbl)) b -- and here too
            WHERE key <> 'id') meltedR
          ON (idtable.id_2 = meltedR.id
              AND meltedL.key = meltedR.key)
      GROUP BY idtable.id_1, idtable.id_2) tt
GROUP BY in_common
ORDER BY in_common;
The output here looks like this:
in_common | count
-----------+-------
2 | 2
3 | 1
4 | 1
(3 rows)
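The core of the JSON trick: row_to_json converts a row to a JSON object, and json_each unnests it into key/value pairs, so every column becomes a row that can be joined on column name. For a row (id=111, "10_1"=1, "10_2"=0), the melted subquery should yield something like:

 id  | key  | value
-----+------+-------
 111 | 10_1 | 1
 111 | 10_2 | 0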

Group by 3 columns: "Each group by expression must contain at least one column that is not an outer reference"

I know questions regarding this error message have been asked already, but I couldn't find any that really fit my problem.
I have a table with three columns (A,B,C) containing different values, and I need to identify all the identical combinations. For example, out of "TABLE A" below:
| A | B | C |
| 1 | 2 | 3 |
| 1 | 3 | 3 |
| 1 | 2 | 3 |
| 2 | 2 | 2 |
| 1 | 3 | 3 |
... I would like to get "TABLE B" below:
| A | B | C | count |
| 1 | 2 | 3 |     1 |
| 1 | 3 | 3 |     1 |
| 2 | 2 | 2 |     1 |
(I need the last column "count" with 1 in each row for later usage)
When I try with "group by A,B,C" I get the error mentioned in the title. Any help would be greatly appreciated!
FYI, I don't think it really changes the matter, but "TABLE A" is obtained from another table, "SOURCE_TABLE", via a query of the type:
select (case when ... ),(case when ...),(case when ...) from SOURCE_TABLE
and I need to build "TABLE B" with only one query.
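For context, SQL Server raises this particular error when every expression in the GROUP BY list is a constant or refers only to an outer query, which can happen if the CASE expressions reduce to literals. A minimal sketch that reproduces it:

-- fails with "Each GROUP BY expression must contain at least one column
-- that is not an outer reference"
SELECT COUNT(*) FROM SOURCE_TABLE GROUP BY 'x'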
I think what you are after is using DISTINCT:
select distinct A, B, C, 1 [count] -- where 1 is a static value for later use
from (select ... from sourcetable) X
Sounds like you have the right idea. My guess is that the error is occurring due to an outer reference in your CASE statements. If you wrapped your first query in another query, it may alleviate this issue. Try:
SELECT A, B, C, COUNT(*) AS [UniqueRowCount]
FROM (
    SELECT (case when ... ) AS A, (case when ...) AS B, (case when ...) AS C
    FROM SOURCE_TABLE
) AS Subquery
GROUP BY A, B, C
After re-reading your question, it seems that you're not counting at all, just putting a "1" after each distinct row. If that's the case, then you can try:
SELECT DISTINCT A, B, C, [Count]
FROM (
SELECT (case when ... ) AS A, (case when ...) AS B, (case when ...) AS C, 1 AS [Count] FROM SOURCE_TABLE
) AS Subquery
Assuming your outer reference exceptions were occurring in only your aggregations, you should also simply try:
SELECT DISTINCT (case when ... ) AS A, (case when ...) AS B, (case when ...) AS C, 1 AS [Count] FROM SOURCE_TABLE

Trying to select multiple columns where one is unique

I am trying to select several columns from a table where one of the columns is unique. The select statement looks something like this:
select a, distinct b, c, d
from mytable
The table looks something like this:
| a | b | c | d | e | ...
|---|---|---|---|---|
| 1 | 2 | 3 | 4 | 5 |
| 1 | 2 | 3 | 4 | 6 |
| 2 | 5 | 7 | 1 | 9 |
| 7 | 3 | 8 | 6 | 4 |
| 7 | 3 | 8 | 6 | 7 |
So the query should return something like this:
| a | b | c | d |
|---|---|---|---|
| 1 | 2 | 3 | 4 |
| 2 | 5 | 7 | 1 |
| 7 | 3 | 8 | 6 |
I just want to remove all of the rows where b is duplicated.
EDIT: There seems to be some confusion about which row I want to be selected in the case of duplicate b values. I don't care because the a, c, and d should (but are not guaranteed to) be the same.
Try this
SELECT *
FROM (SELECT ROW_NUMBER() OVER (PARTITION BY b ORDER BY a) NO, *
      FROM TableName) AS T1
WHERE NO = 1
I think you are nearly there with DISTINCT; try:
SELECT DISTINCT a, b, c, d
FROM myTable
You haven't said how to pick a row for each b value, but this will pick one for each.
Select a, b, c, d, e
From (
    Select a, b, c, d, e,
           row_number() over (partition by b order by b) rn
    From mytable
) x
Where x.rn = 1
If you don't care what values you get for B, C, D, and E, as long as they're appropriate for that key, you can group by A:
SELECT A, MIN(B), MIN(C), MIN(D), MIN(E)
FROM MyTable
GROUP BY A
Note that MAX() would be just as valid. Some RDBMSs support a FIRST() aggregate, or similar, for exactly these circumstances where you don't care which value you get (from a certain population).
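Applied to the sample table, this should return one row per value of a, with MIN picking the smaller e where the duplicated rows differ:

| a | b | c | d | e |
|---|---|---|---|---|
| 1 | 2 | 3 | 4 | 5 |
| 2 | 5 | 7 | 1 | 9 |
| 7 | 3 | 8 | 6 | 4 |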
This will return what you're looking for, but I think your example is flawed because you have no determinism over which value from the e column is returned.
Create Table A1 (a int, b int, c int, d int, e int)
INSERT INTO A1 (a,b,c,d,e) VALUES (1,2,3,4,5)
INSERT INTO A1 (a,b,c,d,e) VALUES (1,2,3,4,6)
INSERT INTO A1 (a,b,c,d,e) VALUES (2,5,7,1,9)
INSERT INTO A1 (a,b,c,d,e) VALUES (7,3,8,6,4)
INSERT INTO A1 (a,b,c,d,e) VALUES (7,3,8,6,7)
SELECT * FROM A1
SELECT a, b, c, d
FROM (
    SELECT ROW_NUMBER() OVER (PARTITION BY b ORDER BY a) RowNum, *
    FROM A1
) As InnerQuery
WHERE RowNum = 1
You cannot put DISTINCT on a single column. You should put it right after the SELECT:
SELECT DISTINCT a, b, c, d
FROM mytable
It returns the result you need for your sample table. However, if you require removing duplicates from only a single column (which is not possible), you have probably misunderstood something. Give us more description and a sample, and we will try to guide you in the right direction.

How to merge MySQL queries with different column counts?

Definitions:
In the results, * denotes an empty column
The data in the tables is such that every field in the table has the value Fieldname + RowCount (so column 'a' in row 1 contains the value 'a1').
2 MySQL Tables
Table1
Fieldnames: a,b,c,d
Table2
Fieldnames: e,f,g,h,i,j
Task:
I want to get the first 4 rows from each of the tables.
Standalone Queries
SELECT Table1.* FROM Table1 WHERE 1 LIMIT 0,4 -- Colcount 4
SELECT Table2.* FROM Table2 WHERE 1 LIMIT 0,4 -- Colcount 6
A simple UNION of the queries fails because the two parts have different column counts.
Version 1: add two empty fields to the first query
SELECT Table1.*,'' AS i,'' AS j FROM Table1 WHERE 1 LIMIT 0,4
UNION
SELECT Table2.* FROM Table2 WHERE 1 LIMIT 0,4
So I will get the following fields in the result set:
a,b,c,d,i,j
a1,b1,c1,d1,*,*,
a2,b2,c2,d2,*,*,
....
....
e1,f1,g1,h1,i1,j1
e2,f2,g2,h2,i2,j2
The problem is that the field names of Table2 are overridden by Table1.
Version 2: shift columns by using empty fields:
SELECT Table1.*,'','','','','','' FROM Table1 WHERE 1 LIMIT 0,4
UNION
SELECT '','','','',Table2.* FROM Table2 WHERE 1 LIMIT 0,4
So I will get the following fields in the result set:
a,b,c,d,i,j
a1,b1,c1,d1,*,*,*,*,*,*,
a2,b2,c2,d2,*,*,*,*,*,*,
....
....
*,*,*,*,e1,f1,g1,h1,i1,j1
*,*,*,*,e2,f2,g2,h2,i2,j2
....
....
Problem is solved but I get many empty fields.
Is there a known performance issue?
How do you solve this task?
Is there a best practice to solve this issue?
The output from a query should be a table, which is a set of rows, each row with the same set of column names and types. (There are some DBMS that support ragged rows - with different sets of columns, but that is not a mainstream feature.)
You have to decide how to handle two sets of four rows with different sets of columns in the two sets.
The simplest option, usually, is to do the two standalone queries. The two result sets are not comparable, and should not be conflated.
If you choose your Version 1, then you should decide which set of column names is appropriate, or create a composite set of names using 'AS x' column aliases.
If you choose your Version 2, then you should probably name the trailing columns of the first clause of the UNION; at the moment, they all have no name:
SELECT Table1.*, '' AS e, '' AS f, '' AS g, '' AS h, '' AS i, '' AS j
FROM Table1 WHERE 1 LIMIT 0,4
UNION
SELECT '' AS a, '' AS b, '' AS c, '' AS d, Table2.*
FROM Table2 WHERE 1 LIMIT 0,4
(The AS clauses in the second SELECT are redundant, but self-consistent; the two halves of the UNION have the same column headings explicitly.)
Except that you have provided empty strings instead of NULL, the notation you have chosen corresponds to an 'OUTER UNION'. You can find occasional references to it in selected parts of the literature (E F Codd in the RM/V2 book; C J Date in critiques of all things OUTER). SQL 1999 provided it as a UNION JOIN; SQL 2003 removed UNION JOIN (that's pretty unusual - and damning of the feature).
I'd use two separate queries.
The thing that seems most sensible is your "version 2", except using NULLs instead of empty strings.
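A sketch of that variant (Version 2 with NULL padding; the parentheses keep each LIMIT scoped to its own SELECT, and UNION ALL skips the pointless duplicate check, since the padded rows can never collide):

(SELECT a, b, c, d,
        NULL AS e, NULL AS f, NULL AS g, NULL AS h, NULL AS i, NULL AS j
 FROM Table1 LIMIT 0,4)
UNION ALL
(SELECT NULL, NULL, NULL, NULL, e, f, g, h, i, j
 FROM Table2 LIMIT 0,4)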
This took some thinking, and then some MySQL-specific workarounds. The concept is this: a join will produce the table structure you want. What you really want is a full outer join where no row 'matches.' To do this, we need a reliable way to ensure that rows don't match, and then we UNION a LEFT JOIN with a RIGHT JOIN to overcome MySQL's lack of FULL OUTER JOIN.
MySQL 5.6 Schema Setup:
CREATE TABLE A (a int, b int, c int, d int);
CREATE TABLE B (e int, f int, g int, h int, i int, j int);
INSERT INTO A VALUES (1,1,1,1),(2,2,2,2);
INSERT INTO B VALUES (8,8,8,8,8,8),(9,9,9,9,9,9);
Query 1:
SELECT *
FROM (SELECT * FROM (SELECT "TableA" as unique_field) as Ax CROSS JOIN A) as A
LEFT JOIN (SELECT * FROM (SELECT "TableB" as unique_field) as Bx CROSS JOIN B) AS B
    ON A.unique_field = B.unique_field
UNION
SELECT *
FROM (SELECT * FROM (SELECT "TableA" as unique_field) as Ax CROSS JOIN A) as A
RIGHT JOIN (SELECT * FROM (SELECT "TableB" as unique_field) as Bx CROSS JOIN B) AS B
    ON A.unique_field = B.unique_field
Results:
| unique_field | a | b | c | d | unique_field | e | f | g | h | i | j |
|--------------|--------|--------|--------|--------|--------------|--------|--------|--------|--------|--------|--------|
| TableA | 1 | 1 | 1 | 1 | (null) | (null) | (null) | (null) | (null) | (null) | (null) |
| TableA | 2 | 2 | 2 | 2 | (null) | (null) | (null) | (null) | (null) | (null) | (null) |
| (null) | (null) | (null) | (null) | (null) | TableB | 8 | 8 | 8 | 8 | 8 | 8 |
| (null) | (null) | (null) | (null) | (null) | TableB | 9 | 9 | 9 | 9 | 9 | 9 |
This syntax: (SELECT * FROM (SELECT 1 as unique_field) as Ax CROSS JOIN A) as A is more easily understood as (SELECT 1 as unique_field, * FROM A) AS A, but MySQL doesn't allow a * to follow a field specification.

SQL - min() gets the lowest value, max() the highest, what if I want the 2nd (or 5th or nth) lowest value?

The problem I'm trying to solve is that I have a table like this:
a and b refer to points in a different table. distance is the distance between the points.
| id | a_id | b_id | distance | delete |
|  1 |    1 |    1 |        1 |      0 |
|  2 |    1 |    2 |   0.2345 |      0 |
|  3 |    1 |    3 |      100 |      0 |
|  4 |    2 |    1 |   1343.2 |      0 |
|  5 |    2 |    2 |     0.45 |      0 |
|  6 |    2 |    3 |      110 |      0 |
....
The important column I'm looking at is a_id. If I wanted to keep the closest b for each a, I could do something like this:
update mytable set delete = 1 from (select a_id, min(distance) as dist from mytable group by a_id) as x where mytable.a_id = x.a_id and distance > dist;
delete from mytable where delete = 1;
Which would give me a result table like this:
| id | a_id | b_id | distance | delete |
|  1 |    1 |    1 |        1 |      0 |
|  5 |    2 |    2 |     0.45 |      0 |
....
i.e. I need one row for each value of a_id, and that row should have the lowest value of distance for each a_id.
However, I want to keep the 10 closest points for each a_id. I could do this with a plpgsql function, but I'm curious if there is a more SQL-y way.
min() and max() return the smallest and largest values; if there were an aggregate function like nth(), which returned the nth smallest/largest value, then I could do this in a similar manner to the above.
I'm using PostgreSQL.
Try this:
SELECT *
FROM (
    SELECT a_id, (
        SELECT b_id
        FROM mytable mib
        WHERE mib.a_id = ma.a_id
        ORDER BY distance
        LIMIT 1 OFFSET s
    ) AS b_id
    FROM (
        SELECT DISTINCT a_id
        FROM mytable mia
    ) ma,
    generate_series(0, 9) s -- OFFSET is zero-based, so 0..9 gives the 10 closest
) ab
WHERE b_id IS NOT NULL
Checked on PostgreSQL 8.3
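(For context: 8.3 predates window functions, which arrived in PostgreSQL 8.4, so generate_series stands in for a per-group rank here. In outline: ma collects the distinct a_id values, generate_series supplies ranks 0-9, the correlated subquery fetches the b_id at each rank ordered by distance, and the outer WHERE drops the NULLs produced for a_ids with fewer than 10 points.)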
I love Postgres, so I took it as a challenge the second I saw this question.
So, for the table:
Table "pg_temp_29.foo"
Column | Type | Modifiers
--------+---------+-----------
value | integer |
With the values:
SELECT value FROM foo ORDER BY value;
value
-------
0
1
2
3
4
5
6
7
8
9
14
20
32
(13 rows)
You can do a:
SELECT value FROM foo ORDER BY value DESC LIMIT 1 OFFSET X
Where X = 0 for the highest value, 1 for the second highest, 2... And so forth.
This can be further embedded in a subquery to retrieve the value needed. So, to use the dataset provided in the original question we can get the a_ids with the top ten lowest distances by doing:
SELECT t1.a_id, t1.distance
FROM mytable t1
WHERE t1.id IN
    (SELECT t2.id FROM mytable t2
     WHERE t2.a_id = t1.a_id
     ORDER BY t2.distance LIMIT 10)
ORDER BY t1.a_id, t1.distance;
 a_id | distance
------+----------
    1 |   0.2345
    1 |        1
    1 |      100
    2 |     0.45
    2 |      110
    2 |   1343.2
Does PostgreSQL have the analytic function rank()? If so, try:
select a_id, b_id, distance
from
    (select a_id, b_id, distance,
            rank() over (partition by a_id order by distance) rnk
     from mytable) t -- PostgreSQL requires an alias on the subquery
where rnk <= 10;
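(It does: rank() and the other window functions have been available since PostgreSQL 8.4.)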
This SQL finds the Nth lowest value and should work in SQL Server, MySQL, DB2, Oracle, Teradata, and almost any other RDBMS (note: low performance because of the subquery):
SELECT * /* This is the outer query part */
FROM mytable tbl1
WHERE (N-1) = ( /* Subquery starts here */
    SELECT COUNT(DISTINCT(tbl2.distance))
    FROM mytable tbl2
    WHERE tbl2.distance < tbl1.distance)
The most important thing to understand in the query above is that the subquery is evaluated each and every time a row is processed by the outer query. In other words, the inner query cannot be processed independently of the outer query, since it uses the tbl1 value as well.
In order to find the Nth lowest value, we just find the value that has exactly N-1 values lower than itself.
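A quick worked check against the foo table from earlier, taking N = 3 (the third-lowest value is 2, and exactly N-1 = 2 distinct values are below it):

-- N = 3: should return the single row with value = 2,
-- because exactly two distinct values (0 and 1) are lower
SELECT *
FROM foo tbl1
WHERE 2 = (SELECT COUNT(DISTINCT tbl2.value)
           FROM foo tbl2
           WHERE tbl2.value < tbl1.value);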