SQL - Select duplicates based on two columns in DB2

I am using DB2 and am trying to count duplicate rows in a table called ML_MEASURE. What I define as a duplicate in this table is a row containing the same DATETIME and TAG_NAME values. So I tried this:
SELECT
DATETIME,
TAG_NAME,
COUNT(*) AS DUPLICATES
FROM
ML_MEASURE
GROUP BY DATETIME, TAG_NAME
HAVING COUNT(*) > 1
The query doesn't fail, but I get an empty result, even though I know for a fact that I have at least one duplicate. When I tried the query below, I got the correct result for this specific TAG_NAME and DATETIME:
SELECT
DATETIME,
TAG_NAME,
COUNT(*) AS DUPLICATES
FROM
ML_MEASURE
WHERE
DATETIME='2018-03-23 15:09:30' AND
TAG_NAME='HOG.613KU201'
GROUP BY
DATETIME,
TAG_NAME
The result of the second query looked like this:
DATETIME TAG_NAME DUPLICATES
--------------------- ------------ ----------
2018-03-23 15:09:30.0 HOG.613KU201 3
What am I doing wrong in the first query?
* UPDATE *
My table is row-organized; not sure if that makes any difference.

Yes, you should get the same row back from the first query. If you had a NOT ENFORCED TRUSTED primary key or unique constraint on those two columns, then the optimizer would be within its rights to trust the constraint and return no rows. However, from a quick test, I don't believe it does that for this query.
Do you have any indexes defined on the table?
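To check for any such constraints or indexes on the table, one option is to query the catalog; a rough sketch (the schema name MYSCHEMA is a placeholder, and the exact SYSCAT columns can vary slightly between DB2 versions):
-- List constraints on the table, including whether they are enforced/trusted
SELECT CONSTNAME, TYPE, ENFORCED, TRUSTED
FROM SYSCAT.TABCONST
WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'ML_MEASURE';
-- List indexes on the table and their uniqueness rule
SELECT INDNAME, UNIQUERULE, COLNAMES
FROM SYSCAT.INDEXES
WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'ML_MEASURE';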
(P.S. I assume you are not running the query from a shell prompt and redirecting the output to a file named 1.)

This worked for me:
SELECT * FROM (
SELECT DATETIME, TAG_NAME, COUNT(*) AS DUPLICATES
FROM ML_MEASURE
GROUP BY DATETIME, TAG_NAME
) WHERE DUPLICATES > 1

Related

How to group by one column and limit to rows where another column has the same value for all rows in group?

I have a table like this
CREATE TABLE userinteractions
(
userid bigint,
dobyr int,
-- lots more fields that are not relevant to the question
);
My problem is that some of the data is polluted with multiple dobyr values for the same user.
The table is used as the basis for further processing by creating a new table. These cases need to be removed from the pipeline.
I want to be able to create a clean table that contains unique userid and dobyr limited to the cases where there is only one value of dobyr for the userid in userinteractions.
For example I start with data like this:
userid,dobyr
1,1995
1,1995
2,1999
3,1990 # dobyr values not equal
3,1999 # dobyr values not equal
4,1989
4,1989
And I want to select from this to get a table like this:
userid,dobyr
1,1995
2,1999
4,1989
Is there an elegant, efficient way to get this in a single sql query?
I am using postgres.
EDIT: I do not have permissions to modify the userinteractions table, so I need a SELECT solution, not a DELETE solution.
Clarified requirements: your aim is to generate a new, cleaned-up version of an existing table, and the clean-up means:
If there are many rows with the same userid value and also the same dobyr value, one of them is kept (it doesn't matter which one) and the rest are discarded.
All rows for a given userid are discarded if it occurs with different dobyr values.
create table userinteractions_clean as
select distinct on (userid,dobyr) *
from userinteractions
where userid in (
select userid
from userinteractions
group by userid
having count(distinct dobyr)=1 )
order by userid,dobyr;
This could also be done with NOT IN, NOT EXISTS or EXISTS conditions (see the sketch after the demo link below). Also, you can select which row to keep by adding columns at the end of the ORDER BY.
Updated demo with tests and more rows.
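For example, a NOT EXISTS version of the same filter might look like this (a sketch only, not part of the linked demo):
select distinct on (userid, dobyr) *
from userinteractions u
where not exists (
    select 1
    from userinteractions x
    where x.userid = u.userid
      and x.dobyr is distinct from u.dobyr )  -- any conflicting dobyr disqualifies the userid
order by userid, dobyr;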
If you don't need the other columns in the table and only want something you'll later use as a filter/whitelist, the plain userids from records whose (userid, dobyr) pairs match your criteria are enough, as they already uniquely identify those records:
create table userinteractions_whitelist as
select userid
from userinteractions
group by userid
having count(distinct dobyr)=1
Just use a HAVING clause to assert that all rows in a group must have the same dobyr.
SELECT
userid,
MAX(dobyr) AS dobyr
FROM
userinteractions
GROUP BY
userid
HAVING
COUNT(DISTINCT dobyr) = 1

Get unique records from table avoiding all duplicates based on two key columns

I have a table trialtb with columns p_id, t_number, and rundate.
Sample values:
p_id|t_number|rundate
=====================
111 |333     |1/7/2016
111 |333     |1/1/2016
222 |888     |1/8/2016
222 |444     |1/2/2016
666 |888     |1/6/2016
555 |777     |1/5/2016
p_id and t_number are key columns. I need to fetch values such that the result does not contain any record in which the p_id-t_number combination is duplicated. For example, there is duplication for 111|333, so those rows are not valid; the query should fetch everything except the first two records.
I wrote the script below, but it fetches only the last record. :(
select rundate,p_id,t_number from
(
select rundate,p_id,t_number,
count(p_id) over (partition by p_id) PCnt,
count(t_number) over (partition by t_number) TCnt
from trialtb
)a
where a.PCnt=1 and a.TCnt=1
The having clause is ideal for this job. Having allows you to filter on aggregated records.
-- Finding unique combinations.
SELECT
p_id,
t_number
FROM
trialtb
GROUP BY
p_id,
t_number
HAVING
COUNT(*) = 1
;
This query returns combinations of p_id and t_number that occur only once.
If you want to include rundate, you could add MAX(rundate) AS rundate to the select clause. Because you are only looking at unique occurrences, the MAX or MIN would always be the same.
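For completeness, the OP's windowed approach would also have worked had the count been partitioned by both key columns together rather than separately; a sketch:
select rundate, p_id, t_number
from (
    select rundate, p_id, t_number,
           count(*) over (partition by p_id, t_number) as paircnt  -- counts each p_id/t_number pair
    from trialtb
) a
where a.paircnt = 1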
Do you mean:
select
p_id,t_number
from
trialtb
group by
p_id,t_number
having
count(*) = 1
or do you need the run date too?
select
p_id,t_number,max(rundate)
from
trialtb
group by
p_id,t_number
having
count(*) = 1
Seeing as you are only looking at items with one result, using MAX or MIN should work fine.

SQL Server Sum multiple rows into one - no temp table

I would like to see the most concise way to do what is outlined in this SO question: Sum values from multiple rows into one row
that is, combine multiple rows while summing a column.
But how do I then delete the duplicates? In other words, I have data like this:
Person Value
--------------
1 10
1 20
2 15
And I want to sum the values for any duplicates (on the Person col) into a single row and get rid of the other duplicates on the Person value. So my output would be:
Person Value
-------------
1 30
2 15
And I would like to do this without using a temp table. I think I'll need to use OVER (PARTITION BY ...), but I'm just not sure. I'm trying to challenge myself by not doing it the temp-table way. I'm working with SQL Server 2008 R2.
Simply put, give me a concise statement getting from my input to my output in the same table. So if my table name is People: if I do a SELECT * FROM People before the operation I am asking about in this question, I get the first set above, and when I do a SELECT * FROM People after the operation, I get the second set of data above.
Not sure why you're not using a temp table, but here's one way to avoid it (though IMHO this is overkill):
UPDATE MyTable SET VALUE = (SELECT SUM(Value) FROM MyTable MT WHERE MT.Person = MyTable.Person);
WITH DUP_TABLE AS
(SELECT ROW_NUMBER()
OVER (PARTITION BY Person ORDER BY Person) As ROW_NO
FROM MyTable)
DELETE FROM DUP_TABLE WHERE ROW_NO > 1;
The first query updates every duplicate person's rows to the summary value. The second query then removes the now-redundant duplicate rows.
Demo: http://sqlfiddle.com/#!3/db7aa/11
All you're asking for is a simple SUM() aggregate function and a GROUP BY
SELECT Person, SUM(Value)
FROM myTable
GROUP BY Person
SUM() by itself would sum all the values in a column, but when you add a second column and GROUP BY it, SQL returns the distinct values of that second column and computes the aggregate within each of those groups.

Higher Query result with the DISTINCT Keyword?

Say I have a table with 100,000 User IDs (UserID is an int).
When I run a query like
SELECT COUNT(Distinct User ID) from tableUserID
the result I get is HIGHER than the result from the following statement:
SELECT COUNT(User ID) from tableUserID
I thought Distinct implied unique, which would mean a lower result. What would cause this discrepancy and how would I identify those user IDs that don't show up in the 2nd query?
Thanks
** UPDATE - 11:14 am EST **
Hi All
I sincerely apologize as I should've taken the trouble to reproduce this in my local environment. But I just wanted to see if there was a general consensus about this. Here are the full details:
The query is a result of an inner join between 2 tables.
One has this information:
TABLE ACTIVITY (NO PRIMARY KEY)
UserID int (not Nullable)
JoinDate datetime
Status tinyint
LeaveDate datetime
SentAutoMessage tinyint
SectionDetails varchar
And here is the second table:
TABLE USER_INFO (CLUSTERED PRIMARY KEY)
UserID int (not Nullable)
UserName varchar
UserActive int
CreatedOn datetime
DisabledOn datetime
The tables are joined on UserID and the UserID being selected in the original 2 queries is the one from the TABLE ACTIVITY.
Hope this clarifies the question.
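So the two counts presumably come from something along these lines (a sketch of the described join, reconstructed from the details above; not the OP's literal queries):
-- Reported as the higher result
SELECT COUNT(DISTINCT a.UserID)
FROM ACTIVITY a
INNER JOIN USER_INFO u ON u.UserID = a.UserID;
-- Reported as the lower result
SELECT COUNT(a.UserID)
FROM ACTIVITY a
INNER JOIN USER_INFO u ON u.UserID = a.UserID;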
This is not technically an answer, but since I took the time to analyze this, I might as well post it (although I run the risk of being downvoted).
There was no way I could reproduce the described behavior.
This is the scenario:
create table #table ([user id] int)
insert into #table values
(1),(1),(1),(1),(1),(1),(1),(2),(2),(2),(2),(2),(2),(null),(null)
And here are some queries and their results:
SELECT COUNT(User ID) FROM #table --error: this does not run
SELECT COUNT(distinct User ID) FROM #table --error: this does not run
SELECT COUNT([User ID]) FROM #table --result: 13 (nulls not counted)
SELECT COUNT(distinct [User ID]) FROM #table --result: 2 (nulls not counted)
And something interesting:
SELECT user --result: 'dbo' in my sandbox DB
SELECT count(user) from #table --result: 15 (nulls are counted because user value is not null)
SELECT count(distinct user) from #table --result: 1 (user is the same value always)
I find it very odd that you are able to run the queries exactly how you described. You'd have to let us know the table structure and the data to get further help.
how would I identify those user IDs that don't show up in the 2nd query
Try this query
SELECT UserID from tableUserID Where UserID not in (SELECT Distinct UserID from tableUserID)
I think there will be no rows.
Edit:
User is a reserved keyword. Do you mean UserID in your queries?
Ray: Yes
I tried to reproduce the problem in my environment, and my conclusion is that, given the conditions you described, the result of the first query cannot be higher than that of the second one. Even if there were NULLs, that just won't happen.
Did you run the query @Jean-Charles suggested?
I'm very intrigued with this, please let us know what turns out to be the problem.

How to delete duplicate rows with SQL?

I have a table with some rows in it. Every row has a date field. Right now, there may be duplicates of a date. I need to delete all the duplicates and keep only the row with the highest id. How is this possible using an SQL query?
Now:
date id
'07/07' 1
'07/07' 2
'07/07' 3
'07/05' 4
'07/05' 5
What I want:
date id
'07/07' 3
'07/05' 5
DELETE FROM table WHERE id NOT IN
(SELECT MAX(id) FROM table GROUP BY date);
I don't have comment rights, so here's my comment as an answer in case anyone comes across the same problem:
In SQLite3, there is an implicit numerical primary key called "rowid", so the same query would look like this:
DELETE FROM table WHERE rowid NOT IN
(SELECT MAX(rowid) FROM table GROUP BY date);
This will work with any table, even if it does not contain a primary key column called "id".
For MySQL, PostgreSQL, and Oracle, a better way is a self join.
Postgresql:
DELETE FROM table t1 USING table t2 WHERE t1.date=t2.date AND t1.id<t2.id;
MySQL
DELETE FROM table
USING table, table as vtable
WHERE (table.id < vtable.id)
AND (table.date=vtable.date)
SQL aggregate functions (MAX with GROUP BY) are almost always very slow.
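On engines that support window functions and allow a delete to reference the same table in a subquery (PostgreSQL or SQL Server, for example), a ROW_NUMBER-based variant is another common pattern; a rough sketch, with mytable as a placeholder name, that keeps the highest id per date:
DELETE FROM mytable
WHERE id IN (
    SELECT id
    FROM (
        SELECT id,
               ROW_NUMBER() OVER (PARTITION BY date ORDER BY id DESC) AS rn
        FROM mytable
    ) ranked
    WHERE rn > 1  -- every row after the highest id within each date
);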