Remove Common Data In Access Table - sql

I'm using an Access Database and have the following question:
How is it possible to remove entire rows based on certain columns?
For example I have the following table:
Id | Name | Desc    | Contact No. |
---+------+---------+-------------+
 1 | XYZ  | bla bla | 123456      |
 2 | ABC  | bla bla | 123456      |
 3 | TUV  | hmmmmm  | 123456      |
 4 | XYZ  | bla bla | 123456      |
I want to remove all rows of data which have duplicate information, so in this case rows 1 and 4. The only issue, of course, is the Id, which makes each row unique overall....
How would I go about removing this? Is there some SQL code that can handle this?

A self-join to detect the duplicates:
DELETE FROM [Table]
WHERE Id IN
(
    SELECT t1.Id
    FROM [Table] t1
    INNER JOIN [Table] t2
        ON t1.Name = t2.Name
        AND t1.[Desc] = t2.[Desc]
        AND t1.Contact = t2.Contact
        AND t1.Id <> t2.Id   -- must match a *different* row, or every row joins itself
)
If you want to keep one row from each set of duplicates, see this question: Deleting duplicate rows from a table
DELETE FROM [Table]
WHERE Id NOT IN
(
    SELECT MIN(Id)   -- keep the lowest Id of each group
    FROM [Table]
    GROUP BY Name, [Desc], Contact
)
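Whichever variant you choose, it can help to first preview which groups actually contain duplicates; a minimal sketch, using the Contact column name from the queries above:
SELECT Name, [Desc], Contact, COUNT(*) AS DupeCount
FROM [Table]
GROUP BY Name, [Desc], Contact
HAVING COUNT(*) > 1;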

If you need to delete all rows that have a duplicate, you could use this MS Access SQL query:
DELETE tbl.Id
FROM tbl
WHERE tbl.Id In (
    SELECT tbl.Id
    FROM (
        SELECT [Name], [Desc], [Contact]
        FROM tbl
        GROUP BY [Name], [Desc], [Contact]
        HAVING COUNT(*) > 1
    ) AS [t1]
    INNER JOIN tbl
        ON ([t1].[Contact] = tbl.[Contact])
        AND ([t1].[Desc] = tbl.[Desc])
        AND ([t1].[Name] = tbl.[Name])
);
To keep just the first row of each duplicate group you could use this:
DELETE tbl.Id
FROM tbl
WHERE tbl.Id Not In (
    SELECT Min(tbl.Id) AS MinId
    FROM tbl
    GROUP BY tbl.[Name], tbl.[Desc], tbl.[Contact]
);

Related

Compare columns from 2 different tables with only last inserted values in table_2 in SQL Server

If I have two different tables in a SQL Server 2019 database as follows:
Table1
| id  | name   |
+-----+--------+
| 1   | rose   |
| 2   | peter  |
| 3   | ann    |
| 4   | rose   |
| 5   | ann    |
Table2
| name2  |
+--------+
| rose   |
| ann    |
I would like to retrieve only the last two ids from table1 (in this case 4 and 5) that match name2 in table2. In other words, the match should happen only once, on the last added names in table1; furthermore, the ids (4, 5) are to be inserted into table2.
How can I do that using SQL?
Thank you
You can use row_number():
select name, id
from (
    select t.*, row_number() over (partition by t.name order by t.id desc) as rn
    from table1 t
    join table2 t2 on t.name = t2.name2
) a
where rn = 1
Your question is vague, so there could be many answers here. My first thought is that you simply want an inner join. This will fetch ONLY the data that both tables share.
SELECT Table1.*
FROM Table1
INNER JOIN Table2 on Table1.name = Table2.name2
You seem to be describing:
select . . . -- whatever columns you want
from (select top (2) t1.*
      from table1 t1
      order by t1.id desc
     ) t1 join
     table2 t2
     on t2.name2 = t1.name;
This doesn't seem particularly useful for the data you have provided, but it does what you describe.
EDIT:
If you want only the most recent rows that match, use row_number():
select . . . -- whatever columns you want
from (select t1.*,
             row_number() over (partition by name order by id desc) as seqnum
      from table1 t1
     ) t1 join
     table2 t2
     on t2.name2 = t1.name and t1.seqnum = 1;

What is the correct way, from a performance perspective, to match (replace) every value in every row in a temp table using SQL Server 2016 or 2017?

I am wondering what I should use in SQL Server 2016 or 2017 (CTE, loop, joins, cursor, REPLACE, etc.) to match (replace) every value in every row in a temp table. What is the best solution from a performance perspective?
Source Table
| id | id2 |
+----+-----+
| 1  | 2   |
| 2  | 1   |
| 1  | 1   |
| 2  | 2   |
Mapping Table
| id | newid |
+----+-------+
| 1  | 3     |
| 2  | 4     |
Expected result
| id | id2 |
+----+-----+
| 3  | 4   |
| 4  | 3   |
| 3  | 3   |
| 4  | 4   |
You may join the second table to the first table twice:
WITH cte AS (
    SELECT
        t1.id AS id_old,
        t1.id2 AS id2_old,
        t2a.newid AS id_new,
        t2b.newid AS id2_new
    FROM table1 t1
    LEFT JOIN table2 t2a
        ON t1.id = t2a.id
    LEFT JOIN table2 t2b
        ON t1.id2 = t2b.id
)
UPDATE cte
SET
    id_old = id_new,
    id2_old = id2_new;
Not sure if you want just a select here, or maybe an update, or an insert into another table. In any case, the core logic I gave above should work for all these cases.
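For example, if a plain select is enough, the same two joins can simply project the mapped values; a sketch reusing the table1/table2 names from the CTE above:
SELECT
    t2a.newid AS id,
    t2b.newid AS id2
FROM table1 t1
LEFT JOIN table2 t2a ON t1.id = t2a.id
LEFT JOIN table2 t2b ON t1.id2 = t2b.id;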
You'd need to apply joins in the update query. Something like this:
Update tblA
set column1 = 'something', column2 = 'something'
from actualName tblA
inner join MappingTable tblB
    on tblA.ID = tblB.ID
This query will compare each row by ID and, where matched, update/replace the column values as desired. :)
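Adapted to the question's tables, the mapping would have to be joined twice, once per column being replaced; a sketch, with SourceTable and MappingTable as assumed table names:
UPDATE s
SET s.id  = m1.[newid],
    s.id2 = m2.[newid]
FROM SourceTable s
INNER JOIN MappingTable m1 ON s.id  = m1.id
INNER JOIN MappingTable m2 ON s.id2 = m2.id;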
Just join the mapping table twice, once per column:
SELECT t1.[newid] AS id, t2.[newid] AS id2
FROM table1 t
INNER JOIN table2 t1 ON t1.id = t.id
INNER JOIN table2 t2 ON t2.id = t.id2
This may have the best performance of the solutions posted here, if you have indexes set appropriately:
select (select [newid] from MappingTable where id = [ST].[id]) [id],
       (select [newid] from MappingTable where id = [ST].[id2]) [id2]
from SourceTable [ST]
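For those correlated lookups to stay cheap, MappingTable needs an index on id that covers [newid]; one way to create it (the index name is illustrative, and UNIQUE assumes each id maps to exactly one newid):
CREATE UNIQUE INDEX IX_MappingTable_id
    ON MappingTable (id)
    INCLUDE ([newid]);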

Oracle Efficiently joining tables with subquery in FROM

Table 1:
| account_no | other columns... |
+------------+------------------+
| 1          |                  |
| 2          |                  |
| 3          |                  |
| 4          |                  |
Table 2:
| account_no | TX_No | Balance | History |
+------------+-------+---------+------------+
| 1 | 123 | 123 | 12.01.2011 |
| 1 | 234 | 2312 | 01.03.2011 |
| 3 | 232 | 212 | 19.02.2011 |
| 4 | 117 | 234 | 24.01.2011 |
I have a multi-join query. One of the tables (Table 2) inside the query is problematic, as it is a view which computes many other things, which is why each query against that table is costly. From Table 2, for each account_no in Table 1 I need the whole row with the greatest TX_NO. This is how I do it:
SELECT *
FROM TABLE1 A
LEFT JOIN (
    SELECT
        X.ACCOUNT_NO,
        X.HISTORY,
        X.BALANCE
    FROM TABLE2 X
    INNER JOIN (
        SELECT
            ACCOUNT_NO,
            MAX(TX_NO) AS TX_NO
        FROM TABLE2
        GROUP BY ACCOUNT_NO
    ) Y
        ON X.ACCOUNT_NO = Y.ACCOUNT_NO
        AND X.TX_NO = Y.TX_NO   -- needed so only the latest row per account survives
) B
    ON B.ACCOUNT_NO = A.ACCOUNT_NO
As I understand it, this first performs the inner join for all the rows in Table2, and only afterwards left-joins the needed account_no's with Table1, which is what I would like to avoid.
My question: is there a way to find the max(TX_NO) for only those accounts that are in Table1, instead of going through all of them? I think it would help increase the speed of the query.
I think you are on the right track, but I don't think that you need to, and would not myself, nest the subqueries the way you have done. Instead, if you want to get each record from table 1 and the matching max record from table 2, you can try the following:
SELECT *
FROM TABLE1 t1
LEFT JOIN
(
    SELECT t.*,
           ROW_NUMBER() OVER (PARTITION BY account_no ORDER BY TX_No DESC) rn
    FROM TABLE2 t
) t2
    ON t1.account_no = t2.account_no
    AND t2.rn = 1
If you want to continue with your original approach, this is how I would do it:
SELECT *
FROM TABLE1 t1
LEFT JOIN TABLE2 t2
    ON t1.account_no = t2.account_no
INNER JOIN
(
    SELECT account_no, MAX(TX_No) AS max_tx_no
    FROM TABLE2
    GROUP BY account_no
) t3
    ON t2.account_no = t3.account_no
    AND t2.TX_No = t3.max_tx_no
Rather than using a window function to find the greatest record per account in TABLE2, this uses a second join to a subquery. I would expect the window function approach to perform better than this double-join approach, and once you get used to it, it can be even easier to read.
If table1 is comparatively less expensive, you could consider doing the left outer join first, which would considerably decrease the result set, and from that pick only the latest-transaction rows:
select <required columns> from
(
    select f.<required columns>,
           row_number() over (partition by account_no order by tx_no desc) as rn
    from
    (
        select a.*, b.tx_no, b.balance, b.history
        from table1 a
        left outer join table2 b
            on a.account_no = b.account_no
    ) f
) g
where g.rn = 1

Returning max values of query based on different column

My table stores various versions of a few documents.
+----+----------+----------+
| id | doc_type | download |
+----+----------+----------+
| 1  | type1    | file     |
| 2  | type2    | file     |
| 3  | type3    | file     |
| 4  | type1    | file     |
+----+----------+----------+
The table stores different versions of the same type of documents. I need to build a query which will return, for each distinct doc_type, the row with max(id), which is the newest version of the file. The number of doc_types is not limited and is dynamic. My query so far:
select max(id) from template_table
where doc_type in (select distinct doc_type from template_table);
This returns only the single largest id. I could sort results by id and limit the result to the 4 largest, but that would not guarantee distinct doc_types. Also, the number of document types in the DB might change from 4, so it needs to be counted dynamically.
select * from template_table
order by id desc limit 4;
Thanks for any help.
Query:
SELECT t1.id,
       t1.doc_type,
       t1.download
FROM template_table t1
JOIN (SELECT MAX(id) AS id,
             doc_type
      FROM template_table
      GROUP BY doc_type) t2
    ON t2.doc_type = t1.doc_type
    AND t2.id = t1.id
OR:
SELECT t1.id,
       t1.doc_type,
       t1.download
FROM template_table t1
WHERE t1.id = (SELECT MAX(t2.id)
               FROM template_table t2
               WHERE t2.doc_type = t1.doc_type)
You can use GROUP BY to get the desired result (note that the KEEP (DENSE_RANK FIRST ...) syntax below is Oracle-specific):
select
doc_type
, max(id) AS last_id
, max(download) KEEP (DENSE_RANK FIRST order by id desc) AS last_download
from template_table
group by doc_type
;

Only select first row of repeating value in a column in SQL

I have a table that has a column that may repeat the same value in a burst. Like this:
+----+---------+
| id | Col1    |
+----+---------+
| 1  | 6050000 |
| 2  | 6050000 |
| 3  | 6050000 |
| 4  | 6060000 |
| 5  | 6060000 |
| 6  | 6060000 |
| 7  | 6060000 |
| 8  | 6060000 |
| 9  | 6050000 |
| 10 | 6000000 |
| 11 | 6000000 |
+----+---------+
Now I want to prune rows where the value of Col1 is repeated and only select the first occurrence.
For the above table the result should be:
+----+---------+
| id | Col1    |
+----+---------+
| 1  | 6050000 |
| 4  | 6060000 |
| 9  | 6050000 |
| 10 | 6000000 |
+----+---------+
How can I do this in SQL?
Note that only burst rows should be removed; values can still repeat across separate bursts! The value at id=1 reappears at id=9 in the sample result.
EDIT:
I achieved it using this:
select id, col1 from data as d1
where not exists (
    select id from data as d2
    where d2.id = d1.id - 1 and d1.col1 = d2.col1
)
But this only works when ids are sequential. With gaps between ids (deleted ones) the query breaks. How can I fix this?
You can use an EXISTS semi-join to identify candidates:
Select wanted rows:
SELECT * FROM tbl t
WHERE NOT EXISTS (
SELECT *
FROM tbl
WHERE col1 = t.col1
AND id = t.id - 1
)
ORDER BY id;
Get rid of unwanted rows:
DELETE FROM tbl AS t
-- SELECT * FROM tbl t -- check first?
WHERE EXISTS (
SELECT *
FROM tbl
WHERE col1 = t.col1
AND id = t.id - 1
);
This effectively deletes every row where the preceding row has the same value in col1, thereby arriving at your stated goal: only the first row of every burst survives.
I left the commented SELECT statement because you should always check what is going to be deleted before you do the deed.
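One way to run that check safely (a sketch, assuming an RDBMS with transactional DML such as PostgreSQL, whose syntax this answer already uses) is to wrap the delete in a transaction and roll back if the result looks wrong:
BEGIN;

DELETE FROM tbl AS t
WHERE EXISTS (
   SELECT *
   FROM tbl
   WHERE col1 = t.col1
   AND id = t.id - 1
);

SELECT * FROM tbl ORDER BY id;  -- inspect what survived

ROLLBACK;  -- or COMMIT once the result is verified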
Solution for non-sequential IDs:
If your RDBMS supports CTEs and window functions (like PostgreSQL, Oracle, SQL Server, ... but not SQLite prior to v3.25, MS Access or MySQL prior to v8.0.1), there is an elegant way:
WITH cte AS (
    SELECT *, row_number() OVER (ORDER BY id) AS rn
    FROM tbl
)
SELECT id, col1
FROM cte c
WHERE NOT EXISTS (
    SELECT *
    FROM cte
    WHERE col1 = c.col1
    AND rn = c.rn - 1
)
ORDER BY id;
Another way of doing the job without those niceties (this should work for you):
SELECT id, col1
FROM tbl t
WHERE (
SELECT col1 = t.col1
FROM tbl
WHERE id < t.id
ORDER BY id DESC
LIMIT 1) IS NOT TRUE
ORDER BY id;
select min(id), Col1 from tableName group by Col1
If your RDBMS supports window aggregate functions and/or the LEAD() and LAG() functions, you can leverage them to accomplish what you are trying to do. The following SQL will help get you started down the right path:
SELECT id
     , Col AS CurCol
     , MAX(Col) OVER (ORDER BY id ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS PrevCol
     , MIN(Col) OVER (ORDER BY id ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING) AS NextCol
FROM MyTable
From there you can put that SQL in a derived table with some CASE logic: if NextCol or PrevCol is the same as CurCol, set CurCol = NULL. Then you can eliminate all the id records where CurCol IS NULL. For the stated goal (keeping the first row of each burst) only PrevCol actually needs to be checked, as in the sketch below.
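A minimal sketch of that derived-table step, reusing the column and table names from the snippet above and filtering directly rather than NULL-ing out values:
SELECT id, CurCol AS Col1
FROM (
    SELECT id,
           Col AS CurCol,
           MAX(Col) OVER (ORDER BY id ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS PrevCol
    FROM MyTable
) t
WHERE PrevCol IS NULL        -- the very first row has no predecessor
   OR PrevCol <> CurCol      -- value changed: first row of a new burst
ORDER BY id;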
If you don't have the ability to use window aggregates or LEAD/LAG functions your task is a little more complex.
Hope this helps.
Since id is always sequential, with no gaps or repetitions, as per your comment, you could use the following method:
SELECT t1.*
FROM atable t1
LEFT JOIN atable t2 ON t1.id = t2.id + 1 AND t1.Col1 = t2.Col1
WHERE t2.id IS NULL
The table is (outer-)joined to itself on the condition that the left side's id is one greater than the right side's and their Col1 values are identical. In other words, the condition is ‘the previous row contains the same Col1 value as the current row’. If there's no match on the right, then the current record should be selected.
UPDATE
To account for non-sequential ids (which, however, are assumed to be unique and defining the order of changes of Col1), you could also try the following query:
SELECT t1.*
FROM atable t1
LEFT JOIN atable t2 ON t1.id > t2.id
LEFT JOIN atable t3 ON t1.id > t3.id AND t3.id > t2.id
WHERE t3.id IS NULL
AND (t2.id IS NULL OR t2.Col1 <> t1.Col1)
The third self-join is there to ensure that the second one yields the row directly preceding that of t1. That is, if there's no match for t3, then either t2 contains the preceding row or it's got no match either, the latter meaning that t1's current row is the top one.