I've been reading this site for answers for quite a while, and now I'm asking my first question!
I'm using SQL Server.
I have two tables, ABC and ABC_Temp.
The contents are inserted into ABC_Temp first before making their way to ABC.
Table ABC and ABC_Temp have the same columns, except that ABC_Temp has an extra column called LastUpdatedDate, which contains the date of the last update. Because ABC_Temp can contain more than one row for the same item, it has a composite key of the item number and the last updated date.
The columns are: ItemNo | Price | Qty and ABC_Temp has an extra column: LastUpdatedDate
I want to create a statement that meets the following conditions:
Check whether any of the attributes of ABC differ from the values in ABC_Temp for records with the same key; if so, do the update (even if only one attribute is different, all the other attributes can be updated as well).
Only update records that need changes; if the record is the same, it should not be updated.
Since an item can have more than one record in ABC_Temp, I only want the most recently updated one to be applied to ABC.
I am currently using 2005 (I think, not at work at the moment).
This will be in a stored procedure that is called from a VBScript scheduled task, so I believe it is a one-time thing. Also, I'm not trying to sync the two tables, as ABC_Temp only contains new records bulk inserted from a text file through BCP. For context, this will be used in conjunction with an insert stored proc that checks whether records exist.
UPDATE
ABC
SET
price = T1.price,
qty = T1.qty
FROM
ABC
INNER JOIN ABC_Temp T1 ON
T1.item_no = ABC.item_no
LEFT OUTER JOIN ABC_Temp T2 ON
T2.item_no = T1.item_no AND
T2.last_updated_date > T1.last_updated_date
WHERE
T2.item_no IS NULL AND
(
T1.price <> ABC.price OR
T1.qty <> ABC.qty
)
If NULL values are possible in the price or qty columns then you will need to account for that. In this case I would probably change the inequality statements to look like this:
COALESCE(T1.price, -1) <> COALESCE(ABC.price, -1)
This assumes that -1 is not a valid value in the data, so you don't have to worry about it actually appearing there.
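Applied to the query above, the filter would look roughly like this (again assuming -1 never actually appears in either column):
WHERE
    T2.item_no IS NULL AND
    (
        COALESCE(T1.price, -1) <> COALESCE(ABC.price, -1) OR
        COALESCE(T1.qty, -1) <> COALESCE(ABC.qty, -1)
    )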
Also, is ABC_Temp really a temporary table that's just loaded long enough to get the values into ABC? If not then you are storing duplicate data in multiple places, which is a bad idea. The first problem is that now you need these kinds of update scenarios. There are other issues that you might run into, such as inconsistencies in the data, etc.
You could use cross apply to seek the last row in ABC_Temp with the same key. Use a where clause to filter out rows with no differences:
update abc
set col1 = latest.col1
, col2 = latest.col2
, col3 = latest.col3
from ABC abc
cross apply
(
select top 1 *
from ABC_Temp tmp
where abc.key = tmp.key
order by
tmp.LastUpdatedDate desc
) latest
where abc.col1 <> latest.col1
or (abc.col2 <> latest.col2
or (abc.col2 is null and latest.col2 is not null)
or (abc.col2 is not null and latest.col2 is null))
or abc.col3 <> latest.col3
In the example, only col2 is nullable. Since null <> 1 is not true, you have to check differences involving null using the is null syntax.
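If you'd rather not spell out every IS NULL combination, INTERSECT treats two NULLs as equal, so the difference check can be folded into a NOT EXISTS instead. A minimal sketch against the same hypothetical columns:
update abc
set col1 = latest.col1
, col2 = latest.col2
, col3 = latest.col3
from ABC abc
cross apply
(
-- latest row in ABC_Temp for this key, as in the query above
select top 1 *
from ABC_Temp tmp
where abc.key = tmp.key
order by
tmp.LastUpdatedDate desc
) latest
where not exists
(
-- INTERSECT matches NULL with NULL, so this is a NULL-safe "rows differ" test
select abc.col1, abc.col2, abc.col3
intersect
select latest.col1, latest.col2, latest.col3
)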
I have a table with the sample data below. I want to compare each record with all the other records in the same table, and return the ID whenever a record collides with any of the remaining records. The Name column holds comma-separated data, so if one record has 'A,C' as its Name and another has 'A' (see the input below), they collide with each other because 'A' is common to both.
In the same way, one of the records has nothing in Name (it is NULL). When it is NULL it should collide with all the remaining records. Like this Name column, I have around 10 columns of data to verify.
Input

ID  Name
1   A,C
2   B
3   A
4   NULL

Output

ID  ColloidID
1   3
1   4
2   4
3   1
3   4
4   1
4   2
4   3
Problem: I have implemented the solution below, and it works as expected. But it is only fine when there is little data in the table (<100k rows); it takes much more time and space when dealing with millions of rows (e.g. >20M).
SELECT DISTINCT A.ID,B.ID AS ColloidID
FROM #Temp1 A
CROSS APPLY #Temp1 B
WHERE A.ID<>B.ID
AND master.dbo.fIntersection(COALESCE(A.Name,B.Name,''),COALESCE(B.Name,A.Name,'')) = 1
Ideally you should not store multiple pieces of info in a single column.
Be that as it may, you can use a nested EXISTS with STRING_SPLIT to compare the two columns.
SELECT t1.ID, t2.ID
FROM #Temp1 t1
JOIN #Temp1 t2 ON t2.ID <> t1.ID
AND (t1.Name IS NULL OR t2.Name IS NULL
OR EXISTS (SELECT 1
FROM STRING_SPLIT(t1.Name, ',') s1
JOIN STRING_SPLIT(t2.Name, ',') s2 ON s2.value = s1.value
)
)
ORDER BY
t1.ID,
t2.ID;
20M isn't a lot of data, provided a good database design is used, with proper indexes. This is definitely not a good design. It violates the most basic design rule - one value per field. As a result, it's impossible to index Name, forcing 4*10^14 comparisons.
The only way to get acceptable performance is to fix the design. To do that Name has to be split into separate rows. The data needs to be stored in a table whose Name column is covered by an index or primary key:
create table #Id_Names (
ID bigint not null,
Name varchar(30) null,
INDEX IX_Id_Names (Name,ID)
);
GO
INSERT INTO #Id_Names (Id,Name)
select ID,value
from #Temp1 t
CROSS APPLY STRING_SPLIT(Name,',');
After that, the query is simplified to:
SELECT
t1.ID,t2.ID as ColloidID
FROM #Id_Names t1
INNER JOIN #Id_Names t2
ON t1.ID<>t2.ID
AND (t1.Name=t2.Name
OR t1.Name IS NULL
OR t2.Name IS NULL)
This can run a lot faster. The only real problem is the logic of treating NULL as a wildcard: a NULL row matches everything, so each NULL row gets paired with every other row in the table. The same relation is also repeated in both directions, e.g. (1,4) and (4,1).
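One way to keep the join indexable is to handle the NULL "wildcard" rows in separate branches; a rough sketch, assuming #Temp1 still holds the original un-split rows:
SELECT t1.ID, t2.ID AS ColloidID
FROM #Id_Names t1
INNER JOIN #Id_Names t2
    ON t1.Name = t2.Name    -- indexable equality join on the split values
    AND t1.ID <> t2.ID
UNION                        -- UNION also removes duplicate pairs
SELECT n.ID, o.ID            -- NULL rows collide with every other row...
FROM #Temp1 n
INNER JOIN #Temp1 o ON o.ID <> n.ID
WHERE n.Name IS NULL
UNION
SELECT o.ID, n.ID            -- ...in both directions
FROM #Temp1 n
INNER JOIN #Temp1 o ON o.ID <> n.ID
WHERE n.Name IS NULL;
The first branch can use the IX_Id_Names index; only the (hopefully rare) NULL rows fall back to the unindexed pairing.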
If #Temp1 were a permanent table, an alternative would be to create an indexed view. Creating a clustered index over a view essentially generates, stores and updates its results automatically.
Another option is to create a clustered columnstore index. This provides both compression and acceleration. The data is stored per column in row groups of roughly 1M rows. In each row group, each column value is only stored once.
create table #Id_Names (
ID bigint not null,
Name varchar(30) null,
INDEX CCI_Id_Names CLUSTERED COLUMNSTORE
);
I have a table named Merge_table that looks like:
Employee_Number MINISTRY_CODE BRANCH_SECRETARIAT_CODE
12 333 30
13 222 31
I want to copy the value of BRANCH_SECRETARIAT_CODE into a different table called EMPLOYMENTS, which looks like this (ENTITY_BRANCH currently holds NULLs):
EMPLOYEE_NUMBER JOINING_DATE ENTITY_BRANCH
12 11/12/2006 null
13 01/11/2009 null
So now I want to copy the value of BRANCH_SECRETARIAT_CODE from the first table into ENTITY_BRANCH in the second table for each employee, matching on EMPLOYEE_NUMBER.
You can list several tables in your UPDATE statement and specify which column of which table has to be updated from the values of the other table.
In your case you have only two tables, so the easiest approach is an implicit join on T1.Employee_Number = T2.Employee_Number:
UPDATE Table1 T1, Table2 T2
SET T2.ENTITY_BRANCH = T1.BRANCH_SECRETARIAT_CODE
WHERE T1.Employee_Number = T2.Employee_Number
This multi-table UPDATE syntax works on MySQL and Access, but SQL Server does not accept it. Please edit your question to add the proper RDBMS tag.
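If the target really is SQL Server, the rough equivalent is an UPDATE ... FROM with a JOIN; a minimal sketch, assuming the table and column names shown above:
UPDATE e
SET e.ENTITY_BRANCH = m.BRANCH_SECRETARIAT_CODE
FROM EMPLOYMENTS AS e
INNER JOIN Merge_table AS m
    ON m.Employee_Number = e.EMPLOYEE_NUMBER;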
Use the ANSI Standard's merge statement, which provides better join-style syntax for matching source and destination tables, supports complex source clauses, supports inserts too, etc.
merge into EMPLOYMENTS -- destination table
using Merge_table -- source table, or nested subquery, CTE, etc.
on Merge_table.Employee_Number = EMPLOYMENTS.EMPLOYEE_NUMBER
-- any other criteria to determine which destination rows to affect
-- e.g.: and EMPLOYMENTS.EMPLOYEE_NUMBER is null
-- when not matched then
-- [...]
when matched then
update
set EMPLOYMENTS.ENTITY_BRANCH = Merge_table.BRANCH_SECRETARIAT_CODE;
I have 2 tables in SQL Server: Table 1 and Table 2.
Table 1 has 500 records and Table 2 has millions of records.
Table 2 may or may not contain the 500 records of Table 1.
I have to compare Table 1 and Table 2, but the result should give me only the records of Table 1 that have any data change in Table 2. That means the result should contain at most 500 rows.
I don't have any primary key, but the columns in the two tables are the same. I have written the following query, but I am getting a timeout exception and it takes a long time to process. Please help.
With CTE_DUPLICATE(OLD_FIRSTNAME ,New_FirstName,
OLD_LASTNAME ,New_LastName,
OLD_MINAME ,New_MIName ,
OLD_FAMILYID,NEW_FAMILYID,ROWNUMBER)
as (
Select distinct
OLD.FIRST_NAME AS 'OLD_FIRSTNAME' ,New.First_Name AS 'NEW_FIRSTNAME',
OLD.LAST_NAME AS 'OLD_LASTNAME',New.Last_Name AS 'NEW_LASTNAME',
OLD.MI_NAME AS 'OLD_MINAME',New.MI_Name AS 'NEW_MINAME',
OLD.FAMILY_ID AS 'OLD_FAMILYID',NEW.FAMILY_ID AS 'NEW_FAMILYID',
row_number()over(partition by OLD.FIRST_NAME ,New.First_Name,
OLD.LAST_NAME ,New.Last_Name,
OLD.MI_NAME ,New.MI_Name ,
OLD.FAMILY_ID,NEW.FAMILY_ID
order by OLD.FIRST_NAME ,New.First_Name,
OLD.LAST_NAME ,New.Last_Name,
OLD.MI_NAME ,New.MI_Name ,
OLD.FAMILY_ID,NEW.FAMILY_ID )as rank
From EEMSCDBStatic OLD,EEMS_VIPFILE New where
OLD.MPID <> New.MPID and old.FIRST_NAME <> New.First_Name
and OLD.LAST_NAME <> New.Last_Name and OLD.MI_NAME <> New.MI_Name
and old.Family_Id<>New.Family_id
)
SELECT OLD_FIRSTNAME ,New_FirstName,
OLD_LASTNAME ,New_LastName,
OLD_MINAME ,New_MIName ,
OLD_FAMILYID,NEW_FAMILYID FROM CTE_DUPLICATE where rownumber=1
I think the main problem here is that your query forces the DB to fully multiply your tables, which means processing ~500M combinations. That happens because you're connecting any record from T1 with any record from T2 that has at least one different value, including MPID, which looks like the unique identifier that should be used to connect the records.
If MPID really is the column that identifies records in both tables, then your query should have a slightly different structure:
SELECT old.FIRST_NAME, new.First_Name,
       old.LAST_NAME, new.Last_Name,
       old.MI_NAME, new.MI_Name,
       old.FAMILY_ID, new.FAMILY_ID
FROM EEMSCDBStatic old
INNER JOIN EEMS_VIPFILE new ON old.MPID = new.MPID
WHERE old.FIRST_NAME <> new.First_Name
  AND old.LAST_NAME <> new.Last_Name
  AND old.MI_NAME <> new.MI_Name
  AND old.FAMILY_ID <> new.Family_Id
ORDER BY old.FIRST_NAME, new.First_Name,
         old.LAST_NAME, new.Last_Name,
         old.MI_NAME, new.MI_Name,
         old.FAMILY_ID, new.FAMILY_ID
A couple of other thoughts:
If you're looking for any change in a record (even if only one column has different values), you should use ORs in the WHERE clause, not ANDs. Right now you're only looking for records that changed values in all columns. For instance, you'll fail to find a person who changed his or her first name but kept the last name (see the sketch after the next point).
You should obviously consider indexing your tables if it's possible.
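For instance, a sketch of the first point, combining the MPID join above with ORs (NULLs in these columns would still need COALESCE or IS NULL handling):
SELECT old.FIRST_NAME, new.First_Name,
       old.LAST_NAME, new.Last_Name,
       old.MI_NAME, new.MI_Name,
       old.FAMILY_ID, new.FAMILY_ID
FROM EEMSCDBStatic old
INNER JOIN EEMS_VIPFILE new ON old.MPID = new.MPID
WHERE old.FIRST_NAME <> new.First_Name
   OR old.LAST_NAME <> new.Last_Name
   OR old.MI_NAME <> new.MI_Name
   OR old.FAMILY_ID <> new.Family_Id;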
Surely it is pointless to use the DISTINCT keyword together with ROW_NUMBER.
See this sql query distinct with Row_Number.
You are doing CROSS JOIN, which is terribly big in your case.
Perhaps in this condition:
where OLD.MPID <> New.MPID and old.FIRST_NAME <> New.First_Name and ...
you wanted to have OR instead of AND?
It is also not entirely clear why you use ROW_NUMBER at all - perhaps to find the best match.
All this is because, as #Shnugo correctly remarked, the logic behind your comparison is faulty - you must have some logic defined that JOINs the tables (like requiring the first and last names to be the same).
I have two tables: one has all fields as VARCHAR2, but the other has different types for different data.
For Example :
Table One
==========================
Col 1 VARCHAR2 UNIQUE KEY
Col 2 VARCHAR2
Col 3 VARCHAR2
===========================
Table Two
==========================
Col One VARCHAR2 UNIQUE KEY
Col Two TIMESTAMP
Col Three NUMBER
==========================
We have one mapping table. It denotes which column of Table One has to be compared with which column of Table Two.
For Example
Mapping Table
==============================
Table One Table Two
==============================
Col 1 Col One
Col 2 Col Three
Col 3 Col Two
==============================
Now, with the help of the UNIQUE KEY of Table One, we have to find the same row in Table Two, compare the rows column by column, and get the changes in the data.
Currently we are using a Java program that compares the data row by row and column by column and reports the changes between rows with the same UNIQUE KEY. It works fine but takes too much time, as we have 100,000 records in the DB.
Now my question is: is there any way I can compare the data at the SQL level and get the changes in the data?
You can do it 'manually' with a query like the one below. It's a lot of work, but there are only three different types of checks you need to do, so it's not very complex:
select
*
from
Table1 t1
full outer join Table2 t2 on t2.ID = t1.ID
where
-- Check ID, either record does not exist in either table.
t1.ID is null or
t2.ID is null or
-- Not nullable field can be easily compared.
t1.NotNullableField1 <> t2.NotNUllableField1 or
-- Nullable field is slightly more work.
t1.NullableField1 <> t2.NullableField1 or
(t1.NullableField1 is null and t2.NullableField1 is not null) or
(t1.NullableField1 is not null and t2.NullableField1 is null)
Another solution is to use MINUS, which is a bit like UNION, except that it returns the first dataset minus the records that appear in the second dataset:
select * from Table1 t1
MINUS
select * from Table2 t2
This works only one way (which might be fine for your purpose), but you can also combine it with UNION to make it bidirectional.
select
*
from
( select * from Table1
MINUS
select * from Table2)
UNION ALL
( select * from Table2
MINUS
select * from Table1)
The output of both solutions is a bit different.
In the FULL OUTER JOIN query, the IDs will be joined and the values of the matching rows will be displayed next to each other as a single row.
In the MINUS query, the result will be presented as a single dataset. If a record exists in only one of the tables, it will be displayed. If a record (ID) exists in both tables but other fields are different, you will get both rows, so it's a bit harder to compare them.
See: http://www.techonthenet.com/oracle/minus.php
I'm using SQL Server 2008 and I'm trying to load a new (target) table from a staging (source) table. The target table is empty.
I think that since my target table is empty, the MERGE statement skips the WHEN MATCHED part (i.e. the result of the INNER JOIN is empty and so nothing is UPDATED) and just proceeds to the WHEN NOT MATCHED BY TARGET part (LEFT OUTER JOIN), inserting all the records from the staging table.
My target table ends up looking exactly like my staging table, even though rows #1 and #4 share the same key. There should be only 3 rows in the target table (3 inserts, with row #4 updating the row from row #1). So I'm not sure what's going on.
FileID client_id account_name account_currency creation_date last_modified
210 12345 Cars USD 2013-11-21 2013-11-27
211 23498 Truck USD 2013-09-22 2013-11-27
212 97652 Cars - 1 USD 2013-09-17 2013-11-27
210 12345 Cars JPY 2013-11-21 2013-11-29
QUERY
MERGE [AccountSettings] AS tgt -- RIGHT TABLE
USING
(
SELECT * FROM [AccountSettings_Staging]
) AS src -- LEFT TABLE
ON src.client_id = tgt.client_id
AND src.account_name = tgt.account_name
WHEN MATCHED -- INNER JOIN
THEN UPDATE
SET
tgt.[FileID] = src.[FileID]
,tgt.[account_currency] = src.[account_currency]
,tgt.[creation_date] = src.[creation_date]
,tgt.[last_modified] = src.[last_modified]
WHEN NOT MATCHED BY TARGET -- left outer join: A row from the source that has no corresponding row in the target
THEN INSERT
(
[FileID],
[client_id],
[account_name],
[account_currency],
[creation_date],
[last_modified]
)
VALUES
(
src.[FileID],
src.[client_id],
src.[account_name],
src.[account_currency],
src.[creation_date],
src.[last_modified]
);
Since the target table is empty, using MERGE seems to me like hiring a plumber to pour you a glass of water. And MERGE operates only one branch, independently, for every row of the source - it can't see that the key is repeated and therefore perform an insert followed by an update. Expecting that betrays the assumption that SQL always operates on a row-by-row basis, when in fact most operations are performed on the entire set at once.
Why not just insert only the most recent row:
;WITH cte AS
(
SELECT FileID, ... other columns ...,
rn = ROW_NUMBER() OVER (PARTITION BY FileID ORDER BY last_modified DESC)
FROM dbo.AccountSettings_Staging
)
INSERT dbo.AccountSettings(FileID, ... other columns ...)
SELECT FileID, ... other columns ...
FROM cte WHERE rn = 1;
If you have potential for ties on the most recent last_modified, you'll need to find another tie-breaker (not obvious from your sample data).
For future loads (once the target is no longer empty), I would run an UPDATE first:
UPDATE a SET client_id = s.client_id /* , other columns that can change */
FROM dbo.AccountSettings AS a
INNER JOIN dbo.AccountSettings_Staging AS s
ON a.FileID = s.FileID;
(Of course, this will choose an arbitrary row if the source contains multiple rows with the same FileID - you may want to use a CTE here too to make the choice predictable.)
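For example, a rough sketch of making that choice predictable, assuming the newest last_modified per FileID is the row you want:
;WITH s AS
(
    SELECT *,
           rn = ROW_NUMBER() OVER (PARTITION BY FileID ORDER BY last_modified DESC)
    FROM dbo.AccountSettings_Staging
)
UPDATE a
SET a.client_id = s.client_id /* , other columns that can change */
FROM dbo.AccountSettings AS a
INNER JOIN s ON a.FileID = s.FileID
WHERE s.rn = 1;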
Then add this clause to the INSERT CTE above:
FROM dbo.AccountSettings_Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.AccountSettings
WHERE FileID = s.FileID);
Wrap it all in a transaction at the appropriate isolation level, and you are still avoiding a ton of complicated MERGE syntax, potential bugs, etc.
I think since my target table is empty, the MERGE statement skips the WHEN MATCHED part
Well, that's correct, but it's by design - MERGE is not a "progressive" merge. It does not go row-by-row to see if records inserted as part of the MERGE should now be updated. It processes the source in "batches" based on whether or not a match was found in the destination.
You'll need to deal with the "duplicate" records at the source before attempting the MERGE.
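For instance, one way to do that is to let the USING clause see only the newest staging row per key; a hedged sketch reusing the column names from the question:
MERGE [AccountSettings] AS tgt
USING
(
    -- keep only the most recent staging row per (client_id, account_name)
    SELECT FileID, client_id, account_name, account_currency, creation_date, last_modified
    FROM
    (
        SELECT *,
               rn = ROW_NUMBER() OVER (PARTITION BY client_id, account_name
                                       ORDER BY last_modified DESC)
        FROM [AccountSettings_Staging]
    ) AS d
    WHERE rn = 1
) AS src
ON src.client_id = tgt.client_id
AND src.account_name = tgt.account_name
WHEN MATCHED THEN UPDATE
    SET tgt.FileID = src.FileID,
        tgt.account_currency = src.account_currency,
        tgt.creation_date = src.creation_date,
        tgt.last_modified = src.last_modified
WHEN NOT MATCHED BY TARGET THEN INSERT
    (FileID, client_id, account_name, account_currency, creation_date, last_modified)
    VALUES (src.FileID, src.client_id, src.account_name, src.account_currency,
            src.creation_date, src.last_modified);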