How to copy the column id from another table? - sql

I've been stuck on this since last week. I have two tables, where the id column of CustomerTbl correlates with the CustomerID column of PurchaseTbl.
What I'm trying to achieve: I want to duplicate the table's data back into itself, but copy the newly generated id of CustomerTbl into PurchaseTbl's CustomerID.
Just like in the screenshots above. Glad for any help :)
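Since the screenshots are not included, here is a minimal sketch of the schema assumed below (table and column names are taken from the question and answers; the exact data types are assumptions):

CREATE TABLE CustomerTbl
(
    id   INT IDENTITY(1, 1) PRIMARY KEY,
    name NVARCHAR(100)
);

CREATE TABLE PurchaseTbl
(
    CustomerID INT REFERENCES CustomerTbl (id),
    Item       NVARCHAR(100),
    Price      DECIMAL(10, 2)
);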

You may use the OUTPUT clause to access the new ID. But to access both the old ID and the new ID, you will need to use a MERGE statement; an INSERT statement's OUTPUT clause does not let you reference the source's old id.
First you need somewhere to store the old and new id: a mapping table. You may use a table variable or a temp table.
declare @out table
(
    old_id int,
    new_id int
)
Then the MERGE statement with the OUTPUT clause:
merge
    CustomerTbl as t
using
(
    select id, name
    from CustomerTbl
) as s
on 1 = 2 -- force it to `false`, so every source row is `not matched`
when not matched then
    insert (name)
    values (s.name)
output -- the output clause
    s.id,        -- old_id
    inserted.id  -- new_id
into @out (old_id, new_id);
After that you just use @out, joining back on old_id, to obtain the new_id for PurchaseTbl:
insert into PurchaseTbl (CustomerID, Item, Price)
select o.new_id, p.Item, p.Price
from @out o
inner join PurchaseTbl p on o.old_id = p.CustomerID

Not sure what your end game is, but one way you could solve this:
INSERT INTO purchaseTbl (customerid, item, price)
SELECT customerid + 3, item, price
FROM purchaseTbl;
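Note that the hard-coded + 3 only works if each copied customer's new id is exactly 3 higher than its original. A slightly more general sketch (not part of the original answer) computes the offset instead; it still assumes CustomerTbl's ids form a contiguous identity sequence with no gaps, otherwise the MERGE / OUTPUT mapping above is the safe route:

DECLARE @offset INT = (SELECT MAX(id) FROM CustomerTbl);  -- capture before duplicating the customers

INSERT INTO CustomerTbl (name)
SELECT name FROM CustomerTbl;

INSERT INTO purchaseTbl (customerid, item, price)
SELECT customerid + @offset, item, price
FROM purchaseTbl;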

Related

Generating Lines based on a value from a column in another table

I have the following table:
EventID=00002,DocumentID=0005,EventDesc=ItemsReceived
I have the quantity in another table
DocumentID=0005,Qty=20
I want to generate a result of 20 lines (depending on the quantity) with an auto-generated column which will have a sequence of:
ITEM_TAG_001,
ITEM_TAG_002,
ITEM_TAG_003,
ITEM_TAG_004,
..
ITEM_TAG_020
Here's your SQL query.
with cte as (
    select 1 as ctr, t2.Qty, t1.EventID, t1.DocumentId, t1.EventDesc
    from tableA t1
    inner join tableB t2 on t2.DocumentId = t1.DocumentId
    union all
    select ctr + 1, Qty, EventID, DocumentId, EventDesc
    from cte
    where ctr < Qty -- `<` rather than `<=`, otherwise you get Qty + 1 rows
)
select *, concat('ITEM_TAG_', right('000' + cast(ctr as varchar(3)), 3)) as ItemTag
from cte
option (maxrecursion 0);
Best is to introduce a numbers table; it is very handy in many places...
Something along these lines.
Create some test data:
DECLARE @MockNumbers TABLE(Number BIGINT);
DECLARE @YourTable1 TABLE(DocumentID INT, ItemTag VARCHAR(100), SomeText VARCHAR(100));
DECLARE @YourTable2 TABLE(DocumentID INT, Qty INT);
INSERT INTO @MockNumbers SELECT TOP 100 ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) FROM master..spt_values;
INSERT INTO @YourTable1 VALUES(1,'FirstItem','qty 5'),(2,'SecondItem','qty 7');
INSERT INTO @YourTable2 VALUES(1,5), (2,7);
--The query
SELECT CONCAT(t1.ItemTag,'_',REPLACE(STR(A.Number,3),' ','0'))
FROM @YourTable1 t1
INNER JOIN @YourTable2 t2 ON t1.DocumentID=t2.DocumentID
CROSS APPLY(SELECT Number FROM @MockNumbers WHERE Number BETWEEN 1 AND t2.Qty) A;
The result
FirstItem_001
FirstItem_002
[...]
FirstItem_005
SecondItem_001
SecondItem_002
[...]
SecondItem_007
The idea in short:
We use an INNER JOIN to get the quantity joined to the item.
Then we use APPLY, which works row by row, to attach as many rows to the set as we need.
The first item will return with 5 lines, the second with 7. The trick with STR() and REPLACE() is one way to create a zero-padded number. You could use FORMAT() (v2012+) instead, but it performs rather slowly...
The table @MockNumbers is a declared table variable containing a list of numbers from 1 to 100. This answer provides an example of how to create a physical numbers and date table. Any database should have such a table...
If you don't want to create a numbers table, you can search for a tally table or a tally on the fly; there are many answers showing how to create a list of running numbers. A minimal on-the-fly version is sketched below.
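For illustration only (not part of the original answer), here is a tally built on the fly with a CTE, used the same way as @MockNumbers above; cross-join the base CTE again if you need more than 100 numbers:

WITH E1(n) AS
(
    SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) v(n)
),
Tally(Number) AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E1 a CROSS JOIN E1 b
)
SELECT CONCAT(t1.ItemTag, '_', REPLACE(STR(t.Number, 3), ' ', '0'))
FROM @YourTable1 t1
INNER JOIN @YourTable2 t2 ON t1.DocumentID = t2.DocumentID
INNER JOIN Tally t ON t.Number BETWEEN 1 AND t2.Qty;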

How to insert bulk data without changing order of item into table using merge statement

I wrote a stored procedure that inserts bulk data into a table using a MERGE statement.
The problem is that when I insert item ids 1024, 1000, 1012, 1025 in that order, SQL Server gives them back in the order 1000, 1012, 1024, 1025.
I want to get the data back in the order I actually passed it in.
Here is sample code. This parses the XML string into a table variable:
DECLARE @tblPurchase TABLE
(
    Purchase_Detail_ID INT,
    Purchase_ID INT,
    Head_ID INT,
    Item_ID INT
);
INSERT INTO @tblPurchase (Purchase_Detail_ID, Purchase_ID, Head_ID, Item_ID)
SELECT
    Tbl.Col.value('Purchase_Detail_ID[1]', 'INT') AS Purchase_Detail_ID,
    Tbl.Col.value('Purchase_ID[1]', 'INT') AS Purchase_ID,
    Tbl.Col.value('Head_ID[1]', 'INT') AS Head_ID,
    Tbl.Col.value('Item_ID[1]', 'INT') AS Item_ID
FROM
    @PurchaseDetailsXML.nodes('/documentelement/TRN_Purchase_Details') Tbl(Col)
This will insert bulk data into the TRN_Purchase_Details table:
MERGE TRN_Purchase_Details MTD
USING (SELECT
Purchase_Detail_ID,
Id AS Purchase_ID,
Head_ID, Item_ID
FROM
@tblPurchase
LEFT JOIN
#ChangeResult ON 1 = 1) AS TMTD ON MTD.Purchase_Detail_ID = TMTD.Purchase_Detail_ID
AND MTD.Purchase_ID = TMTD.Purchase_ID
WHEN MATCHED THEN
UPDATE SET MTD.Head_ID = TMTD.Head_ID,
MTD.Item_ID = TMTD.Item_ID
WHEN NOT MATCHED BY TARGET THEN
INSERT (Purchase_ID, Head_ID, Item_ID)
VALUES (Purchase_ID, Head_ID, Item_ID)
WHEN NOT MATCHED BY SOURCE AND
MTD.Purchase_ID = (SELECT TOP 1 Id
FROM #ChangeResult
WHERE Id > 0) THEN
DELETE;
Rows in a SQL table don't have any order. They come back in indeterminate order unless you specify an order by.
Try adding an identity column to your table variable:
DECLARE @tblPurchase TABLE
(
    ID INT IDENTITY(1, 1),
    Purchase_Detail_ID INT,
    Purchase_ID INT,
    Head_ID INT,
    Item_ID INT
);
The identity column might capture the order of the XML elements.
If that doesn't work, you can calculate the position of each element in the XML and store that position in the table, as sketched below.
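A sketch of that position calculation (not from the original answer, and assuming @tblPurchase gains an extra SourceOrder INT column): the XQuery counts each node's preceding siblings to get its document position.

INSERT INTO @tblPurchase (SourceOrder, Purchase_Detail_ID, Purchase_ID, Head_ID, Item_ID)
SELECT
    Tbl.Col.value('for $i in . return count(../*[. << $i]) + 1', 'INT') AS SourceOrder,
    Tbl.Col.value('Purchase_Detail_ID[1]', 'INT'),
    Tbl.Col.value('Purchase_ID[1]', 'INT'),
    Tbl.Col.value('Head_ID[1]', 'INT'),
    Tbl.Col.value('Item_ID[1]', 'INT')
FROM
    @PurchaseDetailsXML.nodes('/documentelement/TRN_Purchase_Details') Tbl(Col);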
As mentioned elsewhere, data in a table is stored as an unordered set. If you need to be able to go back to your table after data is inserted and determine the order in which it was inserted, you'll have to add a column to the table schema to record that information.
It could be something as simple as adding an IDENTITY column, which will increment on each row addition, or perhaps a column with a DATETIME data type and a GETDATE() default value, so you not only know the order rows were added, but exactly when that happened. For example:
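A sketch of both options against the target table (the new column and constraint names are made up for illustration, and the IDENTITY variant assumes the table has no identity column yet):

ALTER TABLE TRN_Purchase_Details
    ADD RowSeq INT IDENTITY(1, 1);  -- captures insertion order

ALTER TABLE TRN_Purchase_Details
    ADD InsertedAt DATETIME NOT NULL
        CONSTRAINT DF_TRN_Purchase_Details_InsertedAt DEFAULT GETDATE();  -- captures insertion time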

Enter Specific data depending on row criteria

Afternoon, I have the following SQL command:
SELECT INVOICE_ID,
ITEM_ID,
ORDER_NO,
CLIENT_STATE
FROM CUSTOMER_ORDER_INV_JOIN
WHERE ORDER_NO = '*1007';
This pulls out the following information (screenshot omitted).
There are specific criteria that I want to meet, as follows:
On order no *1007, if the client state on all lines = 'PaidPosted', then I need another column to show 'PaidPosted' on all lines.
However, if the client state on 4 lines = 'PaidPosted' but 1 or more lines = 'PostedAuth', then I need the new column to show 'PostedAuth' on all lines. And if all of the lines are NULL, I need the column to show 'No Invoice' on all lines.
Hopefully this makes more sense.
I think this will get you what you need.
You can create a Temporary Table that has your sort order:
CREATE TABLE #Sort_Order
(myOrder INT,
CLIENT_STATE NVARCHAR(20)
)
INSERT INTO #Sort_Order
VALUES(1, 'Preliminary')
INSERT INTO #Sort_Order
VALUES(2, 'PostedAuth')
INSERT INTO #Sort_Order
VALUES(3, 'PaidPosted')
Then you can just join it to your table and run a RANK function on it like so:
SELECT
A.*,
DENSE_RANK() OVER (PARTITION BY A.ID
ORDER BY B.myOrder ASC) AS OrderRank
FROM #Temp A
INNER JOIN #Sort_Order B On (A.CLIENT_STATE = B.CLIENT_STATE)
WHERE A.ID = 1
This will give you results by rank, and you can use a WHERE clause to filter on only OrderRank = 1, as sketched below.
If your data has multiple rows with the same client state, you will need to do some kind of DISTINCT or GROUP BY.
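A sketch of that filter (window functions can't go straight into WHERE, so the query is wrapped in a CTE; #Temp stands in for your CUSTOMER_ORDER_INV_JOIN data, as in the answer above):

WITH Ranked AS
(
    SELECT
        A.*,
        DENSE_RANK() OVER (PARTITION BY A.ID
                           ORDER BY B.myOrder ASC) AS OrderRank
    FROM #Temp A
    INNER JOIN #Sort_Order B ON (A.CLIENT_STATE = B.CLIENT_STATE)
)
SELECT *
FROM Ranked
WHERE OrderRank = 1;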

I am looking for a way for a trigger to insert into a second table only where the value in table 1 changes

I am looking for a way for a trigger to insert into a second table only when the value in table 1 changes. It is essentially an audit tool to trap any changes made. The field we care about in table 1 is price, and we want to write some additional fields alongside it.
This is what I have so far.
CREATE TRIGGER zmerps_Item_costprice__update_history_tr ON [ITEM]
FOR UPDATE
AS
insert into zmerps_Item_costprice_history
select NEWID(), -- unique id
GETDATE(), -- CURRENT_date
'PRICE_CHANGE', -- reason code
a.ima_itemid, -- item id
a.ima_price -- item price
FROM Inserted b inner join item a
on b.ima_recordid = a.IMA_RecordID
The history table only contains a unique identifier, date, reference (item) and the field changed (price). The trigger currently writes a row for any change, not just a price change.
Is it as simple as this? I moved some of the code around because comments after the comma between columns are just painful to maintain. You should also ALWAYS specify the columns in an INSERT statement; if your table changes, this code will still work.
CREATE TRIGGER zmerps_Item_costprice__update_history_tr ON [ITEM]
FOR UPDATE
AS
insert into zmerps_Item_costprice_history
(
UniqueID
, CURRENT_date
, ReasonCode
, ItemID
, ItemPrice
)
select NEWID()
, GETDATE()
, 'PRICE_CHANGE'
, d.ima_itemid
, d.ima_price
FROM Inserted i
inner join deleted d on d.ima_recordid = i.IMA_RecordID
AND d.ima_price <> i.ima_price
Since you haven't provided any other column names, I have used Column2 and Column3 as the "other" column names in the example below.
You can expand the code below by adding more columns.
Overview of the query below:
Joined the deleted and inserted tables (targeting only the rows that have changed); joining with the table itself would result in unnecessary processing of rows which haven't changed at all.
Used the NULLIF function to yield a NULL value if the value of a column hasn't changed.
Converted all the columns to the same data type (required for UNPIVOT).
Used UNPIVOT to eliminate all the NULLs from the result set.
UNPIVOT also gives you the name of the column it has unpivoted.
CREATE TRIGGER zmerps_Item_costprice__update_history_tr
ON [ITEM]
FOR UPDATE
AS
BEGIN
SET NOCOUNT ON ;
WITH CTE AS (
    SELECT CAST(NULLIF(i.Price,   d.Price)   AS NVARCHAR(100)) AS Price
         , CAST(NULLIF(i.Column2, d.Column2) AS NVARCHAR(100)) AS Column2
         , CAST(NULLIF(i.Column3, d.Column3) AS NVARCHAR(100)) AS Column3
    FROM inserted i
    INNER JOIN deleted d ON i.IMA_RecordID = d.IMA_RecordID
    WHERE i.Price <> d.Price
       OR i.Column2 <> d.Column2
       OR i.Column3 <> d.Column3
)
INSERT INTO zmerps_Item_costprice_history
    (unique_id, [CURRENT_date], [reason code], Item_Value)
SELECT NEWID()
     , GETDATE()
     , ColumnName + '_Change' -- reason code, e.g. 'Price_Change'
     , [Value]                -- the new value of the changed column
FROM CTE
UNPIVOT ([Value] FOR ColumnName IN (Price, Column2, Column3)) up
END
If I understand your question correctly, you want to record a change if and only if the Price column's value changes; you don't need any other column changes to be recorded.
One caveat: UPDATE(ima_price) only checks whether the column appeared in the UPDATE statement's SET list, not whether its value actually changed, so to catch only real changes you would still combine it with the inserted/deleted comparison shown in the earlier answer.
Here is your code:
CREATE TRIGGER zmerps_Item_costprice__update_history_tr ON [ITEM]
FOR UPDATE
AS
if update(ima_price)
insert into zmerps_Item_costprice_history
select NEWID(), -- unique id
GETDATE(), -- CURRENT_date
'PRICE_CHANGE', -- reason code
a.ima_itemid, -- item id
a.ima_price -- item price
FROM Inserted b inner join item a
on b.ima_recordid = a.IMA_RecordID

Tricky MS Access SQL query to remove surplus duplicate records

I have an Access table of the form (I'm simplifying it a bit)
ID AutoNumber Primary Key
SchemeName Text (50)
SchemeNumber Text (15)
This contains some data eg...
ID SchemeName SchemeNumber
--------------------------------------------------------------------
714 Malcolm ABC123
80 Malcolm ABC123
96 Malcolms Scheme ABC123
101 Malcolms Scheme ABC123
98 Malcolms Scheme DEF888
654 Another Scheme BAR876
543 Whatever Scheme KJL111
etc...
Now. I want to remove duplicate names under the same SchemeNumber. But I want to leave the record which has the longest SchemeName for that scheme number. If there are duplicate records with the same longest length then I just want to leave only one, say, the lowest ID (but any one will do really). From the above example I would want to delete IDs 714, 80 and 101 (to leave only 96).
I thought this would be relatively easy to achieve, but it's turning into a bit of a nightmare! Thanks for any suggestions. I know I could loop it programmatically, but I'd rather have a single DELETE query.
See if this query returns the rows you want to keep:
SELECT r.SchemeNumber, r.SchemeName, Min(r.ID) AS MinOfID
FROM
(SELECT
SchemeNumber,
SchemeName,
Len(SchemeName) AS name_length,
ID
FROM tblSchemes
) AS r
INNER JOIN
(SELECT
SchemeNumber,
Max(Len(SchemeName)) AS name_length
FROM tblSchemes
GROUP BY SchemeNumber
) AS w
ON
(r.SchemeNumber = w.SchemeNumber)
AND (r.name_length = w.name_length)
GROUP BY r.SchemeNumber, r.SchemeName
ORDER BY r.SchemeName;
If so, save it as qrySchemes2Keep. Then create a DELETE query to discard rows from tblSchemes whose ID value is not found in qrySchemes2Keep.
DELETE
FROM tblSchemes AS s
WHERE Not Exists (SELECT * FROM qrySchemes2Keep WHERE MinOfID = s.ID);
Just beware, if you later use Access' query designer to make changes to that DELETE query, it may "helpfully" convert the SQL to something like this:
DELETE s.*, Exists (SELECT * FROM qrySchemes2Keep WHERE MinOfID = s.ID)
FROM tblSchemes AS s
WHERE (((Exists (SELECT * FROM qrySchemes2Keep WHERE MinOfID = s.ID))=False));
DELETE FROM Table t1
WHERE EXISTS (SELECT 1 from Table t2
WHERE t1.SchemeNumber = t2.SchemeNumber
AND Length(t2.SchemeName) > Length(t1.SchemeName)
)
Depending on your RDBMS you may need a different length function (Oracle: LENGTH, MySQL: LENGTH, SQL Server: LEN).
delete ShortScheme
from Scheme ShortScheme
join Scheme LongScheme
on ShortScheme.SchemeNumber = LongScheme.SchemeNumber
and (len(ShortScheme.SchemeName) < len(LongScheme.SchemeName)
     or (len(ShortScheme.SchemeName) = len(LongScheme.SchemeName)
         and ShortScheme.ID > LongScheme.ID))
(SQL Server flavored)
Now updated to include the specified tie resolution. Although you may get better performance doing it in two queries: first deleting the schemes with shorter names as in my original query, and then going back and deleting the higher IDs where there was a tie in name length, as sketched below.
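A sketch of that two-pass variant (SQL Server flavored, same table and column names as above):

-- pass 1: remove rows whose name is strictly shorter than another row's for the same SchemeNumber
delete ShortScheme
from Scheme ShortScheme
join Scheme LongScheme
  on ShortScheme.SchemeNumber = LongScheme.SchemeNumber
 and len(ShortScheme.SchemeName) < len(LongScheme.SchemeName)

-- pass 2: among the remaining equal-length ties, keep only the lowest ID
delete Dup
from Scheme Dup
join Scheme Keeper
  on Dup.SchemeNumber = Keeper.SchemeNumber
 and len(Dup.SchemeName) = len(Keeper.SchemeName)
 and Dup.ID > Keeper.ID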
I'd do this in multiple steps. Large delete operations done in a single step make me too nervous -- what if you make a mistake? There's no SQL 'undo' statement.
-- Setup the data
DROP Table foo;
DROP Table bar;
DROP Table bat;
DROP Table baz;
CREATE TABLE foo (
id int(11) NOT NULL,
SchemeName varchar(50),
SchemeNumber varchar(15),
PRIMARY KEY (id)
);
insert into foo values (714, 'Malcolm', 'ABC123' );
insert into foo values (80, 'Malcolm', 'ABC123' );
insert into foo values (96, 'Malcolms Scheme', 'ABC123' );
insert into foo values (101, 'Malcolms Scheme', 'ABC123' );
insert into foo values (98, 'Malcolms Scheme', 'DEF888' );
insert into foo values (654, 'Another Scheme ', 'BAR876' );
insert into foo values (543, 'Whatever Scheme ', 'KJL111' );
-- Find all the records that have dups, find the longest one
create table bar as
select max(length(SchemeName)) as max_length, SchemeNumber
from foo
group by SchemeNumber
having count(*) > 1;
-- Find the one we want to keep
create table bat as
select min(a.id) as id, a.SchemeNumber
from foo a join bar b on a.SchemeNumber = b.SchemeNumber
and length(a.SchemeName) = b.max_length
group by a.SchemeNumber;
-- Select into this table all the rows to delete
create table baz as
select a.id from foo a join bat b on a.SchemeNumber = b.SchemeNumber
and a.id != b.id;
This will give you a new table with only records for rows that you want to remove.
Now check these out and make sure that they contain only the rows you want deleted. This way you can make sure that when you do the delete, you know exactly what to expect. It should also be pretty fast.
Then when you're ready, use this command to delete the rows:
delete from foo where id in (select id from baz);
This seems like more work because of the extra tables, but it's safer and probably just as fast as the other ways. Plus you can stop at any step and make sure the data is what you want before you do any actual deletes.
If your platform supports ranking functions and common table expressions:
with cte as (
    select row_number() over (partition by SchemeNumber
                              order by len(SchemeName) desc) as rn
    from Table
)
delete from cte where rn > 1;
try this:
Select * From Table t
Where Len(SchemeName) <
(Select Max(Len(Schemename))
From Table
Where SchemeNumber = t.SchemeNumber )
And Id >
(Select Min (Id)
From Table
Where SchemeNumber = t.SchemeNumber
And SchemeName = t.SchemeName)
or this:
Select * From Table t
Where Id >
(Select Min(Id) From Table
Where SchemeNumber = t.SchemeNumber
And Len(SchemeName) <
(Select Max(Len(Schemename))
From Table
Where SchemeNumber = t.SchemeNumber))
If either of these selects the records that should be deleted, just change it to a DELETE:
Delete
From Table t
Where Len(SchemeName) <
(Select Max(Len(Schemename))
From Table
Where SchemeNumber = t.SchemeNumber )
And Id >
(Select Min (Id)
From Table
Where SchemeNumber = t.SchemeNumber
And SchemeName = t.SchemeName)
or using the second construction:
Delete From Table t Where Id >
(Select Min(Id) From Table
Where SchemeNumber = t.SchemeNumber
And Len(SchemeName) <
(Select Max(Len(Schemename))
From Table
Where SchemeNumber = t.SchemeNumber))