How can I update the column "LowestFinishDate" in my #temp table to hold the absolute minimum value of the columns "Finished_OldDate", "Finished_NewDate" and "Current_FinishedDate"?
This is what the table looks like for Parent0000:
So in this case I want all 7 rows in #temp.[LowestFinishDate] for Parent0000 to be updated to the lowest date which is:
2020-11-25 14:15.
I have tried a CROSS/OUTER APPLY with a table-value constructor, but for some reason each LowestFinishDate row gets updated with the corresponding value of Current_FinishedDate.
Thanks in advance
In SQL Server, I would be inclined to write this as:
with toupdate as (
      select t.*,
             min(least_date) over (partition by t.parentid) as new_lowestfinishdate
      from #temp t cross apply
           (select min(dte) as least_date
            from (values (t.Finished_OldDate),
                         (t.Finished_NewDate),
                         (t.Current_FinishedDate)
                 ) v(dte)
           ) v
     )
update toupdate
    set lowestfinishdate = new_lowestfinishdate;
The cross apply takes the minimum value of the dates within each row. The window function then takes the minimum across all rows for the parent id.
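If you want to sanity-check the intermediate result before updating, a sketch like the following (column names are assumed from the question and the query above, so adjust them if your table differs) shows the per-row minimum next to the per-parent minimum:
SELECT t.parentid,
       v.least_date                                     AS row_minimum,
       MIN(v.least_date) OVER (PARTITION BY t.parentid) AS parent_minimum
FROM #temp t
CROSS APPLY (SELECT MIN(dte) AS least_date
             FROM (VALUES (t.Finished_OldDate),
                          (t.Finished_NewDate),
                          (t.Current_FinishedDate)) v(dte)) v;
Every row for Parent0000 should show 2020-11-25 14:15 as parent_minimum before you run the update.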
One method is to use MIN with a subquery and a VALUES table constructor:
UPDATE YT
SET LowestFinishedDate = (SELECT MIN(V.FinishDate)
                          FROM (VALUES (YT.FinishedOldDate),
                                       (YT.FinishedNewDate),
                                       (YT.CurrentFinishDate)) V(FinishDate)
                          WHERE V.FinishDate IS NOT NULL)
FROM dbo.YourTable YT;
I have a requirement where I have to check whether a record for the business date already exists in the table; if it does, I need to update the values for that business date from the select statement, otherwise I have to insert a row for that business date from the select statement. Below is my full query, where I am only inserting at the moment:
INSERT INTO
gstl_calculated_daily_fee(business_date,fee_type,fee_total,range_id,total_band_count)
select
@tlf_business_date,
'FEE_LOCAL_CARD',
SUM(C.settlement_fees),
C.range_id,
Count(1)
From
(
select
*
from
(
select
rowNumber = @previous_mada_switch_fee_volume_based_count + (ROW_NUMBER() OVER(PARTITION BY DATEPART(MONTH, x_datetime) ORDER BY x_datetime)),
tt.x_datetime
from gstl_trans_temp tt where (message_type_mapping = '0220') and card_type ='GEIDP1' and response_code IN('00','10','11') and tran_amount_req >= 5000 AND merchant_type NOT IN(5542,5541,4829)
) A
CROSS APPLY
(
select
rtt.settlement_fees,
rtt.range_id
From gstl_mada_local_switch_fee_volume_based rtt
where A.rowNumber >= rtt.range_start
AND (A.rowNumber <= rtt.range_end OR rtt.range_end IS NULL)
) B
) C
group by CAST(C.x_datetime AS DATE),C.range_id
I have tried to use IF EXISTS, but could not fit it into the full query above.
if exists (select business_date
           from gstl_calculated_daily_fee
           where business_date = @tlf_business_date)
    UPDATE gstl_calculated_daily_fee
    SET fee_total = @total_mada_local_switch_fee_low
    WHERE fee_type = 'FEE_LOCAL_CARD'
      AND business_date = @tlf_business_date
else
INSERT INTO
Please help.
You need a MERGE statement with a join.
Basically, our issue with MERGE is going to be that we only want to merge against a subset of the target table. To do this, we pre-filter the table as a CTE. We can also put the source table as a CTE.
Be very careful when you write a MERGE against a CTE. You must make sure you fully filter the target within the CTE to just the rows you want to merge against, and then match the rows using ON.
;with source as (
select
business_date = @tlf_business_date,
fee_total = SUM(B.settlement_fees),
B.range_id,
total_band_count = Count(1)
From
(
select
rowNumber = @previous_mada_switch_fee_volume_based_count + (ROW_NUMBER() OVER(PARTITION BY DATEPART(MONTH, x_datetime) ORDER BY x_datetime)),
tt.x_datetime
from gstl_trans_temp tt where (message_type_mapping = '0220') and card_type ='GEIDP1' and response_code IN('00','10','11') and tran_amount_req >= 5000 AND merchant_type NOT IN(5542,5541,4829)
) A
CROSS APPLY
(
select
rtt.settlement_fees,
rtt.range_id
From gstl_mada_local_switch_fee_volume_based rtt
where A.rowNumber >= rtt.range_start
AND (A.rowNumber <= rtt.range_end OR rtt.range_end IS NULL)
) B
group by CAST(A.x_datetime AS DATE), B.range_id
),
target as (
select
business_date,fee_type,fee_total,range_id,total_band_count
from gstl_calculated_daily_fee
where business_date = @tlf_business_date AND fee_type = 'FEE_LOCAL_CARD'
)
MERGE INTO target t
USING source s
ON t.business_date = s.business_date AND t.range_id = s.range_id
WHEN NOT MATCHED BY TARGET THEN INSERT
(business_date,fee_type,fee_total,range_id,total_band_count)
VALUES
(s.business_date,'FEE_LOCAL_CARD', s.fee_total, s.range_id, s.total_band_count)
WHEN MATCHED THEN UPDATE SET
fee_total = @total_mada_local_switch_fee_low
;
The way a MERGE statement works is that it basically does a FULL JOIN between the source and target tables, using the ON clause to match. It then applies various conditions to the resulting join and executes statements based on them.
There are three possible conditions you can use:
WHEN MATCHED THEN
WHEN NOT MATCHED [BY TARGET] THEN
WHEN NOT MATCHED BY SOURCE THEN
And three possible statements, all of which refer to the target table: UPDATE, INSERT, DELETE (not all are applicable in all cases obviously).
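Putting those together, the general shape of a MERGE looks like this (a minimal sketch; the table and column names are placeholders, not objects from the question):
MERGE INTO dbo.TargetTable AS t
USING dbo.SourceTable AS s
    ON t.KeyCol = s.KeyCol
WHEN MATCHED THEN
    UPDATE SET t.SomeCol = s.SomeCol
WHEN NOT MATCHED BY TARGET THEN
    INSERT (KeyCol, SomeCol) VALUES (s.KeyCol, s.SomeCol)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;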
A common problem is that we only want to consider a subset of the target table. There are a number of possible solutions to this:
We could filter the matching inside the WHEN MATCHED clause, e.g. WHEN MATCHED AND target.somefilter = @somefilter. This can often cause a full table scan, though.
Instead, we put the filtered target table inside a CTE, and then MERGE into that. The CTE must follow Updatable View rules. We must also select all columns we wish to insert or update to. But we must make sure we are fully filtering the target, otherwise if we issue a DELETE then all rows in the target table will get deleted.
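To make the scope issue concrete, here is a hedged sketch of the dangerous variant (illustration only, do not run it as-is):
-- Merging into the UNFILTERED base table: this clause would delete every row in
-- gstl_calculated_daily_fee whose business_date is not covered by the source...
MERGE INTO gstl_calculated_daily_fee t
USING (SELECT @tlf_business_date AS business_date) s
    ON t.business_date = s.business_date
WHEN NOT MATCHED BY SOURCE THEN DELETE;
-- ...whereas merging into the filtered "target" CTE above confines any UPDATE or
-- DELETE to the rows for @tlf_business_date and 'FEE_LOCAL_CARD'.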
Query:
SELECT *
FROM [MemberBackup].[dbo].[OriginalBackup]
where ration_card_id in
(
1247881,174772,
808454,2326154
)
Right now the data is ordered by the auto id or whatever clause I'm passing in order by.
But I want the data to come back in the same sequence as the IDs I have passed.
Expected Output:
All Data for 1247881
All Data for 174772
All Data for 808454
All Data for 2326154
Note:
The number of IDs to be passed will be 300,000.
One option would be to create a CTE containing the ration_card_id values and the ordering you are imposing, and then join to this table:
WITH cte AS (
SELECT 1247881 AS ration_card_id, 1 AS position
UNION ALL
SELECT 174772, 2
UNION ALL
SELECT 808454, 3
UNION ALL
SELECT 2326154, 4
)
SELECT t1.*
FROM [MemberBackup].[dbo].[OriginalBackup] t1
INNER JOIN cte t2
ON t1.ration_card_id = t2.ration_card_id
ORDER BY t2.position
Edit:
If you have many IDs, then neither the answer above nor the answer using a CASE expression will suffice. In this case, your best bet is to load the list of IDs into a table with an auto-increment ID column, so that each number is labelled with a position as its record is loaded into your database. After this, you can join as I have done above, for example:
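A minimal sketch of that approach (the staging table name is illustrative):
CREATE TABLE #OrderedIds (
    position       int IDENTITY(1,1) PRIMARY KEY,
    ration_card_id int NOT NULL
);

-- In practice, bulk-load all 300,000 IDs here in the order you want them returned.
INSERT INTO #OrderedIds (ration_card_id)
VALUES (1247881), (174772), (808454), (2326154);

SELECT t1.*
FROM [MemberBackup].[dbo].[OriginalBackup] t1
INNER JOIN #OrderedIds t2
    ON t1.ration_card_id = t2.ration_card_id
ORDER BY t2.position;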
If the desired order does not reflect a sequential ordering of some preexisting data, you will have to specify the ordering yourself. One way to do this is with a case statement:
SELECT *
FROM [MemberBackup].[dbo].[OriginalBackup]
where ration_card_id in
(
1247881,174772,
808454,2326154
)
ORDER BY CASE ration_card_id
WHEN 1247881 THEN 0
WHEN 174772 THEN 1
WHEN 808454 THEN 2
WHEN 2326154 THEN 3
END
Stating the obvious, but note that this ordering is most likely not backed by any index, so the sort cannot be satisfied from an index.
Insert your ration_card_id values into a #temps table with an identity column.
Rewrite your SQL query as:
SELECT a.*
FROM [MemberBackup].[dbo].[OriginalBackup] a
JOIN #temps b
on a.ration_card_id = b.ration_card_id
order by b.id
This might sound like a dumb question - apologies, I'm new to SQL Server and I just want to confirm my understanding.
I've got a query that is aggregating values in a table as a subquery in different ways for different columns, e.g. for a transaction on a given day, transactions in the previous month, previous 6 months, before that, after that.
I aliased the main table as tx, then the subquery alias as tx1 so I could use for example:
tx1.TransactionDate < tx.TransactionDate
I created one column, copied it and amended the WHERE conditions.
I assumed that the scope of an alias in the subquery is bound to that subquery, so it didn't matter that the alias was the same in each case.
It seems to work, but since neither the main table tx nor the subquery tables tx1 are altered, I cannot tell whether the scope of the alias tx1 is bound to each subquery or whether the initial tx1 is being reused.
Am I correct in my assumption?
Query:
SELECT tr.transaction_value,
       ISNULL((SELECT SUM(tr1.transaction_value)
               FROM [MyDB].[dbo].[Transactions] tr1
               WHERE tr1.client_ref = tr.client_ref
                 AND tr1.transaction_date > tr.transaction_date), 0) AS 'Future_Transactions',
       ISNULL((SELECT SUM(tr1.transaction_value)
               FROM [MyDB].[dbo].[Transactions] tr1
               WHERE tr1.client_ref = tr.client_ref
                 AND tr1.transaction_date < tr.transaction_date), 0) AS 'Prior_Transactions'
FROM [MyDB].[dbo].[Transactions] tr
I think the following script can explain everything.
-- Table definition assumed for the example (it was not shown in the original)
CREATE TABLE #t ( Id int, UserId int, TranDate datetime )

INSERT INTO #t ( Id, UserId, TranDate )
SELECT 1,1,GETDATE()
INSERT INTO #t ( Id, UserId, TranDate )
SELECT 2,1,GETDATE()
INSERT INTO #t ( Id, UserId, TranDate )
SELECT 3,1,GETDATE()
SELECT tx.Id  /* main alias */,
       tx1.Id /* first subquery alias */,
       tx2.Id /* second subquery alias */,
       (SELECT Id
        FROM #t txs /* alias used only in this one subquery; it must differ from the main alias
                       if you want to use the main alias inside it */
        WHERE txs.Id = tx.Id + 2 /* subquery value = main value + 2 */) AS Id
FROM #t tx /* main */
JOIN (SELECT *
      FROM #t tx
      WHERE tx.Id = 1 /* this uses only the subquery's own values; the main value cannot be referenced here */
     ) tx1 -- alias of subquery
    ON tx.Id = tx1.Id /* main value = subquery value */
CROSS APPLY (SELECT TOP 1 *
             FROM #t txc /* must differ from the main alias if you want to compare it with the main table */
             WHERE txc.Id > tx.Id /* subquery value > main value */
            ) tx2 -- alias of subquery
WHERE tx.Id = 1 AND  /* main alias; the alias used inside the first subquery cannot be referenced out here */
      tx1.Id = 1 AND /* subquery alias */
      tx2.Id = 2     /* subquery alias */
So yes, the alias can be reused, but only if you do not need to compare the main table with the subquery. If you reuse it and, for example, write tx.Id > tx.Id inside the subquery, only the values within the subquery are compared; in our example you would get nothing back, because you are comparing values in the same row.
I have a table serviceClusters with an identity column ID (1590 values). Then I have another table serviceClustersNew with the columns ID, text and comment. In this table I have some values for text and comment, and the ID is always 1. Here is an example of the table:
[1, dummy1, hello1;
1, dummy2, hello2;
1, dummy3, hello3;
etc.]
What I want now for the values in the column ID is the continuing index of the table serviceClusters plus the current row number: in our case, this would be 1591, 1592 and 1593.
I tried to solve the problem like this: first I updated the column ID with the maximum value, then I tried to add the row number, but this doesn't work:
-- Update ID to the maximum value 1590
UPDATE serviceClustersNew
SET ID = (SELECT MAX(ID) FROM serviceClusters);
-- This command returns the correct values 1591, 1592 and 1593
SELECT ID+ROW_NUMBER() OVER (ORDER BY Text_ID) AS RowNumber
FROM serviceClustersNew
-- But I'm not able to update the table with this command
UPDATE serviceClustersNew
SET ID = (SELECT ID+ROW_NUMBER() OVER (ORDER BY Text_ID) AS RowNumber FROM
serviceClustersNew)
When I run the last command, I get the error "Syntax error: Ordered Analytical Functions are not allowed in subqueries." Do you have any suggestions on how I could solve the problem? I know I could do it with a volatile table or by adding a column, but is there a way without creating a new table or altering the current table?
You have to rewrite it using UPDATE ... FROM; the syntax is just a bit bulky:
UPDATE serviceClustersNew
FROM
(
SELECT text_id,
(SELECT MAX(ID) FROM serviceClusters) +
ROW_NUMBER() OVER (ORDER BY Text_ID) AS newID
FROM serviceClustersNew
) AS src
SET ID = newID
WHERE serviceClustersNew.Text_ID = src.Text_ID
You are not dealing with a lot of data, so a correlated subquery can serve the same purpose:
UPDATE serviceClustersNew
SET ID = (select max(ID) from serviceClusters) +
(select count(*)
from serviceClustersNew scn2
where scn2.Text_Id <= serviceClustersNew.Text_Id
)
This assumes that Text_Id is unique across the rows.
Apparently you can update a base table through a CTE... had no idea. So, just change your last UPDATE statement to this, and you should be good. Just be sure to include any fields in the CTE that you desire to update.
;WITH cte_TEST AS
(   SELECT
        ID,
        ID + ROW_NUMBER() OVER (ORDER BY Text_ID) AS RowNumber
    FROM serviceClustersNew )
UPDATE cte_TEST
SET cte_TEST.ID = cte_TEST.RowNumber
Source:
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/ee06f451-c418-4bca-8288-010410e8cf14/update-table-using-rownumber-over
I have a bit of a weird question, given to me by a client.
He has a list of data, with a date between parentheses like so:
Foo (14/08/2012)
Bar (15/08/2012)
Bar (16/09/2012)
Xyz (20/10/2012)
However, he wants the list to be displayed as follows:
Foo (14/08/2012)
Bar (16/09/2012)
Bar (15/08/2012)
Xyz (20/10/2012)
(notice that the second Bar has moved up one position)
So the logic behind it is that the list has to be sorted by date ascending, EXCEPT when two rows have the same name ('Bar'). If they have the same name, they must be sorted with the LATEST date at the top, while otherwise staying in the overall sort order.
Is this even remotely possible? I've experimented with a lot of ORDER BY clauses, but couldn't find the right one. Does anyone have an idea?
I should have specified that this data comes from a table in a SQL Server database (the name and the date are in two different columns), so I'm looking for a SQL query that can do the sorting I want.
(I've dumbed this example down quite a bit, so if you need more context, don't hesitate to ask)
This works, I think
declare @t table (data varchar(50), date datetime)
insert @t
values
('Foo','2012-08-14'),
('Bar','2012-08-15'),
('Bar','2012-09-16'),
('Xyz','2012-10-20')
select t.*
from @t t
inner join (select data, COUNT(*) cg, MAX(date) as mg from @t group by data) tc
on t.data = tc.data
order by case when cg>1 then mg else date end, date desc
produces
data date
---------- -----------------------
Foo 2012-08-14 00:00:00.000
Bar 2012-09-16 00:00:00.000
Bar 2012-08-15 00:00:00.000
Xyz 2012-10-20 00:00:00.000
A way with better performance than any of the other posted answers is to do it entirely with an ORDER BY, rather than with a JOIN or a CTE:
DECLARE @t TABLE (myData varchar(50), myDate datetime)
INSERT INTO @t VALUES
('Foo','2012-08-14'),
('Bar','2012-08-15'),
('Bar','2012-09-16'),
('Xyz','2012-10-20')
SELECT *
FROM @t t1
ORDER BY (SELECT MIN(t2.myDate) FROM @t t2 WHERE t2.myData = t1.myData), t1.myDate DESC
This does exactly what you request, works with existing indexes, and performs much better with larger amounts of data than any of the other answers.
Additionally, it is much clearer what you're actually trying to do here, rather than masking the real logic behind the complexity of a join and a check on the count of joined items.
This one uses analytic functions to perform the sort; it only requires one SELECT from your table.
The inner query finds gaps, where the name changes. These gaps are used to identify groups in the next query, and the outer query does the final sorting by these groups.
I have tried it here (SQL Fiddle) with extended test-data.
SELECT name, dat
FROM (
SELECT name, dat, SUM(gap) over(ORDER BY dat, name) AS grp
FROM (
SELECT name, dat,
CASE WHEN LAG(name) OVER (ORDER BY dat, name) = name THEN 0 ELSE 1 END AS gap
FROM t
) x
) y
ORDER BY grp, dat DESC
Extended test-data
('Bar','2012-08-12'),
('Bar','2012-08-11'),
('Foo','2012-08-14'),
('Bar','2012-08-15'),
('Bar','2012-08-16'),
('Bar','2012-09-17'),
('Xyz','2012-10-20')
Result
Bar 2012-08-12
Bar 2012-08-11
Foo 2012-08-14
Bar 2012-09-17
Bar 2012-08-16
Bar 2012-08-15
Xyz 2012-10-20
I think that this works, including the case I asked about in the comments:
declare @t table (data varchar(50), [date] datetime)
insert @t
values
('Foo','20120814'),
('Bar','20120815'),
('Bar','20120916'),
('Xyz','20121020')
; With OuterSort as (
select *, ROW_NUMBER() OVER (ORDER BY [date] asc) as rn from @t
)
--Now we need to find contiguous ranges of the same data value, and the min and max row number for such a range
, Islands as (
select data,rn as rnMin,rn as rnMax from OuterSort os where not exists (select * from OuterSort os2 where os2.data = os.data and os2.rn = os.rn - 1)
union all
select i.data,rnMin,os.rn
from
Islands i
inner join
OuterSort os
on
i.data = os.data and
i.rnMax = os.rn-1
), FullIslands as (
select
data,rnMin,MAX(rnMax) as rnMax
from Islands
group by data,rnMin
)
select
*
from
OuterSort os
inner join
FullIslands fi
on
os.rn between fi.rnMin and fi.rnMax
order by
fi.rnMin asc,os.rn desc
It works by first computing the initial ordering in the OuterSort CTE. Then, using two CTEs (Islands and FullIslands), we compute the parts of that ordering in which the same data value appears in adjacent rows. Having done that, we can compute the final ordering by any value that all adjacent values will have (such as the lowest row number of the "island" that they belong to), and then within an "island", we use the reverse of the originally computed sort order.
Note, though, that this may not be too efficient for large data sets. On the sample data it shows up as requiring 4 table scans of the base table, as well as a spool.
Try something like...
ORDER BY CASE date
WHEN '14/08/2012' THEN 1
WHEN '16/09/2012' THEN 2
WHEN '15/08/2012' THEN 3
WHEN '20/10/2012' THEN 4
END
In MySQL, you can do:
ORDER BY FIELD(date, '14/08/2012', '16/09/2012', '15/08/2012', '20/10/2012')
In Postgres, you can create a function FIELD and do:
CREATE OR REPLACE FUNCTION field(anyelement, anyarray) RETURNS numeric AS $$
SELECT
COALESCE((SELECT i
FROM generate_series(1, array_upper($2, 1)) gs(i)
WHERE $2[i] = $1),
0);
$$ LANGUAGE SQL STABLE
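Once the function exists, the call mirrors the MySQL version, with the values passed as an array (a sketch; it assumes the column and the array elements share the same type, e.g. text in dd/mm/yyyy format):
ORDER BY field(date, ARRAY['14/08/2012', '16/09/2012', '15/08/2012', '20/10/2012'])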
If you do not want to use CASE, you can try to find an implementation of the FIELD function for SQL Server.
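One hedged way to emulate FIELD in SQL Server is to join to a VALUES list that carries the position (the table name here is a placeholder, and it assumes the dates are compared as the same literal text used in the CASE version above):
SELECT t.*
FROM dbo.MyTable AS t
LEFT JOIN (VALUES ('14/08/2012', 1),
                  ('16/09/2012', 2),
                  ('15/08/2012', 3),
                  ('20/10/2012', 4)) AS f(val, pos)
    ON f.val = t.[date]
ORDER BY f.pos;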