Select Previous Record in SQL Server 2008

Here's the case: I have one table myTable which contains 3 columns:
ID int, identity
Group varchar(2), not null
value decimal(18,0), not null
Table looks like this:
ID   GROUP   VALUE   Prev_Value   Result
------------------------------------------
1    A       20      0            20
2    A       30      20           10
3    A       35      30           5
4    B       100     0            100
5    B       150     100          50
6    B       300     150          150
7    C       40      0            40
8    C       60      40           20
9    A       50      35           15
10   A       70      50           20
Prev_Value and Result are meant to be computed columns; I need to produce them in a view. Can anyone help? Thank you so much.

The gist of what you need to do here is to join the table to itself, where part of the join condition is that the joined copy's Value is less than the original row's Value. Then you can group by the columns from the original table and take the maximum Value from the joined copy to get your results:
SELECT t1.id, t1.[Group], t1.Value
     , coalesce(MAX(t2.Value), 0) As Prev_Value
     , t1.Value - coalesce(MAX(t2.Value), 0) As Result
FROM MyTable t1
LEFT JOIN MyTable t2 ON t2.[Group] = t1.[Group] and t2.Value < t1.Value
GROUP BY t1.id, t1.[Group], t1.Value
Once you can upgrade to SQL Server 2012 you'll also be able to take advantage of the new LAG function.
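For reference, a minimal sketch of that 2012+ form, assuming ordering by Value within each [Group] mirrors the self-join above (the two match exactly when values are distinct within a group):
SELECT id, [Group], Value,
       LAG(Value, 1, 0) OVER (PARTITION BY [Group] ORDER BY Value) AS Prev_Value,
       Value - LAG(Value, 1, 0) OVER (PARTITION BY [Group] ORDER BY Value) AS Result
FROM MyTable;
LAG(Value, 1, 0) reads the previous row's Value within the partition and falls back to 0 on the first row, which replaces both the self-join and the coalesce.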

Related

TSQL - update based on value between two ints in 2nd table.

I'm having an issue with a mass update across millions of rows; an example of what I'm attempting is below. I'm trying to avoid CASE statements if possible, as there are over 1000 ranks.
Table 1:
id, score, rank
1 4090 null
2 6400 null
3 8905 null
4 2551 null
Table 2:
Rank, Score
1 0
2 1000
3 3500
4 5000
5 8000
6 10000
I'm attempting to update table 1 to display the correct rank.
EX: ID 2, having a score of 6400, would be above 5000 but below 8000, and would therefore be rank 4. Is this possible without a CASE statement?
You can use cross apply:
update t1
set rank = t2.rank
from table1 t1 cross apply
     (select top 1 t2.*
      from table2 t2
      where t2.score <= t1.score
      order by t2.score desc
     ) t2;
For millions of rows I would suggest one of the following:
- Do the update in batches (a sketch follows this list).
- Use a CASE statement.
- Put the output in a new table, truncate the original table, and reload.
"Millions" of updates is often a very expensive operation.
Another option is a simple JOIN in concert with LEAD().
Example
Update Table1 Set Rank = B.Rank
From Table1 A
Join (
      Select Rank
           , R1 = Score
           , R2 = Lead(Score, 1, 999999) over (Order By Score)
      From Table2
     ) B on A.score >= B.R1 and A.Score < B.R2
Returns
id score rank
1 4090 3
2 6400 4
3 8905 5
4 2551 2

Assigning a value of data for each record having the same condition in SQL Server 2008

I have a table in SQL Server 2008 like:
Period Name Value
1 A 10
2 A 20
3 A 30
4 A 40
1 B 50
2 B 80
3 B 70
4 B 60
I need to write a SELECT query that includes a new column MainValue, containing the Value of the Period = 4 row for the same Name, repeated on every row.
Example:
Period Name Value MainValue
1 A 10 40
2 A 20 40
3 A 30 40
4 A 40 40
1 B 50 60
2 B 80 60
3 B 70 60
4 B 60 60
How can I achieve this? I tried the query below, but it does not work the way I want.
Select
*,
(select Value where Period = 4) as MainValue
from myTable;
Any help would be appreciated.
Try this:
SELECT Period, Name, Value,
MAX(CASE WHEN Period=4 THEN Value END) OVER (PARTITION BY Name) AS MainValue
FROM mytable
The query uses a conditional window aggregate over Name partitions: within each partition, MAX(CASE WHEN Period = 4 THEN Value END) returns the Value of the row whose Period is 4.
You can do this a number of ways: a correlated sub-query as the column, a CROSS APPLY to a correlated query, or a CTE. I personally like the CTE approach. It would look something like this:
with MainValues as
(
    select Name
         , Value
    from SomeTable
    where Period = 4
)
select st.*
     , mv.Value as MainValue
from SomeTable st
join MainValues mv on st.Name = mv.Name
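For comparison, the correlated sub-query flavor mentioned above might look like the sketch below; it assumes at most one Period = 4 row per Name (otherwise the sub-query raises an error). Note the correlation on Name, which is what the asker's attempt was missing:
select st.*,
       (select mv.Value
        from SomeTable mv
        where mv.Name = st.Name   -- correlate on Name
          and mv.Period = 4) as MainValue
from SomeTable st;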

Insert rows based on number of distinct values in another table in SQL

I have a table with PO#, Days_to_travel, and Days_in_warehouse fields. I take the distinct Days_in_warehouse values from the table and insert them into a temp table. I need a script that inserts every Days_in_warehouse value from the temp table into the Days_in_warehouse_batch column of Table 1 by PO#, duplicating the PO records until every PO has one record per distinct value.
Example:
Temp table: (Contains only one field with all distinct values in table 1)
Days_in_warehouse
20
30
40
Table 1 :
PO# Days_to_travel Days_in_warehouse Days_in_warehouse_batch
1 10 20
2 5 30
3 7 40
Updated Table 1:
PO# Days_to_travel Days_in_warehouse Days_in_warehouse_batch
1 10 20 20
1 10 20 30
1 10 20 40
2 5 30 20
2 5 30 30
2 5 30 40
3 7 40 20
3 7 40 30
3 7 40 40
Any ideas as to how I can update Table 1 to get the desired results?
One more way, without a temp table or DELETE:
UPDATE T SET [Days_in_warehouse_batch] = [Days_in_warehouse];
INSERT INTO T ([PO], [Days_to_travel],
[Days_in_warehouse], [Days_in_warehouse_batch])
SELECT T.PO,
T.Days_to_travel,
T.Days_in_warehouse,
DAYS_Table.Days_in_warehouse
FROM T
CROSS JOIN
(SELECT DISTINCT Days_in_warehouse FROM T) as DAYS_Table
WHERE T.Days_in_warehouse <> DAYS_Table.Days_in_warehouse;
SQLFiddle demo
I would suggest the following:
insert into table1(PO#, Days_to_travel, Days_in_warehouse, Days_in_warehouse_batch)
select PO#, Days_to_travel, Days_in_warehouse, Days_in_warehouse
from table1 cross join
(select 20 as Days_in_warehouse union all select 30 union all select 40) var
where Days_in_warehouse_batch is null;
delete from table1
where Days_in_warehouse_batch is null;
What you're looking for is the Cartesian product of your two tables.
select t1.po, t1.daystotravel, t1.daysinwarehouse, temp.daysinwarehousebatch
from table1 t1, temp
The easiest way I can think of to update table1 with these values is to insert them and then delete the originals.
insert into table1
select t1.po, t1.daystotravel, t1.daysinwarehouse, temp.daysinwarehousebatch
from table1 t1, temp
And then delete the originals:
delete from table1 where daysinwarehousebatch is null
SQL Fiddle Demo

Get record ids from groups where the sum of one of the fields of their records is greater than a threshold

I have records as such:
Id ForeignKey Level ValueA ValueB
1 1001 1 2 10
2 1001 1 10 10
3 1001 1 20 20
4 1001 2 20 30
5 1002 1 1 100
6 1003 1 1 100
7 1004 1 1 100
I want to get the Ids of the records in each group, grouping by ForeignKey and Level, where the sum of the group's ValueA values divided by the sum of its ValueB values is greater than 0.5.
In this case, I'd like to retrieve the Ids of the first three records, since (2 + 10 + 20) / (10 + 10 + 20) = 0.8.
Here is what I've got so far:
select
    ForeignKey,
    SUM(ValueA) as ValueASum,
    SUM(ValueB) as ValueBSum
from tableA
group by ForeignKey
having (SUM(ValueA) / SUM(ValueB) > 0.5)
The result is
ForeignKey ValueASum ValueBSum
1001 32 40
How do I get the ids of the records from this point? If I add Id to the select, I have to group on it as well, and then I get a group for each record.
Thanks for your time.
Hm, how about:
select id from your_table where foreignkey = 1001
Is something wrong with using multiple queries?
If you want, you can use a subquery:
select id from your_table where foreignkey in ( select foreignkey from ( <yourQuery> ) sq);
UPDATE:
select t.id
from Table1 t
inner join
(
    select
        ForeignKey, level,
        SUM(ValueA) as ValueASum,
        SUM(ValueB) as ValueBSum
    from Table1
    where level = 1
    group by ForeignKey, Level
    having (SUM(ValueA) / SUM(ValueB) > 0.5)
) sq
    ON t.foreignkey = sq.foreignkey AND t.level = sq.level
I added where level = 1 only because your given result set is not what I get when I execute your query.
See it working live in an SQL Fiddle.
You were on the right track, but if you want it per "Level", you need to add that to your group by as well.
select
    tA2.ID,
    tA2.ForeignKey,
    tA2.Level,
    tA2.ValueA,
    tA2.ValueB
from
    ( select
          tA.ForeignKey,
          tA.Level,
          SUM(tA.ValueA) as ValueASum,
          SUM(tA.ValueB) as ValueBSum
      from
          tableA tA
      group by
          tA.ForeignKey,
          tA.Level
      having
          (SUM(tA.ValueA) / SUM(tA.ValueB) > 0.5) ) PreQualified
JOIN tableA tA2
    on PreQualified.ForeignKey = tA2.ForeignKey
    AND PreQualified.Level = tA2.Level
This would give all values that matched the qualifying condition.
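A caveat that applies to both queries here: if ValueA and ValueB are integer columns, SUM(ValueA) / SUM(ValueB) is integer division in SQL Server, so 32 / 40 evaluates to 0 and the HAVING test never passes. Multiplying by 1.0 forces decimal division:
having (SUM(tA.ValueA) * 1.0 / SUM(tA.ValueB) > 0.5)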

SQL decrement a value based on two columns until 0

I have the following datasets (just a sample):
Table1:
ID MAX AMT SORTED
1 20 0 1
1 30 0 2
1 40 0 3
1 50 0 4
2 0 0 1
2 30 0 2
2 40 0 3
2 40 0 4
...
Table2:
ID AMT
1 75
2 70
...
I must update Table1.AMT from Table2.AMT using these rules:
Table1 and Table2 are joined on ID.
Table1.AMT can't hold a larger value than MAX.
If Table2.AMT >= Table1.MAX, then Table1.AMT = Table1.MAX; then, on the next row, update Table1.AMT with Table2.AMT minus what the previous records consumed, still following the rules above.
So the expected output would be
ID MAX AMT SORTED
1 20 20 1
1 30 30 2
1 40 25 3
1 50 0 4
2 0 0 1
2 30 30 2
2 40 40 3
2 40 0 4
...
How can one achieve that?
I thought of creating a temp table with an aggregated SUM() of Table1.MAX and using that as a reference to update Table1.AMT (if SUM(MAX) < Table2.AMT, then Table1.AMT = Table1.MAX; else Table1.AMT = Table2.AMT minus the previous records' SUM(MAX)).
But can it be done without a temp table? (Sadly, I can't create functions or procedures in my work environment.)
A more efficient solution could be made using the specifics of Oracle PL/SQL.
Here is a generic solution:
select t1.ID,
       min(t1.MAX) as MAX,
       least(min(t1.MAX),
             coalesce(min(t2.AMT), 0)
               - coalesce(least(sum(t1p.MAX - t1p.AMT), min(t2.AMT)), 0)
               + min(t1.AMT)) as AMT,
       t1.SORTED
from Table1 t1
left join Table2 t2 on t2.ID = t1.ID
left join Table1 t1p on t1p.ID = t1.ID and t1p.SORTED < t1.SORTED
group by t1.ID, t1.SORTED
order by t1.ID, t1.SORTED
Explanation of calculating AMT:
AMT is the smaller of "MAX for the row" and "how much is possible":
least(min(t1.MAX), "how much is possible")
"How much is possible" is the maximum available, minus how much was given to previous rows, plus how much we already have:
coalesce(min(t2.AMT), 0) - "how much was given to previous rows" + min(t1.AMT)
"How much was given to previous rows" is the smallest of how much was required to fill those rows and how much was possible at all:
coalesce(least(sum(t1p.MAX - t1p.AMT), min(t2.AMT)), 0)
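On SQL Server 2012 or later, the same cascade can be computed with a windowed running total instead of the self-join. A sketch, assuming (ID, SORTED) uniquely identifies a Table1 row:
UPDATE t1
SET AMT = CASE
            WHEN t2.AMT - run.prev_total >= t1.[MAX] THEN t1.[MAX]         -- enough budget left: fill to MAX
            WHEN t2.AMT - run.prev_total > 0 THEN t2.AMT - run.prev_total  -- partial fill with the remainder
            ELSE 0                                                         -- budget exhausted
          END
FROM Table1 t1
JOIN Table2 t2 ON t2.ID = t1.ID
JOIN (SELECT ID, SORTED,
             SUM([MAX]) OVER (PARTITION BY ID ORDER BY SORTED
                              ROWS UNBOUNDED PRECEDING) - [MAX] AS prev_total
      FROM Table1) run
  ON run.ID = t1.ID AND run.SORTED = t1.SORTED;
prev_total is the sum of MAX over the preceding rows of the same ID, so each row takes whatever of Table2.AMT remains after the rows before it.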