Replacing a self join in SQL Server 2012 - sql-server-2012

I have a scenario on following tables:
SampleData (holds hourly periodic values of a product):
TST_DATE CLK_LITRE_WT
---------------------------------
09/15/2019 17:15 1280 <-- current time value
09/15/2019 16:15 1300
09/15/2019 15:15 1190
09/15/2019 14:15 1200
09/15/2019 13:15 1200
CLK_LITRE_WT is one product name out of 13 products, so there are 14 columns in total, but that doesn't matter here. Note that the number of rows/hours to return is supplied by the user.
SettingMaster:
UserCode LastRunDate
------------------------------------
aa 2019-09-15 15:18:01.350
LastRunDate is simply the last time the user read data from the DB. So I need a query that produces the following expected result:
TST_DATE CLK_LITRE_WT_Real CLK_LITRE_WT_Seen
-----------------------------------------------------------
09/15/2019 17:15 1280 1190 <-- value of previous live record.
09/15/2019 16:15 1300 1190 <-- value of previous live record.
09/15/2019 15:15 1190 1190 <-- Last seen record by User.
09/15/2019 14:15 1200 1200
09/15/2019 13:15 1200 1200
I tried a self join and LEAD/LAG (I will share the query soon), but I did not achieve the expected result. So I need your help with how to get the expected result.
Edit 1:
I have recreated the LEAD/LAG query I tried yesterday.
select TST_DATE,
       CLK_LITRE_WT AS CLK_LITRE_WT_Real,
       lead(CLK_LITRE_WT) over (order by convert(datetime, sd.TST_DATE + ':15.00') desc) AS CLK_LITRE_WT_Seen
from SettingMaster sm
left join SampleData sd
    on convert(datetime, sd.TST_DATE + ':15.00') between dateadd(hour, -4, '2019-09-15 17:20:02.733') and '2019-09-15 17:20:02.733'
where sm.UserCode = 'aa'
order by convert(datetime, sd.TST_DATE + ':15.00') desc
Here yesterday's date and time was passed as a static value, because the given data is from yesterday. The result is:
TST_DATE CLK_LITRE_WT_Real CLK_LITRE_WT_Seen
09/15/2019 17:15 1280 1300
09/15/2019 16:15 1300 1190 <-- value of previous live record
09/15/2019 15:15 1190 1200 <-- here should be '1190' as Real
09/15/2019 14:15 1200 NULL
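For reference, LEAD/LAG with a fixed offset can only look a fixed number of rows away, so every row newer than LastRunDate cannot pick up the same "last seen" value that way. A minimal sketch of an alternative, assuming TST_DATE is a varchar converted the same way as in the attempt above, and assuming the "last seen" record is simply the newest SampleData row at or before SettingMaster.LastRunDate (names are taken from the question; untested):
SELECT sd.TST_DATE,
       sd.CLK_LITRE_WT AS CLK_LITRE_WT_Real,
       CASE WHEN CONVERT(datetime, sd.TST_DATE + ':00') <= sm.LastRunDate
            THEN sd.CLK_LITRE_WT       -- rows the user has already seen keep their real value
            ELSE seen.CLK_LITRE_WT     -- newer rows repeat the last seen value
       END AS CLK_LITRE_WT_Seen
FROM SettingMaster AS sm
CROSS JOIN SampleData AS sd            -- sm is a single row once filtered by UserCode
OUTER APPLY (SELECT TOP (1) s2.CLK_LITRE_WT     -- newest row at or before LastRunDate
             FROM SampleData AS s2
             WHERE CONVERT(datetime, s2.TST_DATE + ':00') <= sm.LastRunDate
             ORDER BY CONVERT(datetime, s2.TST_DATE + ':00') DESC) AS seen
WHERE sm.UserCode = 'aa'
ORDER BY CONVERT(datetime, sd.TST_DATE + ':00') DESC
With the sample data and LastRunDate = 2019-09-15 15:18, the 17:15 and 16:15 rows would show 1190 (the 15:15 value) and the older rows keep their real values, which matches the expected result.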

Related

Access Query: Match Two FKs, Select Record with Max (Latest) Time, Return 3rd Field From Record

I have an Access table (Logs) like this:
pk  modID  relID  DateTime        TxType
------------------------------------------
1   1234   22.3   10/1/22 04:00   1
2   1234   23.1   10/10/22 06:00  1
3   1234   23.1   10/11/22 07:00  2
4   1234   23.1   10/12/22 08:00  3
5   4321   22.3   10/2/22 06:00   7
6   4321   23.1   10/10/22 06:00  1
7   4321   23.1   10/11/22 07:30  3
Trying to write a query as part of a function that searches this table:
for all records matching a given modID and relID (e.g. 1234 and 23.1),
picks the most recent one (the MAX of DateTime),
returns the TxType for that record.
However, I'm a bit new to Access and its query structure is vexing me. I landed on the query below, but because I have to include a Total/Aggregate function for TxType I had to choose either Group By (not what I want) or Last (closer, but returns junk results). The SQL for my query is currently:
SELECT Last(Logs.TxType) AS LastOfTxType, Max(Logs.DateTime) AS MaxOfDateTime
FROM Logs
GROUP BY Logs.dmID, Logs.relID
HAVING (((Logs.dmID)=[EnterdmID]) AND ((Logs.relID)=[EnterrelID]));
It returns the TxType field when I pass it the right parameters, but not from the correct record. I would like to be rid of the Last() bit, but if I remove it Access complains that TxType isn't part of an aggregate function.
Can anyone point me in the right direction here?
Have you tried
SELECT TOP 1 TxType
FROM Logs
WHERE (((Logs.dmID)=[EnterdmID]) AND ((Logs.relID)=[EnterrelID]))
ORDER BY DateTime DESC;
That will give you the latest single data row based on your DateTime field and other criteria.
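One caveat: Access evaluates TOP 1 after the ORDER BY but keeps ties on the ordering value, so two rows with the same DateTime would both come back. A tie-safe sketch, reusing the dmID/relID parameters above and assuming pk is an acceptable tie-breaker:
SELECT TOP 1 Logs.TxType, Logs.DateTime
FROM Logs
WHERE ((Logs.dmID)=[EnterdmID]) AND ((Logs.relID)=[EnterrelID])
ORDER BY Logs.DateTime DESC, Logs.pk DESC;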

T-SQL checking if a date in 1 table is between 2 dates in another table

I have a main table CleanData and a reference table TIME_TABLE_VERSION ttv. In CleanData there is no primary key but each row has a date column called CALENDAR_DATE1.
I want to return all rows from CleanData where CleanData.CALENDAR_DATE1 is between ttv.ACTIVATION_DATE and ttv.DEACTIVATION_DATE.
What's tricky is the timing of when the ttv table gets updated. If you look below you'll see that the last record in ttv has a deactivation date of July 16, 2022. However, this table will get updated in the future by truncating the previous row's deactivation date and inserting a new row. So, for example, the next time ttv gets updated will be around June 30th: there will be a new row for TIME_TABLE_VERSION_ID = 191 with an activation date of July 3, 2022, and the deactivation date for TIME_TABLE_VERSION_ID = 190 will get truncated from July 16, 2022 to July 2, 2022 upon update. Note that this ttv table update happens in advance, when CleanData.CALENDAR_DATE1 is still less than the ttv.ACTIVATION_DATE of TIME_TABLE_VERSION_ID = 191. If I simply select the MAX TIME_TABLE_VERSION_ID then there will be a few days of missing data returned from CleanData between June 30th and July 3rd.
I'm trying to write a query that handles the case where CleanData.CALENDAR_DATE1 is still less than the most recent ttv.ACTIVATION_DATE: it should return the CleanData rows whose CALENDAR_DATE1 falls within the range of TIME_TABLE_VERSION_ID - 1, until CleanData.CALENDAR_DATE1 reaches the ACTIVATION_DATE of the most recent TIME_TABLE_VERSION_ID.
Any ideas how to fix my query?
SELECT
CleanData.*
FROM
TIME_TABLE_VERSION AS ttv
INNER JOIN
CleanData ON CleanData.CALENDAR_DATE1 BETWEEN ttv.ACTIVATION_DATE AND ttv.DEACTIVATION_DATE
AND (CASE
WHEN (CleanData.CALENDAR_DATE1 < (SELECT ttv1.ACTIVATION_DATE FROM TIME_TABLE_VERSION ttv1 WHERE ttv.TIME_TABLE_VERSION_ID = ttv1.TIME_TABLE_VERSION_ID))
THEN (ttv.TIME_TABLE_VERSION_ID = (SELECT MAX (ttv1.TIME_TABLE_VERSION_ID)-1 FROM TIME_TABLE_VERSION ttv1))
ELSE (ttv.TIME_TABLE_VERSION_ID = (SELECT MAX(ttv1.TIME_TABLE_VERSION_ID) FROM TIME_TABLE_VERSION ttv1)) END)
ORDER BY
CleanData.CALENDAR_DATE1
Here's what table ttv looks like:
TIME_TABLE_VERSION_ID  TIME_TABLE_VERSION_NAME  ACTIVATION_DATE          DEACTIVATION_DATE
--------------------------------------------------------------------------------------------
184                    Feb22_01                 2022-02-06 00:00:00.000  2022-02-26 23:59:59.000
185                    Feb22_02                 2022-02-27 00:00:00.000  2022-03-19 23:59:59.000
186                    Feb22_03                 2022-03-20 00:00:00.000  2022-04-09 23:59:59.000
187                    Feb22_04                 2022-04-10 00:00:00.000  2022-04-23 23:59:59.000
188                    Apr22_01                 2022-04-24 00:00:00.000  2022-05-14 23:59:59.000
189                    Apr22_02                 2022-05-15 00:00:00.000  2022-05-28 23:59:59.000
190                    Apr22_03                 2022-05-29 00:00:00.000  2022-07-16 23:59:59.000
Note there is no TIME_TABLE_VERSION_ID or TIME_TABLE_VERSION_NAME in CleanData to join to. I only have the CALENDAR_DATE1 in CleanData and the ACTIVATION_DATE and DEACTIVATION_DATE in ttv.
Note also that I have no ability to change the structure of either table, I have to work with what's there in both.
Thanks so much for any help you can offer!
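Under one reading of the requirement (per row: use the newest version once CALENDAR_DATE1 has reached its ACTIVATION_DATE, otherwise fall back to the second-newest version), a sketch using the names from the question would be:
WITH ranked AS (
    SELECT ttv.*,
           ROW_NUMBER() OVER (ORDER BY ttv.TIME_TABLE_VERSION_ID DESC) AS rn
    FROM TIME_TABLE_VERSION AS ttv
)
SELECT CleanData.*
FROM CleanData
INNER JOIN ranked AS r
    ON CleanData.CALENDAR_DATE1 BETWEEN r.ACTIVATION_DATE AND r.DEACTIVATION_DATE
WHERE r.rn = 1                              -- newest version's window
   OR (r.rn = 2                             -- fall back to the previous window
       AND CleanData.CALENDAR_DATE1 < (SELECT ACTIVATION_DATE FROM ranked WHERE rn = 1))
ORDER BY CleanData.CALENDAR_DATE1
Since the previous window always closes before the newest one opens, no row can match both branches. If rows from the previous window should stop being returned once data dated on or after the newest ACTIVATION_DATE exists, the rn = 2 branch would need an extra condition on MAX(CALENDAR_DATE1); that part of the requirement isn't fully clear from the question.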

SQL aggregate based on date range

I have the following data in an MSSQL table. The requirement is to group the records of a user that fall within the same start/end time duration, and sum up the Rate field.
Is there any way to achieve this via query on-the-fly?
Row Data
-----------------------------------------------------
RawId Start Time End Time User Rate
1 1/9/2021 14:29 1/9/2021 14:40 User-1 10
2 1/9/2021 10:37 1/9/2021 14:00 User-2 20
3 1/9/2021 14:03 1/9/2021 14:59 User-2 30
4 1/9/2021 8:51 1/9/2021 14:39 User-1 40
5 1/9/2021 14:02 1/9/2021 14:59 User-2 50
Expected Output
-----------------------------------------------------
ProID Start Time End Time User RateTotal
xx1 1/9/2021 14:29 1/9/2021 14:40 User-1 50
xx2 1/9/2021 14:02 1/9/2021 14:59 User-2 80
xx3 1/9/2021 10:37 1/9/2021 14:00 User-2 20
Business logic
ProID xx1: RawID 1 & 4, belong to User-1 and RawID 1 start & end time (14:29-14:40) falls within RawID 4 (08:51-14:39). In this case rates have to be added up and show only one record.
ProID xx2: RawID 3 & 5, belong to User-2 and RawID 3 start & end time (14:03-14:59) falls within RawID 5 (14:02-14:59). In this case rates have to be added up and show only one record.
ProID xx3: RawID 2 also belongs to User-2, but its start & end time (10:37-14:00) doesn't fall within any other User-2 record. Hence this is considered a separate row.
with cte as
(
    select Rate as Rate,
           dateadd(hour, datediff(hour, 0, StartTime), 0) as StartTime,
           dateadd(hour, datediff(hour, 0, EndTime), 0) as EndTime
    from Row_Data
)
select sum(Rate) as Rate, StartTime, EndTime
from cte
group by StartTime, EndTime
order by StartTime desc
Something like (I'm assuming you made a typo in the sample expected data and the starts/ends are meant to be the 0th minutes)
SELECT SUM(Rate) AS RateTotal,
       Trunc_Start,
       Trunc_End
FROM (
    SELECT dateadd(hour, datediff(hour, 0, Start_Time), 0) AS Trunc_Start,
           dateadd(hour, datediff(hour, 0, End_Time) + 1, 0) AS Trunc_End,
           Rate
    FROM SOME_TABLE
) AS t
GROUP BY Trunc_Start,
         Trunc_End
select sum(Rate), StartTime, EndTime from Row_Data
group by StartTime, EndTime
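If the intent is closer to "merge overlapping time ranges per user and total the rates", a gaps-and-islands sketch is another option. It assumes a table Row_Data with columns RawId, StartTime, EndTime, [User] and Rate (following the CTE attempt above) and a SQL Server version that supports window frames (2012+):
WITH flagged AS (
    SELECT rd.*,
           CASE WHEN rd.StartTime <= MAX(rd.EndTime) OVER (PARTITION BY rd.[User]
                                                           ORDER BY rd.StartTime, rd.RawId
                                                           ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
                THEN 0 ELSE 1 END AS is_new_group      -- 1 = this row starts a new island
    FROM Row_Data AS rd
), grouped AS (
    SELECT f.*,
           SUM(f.is_new_group) OVER (PARTITION BY f.[User]
                                     ORDER BY f.StartTime, f.RawId
                                     ROWS UNBOUNDED PRECEDING) AS grp
    FROM flagged AS f
)
SELECT [User],
       MIN(StartTime) AS StartTime,
       MAX(EndTime)   AS EndTime,
       SUM(Rate)      AS RateTotal
FROM grouped
GROUP BY [User], grp
ORDER BY [User], MIN(StartTime)
With the sample rows this returns 10:37-14:00 / 20 and 14:02-14:59 / 80 for User-2, and a single 08:51-14:40 / 50 group for User-1 (the expected output shows 14:29-14:40 for that group, so exactly which start/end to report would still need clarifying).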

SQL query Design

Table 1 : Employee
EmpId CreatedAt
100 2015-11-09 07:21:02
200 2017-01-24 18:24:01
300 2016-08-20 06:55:35
Table 2 : Account
AccId EmpID Currency CreatedAt
9000 100 USD 2017-04-20 19:40:55
9001 200 USD 2017-04-20 19:40:55
9002 100 EUR 2017-05-20 19:40:55
9003 200 USD 2017-04-20 19:40:55
9004 100 USD 2017-04-20 19:40:55
Table 3 : Transaction
TrnsId AccId Amount CreatedAt
10 9000 3000 2017-04-25 19:40:55
11 9001 500 2017-05-25 19:40:55
12 9000 -200 2017-05-30 19:40:55
13 9000 -500 2017-06-11 19:40:55
Create a table that provides the day end balance (at midnight) for each account since it was first created, i.e. there should be a single entry in the table for each day an account exists, and its balance at the end of that day.
Can anybody help me write a query for the above scenario?
Thanks.
Since you haven't posted any attempt to solve this yourself, I will assume you need an initial nudge in the right direction. Hopefully this will help.
Outer-Join your Account table to a table (create it if you don't have one) that has one row for every day in the calendar (this is often referred to as a "tally table"). Filter out the days in the calendar that were before the date the account was created.
That will produce a result of one row for every Account-Date combination, which is all the rows you want in your result.
From there it's just a matter of adding the column with the End-Of-Day balance for that Account on that Date. There are lots of ways to do this. Google "SQL Running Total" and pick a method you like.
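A minimal sketch of that shape, assuming a calendar table named Calendar with one row per date in a column [Date] (the tally/calendar table mentioned above), the Account and Transaction tables from the question, and a triangular join for the running balance:
SELECT a.AccId,
       c.[Date],
       COALESCE(SUM(t.Amount), 0) AS EndOfDayBalance    -- sum of everything up to and including that day
FROM Account AS a
INNER JOIN Calendar AS c
    ON c.[Date] >= CAST(a.CreatedAt AS date)            -- no rows before the account existed
   AND c.[Date] <= CAST(GETDATE() AS date)              -- assumption: report up to today
LEFT JOIN [Transaction] AS t
    ON t.AccId = a.AccId
   AND CAST(t.CreatedAt AS date) <= c.[Date]
GROUP BY a.AccId, c.[Date]
ORDER BY a.AccId, c.[Date]
A windowed SUM() OVER (PARTITION BY AccId ORDER BY [Date]) over per-day totals scales better on large data, but the triangular join is the easiest version to read.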

how to use group by case in powerbuilder 10.5

date_entry time_start time_finished idle_code qty_good
8/8/2013 13:00 13:30 6 10
8/8/2013 13:30 15:20 0 20
8/8/2013 15:20 15:30 6 5
8/8/2013 15:30 16:25 0 10
8/8/2013 16:25 16:40 7 0
8/8/2013 16:40 17:25 0 40
8/8/2013 17:25 17:40 3 10
8/8/2013 17:40 24:00 1
8/8/2013 24:00 00:00 1
8/8/2013 00:00 00:30 1
Idle Time Legend:
0 Production
1 Adjustment/Mold
2 Machine
3 Quality Matter
4 Supply Matter
5 Mold Change
6 Replacer
7 Others
----------Result--------------------------------------
idle_code    total mins
1 - 410:00 mins
2 - 00:00
3 - 15:00
4 - 00:00
5 - 00:00
6 - 40:00
7 - 15:00
0 - 210:00
First question: how do I group by idle_code and add up the total mins?
---------other report----------------------------------
production efficiency report
idle_code total mins
1 410:00 mins
2 00:00 mins
3 15:00 mins
4 00:00 mins
5 00:00 mins
7 15:00 mins
total idle time = 440:00 mins (formula: sum(total mins of idle 1,2,3,4,5,7))
idle rate = 63.77% (formula: (total idle time / total actual production time)* 100 )
total operation time = 250:00 mins (formula: sum(total mins of idle_code '0' and idle_code '6'))
machine efficiency = 36.23% (formula: (total operation time / total actual production time) * 100)
total actual production time = 690:00 mins (formula: total idle time + total operation time)
This is easy to compute in PowerBuilder using a computed field, but my problem is how to group them by idle_code and get their total mins.
You could do this as a single SQL statement, summing the difference between the start and finish times, and grouping on idle_code. (Don't forget to make this a Left Outer Join from the Idle_Code table to the Production data table). This would save you from retrieving all the detail data to the client, and doing the grouping and summing there.
If you need to do this as a computed column, and you've retrieved all the detail data, then create a group on idle_code, and create a computed column that sums (time_finished - time_start for group 1). The SecondsAfter() function can do this, if those columns are datetimes and not just time values.
How are you storing your time_start and time_finished columns? Are those datetime datatypes? Because that makes the calculations much easier. If they're just times, you'll have problems calculating the duration when those times cross midnight into the next day.
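For the SQL-side route, a rough sketch of that single statement, assuming an Idle_Code lookup table (idle_code, description), a detail table named production_data, datetime columns for time_start/time_finished, and a SQL Server/ASE-style DATEDIFF (adjust the date-difference function to your DBMS):
SELECT ic.idle_code,
       COALESCE(SUM(DATEDIFF(minute, pd.time_start, pd.time_finished)), 0) AS total_mins
FROM Idle_Code AS ic
LEFT OUTER JOIN production_data AS pd
    ON pd.idle_code = ic.idle_code      -- left outer join keeps idle codes with no rows (00:00)
GROUP BY ic.idle_code
ORDER BY ic.idle_code
Retrieving that result set into a DataWindow gives the idle_code / total mins rows directly, and the efficiency percentages can stay as computed fields on top of it.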