To get a range of values - sql

My table, called TimeList, has two columns, SlotID (int identity) and SlotTime (varchar), and contains data like this:
SlotID  SlotTime
1       8:00AM-8:15AM
2       8:15AM-8:30AM
3       8:30AM-8:45AM
4       8:45AM-9AM
5       9AM-9:30AM
and so on up to 6:45PM-7:00PM.
If I pass two parameters, starttime as 8:00AM and endtime as 9AM, I want to retrieve the first four rows of the table above. Can anybody help me write such a stored procedure?

Would it be possible to refactor the table to look like this:
SlotID  SlotStart  SlotEnd
--------------------------
1       8:00am     8:15am
2       8:15am     8:30am
...
If you split the times into separate columns, it will be easier to query the date ranges. The query would look something like this:
DECLARE @StartTime datetime, @EndTime datetime
SET @StartTime = '8:00am'
SET @EndTime = '9:00am'

SELECT SlotID, SlotStart, SlotEnd
FROM Slots
WHERE SlotStart >= @StartTime
  AND SlotEnd <= @EndTime

Your data is not properly normalized, so it will be hard to query. A field should only contain a single value, so you should have the starting and ending time for the slot in separate fields:
SlotID  StartTime  EndTime
1       8:00AM     8:15AM
2       8:15AM     8:30AM
3       8:30AM     8:45AM
4       8:45AM     9:00AM
5       9:00AM     9:30AM
This also allows you to use a datetime type for the fields instead of a textual data type, so that you can easily query the table:
select SlotId, StartTime, EndTime
from TimeList
where StartTime >= '8:00AM' and EndTime <= '9:00AM'
With your original table design, you would have to use string operations to split the values in the field and convert them so that they become comparable. If you get a lot of data in the table, this will be a killer for performance, as the query can't make use of indexes.
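For illustration only, here is a rough sketch of what a query against the original single-column design could look like (assuming every SlotTime value contains exactly one hyphen); every row has to be split and converted before it can be compared, which is exactly why indexes can't help:
-- Sketch only: splitting the varchar SlotTime on the hyphen and converting
-- both halves to datetime so they can be compared. Values like '9AM' (no
-- minutes) may not even convert consistently, which adds to the pain.
SELECT SlotID, SlotTime
FROM TimeList
WHERE CONVERT(datetime, LEFT(SlotTime, CHARINDEX('-', SlotTime) - 1)) >= '8:00AM'
  AND CONVERT(datetime, SUBSTRING(SlotTime, CHARINDEX('-', SlotTime) + 1, 20)) <= '9:00AM'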

The problem is that your table is not normalized. Please read up on that at http://en.wikipedia.org/wiki/Database_normalization - it can greatly improve the quality of the systems you design.
In your current case, please follow Andy's advice and separate SlotStart and SlotEnd. Your time format is not good either. Use a DATETIME type (or whatever your database offers as its time type) or a numerical type like INT to store your values (e.g. 1800 instead of 6:00PM).
Then you can easily use
SELECT * FROM TimeList WHERE SlotStart >= ... AND SlotEnd <= ...
and select whatever you like from your table.
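Since the question asked for a stored procedure, here is a minimal sketch assuming the refactored schema described above (SlotStart and SlotEnd stored as DATETIME; the procedure name is made up):
-- Sketch only: assumes TimeList has been refactored to DATETIME columns
-- SlotStart and SlotEnd instead of a single varchar SlotTime column.
CREATE PROCEDURE dbo.GetSlotsInRange
    @StartTime datetime,
    @EndTime   datetime
AS
BEGIN
    SET NOCOUNT ON;
    SELECT SlotID, SlotStart, SlotEnd
    FROM dbo.TimeList
    WHERE SlotStart >= @StartTime
      AND SlotEnd <= @EndTime
    ORDER BY SlotStart;
END
Calling EXEC dbo.GetSlotsInRange '8:00AM', '9:00AM' against the sample data would then return the first four slots.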

Related

Calculating working time with overlapping events (SQL)

I have found similar questions on Stack Overflow (e.g. Finding simultaneous events in a database between times), but nothing that matches exactly what I am after as far as I can tell, so I thought it OK to add this as a new question.
I have a table that logs jobs (or "Activities"), with a start/end time for each job. I need to calculate working time (you can disregard non-working days, break times etc. as I have that covered). The complication is that an individual can work on simultaneous jobs, overlapping at different points (the assumption is equal effort on simultaneous jobs), and the working time needs to reflect that. Minute accuracy is all that is required, not to the second.
Based on other suggestions I have this query, implemented as a table-valued function. It looks at each minute an activity is running, checks whether any other activities are running in the same period for the same person, and makes its calculations based on that. It works, but is very inefficient, taking over a minute to execute. Any ideas how I can do this more efficiently?
I'm running SQL Server 2005. I have done the obvious, such as adding indexes on foreign keys, by the way.
CREATE FUNCTION [dbo].[WorkActivity_WorkTimeCalculations] (@StartDate smalldatetime, @EndDate smalldatetime)
RETURNS @retActivity TABLE
(
    ActivityID bigint PRIMARY KEY NOT NULL,
    WorkMins decimal NOT NULL
)
/********************************************************************
Summary:       Calculates the WORKING time on each activity running in a given date/time range
Remarks:       Takes into account staff working simultaneously on jobs
               (evenly distributes working time across simultaneous jobs)
Input Params:  @StartDate - the start of the period to calculate
               @EndDate - the end of the period to calculate
Output Params:
Returns:       Recordset of activities and associated working time (minutes)
********************************************************************/
AS
BEGIN
    -- any work activities still running use the overall end date as the activity's end date
    -- for the purpose of calculating simultaneous jobs running

    -- POPULATE A TABLE VARIABLE WITH EVERY MINUTE IN THE DATE RANGE
    DECLARE @Minutes TABLE (MinuteDateTime smalldatetime NOT NULL)
    ;WITH cte AS (
        SELECT @StartDate AS myDate
        UNION ALL
        SELECT DATEADD(minute, 1, myDate)
        FROM cte
        WHERE DATEADD(minute, 1, myDate) <= @EndDate
    )
    INSERT INTO @Minutes (MinuteDateTime)
    SELECT myDate FROM cte
    OPTION (MAXRECURSION 0)

    -- POPULATE A TABLE VARIABLE WITH WORKLOAD PER EMPLOYEE PER MINUTE
    DECLARE @JobsRunningByStaff TABLE (StaffID smallint NOT NULL, MinuteDateTime smalldatetime NOT NULL, JobsRunning decimal NOT NULL)
    INSERT INTO @JobsRunningByStaff (StaffID, MinuteDateTime, JobsRunning)
    SELECT wka_StaffID, m.MinuteDateTime, COUNT(DISTINCT wka_ItemID) AS JobsRunning
    FROM dbo.WorkActivities
    INNER JOIN @Minutes m ON (m.MinuteDateTime BETWEEN wka_StartTime AND DATEADD(minute, -1, ISNULL(wka_EndTime, @EndDate)))
    GROUP BY wka_StaffID, m.MinuteDateTime

    -- FINALLY MAKE THE CALCULATIONS FOR EACH ACTIVITY
    INSERT INTO @retActivity
    SELECT wka_ActivityID, SUM(1 / JobsRunning) AS WorkMins
    FROM dbo.WorkActivities
    INNER JOIN @JobsRunningByStaff jrbs ON (wka_StaffID = jrbs.StaffID AND jrbs.MinuteDateTime BETWEEN wka_StartTime AND DATEADD(minute, -1, ISNULL(wka_EndTime, @EndDate)))
    GROUP BY wka_ActivityID

    RETURN
END
Some example data (sorry for the poor formatting!)...
Source Data from WorkActivities table:
ACTIVITY ID | START TIME       | END TIME         | STAFF ID
1           | 03/03/2016 10:30 | 03/03/2016 10:50 | 1
2           | 03/03/2016 10:40 | 03/03/2016 11:00 | 1
And the desired results for a function call of SELECT * FROM dbo.WorkActivity_WorkTimeCalculations ('03-Mar-2016 10:30','03-Mar-2016 11:30'):
ACTIVITY ID | WORKMINS
1           | 25
2           | 15
So the results take into account that between 10:40 and 10:50 two jobs are running simultaneously, and calculate 5 minutes of working time on each over that period.
As suggested by posters, indexing made a significant difference - creating an index with wka_StartTime and wka_EndTime sorted it.
(sorry, couldn't see how to mark the comments made by others as an answer!)
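For reference, a sketch of the kind of index described above (the key columns come from the function; the INCLUDE list is an assumption based on the columns the function reads):
-- Sketch only: an index on the activity start/end times, which is what
-- resolved the performance problem. The included columns are an assumption.
CREATE NONCLUSTERED INDEX IX_WorkActivities_StartEnd
ON dbo.WorkActivities (wka_StartTime, wka_EndTime)
INCLUDE (wka_StaffID, wka_ItemID, wka_ActivityID);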

sql while loop duplicating results

I have a rather large and complex query to work out people in work, off sick etc. This worked great when I just wanted to see it for one day; however, I need to allow users to view multiple days.
I added startdate and enddate parameters and looked at building a SQL WHILE loop to change the start date each time and write the values into a temp table so I can pull them out at the end. This may not be the best approach.
I have got the loop working; however, it keeps duplicating the results, as in the example below:
How the data should look:
Date        Value
01/01/2014  1
02/01/2014  2
03/01/2014  3
How the data is being exported:
Date        Value
01/01/2014  1
02/01/2014  1
02/01/2014  2
03/01/2014  1
03/01/2014  2
03/01/2014  3
This is the example of the loop I found, which I have used with my own SQL code in the middle. My SQL code only uses the @startdate parameter being passed in.
Should I be using a different type of loop, or have I missed something that would stop the duplication? Any suggestions are welcome, as I'm not sure how to stop the loop doing this. It is bringing back the correct data; I just need to exclude the duplicates.
This is the structure of my code and loop, but not the full example, as the code in the middle is very long:
CREATE TABLE #TestTable1
(
    Date DATETIME,
    Value int
);
DECLARE @startdate datetime
DECLARE @enddate datetime
WHILE @startdate <= @enddate
BEGIN
    -- (my SQL code is placed here and uses the @startdate parameter)
    INSERT INTO #TestTable1 (Date, Value)
    SELECT * FROM (...) -- uses a lot of temp tables and CTEs from the code I have used
    SET @startdate = DATEADD(DAY, 1, @startdate)
END
SELECT * FROM #TestTable1
DROP TABLE #TestTable1
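For what it's worth, the pattern above (day 1 once, day 2 twice, day 3 three times) is what you typically get when the inner query is not restricted to the current @startdate, so each pass re-selects the earlier days as well. Since the real inner query is not shown, that is only an assumption; a sketch of the kind of filter that usually fixes it (table and column names are made up):
-- Sketch only: restrict the inner query to the single day held in @startdate.
-- dbo.SomeSourceTable and activity_date are hypothetical names.
INSERT INTO #TestTable1 (Date, Value)
SELECT @startdate, COUNT(*)
FROM dbo.SomeSourceTable
WHERE activity_date >= @startdate
  AND activity_date < DATEADD(DAY, 1, @startdate)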

Join to Calendar Table - 5 Business Days

So this is somewhat of a common question on here, but I haven't found an answer that really suits my specific needs. I have two tables. One has a list of ProjectClosedDates. The other is a calendar table that runs through about 2025 and has a column indicating whether the row date falls on a weekend and another indicating whether it is a holiday.
My end goal is to find out, based on the ProjectClosedDate, what date is 5 business days after that date. My idea was to use the calendar table and join it to itself, so I could then insert a column into the calendar table that was 5 business days away from the row date. Then I was going to join the Project table to that table based on ProjectClosedDate = RowDate.
If I was just going to check the actual business-date table for one record, I could use this:
SELECT actual_date FROM
(
    SELECT actual_date, ROW_NUMBER() OVER (ORDER BY actual_date) AS Row
    FROM DateTable
    WHERE is_holiday = 0 AND actual_date > '2013-12-01'
) X
WHERE Row = 65
from here:
sql working days holidays
However, this is just one date, and I need a column of dates based off of each row. Any thoughts on the best way to do this? I'm using SQL Server Management Studio.
Completely untested and not thought through:
If the concept of "business days" is common and important in your system, you could add a column "Business Day Sequence" to your table. The column would be a simple unique sequence, incremented by one for every business day and null for every day not counting as a business day.
The data would look something like this:
Date        BDAY_SEQ
==========  ========
2014-03-03  1
2014-03-04  2
2014-03-05  3
2014-03-06  4
2014-03-07  5
2014-03-08
2014-03-09
2014-03-10  6
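A hedged sketch of one way such a column could be populated (actual_date and is_holiday come from the query in the question; is_weekend and bday_seq are assumed names):
-- Sketch only: add a nullable bday_seq column and number the business days
-- in date order; non-business days are left NULL.
ALTER TABLE DateTable ADD bday_seq int NULL;
GO
UPDATE d
SET d.bday_seq = s.seq
FROM DateTable d
JOIN (
    SELECT actual_date,
           ROW_NUMBER() OVER (ORDER BY actual_date) AS seq
    FROM DateTable
    WHERE is_weekend = 0 AND is_holiday = 0
) s ON s.actual_date = d.actual_date;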
Now it's a simple task to find the Nth business day from any date.
You simply do a self-join with the calendar table, adding the offset in the join condition:
select a.actual_date
     , b.actual_date as nth_business_day
from DateTable a
join DateTable b on (
    b.bday_seq = a.bday_seq + 5
);
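To connect this back to the original goal, the projects table can then be joined through the calendar in the same way (the Projects name and ProjectID column are assumptions; ProjectClosedDate comes from the question):
-- Sketch only: the date five business days after each ProjectClosedDate.
-- If a close date falls on a weekend or holiday its bday_seq is NULL and
-- the row drops out; handling that case is left out of this sketch.
SELECT p.ProjectID,
       p.ProjectClosedDate,
       b.actual_date AS fifth_business_day_after_close
FROM Projects p
JOIN DateTable a ON a.actual_date = p.ProjectClosedDate
JOIN DateTable b ON b.bday_seq = a.bday_seq + 5;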

SQL Query data issues

I have the following data:
ID     Date                 interval  interval_date            tot_activity  non_activity
22190  2011-09-27 00:00:00  1000      2011-09-27 10:00:00.000  265           15
I have another table with this data:
Date                   ID          Start                 End                   sched_non_activity  non_activity
10/3/2011 12:00:00 AM  HBLV-22267  10/3/2011 2:02:00 PM  10/3/2011 2:11:00 PM  540
Now, I would like the second table's non_activity field to hold the value from the first table. Specifically, I need to capture tot_activity - non_activity where the intervals (in 15-minute increments) from the first table fall in the same time frame as the start and end of the second table.
I have the following so far:
SELECT t1.ID, t1.[Date], t1.interval, t1.interval_date, t1.tot_activity, t1.non_activity,
       t1.tot_activity - t1.non_activity AS non_activity
FROM table1 AS t1
INNER JOIN LIST AS L ON t1.ID = L.ID
INNER JOIN table2 AS t2 ON t1.[Date] = t2.[Date] AND L.ID = RIGHT(t2.ID, 5)
WHERE t1.interval_date >= t2.[Start] AND t1.interval_date < t2.[End]
ORDER BY t1.ID, t1.interval_date
With this, I can already see that I will be unable to capture the case where a start time in table 2 is at 15:50, which means I need to capture the 15:45 interval.
Is there any way of doing this purely with queries, or should I be using variables and doing the check per interval? Any help at all would be greatly appreciated.
I think you are asking too much from a query here.
What I would do is treat the two tables as lists ordered by timestamps and solve the problem programmatically (i.e. not with a single query).
For example, create a function that traverses the first table in 15-minute increments and finds the best match in the second table (I am guessing this is what you are trying to do). Implement your function to return the same result set as your query above, or store the results in a temporary table and select from that. T-SQL is your friend :)
I'm having a tough time understanding your issue, but you might have better luck with the DATEDIFF function:
DATEDIFF(SECOND, t2.[Start], t1.interval_date) >= 0 AND DATEDIFF(SECOND, t1.interval_date, t2.[End]) >= 0
I apologize if I'm not catching your drift. If I'm missing something, could you try to clarify a little bit?
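On the 15:50-to-15:45 point raised in the question, a common trick is to floor the start time to its 15-minute boundary before comparing it with interval_date. A sketch, assuming Start is a datetime column and reusing the t2 alias from the query above:
-- Sketch only: rounds a datetime down to the nearest 15-minute boundary,
-- so 15:50 becomes 15:45. The constant 0 is the base date 1900-01-01.
SELECT t2.ID,
       DATEADD(minute, (DATEDIFF(minute, 0, t2.[Start]) / 15) * 15, 0) AS interval_start
FROM table2 AS t2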

How do I analyse time periods between records in SQL data without cursors?

The root problem: I have an application which has been running for several months now. Users have been reporting that it's been slowing down over time (so in May it was quicker than it is now). I need to get some evidence to support or refute this claim. I'm not interested in precise numbers (so I don't need to know that a login took 10 seconds), I'm interested in trends - that something which used to take x seconds now takes of the order of y seconds.
The data I have is an audit table which stores a single row each time the user carries out any activity - it includes a primary key, the user id, a date time stamp and an activity code:
create table AuditData (
AuditRecordID int identity(1,1) not null,
DateTimeStamp datetime not null,
DateOnly datetime null,
UserID nvarchar(10) not null,
ActivityCode int not null)
(Notes: DateOnly (datetime) is the DateTimeStamp with the time stripped off, to make GROUP BY for daily analysis easier - it's effectively duplicate data to make querying faster.
Also, for the sake of ease, you can assume that the ID is assigned in date/time order, that is, 1 will always be before 2, which will always be before 3 - if this isn't true I can make it so.)
ActivityCode is an integer identifying the activity which took place, for instance 1 might be user logged in, 2 might be user data returned, 3 might be search results returned and so on.
Sample data for those who like that sort of thing...:
1, 01/01/2009 12:39, 01/01/2009, P123, 1
2, 01/01/2009 12:40, 01/01/2009, P123, 2
3, 01/01/2009 12:47, 01/01/2009, P123, 3
4, 01/01/2009 13:01, 01/01/2009, P123, 3
User data is returned (Activity Code 2) immediately after login (Activity Code 1), so this can be used as a rough benchmark of how long the login takes (as I said, I'm interested in trends, so as long as I'm measuring the same thing for May as for July it doesn't matter so much if this isn't the whole login process - it takes in enough of it to give a rough idea).
(Note: user data can also be returned under other circumstances, so it's not a one-to-one mapping.)
So what I'm looking to do is select the average time between login (say Activity Code 1) and the first instance after that, for that user on that day, of user data being returned (say Activity Code 2).
I can do this by going through the table with a cursor, getting each login instance and then doing a select to get the first user-data return following it for that user on that day, but that's obviously not optimal and is slow as hell.
My question is (finally) - is there a "proper" SQL way of doing this using self-joins or similar, without using cursors or some similar procedural approach? I can create views and whatever to my heart's content; it doesn't have to be a single select.
I can hack something together, but I'd like to make the analysis I'm doing a standard product function, so I would like it to be right.
SELECT TheDay, AVG(TimeTaken) AvgTimeTaken
FROM (
SELECT
CONVERT(DATE, logins.DateTimeStamp) TheDay
, DATEDIFF(SS, logins.DateTimeStamp,
(SELECT TOP 1 DateTimeStamp
FROM AuditData userinfo
WHERE userinfo.UserID = logins.UserID
and userinfo.ActivityCode = 2
and userinfo.DateTimeStamp > logins.DateTimeStamp
ORDER BY userinfo.DateTimeStamp)
) TimeTaken
FROM AuditData logins
WHERE
logins.ActivityCode = 1
) LogInTimes
GROUP BY TheDay
This might be dead slow in the real world, though.
In Oracle this would be a cinch, because of analytic functions. In this case, LAG() makes it easy to find the matching pairs of activity codes 1 and 2 and also to calculate the trend. As you can see, things got worse on 2nd JAN and improved quite a bit on the 3rd (I'm working in seconds rather than minutes).
select DateOnly
     , elapsed_time
     , elapsed_time - lag(elapsed_time) over (order by DateOnly) as trend
from
(
  select DateOnly
       , avg(databack_time - prior_login_time) as elapsed_time
  from
  ( select DateOnly
         , databack_time
         , ActivityCode
         , lag(login_time) over (order by DateOnly, UserID, AuditRecordID, ActivityCode) as prior_login_time
    from
    (
      select a1.AuditRecordID
           , a1.DateOnly
           , a1.UserID
           , a1.ActivityCode
           , to_number(to_char(a1.DateTimeStamp, 'SSSSS')) as login_time
           , 0 as databack_time
      from AuditData a1
      where a1.ActivityCode = 1
      union all
      select a2.AuditRecordID
           , a2.DateOnly
           , a2.UserID
           , a2.ActivityCode
           , 0 as login_time
           , to_number(to_char(a2.DateTimeStamp, 'SSSSS')) as databack_time
      from AuditData a2
      where a2.ActivityCode = 2
    )
  )
  where ActivityCode = 2
  group by DateOnly
);

DATEONLY   ELAPSED_TIME      TREND
---------  ------------  ---------
01-JAN-09           120
02-JAN-09           600        480
03-JAN-09           150       -450
Like I said in my comment, I guess you're working in MS SQL Server. I don't know whether that product has any equivalent of LAG().
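For what it's worth, SQL Server 2012 and later do have LAG()/LEAD(), so the idea above translates fairly directly to T-SQL. A rough, untested sketch against the AuditData table:
-- Sketch only: pairs each code-2 (data returned) row with the previous
-- code-1/code-2 row for the same user and day, then averages per day.
SELECT DateOnly,
       AVG(DATEDIFF(SECOND, prior_time, DateTimeStamp)) AS elapsed_seconds
FROM (
    SELECT DateOnly, ActivityCode, DateTimeStamp,
           LAG(DateTimeStamp) OVER (PARTITION BY UserID, DateOnly
                                    ORDER BY AuditRecordID) AS prior_time
    FROM AuditData
    WHERE ActivityCode IN (1, 2)
) x
WHERE ActivityCode = 2
GROUP BY DateOnly
ORDER BY DateOnly;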
If the assumptions are that:
Users will perform various tasks in no mandated order, and
The difference between any two activities reflects the time it takes for the first of those two activities to execute,
then why not create a table with two timestamps: the first column containing the activity start time, the second column containing the next activity's start time. Thus the difference between these two will always be the total time of the first activity. So for the logout activity, you would just have NULL for the second column.
So it would be kind of weird and interesting: for each activity (other than logging in and logging out), the timestamp would be recorded in two different rows - once for the previous activity (as the time "completed") and again in a new row (as the time started). You would end up with a Jacob's ladder of sorts, but finding the data you are after would be much simpler.
In fact, to get really wacky, you could have each row hold the time the user started activity A along with its activity code, and the time the user started activity B along with its timestamp (which, as mentioned above, gets put down again for the following row). This way each row tells you the exact difference in time for any two activities.
Otherwise, you're stuck with a query that says something like
SELECT TIME_IN_SEC(row2-timestamp) - TIME_IN_SEC(row1-timestamp)
which would be pretty slow, as you have already suggested. By swallowing the redundancy, you end up just querying the difference between the two columns. You probably would have less need of knowing the user info as well, since you'd know that any row shows both activity codes, thus you can just query the average for all users on any given day and compare it to the next day (unless you are trying to find out which users are having the problem as well).
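A hedged sketch of the restructured logging table this answer describes (all table and column names here are made up):
-- Sketch only: each row carries the start of an activity and the start of the
-- next one, so the duration is a plain column difference.
CREATE TABLE ActivityLog (
    ActivityLogID int identity(1,1) not null,
    UserID        nvarchar(10) not null,
    ActivityCode  int not null,
    StartedAt     datetime not null,
    NextStartedAt datetime null   -- NULL for the final activity (e.g. logout)
);

SELECT ActivityCode,
       AVG(DATEDIFF(SECOND, StartedAt, NextStartedAt)) AS avg_seconds
FROM ActivityLog
GROUP BY ActivityCode;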
This is a faster way to find out: in one row you will have both the current row's datetime value and the previous row's, and after that you can use DATEDIFF(datepart, startdate, enddate). I use @DammyVariable and DammyField because, as I remember, there is a problem if @variable = Field is not first in the UPDATE statement.
SELECT *, CAST(NULL AS DateTime) LastRowDateTime, CAST(NULL AS INT) DammyField INTO #T FROM AuditData
GO
CREATE CLUSTERED INDEX IX_T ON #T (AuditRecordID)
GO
DECLARE @LastRowDateTime DateTime
DECLARE @DammyVariable INT
SET @LastRowDateTime = NULL
SET @DammyVariable = 1
UPDATE #T SET
    @DammyVariable = DammyField = @DammyVariable
    , LastRowDateTime = @LastRowDateTime
    , @LastRowDateTime = DateTimeStamp
OPTION (MAXDOP 1)
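Once LastRowDateTime is populated, the DATEDIFF mentioned above is straightforward. A follow-up sketch (not part of the original answer):
-- Sketch only: elapsed seconds between each audit row and the row before it
-- (in AuditRecordID order, as processed by the update above).
SELECT AuditRecordID, UserID, ActivityCode, DateTimeStamp,
       DATEDIFF(SECOND, LastRowDateTime, DateTimeStamp) AS seconds_since_prev_row
FROM #T
ORDER BY AuditRecordID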