I have a table in SQL Server which has 2 columns, StartTime and EndTime. The datatype of both columns is time(7). So when I view data in the table it can look like this:
08:33:00.0000000
or
19:33:00.0000000
I want to return all the rows in the table where StartTime and EndTime conflict with another row.
Example table TimeTable
RowID StartTime EndTime
1 08:33:00.0000000 19:33:00.0000000
2 10:34:00.0000000 15:32:00.0000000
3 03:00:00.0000000 05:00:00.0000000
Type of query I am trying to do:
SELECT * FROM TimeTable
WHERE RowID = 1
AND
TimeTable.StartTime AND EndTime
Falls in range
(SELECT * FROM TimeTable WHERE RowID <> 1)
Expected result:
2 10:34:00.0000000 15:32:00.0000000
You can use logic like this to find rows that overlap another row by at least a fraction of a second:
select tt.*
from timetable tt
where exists (select 1
              from timetable tt2
              where tt2.rowid <> tt.rowid and
                    tt2.endtime > tt.starttime and
                    tt2.starttime < tt.endtime
             );
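For reference, here is a quick way to try this against the sample data from the question (table and column names as shown above); the setup script is illustrative only:
CREATE TABLE TimeTable (RowID int, StartTime time(7), EndTime time(7));

INSERT INTO TimeTable (RowID, StartTime, EndTime) VALUES
    (1, '08:33:00', '19:33:00'),
    (2, '10:34:00', '15:32:00'),
    (3, '03:00:00', '05:00:00');

-- Rows 1 and 2 overlap each other, so the EXISTS query above returns both;
-- row 3 overlaps nothing and is filtered out.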
How do I group each id with the null rows located directly below it, and then get the sum of the time?
ID time
1 time1
null time1
null time1
null time1
2 time1
null time1
null time1
3 time1
null time1
null time1
Result wanted
ID time
1 sumTime
2 sumTime
3 sumTime
SQL tables represent unordered sets. In order for you to do what you want, you need a column that specifies the ordering. Once you have that, you can identify the groups by counting up the cumulative number of non-null values in id and aggregating:
select max(id) as id, sum(time)
from (select t.*,
             count(id) over (order by <ordering col>) as grp
      from t
     ) t
group by grp;
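As a rough illustration of how the cumulative count forms the groups, here is a small self-contained example (the values and the ordering column seq are made up, not taken from the question):
SELECT seq, id, [time],
       COUNT(id) OVER (ORDER BY seq) AS grp   -- running count of non-null ids seen so far
FROM (VALUES
        (1, 1,    10),
        (2, NULL, 10),
        (3, NULL, 10),
        (4, 2,    10),
        (5, NULL, 10)
     ) t(seq, id, [time]);
-- grp comes out as 1, 1, 1, 2, 2: each id row and the NULL rows below it share a grp
-- value, so grouping by grp with MAX(id) and SUM([time]) yields one row per id.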
If you do not have an ordering column, your question does not make sense, because the table is unordered.
I agree with Gordon Linoff that what you are asking falls outside the rules of SQL Server because tables in SQL Server are unordered sets.
However, assuming that running the command
SELECT * FROM YourTimeTable
returns the data in the order and structure you showed:
ID time
1 time1
null time1
null time1
null time1
2 time1
null time1
null time1
3 time1
null time1
null time1
You can make it work with the following strategy:
Add a new column with row numbers that we can use to add ordering
Then run an update statement to set the ID to the highest ID found in rows with smaller row numbers.
if OBJECT_ID('tempdb.dbo.#tempTimeTable') IS NOT NULL
begin
    drop table #tempTimeTable
end

SELECT ROW_NUMBER() OVER(ORDER BY [time]) AS RowN, *
INTO #tempTimeTable
FROM YourTimeTable

UPDATE t1
SET ID = (SELECT MAX(ID) FROM #tempTimeTable t2 WHERE t2.RowN < t1.RowN)
FROM #tempTimeTable t1
WHERE ID IS NULL

SELECT ID, SUM([time]) FROM #tempTimeTable GROUP BY [ID]
What we are doing is:
Insert the data from the original table into a temp table with a new column added that indicates the row number.
Update the ID fields on the rows that are NULL, setting each one to the highest ID from lower-numbered rows only. It will look like this:
1 time1
1 time1
1 time1
1 time1
2 time1
2 time1
2 time1
3 time1
3 time1
3 time1
Retrieve the data after summing all the times for each ID together.
Let me know if this works for you.
I am working on a script to analyze some data contained in thousands of tables on a SQL Server 2008 database.
For simplicity's sake, the tables can be broken down into groups of 4-8 semi-related tables. By semi-related I mean that they are data collections for the same item but they do not have any actual SQL relationship. Each table consists of a date-time stamp (datetime2 data type), a value (can be a bit, int, or float depending on the particular item), and some other columns that are currently not of interest. The date-time stamp is set for every 15 minutes (on the quarter hour) to within a few seconds; however, not all of the data is recorded at precisely the same time...
For example:
TABLE1:
TIMESTAMP VALUE
2014-11-27 07:15:00.390 1
2014-11-27 07:30:00.390 0
2014-11-27 07:45:00.373 0
2014-11-27 08:00:00.327 0
TABLE2:
TIMESTAMP VALUE
2014-11-19 08:00:07.880 0
2014-11-19 08:15:06.867 0.0979999974370003
2014-11-19 08:30:08.593 0.0979999974370003
2014-11-19 08:45:07.397 0.0979999974370003
TABLE3
TIMESTAMP VALUE
2014-11-27 07:15:00.390 0
2014-11-27 07:30:00.390 0
2014-11-27 07:45:00.373 1
2014-11-27 08:00:00.327 1
As you can see, not all of the tables will start with the same quarterly TIMESTAMP. Basically, what I am after is a query that will return the VALUE for each of the 3 tables for every 15 minute interval starting with the earliest TIMESTAMP out of the 3 tables. For the example given, I'd want to start at 2014-11-27 07:15 (don't care about seconds... thus, would need to allow for the timestamp to be +- 1 minute or so). Returning NULL for the value when there is no record for the particular TIMESTAMP is ok. So, the query for my listed example would return something like:
TIMESTAMP VALUE1 VALUE2 VALUE3
2014-11-27 07:15 1 NULL 0
2014-11-27 07:30 0 NULL 0
2014-11-27 07:45 0 NULL 1
2014-11-27 08:00 0 NULL 1
...
2014-11-19 08:00 0 0 1
2014-11-19 08:15 0 0.0979999974370003 0
2014-11-19 08:30 0 0.0979999974370003 0
2014-11-19 08:45 0 0.0979999974370003 0
I hope this makes sense. Any help/pointers/guidance will be appreciated.
Use Full Outer Join
SELECT COALESCE(a.[TIMESTAMP], b.[TIMESTAMP], c.[TIMESTAMP]) [TIMESTAMP],
Isnull(Max(a.VALUE), 0) VALUE1,
Max(b.VALUE) VALUE2,
Isnull(Max(c.VALUE), 0) VALUE3
FROM TABLE1 a
FULL OUTER JOIN TABLE2 b
ON CONVERT(SMALLDATETIME, a.[TIMESTAMP]) = CONVERT(SMALLDATETIME, b.[TIMESTAMP])
FULL OUTER JOIN TABLE3 c
ON CONVERT(SMALLDATETIME, a.[TIMESTAMP]) = CONVERT(SMALLDATETIME, c.[TIMESTAMP])
GROUP BY COALESCE(a.[TIMESTAMP], b.[TIMESTAMP], c.[TIMESTAMP])
ORDER BY [TIMESTAMP] DESC
The first thing I would do is normalize the timestamps to the minute. You can do this with an update to the existing column
UPDATE TABLENAME
SET TIMESTAMP = dateadd(minute,datediff(minute,0,TIMESTAMP),0)
or in a new column
ALTER TABLE TABLENAME ADD NORMTIME DATETIME;
UPDATE TABLENAME
SET NORMTIME = dateadd(minute,datediff(minute,0,TIMESTAMP),0)
For details on flooring dates, see this post: Floor a date in SQL server
The next step is to make a table that has all of the timestamps (normalized) that you expect to see -- that is, one row per 15-minute interval. Let's call this table TIME_PERIOD and the column EVENT_TIME for my examples (call them whatever you want).
There are many ways to make such a table: a recursive CTE, ROW_NUMBER(), even brute force. I leave that part up to you.
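For what it's worth, here is a rough sketch of the recursive-CTE route; the date range is hard-coded as an assumption, and in practice you would derive it from the MIN/MAX timestamps across your tables:
WITH slots (EVENT_TIME) AS (
    SELECT CAST('2014-11-19 00:00' AS DATETIME)            -- assumed range start
    UNION ALL
    SELECT DATEADD(MINUTE, 15, EVENT_TIME)
    FROM slots
    WHERE EVENT_TIME < '2014-11-28 00:00'                  -- assumed range end
)
SELECT EVENT_TIME
INTO TIME_PERIOD                                           -- materialize the lookup table
FROM slots
OPTION (MAXRECURSION 0);                                   -- the range needs more than the default 100 levels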
Now the problem is a simple SELECT with LEFT JOINs and a filter for valid values, like this:
SELECT TP.EVENT_TIME, a.VALUE as VALUE1, b.VALUE as VALUE2, c.VALUE as VALUE3
FROM TIME_PERIOD TP
LEFT JOIN TABLE1 a ON a.[TIMESTAMP] = TP.EVENT_TIME
LEFT JOIN TABLE2 b ON b.[TIMESTAMP] = TP.EVENT_TIME
LEFT JOIN TABLE3 c ON c.[TIMESTAMP] = TP.EVENT_TIME
WHERE COALESCE(a.[TIMESTAMP], b.[TIMESTAMP], c.[TIMESTAMP]) is not null
ORDER BY TP.EVENT_TIME DESC
The WHERE might get a little more complex if the columns are different types, so you can always use this instead (not as tidy as COALESCE, but it will always work):
WHERE a.[TIMESTAMP] IS NOT NULL OR
b.[TIMESTAMP] IS NOT NULL OR
c.[TIMESTAMP] IS NOT NULL
Here is an updated version of NoDisplayName's answer that does what you want. It works for SQL 2012, but you could replace the DATETIMEFROMPARTS function with a series of other functions to get the same result.
;WITH
NewT1 AS (
    SELECT DATETIMEFROMPARTS(DATEPART(year, [Timestamp]), DATEPART(month, [Timestamp]), DATEPART(day, [Timestamp]),
                             DATEPART(hour, [Timestamp]), DATEPART(minute, [Timestamp]), 0, 0) AS [TimeStamp], Value
    FROM Table1),
NewT2 AS (
    SELECT DATETIMEFROMPARTS(DATEPART(year, [Timestamp]), DATEPART(month, [Timestamp]), DATEPART(day, [Timestamp]),
                             DATEPART(hour, [Timestamp]), DATEPART(minute, [Timestamp]), 0, 0) AS [TimeStamp], Value
    FROM Table2),
NewT3 AS (
    SELECT DATETIMEFROMPARTS(DATEPART(year, [Timestamp]), DATEPART(month, [Timestamp]), DATEPART(day, [Timestamp]),
                             DATEPART(hour, [Timestamp]), DATEPART(minute, [Timestamp]), 0, 0) AS [TimeStamp], Value
    FROM Table3)
SELECT COALESCE(a.[TIMESTAMP], b.[TIMESTAMP], c.[TIMESTAMP]) [TIMESTAMPs],
       ISNULL(MAX(a.VALUE), 0) VALUE1,
       ISNULL(MAX(b.VALUE), 0) VALUE2,
       ISNULL(MAX(c.VALUE), 0) VALUE3
FROM NewT1 a
FULL OUTER JOIN NewT2 b
    ON a.[TIMESTAMP] = b.[TIMESTAMP]
FULL OUTER JOIN NewT3 c
    ON a.[TIMESTAMP] = c.[TIMESTAMP]
GROUP BY COALESCE(a.[TIMESTAMP], b.[TIMESTAMP], c.[TIMESTAMP])
ORDER BY [TIMESTAMPs]
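If you are on a pre-2012 version without DATETIMEFROMPARTS, one possible substitute inside each of the NewT CTEs is the DATEADD/DATEDIFF flooring trick shown earlier, which also truncates to the minute (shown here for Table1 only):
SELECT DATEADD(minute, DATEDIFF(minute, 0, [Timestamp]), 0) AS [TimeStamp], Value
FROM Table1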
I have a query which shows the count of messages received by date.
For example:
1 | 1-May-2012
3 | 3-May-2012
4 | 6-May-2012
7 | 7-May-2012
9 | 9-May-2012
5 | 10-May-2012
1 | 12-May-2012
As you can see, on some dates there are no messages received. What I want is for it to show all the dates, and where no messages were received it should show 0, like this:
1 | 1-May-2012
0 | 2-May-2012
3 | 3-May-2012
0 | 4-May-2012
0 | 5-May-2012
4 | 6-May-2012
7 | 7-May-2012
0 | 8-May-2012
9 | 9-May-2012
5 | 10-May-2012
0 | 11-May-2012
1 | 12-May-2012
How can I achieve this when there are no rows in the table for those dates?
First, it sounds like your application would benefit from a calendar table. A calendar table is a list of dates and information about the dates.
Second, you can do this without using temporary tables. Here is the approach:
with constants as (select min(thedate) as firstdate from <table>),
dates as (select (<firstdate> + rownum - 1) as thedate
          from (select rownum
                from <table> cross join constants
                where rownum < sysdate - <firstdate> + 1
               ) seq
         )
select dates.thedate, count(t.date)
from dates left outer join
     <table> t
     on t.date = dates.thedate
group by dates.thedate
Here is the idea. The alias constants records the earliest date in your table. The alias dates then creates a sequence of dates. The inner subquery calculates a sequence of integers, using rownum, and then adds these to the first date. Note this assumes that you have on average at least one transaction per date. If not, you can use a bigger table.
The final part is the join that is used to bring back information about the dates. Note the use of count(t.date) instead of count(*). This counts the number of records in your table, which should be 0 for dates with no data.
You don't need a separate table for this; you can create what you need in the query. This works for May:
WITH month_may AS (
select to_date('2012-05-01', 'yyyy-mm-dd') + level - 1 AS the_date
from dual
connect by level <= 31
)
SELECT *
FROM month_may mm
LEFT JOIN mytable t ON t.some_date = mm.the_date
The date range will depend on how exactly you want to do this and what your range is.
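To get the per-date counts from the question rather than the raw join, a grouped variant of the same idea might look like this (mytable and some_date are the illustrative names from the snippet above; adjust to your schema):
WITH month_may AS (
  select to_date('2012-05-01', 'yyyy-mm-dd') + level - 1 AS the_date
  from dual
  connect by level <= 31
)
SELECT mm.the_date, COUNT(t.some_date) AS message_count
FROM month_may mm
LEFT JOIN mytable t ON t.some_date = mm.the_date
GROUP BY mm.the_date
ORDER BY mm.the_date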
You could achieve this with a left outer join IF you had another table to join to that contains all possible dates.
One option might be to generate the dates in a temp table and join that to your query.
Something like this might do the trick.
CREATE TABLE #TempA (Col1 DATETIME)

DECLARE @start DATETIME = CONVERT(DATETIME, CONVERT(NVARCHAR(10), GETDATE(), 121))
SELECT @start

DECLARE @counter INT = 0
WHILE @counter < 50
BEGIN
    INSERT INTO #TempA (Col1) VALUES (@start)
    SET @start = DATEADD(DAY, 1, @start)
    SET @counter = @counter + 1
END
That will create a TempTable to hold the dates... I've just generated 50 of them starting from today.
SELECT
    a.Col1,
    COUNT(b.MessageID)
FROM
    #TempA a
    LEFT OUTER JOIN YOUR_MESSAGE_TABLE b
        ON a.Col1 = b.DateColumn
GROUP BY
    a.Col1
Then you can left join your message counts to that.
I am trying to solve this query. I have the following data:
Input
Date Id Value
25-May-2011 1 10
26-May-2011 1 10
26-May-2011 2 10
27-May-2011 1 20
27-May-2011 2 20
28-May-2011 1 10
I need to query and output as:
Output
FromDate ToDate Id Value
25-May-2011 26-May-2011 1 10
26-May-2011 26-May-2011 2 10
27-May-2011 27-May-2011 1 20
28-May-2011 28-May-2011 1 10
I tried this sql but I'm not getting the correct result:
SELECT START_DATE, END_DATE, A.KEY, B.VALUE FROM
(
    SELECT MIN(DATE) START_DATE, KEY, VALUE
    FROM KEY_VALUE
    GROUP BY KEY, VALUE
) A INNER JOIN
(
    SELECT MAX(DATE) END_DATE, KEY, VALUE
    FROM KEY_VALUE
    GROUP BY KEY, VALUE
) B ON A.KEY = B.KEY AND A.VALUE = B.VALUE;
I think that you are trying too hard. Should be more like this:
SELECT MIN(DATE) AS FromDate, MAX(DATE) AS ToDate, KEY, VALUE
FROM KEY_VALUE
GROUP BY KEY, VALUE
This query appears to produce the correct results, though writing it pointed out that you missed a line in your example output: '27-May-2011 ... 27-May-2011 ... 2 ... 20'.
select id, [value], date as fromdate, (
        select top 1 date
        from key_value kv2
        where id = kv.id
          and [value] = kv.[value]
          and date >= kv.date
          and datediff(d, kv.date, date) = (
                select count(*)
                from key_value
                where id = kv.id
                  and [value] = kv.[value]
                  and date > kv.date
                  and date <= kv2.date
              )
        order by date desc
    ) as todate
from key_value kv
where not exists (
    select *
    from key_value
    where id = kv.id
      and [value] = kv.[value]
      and date = dateadd(d, -1, kv.[date])
)
First it finds the min-date records with the WHERE clause, looking for records that do not have another record on the day before. Then the todate subquery gets the greatest date in the run by finding the number of days between it and the min date, counting the number of records between the two, and making sure they match. This of course assumes that the records in the table are distinct.
However, if you are processing a massive table, your best option may be to sort the records by id, [value], and date, and then use a cursor to programmatically find the min and max dates as you loop over the rows and watch for the values to change, then push them into a new table (real or temp) along with any other calculations you might need to do on other fields along the way.
I have a database table with one column being dates. However, some of the rows should share the same date but due to lag on insertion there's a one second difference between them. The insert part has been fixed already but the current data in the table needs to be fixed as well.
As an example the following data is present:
2008-10-08 12:23:01 1 1 x
2008-10-08 12:23:01 1 2 y
2008-10-08 12:23:02 1 3 z
Now I want to update the last row in this example and set the date to '2008-10-08 12:23:01'.
The best way I can think of is writing an external script to do that. It's tricky to determine which columns are correct and which should be updated without having more control over the grouping. Pseudo-code:
all_rows = SELECT * FROM table ORDER BY date
last_date = NULL
rows_to_update = []
for row in all_rows:
if last_date is NULL or row.date - last_date > X seconds:
set date to last_date for all rows from rows_to_update
last_date = row.date
rows_to_update = []
else if row.date != last_date:
rows_to_update += row
Alternatively, something like this could work, but you might need more than one run if you want to handle cases where all three dates are different and you want to normalize two of them to the first one.
UPDATE tbl t,
       (SELECT t.date,
               (SELECT min(date)
                FROM tbl
                WHERE timestampdiff(SECOND, date, t.date) BETWEEN 1 AND 3) AS new_date
        FROM tbl t) t2
SET t.date = t2.new_date
WHERE t.date = t2.date AND t2.new_date IS NOT NULL
For all rows:
update yourtable set date_added=date_added-'01';
For a specific row, add a WHERE clause.
due to lag in insertion
Why don't you get the date for insert before inserting/updating the first row and use that for all the other rows?
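A minimal T-SQL sketch of that idea, with illustrative table and column names (not from the original post):
DECLARE @InsertTime DATETIME = GETDATE();   -- capture the timestamp once

-- Reuse the same value for every row in the logical batch so the rows can never
-- drift apart by a second.
INSERT INTO YourTable (DateColumn, SomeValue) VALUES (@InsertTime, 'x');
INSERT INTO YourTable (DateColumn, SomeValue) VALUES (@InsertTime, 'y');
INSERT INTO YourTable (DateColumn, SomeValue) VALUES (@InsertTime, 'z');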
Assuming you have this structure:
create table tbl(id int identity, dt datetime)
insert into tbl (dt) values('2009-10-08 12:23:01')
insert into tbl (dt) values('2009-10-08 12:23:01')
insert into tbl (dt) values('2009-10-08 12:23:02')
insert into tbl (dt) values('2009-10-08 12:23:05')
insert into tbl (dt) values('2009-10-08 12:23:05')
insert into tbl (dt) values('2009-10-08 12:23:06')
This query will only show the last item of each set that's 1 second late:
select distinct A.* from tbl A
join (select * from tbl) AS T on datediff(ss, T.dt, A.dt) = 1
Using that in conjunction with an UPDATE statement, you get this:
update tbl set dt = (select top 1 dt from tbl where tbl.id < A.id order by tbl.id desc)
from tbl A
join (select * from tbl) AS T on datediff(ss, T.dt, A.dt) = 1
And that updates the last record of each set to the date above it, giving the results:
1 2009-10-08 12:23:01.000
2 2009-10-08 12:23:01.000
3 2009-10-08 12:23:01.000
4 2009-10-08 12:23:05.000
5 2009-10-08 12:23:05.000
6 2009-10-08 12:23:05.000
It's quick and dirty and unoptimized, but for a once-off data scrub it should work.
Remember to back up!