I'm currently facing the following problem:
I want to select all records from my database table LocationUpdates for a specific RFIDTagID AND a ScannedTime less than 2 minutes before the current time.
I am passing SQL two parameters: 1. RFIDTagID (needed to select only rows in the database with this tag ID), 2. ScannedTime (the current timestamp).
Now I want to ask the database: give me all records for RFIDTagID 123456789 where the ScannedTime is at most 2 minutes earlier than the second parameter, ScannedTime.
When SQL returns results: then don't add the row.
When SQL doesn't return results: then add the row.
I am making a stored procedure to perform this task. It looks like the following:
CREATE PROCEDURE
SELECT COUNT(*) FROM dbo.LocationUpdates updates WHERE updates.RFIDTagID = @RFIDTagID AND DATEDIFF(MINUTE, @ScannedTime, updates.ScannedTime) < 2
IF @@ROWCOUNT = 0 THEN PERFORM SOME TASKS AND ADD THE ROW
ELSE DO NOTHING
I have the following data in my database:
160 300833B2DDD9014035050005 18-7-2013 11:18:44
161 300833B2DDD9014035050005 18-7-2013 11:19:50
162 300833B2DDD9014035050005 18-7-2013 11:24:03
163 300833B2DDD9014035050005 18-7-2013 13:38:50
164 300833B2DDD9014035050005 18-7-2013 13:39:29
165 300833B2DDD9014035050005 1-1-1900 0:00:00
And when I execute the following query (with the current date):
DECLARE @return_value Int
DECLARE @currDate DATETIME
SET @currDate = GETDATE()
EXEC @return_value = [dbo].[INSERT_LOCALROW]
    @RFIDTagID = N'300833B2DDD9014035050005',
    @ScannedTime = @currDate
SELECT 'Return Value' = @return_value
GO
This query returns the following result: 6 rows
But I am expecting to get 0 rows back, as none of the rows is within two minutes of the current time.
Does anyone have any suggestions?
EDIT
I have already found the answer:
SELECT COUNT(*) FROM dbo.LocationUpdates updates WHERE updates.RFIDTagID = @RFIDTagID AND DATEDIFF(MINUTE, @ScannedTime, updates.ScannedTime) > -2
The DATEDIFF function gives a negative int when you compare a newer date with an older date, so it will return -2 when the time in the database is two minutes earlier than the current time.
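A quick illustration of that sign behaviour, using the 13:38 and 13:39 rows from the data above and a notional current time of 13:40:30:
-- DATEDIFF returns a negative number when the second argument is later than
-- the third; the two rows scanned just before the notional current time give -2 and -1.
SELECT
    DATEDIFF(MINUTE, '2013-07-18T13:40:30', '2013-07-18T13:38:50') AS row_163,  -- -2
    DATEDIFF(MINUTE, '2013-07-18T13:40:30', '2013-07-18T13:39:29') AS row_164;  -- -1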
I would replace
DATEDIFF(MINUTE, @ScannedTime, updates.ScannedTime) < 2
(because if the second argument is later than the third argument, you get a negative result, and a negative result is smaller than 2)
with
updates.ScannedTime > DATEADD(MINUTE, -2, @ScannedTime)
or invert the parameters:
DATEDIFF(MINUTE, updates.ScannedTime, @ScannedTime) < 2
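Putting it together, a minimal sketch of how the corrected check could sit inside the procedure, using the DATEADD form above (the parameter types and the insert column list are assumptions, and the "perform some tasks" part is left as a comment):
CREATE PROCEDURE dbo.INSERT_LOCALROW
    @RFIDTagID   NVARCHAR(50),   -- assumed length
    @ScannedTime DATETIME
AS
BEGIN
    IF NOT EXISTS
    (
        SELECT 1
        FROM dbo.LocationUpdates updates
        WHERE updates.RFIDTagID = @RFIDTagID
          AND updates.ScannedTime > DATEADD(MINUTE, -2, @ScannedTime)
    )
    BEGIN
        -- perform some tasks, then add the row
        INSERT INTO dbo.LocationUpdates (RFIDTagID, ScannedTime)
        VALUES (@RFIDTagID, @ScannedTime);
    END
END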
I am using the SQL below to fetch records from a table that were created 1 minute back:
SELECT Id,OrderNumber FROM ActivationRequest
WHERE Processed =0 AND
DateCreated <= DATEADD(minute,Convert(int,-1), GETDATE())
DateCreated: a column of data type DATETIME; at insert time the full datetime value, including seconds and milliseconds, is stored in it.
While performing the SELECT, can we ignore the seconds and milliseconds part in some way?
Example: DateCreated value 2018-12-07 07:08:41.703
But when I run the above SELECT at 2018-12-07 07:08:51.597, it returned 0 records back, since the seconds and milliseconds part of the datetime (.597) gets compared as well.
So how can I ignore the seconds and milliseconds and simply compare the hour and minute parts in the WHERE condition?
I need to fetch all records added in the last minute, irrespective of the seconds and milliseconds values.
You may round to the nearest minute:
SELECT Id, OrderNumber
FROM ActivationRequest
WHERE
Processed = 0 AND
DateCreated <= DATEADD(mi, DATEDIFF(mi, 0, GETDATE()) + 1, 0);
The above query should have reasonably good performance because it can use an index on DateCreated.
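To see what the DATEADD/DATEDIFF idiom does, here is a small illustration that truncates the two example timestamps from the question to the minute; both come out as 2018-12-07 07:08:00.000:
-- "0" is the datetime zero point (1900-01-01); counting whole minutes from it
-- and adding them back strips the seconds and milliseconds.
SELECT
    DATEADD(mi, DATEDIFF(mi, 0, CAST('2018-12-07T07:08:41.703' AS datetime)), 0) AS created_minute,
    DATEADD(mi, DATEDIFF(mi, 0, CAST('2018-12-07T07:08:51.597' AS datetime)), 0) AS query_minute;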
Try this ..
SELECT Id, OrderNumber
FROM ActivationRequest
WHERE
Processed = 0 AND
FORMAT(DateCreated,'dd:MM:yyyy:HH:mm') = FORMAT(GETDATE(),'dd:MM:yyyy:HH:mm');
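Note that wrapping the column in FORMAT prevents an index seek on DateCreated. A sargable sketch of the same "same minute as now" check compares the column against the current minute's boundaries instead:
SELECT Id, OrderNumber
FROM ActivationRequest
WHERE Processed = 0
  AND DateCreated >= DATEADD(mi, DATEDIFF(mi, 0, GETDATE()), 0)      -- start of the current minute
  AND DateCreated <  DATEADD(mi, DATEDIFF(mi, 0, GETDATE()) + 1, 0); -- start of the next minute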
I have a table with two DateTime columns, start & end
I have a stored procedure which has a line like
select
...
...
where
datediff(second, start, end) > xxx
I know for unit = second, the maximum difference between start and end is around 68 years.
Currently there is some bad legacy data for which the difference between start and end is over 68 years, and when the stored procedure comes across it, it produces an overflow error.
What I am trying to do is write another script to select all such bad data so that we can patch it. How can I do that? How can I select the offending records when the selection itself produces the error?
First, is it really necessary to do this to one-second accuracy? After all:
where datediff(minute, start, end) > xxx / 60
or:
where datediff(hour, start, end) > xxx / (60 * 60)
but . . . if that won't do, you can try:
where dateadd(hour, xxx / (60 * 60),
dateadd(second, xxx % (60 * 60), start)
) > end
EDIT:
Actually, your problem is with the dates, not the xxx value. So, this should also work:
where dateadd(second, xxx, start) > end
This will work as long as xxx is an integer and start is not way too big (near the end of the range of whatever type it is).
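To actually list the offending legacy rows, as the question asks, one sketch is to measure the gap in hours so the comparison itself cannot overflow (the table name is a placeholder; the question does not give one):
-- 2147483647 / 3600 = 596523 hours (~68 years); rows with a larger hour gap
-- would overflow DATEDIFF(second, ...). The cut-off is approximate to within
-- an hour of the exact limit, which is fine for spotting multi-decade outliers.
SELECT *
FROM YourTable
WHERE DATEDIFF(hour, [start], [end]) > 2147483647 / 3600;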
Considering CASE statements resolve from left to right, you could try
Declare @YourTable table (id int,start datetime,[end] datetime)
Insert Into @YourTable values
(1,'1930-01-01','2016-09-25'), -- Greater than 2.14B seconds
(2,'2016-09-24','2016-09-25') -- Something more reasonable
Select *
from @YourTable
Where case when DateDiff(MINUTE,[start],[end]) > (2147483647/60)
           then 2147483647
           else DateDiff(SECOND,[start],[end])
      end > 100000
Returns (without an exception)
id start end
1 1930-01-01 00:00:00.000 2016-09-25 00:00:00.000
EDIT
I should add that the trap on minutes allows for about 4,080 years vs 68. Also, the default value of 2147483647 could be replaced with a more reasonable number, or even 0 to indicate suspect data.
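For reference, here is where those limits come from (the int ceiling of DATEDIFF divided by the seconds or minutes in a year):
SELECT 2147483647 / 31557600 AS max_years_at_second_precision, -- 68
       2147483647 / 525960   AS max_years_at_minute_precision; -- 4082, i.e. the "about 4,080 years" above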
I have a rather large and complex query to work out people in work, off sick, etc. This works great if I just want to see it for 1 day; however, I need to allow users to view multiple days.
I added startdate and enddate parameters and looked at building a SQL WHILE loop to advance the start date each time and write the values into a temp table so I can pull them out at the end. This may not be the best approach.
I have got the loop working; however, it keeps duplicating the results, as in the example below:
How the data should look:
Date: Value
01/01/2014 1
02/01/2014 2
03/01/2014 3
How data is being exported:
Date: Value
01/01/2014 1
02/01/2014 1
02/01/2014 2
03/01/2014 1
03/01/2014 2
03/01/2014 3
This is the example of the loop I found and have used, with my own SQL code in the middle. My SQL code only uses the startdate parameter being passed in.
Should I maybe be using a different type of loop, or have I missed something out that would stop the duplication? Any suggestions welcome, as I'm not sure how to stop the loop doing this. It is bringing back the correct data; I just need to exclude the duplicates.
Structure of my code and loop (not the full example, as the code in the middle is very long):
CREATE TABLE #TestTable1
(
Date DATETIME,
Value int
);
declare @startdate datetime
declare @enddate datetime
while @startdate <= @enddate
BEGIN
(My SQL code is placed here and uses the @startdate parameter)
INSERT INTO #TestTable1(Date, Value)
select * from (uses a lot of temp tables and CTEs from the code I have used)
SET @startdate = DATEADD(DAY, 1, @startdate)
END
select * from #TestTable1
drop table #TestTable1
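For reference, a self-contained sketch of the posted loop skeleton with the dates filled in (the dates and the inner SELECT are placeholders, not the real query); presumably the real query has to produce rows for only the single day held in @startdate on each pass, otherwise earlier days get re-inserted:
-- Placeholder demo of the per-day loop pattern; prints one row per day.
CREATE TABLE #LoopDemo
(
    [Date] DATETIME,
    Value  int
);
DECLARE @startdate datetime = '20140101'
DECLARE @enddate   datetime = '20140103'
WHILE @startdate <= @enddate
BEGIN
    -- stand-in for the real query, restricted to the current @startdate
    INSERT INTO #LoopDemo ([Date], Value)
    SELECT @startdate, DATEDIFF(DAY, '20140101', @startdate) + 1
    SET @startdate = DATEADD(DAY, 1, @startdate)
END
SELECT * FROM #LoopDemo   -- 01/01/2014 1, 02/01/2014 2, 03/01/2014 3
DROP TABLE #LoopDemo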
I have a single table, and I want to use a single SQL statement from the Visual Basic 6 platform to select and subtract summed columns based on WHERE clauses. Below is the description.
Here is a sample table to explain better:
ID Deposit Withdrawal Date
ACC01 1000 0 10/10/2012
ACC01 2000 0 1/1/2013
ACC02 3000 0 10/10/2012
ACC02 4000 0 1/1/2013
ACC01 0 1000 12/12/2012
ACC02 0 3000 12/12/2012
I want to sum the values in the Deposit column where Date is less than 1/1/2013, as DepositBefore.
Also sum the values in the Withdrawal column where Date is less than 1/1/2013, as WithdrawalBefore.
Then subtract WithdrawalBefore from DepositBefore (i.e. BalanceBefore = DepositBefore - WithdrawalBefore).
In the same vein, sum the values in the Deposit column where Date >= 1/1/2013, as DepositAfter.
Also sum the values in the Withdrawal column where Date >= 1/1/2013, as WithdrawalAfter,
so that (BalanceAfter = DepositAfter - WithdrawalAfter).
Finally, compute BalanceForInterest = BalanceBefore + BalanceAfter. I would prefer it if a single query could do all of this. Thanks in advance.
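Using the sample table for ACC01, that works out to: DepositBefore = 1000 and WithdrawalBefore = 1000, so BalanceBefore = 0; DepositAfter = 2000 and WithdrawalAfter = 0, so BalanceAfter = 2000; and BalanceForInterest = 0 + 2000 = 2000.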
Below is the code I tried, which did not succeed.
With rsSaving
.Open "SELECT SUM(SUM(Deposit)-SUM(Withdrawal)) AS BalanceBefore FROM tblSaving WHERE ID = 'ACC01' AND Date < #1/1/2013# GROUP BY ID, (SELECT SUM(SUM(Deposit)-SUM(Withdrawal)) AS BalanceAfter FROM tblSaving WHERE ID='ACC01' AND Date >= #1/1/2013# GROUP BY ID)",conDB, adOpenDynamic, adLockOptimistic
.close
End With
set rsSaving = Nothing
rsSaving is an ADO object, while conDB is my connection. Thanks
This is the SQL query. You may have to change the syntax for your database (SQL Server, Access, etc.). You don't need a GROUP BY because you are getting back a single value. Given your sample data, this query returns 2000.
SELECT
(SELECT SUM(Deposit-Withdrawal) AS Total FROM tblSaving WHERE ID = 'ACC01' AND Date < '1-1-2013')
+ (SELECT SUM(Deposit-Withdrawal) AS Total FROM tblSaving WHERE ID = 'ACC01' AND Date >= '1-1-2013')
AS BalanceAfter
You can write something like:
declare @BalanceBefore int
set @BalanceBefore = (select sum(Deposit) from tblSaving where [Date] < '1/1/2013')
declare @WithdrawalAfter int
set @WithdrawalAfter = (select sum(Withdrawal) from tblSaving where [Date] > '1/1/2013')
declare @res int
set @res = @BalanceBefore - @WithdrawalAfter
print @res
Try something like this; also check the date comparison.
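For reference, all three figures from the question can also be computed in one pass with conditional aggregation; a sketch in SQL Server syntax (Access/Jet would need IIf and # date delimiters instead, and the unseparated '20130101' literal is used here only to avoid regional date ambiguity):
SELECT
    SUM(CASE WHEN [Date] <  '20130101' THEN Deposit - Withdrawal ELSE 0 END) AS BalanceBefore,
    SUM(CASE WHEN [Date] >= '20130101' THEN Deposit - Withdrawal ELSE 0 END) AS BalanceAfter,
    SUM(Deposit - Withdrawal) AS BalanceForInterest
FROM tblSaving
WHERE ID = 'ACC01';
-- with the sample data: BalanceBefore = 0, BalanceAfter = 2000, BalanceForInterest = 2000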
I have a data set consisting of time-stamped values, and absolute (meter) values. Sometimes the meter values reset to zero, which means I have to iterate through and calculate a delta one-by-one, and then add it up to get the total for a given period.
For example:
Timestamp Value
2009-01-01 100
2009-01-02 105
2009-01-03 120
2009-01-04 0
2009-01-05 9
the total here is 29, calculated as:
(105 - 100) + (120 - 105) + (0) + (9 - 0) = 29
I'm using MS SQL Server for this, and am open to any suggestions.
Right now, I'm using a cursor to do this, which checks that the delta isn't negative, and then totals it up:
DECLARE curTest CURSOR FAST_FORWARD FOR
    SELECT value FROM table ORDER BY timestamp
OPEN curTest
DECLARE @delta bigint, @current bigint, @last bigint
SET @delta = 0
FETCH curTest INTO @current
WHILE @@FETCH_STATUS = 0
BEGIN
    IF (@current IS NOT NULL) AND (@current > 0)
    BEGIN
        IF (@last IS NOT NULL) AND (@current > @last)
            SET @delta = @delta + (@current - @last)
        SET @last = @current
    END
    -- fetch outside the IF so a NULL or zero value cannot stall the loop
    FETCH curTest INTO @current
END
CLOSE curTest
DEALLOCATE curTest
It would be nice to get a data set like:
Timestamp Value LastValue
2009-01-01 100 NULL
2009-01-02 105 100
2009-01-03 120 105
2009-01-04 0 120
2009-01-05 9 0
as then it would be easy to grab the deltas, filter for (Value > LastValue), and do a SUM().
I tried:
SELECT m1.timestamp, m1.value,
( SELECT TOP 1 m2.value FROM table m2 WHERE m2.timestamp < m1.timestamp ORDER BY m2.timestamp DESC ) as LastValue
FROM table m1
but this actually turns out to be slower than the cursor: When I run these together in SQL studio with 'show execution plan' on, the relative cost of this is 100% (with 7 or 8 operations - the majority in a clustered index scan on timestamp), and the cursor is 0% (with 3 operations).
(What I'm not showing here for simplicity is that I have several different sets of numbers, with a foreign key in this table as well - so there is also always a WHERE clause limiting to a specific set. I have several places where I calculate these totals for a given time period for several sets at once, and thus it becomes quite the performance bottleneck. The non-cursor method can also be easily modified to GROUP BY the key and return all the sets at once - but this actually is even slower in my testing than running the cursor multiple times, because there is the additional overhead of the GROUP BY and SUM() operation, aside from it being slower overall anyways.)
Much the same...
create table #temp ([timestamp] date,value int);
insert into #temp (timestamp,value) values ('2009-01-01',100)
insert into #temp (timestamp,value) values ('2009-01-02',105)
insert into #temp (timestamp,value) values ('2009-01-03',120)
insert into #temp (timestamp,value) values ('2009-01-04',0)
insert into #temp (timestamp,value) values ('2009-01-05',9);
with numbered as
(
select ROW_NUMBER() over (order by timestamp) id,value from #temp
)
select sum(n1.value-n2.value)
from numbered n1
join numbered n2 on n1.id=n2.id+1
where n1.value!=0
drop table #temp;
Result is 29, as specified.
Start with row_number, then join back to yourself.
with numbered as
(
SELECT value, row_number() over (order by timestamp) as Rownum
FROM table
)
select sum(n1.value - n2.value)
from numbered n1
join
numbered n2 on n1.Rownum = n2.Rownum +1
Actually... you only want to pick up increases... so put a WHERE clause in, saying "WHERE n1.value > n2.value".
And... make sure I've put them the right way around... I've just changed it from -1 to +1, because I think I had it flipped.
Easy!
Rob
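As an aside, on SQL Server 2012 and later the "previous value" column the question asks for can be produced directly with LAG() instead of a self-join; a sketch, with MeterReadings standing in for the question's table name:
WITH paired AS
(
    SELECT [timestamp],
           value,
           LAG(value) OVER (ORDER BY [timestamp]) AS LastValue
    FROM MeterReadings
)
SELECT SUM(value - LastValue) AS total   -- 29 for the sample data
FROM paired
WHERE value > LastValue;                 -- keep only the increases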
There are too many unnecessary joins in your algorithm.
Calculating the difference between each meter reading and its subsequent meter reading is a waste of resources. As a real-world example, imagine if my electric company read my meter each day to see how much electricity I used, and then summed the daily differences to determine my monthly total - it just doesn't make sense. They simply determine the total from the start value and the end value!
Simply calculate the difference between the first and last readings and adjust to account for the 'resets'. Your formula simply becomes:
total value = (final value) - (initial value)
+ (miscellaneous reductions in value, i.e. resets)
total value = (9) - (100) + (120)
= 29
It's trivial to find the final value and initial value. Just find the total amount by which 'meter' was reduced during 'resets', and add this to the total. Unless there are more reset records than measurement records, this will always be more efficient.
To borrow from spender's solution, the 'reset' value could be calculated by
create table...
select sum(n1.value-n2.value) from numbered n1 join numbered n2
on n1.id=n2.id+1 where n1.value=0 -- note value=0 rather than value!=0
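Putting the whole formula into one statement (final - initial + reductions at resets), again with MeterReadings as a placeholder for the question's table; the reset term has its sign flipped relative to the borrowed snippet so the reductions come out positive, as the formula requires:
WITH numbered AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY [timestamp]) AS id, value
    FROM MeterReadings
)
SELECT
      (SELECT value FROM numbered WHERE id = (SELECT MAX(id) FROM numbered))  -- final value   (9)
    - (SELECT value FROM numbered WHERE id = (SELECT MIN(id) FROM numbered))  -- initial value (100)
    + (SELECT ISNULL(SUM(n2.value - n1.value), 0)                             -- resets        (120)
         FROM numbered n1
         JOIN numbered n2 ON n1.id = n2.id + 1
        WHERE n1.value = 0) AS total;                                         -- total = 29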