I have an exchange rate table with multiple date-wise records, each holding an exchange rate.
Date Rate
17/05/2012 5
23/05/2012 6
27/05/2012 7
Now I want to get the rate for any date I pass in. For example, if I pass 20/05/2012, the rate 5 should be returned, because 20/05/2012 falls between 17 and 23 May 2012.
Assuming you have the correct datatypes (that is, not varchar to store date values...):
SELECT TOP 1 Rate
FROM MyTable
WHERE DateColumn <= '20120520'
ORDER BY DateColumn DESC
Something like this will work:
select Rate from tablename where Date in (
select max(Date) as Date
from tablename
where Date <= convert(datetime, '20/05/2012', 103)
)
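For reference, a minimal sketch to verify the approach (assuming SQL Server; the table and column names follow the first query above, while the second query calls the same things tablename and Date):
create table MyTable (DateColumn date, Rate int);
insert into MyTable (DateColumn, Rate)
values ('20120517', 5), ('20120523', 6), ('20120527', 7);
-- With this data, the TOP 1 query above returns 5 for '20120520',
-- because 17/05/2012 is the latest date on or before it.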
My data looks something like this:
days   weight   start date   end date
180    1        01/01/2020   null
365    0.75     01/01/2020   null
I want to be able to select from this and pick the correct row: if the days value is 0-180 it should be row 1, if it is 181-365 it should be row 2, and if it is 365+ it should also be row 2. I have already found out that I can use SQL's BETWEEN syntax for the date.
My initial code tries to do this:
select weight from (select * from table where days >= #DAYS order by days ASC) where rownum =1
But if you pass a value greater than the last days entry, it returns nothing, so I then tried to introduce a maximum element, trying to find the maximum value and saying
>= #DAYS
or
>= MAX(#DAYS)
Is there a simpler way to do this?
Thanks.
select weight
from (select t.*, max(days) over () as max_day from table t) v
where days >= least(#DAY,max_day)
order by days asc
fetch first 1 row only
I'd suggest this option. When #DAY becomes larger than the largest days entry, we use max_day instead.
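As a quick check of that clamping (the query above with a literal in place of #DAY; the table name weights is made up), a value beyond the largest days entry still resolves to the last row:
select weight
from (select t.*, max(days) over () as max_day from weights t) v
where days >= least(400, max_day)  -- least(400, 365) = 365, so the 0.75 row is returned
order by days asc
fetch first 1 row only;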
Select max(weight) from table
Where days=(Select min(days) from table
Where days >= #DAYS)
The max() around weight is defensive in case your table has 2 entries with the same days number.
Suppose I have the following table:
Id Visitors Date
------------------------------
1 100 '2017-01-01'
2 200 '2017-01-02'
3 150 '2017-01-03'
I want a query to provide the average of a range of records for the last 12 months.
For one record, I know it would be something like:
select avg(Visitors), Date
from Visitors_table
where Date between '2017-01-01' and '2018-01-01'
However, I need to do that for a range of dates and multiple records.
I know that UNION would solve it, but if the range is one year, for example, it is not efficient to use 365 UNIONs.
Get the dates from 1 year ago to current date:
SELECT
Date,
AVG(Visitors) AS avgvisitors
FROM Visitors_table
WHERE Date > dateadd(year, -1, getdate())
GROUP BY Date
ORDER BY Date;
Since you need to group by date.
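If the window should end at a fixed date instead of today (as in the question's example), the same pattern works with literal bounds; a sketch against the question's table:
SELECT
Date,
AVG(Visitors) AS avgvisitors
FROM Visitors_table
WHERE Date BETWEEN '2017-01-01' AND '2018-01-01'
GROUP BY Date
ORDER BY Date;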
Using SQL I need to return a smooth set of results (i.e. one per day) from a dataset that contains 0-N records per day.
The result per day should be the most recent previous value even if that is not from the same day. For example:
Starting data:
Date: Time: Value
19/3/2014 10:01 5
19/3/2014 11:08 3
19/3/2014 17:19 6
20/3/2014 09:11 4
22/3/2014 14:01 5
Required output:
Date: Value
19/3/2014 6
20/3/2014 4
21/3/2014 4
22/3/2014 5
First you need to complete the date range and fill in the missing dates (21/3/2014 in your example). This can be done either by joining a calendar table, if you have one, or by using a recursive common table expression to generate the complete sequence on the fly.
When you have the complete sequence of dates finding the max value for the date, or from the latest previous non-null row becomes easy. In this query I use a correlated subquery to do it.
with cte as (
select min(date) date, max(date) max_date from your_table
union all
select dateadd(day, 1, date) date, max_date
from cte
where date < max_date
)
select
c.date,
(
select top 1 max(value) from your_table
where date <= c.date group by date order by date desc
) value
from cte c
order by c.date;
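One caveat: SQL Server caps recursive CTEs at 100 recursion levels by default, so if the range between min(date) and max(date) in your table spans more than about 100 days, the outer query needs a MAXRECURSION hint; a sketch of the last line above with the hint appended:
order by c.date
option (maxrecursion 0);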
Maybe this works, but try it and let me know:
select date, value from test where (time,date) in (select max(time),date from test group by date);
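As a side note, a multi-column IN like (time, date) IN (...) works in MySQL and PostgreSQL but not in SQL Server; a rough T-SQL equivalent (a sketch against the same test table, and, like the query above, it does not fill the missing dates) joins on the per-day maximum time instead:
select t.date, t.value
from test t
inner join (select date, max(time) as max_time from test group by date) m
on m.date = t.date and m.max_time = t.time;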
I have a rollup table that sums up raw data for a given hour. It looks something like this:
stats_hours:
- obj_id : integer
- start_at : datetime
- count : integer
The obj_id points to a separate table, the start_at field contains a timestamp for the beginning of the hour of the data, and the count contains the sum of the data for that hour.
I would like to build a query that returns a set of data per day, so something like this:
Date | sum_count
2014-06-01 | 2000
2014-06-02 | 3000
2014-06-03 | 0
2014-06-04 | 5000
The query that I built does a grouping on the date column and sums up the count:
SELECT date(start_at) as date, sum(count) as sum_count
FROM stats_hours GROUP BY date;
This works fine unless I have no data for a given date, in which case it obviously leaves out the row:
Date | sum_count
2014-06-01 | 2000
2014-06-02 | 3000
2014-06-04 | 5000
Does anyone know of a good way in SQL to return a zeroed-out row in the case that there is no data for a given date group? Maybe some kind of case statement?
You need a full list of dates first, then connect that list to your available dates and group by that. Try the following:
--define start and end limits
Declare @todate datetime, @fromdate datetime
Select @fromdate='2009-03-01', @todate='2014-06-04'
;With DateSequence( Date ) as
(
Select @fromdate as Date
union all
Select dateadd(day, 1, Date)
from DateSequence
where Date < @todate
)
--select result
SELECT DateSequence.Date, ISNULL(SUM(Stats_Hours.Count), 0) AS Sum_Count
FROM
DateSequence
LEFT JOIN
Stats_Hours ON DateSequence.Date = CAST(Stats_Hours.Start_At AS date)
GROUP BY DateSequence.Date
option (MaxRecursion 0)
EDIT: CTE code from this post
I have something like this:
SELECT *
FROM (
SELECT prodid, date, time, tmp, rowid
FROM live_pilot_plant
WHERE date BETWEEN CONVERT(DATETIME, '3/19/2012', 101)
AND CONVERT(DATETIME, '3/31/2012', 101)
) b
WHERE b.rowid % 400 = 0
FYI: the reason for the CONVERT in the WHERE clause is that my date is stored as a varchar(10); I had to convert it to datetime in order to get the correct range of data. (I tried a bunch of different things and this worked.)
I'm wondering how I can return the data I want every 4 hours during those selected dates. I have data collected approximately every 5 seconds (with some breaks in the data), i.e. data wasn't collected during a 2-hour period, but then continues at 5-second increments.
In my example I just used a modulo on my rowid, and the syntax works, but as I mentioned above there are some periods where data isn't collected, so logic like "take data every 5 seconds, multiply by 4 hours, and you know approximately how many rows are in between" won't work.
My time column is a varchar column in the form hh:mm:ss.
My ideal output is:
| prodid | date | time | tmp |
| 4 | 3/19/2012 | 10:00:00 | 2.3 |
| 7 | 3/19/2012 | 14:00:24 | 3.2 |
As you can see, the result can be a bit off (in terms of seconds); I mainly need the approximate value in terms of time.
Thank you in advance.
This should work
select prodid, date, time, tmp, rowid
from live_pilot_plant as lpp
inner join (
select min(prodid) as prodid -- is prodid your PK? if not, change it to rowid or whatever else is your PK
from live_pilot_plant
WHERE date BETWEEN CONVERT(DATETIME, '3/19/2012', 101) -- or whatever you want
AND CONVERT(DATETIME, '3/31/2012', 101) -- filtering in the inner select performs better
group by date,
floor( -- floor makes the trick
convert(float,convert(datetime, time)) -- assumes "time" column is a varchar containing data like '19:23:05'
* 6 -- 6 comes form 24 hours / 4 hours
)
) as filter on lpp.prodid = filter.prodid -- if prodid is not the PK also correct here.
A side note for everyone else who has date + time data in a single datetime field, say named "when_it_was": the group by can be as simple as:
group by floor(convert(float, when_it_was) * 6) -- again, 6 comes from 24/4
Something along the lines of the following should work. Basically, create date + time partitions, each partition representing a block of 4 hours, and pick the first record (ranker = 1) from each partition:
select *
from (
select *,
row_number() over (partition by date, cast(left(time, charindex(':', time) - 1) as int) / 4
order by date, time) as ranker
from live_pilot_plant
) Z
where ranker = 1
Assuming rowid is a PK that increases with date/time: just convert the time field to a 4-hour interval number with convert(int, substring(time, 1, 2)) / 4 and select MIN(rowid) from each 4-hour group in a day:
select prodid, date, time, tmp, rowid from live_pilot_plant where rowid in
(
select min(rowid)
from live_pilot_plant
WHERE CONVERT(DATETIME, date, 101) BETWEEN CONVERT(DATETIME, '3/19/2012', 101)
AND CONVERT(DATETIME, '3/31/2012', 101)
group by date,convert(int,substring(time,1,2))/4
)
order by CONVERT(DATETIME, date, 101),time
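To see how the 4-hour bucket expression behaves, here are the two times from the desired output run through it (a standalone sketch; integer division between ints truncates in T-SQL):
select convert(int, substring('10:00:00', 1, 2)) / 4 as bucket_10am, -- 10 / 4 = 2
convert(int, substring('14:00:24', 1, 2)) / 4 as bucket_2pm; -- 14 / 4 = 3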