Subtract 'X' hours from a column in Oracle - SQL

I have a table 'ABC' with a column TIME_THU which has values like 04:00, 04:02, 04:03. The column is VARCHAR2 and I need to reduce the values to 00:00, 00:02, 00:03, i.e. shift them back by 4 hours.
I need to use an UPDATE query to make this change.
Something like,
Update ABC set TIME_THU = TIME_THU -4;
Thanks

Given your data, the best option seems to be to convert to a date, subtract the hours, then convert back.
UPDATE ABC SET TIME_THU = to_char(to_date(TIME_THU, 'hh24:mi') - (4/24), 'hh24:mi');
The caveat here is that if any of your times are less than 04:00, you may get incorrect results. I don't know enough about your requirements to say what the correct result should be in that scenario, so I can't solve it here.
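To see the wrap-around concretely, here is a quick check you can run as-is (the sample value is mine):
SELECT to_char(to_date('02:00', 'hh24:mi') - (4/24), 'hh24:mi') AS shifted
FROM dual;
-- returns '22:00', i.e. a time before 04:00 wraps into the previous day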
You may also want to investigate the interval data type, which is better suited to storing times disconnected from a date.
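As a sketch of that direction (the new column name TIME_THU_IV, and defaulting the seconds to :00, are my assumptions):
ALTER TABLE ABC ADD (TIME_THU_IV INTERVAL DAY TO SECOND);
UPDATE ABC SET TIME_THU_IV = TO_DSINTERVAL('0 ' || TIME_THU || ':00');
-- interval arithmetic then replaces the string round-trip:
UPDATE ABC SET TIME_THU_IV = TIME_THU_IV - INTERVAL '4' HOUR;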

Related

SQL Server adding two time columns in a single table and putting result into a third column

I have a table containing two time columns like this:
Time1 Time2
07:34:33 08:22:44
I want to add the times in these two columns and put the result of the addition into a third column, say Time3.
Any help would be appreciated. Thanks.
If the value you expect as the result is 15:57:17, then you can get it by calculating, for instance, the number of seconds from midnight for Time1 and adding that value to Time2:
select dateadd(second,datediff(second,0,time1),time2) as Time3
from your_table
I'm not sure how meaningful adding two discrete time values together is, though, unless they are meant to represent durations. In that case the time datatype might not be the best choice: it is meant for time-of-day data, it only has a range of 00:00:00.0000000 through 23:59:59.9999999, and an addition could overflow (and hence wrap around).
If the result you want isn't 15:57:17 then you should clarify the question and add the desired output.
The engine doesn't understand addition of two time values, because it thinks you can't add two times of day. You get:
Msg 8117, Level 16, State 1, Line 8
Operand data type time is invalid for add operator.
If these are elapsed times rather than times of day, you could take them apart with DATEPART, but in SQL Server 2008 you will have to use CONVERT to put the value back together, plus all the gymnastics of doing base-60 addition.
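A sketch of that disassembly and base-60 reassembly (the variable names are mine; it produces 15:57:17 for the sample data above):
DECLARE @t1 time = '07:34:33', @t2 time = '08:22:44';
DECLARE @s int = DATEPART(hour, @t1) * 3600 + DATEPART(minute, @t1) * 60 + DATEPART(second, @t1)
               + DATEPART(hour, @t2) * 3600 + DATEPART(minute, @t2) * 60 + DATEPART(second, @t2);
-- reassemble as hh:mm:ss, letting the hours part exceed 23 for long durations
SELECT CAST(@s / 3600 AS varchar(10)) + ':'
     + RIGHT('0' + CAST((@s % 3600) / 60 AS varchar(2)), 2) + ':'
     + RIGHT('0' + CAST(@s % 60 AS varchar(2)), 2) AS Time3;  -- 15:57:17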
If you have the option, it would be best to store the time values as NUMERIC with a positive scale, where the unit of measure is hours, and then break them down when finally reporting them. Something like this:
DECLARE @r NUMERIC(7, 5);
SET @r = 8.856;
SELECT FLOOR(@r) AS Hours,
       FLOOR(60 * (@r - FLOOR(@r))) AS Minutes,
       60 * ((60 * @r) - FLOOR(60 * @r)) AS Seconds;
Returns
Hours Minutes Seconds
8 51 21.60000
There is an advantage to writing a user-defined function to do this: it eliminates the repeated 60 * @r calculations.
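A sketch of such a function (the name dbo.SplitHours is hypothetical; the derived table computes the minutes term only once):
CREATE FUNCTION dbo.SplitHours (@r NUMERIC(7, 5))
RETURNS TABLE
AS RETURN
(
    SELECT FLOOR(@r) AS Hours,
           FLOOR(m) AS Minutes,
           60 * (m - FLOOR(m)) AS Seconds
    FROM (SELECT 60 * (@r - FLOOR(@r)) AS m) AS t
);
-- usage
SELECT * FROM dbo.SplitHours(8.856);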

SQL Like function Broken? or Limited?

I am trying to use the LIKE operator to get data with similar names. Everything looks fine, but the data I get back is missing some values whenever the result exceeds roughly 20 rows.
I have a very basic query. I just want data that starts with Lab, ideally for the whole day, or at least 12 hours. The query below misses some data, and I cannot discern a pattern in what it picks to skip.
SELECT History.TagName, DateTime, Value FROM History
WHERE History.TagName like ('Lab%')
AND Quality = 0
AND wwRetrievalMode = 'Full'
AND DateTime >= '20150811 6:00'
AND DateTime <= '20150811 18:00'
To give you an idea of the data I am pulling, I have Lab.Raw.NTU, Lab.Raw.Alk, Lab.Sett.NTU, etc. Most of the tags should have values at 6am/pm, 10am/pm, and 2am/pm. Some have more, a few have less; not important. When I change the query to be more specific (i.e. only a 1-hour window, or LIKE 'Lab.Raw.NTU'), I get all of my data. Currently, this will spit out data for all tags, and I get both 6am and 6pm data, but certain values will be missing, such as Lab.Raw.NTU at 6pm. Other data seems to be missing if I change the window to the previous day or the night shift, so I don't think it has to do with the data itself. Something weird is going on with LIKE, but I have no idea what.
Is there another way to get the tagnames that I want besides like? Such as Tagname > Lab and Tagname <= Labz? (that gives me an error, so I am thinking not)
Please help.
It appears that you are using the LIKE operator correctly, so that could be a red herring. Check the data type of the DateTime field. If it is character-based, such as varchar, you are doing string comparisons instead of date comparisons, which could cause unexpected results. Try doing an explicit cast to ensure the values are compared as dates:
DateTime >= convert(datetime, '20150811 6:00')
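Applied to the original query, that would look like this (a sketch; only needed if DateTime really is stored as a character type):
SELECT History.TagName, DateTime, Value FROM History
WHERE History.TagName LIKE 'Lab%'
AND Quality = 0
AND wwRetrievalMode = 'Full'
AND DateTime >= convert(datetime, '20150811 6:00')
AND DateTime <= convert(datetime, '20150811 18:00')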

Using BETWEEN clause

Whenever you write a query where you need to filter rows on a range of values, should you use the BETWEEN clause or >= and <=?
Which one is better in performance?
Neither. They create exactly the same execution plan.
When I use one or the other depends not on performance, but on the data.
If the data are Discrete Values, then I use BETWEEN...
x BETWEEN 0 AND 9
But if the data are Continuous Values, then that doesn't work so well...
x BETWEEN 0.000 AND 9.999999999999999999
Instead, I use >= AND <...
x >= 0 AND x < 10
Interestingly, however, the >= AND < technique actually works for both Continuous and Discrete data types. So, in general, I rarely use BETWEEN at all.
Also, don't use BETWEEN for date/time range queries.
What does the following really mean?
BETWEEN '20120201' AND '20120229'
Some people think that means get me all the data from February, including all of the data at any time on February 29th. The above actually gets translated to:
BETWEEN '20120201 00:00:00.000' AND '20120229 00:00:00.000'
So if there is data on the 29th any time after midnight, your report is going to be incomplete.
People also try to be clever and pick the "end" of the day:
BETWEEN '20120201 00:00:00.000' AND '20120229 23:59:59.997'
That works if the data type is datetime. If it is smalldatetime, the end of the range gets rounded up, and you may include data from the next day that you didn't mean to. If it's datetime2, you might actually miss a small portion of data that happened in the last 2+ milliseconds of the day. In most cases that is statistically irrelevant, but if the query is wrong, the query is wrong.
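A quick sketch of the smalldatetime rounding (SQL Server; the literal is the February example above):
DECLARE @d smalldatetime = '20120229 23:59:59.997';
SELECT @d;  -- returns 2012-03-01 00:00:00: the endpoint rounds up into the next day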
So for date range queries I always strongly recommend using an open-ended range, e.g. to report on the month of February the WHERE clause would say "on or after February 1st, and before March 1st" as follows:
WHERE date_col >= '20120201' AND date_col < '20120301'
BETWEEN can work as expected with the date type, since it carries no time component, but I still prefer an open-ended range in queries, because someone may later change the underlying data type to one that includes time.
I blogged a lot more details here:
What do BETWEEN and the devil have in common?

Mysql Datetime queries

I'm new to this forum. I've been having trouble constructing a MySQL query. Basically, I want to select data and use some sort of function to output the timestamp field in a certain way. I know that DATE_FORMAT can do this by minute, day, hour, etc. But consider the following:
Say it is 12:59pm. I want to be able to select data from the past day and have it placed into two-hour-wide time 'bins' based on its timestamp.
So these bins would be: 10:00am, 8:00am, 6:00am, 4:00am, etc., and the query would convert each row's timestamp into one of these bins.
E.g.:
4:45am becomes 4:00am
6:30am becomes 6:00am
9:55am becomes 8:00am
10:03am becomes 10:00am
11:00am becomes 10:00am
Make sense? The width of the bins needs to be dynamic as well. I hope I described the problem clearly, and any help is appreciated.
Examples:
Monthly buckets:
GROUP BY YEAR(datestampfield) desc, MONTH(datestampfield) desc
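A complete query built on that grouping might look like this (the names mytable and datestampfield are placeholders):
SELECT YEAR(datestampfield) AS yr, MONTH(datestampfield) AS mo, COUNT(*) AS n
FROM mytable
GROUP BY yr, mo
ORDER BY yr DESC, mo DESC;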
Hourly buckets, with number of hours configurable:
SET @rangehrs = 2;
SELECT *, FLOOR(HOUR(dateadded) / @rangehrs) * @rangehrs AS x
FROM mytable
GROUP BY FLOOR(HOUR(dateadded) / @rangehrs) * @rangehrs
LIMIT 5;
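As a quick check against the question's examples (the literal date is arbitrary): with a bin width of 2, a 9:55am timestamp lands in the 8:00 bin, because FLOOR(9 / 2) * 2 = 8.
SELECT FLOOR(HOUR('2015-08-11 09:55:00') / 2) * 2 AS bin;  -- returns 8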
Sounds like you're looking for a histogram of time. I'm not sure that's a real thing, but the term 'histogram' might get you to a good place, like this related question:
Getting data for histogram plot

How to design SQL tables when column data arrives in multiple types/margins of error?

I've been given a stack of data where a particular value has been collected sometimes as a date (YYYY-MM-DD) and sometimes as just a year.
Depending on how you look at it, this is either a variance in type or a margin of error.
This is a subprime situation, but I can't afford to recover or discard any data.
What's the optimal (eg. least worst :) ) SQL table design that will accept either form while avoiding monstrous queries and allowing maximum use of database features like constraints and keys*?
*i.e. Entity-Attribute-Value is out.
You could store the year, month and day components in separate columns. That way, you only need to populate the columns for which you have data.
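A minimal sketch of that layout (table and column names are hypothetical):
CREATE TABLE events (
    id INTEGER PRIMARY KEY,
    event_year INTEGER NOT NULL,
    event_month INTEGER,  -- NULL when only the year was collected
    event_day INTEGER     -- NULL when only the year was collected
);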
If it comes in as just a year, make it default to 01 for month and day: YYYY-01-01.
This way you can still use a date/datetime datatype and don't have to worry about invalid dates.
Either bring it in as a string unmolested and make it consistent in another step, or modify the year-only values during the import, as SQLMenace recommends.
I'd store the value in a DATETIME type and another value (just an integer will do, or some kind of enumerated type) that signifies its precision.
It would be easier to give more information if you mentioned what kind of queries you will be doing on the data.
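A minimal sketch of the DATETIME-plus-precision idea (the names and precision codes are my own):
CREATE TABLE observations (
    event_date DATETIME NOT NULL,      -- year-only values padded to Jan 1
    date_precision SMALLINT NOT NULL   -- e.g. 1 = year only, 2 = full date
);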
Either fix it, then store it (OK, not an option)
Or store it broken, with a fixed computed column alongside.
Something like this
CREATE TABLE ...
...
Broken varchar(20),
Fixed AS CAST(CASE WHEN Broken LIKE '[12][0-9][0-9][0-9]' THEN Broken + '0101' ELSE Broken END AS datetime)
This also allows you to tell good source data from bad.
If you don't always have a full date, what sort of keys and constraints would you need? Perhaps store two columns of data; a full date, and a year. For data that has only year, the year is stored and date is null. For items with full info, both are populated.
I'd put three columns in the table:
The provided value (YYYY-MM-DD or YYYY)
A date column, Date or DateTime data type, which is nullable
A year column, as an integer or char(4) depending upon your needs.
I'd always populate the year column, and populate the date column only when the provided value is a full date.
And, because you've kept the provided value, you can always re-process down the road if needs change.
An alternative solution would be that of a date mask (like a netmask for IPs). Store the date in a regular datetime field, and add an additional field of type smallint or similar, in which you indicate which parts are present (you could even go binary here):
If you have YYYY-MM-DD, you have 3 bits of data, each with the value 1 if that part is present and 0 if not.
Example:
Date Mask
2009-12-05 7 (111)
2009-12-01 6 (110: only the year and month are known; the day is set to the default 1)
2009-01-20 5 (101: for some strange reason, only the year and the day are known. January has 31 days, so it will never generate an error)
Which solution is better depends on what you will do with it.
This is better when you want to select rows with full dates that fall within a certain period (less to write). It is also easier this way to compare dates that carry masks like 7, 6, or 4. It may also take up less memory (date + smallint may be smaller than int + int + int; only if datetime uses 64 bits and smallint takes up as much as int would it come out the same).
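For instance, selecting only complete dates within a period might look like this (table and column names are hypothetical):
SELECT *
FROM masked_dates
WHERE date_mask = 7             -- all three parts known
  AND event_date >= '2009-01-01'
  AND event_date < '2010-01-01';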
I was going to suggest the same solution as @ninesided did above. Additionally, you could have a date field and a field that quantitatively represents your uncertainty. This offers the advantage of being able to represent things like "on or about Sept 23, 2010". The problem is that to represent the case where you only know the year, you'd have to set your date to the middle of the year, with 182.5 days' uncertainty (assuming a non-leap year), which seems ugly.
You could use a similar but distinct approach with a mask that represents what date parts you're confident about - that's what SQLMenace offered in his answer above.
+1 each to recommendations from ninesided, Nikki9696 and Jeff Siver - I support all those answers though none was exactly what I decided upon.
My solution:
a date column used only for complete dates
an int column used for years
a constraint to ensure integrity between the two
a trigger to populate the year if only date is supplied
Advantages:
can run simple (one-column) queries on the date column with missing data ignored (by using NULL for what it was designed for)
can run simple (one-column) queries on the year column for any row with a date (because year is automatically populated)
insert either year or date or both (provided they agree)
no fear of disagreement between columns
self explanatory, intuitive
I would argue that methods using YYYY-01-01 to signify missing data (when flagged as such with a second explanatory column) fail seriously on points 1 and 5.
Example code for Sqlite 3:
create table events
(
rowid integer primary key,
event_year integer,
event_date date,
-- the comparison is unknown when either column is NULL, so the check only
-- fails when both columns are present and disagree
check (event_year = cast(strftime("%Y", event_date) as integer))
);
create trigger year_trigger after insert on events
begin
update events set event_year = cast(strftime("%Y", event_date) as integer)
where rowid = new.rowid and event_date is not null;
end;
-- various methods to insert
insert into events (event_year, event_date) values (2008, "2008-02-23");
insert into events (event_year) values (2009);
insert into events (event_date) values ("2010-01-19");
-- select events in January without expressions on supplementary columns
select rowid, event_date from events where strftime("%m", event_date) = "01";