Oracle: How to query a datetime column for both not-null and null values? - sql

I want to query a datetime column for both not-null and null values.
But my query currently returns only the not-null values. I want to query both.
Query:
select l.com_code,
l.p_code,
to_char(l.effdate,'dd/mm/yyyy') effdate,to_char(l.expdate,'dd/mm/yyyy') expdate
from RATE_BILL l
where ( to_date('02/06/2016','dd/mm/yyyy') <= to_date(l.effdate,'dd/mm/yyyy')
or to_date('02/06/2016','dd/mm/yyyy') <= to_date(l.expdate,'dd/mm/yyyy') )
Data Sample
com_code | p_code | effdate | expdate
A | TEST01 | 01/01/2016 | 31/05/2016
A | Test01 | 01/06/2016 |
Query Result:
com_code | p_code | effdate | expdate
A | TEST01 | 01/01/2016 | 31/05/2016
A | Test01 | 01/06/2016 |
If expdate is null it should be treated as '31/12/9998', but it is shown as null in the DB.
When querying with datetime = '02/06/2016', the result should be:
com_code | p_code | effdate | expdate
A | Test01 | 01/06/2016 |
But when the where clause is:
where ( to_date('31/05/2016','dd/mm/yyyy') <= to_date(l.effdate,'dd/mm/yyyy') or to_date('31/05/2016','dd/mm/yyyy') <= to_date(l.expdate,'dd/mm/yyyy') )
the result should be:
A | TEST01 | 01/01/2016 | 31/05/2016
A | Test01 | 01/06/2016 |
The datetime value compared against is the current datetime ("now").

First of all I must admit that I am not sure I understand your text from your wording [no offence intended]. Feel free to comment if this answer does not address your needs.
The where condition of a query is built on the columns of the table/view and their SQL data types. There is no need to convert date columns to the date data type.
Moreover, it is potentially harmful here since it implies an implicit conversion:
date column
-> char /* implicit, using the session's default format */
-> date /* explicit format; in general this will differ
           from the format the argument string follows */
So change the where condition to:
where to_date('02/06/2016','dd/mm/yyyy') <= l.effdate
or to_date('02/06/2016','dd/mm/yyyy') <= l.expdate
To cater for null values, complement the where condition with a 'sufficiently large' datetime to compare against in case of null values in the db columns:
where to_date('02/06/2016','dd/mm/yyyy') <= nvl(l.effdate, to_date('31/12/9998','dd/mm/yyyy'))
or to_date('02/06/2016','dd/mm/yyyy') <= nvl(l.expdate, to_date('31/12/9998','dd/mm/yyyy'))
You are free to use different cutoff dates. For example, you might wish to use expdate from rate_bill when it is not null and the current datetime otherwise:
where to_date('02/06/2016','dd/mm/yyyy') <= nvl(l.effdate, to_date('31/12/9998','dd/mm/yyyy'))
or to_date('02/06/2016','dd/mm/yyyy') <= nvl(l.expdate, sysdate)
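Putting the pieces together, the full query from the question with the NVL-based where clause might look like this (a sketch reusing the question's table and columns; '31/12/9998' is the cutoff convention the asker described):

```sql
-- Sketch: compare the date columns directly, substituting a far-future
-- cutoff for null expdate values (per the asker's '31/12/9998' convention)
select l.com_code,
       l.p_code,
       to_char(l.effdate, 'dd/mm/yyyy') effdate,
       to_char(l.expdate, 'dd/mm/yyyy') expdate
from   rate_bill l
where  to_date('02/06/2016', 'dd/mm/yyyy') <= l.effdate
   or  to_date('02/06/2016', 'dd/mm/yyyy')
          <= nvl(l.expdate, to_date('31/12/9998', 'dd/mm/yyyy'));
```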

I don't understand the details of your problem, but I think you have problems with comparison of null values.
Null values are ignored in comparisons. To select these rows, you should explicitly check l.expdate is null,
e.g.
-- select with expdate < today or with no expdate
select *
from RATE_BILL l
where l.expdate is null or
l.expdate <= trunc(sysdate)

Related

SQL MIN of multiple columns handle null values

I'm trying to use the MIN() aggregate function to fetch the minimum date from two columns, and I was able to write the SQL query for this. But if one of the columns has NULL values, my query below falls back to the default date '1900-01-01T00:00:00Z'. It should take the date from whichever of Column1 or Column2 has a value.
Here is the schema and the data SQLFiddle
+----+--------------+---------------+
| ID | ObservedDate | SubmittedDate |
+----+--------------+---------------+
| 1 | '2017-02-14' | '2017-02-15' |
| 1 | '2017-01-21' | '2017-01-22' |
| 2 | '2017-01-21' | |
+----+--------------+---------------+
Query
SELECT [ID],
CASE WHEN MIN(ObservedDate)<=MIN(SubmittedDate)
THEN COALESCE(MIN(ObservedDate),MIN(SubmittedDate))
ELSE COALESCE(MIN(SubmittedDate),MIN(ObservedDate)) end as RiskReferenceDate
FROM Measurements
group by ID
The reason I used COALESCE is that I want my query to consider the data from the column which has a value and ignore the column which has a null value.
Expected Result
+----+-------------------+
| ID | RiskReferenceDate |
+----+-------------------+
| 1 | '2017-01-21' |
| 2 | '2017-01-21' |
+----+-------------------+
Your problem is not NULL values. Your problem is empty strings. An empty string is implicitly converted to date 0 ('1900-01-01').
The simplest solution is to fix your code to insert the correct value, as shown in this SQL Fiddle.
You can enforce this by adding a check constraint:
alter table Measurements add constraint chk_measurements_ObservedDate check (ObservedDate > '2000-01-01'); -- or whatever date
alter table Measurements add constraint chk_measurements_SubmittedDate check (SubmittedDate > '2000-01-01'); -- or whatever date
If you have existing data in the table, you can do:
update Measurements
set ObservedDate = NULLIF(ObservedDate, 0),
SubmittedDate = NULLIF(SubmittedDate, 0)
where ObservedDate = 0 or SubmittedDate = 0;
You can fix this in place with a bit more complexity in the query:
SELECT [ID],
(CASE WHEN MIN(NULLIF(ObservedDate, 0)) <= MIN(NULLIF(SubmittedDate, 0))
THEN COALESCE(MIN(NULLIF(ObservedDate, 0)), MIN(NULLIF(SubmittedDate, 0)))
ELSE COALESCE(MIN(NULLIF(SubmittedDate, 0)), MIN(NULLIF(ObservedDate, 0)))
END) as RiskReferenceDate
FROM Measurements
GROUP BY ID;
But I strongly urge you to fix the data.
I think the problem is being caused by the empty string you have inserted into one of your date columns; you should really fix that.
Anyway, this seems to work:
with a as (
  select ID, ObservedDate as Dt
  from Measurements
  where ObservedDate <> ''
  union all
  select ID, SubmittedDate
  from Measurements
  where SubmittedDate <> ''
)
select ID, min(Dt) as RiskReferenceDate
from a
group by ID
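On SQL Server, an alternative to the union is CROSS APPLY with a VALUES row constructor, which unpivots the two date columns per row; this is a sketch assuming the same Measurements table, with NULLIF mapping the zero dates to NULL so MIN() skips them:

```sql
-- Sketch: unpivot the two dates per row, let MIN() ignore NULLs
SELECT m.ID, MIN(v.Dt) AS RiskReferenceDate
FROM Measurements m
CROSS APPLY (VALUES (NULLIF(m.ObservedDate, 0)),
                    (NULLIF(m.SubmittedDate, 0))) AS v(Dt)
GROUP BY m.ID;
```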

SQL : Getting data as well as count from a single table for a month

I am working on a SQL query over a rather large dataset. The table data is shown below.
Existing table :
+---------+------+------------+
| id(!PK) | name | Date       |
+---------+------+------------+
| 1       | abc  | 21.03.2015 |
| 1       | def  | 22.04.2015 |
| 1       | ajk  | 22.03.2015 |
| 3       | ghi  | 23.03.2015 |
+---------+------+------------+
What I am looking for is an insert query into an empty table. The condition is like this :
Insert into an empty table, for each common id, the count of names for that id in March.
Output for above table would be like
+---------+-------+------------+
| some_id | count | Date       |
+---------+-------+------------+
| 1       | 2     | 21.03.2015 |
| 3       | 1     | 23.03.2015 |
+---------+-------+------------+
All I have is :
insert into empty_table values (some_id,count,date)
select id,count(*),date from existing_table where id=1;
Unfortunately above basic query doesn't suit this complex requirement.
Any suggestions or ideas? Thank you.
Updated query
insert into empty_table
select id,count(*),min(date)
from existing_table where
date >= '2015-03-01' and
date < '2015-04-01'
group by id;
Seems you want the number of unique names per id:
insert into empty_table
select id
,count(distinct name)
,min(date)
from existing_table
where date >= DATE '2015-03-01'
and date < DATE '2015-04-01'
group by id;
If I understand correctly, you just need a date condition:
insert into empty_table(some_id, count, date)
select id, count(*), min(date)
from existing_table
where id = 1 and
date >= date '2015-03-01' and
date < date '2015-04-01'
group by id;
Note: the list after the table name contains the columns being inserted. There is no values keyword when using insert . . . select.
insert into empty_table
select id, count(*) as mycnt, min(date) as mydate
from existing_table
group by id, year_month(date);
Please use the function provided by your RDBMS for obtaining a date part containing only year and month; since you did not provide the RDBMS version, and date-processing functionality varies widely between systems, year_month above is a placeholder.
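For reference, common year-and-month grouping expressions in a few systems (hedged: exact availability depends on your product and version):

```sql
-- Oracle:      GROUP BY id, TRUNC(date_col, 'MM')
-- PostgreSQL:  GROUP BY id, DATE_TRUNC('month', date_col)
-- MySQL:       GROUP BY id, DATE_FORMAT(date_col, '%Y-%m')
-- SQL Server:  GROUP BY id, DATEFROMPARTS(YEAR(date_col), MONTH(date_col), 1)
```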

Count and name content from a SQL Server table

I have a table which is structured like this:
+-----+-------------+-------------------------+
| id | name | timestamp |
+-----+-------------+-------------------------+
| 1 | someName | 2016-04-20 09:41:41.213 |
| 2 | someName | 2016-04-20 09:42:41.213 |
| 3 | anotherName | 2016-04-20 09:43:41.213 |
| ... | ... | ... |
+-----+-------------+-------------------------+
Now I am trying to create a query which selects all timestamps since time x and counts the number of times the same name occurs in the result.
As an example, if we applied this query to the table above, with 2016-04-20 09:40:41.213 as the date from which it should count, the result should look like this:
+-------------+-------+
| name | count |
+-------------+-------+
| someName | 2 |
| anotherName | 1 |
+-------------+-------+
What I have accomplished so far is the following query, which gives me the names, but not their count:
WITH screenshots AS
(
SELECT * FROM SavedScreenshotsLog
WHERE timestamp > '2016-04-20 09:40:41.213'
)
SELECT s.name
FROM SavedScreenshotsLog s
INNER JOIN screenshots sc ON sc.name = s.name AND sc.timestamp = s.timestamp
ORDER BY s.name
I have browsed through stackoverflow but was not able to find a solution which fits my needs and as I am not very experienced with SQL, I am out of ideas.
You mention one table in your question, and then show a query with two tables. That makes it hard to follow the question.
What you are asking for is a simple aggregation:
SELECT name, COUNT(*)
FROM SavedScreenshotsLog
WHERE timestamp > '2016-04-20 09:40:41.213'
GROUP BY name
ORDER BY COUNT(*) DESC;
EDIT:
If you want "0" values, you can use conditional aggregation:
SELECT name,
SUM(CASE WHEN timestamp > '2016-04-20 09:40:41.213' THEN 1 ELSE 0 END) as cnt
FROM SavedScreenshotsLog
GROUP BY name
ORDER BY cnt DESC;
Note that this will run slower because there is no filter on the dates prior to aggregation.
CREATE TABLE #TEST (name varchar(100), dt datetime)
INSERT INTO #TEST VALUES ('someName','2016-04-20 09:41:41.213')
INSERT INTO #TEST VALUES ('someName','2016-04-20 09:41:41.213')
INSERT INTO #TEST VALUES ('anotherName','2016-04-20 09:43:41.213')
declare #YourDatetime datetime = '2016-04-20 09:41:41.213'
SELECT name, count(dt)
FROM #TEST
WHERE dt >= #YourDatetime
GROUP BY name
I've posted this answer because the query above can generate errors when converting the string in the where clause into a datetime; it depends on the format of the datetime.

Append a zero to value if necessary in SQL statement DB2

I have a complex SQL statement in which I need to match up two tables based on a join. The initial part of the complex query has a location number that is stored in one table as a SMALLINT, and the second table has the store number stored as a CHAR(4). I have been able to cast the smallint to a char(4) like this:
CAST(STR_NBR AS CHAR(4)) AND LOCN_NBR
The issue is that, because the SMALLINT suppresses the leading '0', the join returns null values from the right-hand side of the LEFT OUTER JOIN.
Example
Table set A (Smallint)              Table set B (Char(4))
|  96 |                             | 096 |
|  97 |                             | 097 |
|  99 |                             | 099 |
| 100 |  <- These return ->         | 100 |
| 101 |  <- These return ->         | 101 |
| 102 |  <- These return ->         | 102 |
I need to make it so that they all return, but since it is in a join statement, how do you prepend a zero in certain conditions and not in others?
SELECT RIGHT('0000' || STR_NBR, 4)
FROM TABLE_A
Casting Table B's CHAR to SMALLINT would work as well (DB2 has no TINYINT type):
SELECT ...
FROM TABLE_A A
JOIN TABLE_B B
ON A.num = CAST(B.txt AS SMALLINT)
Try LPAD function:
LPAD(col,3,'0' )
I was able to successfully match it out to obtain a 3 digit location number at all times by doing the following:
STR_NBR was originally defined as a SmallINT(2)
LOCN_NO was originally defined as a Char(4)
SELECT ...
FROM TABLE_A AS A
JOIN TABLE_B AS B
ON CAST(SUBSTR(DIGITS(A.STR_NBR),3,3)AS CHAR(4)) = B.LOCN_NO
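For what it's worth, DIGITS is what does the padding here: on DB2 it renders a SMALLINT as a fixed-width five-character string, so SUBSTR(..., 3, 3) keeps the last three digits, zero-padded. A small illustration, assuming the question's TABLE_A:

```sql
-- DIGITS(96)  -> '00096'   SUBSTR('00096', 3, 3) -> '096'
-- DIGITS(100) -> '00100'   SUBSTR('00100', 3, 3) -> '100'
SELECT A.STR_NBR,
       SUBSTR(DIGITS(A.STR_NBR), 3, 3) AS padded_nbr
FROM   TABLE_A A;
```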

SQL: earliest date from set of date fields

I have a series of dates associated with a unique identifier in a table. For example:
1 | 1999-04-01 | 0000-00-00 | 0000-00-00 | 0000-00-00 | 2008-12-01 |
2 | 1999-04-06 | 2000-04-01 | 0000-00-00 | 0000-00-00 | 2010-04-03 |
3 | 1999-01-09 | 0000-00-00 | 0000-00-00 | 0000-00-00 | 2007-09-03 |
4 | 1999-01-01 | 0000-00-00 | 1997-01-01 | 0000-00-00 | 2002-01-04 |
Is there a way, to select the earliest date from the predefined list of DATE fields using a straightforward SQL command?
So the expected output would be:
1 | 1999-04-01
2 | 1999-04-06
3 | 1999-01-09
4 | 1997-01-01
I am guessing this is not possible but I wanted to ask and make sure. My current solution in mind involves putting all the dates in a temporary table and then using that to get the MIN()
thanks
Edit: The problem with using LEAST() as suggested is that its behaviour is to return NULL if any of the columns is NULL. In a series of dates like the dataset in question, any date might be NULL. I would like to obtain the earliest actual date from the set of dates.
SOLUTION: Used a combination of LEAST() and IF() in order to filter out NULL dates.
SELECT LEAST( IF(date1=0,NOW(),date1), IF(date2=0,NOW(),date2), [...] );
Lessons learnt: a) COALESCE does not treat '0000-00-00' as a NULL date; b) LEAST will return '0000-00-00' as the smallest value. I would guess this is due to internal integer comparison(?)
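Equivalently, NULLIF can map the zero dates away and COALESCE can supply a far-future fallback in one pass; a sketch assuming MySQL, with hypothetical column names date1..date5 standing in for the question's unnamed date columns:

```sql
-- Sketch: '0000-00-00' -> NULL -> far-future fallback, then take the least
SELECT id,
       LEAST(COALESCE(NULLIF(date1, '0000-00-00'), '9999-12-31'),
             COALESCE(NULLIF(date2, '0000-00-00'), '9999-12-31'),
             COALESCE(NULLIF(date5, '0000-00-00'), '9999-12-31')) AS earliest
FROM   my_table;
```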
select id, least(date_col_a, date_col_b, date_col_c) from table
Update:
select id, least (
case when date_col_a = '0000-00-00' then now() + interval 100 year else date_col_a end,
case when date_col_b = '0000-00-00' then now() + interval 100 year else date_col_b end) from table
Actually you can do it like below, or using a large case structure, or with least(date1, date2, dateN), but with that, null could be the minimum value...
select rowid, min(date)
from
( select rowid, date1 from table
union all
select rowid, date2 from table
union all
select rowid, date3 from table
/* and so on */
)
group by rowid;
HTH
select
id,
least(coalesce(date1, '9999-12-31'), ....)
from
table