I have a simple select query that has this result:
first_date | last_date | outstanding
14/01/2015 | 14/04/2015 | 100000
I want to split it into:
first_date | last_date | period | outstanding
14/01/2015 | 31/01/2015 | 31/01/2015 | 100000
01/02/2015 | 28/02/2015 | 28/02/2015 | 100000
01/03/2015 | 31/03/2015 | 31/03/2015 | 100000
01/04/2015 | 14/04/2015 | 31/04/2015 | 100000
Please show me how to do this simply, without using a function/procedure, object, or cursor.
Try:
WITH my_query_result AS(
SELECT date '2015-01-14' as first_date, date '2015-04-14' as last_date,
100000 as outstanding
FROM dual
)
SELECT greatest( trunc( add_months( first_date, level - 1 ),'MM'), first_date )
as first_date,
least( trunc( add_months( first_date, level ),'MM')-1, last_date )
as last_date,
trunc( add_months( first_date, level ),'MM')-1 as period,
outstanding
FROM my_query_result t
connect by level <= months_between( trunc(last_date,'MM'), trunc(first_date,'MM') ) + 1;
A side note: April has only 30 days, so a date 31/04/2015 in your question is wrong.
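If the real source query returns more than one row, a plain CONNECT BY LEVEL over it will cross-multiply the rows. Here is a minimal sketch of the same splitting logic using recursive subquery factoring instead (assuming Oracle 11.2+ and a hypothetical loans table with a loan_id key; adapt the names to your real query):
WITH src (loan_id, first_date, last_date, outstanding) AS (
  -- hypothetical source query; replace with the real select
  SELECT loan_id, first_date, last_date, outstanding FROM loans
),
split (loan_id, first_date, last_date, outstanding, mon) AS (
  SELECT loan_id, first_date, last_date, outstanding, 1 FROM src
  UNION ALL
  SELECT loan_id, first_date, last_date, outstanding, mon + 1
  FROM   split
  WHERE  mon < months_between( trunc(last_date,'MM'), trunc(first_date,'MM') ) + 1
)
SELECT greatest( trunc( add_months( first_date, mon - 1 ), 'MM' ), first_date ) AS first_date,
       least( trunc( add_months( first_date, mon ), 'MM' ) - 1, last_date )     AS last_date,
       trunc( add_months( first_date, mon ), 'MM' ) - 1                         AS period,
       outstanding
FROM   split
ORDER  BY loan_id, first_date;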
My transaction table has id NUMBER(11), name VARCHAR2(25), and transactiondate NUMBER(22).
I need to write a SQL query to fetch the transaction details; transactiondate should be returned in date & time format instead of as a number.
transaction table
ID Name transactiondate
1 AAA 2458010
2 BBB 2458351
3 CCC 2458712
I get the result below when I execute this query:
Select * from transaction where transactiondate <= TO_CHAR(TO_DATE('2019/09/17 00:00:00', 'YYYY/MM/DD hh24:mi:ss'), 'J');
ID Name transactiondate
1 AAA 2458010
2 BBB 2458351
I get a syntax error when I try to execute this query:
Select name, convert(datetime, convert(varchar(10), transactiondate)) as txndateformat
from transaction;
I'm expecting a query that returns name and transactiondate in a date format instead of as a number.
I get the result below when I execute the following:
Desc transaction;
Name Null? Type
Id Not Null Number(19)
Name Not Null VarChar2(100)
transactiondate Not Null Number(22)
It all depends on when you are measuring time zero from and what your units are.
Here are some typical solutions:
Oracle Setup:
CREATE TABLE transaction ( ID, Name, transactiondate ) AS
SELECT 1, 'AAA', 2456702 FROM DUAL UNION ALL
SELECT 2, 'BBB', 2456703 FROM DUAL
Query:
SELECT name,
TO_DATE( transactiondate, 'J' )
AS julian_date,
DATE '1970-01-01' + NUMTODSINTERVAL( transactiondate / 1000, 'SECOND' )
AS unix_timestamp,
DATE '1970-01-01' + NUMTODSINTERVAL( transactiondate, 'SECOND' )
AS seconds_since_1970,
DATE '1970-01-01' + NUMTODSINTERVAL( transactiondate, 'MINUTE' )
AS minutes_since_1970,
DATE '1970-01-01' + NUMTODSINTERVAL( transactiondate, 'HOUR' )
AS hours_since_1970,
DATE '1900-01-01' + NUMTODSINTERVAL( transactiondate, 'HOUR' )
AS hours_since_1900,
DATE '1899-12-30' + transactiondate
AS excel_date
FROM transaction
Output:
NAME | JULIAN_DATE | UNIX_TIMESTAMP | SECONDS_SINCE_1970 | MINUTES_SINCE_1970 | HOURS_SINCE_1970 | HOURS_SINCE_1900 | EXCEL_DATE
:--- | :------------------ | :------------------ | :------------------ | :------------------ | :------------------ | :------------------ | :------------------
AAA | 2014-02-13 00:00:00 | 1970-01-01 00:40:56 | 1970-01-29 10:25:02 | 1974-09-03 01:02:00 | 2250-04-05 14:00:00 | 2180-04-04 14:00:00 | 8626-03-21 00:00:00
BBB | 2014-02-14 00:00:00 | 1970-01-01 00:40:56 | 1970-01-29 10:25:03 | 1974-09-03 01:03:00 | 2250-04-05 15:00:00 | 2180-04-04 15:00:00 | 8626-03-22 00:00:00
db<>fiddle here
(Note: Excel dates are slightly more complicated if you need to support values before 1900-03-01, but most people do not, so only the simplified version is included above.)
I assume that numbers are epoch numbers.
For SQL Server:
SELECT DATEADD(ss, 2456702, '19700101') --ss means interval = seconds
For Oracle:
select to_date('19700101', 'YYYYMMDD') + ( 1 / 24 / 60 / 60) * 2456702
from dual;
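Whichever interpretation turns out to be right, the question ultimately asks for a date & time format; a minimal sketch wrapping the conversion in TO_CHAR (here using the Julian-day interpretation suggested by the asker's own 'J' query; swap in the epoch arithmetic above if that fits the data better):
SELECT name,
       -- 'J' treats the number as a Julian day; the time part will be 00:00:00
       TO_CHAR( TO_DATE( transactiondate, 'J' ),
                'YYYY-MM-DD HH24:MI:SS' ) AS txndateformat
FROM   transaction;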
I have a database table with a start date and a number of months. How can I transform that into multiple rows based on the number of months?
I want to transform this
Into this:
We can try using a calendar table here, which includes all possible start of month dates which might appear in the expected output:
with calendar as (
select '2017-09-01'::date as dt union all
select '2017-10-01'::date union all
select '2017-11-01'::date union all
select '2017-12-01'::date union all
select '2018-01-01'::date union all
select '2018-02-01'::date union all
select '2018-03-01'::date union all
select '2018-04-01'::date union all
select '2018-05-01'::date union all
select '2018-06-01'::date union all
select '2018-07-01'::date union all
select '2018-08-01'::date
)
select
t.id as subscription_id,
c.dt,
t.amount_monthly
from calendar c
inner join your_table t
on c.dt >= t.start_date and
c.dt < t.start_date + (t.month_count::text || ' month')::interval
order by
t.id,
c.dt;
Demo
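A hedged variant of the same idea: the hard-coded calendar CTE can be produced with generate_series() instead (assuming the same date range and the same your_table columns as above):
with calendar as (
  -- one row per first-of-month in the covered range
  select generate_series('2017-09-01'::date,
                         '2018-08-01'::date,
                         interval '1 month')::date as dt
)
select
  t.id as subscription_id,
  c.dt,
  t.amount_monthly
from calendar c
inner join your_table t
  on c.dt >= t.start_date and
     c.dt < t.start_date + (t.month_count::text || ' month')::interval
order by
  t.id,
  c.dt;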
This can easily be done using generate_series() in Postgres
select t.id,
g.dt::date,
t.amount_monthly
from the_table t
cross join generate_series(t.start_date,
t.start_date + interval '1' month * (t.month_count - 1),
interval '1' month) as g(dt);
OK, it's very easy to implement this in PostgreSQL, just use generate_series, as below:
select * from month_table ;
id | start_date | month_count | amount | amount_monthly
------+------------+-------------+--------+----------------
1382 | 2017-09-01 | 3 | 38 | 1267
1383 | 2018-02-01 | 6 | 50 | 833
(2 rows)
select
id,
generate_series(start_date,start_date + (month_count || ' month') :: interval - '1 month'::interval, '1 month'::interval)::date as date,
amount_monthly
from
month_table ;
id | date | amount_monthly
------+------------+----------------
1382 | 2017-09-01 | 1267
1382 | 2017-10-01 | 1267
1382 | 2017-11-01 | 1267
1383 | 2018-02-01 | 833
1383 | 2018-03-01 | 833
1383 | 2018-04-01 | 833
1383 | 2018-05-01 | 833
1383 | 2018-06-01 | 833
1383 | 2018-07-01 | 833
(9 rows)
You may not need this many subqueries, but this should help you understand how it can be broken down:
WITH date_minmax AS(
SELECT
min(start_date) as date_first,
(max(start_date) + (month_count::text || ' months')::interval)::date AS date_last
FROM "your_table"
GROUP BY month_count
), series AS (
SELECT generate_series(
date_first,
date_last,
'1 month'::interval
)::date as list_date
FROM date_minmax
)
SELECT
id as subscription_id,
list_date as date,
amount_monthly as amount
FROM series
JOIN "your_table"
ON list_date <@ daterange(
start_date,
(start_date + (month_count::text || ' months')::interval)::date
)
ORDER BY list_date
This should achieve the desired result http://www.sqlfiddle.com/#!17/7d943/1
I want to query data from Oracle and sort it by week, but the result begins with week 52, even though it is actually week 44 now.
This is my SQL:
SELECT *
FROM (SELECT
to_char(contract.MIGRATION_SUCCESS_DATE,'yyyy-iw') metric,
sum(contract.BLIS_MRR) mrr,
count(contract.CONTRACT_ID) count
FROM (SELECT DISTINCT
CONTRACT_ID,
BLIS_MRR,
MIGRATION_SUCCESS_DATE
FROM MR_MIGRATION_SITE) contract WHERE MIGRATION_SUCCESS_DATE < sysdate
GROUP BY to_char(contract.MIGRATION_SUCCESS_DATE,'yyyy-iw'))
ORDER BY metric DESC;
The result (shown as a screenshot in the question) begins with week 52.
I think you have to use the format to_char(contract.MIGRATION_SUCCESS_DATE,'iyyy-iw').
The year of the ISO week can differ from the calendar year; for example, January 1st 2017 was in week 52 of 2016, i.e. 2016-W52 according to the ISO definition!
I recommend the format 'iyyy-"W"iw', which is compliant with ISO 8601.
And perhaps change your GROUP BY clause to GROUP BY TRUNC(contract.MIGRATION_SUCCESS_DATE,'iw').
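Put together, a minimal sketch of the query from the question with the ISO-year format applied (same table and columns as above; the count alias is renamed to cnt here only for clarity):
SELECT *
FROM (SELECT
        to_char(contract.MIGRATION_SUCCESS_DATE,'iyyy-"W"iw') metric,
        sum(contract.BLIS_MRR) mrr,
        count(contract.CONTRACT_ID) cnt
      FROM (SELECT DISTINCT
              CONTRACT_ID,
              BLIS_MRR,
              MIGRATION_SUCCESS_DATE
            FROM MR_MIGRATION_SITE) contract
      WHERE MIGRATION_SUCCESS_DATE < sysdate
      GROUP BY to_char(contract.MIGRATION_SUCCESS_DATE,'iyyy-"W"iw'))
ORDER BY metric DESC;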
You get the problem when the date and the Monday of that date's week fall in different years.
To fix it you can use @Wernfried Domscheit's solution and the iyyy-iw format to get the ISO year and week:
TO_CHAR( contract.MIGRATION_SUCCESS_DATE, 'IYYY-IW' )
Initial (Incorrect) Solution:
To fix it you can truncate the date to the start of the ISO week and then convert it to yyyy-iw format:
TO_CHAR(
TRUNC( contract.MIGRATION_SUCCESS_DATE, 'IW' ),
'yyyy-iw'
)
For example:
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE test_data( dt ) AS
SELECT DATE '2016-12-31' FROM DUAL UNION ALL
SELECT DATE '2017-01-01' FROM DUAL UNION ALL
SELECT DATE '2017-01-02' FROM DUAL UNION ALL
SELECT DATE '2014-12-31' FROM DUAL;
Query 1:
SELECT dt,
TO_CHAR( dt, 'iyyy-iw' ) AS trunc_iw,
TO_CHAR( TRUNC( dt, 'IW' ), 'yyyy-iw' ) AS trunc_iw2,
TO_CHAR( dt, 'yyyy-iw' ) AS non_trunc_iw
FROM test_data
Results:
| DT | TRUNC_IW | TRUNC_IW2 | NON_TRUNC_IW |
|----------------------|----------|-----------|--------------|
| 2016-12-31T00:00:00Z | 2016-52 | 2016-52 | 2016-52 |
| 2017-01-01T00:00:00Z | 2016-52 | 2016-52 | 2017-52 |
| 2017-01-02T00:00:00Z | 2017-01 | 2017-01 | 2017-01 |
| 2014-12-31T00:00:00Z | 2015-01 | 2014-01 | 2014-01 | -- initial version is incorrect for this date
I have orders data for all items sold in my store for the past year. The table looks like this:
order_date item_id item_name order_quantity unit_price
--------------------------------------------------------------------
01-01-2017 a123 a234 2 10
04-02-2017 b123 b234 3 12
04-09-2017 c123 c234 1 15
04-10-2017 b123 b234 2 12
I need to pull data for number of unique items sold by week, month, and quarter. The query output should look like this:
timeline number unique_item_count
week 1 1
week 14 1
week 15 2
month 1 1
month 4 2
quarter 2 2
I've tried the following:
SELECT
TO_CHAR(c.ORDER_DAY, 'Q') AS QTR,
TO_CHAR(c.ORDER_DAY, 'MM') AS MNTH,
TO_CHAR(c.ORDER_DAY, 'WW') AS WK,
COUNT(DISTINCT CASE WHEN (c.QUANTITY*c.OUR_PRICE) > 0 THEN ITEM_ID ELSE NULL END) AS SALES_CNT
FROM TABLE c
GROUP BY TO_CHAR(c.ORDER_DAY, 'Q'), TO_CHAR(c.ORDER_DAY, 'MM'), TO_CHAR(c.ORDER_DAY, 'WW');
This works for weekly data; however, the monthly and quarterly figures are just sums of the weekly numbers, which is incorrect in this case, since the same item might be ordered in two different weeks, so the monthly count should be lower.
Is there a way to pull number of unique items purchased in each week, month, quarter?
Thanks!
You can do it without using the union of multiple statements by using UNPIVOT:
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE table_c (
order_date,
item_id,
item_name,
order_quantity,
order_price
) AS
SELECT DATE '2017-09-01' + LEVEL * 3 - 3, 1, 'aaa', 1, 1 FROM DUAL CONNECT BY LEVEL <= 10
UNION ALL
SELECT DATE '2017-09-01' + LEVEL * 4 - 4, 2, 'bbb', 1, 1 FROM DUAL CONNECT BY LEVEL <= 10
UNION ALL
SELECT DATE '2017-09-01' + LEVEL * 5 - 5, 3, 'ccc', 1, 1 FROM DUAL CONNECT BY LEVEL <= 10
Query 1:
SELECT year,
timeline,
"number",
COUNT( DISTINCT item_id )
FROM (
SELECT TO_CHAR( order_date, 'WW' ) AS week,
TO_CHAR( order_date, 'MM' ) AS month,
TO_CHAR( order_date, 'Q' ) AS quarter,
EXTRACT( YEAR from order_date ) AS year,
item_id
FROM table_c
WHERE order_quantity > 0
AND order_price > 0
)
UNPIVOT ( "number" FOR timeline IN ( week AS 'week', month AS 'month', quarter AS 'quarter' ) )
GROUP BY year, timeline, "number"
ORDER BY year, timeline, "number"
Results:
| YEAR | TIMELINE | number | COUNT(DISTINCTITEM_ID) |
|------|----------|--------|------------------------|
| 2017 | month | 09 | 3 |
| 2017 | month | 10 | 2 |
| 2017 | quarter | 3 | 3 |
| 2017 | quarter | 4 | 2 |
| 2017 | week | 35 | 3 |
| 2017 | week | 36 | 3 |
| 2017 | week | 37 | 3 |
| 2017 | week | 38 | 3 |
| 2017 | week | 39 | 3 |
| 2017 | week | 40 | 2 |
| 2017 | week | 41 | 1 |
| 2017 | week | 42 | 1 |
You can do this with union all:
SELECT 'Q' as timeline, TO_CHAR(c.ORDER_DAY, 'Q') AS "number",
COUNT(DISTINCT CASE WHEN (c.QUANTITY*c.OUR_PRICE) > 0 THEN ITEM_ID END) AS SALES_CNT
FROM TABLE c
GROUP BY TO_CHAR(c.ORDER_DAY, 'Q')
UNION ALL
SELECT 'MM' as timeline, TO_CHAR(c.ORDER_DAY, 'MM') AS "number",
COUNT(DISTINCT CASE WHEN (c.QUANTITY*c.OUR_PRICE) > 0 THEN ITEM_ID END) AS SALES_CNT
FROM TABLE c
GROUP BY TO_CHAR(c.ORDER_DAY, 'MM')
UNION ALL
SELECT 'WW' as timeline, TO_CHAR(c.ORDER_DAY, 'WW') AS "number",
COUNT(DISTINCT CASE WHEN (c.QUANTITY*c.OUR_PRICE) > 0 THEN ITEM_ID END) AS SALES_CNT
FROM TABLE c
GROUP BY TO_CHAR(c.ORDER_DAY, 'WW');
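If the data can span more than one calendar year, the week and month buckets from different years would otherwise be merged; a hedged tweak of the week branch that adds the year to the grouping (orders is a hypothetical name standing in for the asker's table, and the same change would be repeated in the 'MM' and 'Q' branches):
SELECT EXTRACT(YEAR FROM c.ORDER_DAY) AS yr,
       'WW' AS timeline,
       TO_CHAR(c.ORDER_DAY, 'WW') AS "number",
       COUNT(DISTINCT CASE WHEN (c.QUANTITY*c.OUR_PRICE) > 0 THEN ITEM_ID END) AS SALES_CNT
FROM orders c  -- hypothetical table name; the question just calls it TABLE
GROUP BY EXTRACT(YEAR FROM c.ORDER_DAY), TO_CHAR(c.ORDER_DAY, 'WW');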
You can use GROUPING SETS to aggregate at various levels in a single query. Like so:
SELECT CASE
WHEN GROUPING (TO_CHAR (c.order_day, 'Q')) = 0 THEN 'Quarter'
WHEN GROUPING (TO_CHAR (c.order_day, 'MM')) = 0 THEN 'Month'
WHEN GROUPING (TO_CHAR (c.order_day, 'WW')) = 0 THEN 'Week'
ELSE '??'
END
timeline,
CASE
WHEN GROUPING (TO_CHAR (c.order_day, 'Q')) = 0 THEN TO_CHAR (c.order_day, 'Q')
WHEN GROUPING (TO_CHAR (c.order_day, 'MM')) = 0 THEN TO_CHAR (c.order_day, 'MM')
WHEN GROUPING (TO_CHAR (c.order_day, 'WW')) = 0 THEN TO_CHAR (c.order_day, 'WW')
ELSE '??'
END
"NUMBER",
COUNT (DISTINCT CASE WHEN (c.quantity * c.our_price) > 0 THEN item_id ELSE NULL END) unique_item_count
FROM c
GROUP BY GROUPING SETS (
(TO_CHAR (c.order_day, 'Q')),
(TO_CHAR (c.order_day, 'MM')),
(TO_CHAR (c.order_day, 'WW')))
I have a list of items that have a start and end date. Items belong to a user. For one item the period can range from 1 to 5 years. I want to find the count of days that fall within a given date range, which I would pass into the query. The period start is always sysdate and the end sysdate - 5 years.
The count of days returned must also be in the period range.
Example:
I initiate a query as of 15.05.2015 as the user 'me', so I need to find all days between 15.05.2010 and 15.05.2015.
During that period, 2 items have belonged to me:
Item 1) 01.01.2010 - 31.12.2010. Valid range: 15.05.2010 - 31.12.2010 = ~195 days
Item 2) 01.01.2015 - 31.12.2015. Valid range: 01.01.2015 - 15.05.2015 = ~170 days
I need a sum of these days that are exactly in that period.
For the query right now I just have a count that takes the full range of each item (keeping it simple):
SELECT SUM(i.end_date - i.start_date) AS total_days
FROM items i
WHERE i.start_date >= to_date('2010-05-15', 'yyyy-mm-dd')
AND i.end_date <= to_date('2015-05-15', 'yyyy-mm-dd')
AND i.user = 'me'
So right now this gives me roughly a two-year count of days, which is wrong. How should I update my SELECT SUM to include only the days that fall within the period? The correct result would be 195 + 170; currently I get something like 365 + 365.
Period start is always sysdate and end sysdate - 5 years
You can get this using: SYSDATE and SYSDATE - INTERVAL '5' YEAR
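For example, a quick sketch to see the two boundary values (TRUNC drops the time of day, as in the queries below):
SELECT TRUNC(SYSDATE)                     AS period_end,
       TRUNC(SYSDATE) - INTERVAL '5' YEAR AS period_start
FROM   dual;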
Item 1) 01.01.2010 - 31.12.2010. Valid range: 15.05.2010 - 31.12.2010
= ~195 days
Item 2) 01.01.2015 - 31.12.2015. Valid range: 01.01.2015 - 15.05.2015
= ~170 days
Assuming these examples show start_date - end_date, and the valid range is your expected answer for that particular SYSDATE, then you can use:
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE items ( "user", start_date, end_date ) AS
SELECT 'me', DATE '2010-01-01', DATE '2010-12-31' FROM DUAL
UNION ALL SELECT 'me', DATE '2015-01-01', DATE '2015-12-31' FROM DUAL
UNION ALL SELECT 'me', DATE '2009-01-01', DATE '2009-12-31' FROM DUAL
UNION ALL SELECT 'me', DATE '2009-01-01', DATE '2016-12-31' FROM DUAL
UNION ALL SELECT 'me', DATE '2012-01-01', DATE '2012-12-31' FROM DUAL
UNION ALL SELECT 'me', DATE '2013-01-01', DATE '2013-01-01' FROM DUAL;
Query 1:
SELECT "user",
TO_CHAR( start_date, 'YYYY-MM-DD' ) AS start_date,
TO_CHAR( end_date, 'YYYY-MM-DD' ) AS end_date,
TO_CHAR( GREATEST(TRUNC(i.start_date), TRUNC(SYSDATE)-INTERVAL '5' YEAR), 'YYYY-MM-DD' ) AS valid_start,
TO_CHAR( LEAST(TRUNC(i.end_date),TRUNC(SYSDATE)), 'YYYY-MM-DD' ) AS valid_end,
LEAST(TRUNC(i.end_date),TRUNC(SYSDATE))
- GREATEST(TRUNC(i.start_date), TRUNC(SYSDATE)-INTERVAL '5' YEAR)
+ 1
AS total_days
FROM items i
WHERE i."user" = 'me'
AND TRUNC(i.start_date) <= TRUNC(SYSDATE)
AND TRUNC(i.end_date) >= TRUNC(SYSDATE) - INTERVAL '5' YEAR
Results:
| user | START_DATE | END_DATE | VALID_START | VALID_END | TOTAL_DAYS |
|------|------------|------------|-------------|------------|------------|
| me | 2010-01-01 | 2010-12-31 | 2010-05-21 | 2010-12-31 | 225 |
| me | 2015-01-01 | 2015-12-31 | 2015-01-01 | 2015-05-21 | 141 |
| me | 2009-01-01 | 2016-12-31 | 2010-05-21 | 2015-05-21 | 1827 |
| me | 2012-01-01 | 2012-12-31 | 2012-01-01 | 2012-12-31 | 366 |
| me | 2013-01-01 | 2013-01-01 | 2013-01-01 | 2013-01-01 | 1 |
This assumes that the start date is at the beginning of the day (00:00) and the end date is at the end of the day (24:00) - so, if the start and end dates are the same then you are expecting the result to be 1 total day (i.e. the period 00:00 - 24:00). If you are, instead, expecting the result to be 0 then remove the +1 from the calculation of the total days value.
Query 2:
If you want the sum of all these valid ranges and are happy to count dates in overlapping ranges multiple times then just wrap it in the SUM aggregate function:
SELECT SUM( LEAST(TRUNC(i.end_date),TRUNC(SYSDATE))
- GREATEST(TRUNC(i.start_date), TRUNC(SYSDATE)-INTERVAL '5' YEAR)
+ 1 )
AS total_days
FROM items i
WHERE i."user" = 'me'
AND TRUNC(i.start_date) <= TRUNC(SYSDATE)
AND TRUNC(i.end_date) >= TRUNC(SYSDATE) - INTERVAL '5' YEAR
Results:
| TOTAL_DAYS |
|------------|
| 2560 |
Query 3:
Now, if you want to get a count of all the valid days in the range without counting overlapping ranges multiple times, then you can do:
WITH ALL_DATES_IN_RANGE AS (
SELECT TRUNC(SYSDATE) - LEVEL + 1 AS valid_date
FROM DUAL
CONNECT BY LEVEL <= SYSDATE - (SYSDATE - INTERVAL '5' YEAR) + 1
)
SELECT COUNT(1) AS TOTAL_DAYS
FROM ALL_DATES_IN_RANGE a
WHERE EXISTS ( SELECT 'X'
FROM items i
WHERE a.valid_date BETWEEN i.start_date AND i.end_date
AND i."user" = 'me' )
Results:
| TOTAL_DAYS |
|------------|
| 1827 |
Assuming the time periods have no overlaps:
SELECT SUM(LEAST(i.end_date, DATE '2015-05-15') -
GREATEST(i.start_date, DATE '2010-05-15')
) AS total_days
FROM items i
WHERE i.start_date >= DATE '2010-05-15' AND
i.end_date <= DATE '2015-05-15' AND
i.user = 'me';
Use a CASE expression to evaluate the dates and set the start and end dates accordingly:
Select SUM(
(case when i.end_date > to_date('2015-05-15','yyyy-mm-dd') then
to_date('2015-05-15','yyyy-mm-dd') else
i.end_date end) -
(case when i.start_date < to_date('2010-05-15','yyyy-mm-dd') then
to_date('2010-05-15','yyyy-mm-dd') else
i.start_date end)) as total_days
FROM items i
WHERE i.start_date >= to_date('2010-05-15', 'yyyy-mm-dd')
AND i.end_date <= to_date('2015-05-15', 'yyyy-mm-dd')
AND i.user = 'me'