I have an Oracle 10g table containing two date columns, DATE_VALID_FROM and DATE_VALID_TO.
MY_TABLE:
DATE_VALID_FROM | DATE_VALID_TO | VALUE
15-FEB-13 | 17-FEB-13 | 1.833
14-FEB-13 | 14-FEB-13 | 1.836
13-FEB-13 | 13-FEB-13 | 1.824
12-FEB-13 | 12-FEB-13 | 1.82
11-FEB-13 | 11-FEB-13 | 1.822
08-FEB-13 | 10-FEB-13 | 1.826
07-FEB-13 | 07-FEB-13 | 1.814
06-FEB-13 | 06-FEB-13 | 1.806
05-FEB-13 | 05-FEB-13 | 1.804
04-FEB-13 | 04-FEB-13 | 1.796
01-FEB-13 | 03-FEB-13 | 1.801
The range covered by the date columns isn't always a single day (e.g. weekends).
I can retrieve the value for a single date like this,
select DATE_VALID_FROM, DATE_VALID_TO, VALUE
from MY_TABLE
where DATE_VALID_FROM <= TO_DATE('16-FEB-13', 'dd-MON-yy')
and DATE_VALID_TO >= TO_DATE('16-FEB-13', 'dd-MON-yy')
Is it possible to retrieve the values for multiple random dates in a single query?
e.g. Values for the 1st, 5th, 6th, 11th and 16th Feb.
Producing this result set:
DATE_VALID_FROM | DATE_VALID_TO | VALUE
15-FEB-13 | 17-FEB-13 | 1.833
11-FEB-13 | 11-FEB-13 | 1.822
06-FEB-13 | 06-FEB-13 | 1.806
05-FEB-13 | 05-FEB-13 | 1.804
01-FEB-13 | 03-FEB-13 | 1.801
Try:
select DATE_VALID_FROM, DATE_VALID_TO, VALUE
from MY_TABLE M
JOIN (SELECT TO_DATE('01-FEB-2013', 'dd-MON-yyyy') DATE_PARAM FROM DUAL UNION ALL
SELECT TO_DATE('05-FEB-2013', 'dd-MON-yyyy') DATE_PARAM FROM DUAL UNION ALL
SELECT TO_DATE('06-FEB-2013', 'dd-MON-yyyy') DATE_PARAM FROM DUAL UNION ALL
SELECT TO_DATE('11-FEB-2013', 'dd-MON-yyyy') DATE_PARAM FROM DUAL UNION ALL
SELECT TO_DATE('16-FEB-2013', 'dd-MON-yyyy') DATE_PARAM FROM DUAL) D
ON M.DATE_VALID_FROM <= D.DATE_PARAM and M.DATE_VALID_TO >= D.DATE_PARAM
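The derived-table idea is portable beyond Oracle. As a rough, hypothetical demonstration, the same join can be reproduced with Python's built-in sqlite3 module, using ISO date strings in place of Oracle DATEs and an abridged copy of the sample data:

```python
import sqlite3

# Hypothetical SQLite re-creation of the Oracle query: join an inline
# table of lookup dates onto the validity ranges. ISO-8601 strings
# compare correctly as text, so BETWEEN works on them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_table (date_valid_from TEXT, date_valid_to TEXT, value REAL);
    INSERT INTO my_table VALUES
        ('2013-02-15', '2013-02-17', 1.833),
        ('2013-02-11', '2013-02-11', 1.822),
        ('2013-02-01', '2013-02-03', 1.801);
""")
rows = conn.execute("""
    SELECT m.date_valid_from, m.date_valid_to, m.value
      FROM my_table m
      JOIN (SELECT '2013-02-01' AS date_param UNION ALL
            SELECT '2013-02-11'               UNION ALL
            SELECT '2013-02-16') d
        ON d.date_param BETWEEN m.date_valid_from AND m.date_valid_to
     ORDER BY m.date_valid_from
""").fetchall()
print(rows)
```

Each lookup date lands in exactly one validity range, so three input dates yield three rows.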
You can use a collection for this:
SQL> create type mydatetab as table of date;
  2  /

Type created.

SQL> with dates as (select /*+ cardinality(t, 5) */ t.column_value thedate
  2                   from table(mydatetab(TO_DATE('16-FEB-13', 'dd-mon-rr'),
  3                                        TO_DATE('13-FEB-13', 'dd-mon-rr'))) t)
  4  select DATE_VALID_FROM, DATE_VALID_TO, VALUE
  5    from MY_TABLE, dates
  6   where dates.thedate between DATE_VALID_FROM and DATE_VALID_TO;

DATE_VALI DATE_VALI      VALUE
--------- --------- ----------
13-FEB-13 13-FEB-13      1.824
15-FEB-13 17-FEB-13      1.833
If you don't have privileges to create one (i.e. this is just an ad-hoc thing), there may be some public ones you can use. Check select * from all_coll_types where elem_type_name = 'DATE' to find them.
P.S. You should always specify the format mask when you use dates, i.e. don't do:
TO_DATE('16-FEB-13')
but rather:
TO_DATE('16-FEB-13', 'dd-MON-rr')
I'm pretty new to SQL, and I'm struggling to figure out a seemingly simple task.
Here's the situation:
I'm working with two data sets
Data Set A, which is the most accurate but only refreshes every quarter
Data Set B, which has all the data, including the most recent data, but is overall less accurate
My goal is to combine both data sets, using Data Set A for all data up to the most recent quarter and Data Set B for anything after (i.e., all recent data not captured in Data Set A)
For example:
Data Set A captures anything from Q1 2020 (January to March)
Let's say we are April 15th
Data Set B captures anything from Q1 2020 to the most current date, April 15th
My goal is to use Data Set A for all data from January to March 2020 (Q1) and then Data Set B for all data from April 1 to 15
Any thoughts or advice on how to do this? Perhaps a join combined with a date function?
Any help would be much appreciated.
Thanks in advance for the help.
I hope I got your question right.
I put in some sample data that might match your description: a date and an amount. To keep it simple, there is one row per month. You can extract the quarter from a date, keep it as an additional column, and then filter by that down the line.
WITH
-- some sample data: date and amount ...
indata(dt,amount) AS (
SELECT DATE '2020-01-15', 234.45
UNION ALL SELECT DATE '2020-02-15', 344.45
UNION ALL SELECT DATE '2020-03-15', 345.45
UNION ALL SELECT DATE '2020-04-15', 346.45
UNION ALL SELECT DATE '2020-05-15', 347.45
UNION ALL SELECT DATE '2020-06-15', 348.45
UNION ALL SELECT DATE '2020-07-15', 349.45
UNION ALL SELECT DATE '2020-08-15', 350.45
UNION ALL SELECT DATE '2020-09-15', 351.45
UNION ALL SELECT DATE '2020-10-15', 352.45
UNION ALL SELECT DATE '2020-11-15', 353.45
UNION ALL SELECT DATE '2020-12-15', 354.45
)
-- real query starts here ...
SELECT
EXTRACT(QUARTER FROM dt) AS the_quarter
, CAST(
TIMESTAMPADD(
QUARTER
, CAST(EXTRACT(QUARTER FROM dt) AS INTEGER)-1
, TRUNC(dt,'YEAR')
)
AS DATE
) AS qtr_start
, *
FROM indata;
-- out the_quarter | qtr_start | dt | amount
-- out -------------+------------+------------+--------
-- out 1 | 2020-01-01 | 2020-01-15 | 234.45
-- out 1 | 2020-01-01 | 2020-02-15 | 344.45
-- out 1 | 2020-01-01 | 2020-03-15 | 345.45
-- out 2 | 2020-04-01 | 2020-04-15 | 346.45
-- out 2 | 2020-04-01 | 2020-05-15 | 347.45
-- out 2 | 2020-04-01 | 2020-06-15 | 348.45
-- out 3 | 2020-07-01 | 2020-07-15 | 349.45
-- out 3 | 2020-07-01 | 2020-08-15 | 350.45
-- out 3 | 2020-07-01 | 2020-09-15 | 351.45
-- out 4 | 2020-10-01 | 2020-10-15 | 352.45
-- out 4 | 2020-10-01 | 2020-11-15 | 353.45
-- out 4 | 2020-10-01 | 2020-12-15 | 354.45
If you filter by quarter, you can group your data by that column ...
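If it helps to see the quarter arithmetic outside SQL, here is a minimal Python sketch of the same computation (quarter_start is a hypothetical helper, not tied to any database):

```python
from datetime import date

def quarter_start(d: date) -> date:
    # Quarter number 1..4, then the first day of that quarter's first
    # month: the same arithmetic the EXTRACT/TIMESTAMPADD expression does.
    q = (d.month - 1) // 3 + 1
    return date(d.year, 3 * (q - 1) + 1, 1)

print(quarter_start(date(2020, 5, 15)))  # 2020-04-01
```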
I have millions of IDs and I need to find the max date from 3 different dates for each ID.
Then, I need the start date of the month of the max date.
Here's a reference:
+---------+-----------+---------------+--------------------+
| ID | SETUP_DT | REINSTATE_DT | LOCAL_REINSTATE_DT |
+---------+-----------+---------------+--------------------+
| C111111 | 2018/1/1 | Null | Null |
| C111112 | 2015/12/9 | 2018/10/25 | 2018/10/25 |
| C111113 | 2018/10/1 | Null | Null |
| C111114 | 2018/10/6 | 2018/12/14 | 2018/12/14 |
+---------+-----------+---------------+--------------------+
And what I want is below:
+---------+-----------+
| ID | APP_MON |
+---------+-----------+
| C111111 | 2018/1/1 |
| C111112 | 2018/10/1 |
| C111113 | 2018/10/1 |
| C111114 | 2018/12/1 |
+---------+-----------+
I tried different approaches to get the result.
When I used case and unpivot to find some specific IDs, the results looked fine.
/* case */
SELECT DIST_ID as ID,
trunc(
case
when REINSTATE_DT is not null and LOCAL_REINSTATE_DT is not null then greatest(LOCAL_REINSTATE_DT, REINSTATE_DT)
when REINSTATE_DT is null and LOCAL_REINSTATE_DT is not null then LOCAL_REINSTATE_DT
when REINSTATE_DT is not null and LOCAL_REINSTATE_DT is null then REINSTATE_DT
else SETUP_DT
end, 'MM') AS CN_APP_MON
FROM DISTRIBUTOR
where DIST_ID in ('CN111111','CN111112','CN111113','CN111114');
/* unpivot */
SELECT DIST_ID as ID,
trunc(MAX(Date_value),'MM') AS CN_APP_MON
FROM DISTRIBUTOR
UNPIVOT (Date_value FOR Date_type IN (SETUP_DT, REINSTATE_DT, LOCAL_REINSTATE_DT))
where DIST_ID in ('CN111111','CN111112','CN111113','CN111114')
GROUP BY DIST_ID;
However, when I change the condition and tried to use the date period to pull out the data, the result is weird.
To be more specific, I tried to replace
where DIST_ID in ('CN111111','CN111112','CN111113','CN111114')
by
where REINSTATE_DT
between TO_DATE('2018/01/01','yyyy/mm/dd') and TO_DATE('2018/01/02','yyyy/mm/dd')
But the unpivot query did not work. It showed:
ORA-00904: "REINSTATE_DT": invalid identifier
00904. 00000 - "%s: invalid identifier"
I want to know:
Which method is more efficient, or is there an even more efficient way to do this?
Why didn't the unpivot method work? What is the difference between the two methods?
Thank you so much!
Assuming your dates are stored as dates, you can do this using greatest(). I'm not a fan of "magic" values in queries, so I like coalesce() for this purpose.
All your rows seem to have a setup_dt, so it can be used as a "default" using coalesce(). So:
select id,
       trunc(greatest(setup_dt,
                      coalesce(reinstate_dt, setup_dt),
                      coalesce(local_reinstate_dt, setup_dt)
                     ), 'mm') as app_mon
from distributor;
You don't need such daunting case logic; greatest combined with the nvl function solves your problem.
with distributor( ID, setup_dt, reinstate_dt, local_reinstate_dt ) as
(
select 'C111111',date'2018-01-01', Null, Null from dual union all
select 'C111112',date'2015-12-09',date'2018-10-25',date'2018-10-25' from dual union all
select 'C111113',date'2018-10-01',Null,Null from dual union all
select 'C111114',date'2018-10-06',date'2018-12-14',date'2018-12-14' from dual
)
select id, trunc(greatest(nvl(setup_dt,date'1900-01-01'),
nvl(reinstate_dt,date'1900-01-01'),
nvl(local_reinstate_dt,date'1900-01-01')),'mm')
as app_mon
from distributor;
ID APP_MON
------- ----------
C111111 01.01.2018
C111112 01.10.2018
C111113 01.10.2018
C111114 01.12.2018
P.S.: The SETUP_DT, REINSTATE_DT and LOCAL_REINSTATE_DT columns cannot be referenced in your query's WHERE clause, because the UNPIVOT clause replaces them with the Date_value and Date_type columns.
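To make the nvl-sentinel idea concrete, here is a small Python sketch of the same logic, using None for NULL and a hypothetical app_mon helper:

```python
from datetime import date

# None plays the role of NULL; the sentinel is far enough in the past
# that max() effectively ignores missing dates, mirroring
# greatest(nvl(..., date'1900-01-01'), ...).
SENTINEL = date(1900, 1, 1)

def app_mon(setup_dt, reinstate_dt, local_reinstate_dt):
    latest = max(d or SENTINEL for d in (setup_dt, reinstate_dt, local_reinstate_dt))
    # trunc(..., 'mm'): first day of the month of the latest date
    return latest.replace(day=1)

print(app_mon(date(2015, 12, 9), date(2018, 10, 25), date(2018, 10, 25)))  # 2018-10-01
```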
I am trying to track the usage of material with my SQL. There is no way in our database to link when a part is used to the order it originally came from. A part simply ends up in a bin after an order arrives, and then usage of parts basically just creates a record for the number of parts used at a time of transaction. I am attempting to, as best I can, link usage to an order number by summing over the data and sequentially assigning it to order numbers.
My sub queries have gotten me this far. Each order number is received on a date. I then join the usage table records based on the USEDATE needing to be equal to or greater than the RECEIVEDATE of the order. The data produced by this is as such:
| ORDERNUM | PARTNUM | RECEIVEDATE | ORDERQTY | USEQTY | USEDATE |
|----------|----------|-------------------------|-----------|---------|------------------------|
| 4412 | E1125 | 10/26/2016 1:32:25 PM | 1 | 1 | 11/18/2016 1:40:55 PM |
| 4412 | E1125 | 10/26/2016 1:32:25 PM | 1 | 3 | 12/26/2016 2:19:32 PM |
| 4412 | E1125 | 10/26/2016 1:32:25 PM | 1 | 1 | 1/3/2017 8:31:21 AM |
| 4111 | E1125 | 10/28/2016 2:54:13 PM | 1 | 1 | 11/18/2016 1:40:55 PM |
| 4111 | E1125 | 10/28/2016 2:54:13 PM | 1 | 3 | 12/26/2016 2:19:32 PM |
| 4111 | E1125 | 10/28/2016 2:54:13 PM | 1 | 1 | 1/3/2017 8:31:21 AM |
| 0393 | E1125 | 12/22/2016 11:52:04 AM | 3 | 3 | 12/26/2016 2:19:32 PM |
| 0393 | E1125 | 12/22/2016 11:52:04 AM | 3 | 1 | 1/3/2017 8:31:21 AM |
| 7812 | E1125 | 12/27/2016 10:56:01 AM | 1 | 1 | 1/3/2017 8:31:21 AM |
| 1191 | E1125 | 1/5/2017 1:12:01 PM | 2 | 0 | null |
The query for the above section looks as such:
SELECT
    B.*,
    NVL(B2.QTY, 0) USEQTY,
    B2.USEDATE USEDATE
FROM <<Sub Query B>> B
LEFT JOIN USETABLE B2
    ON B.PARTNUM = B2.PARTNUM AND B2.USEDATE >= B.RECEIVEDATE
My ultimate goal here is to join USEQTY records sequentially until they have filled enough ORDERQTY’s. I also need to add an ORDERUSE column that represents what QTY from the USEQTY column was actually applied to that record. Not really sure how to word this any better so here is example of what I need to happen based on the table above:
| ORDERNUM | PARTNUM | RECEIVEDATE | ORDERQTY | USEQTY | USEDATE | ORDERUSE |
|----------|----------|-------------------------|-----------|---------|------------------------|-----------|
| 4412 | E1125 | 10/26/2016 1:32:25 PM | 1 | 1 | 11/18/2016 1:40:55 PM | 1 |
| 4111 | E1125 | 10/28/2016 2:54:13 PM | 1 | 3 | 12/26/2016 2:19:32 PM | 1 |
| 0393 | E1125 | 12/22/2016 11:52:04 AM | 3 | 2 | 12/26/2016 2:19:32 PM | 2 |
| 0393 | E1125 | 12/22/2016 11:52:04 AM | 3 | 1 | 1/3/2017 8:31:21 AM | 1 |
| 7812 | E1125 | 12/27/2016 10:56:01 AM | 1 | 0 | null | 0 |
| 1191 | E1125 | 1/5/2017 1:12:01 PM | 2 | 0 | null | 0 |
If I can get the query to pull the information like above, I will then be able to group the records together and sum the ORDERUSE column which would get me the information I need to know what orders have been used and which have not been fully used. So in the example above, if I were to sum the ORDERUSE column for each of the ORDERNUMs, orders 4412, 4111, 0393 would all show full usage. Orders 7812, 1191 would show not being fully used.
If I am reading this correctly, you want to determine how many parts have been used. In your example it looks like you have 5 usages across 5 orders, coming to a total of 8 parts, with the following orders having been used:
4412 - one part - one used
4111 - one part - one used
7812 - one part - one used
0393 - three parts - two used
After a bit of hacking away I came up with the following SQL. Not sure if this works outside of your sample data, since that's the only thing I used to test, and I am no expert.
WITH data
AS (SELECT *
FROM (SELECT *
FROM sub_b1
join (SELECT ROWNUM rn
FROM dual
CONNECT BY LEVEL < 15) a
ON a.rn <= sub_b1.orderqty
ORDER BY receivedate)
WHERE ROWNUM <= (SELECT SUM(useqty)
FROM sub_b2))
SELECT sub_b1.ordernum,
partnum,
receivedate,
orderqty,
usage
FROM sub_b1
join (SELECT ordernum,
Max(rn) AS usage
FROM data
GROUP BY ordernum) b
ON sub_b1.ordernum = b.ordernum
You are looking for "FIFO" inventory accounting.
The proper data model should have two tables, one for "received" parts and the other for "delivered" or "used" parts. Each table should show an order number, a part number and quantity (received or used) for that order, and a timestamp or date-time. I model both in CTEs in my query below, but in your business they should be two separate tables. Also, a trigger or similar should enforce the constraint that a part cannot be used until it is available in stock (that is: for each part id, the total quantity used since inception, at any point in time, should not exceed the total quantity received since inception, also at the same point in time). I assume that the two input tables do, in fact, satisfy this condition, and I don't check it in the solution.
The output shows a timeline of quantity used, by timestamp, matching "received" and "delivered" (used) quantities for each part_id. In the sample data I illustrate a single part_id, but the query will work with multiple part_id's, and orders (both for received and for delivered or used) that include multiple parts (part id's) with different quantities.
with
received ( order_id, part_id, ts, qty ) as (
select '0030', '11A4', timestamp '2015-03-18 15:00:33', 20 from dual union all
select '0032', '11A4', timestamp '2015-03-22 15:00:33', 13 from dual union all
select '0034', '11A4', timestamp '2015-03-24 10:00:33', 18 from dual union all
select '0036', '11A4', timestamp '2015-04-01 15:00:33', 25 from dual
),
delivered ( order_id, part_id, ts, qty ) as (
select '1200', '11A4', timestamp '2015-03-18 16:30:00', 14 from dual union all
select '1210', '11A4', timestamp '2015-03-23 10:30:00', 8 from dual union all
select '1220', '11A4', timestamp '2015-03-23 11:30:00', 7 from dual union all
select '1230', '11A4', timestamp '2015-03-23 11:30:00', 4 from dual union all
select '1240', '11A4', timestamp '2015-03-26 15:00:33', 1 from dual union all
select '1250', '11A4', timestamp '2015-03-26 16:45:11', 3 from dual union all
select '1260', '11A4', timestamp '2015-03-27 10:00:33', 2 from dual union all
select '1270', '11A4', timestamp '2015-04-03 15:00:33', 16 from dual
),
-- end of test data; the SQL query begins below (just add the word WITH at the top)
-- with
combined ( part_id, rec_ord, rec_ts, rec_sum, del_ord, del_ts, del_sum) as (
select part_id, order_id, ts,
sum(qty) over (partition by part_id order by ts, order_id),
null, cast(null as date), cast(null as number)
from received
union all
select part_id, null, cast(null as date), cast(null as number),
order_id, ts,
sum(qty) over (partition by part_id order by ts, order_id)
from delivered
),
prep ( part_id, rec_ord, del_ord, del_ts, qty_sum ) as (
select part_id, rec_ord, del_ord, del_ts, coalesce(rec_sum, del_sum)
from combined
)
select part_id,
last_value(rec_ord ignore nulls) over (partition by part_id
order by qty_sum desc) as rec_ord,
last_value(del_ord ignore nulls) over (partition by part_id
order by qty_sum desc) as del_ord,
last_value(del_ts ignore nulls) over (partition by part_id
order by qty_sum desc) as used_date,
qty_sum - lag(qty_sum, 1, 0) over (partition by part_id
order by qty_sum, del_ts) as used_qty
from prep
order by qty_sum
;
Output:
PART_ID REC_ORD DEL_ORD USED_DATE USED_QTY
------- ------- ------- ----------------------------------- ----------
11A4 0030 1200 18-MAR-15 04.30.00.000000000 PM 14
11A4 0030 1210 23-MAR-15 10.30.00.000000000 AM 6
11A4 0032 1210 23-MAR-15 10.30.00.000000000 AM 2
11A4 0032 1220 23-MAR-15 11.30.00.000000000 AM 7
11A4 0032 1230 23-MAR-15 11.30.00.000000000 AM 4
11A4 0032 1230 23-MAR-15 11.30.00.000000000 AM 0
11A4 0034 1240 26-MAR-15 03.00.33.000000000 PM 1
11A4 0034 1250 26-MAR-15 04.45.11.000000000 PM 3
11A4 0034 1260 27-MAR-15 10.00.33.000000000 AM 2
11A4 0034 1270 03-APR-15 03.00.33.000000000 PM 12
11A4 0036 1270 03-APR-15 03.00.33.000000000 PM 4
11A4 0036 21
12 rows selected.
Notes: (1) One needs to be careful if at some moment the cumulative used quantity exactly matches the cumulative received quantity. All rows must be included in all the intermediate results, otherwise there will be bad data in the output; but this may result (as you can see in the output above) in a few rows with a "used quantity" of 0. Depending on how this output is consumed (for further processing, for reporting, etc.) these rows may be left as they are, or they may be discarded in a further outer query with the condition where used_qty > 0.
(2) The last row shows a quantity of 21 with no used_date and no del_ord. This is, in fact, the "current" quantity in stock for that part_id as of the last date in both tables - available for future use. Again, if this is not needed, it can be removed in an outer query. There may be one or more rows like this at the end of the table.
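For readers who want to sanity-check the FIFO matching outside the database, here is a procedural Python sketch of the same allocation, using the sample order ids and quantities from above. Like the query, it assumes the delivered total never exceeds the received total:

```python
# Sample (order_id, qty) pairs from the answer, already in time order.
received = [("0030", 20), ("0032", 13), ("0034", 18), ("0036", 25)]
delivered = [("1200", 14), ("1210", 8), ("1220", 7), ("1230", 4),
             ("1240", 1), ("1250", 3), ("1260", 2), ("1270", 16)]

def fifo_match(received, delivered):
    # Walk both lists in time order, splitting a delivery across received
    # batches (or a batch across deliveries) whenever one runs out.
    # Assumes sum of delivered qty <= sum of received qty.
    matches, ri, rleft = [], 0, received[0][1]
    for dorder, dqty in delivered:
        while dqty > 0:
            take = min(rleft, dqty)
            matches.append((received[ri][0], dorder, take))
            rleft -= take
            dqty -= take
            if rleft == 0 and ri + 1 < len(received):
                ri += 1
                rleft = received[ri][1]
    return matches

for rec_ord, del_ord, qty in fifo_match(received, delivered):
    print(rec_ord, del_ord, qty)
```

The output reproduces the non-zero rows of the query's result, e.g. delivery 1210 split as 6 from order 0030 plus 2 from order 0032.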
I'm building a complex query but I have a problem...
Practically, I retrieve a range of dates from a recursive CTE in SQLite:
WITH RECURSIVE dates(d)
AS (VALUES('2014-05-01')
UNION ALL
SELECT date(d, '+1 day')
FROM dates
WHERE d < '2014-05-05')
SELECT d AS date FROM dates
This is the result:
2014-05-01
2014-05-02
2014-05-03
2014-05-04
2014-05-05
I would like to join this query with another query, like this:
select date_column, column1, column2 from table
This is the result:
2014-05-03 column_value1 column_value2
Finally, I would like to see a result like this in the output (the first query joined to the second query on date_column):
2014-05-01 | | |
2014-05-02 | | |
2014-05-03 | column_value1 | column_value2 |
2014-05-04 | | |
2014-05-05 | | |
How can I obtain this result?
Thanks!!!
Why don't you simply do something like this?
WITH RECURSIVE dates(d)
AS (VALUES('2014-05-01')
UNION ALL
SELECT date(d, '+1 day')
FROM dates
WHERE d < '2014-05-05')
SELECT
dates.d AS date
,table.column1
,table.column2
FROM dates
left join table
ON strftime('%Y-%m-%d', table.date_column) = dates.d
Perhaps you will need to convert your date...
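Since this question is about SQLite, the whole approach can be verified end-to-end with Python's built-in sqlite3 module. The table name t and its columns below are stand-ins for the question's table:

```python
import sqlite3

# t stands in for the question's table; one row on 2014-05-03.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (date_column TEXT, column1 TEXT, column2 TEXT);
    INSERT INTO t VALUES ('2014-05-03', 'column_value1', 'column_value2');
""")
rows = conn.execute("""
    WITH RECURSIVE dates(d) AS (
        VALUES('2014-05-01')
        UNION ALL
        SELECT date(d, '+1 day') FROM dates WHERE d < '2014-05-05'
    )
    SELECT dates.d, t.column1, t.column2
      FROM dates
      LEFT JOIN t ON t.date_column = dates.d
     ORDER BY dates.d
""").fetchall()
for row in rows:
    print(row)
```

The LEFT JOIN keeps all five calendar rows and fills the columns with NULL (None) wherever the table has no matching date.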
I have two tables that have identical columns. I would like to join these two tables together into a third one that contains all the rows from the first one and from the second one all the rows that have a date that doesn't exist in the first table for the same location.
Example:
transactions:
date |location_code| product_code | quantity
------------+------------------+--------------+----------
2013-01-20 | ABC | 123 | -20
2013-01-23 | ABC | 123 | -13.158
2013-02-04 | BCD | 234 | -4.063
transactions2:
date |location_code| product_code | quantity
------------+------------------+--------------+----------
2013-01-20 | BDE | 123 | -30
2013-01-23 | DCF | 123 | -2
2013-02-05 | UXJ | 234 | -6
Desired result:
date |location_code| product_code | quantity
------------+------------------+--------------+----------
2013-01-20 | ABC | 123 | -20
2013-01-23 | ABC | 123 | -13.158
2013-01-23 | DCF | 123 | -2
2013-02-04 | BCD | 234 | -4.063
2013-02-05 | UXJ | 234 | -6
How would I go about this? I tried for example this:
SELECT date, location_code, product_code, type, quantity, location_type, updated_at
,period_start_date, period_end_date
INTO transactions_combined
FROM ( SELECT * FROM transactions_kitchen k
UNION ALL
SELECT *
FROM transactions_admin h
WHERE h.date NOT IN (SELECT k.date FROM k)
) AS t;
but that doesn't take into account that I'd like to include the rows that have the same date, but different location. I have Postgresql 9.2 in use.
UNION simply doesn't do what you describe. This query should:
CREATE TABLE transactions_combined AS
SELECT date, location_code, product_code, quantity
FROM transactions_kitchen k
UNION ALL
SELECT h.date, h.location_code, h.product_code, h.quantity
FROM transactions_admin h
LEFT JOIN transactions_kitchen k USING (location_code, date)
WHERE k.location_code IS NULL;
LEFT JOIN / IS NULL to exclude rows from the second table for the same location and date. See:
Select rows which are not present in other table
Use CREATE TABLE AS instead of SELECT INTO. The manual:
CREATE TABLE AS is functionally similar to SELECT INTO. CREATE TABLE AS is the recommended syntax, since this form of SELECT INTO is not available in ECPG or PL/pgSQL, because they interpret the INTO clause differently. Furthermore, CREATE TABLE AS offers a superset of the functionality provided by SELECT INTO.
Or, if the target table already exists:
INSERT INTO transactions_combined (<list names of target columns here!>)
SELECT ...
Aside: I would not use date as column name. It's a reserved word in every SQL standard and a function and data type name in Postgres.
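The LEFT JOIN / IS NULL anti-join pattern is portable; here is a self-contained sketch of it using Python's sqlite3 module (table names mirror the question, data is abridged):

```python
import sqlite3

# Abridged versions of the two tables; SQLite stands in for Postgres.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions  (date TEXT, location_code TEXT, quantity REAL);
    CREATE TABLE transactions2 (date TEXT, location_code TEXT, quantity REAL);
    INSERT INTO transactions VALUES
        ('2013-01-20', 'ABC', -20), ('2013-01-23', 'ABC', -13.158);
    INSERT INTO transactions2 VALUES
        ('2013-01-23', 'DCF', -2), ('2013-01-20', 'ABC', -30);
""")
rows = conn.execute("""
    SELECT date, location_code, quantity FROM transactions
    UNION ALL
    SELECT h.date, h.location_code, h.quantity
      FROM transactions2 h
      LEFT JOIN transactions k
        ON k.location_code = h.location_code AND k.date = h.date
     WHERE k.date IS NULL  -- keep only rows with no (location, date) match
     ORDER BY date, location_code
""").fetchall()
print(rows)
```

The second table's ('2013-01-20', 'ABC') row is dropped because the first table already has that location and date; the DCF row survives even though its date exists in the first table.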
Change UNION ALL to just UNION and it should return only unique rows from each table.