So, I'm trying to run a SQL statement to select an entire DB for upload in an ETL process, but I want to create a calculated column for the number of days between a ticket being opened and being closed.
The IF-THEN logic is like this:
IF the department is Grounds Maintenance, and there's a foreign key match with a second table, and there's a specific task type, then use formula A
ELSE
IF INCIDENT_RESOLVED_DATE IS NULL, then use formula B
ELSE use formula C
I think my CASE logic is solid, but the query keeps bringing back the same row over and over again, which tells me I'm missing something. I'm almost positive it's something to do with the WHEN condition in the first part of the CASE expression, but if I knew what, I wouldn't be asking.
SELECT
    a.*
    , a.REPORTED_DATE
    , a.CLOSE_DATE
    , a.INCIDENT_RESOLVED_DATE
    , CASE
          WHEN DEPARTMENT = 'Grounds Maintenance'
               AND a.INCIDENT_ID = b.SOURCE_OBJECT_ID
               AND b.TASK_TYPE_ID = '11501'
          THEN (to_date(b.ACTUAL_END_DATE, 'DD-MM-YYYY') - to_date(a.REPORTED_DATE, 'DD-MM-YYYY'))
          ELSE CASE
                   WHEN a.INCIDENT_RESOLVED_DATE IS NULL
                   THEN (to_date(a.CLOSE_DATE, 'DD-MM-YYYY') - to_date(a.REPORTED_DATE, 'DD-MM-YYYY'))
                   ELSE (to_date(a.INCIDENT_RESOLVED_DATE, 'DD-MM-YYYY') - to_date(a.REPORTED_DATE, 'DD-MM-YYYY'))
               END
      END AS DAYS_TO_RESOLVE
FROM
    CMEM_CS_SERVICE_REQUESTS a, jtf_tasks_b b
WHERE
    EXTRACT(YEAR FROM a.REPORTED_DATE) > 2009;
Thoughts?
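One plausible cause of the repetition: the comma join between a and b has no join predicate at all, so every service request row gets paired with every task row. A rough sketch of the same IF-THEN logic with the INCIDENT_ID match moved out of the CASE and into an outer join (this assumes the date columns are real DATEs, so the to_date calls can be dropped, and that at most one type-11501 task exists per incident):

SELECT
    a.*
    , CASE
          WHEN a.DEPARTMENT = 'Grounds Maintenance'
               AND b.SOURCE_OBJECT_ID IS NOT NULL          -- a matching type-11501 task exists
          THEN b.ACTUAL_END_DATE - a.REPORTED_DATE         -- formula A
          WHEN a.INCIDENT_RESOLVED_DATE IS NULL
          THEN a.CLOSE_DATE - a.REPORTED_DATE              -- formula B
          ELSE a.INCIDENT_RESOLVED_DATE - a.REPORTED_DATE  -- formula C
      END AS DAYS_TO_RESOLVE
FROM CMEM_CS_SERVICE_REQUESTS a
LEFT JOIN jtf_tasks_b b
       ON b.SOURCE_OBJECT_ID = a.INCIDENT_ID
      AND b.TASK_TYPE_ID = '11501'
WHERE EXTRACT(YEAR FROM a.REPORTED_DATE) > 2009;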
I'm after some ideas on how I can write some Oracle SQL code to show a different field value depending on what the job_type_code is.
If the job_type_code is either EN11 or EN12 the only jobs that need to be returned are those where the target_comp_date is in the past.
If the job_type_code is either EN90 or EN91 all jobs should be displayed with the target_comp_date.
Code I've tried putting in is below.
select
case
when job_type.job_type_code in ('EN11','EN12') and job.target_comp_date < SYSDATE then
job.target_comp_date
when job_type.job_type_code in ('EN90','EN91') then job.target_comp_date
else 'check'
end as Test
from
job
inner join job_type on job.job_type_key = job_type.job_type_key
I think you need a WHERE clause here, not a CASE expression:
SELECT *
FROM job j
INNER JOIN job_type jt
ON j.job_type_key = jt.job_type_key
WHERE
(jt.job_type_code IN ('EN11', 'EN12') AND j.target_comp_date < SYSDATE) OR
jt.job_type_code IN ('EN90', 'EN91');
A case expression in the select is not going to filter any data. This sounds like a where clause:
where (job_type_code in ('EN11', 'EN12') and target_comp_date < sysdate)
   or (job_type_code in ('EN90', 'EN91') and target_comp_date > sysdate)
It is not clear whether you also want other job_type_codes returned. If so, add or (job_type_code not in ('EN11', 'EN12', 'EN90', 'EN91')).
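Spelled out with that extra branch (and keeping this answer's reading of the EN90/EN91 date condition), the filter would look something like:

where (job_type_code in ('EN11', 'EN12') and target_comp_date < sysdate)
   or (job_type_code in ('EN90', 'EN91') and target_comp_date > sysdate)
   or job_type_code not in ('EN11', 'EN12', 'EN90', 'EN91')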
GOAL: DETECT any difference between yesterday's table loads and today's loads. Each load loads values of data that are associated with bank accounts. So I need a query that returns each individual account that has a difference, with the old and new values shown under named columns.
I need data from several columns that are located in two different tables, AEI_GFXAccounts and AEI_GFXAccountSTP. Each time the table is loaded it gets a run_ID that is incremented by one, so the data needs to be compared between MAX(run_id) and MAX(run_id) - 1.
I have tried the following query. All it does is return the columns I need. I now need to implement logic that runs it WHERE run_ID = MAX(run_ID), then runs it again WHERE run_ID = MAX(run_ID) - 1, compares the two result sets, and shows the differences under custom-named columns, e.g. AccountBranch from MAX(run_ID) - 1 AS 'WAS' and the same column from MAX(run_ID) AS 'IS NOW', and so on for each column.
SELECT AEI_GFXAccounts.AccountNumber,
AccountBranch,
AccountName,
AccountType,
CostCenter,
TransactionLimit,
ClientName,
DailyCumulativeLimit
FROM AEI_GFXAccounts
JOIN AEI_GFXAccountSTP
ON (AEI_GFXAccounts.feed_id = AEI_GFXAccountSTP.feed_id
and AEI_GFXAccounts.run_id = AEI_GFXAccountSTP.run_id)
I use something similar to this to detect changes for a logging system:
WITH data AS (
SELECT
a.run_id,
a.AccountNumber,
?.AccountBranch,
?.AccountName,
?.AccountType,
?.CostCenter,
?.TransactionLimit,
?.ClientName,
?.DailyCumulativeLimit
FROM
AEI_GFXAccounts a
INNER JOIN AEI_GFXAccountSTP b
ON
a.feed_id = b.feed_id and
a.run_id = b.run_id
),
yest AS (
SELECT * FROM data WHERE run_id = (SELECT MAX(run_id)-1 FROM AEI_GFXAccounts)
),
toda AS (
SELECT * FROM data WHERE run_id = (SELECT MAX(run_id) FROM AEI_GFXAccounts)
)
SELECT
CASE WHEN COALESCE(yest.AccountBranch, 'x') <> COALESCE(toda.AccountBranch, 'x') THEN yest.AccountBranch END as yest_AccountBranch,
CASE WHEN COALESCE(yest.AccountBranch, 'x') <> COALESCE(toda.AccountBranch, 'x') THEN toda.AccountBranch END as toda_AccountBranch,
CASE WHEN COALESCE(yest.AccountName, 'x') <> COALESCE(toda.AccountName, 'x') THEN yest.AccountName END as yest_AccountName,
CASE WHEN COALESCE(yest.AccountName, 'x') <> COALESCE(toda.AccountName, 'x') THEN toda.AccountName END as toda_AccountName,
...
FROM
toda INNER JOIN yest ON toda.accountNumber = yest.accountNumber
Notes:
You didn't say which table some of your columns are from. I've prefixed them with ?. - replace these with a. or b. as appropriate (it's always good practice to fully qualify all your columns).
When you're repeating the pattern in the bottom select (above the ...), choose a value for the COALESCE that will never appear in the column. I'm using COALESCE as a quick way to avoid having to write CASE WHEN a IS NULL AND b IS NOT NULL OR b IS NULL AND a IS NOT NULL OR a != b, but the comparison fails if AccountName (for example) was 'x' yesterday and is null today, because the null becomes 'x'. If you pick a value that will never appear in the column, the check works out: nulls are coalesced to something that can never occur in the real data, and hence the <> comparison behaves correctly.
If you don't care when a column goes to null today from a value yesterday, or was null yesterday but is a value today, you can ditch the coalesce and literally just do toda.X <> yest.X
New accounts today won't show up until tomorrow. If you want them to show up do toda LEFT JOIN yest .... Of course all their properties will show as new ;)
This query returns all the accounts regardless of whether any changes have been made. If you only want a list of accounts with changes you'll need a where clause that is similar to your case whens:
WHERE
COALESCE(toda.AccountBranch, 'x') <> COALESCE(yest.AccountBranch, 'x') OR
COALESCE(toda.AccountName, 'x') <> COALESCE(yest.AccountName, 'x') OR
...
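If this happens to be Oracle, a null-safe alternative to the sentinel trick is DECODE, which treats two NULLs as equal; a sketch of the same filter (one line per compared column):

WHERE DECODE(toda.AccountBranch, yest.AccountBranch, 0, 1) = 1
   OR DECODE(toda.AccountName, yest.AccountName, 0, 1) = 1
   OR ...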
Do you have a date field? If so, you can use ROW_NUMBER partitioned by your accounts. Exclude all accounts that have a max of 1 row ("new accounts"), then compare each account's MAX(rownumber) load against its MAX(rownumber) - 1 load, and only return accounts where that difference is > 0. You can also use the LAG function to grab the previous load for each account instead of MAX(rownumber) - 1.
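A rough sketch of that LAG approach, reusing the joined query from the answer above (the a. prefixes on AccountBranch and AccountName are assumptions, same caveat as before, and only two columns are shown):

WITH data AS (
    SELECT
        a.run_id,
        a.AccountNumber,
        a.AccountBranch,
        a.AccountName
    FROM
        AEI_GFXAccounts a
        INNER JOIN AEI_GFXAccountSTP b
            ON  a.feed_id = b.feed_id
            AND a.run_id = b.run_id
),
ranked AS (
    SELECT
        data.*,
        LAG(AccountBranch) OVER (PARTITION BY AccountNumber ORDER BY run_id) AS prev_AccountBranch,
        LAG(AccountName)   OVER (PARTITION BY AccountNumber ORDER BY run_id) AS prev_AccountName
    FROM data
)
SELECT
    AccountNumber,
    prev_AccountBranch AS was_AccountBranch,
    AccountBranch      AS is_now_AccountBranch,
    prev_AccountName   AS was_AccountName,
    AccountName        AS is_now_AccountName
FROM ranked
WHERE run_id = (SELECT MAX(run_id) FROM data)
  AND (   COALESCE(AccountBranch, 'x') <> COALESCE(prev_AccountBranch, 'x')
       OR COALESCE(AccountName, 'x')   <> COALESCE(prev_AccountName, 'x'));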
Firstly, I know ORA-00937 is a common issue, with an obvious answer, but I have yet to find any results that point to a possible solution.
Quick spec:
National TB/HIV report, based on patient medical records/encounters/visits and drugs provided. This is only a tiny portion of the report, which loops over all patient drugs and calculates most of its figures from date calculations; we do not store historic/aggregated data, everything is aggregated when requested. I mention this because I expect a few suggestions to move away from GTTs and use MVIEWs instead - I hear you, but no, that is not a solution.
Here is my problem: this is one of my queries populating a GTT, within a function, which stores aggregated results. I have structured my data collection in such a way as to reduce server load, as the medical table exceeds 12 million records (each patient has 3 by default).
Here is the GTT
CREATE GLOBAL TEMPORARY TABLE EKAPAII.TEMP_ART_VISIT_MEDS
(
EPISODE_ID NUMBER,
LAST_MEDS_DATE DATE
)
ON COMMIT DELETE ROWS
RESULT_CACHE (MODE DEFAULT)
NOCACHE;
CREATE UNIQUE INDEX EKAPAII.TEMP_ART_VISIT_MEDS_PK ON EKAPAII.TEMP_ART_VISIT_MEDS
(EPISODE_ID);
ALTER TABLE EKAPAII.TEMP_ART_VISIT_MEDS ADD (
CONSTRAINT TEMP_ART_VISIT_MEDS_PK
PRIMARY KEY
(EPISODE_ID)
USING INDEX EKAPAII.TEMP_ART_VISIT_MEDS_PK
ENABLE VALIDATE);
And my simple insert query
INSERT INTO temp_art_visit_meds (EPISODE_ID, LAST_MEDS_DATE)
SELECT episode_id, encounter_date + number_of_days
FROM ( SELECT enc_meds.episode_id,
MAX (enc_meds.encounter_date) encounter_date,
MAX (
CASE
WHEN (NVL (meds.number_of_days, 0) > 150)
THEN
90
ELSE
NVL (meds.number_of_days, 0)
END)
number_of_days
FROM temp_art_visit enc_meds,
vd_medication meds,
dl_drugs_episode_class dlc,
( SELECT latest_meds_visit.episode_id,
MAX (latest_meds_visit.encounter_date)
encounter_date
FROM temp_art_visit latest_meds_visit,
vd_medication latest_meds,
dl_drugs_episode_class dc
WHERE latest_meds_visit.encounter_id =
latest_meds.encounter_id
AND latest_meds.drug_id = dc.drug_id
AND dc.sd_drug_application_id = 8401
GROUP BY latest_meds_visit.episode_id) latest_meds
WHERE enc_meds.encounter_id = meds.encounter_id
AND enc_meds.episode_id = latest_meds.episode_id
AND enc_meds.encounter_date =
latest_meds.encounter_date
AND meds.drug_id = dlc.drug_id
AND dlc.sd_drug_application_id = 8401
AND meds.active_flag = 'Y'
GROUP BY enc_meds.episode_id);
Now, my error is ORA-00937: not a single-group group function. If I run this query in a normal editor window it works, but I get ORA-00937 when executing the SELECT query in the package body itself. Calling the function does not return any error, even though I have an exception block to handle any errors.
Any help will do. Do I understand correctly that this error could occur only at runtime, and not at compile time? Or is it down to the fact that I am running the query from the PL/SQL block?
Toad for Oracle version 12.5 - in all its glory. (sarcasm)
Again, pardon me if this has already been asked/answered.
EDIT - SOLUTION
So, after a few hours of troubleshooting, I was able to understand why this error is being generated. Firstly, the fixed query:
INSERT INTO temp_art_visit_meds (EPISODE_ID, LAST_MEDS_DATE)
SELECT enc_meds.episode_id ,
TRUNC( MAX (enc_meds.encounter_date)) + MAX (CASE WHEN (NVL (meds.number_of_days, 0) > 150) THEN 90 ELSE NVL (meds.number_of_days, 0) END) last_meds_date
FROM temp_art_visit enc_meds,
vd_medication meds,
dl_drugs_episode_class dlc,
( SELECT latest_meds_visit.episode_id,
MAX (latest_meds_visit.encounter_date) encounter_date
FROM temp_art_visit latest_meds_visit,
vd_medication latest_meds, dl_drugs_episode_class dc
WHERE latest_meds_visit.encounter_id = latest_meds.encounter_id
AND latest_meds.drug_id = dc.drug_id
AND dc.sd_drug_application_id = 8401
GROUP BY latest_meds_visit.episode_id) latest_meds
WHERE enc_meds.encounter_id = meds.encounter_id
AND enc_meds.episode_id = latest_meds.episode_id
AND enc_meds.encounter_date = latest_meds.encounter_date
AND meds.drug_id = dlc.drug_id
AND dlc.sd_drug_application_id = 8401
AND meds.active_flag = 'Y'
GROUP BY enc_meds.episode_id, meds.number_of_days, enc_meds.encounter_date;
It would appear the problem was due to the number of sub-queries; I attempted different optimizer hints, to no avail. If you look closely at the first query, you will notice I am basically aggregating results from aggregated results, so the columns encounter_date, episode_id and number_of_days are no longer 'available'. Even if I added the appropriate GROUP BY clause to my last (outer) query, Oracle would not be able to group on those column names/identifiers.
I am not sure why this would fail only in a PACKAGE BODY, or why it did not return any SQLERRM or SQLCODE when executed.
Happy days.
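(As a footnote for anyone arriving via the error message: at its simplest, ORA-00937 is a non-aggregated column in the select list that is missing from the GROUP BY. A minimal illustration, borrowing the table and columns from the query above:)

-- raises ORA-00937: not a single-group group function
SELECT episode_id, MAX(encounter_date) FROM temp_art_visit;

-- fine: every non-aggregated column appears in the GROUP BY
SELECT episode_id, MAX(encounter_date) FROM temp_art_visit GROUP BY episode_id;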
I want to create a graph that pulls data from 2 user questions generated from within an SQL database.
The issue is that the user questions are stored in the same table, as are the answers. The only connection is that the question string includes a year value, which I extract using the LEFT function so that I output a column called 'YEAR' with a list of integer values running from 2013 to 2038 (a 25-year period).
I then want to pull the corresponding answers ('forecast' and 'actual') from each 'YEAR' so that I can plot a graph with a couple of values from each year (sorry if this isn't making any sense). The graph should show a forecast line covering the 25 year period with a second line (or column) showing the actual value as it gets populated over the years. I'll then be able to visualise if our actual value is close to our original forecast figures (long term goal!)
CODE BELOW
SELECT CAST((LEFT(F_TASK_ANS.TA_ANS_QUESTION,4)) AS INTEGER) AS YEAR,
-- first select takes the left 4 characters of the question, outputs the value as a string, then converts the value to a whole number.
CAST((CASE WHEN F_TASK_ANS.TA_ANS_QUESTION LIKE '%forecast' THEN F_TASK_ANS.TA_ANS_ANSWER END) AS NUMERIC(9,2)) AS 'FORECAST',
CAST((CASE WHEN F_TASK_ANS.TA_ANS_QUESTION LIKE '%actual' THEN ISNULL(F_TASK_ANS.TA_ANS_ANSWER,0) END) AS NUMERIC(9,2)) AS 'ACTUAL'
-- actual value will be null until filled in each year therefore ISNULL added to replace null with 0.00.
FROM F_TASK_ANS INNER JOIN F_TASKS ON F_TASK_ANS.TA_ANS_FKEY_TA_SEQ = F_TASKS.TA_SEQ
WHERE TA_ANS_ANSWER <> ''
AND (TA_TASK_ID LIKE '%6051' OR TA_TASK_ID LIKE '%6052')
-- The two numbers above refer to separate PPM questions that the user enters a value into
I tried GROUP BY 'YEAR' but I get an error:
Each GROUP BY expression must contain at least one column that is not an outer reference
- which I assume is because I haven't linked the 2 tables in any way...
Should I be adding a UNION so the tables are joined?
What I want to see is something like the following output (which I'll graph up later)
YEAR FORECAST ACTUAL
2013 135000 127331
2014 143000 145102
2015 149000 0
2016 158000 0
2017 161000 0
2018... etc
Any help or guidance would be hugely appreciated.
Thanks
Although the syntax is pretty hairy, this seems like a fairly simple query. You are in fact linking your two tables (with the JOIN statement) and you don't need a UNION.
Try something like this (using a common table expression, or CTE, to make the grouping clearer, and changing the syntax for slightly greater clarity):
WITH data
AS (
SELECT YEAR = CAST((LEFT(A.TA_ANS_QUESTION,4)) AS INTEGER)
, FORECAST = CASE WHEN A.TA_ANS_QUESTION LIKE '%forecast'
THEN CONVERT(NUMERIC(9,2), A.TA_ANS_ANSWER)
ELSE CONVERT(NUMERIC(9,2), 0)
END
, ACTUAL = CASE WHEN A.TA_ANS_QUESTION LIKE '%actual'
THEN CONVERT(NUMERIC(9,2), ISNULL(A.TA_ANS_ANSWER,0) )
ELSE CONVERT(NUMERIC(9,2), 0)
END
FROM F_TASK_ANS A
INNER JOIN F_TASKS T
ON A.TA_ANS_FKEY_TA_SEQ = T.TA_SEQ
-- It sounded like you wanted to include the ones where the answer was null. If
-- that's wrong, get rid of the test for NULL.
WHERE (A.TA_ANS_ANSWER <> '' OR A.TA_ANS_ANSWER IS NULL)
AND (TA_TASK_ID LIKE '%6051' OR TA_TASK_ID LIKE '%6052')
)
SELECT YEAR
, FORECAST = SUM(data.Forecast)
, ACTUAL = SUM(data.Actual)
FROM data
GROUP BY YEAR
ORDER BY YEAR
Try something like this ...
SELECT CAST((LEFT(F_TASK_ANS.TA_ANS_QUESTION,4)) AS INT) AS [YEAR]
,SUM(CAST((CASE WHEN F_TASK_ANS.TA_ANS_QUESTION LIKE '%forecast'
THEN F_TASK_ANS.TA_ANS_ANSWER ELSE 0 END) AS NUMERIC(9,2))) AS [FORECAST]
,SUM(CAST((CASE WHEN F_TASK_ANS.TA_ANS_QUESTION LIKE '%actual'
THEN F_TASK_ANS.TA_ANS_ANSWER ELSE 0 END) AS NUMERIC(9,2))) AS [ACTUAL]
FROM F_TASK_ANS INNER JOIN F_TASKS
ON F_TASK_ANS.TA_ANS_FKEY_TA_SEQ = F_TASKS.TA_SEQ
WHERE TA_ANS_ANSWER <> ''
AND (TA_TASK_ID LIKE '%6051' OR TA_TASK_ID LIKE '%6052')
GROUP BY CAST((LEFT(F_TASK_ANS.TA_ANS_QUESTION,4)) AS INT)
I get an error unless I remove one of the count(distinct ...). Can someone tell me why and how to fix it?
I'm in VFP. iif([condition],[if true],[else]) is the equivalent of CASE WHEN.
SELECT * FROM dpgift where !nocalc AND rectype = "G" AND sol = "EM112" INTO CURSOR cGift
SELECT
list_code,
count(distinct iif(language != 'F' AND renew = '0' AND type = 'IN',donor,0)) as d_Count_E_New_Indiv,
count(distinct iif(language = 'F' AND renew = '0' AND type = 'IN',donor,0)) as d_Count_F_New_Indiv /*it works if i remove this*/
FROM cGift gift
LEFT JOIN
(select didnumb, language, type from dp) d
on cast(gift.donor as i) = cast(d.didnumb as i)
GROUP BY list_code
ORDER by list_code
edit:
apparently, you can't use multiple distinct commands on the same level. Any way around this?
VFP does NOT support two "DISTINCT" clauses in the same query... PERIOD... I've even tested on a simple table of my own, DIRECTLY from within VFP such as
select count( distinct Col1 ) as Cnt1, count( distinct col2 ) as Cnt2 from MyTable
causes a crash. I don't know why you are trying to do DISTINCT as you are just testing a condition... It more accurately appears you just want a COUNT of entries per category of criteria rather than an actual DISTINCT.
Because you are not referencing your columns as "alias.field" in your query, I don't know which column is the basis of what. However, to help handle your DISTINCT (and it appears you are running from WITHIN a VFP app, as you are using the "INTO CURSOR" clause, which would not be associated with any OLE DB / .NET development), I would pre-query and group those criteria, something like...
select list_code,
donor,
max( iif( language != 'F' and renew = '0' and type = 'IN', 1, 0 )) as EQualified,
max( iif( language = 'F' and renew = '0' and type = 'IN', 1, 0 )) as FQualified
from
    cGift gift
    left join ( select didnumb, language, type from dp ) d
        on cast( gift.donor as i ) = cast( d.didnumb as i )
group by
list_code,
donor
into
cursor cGroupedByDonor
so the above will ONLY get a count of 1 per donor per list code, no matter how many records qualify. In addition, if one record has an "F" and another does NOT, then you'll have a value of 1 in EACH of the columns... Then you can do something like...
select
list_code,
sum( EQualified ) as DistEQualified,
sum( FQualified ) as DistFQualified
from
cGroupedByDonor
group by
list_code
into
cursor cDistinctByListCode
then run from that...
You can try using either another derived table or two to do the calculations you need, or using projections (queries in the field list). Without seeing the schema, it's hard to know which one will work for you.
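For instance, a sketch of the derived-table route, folding the two cursors above into a single statement (this assumes a VFP version that accepts a derived table in the FROM clause, and reuses the cGift cursor and dp join from the question):

select t.list_code,
       sum( t.EQualified ) as DistEQualified,
       sum( t.FQualified ) as DistFQualified
from
    ( select list_code,
             donor,
             max( iif( language != 'F' and renew = '0' and type = 'IN', 1, 0 )) as EQualified,
             max( iif( language = 'F' and renew = '0' and type = 'IN', 1, 0 )) as FQualified
      from cGift gift
          left join ( select didnumb, language, type from dp ) d
              on cast( gift.donor as i ) = cast( d.didnumb as i )
      group by list_code, donor ) t
group by
    t.list_code
into
    cursor cDistinctByListCode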